Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
ai-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md | See [model versions](../concepts/model-versions.md) to learn about how Azure Ope | gpt-4 (0314) | | East US <br> France Central <br> South Central US <br> UK South | | gpt-4 (0613) | Australia East <br> Canada East <br> France Central <br> Sweden Central <br> Switzerland North | East US <br> East US 2 <br> Japan East <br> UK South | | gpt-4 (1106-preview) | Australia East <br> Canada East <br> East US 2 <br> France Central <br> Norway East <br> South India <br> Sweden Central <br> UK South <br> West US | | -| gpt-4 (vision-preview) | | Sweden Central <br> Switzerland North<br>Australia East <br> West US | --> [!NOTE] -> As a temporary measure, GPT-4 Turbo with Vision is currently unavailable to new customers. +| gpt-4 (vision-preview) | Sweden Central <br> West US| Switzerland North <br> Australia East | ### GPT-3.5 models |
ai-services | Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md | POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen |--|--|--|--| | ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. | | ```deployment-id``` | string | Required | The name of your model deployment. You're required to first deploy a model before you can make calls. |-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. | +| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD or YYYY-MM-DD-preview format. | **Supported versions** curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYM #### Example response ```console-{"id":"chatcmpl-6v7mkQj980V1yBec6ETrKPRqFjNw9", -"object":"chat.completion","created":1679072642, -"model":"gpt-35-turbo", -"usage":{"prompt_tokens":58, -"completion_tokens":68, -"total_tokens":126}, -"choices":[{"message":{"role":"assistant", -"content":"Yes, other Azure AI services also support customer managed keys. Azure AI services offer multiple options for customers to manage keys, such as using Azure Key Vault, customer-managed keys in Azure Key Vault or customer-managed keys through Azure Storage service. This helps customers ensure that their data is secure and access to their services is controlled."},"finish_reason":"stop","index":0}]} +{ + "id": "chatcmpl-6v7mkQj980V1yBec6ETrKPRqFjNw9", + "object": "chat.completion", + "created": 1679072642, + "model": "gpt-35-turbo", + "usage": + { + "prompt_tokens": 58, + "completion_tokens": 68, + "total_tokens": 126 + }, + "choices": + [ + { + "message": + { + "role": "assistant", + "content": "Yes, other Azure AI services also support customer managed keys. + Azure AI services offer multiple options for customers to manage keys, such as + using Azure Key Vault, customer-managed keys in Azure Key Vault or + customer-managed keys through Azure Storage service. This helps customers ensure + that their data is secure and access to their services is controlled." + }, + "finish_reason": "stop", + "index": 0 + } + ] +} ```+Output formatting adjusted for ease of reading, actual output is a single block of text without line breaks. In the example response, `finish_reason` equals `stop`. If `finish_reason` equals `content_filter` consult our [content filtering guide](./concepts/content-filter.md) to understand why this is occurring. -Output formatting adjusted for ease of reading, actual output is a single block of text without line breaks. - > [!IMPORTANT] > The `functions` and `function_call` parameters have been deprecated with the release of the [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) version of the API. The replacement for `functions` is the `tools` parameter. The replacement for `function_call` is the `tool_choice` parameter. Parallel function calling which was introduced as part of the [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) is only supported with `gpt-35-turbo` (1106) and `gpt-4` (1106-preview) also known as GPT-4 Turbo Preview. | Parameter | Type | Required? 
| Default | Description | |--|--|--|--|--| | ```messages``` | array | Required | | The collection of context messages associated with this chat completions request. Typical usage begins with a [chat message](#chatmessage) for the System role that provides instructions for the behavior of the assistant, followed by alternating messages between the User and Assistant roles.|-| ```temperature```| number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.\nWe generally recommend altering this or `top_p` but not both. | +| ```temperature```| number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. | | ```n``` | integer | Optional | 1 | How many chat completion choices to generate for each input message. | | ```stream``` | boolean | Optional | false | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message." | | ```stop``` | string or array | Optional | null | Up to 4 sequences where the API will stop generating further tokens.| |
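For reference, here's a minimal Python sketch of the chat completions call described in the reference excerpt above (not part of the tracked commit). The resource name, deployment name, and API key are placeholders; `2023-12-01-preview` is just one of the API versions mentioned. The sketch also checks `finish_reason`, as the updated response guidance suggests.

```python
import requests

# Placeholder values -- substitute your own resource, deployment, and key.
RESOURCE = "YOUR_RESOURCE_NAME"
DEPLOYMENT = "YOUR_DEPLOYMENT_NAME"
API_VERSION = "2023-12-01-preview"  # one of the preview versions referenced above
API_KEY = "YOUR_API_KEY"

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
)

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Do other Azure AI services support customer managed keys?"},
    ],
    # Lower temperature values make the output more focused and deterministic.
    "temperature": 0.2,
}

response = requests.post(
    url,
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
choice = response.json()["choices"][0]

print(choice["message"]["content"])
# If finish_reason is "content_filter" rather than "stop", consult the content filtering guide.
print("finish_reason:", choice["finish_reason"])
```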
ai-studio | Prompt Flow Tools Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/prompt-flow-tools-overview.md | -The following table provides an index of tools in prompt flow. If existing tools don't meet your requirements, you can [develop your own custom tool and make a tool package](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/create-and-use-tool-package.html). +The following table provides an index of tools in prompt flow. | Tool name | Description | Environment | Package name | ||--|-|--| The following table provides an index of tools in prompt flow. If existing tools To discover more custom tools developed by the open-source community, see [More custom tools](https://microsoft.github.io/promptflow/integrations/tools/index.html). +## Remarks +- If existing tools don't meet your requirements, you can [develop your own custom tool and make a tool package](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/create-and-use-tool-package.html). +- To install the custom tools, if you are using the automatic runtime, you can readily install the package by adding the custom tool package name into the `requirements.txt` file in the flow folder. Then select the **Save and install** button to start installation. After completion, you can see the custom tools displayed in the tool list. To learn more, see [How to create and manage a runtime](../create-manage-runtime.md). + ## Next steps - [Create a flow](../flow-develop.md) |
aks | Use Azure Ad Pod Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md | Microsoft Entra pod-managed identities use Kubernetes primitives to associate [m > Kubernetes native capabilities to federate with any external identity providers on behalf of the > application. >-> The open source Microsoft Entra pod-managed identity (preview) in Azure Kubernetes Service was deprecated on 10/24/2022, and the project archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement). The AKS Managed add-on was deprecated in Sept. 2024. +> The open source Microsoft Entra pod-managed identity (preview) in Azure Kubernetes Service was deprecated on 10/24/2022, and the project archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement). The AKS Managed add-on begins deprecation in Sept. 2024. > > To disable the AKS Managed add-on, use the following command: `az feature unregister --namespace "Microsoft.ContainerService" --name "EnablePodIdentityPreview"`. |
aks | Use Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md | Azure Kubernetes Service (AKS) clusters require an identity to access Azure reso AKS doesn't automatically create a [service principal](kubernetes-service-principal.md), so you have to create one. Clusters that use a service principal eventually expire, and the service principal must be renewed to avoid impacting cluster authentication with the identity. Managing service principals adds complexity, so it's easier to use managed identities instead. The same permission requirements apply for both service principals and managed identities. Managed identities use certificate-based authentication. Each managed identity's credentials have an expiration of *90 days* and are rolled after *45 days*. AKS uses both system-assigned and user-assigned managed identity types, and these identities are immutable. > [!IMPORTANT]-> The open source [Microsoft Entra pod-managed identity][entra-id-pod-managed-identity] (preview) in Azure Kubernetes Service was deprecated on 10/24/2022, and the project archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement). The AKS Managed add-on was deprecated in Sept. 2024. +> The open source [Microsoft Entra pod-managed identity][entra-id-pod-managed-identity] (preview) in Azure Kubernetes Service was deprecated on 10/24/2022, and the project archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement). The AKS Managed add-on begins deprecation in Sept. 2024. > > We recommend you first review [Microsoft Entra Workload ID][workload-identity-overview] overview. This authentication method replaces Microsoft Entra pod-managed identity (preview) and is the recommended method. |
api-management | Rate Limit Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rate-limit-policy.md | To understand the difference between rate limits and quotas, [see Rate limits an remaining-calls-header-name="header name" remaining-calls-variable-name="policy expression variable name" total-calls-header-name="header name">- <api name="API name" id="API id" calls="number" renewal-period="seconds" /> + <api name="API name" id="API id" calls="number" renewal-period="seconds" > <operation name="operation name" id="operation id" calls="number" renewal-period="seconds" /> </api> </rate-limit> |
api-management | Validate Client Certificate Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-client-certificate-policy.md | For more information about custom CA certificates and certificate authorities, s validate-not-after="true | false" ignore-error="true | false"> <identities>- <identityΓÇ» + <identity thumbprint="certificate thumbprint" serial-number="certificate serial number" common-name="certificate common name" For more information about custom CA certificates and certificate authorities, s dns-name="certificate DNS name" issuer-subject="certificate issuer" issuer-thumbprint="certificate issuer thumbprint"- issuer-certificate-id="certificate identifier"ΓÇ»/> + issuer-certificate-id="certificate identifier"/> </identities> </validate-client-certificate> ``` The following example validates a client certificate to match the policy's defau * [API Management access restriction policies](api-management-access-restriction-policies.md) |
api-management | Validate Content Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-content-policy.md | The policy validates the following content in the request or response against th ## Policy statement ```xml-<validate-content unspecified-content-type-action="ignore | prevent | detect" max-size="size in bytes" size-exceeded-action="ignore | prevent | detect" errors-variable-name="variable name"> +<validate-content unspecified-content-type-action="ignore | prevent | detect" max-size="size in bytes" size-exceeded-action="ignore | prevent | detect" errors-variable-name="variable name"> <content-type-map any-content-type-value="content type string" missing-content-type-value="content type string"> <type from | when="content type string" to="content type string" /> </content-type-map>- <content type="content type string" validate-as="json | xml | soap" schema-id="schema id" schema-ref="#/local/reference/path" action="ignore | prevent | detect" allow-additional-properties="true | false" /> + <content type="content type string" validate-as="json | xml | soap" schema-id="schema id" schema-ref="#/local/reference/path" action="ignore | prevent | detect" allow-additional-properties="true | false" /> </validate-content> ``` |
app-service | Configure Basic Auth Disable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-basic-auth-disable.md | Title: Disable basic authentication for deployment description: Learn how to secure App Service deployment by disabling basic authentication. keywords: azure app service, security, deployment, FTP, MsDeploy Previously updated : 11/05/2023 Last updated : 01/26/2024 -App Service provides basic authentication for FTP and WebDeploy clients to connect to it by using [deployment credentials](deploy-configure-credentials.md). These APIs are great for browsing your site’s file system, uploading drivers and utilities, and deploying with MsBuild. However, enterprises often require more secure deployment methods than basic authentication, such as [Microsoft Entra ID](/entr)). Entra ID uses OAuth 2.0 token-based authorization and has many benefits and improvements that help mitigate the issues in basic authentication. For example, OAuth access tokens have a limited usable lifetime, and are specific to the applications and resources for which they're issued, so they can't be reused. Entra ID also lets you deploy from other Azure services using managed identities. +App Service provides basic authentication for FTP and WebDeploy clients to connect to it by using [deployment credentials](deploy-configure-credentials.md). These APIs are great for browsing your site’s file system, uploading drivers and utilities, and deploying with MsBuild. However, enterprises often require more secure deployment methods than basic authentication, such as [Microsoft Entra ID](/entr)). Microsoft Entra uses OAuth 2.0 token-based authorization and has many benefits and improvements that help mitigate the issues in basic authentication. For example, OAuth access tokens have a limited usable lifetime, and are specific to the applications and resources for which they're issued, so they can't be reused. Microsoft Entra also lets you deploy from other Azure services using managed identities. ## Disable basic authentication To confirm that Git access is blocked, try [local Git deployment](deploy-local-g ## Deployment without basic authentication -When you disable basic authentication, deployment methods based on basic authentication stop working, such as FTP and local Git deployment. For alternate deployment methods, see [Authentication types by deployment methods in Azure App Service](deploy-authentication-types.md). --<!-- Azure Pipelines with App Service deploy task (manual config) need the newer version hosted agent that supports vs2022. -OIDC GitHub actions --> +When you disable basic authentication, deployment methods that depend on basic authentication stop working. The following table shows how various deployment methods behave when basic authentication is disabled, and if there's any fallback mechanism. For more information, see [Authentication types by deployment methods in Azure App Service](deploy-authentication-types.md). ++| Deployment method | When basic authentication is disabled | +|-|-| +| Visual Studio deployment | Doesn't work. | +| [FTP](deploy-ftp.md) | Doesn't work. | +| [Local Git](deploy-local-git.md) | Doesn't work. 
| +| Azure CLI | In Azure CLI 2.48.1 or higher, the following commands fall back to Microsoft Entra authentication:<br/>- [az webapp up](/cli/azure/webapp#az-webapp-up)<br/>- [az webapp deploy](/cli/azure/webapp#az-webapp-deploy)<br/>- [az webapp deployment source config-zip](/cli/azure/webapp/deployment/source#az-webapp-deployment-source-config-zip)<br/>- [az webapp log deployment show](/cli/azure/webapp/log/deployment#az-webapp-log-deployment-show)<br/>- [az webapp log deployment list](/cli/azure/webapp/log/deployment#az-webapp-log-deployment-list)<br/>- [az webapp log download](/cli/azure/webapp/log#az-webapp-log-download)<br/>- [az webapp log tail](/cli/azure/webapp/log#az-webapp-log-tail)<br/>- [az webapp browse](/cli/azure/webapp#az-webapp-browse)<br/>- [az webapp create-remote-connection](/cli/azure/webapp#az-webapp-create-remote-connection)<br/>- [az webapp ssh](/cli/azure/webapp#az-webapp-ssh)<br/>- [az functionapp deploy](/cli/azure/functionapp#az-functionapp-deploy)<br/>- [az functionapp log deployment list](/cli/azure/functionapp/log/deployment#az-functionapp-log-deployment-list)<br/>- [az functionapp log deployment show](/cli/azure/functionapp/log/deployment#az-functionapp-log-deployment-show)<br/>- [az functionapp deployment source config-zip](/cli/azure/functionapp/deployment/source#az-functionapp-deployment-source-config-zip) | +| [Maven plugin](https://github.com/microsoft/azure-maven-plugins) or [Gradle plugin](https://github.com/microsoft/azure-gradle-plugins) | Works. | +| [GitHub with App Service Build Service](deploy-continuous-deployment.md?tabs=github) | Doesn't work. | +| [GitHub Actions](deploy-continuous-deployment.md?tabs=github) | - An existing GitHub Actions workflow that uses **basic authentication** can't authenticate. In the Deployment Center, disconnect the existing GitHub configuration and create a new GitHub Actions configuration with the **user-assigned identity** option instead. <br/> - If the existing GitHub Actions deployment is [manually configured](deploy-github-actions.md), try using a service principal or OpenID Connect instead. <br/> - For new GitHub Actions configuration in the Deployment Center, use the **user-assigned identity** option. | +| Deployment in [create wizard](https://portal.azure.com/#create/Microsoft.WebSite) | When **Basic authentication** is set to **Disable** and **Continuous deployment** set to **Enable**, GitHub Actions is configured with the **user-assigned identity** option (OpenID Connect). | +| [Azure Repos with App Service Build Service](deploy-continuous-deployment.md?tabs=github) | Doesn't work. | +| [BitBucket](deploy-continuous-deployment.md?tabs=bitbucket) | Doesn't work. | +| [Azure Pipelines](deploy-azure-pipelines.md) with [AzureWebApp](/azure/devops/pipelines/tasks/reference/azure-web-app-v1) task | Works. | +| [Azure Pipelines](deploy-azure-pipelines.md) with [AzureRmWebAppDeployment](/azure/devops/pipelines/tasks/deploy/azure-rm-web-app-deployment) task | - Use the latest AzureRmWebAppDeployment task to get fallback behavior. <br/> - The **Publish Profile (`PublishProfile`)** connection type doesn't work, because it uses basic authentication. Change the connection type to **Azure Resource Manager (`AzureRM`)**. <br/> - On non-Windows Pipelines agents, authentication works. <br/> - On Windows agents, the [deployment method used by the task](/azure/devops/pipelines/tasks/reference/azure-rm-web-app-deployment-v4#deployment-methods) might need to be modified. 
When Web Deploy is used (`DeploymentType: 'webDeploy'`) and basic authentication is disabled, the task authenticates with a Microsoft Entra token. There are additional requirements if you're not using the `windows-latest` agent or if you're using a self-hosted agent. For more information, see [I can't Web Deploy to my Azure App Service using Microsoft Entra authentication from my Windows agent](/azure/devops/pipelines/tasks/reference/azure-rm-web-app-deployment-v4#i-cant-web-deploy-to-my-azure-app-service-using-microsoft-entra-id-authentication-from-my-windows-agent).<br/> - Other deployment methods work, such as **zip deploy** or **run from package**. | ## Create a custom role with no permissions for basic authentication The following are corresponding policies for slots: - [Remediation policy for FTP](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff493116f-3b7f-4ab3-bf80-0c2af35e46c2) - [Remediation policy for SCM](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c034a29-2a5f-4857-b120-f800fe5549ae) +## Frequently asked questions ++#### Why do I get a warning in Visual Studio saying that basic authentication is disabled? ++Visual Studio requires basic authentication to deploy to Azure App Service. The warning reminds you that the configuration on your app changed and you can no longer deploy to it. Either you disabled basic authentication on the app yourself, or your organization policy enforces that basic authentication is disabled for App Service apps. |
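As a companion to the excerpt above (not part of the tracked commit), here's a hedged Python sketch that checks whether the FTP and SCM basic-auth publishing policies are disabled on an app through Azure Resource Manager. The subscription, resource group, app name, API version, and the `properties.allow` field are assumptions based on the `basicPublishingCredentialsPolicies` resources that the remediation policies above target.

```python
from azure.identity import DefaultAzureCredential
import requests

# Hypothetical identifiers -- substitute your own.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "my-resource-group"
APP_NAME = "my-app"
API_VERSION = "2022-03-01"  # assumed Microsoft.Web API version

# Acquire an ARM-audience token with whatever credential is available locally.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# The ftp and scm child resources correspond to the FTP and SCM basic-auth policies mentioned above.
for policy in ("ftp", "scm"):
    url = (
        f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
        f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Web/sites/{APP_NAME}"
        f"/basicPublishingCredentialsPolicies/{policy}?api-version={API_VERSION}"
    )
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=30)
    resp.raise_for_status()
    allowed = resp.json().get("properties", {}).get("allow")  # assumed property name
    print(f"Basic auth for {policy}: {'enabled' if allowed else 'disabled'}")
```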
app-service | Deploy Authentication Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-authentication-types.md | Title: Authentication types by deployment methods description: Learn the available types of authentication with Azure App Service when deploying your application code. Previously updated : 07/31/2023 Last updated : 01/26/2024 Azure App Service lets you deploy your web application code and configuration by |Deployment method|Authentication  |Reference Documents | |:-|:-|:-|-|Azure CLI |Microsoft Entra authentication | In Azure CLI, version 2.48.1 or higher, the following commands have been modified to use Microsoft Entra authentication if basic authentication is turned off for your web app or function app:<br/>- [az webapp up](/cli/azure/webapp#az-webapp-up)<br/>- [az webapp deploy](/cli/azure/webapp#az-webapp-deploy)<br/>- [az webapp deployment source config-zip](/cli/azure/webapp/deployment/source#az-webapp-deployment-source-config-zip)<br/>- [az webapp log deployment show](/cli/azure/webapp/log/deployment#az-webapp-log-deployment-show)<br/>- [az webapp log deployment list](/cli/azure/webapp/log/deployment#az-webapp-log-deployment-list)<br/>- [az webapp log download](/cli/azure/webapp/log#az-webapp-log-download)<br/>- [az webapp log tail](/cli/azure/webapp/log#az-webapp-log-tail)<br/>- [az webapp browse](/cli/azure/webapp#az-webapp-browse)<br/>- [az webapp create-remote-connection](/cli/azure/webapp#az-webapp-create-remote-connection)<br/>- [az webapp ssh](/cli/azure/webapp#az-webapp-ssh)<br/>- [az functionapp deploy](/cli/azure/functionapp#az-functionapp-deploy)<br/>- [az functionapp log deployment list](/cli/azure/functionapp/log/deployment#az-functionapp-log-deployment-list)<br/>- [az functionapp log deployment show](/cli/azure/functionapp/log/deployment#az-functionapp-log-deployment-show)<br/>- [az functionapp deployment source config-zip](/cli/azure/functionapp/deployment/source#az-functionapp-deployment-source-config-zip)<br/>For more information, see [az appservice](/cli/azure/appservice) and [az webapp](/cli/azure/webapp). | -|Azure PowerShell |Microsoft Entra authentication | In Azure PowerShell, version 9.7.1 or above, Microsoft Entra authentication is available for App Service. For more information, see [PowerShell samples for Azure App Service](samples-powershell.md). 
| -|SCM/Kudu/OneDeploy REST endpoint |Basic authentication, Microsoft Entra authentication |[Deploy files to App Service](deploy-zip.md) | -|Kudu UI |Basic authentication, Microsoft Entra authentication |[Deploy files to App Service](deploy-zip.md)| +|Azure CLI |Microsoft Entra ID | In Azure CLI, version 2.48.1 or higher, the following commands have been modified to use Microsoft Entra if basic authentication is turned off for your web app or function app:<br/>- [az webapp up](/cli/azure/webapp#az-webapp-up)<br/>- [az webapp deploy](/cli/azure/webapp#az-webapp-deploy)<br/>- [az webapp deployment source config-zip](/cli/azure/webapp/deployment/source#az-webapp-deployment-source-config-zip)<br/>- [az webapp log deployment show](/cli/azure/webapp/log/deployment#az-webapp-log-deployment-show)<br/>- [az webapp log deployment list](/cli/azure/webapp/log/deployment#az-webapp-log-deployment-list)<br/>- [az webapp log download](/cli/azure/webapp/log#az-webapp-log-download)<br/>- [az webapp log tail](/cli/azure/webapp/log#az-webapp-log-tail)<br/>- [az webapp browse](/cli/azure/webapp#az-webapp-browse)<br/>- [az webapp create-remote-connection](/cli/azure/webapp#az-webapp-create-remote-connection)<br/>- [az webapp ssh](/cli/azure/webapp#az-webapp-ssh)<br/>- [az functionapp deploy](/cli/azure/functionapp#az-functionapp-deploy)<br/>- [az functionapp log deployment list](/cli/azure/functionapp/log/deployment#az-functionapp-log-deployment-list)<br/>- [az functionapp log deployment show](/cli/azure/functionapp/log/deployment#az-functionapp-log-deployment-show)<br/>- [az functionapp deployment source config-zip](/cli/azure/functionapp/deployment/source#az-functionapp-deployment-source-config-zip)<br/>For more information, see [az appservice](/cli/azure/appservice) and [az webapp](/cli/azure/webapp). | +|Azure PowerShell |Microsoft Entra | In Azure PowerShell, version 9.7.1 or above, Microsoft Entra is available for App Service. For more information, see [PowerShell samples for Azure App Service](samples-powershell.md). 
| +|SCM/Kudu/OneDeploy REST endpoint |Basic authentication<br/>Microsoft Entra |[Deploy files to App Service](deploy-zip.md) | +|Kudu UI |Basic authentication<br/>Microsoft Entra |[Deploy files to App Service](deploy-zip.md)| |FTP\FTPS |Basic authentication |[Deploy your app to Azure App Service using FTP/S](deploy-ftp.md) | |Visual Studio |Basic authentication  |[Quickstart: Deploy an ASP.NET web app](quickstart-dotnetcore.md)<br/>[Develop and deploy WebJobs using Visual Studio](webjobs-dotnet-deploy-vs.md)<br/>[Troubleshoot an app in Azure App Service using Visual Studio](troubleshoot-dotnet-visual-studio.md)<br/>[GitHub Actions integration in Visual Studio](/visualstudio/azure/overview-github-actions)<br/>[Deploy your application to Azure using GitHub Actions workflows created by Visual Studio](/visualstudio/deployment/azure-deployment-using-github-actions) |-|Visual Studio Code|Microsoft Entra authentication |[Quickstart: Deploy an ASP.NET web app](quickstart-dotnetcore.md)<br/> [Working with GitHub in VS Code](https://code.visualstudio.com/docs/sourcecontrol/github) | -|GitHub with GitHub Actions |Publish profile, service principal, OpenID Connect |[Deploy to App Service using GitHub Actions](deploy-github-actions.md) | -|GitHub with App Service build service as build engine|Publish profile |[Continuous deployment to Azure App Service](deploy-continuous-deployment.md) | -|GitHub with Azure Pipelines as build engine|Publish profile, Azure DevOps service connection |[Deploy to App Service using Azure Pipelines](deploy-azure-pipelines.md) | -|Azure Repos with App Service build service as build engine|Publish profile |[Continuous deployment to Azure App Service](deploy-continuous-deployment.md) | -|Azure Repos with Azure Pipelines as build engine |Publish profile, Azure DevOps service connection |[Deploy to App Service using GitHub Actions](deploy-github-actions.md) | -|Bitbucket |Publish profile |[Continuous deployment to Azure App Service](deploy-continuous-deployment.md) | -|Local Git |Publish profile |[Local Git deployment to Azure App Service](deploy-local-git.md) | -|External Git repository|Publish profile |[Setting up continuous deployment using manual steps](https://github.com/projectkudu/kudu/wiki/Continuous-deployment#setting-up-continuous-deployment-using-manual-steps) | -|Run directly from an uploaded ZIP file |Microsoft Entra authentication |[Run your app in Azure App Service directly from a ZIP package](deploy-run-package.md) | -|Run directly from external URL |Storage account key, managed identity |[Run from external URL instead](deploy-run-package.md#run-from-external-url-instead) | -|Azure Web app plugin for Maven (Java) |Microsoft Entra authentication |[Quickstart: Create a Java app on Azure App Service](quickstart-java.md)| -|Azure WebApp Plugin for Gradle (Java) |Microsoft Entra authentication |[Configure a Java app for Azure App Service](configure-language-java.md)| -|Webhooks |Publish profile |[Web hooks](https://github.com/projectkudu/kudu/wiki/Web-hooks) | +|Visual Studio Code|Microsoft Entra |[Quickstart: Deploy an ASP.NET web app](quickstart-dotnetcore.md)<br/> [Working with GitHub in VS Code](https://code.visualstudio.com/docs/sourcecontrol/github) | +|GitHub with GitHub Actions |Publish profile (basic authentication)<br/>Service principal (Microsoft Entra)<br/>OpenID Connect (Microsoft Entra) |[Deploy to App Service using GitHub Actions](deploy-github-actions.md) | +|GitHub with App Service build service as build engine| Basic authentication|[Continuous 
deployment to Azure App Service](deploy-continuous-deployment.md) | +|GitHub with Azure Pipelines as build engine|Publish profile (basic authentication)<br/>Azure DevOps service connection |[Deploy to App Service using Azure Pipelines](deploy-azure-pipelines.md) | +|Azure Repos with App Service build service as build engine| Basic authentication |[Continuous deployment to Azure App Service](deploy-continuous-deployment.md) | +|Azure Repos with Azure Pipelines as build engine |Publish profile (basic authentication)<br/>Azure DevOps service connection |[Deploy to App Service using GitHub Actions](deploy-github-actions.md) | +|Bitbucket | Basic authentication |[Continuous deployment to Azure App Service](deploy-continuous-deployment.md) | +|Local Git | Basic authentication |[Local Git deployment to Azure App Service](deploy-local-git.md) | +|External Git repository| Basic authentication |[Setting up continuous deployment using manual steps](https://github.com/projectkudu/kudu/wiki/Continuous-deployment#setting-up-continuous-deployment-using-manual-steps) | +|Run directly from an uploaded ZIP file |Microsoft Entra |[Run your app in Azure App Service directly from a ZIP package](deploy-run-package.md) | +|Run directly from external URL |Not applicable (outbound connection) |[Run from external URL instead](deploy-run-package.md#run-from-external-url-instead) | +|Azure Web app plugin for Maven (Java) |Microsoft Entra |[Quickstart: Create a Java app on Azure App Service](quickstart-java.md)| +|Azure WebApp Plugin for Gradle (Java) |Microsoft Entra |[Configure a Java app for Azure App Service](configure-language-java.md)| +|Webhooks | Basic authentication |[Web hooks](https://github.com/projectkudu/kudu/wiki/Web-hooks) | |App Service migration assistant |Basic authentication |[Azure App Service migration tools](https://azure.microsoft.com/products/app-service/migration-tools/) | |App Service migration assistant for PowerShell scripts |Basic authentication |[Azure App Service migration tools](https://azure.microsoft.com/products/app-service/migration-tools/) |-|Azure Migrate App Service discovery/assessment/migration |Microsoft Entra authentication |[Tutorial: Assess ASP.NET web apps for migration to Azure App Service](../migrate/tutorial-assess-webapps.md)<br/>[Modernize ASP.NET web apps to Azure App Service code](../migrate/tutorial-modernize-asp-net-appservice-code.md) | +|Azure Migrate App Service discovery/assessment/migration |Microsoft Entra |[Tutorial: Assess ASP.NET web apps for migration to Azure App Service](../migrate/tutorial-assess-webapps.md)<br/>[Modernize ASP.NET web apps to Azure App Service code](../migrate/tutorial-modernize-asp-net-appservice-code.md) | |
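To illustrate the "SCM/Kudu/OneDeploy REST endpoint with Microsoft Entra" row above, here's a hedged Python sketch (not from the tracked commit) that zip-deploys with a bearer token instead of basic authentication. The app name, package path, and the assumption that an ARM-audience token is accepted by the Kudu endpoint when basic authentication is disabled are all placeholders/assumptions.

```python
from azure.identity import DefaultAzureCredential
import requests

# Hypothetical app name and package path -- substitute your own.
APP_NAME = "my-app"
ZIP_PATH = "app.zip"

# Assumption: the SCM (Kudu) site accepts an ARM-audience Microsoft Entra token
# when basic authentication is disabled.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

with open(ZIP_PATH, "rb") as package:
    resp = requests.post(
        f"https://{APP_NAME}.scm.azurewebsites.net/api/zipdeploy",
        headers={"Authorization": f"Bearer {token}"},
        data=package,   # stream the zip as the request body
        timeout=300,
    )
resp.raise_for_status()
print("Deployment accepted:", resp.status_code)
```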
app-service | Deploy Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-best-practices.md | Every development team has unique requirements that can make implementing an eff ### Deployment Source -A deployment source is the location of your application code. For production apps, the deployment source is usually a repository hosted by version control software such as [GitHub, BitBucket, or Azure Repos](deploy-continuous-deployment.md). For development and test scenarios, the deployment source may be [a project on your local machine](deploy-local-git.md). App Service also supports [OneDrive and Dropbox folders](deploy-content-sync.md) as deployment sources. While cloud folders can make it easy to get started with App Service, it's not typically recommended to use this source for enterprise-level production applications. +A deployment source is the location of your application code. For production apps, the deployment source is usually a repository hosted by version control software such as [GitHub, BitBucket, or Azure Repos](deploy-continuous-deployment.md). For development and test scenarios, the deployment source may be [a project on your local machine](deploy-local-git.md). ### Build Pipeline |
app-service | Deploy Configure Credentials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-configure-credentials.md | Title: Configure deployment credentials description: Learn what types of deployment credentials are in Azure App Service and how to configure and use them. Previously updated : 02/11/2021 Last updated : 01/26/2024 and [FTP/S deployment](deploy-ftp.md). These credentials are not the same as you [!INCLUDE [app-service-deploy-credentials](../../includes/app-service-deploy-credentials.md)] > [!NOTE]-> The **Development Center (Classic)** page in the Azure portal, which is the old deployment experience, will be deprecated in March, 2021. This change will not affect any existing deployment settings in your app, and you can continue to manage app deployment in the **Deployment Center** page. +> When [basic authentication is disabled](configure-basic-auth-disable.md), you can't view or configure deployment credentials in the Deployment Center. ## <a name="userscope"></a>Configure user-scope credentials |
app-service | Deploy Content Sync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-content-sync.md | - Title: Sync content from a cloud folder -description: Learn how to deploy your app to Azure App Service via content sync from a cloud folder, including OneDrive or Dropbox. - Previously updated : 02/25/2021-------# Sync content from a cloud folder to Azure App Service -This article shows you how to sync your content to [Azure App Service](./overview.md) from Dropbox and OneDrive. --With the content sync approach, your work with your app code and content in a designated cloud folder to make sure it's in a ready-to-deploy state, and then sync to App Service with the click of a button. --Because of underlying differences in the APIs, **OneDrive for Business** is not supported at this time. --> [!NOTE] -> The **Development Center (Classic)** page in the Azure portal, which is the old deployment experience, will be deprecated in March, 2021. This change will not affect any existing deployment settings in your app, and you can continue to manage app deployment in the **Deployment Center** page. --## Enable content sync deployment --1. In the [Azure portal](https://portal.azure.com), navigate to the management page for your App Service app. --1. From the left menu, click **Deployment Center** > **Settings**. --1. In **Source**, select **OneDrive** or **Dropbox**. --1. Click **Authorize** and follow the authorization prompts. -- ![Shows how to authorize OneDrive or Dropbox in the Deployment Center in the Azure portal.](media/app-service-deploy-content-sync/choose-source.png) -- You only need to authorize with OneDrive or Dropbox once for your Azure account. To authorize a different OneDrive or Dropbox account for an app, click **Change account**. --1. In **Folder**, select the folder to synchronize. This folder is created under the following designated content path in OneDrive or Dropbox. - - * **OneDrive**: `Apps\Azure Web Apps` - * **Dropbox**: `Apps\Azure` - -1. Click **Save**. --## Synchronize content --# [Azure portal](#tab/portal) --1. In the [Azure portal](https://portal.azure.com), navigate to the management page for your App Service app. --1. From the left menu, click **Deployment Center** > **Redeploy/Sync**. -- ![Shows how to sync your cloud folder with App Service.](media/app-service-deploy-content-sync/synchronize.png) - -1. Click **OK** to confirm the sync. --# [Azure CLI](#tab/cli) --Start a sync by running the following command and replacing \<group-name> and \<app-name>: --```azurecli-interactive -az webapp deployment source sync --resource-group <group-name> --name <app-name> -``` --# [Azure PowerShell](#tab/powershell) --Start a sync by running the following command and replacing \<group-name> and \<app-name>: --```azurepowershell-interactive -Invoke-AzureRmResourceAction -ResourceGroupName <group-name> -ResourceType Microsoft.Web/sites -ResourceName <app-name> -Action sync -ApiVersion 2019-08-01 -Force ΓÇôVerbose -``` ---## Disable content sync deployment --1. In the [Azure portal](https://portal.azure.com), navigate to the management page for your App Service app. --1. From the left menu, click **Deployment Center** > **Settings** > **Disconnect**. 
-- ![Shows how to disconnect your cloud folder sync with your App Service app in the Azure portal.](media/app-service-deploy-content-sync/disable.png) ---## OneDrive and Dropbox integration retirements --On September 30th, 2023 the integrations for Microsoft OneDrive and Dropbox for Azure App Service and Azure Functions will be retired. If you are using OneDrive or Dropbox, you should [disable content sync deployments](#disable-content-sync-deployment) from OneDrive and Dropbox. Then, you can set up deployments from any of the following alternatives --- [GitHub Actions](deploy-github-actions.md)-- [Azure DevOps Pipelines](/azure/devops/pipelines/targets/webapp)-- [Azure CLI](./deploy-zip.md?tabs=cli)-- [VS Code](./deploy-zip.md?tabs=cli)-- [Local Git Repository](./deploy-local-git.md?tabs=cli)--## Next steps --> [!div class="nextstepaction"] -> [Deploy from local Git repo](deploy-local-git.md) |
app-service | Deploy Continuous Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-continuous-deployment.md | Title: Configure continuous deployment description: Learn how to enable CI/CD to Azure App Service from GitHub, Bitbucket, Azure Repos, or other repos. Select the build pipeline that fits your needs. ms.assetid: 6adb5c84-6cf3-424e-a336-c554f23b4000 Previously updated : 12/12/2023 Last updated : 01/26/2024 -> [!NOTE] -> The **Development Center (Classic)** page in the Azure portal, an earlier version of the deployment functionality, was deprecated in March 2021. This change doesn't affect existing deployment settings in your app, and you can continue to manage app deployment from the **Deployment Center** page in the portal. - [!INCLUDE [Prepare repository](../../includes/app-service-deploy-prepare-repo.md)] ## Configure the deployment source Select the tab that corresponds to your build provider to continue. # [GitHub](#tab/github) -4. [GitHub Actions](#how-does-the-github-actions-build-provider-work) is the default build provider. To change the provider, select **Change provider** > **App Service Build Service** (Kudu) > **OK**. -- > [!NOTE] - > To use Azure Pipelines as the build provider for your App Service app, configure CI/CD directly from Azure Pipelines. Don't configure it in App Service. The **Azure Pipelines** option just points you in the right direction. +4. [GitHub Actions](?tabs=githubactions#what-are-the-build-providers) is the default build provider. To change the provider, select **Change provider** > **App Service Build Service** > **OK**. 1. If you're deploying from GitHub for the first time, select **Authorize** and follow the authorization prompts. If you want to deploy from a different user's repository, select **Change Account**. -1. After you authorize your Azure account with GitHub, select the **Organization**, **Repository**, and **Branch** to configure CI/CD for. +1. After you authorize your Azure account with GitHub, select the **Organization**, **Repository**, and **Branch** you want. If you can't find an organization or repository, you might need to enable more permissions on GitHub. For more information, see [Managing access to your organization's repositories](https://docs.github.com/organizations/managing-access-to-your-organizations-repositories). -1. (Preview) Under **Authentication type**, select **User-assigned identity** for better security. For more information, see [frequently asked questions](). --1. When **GitHub Actions** is selected as the build provider, you can select the workflow file you want by using the **Runtime stack** and **Version** dropdown lists. Azure commits this workflow file into your selected GitHub repository to handle build and deploy tasks. To see the file before saving your changes, select **Preview file**. +1. Under **Authentication type**, select **User-assigned identity** for better security. For more information, see [frequently asked questions](#frequently-asked-questions). > [!NOTE]- > App Service detects the [language stack setting](configure-common.md#configure-language-stack-settings) of your app and selects the most appropriate workflow template. If you choose a different template, it might deploy an app that doesn't run properly. For more information, see [How the GitHub Actions build provider works](#how-does-the-github-actions-build-provider-work). 
+ > If your Azure account has the [required permissions](#why-do-i-see-the-error-you-do-not-have-sufficient-permissions-on-this-app-to-assign-role-based-access-to-a-managed-identity-and-configure-federated-credentials) for the **User-assigned identity** option, Azure creates a [user-assigned managed identity](#what-does-the-user-assigned-identity-option-do-for-github-actions) for you. If you don't, work with your Azure administrator to create an [identity with the required role on your app](#why-do-i-see-the-error-this-identity-does-not-have-write-permissions-on-this-app-please-select-a-different-identity-or-work-with-your-admin-to-grant-the-website-contributor-role-to-your-identity-on-this-app), then select it here in the dropdown. ++1. (Optional) To see the file before saving your changes, select **Preview file**. App Service selects a workflow template based on the [language stack setting](configure-common.md#configure-language-stack-settings) of your app and commits it into your selected GitHub repository. 1. Select **Save**. Select the tab that corresponds to your build provider to continue. # [Bitbucket](#tab/bitbucket) -The Bitbucket integration uses the App Service Build Services (Kudu) for build automation. +The Bitbucket integration uses the App Service Build Services for build automation. 4. If you're deploying from Bitbucket for the first time, select **Authorize** and follow the authorization prompts. If you want to deploy from a different user's repository, select **Change Account**. The Bitbucket integration uses the App Service Build Services (Kudu) for build a 1. Select **Save**. New commits in the selected repository and branch now deploy continuously into your App Service app. You can track the commits and deployments on the **Logs** tab.- + # [Local Git](#tab/local) See [Local Git deployment to Azure App Service](deploy-local-git.md). # [Azure Repos](#tab/repos)- - > [!NOTE] - > Azure Repos is supported as a deployment source for Windows apps. - > -4. App Service Build Service (Kudu) is the default build provider. +4. App Service Build Service is the default build provider. > [!NOTE]- > To use Azure Pipelines as the build provider for your App Service app, configure CI/CD directly from Azure Pipelines. Don't configure it in App Service. The **Azure Pipelines** option just points you in the right direction. + > To use Azure Pipelines as the build provider for your App Service app, configure it directly from Azure Pipelines. Don't configure it in App Service. The **Azure Pipelines** option just points you in the right direction. 1. Select the **Azure DevOps Organization**, **Project**, **Repository**, and **Branch** you want to deploy continuously. If your DevOps organization isn't listed, it's not yet linked to your Azure subscription. For more information, see [Create an Azure service connection](/azure/devops/pipelines/library/connect-to-azure). +# [Other repositories](#tab/others) ++For Windows apps, you can manually configure continuous deployment from a cloud Git or Mercurial repository that the portal doesn't directly support, like [GitLab](https://gitlab.com/). You do that by selecting **External Git** in the **Source** dropdown list. For more information, see [Set up continuous deployment using manual steps](https://github.com/projectkudu/kudu/wiki/Continuous-deployment#setting-up-continuous-deployment-using-manual-steps). + -- ## Disable continuous deployment See [Local Git deployment to Azure App Service](deploy-local-git.md). 1. Select **OK**. 
+## What are the build providers? -## Frequently asked questions +Depending on your deployment source in the Deployment Center, you might see a few options to select for build providers. Build providers help you build a CI/CD solution with Azure App Service by automating build, test, and deployment. -- [How does the GitHub Actions build provider work?](#how-does-the-github-actions-build-provider-work)-- [How do I configure continuous deployment without basic authentication?](#how-do-i-configure-continuous-deployment-without-basic-authentication)-- [What does the user-assigned identity option do for GitHub Actions?](#what-does-the-user-assigned-identity-option-do-for-github-actions)-- [I see "You do not have sufficient permissions on this app to assign role-based access to a managed identity and configure federated credentials." when I select the user-assigned identity option with GitHub Actions.](#i-see-you-do-not-have-sufficient-permissions-on-this-app-to-assign-role-based-access-to-a-managed-identity-and-configure-federated-credentials-when-i-select-the-user-assigned-identity-option-with-github-actions)-- [How do I deploy from other repositories](#how-do-i-deploy-from-other-repositories)+You're not limited to the build provider options found in the Deployment Center, but App Service lets you set them up quickly and offers some integrated deployment logging experience. -#### How does the GitHub Actions build provider work? +# [GitHub Actions](#tab/githubactions) -The GitHub Actions build provider is an option for [CI/CD from GitHub](#configure-the-deployment-source). It completes these actions to set up CI/CD: +The GitHub Actions build provider is available only for [GitHub deployment](?tabs=github#configure-the-deployment-source). When configured from the app's Deployment Center, it completes these actions to set up CI/CD: - Deposits a GitHub Actions workflow file into your GitHub repository to handle build and deploy tasks to App Service.-- Adds the publishing profile for your app as a GitHub secret. The workflow file uses this secret to authenticate with App Service.-- Captures information from the [workflow run logs](https://docs.github.com/actions/managing-workflow-runs/using-workflow-run-logs) and displays it on the **Logs** tab in your app's Deployment Center.+- For basic authentication, adds the publish profile for your app as a GitHub secret. The workflow file uses this secret to authenticate with App Service. +- For user-assigned identity, see [What does the user-assigned identity option do for GitHub Actions?](#what-does-the-user-assigned-identity-option-do-for-github-actions) +- Captures information from the [workflow run logs](https://docs.github.com/actions/managing-workflow-runs/using-workflow-run-logs) and displays it on the **Logs** tab in the Deployment Center. You can customize the GitHub Actions build provider in these ways: - Customize the workflow file after it's generated in your GitHub repository. For more information, see [Workflow syntax for GitHub Actions](https://docs.github.com/actions/reference/workflow-syntax-for-github-actions). Just make sure that the workflow deploys to App Service with the [azure/webapps-deploy](https://github.com/Azure/webapps-deploy) action. - If the selected branch is protected, you can still preview the workflow file without saving the configuration and then manually add it into your repository. 
This method doesn't give you log integration with the Azure portal.-- Instead of using a user-assigned managed identity or the publishing profile, you can also deploy by using a [service principal](deploy-github-actions.md?tabs=userlevel) in Microsoft Entra ID.+- Instead of using basic authentication or a user-assigned identity, you can also deploy by using a [service principal](deploy-github-actions.md?tabs=userlevel) in Microsoft Entra ID. This can't be configured in the portal. ++# [App Service Build Service](#tab/appservice) ++> [!NOTE] +> App Service Build Service requires [basic authentication to be enabled](configure-basic-auth-disable.md) for the webhook to work. For more information, see [Deployment without basic authentication](configure-basic-auth-disable.md#deployment-without-basic-authentication). ++App Service Build Service is the deployment and build engine native to App Service, otherwise known as Kudu. When this option is selected, App Service adds a webhook into the repository you authorized. Any code push to the repository triggers the webhook, and App Service pulls the changes into its repository and performs any deployment tasks. For more information, see [Deploying from GitHub (Kudu)](https://github.com/projectkudu/kudu/wiki/Deploying-from-GitHub). ++Resources: ++* [Investigate common problems with continuous deployment](https://github.com/projectkudu/kudu/wiki/Investigating-continuous-deployment) +* [Project Kudu](https://github.com/projectkudu/kudu/wiki) -#### How do I configure continuous deployment without basic authentication? +# [Azure Pipelines](#tab/pipelines) -To configure continuous deployment [without basic authentication](configure-basic-auth-disable.md), try using GitHub Actions with the **user-assigned identity** option. +Azure Pipelines is part of Azure DevOps. You can configure a pipeline to build, test, and deploy your app to App Service from [any supported source repository](/azure/devops/pipelines/repos). ++To use Azure Pipelines as the build provider, don't configure it in App Service, but [go to Azure DevOps directly](https://go.microsoft.com/fwlink/?linkid=2245703). In the Deployment Center, the **Azure Pipelines** option just points you in the right direction. ++For more information, see [Deploy to App Service using Azure Pipelines](deploy-azure-pipelines.md). ++-- +++## Frequently asked questions ++- [Does the GitHub Actions build provider work with basic authentication if basic authentication is disabled?](#does-the-github-actions-build-provider-work-with-basic-authentication-if-basic-authentication-is-disabled) +- [What does the user-assigned identity option do for GitHub Actions?](#what-does-the-user-assigned-identity-option-do-for-github-actions) +- [Why do I see the error, "This identity does not have write permissions on this app. Please select a different identity, or work with your admin to grant the Website Contributor role to your identity on this app"?](#why-do-i-see-the-error-this-identity-does-not-have-write-permissions-on-this-app-please-select-a-different-identity-or-work-with-your-admin-to-grant-the-website-contributor-role-to-your-identity-on-this-app) +- [Why do I see the error, "This identity does not have write permissions on this app. 
Please select a different identity, or work with your admin to grant the Website Contributor role to your identity on this app"?](#why-do-i-see-the-error-this-identity-does-not-have-write-permissions-on-this-app-please-select-a-different-identity-or-work-with-your-admin-to-grant-the-website-contributor-role-to-your-identity-on-this-app) ++#### Does the GitHub Actions build provider work with basic authentication if basic authentication is disabled? ++No. Try using GitHub Actions with the **user-assigned identity** option. ++For more information, see [Deployment without basic authentication](configure-basic-auth-disable.md#deployment-without-basic-authentication). #### What does the user-assigned identity option do for GitHub Actions? -When you select **user-assigned identity** under the **GitHub Actions** source, Azure creates a [user-managed identity](/en-us/entra/identity/managed-identities-azure-resources/overview#managed-identity-types) for you and [federates it with GitHub as an authorized client](/entra/workload-id/workload-identity-federation-create-trust-user-assigned-managed-identity?pivots=identity-wif-mi-methods-azp). This user-managed identity isn't shown in the **Identities** page for your app. +When you select **user-assigned identity** under the **GitHub Actions** source, App Service configures all the necessary resources in Azure and in GitHub to enable the recommended OpenID Connect authentication with GitHub Actions. -This automatically created user-managed identity should be used only for the GitHub Actions deployment. Using it for other configurations isn't supported. +Specifically, App Service does the following operations: -#### I see "You do not have sufficient permissions on this app to assign role-based access to a managed identity and configure federated credentials." when I select the user-assigned identity option with GitHub Actions. +- [Creates a federated credential](/entra/workload-id/workload-identity-federation-create-trust-user-assigned-managed-identity?pivots=identity-wif-mi-methods-azp) between a user-assigned managed identity in Azure and your selected repository and branch in GitHub. +- Creates the secrets `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID` from the federated credential in your selected GitHub repository. +- Assigns the identity to your app. -To use the **user-assigned identity** option for your GitHub Actions deployment, you need the `Microsoft.Authorization/roleAssignments/write` permission on your app. By default, the **User Access Administrator** role and **Owner** role have this permission already, but the **Contributor** role doesn't. +In a GitHub Actions workflow in your GitHub repository, you can then use the [Azure/login](https://github.com/Azure/login) action to authenticate with your app by using OpenID Connect. For examples, see [Add the workflow file to your GitHub repository](deploy-github-actions.md#3-add-the-workflow-file-to-your-github-repository). -#### How do I deploy from other repositories +If your Azure account has the [required permissions](#why-do-i-see-the-error-you-do-not-have-sufficient-permissions-on-this-app-to-assign-role-based-access-to-a-managed-identity-and-configure-federated-credentials), App Service creates a user-assigned managed identity and configures it for you. This identity isn't shown in the **Identities** page of your app. 
If your Azure account doesn't have the required permissions, you must select an [existing identity with the required role](#why-do-i-see-the-error-this-identity-does-not-have-write-permissions-on-this-app-please-select-a-different-identity-or-work-with-your-admin-to-grant-the-website-contributor-role-to-your-identity-on-this-app). -For Windows apps, you can manually configure continuous deployment from a cloud Git or Mercurial repository that the portal doesn't directly support, like [GitLab](https://gitlab.com/). You do that by selecting **External Git** in the **Source** dropdown list. For more information, see [Set up continuous deployment using manual steps](https://github.com/projectkudu/kudu/wiki/Continuous-deployment#setting-up-continuous-deployment-using-manual-steps). +#### Why do I see the error, "You do not have sufficient permissions on this app to assign role-based access to a managed identity and configure federated credentials"? ++The message indicates that your Azure account doesn't have the required permissions to create a user-assigned managed identity for the GitHub Actions. The required permissions (scoped to your app) are: ++- `Microsoft.Authorization/roleAssignments/write` +- `Microsoft.ManagedIdentity/userAssignedIdentities/write` ++By default, the **User Access Administrator** role and **Owner** role have these permissions already, but the **Contributor** role doesn't. If you don't have the required permissions, work with your Azure administrator to create a user-assigned managed identity with the [Website Contributor role](#why-do-i-see-the-error-this-identity-does-not-have-write-permissions-on-this-app-please-select-a-different-identity-or-work-with-your-admin-to-grant-the-website-contributor-role-to-your-identity-on-this-app). In the Deployment Center, you can then select the identity in the **GitHub** > **Identity** dropdown. ++For more information on the alternative steps, see [Deploy to App Service using GitHub Actions](deploy-github-actions.md). ++#### Why do I see the error, "This identity does not have write permissions on this app. Please select a different identity, or work with your admin to grant the Website Contributor role to your identity on this app"? ++The message indicates that the selected user-assigned managed identity doesn't have the required role [to enable OpenID Connect](#what-does-the-user-assigned-identity-option-do-for-github-actions) between the GitHub repository and the App Service app. The identity must have one of the following roles on the app: **Owner**, **Contributor**, **Websites Contributor**. The least privileged role that the identity needs is **Websites Contributor**. ## More resources -* [Deploy from Azure Pipelines to Azure App Services](/azure/devops/pipelines/apps/cd/deploy-webdeploy-webapps) -* [Investigate common problems with continuous deployment](https://github.com/projectkudu/kudu/wiki/Investigating-continuous-deployment) * [Use Azure PowerShell](/powershell/azure/)-* [Project Kudu](https://github.com/projectkudu/kudu/wiki) |
app-service | Deploy Ftp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-ftp.md | description: Learn how to deploy your app to Azure App Service using FTP or FTPS ms.assetid: ae78b410-1bc0-4d72-8fc4-ac69801247ae Previously updated : 02/26/2021 Last updated : 01/26/2024 or API app to [Azure App Service](./overview.md). The FTP/S endpoint for your app is already active. No configuration is necessary to enable FTP/S deployment. > [!NOTE]-> The **Development Center (Classic)** page in the Azure portal, which is the old deployment experience, will be deprecated in March, 2021. This change will not affect any existing deployment settings in your app, and you can continue to manage app deployment in the **Deployment Center** page. +> When [basic authentication is disabled](configure-basic-auth-disable.md), FTP/S deployment doesn't work, and you can't view or configure FTP credentials in the app's Deployment Center. ## Get deployment credentials For enhanced security, you should allow FTP over TLS/SSL only. You can also disa 1. In your app's resource page in [Azure portal](https://portal.azure.com), select **Configuration** > **General settings** from the left navigation. -2. To disable unencrypted FTP, select **FTPS Only** in **FTP state**. To disable both FTP and FTPS entirely, select **Disabled**. When finished, click **Save**. If using **FTPS Only**, you must enforce TLS 1.2 or higher by navigating to the **TLS/SSL settings** blade of your web app. TLS 1.0 and 1.1 are not supported with **FTPS Only**. +2. To disable unencrypted FTP, select **FTPS Only** in **FTP state**. To disable both FTP and FTPS entirely, select **Disabled**. When finished, select **Save**. If using **FTPS Only**, you must enforce TLS 1.2 or higher by navigating to the **TLS/SSL settings** page of your web app. TLS 1.0 and 1.1 aren't supported with **FTPS Only**. ![Disable FTP/S](./media/app-service-deploy-ftp/disable-ftp.png) A runtime application issue typically results in the right set of files deployed To determine a deployment or runtime issue, see [Deployment vs. runtime issues](https://github.com/projectkudu/kudu/wiki/Deployment-vs-runtime-issues). #### I'm not able to FTP and publish my code. How can I resolve the issue?-Check that you've entered the correct [hostname](#get-ftps-endpoint) and [credentials](#get-deployment-credentials). Check also that the following FTP ports on your machine are not blocked by a firewall: +Check that you entered the correct [hostname](#get-ftps-endpoint) and [credentials](#get-deployment-credentials). Check also that the following FTP ports on your machine aren't blocked by a firewall: - FTP control connection port: 21, 990 - FTP data connection port: 989, 10001-10300 Azure App Service supports connecting via both Active and Passive mode. Passive #### Why is my connection failing when attempting to connect over FTPS using explicit encryption? FTPS allows establishing the TLS secure connection in either an Explicit or Implicit way. + - If you connect with Implicit encryption, the connection is established via port 990. + - If you connect with Explicit encryption, the connection is established via port 21. -One thing to be aware that can affect your connection success is the URL used, this will depend on the Client Application used. -The portal will have just the URL as "ftps://" and you might need to change this. +The URL format you use can affect your connection success, and it also depends on the client application you use. 
The portal shows the URL as `ftps://`, but note: - Make sure to not mix both, such as attempting to connect to "ftps://" and using port 21, as it will fail to connect, even if you wish to do Explicit encryption. - The reason for this is due to an Explicit connection starting as a plain FTP connection before the AUTH method. + - If the URL you connect with starts with `ftp://`, the connection is implied to be on port 21. + - If it starts with `ftps://`, the connection is implied to be Implicit and on port 990. ++ Make sure not to mix both, such as attempting to connect to `ftps://` and using port 21, as it will fail to connect, even if you wish to do Explicit encryption. This is due to an Explicit connection starting as a plain FTP connection before the AUTH method. #### How can I determine the method that was used to deploy my Azure App Service?-Let us say you take over owning an app and you wish to find out how the Azure App Service was deployed so you can make changes and deploy them. You can determine how an Azure App Service was deployed by checking the application settings. If the app was deployed using an external package URL, you will see the WEBSITE_RUN_FROM_PACKAGE setting in the application settings with a URL value. Or if it was deployed using zip deploy, you will see the WEBSITE_RUN_FROM_PACKAGE setting with a value of 1. If the app was deployed using Azure DevOps, you will see the deployment history in the Azure DevOps portal. If Azure Functions Core Tools was used, you will see the deployment history in the Azure portal. +You can find out how an app was deployed by checking the application settings. If the app was deployed using an external package URL, you should see the `WEBSITE_RUN_FROM_PACKAGE` setting in the application settings with a URL value. Or if it was deployed using zip deploy, you should see the `WEBSITE_RUN_FROM_PACKAGE` setting with a value of `1`. If the app was deployed using Azure DevOps, you should see the deployment history in the Azure DevOps portal. If Azure Functions Core Tools is used, you should see the deployment history in the Azure portal. ## More resources |
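For example, one quick way to check for that setting is the Azure CLI (a sketch; the resource names are placeholders):

```azurecli-interactive
# Sketch: look for WEBSITE_RUN_FROM_PACKAGE in the app settings to see whether
# the app was deployed from a package. Resource names are placeholders.
az webapp config appsettings list \
    --resource-group <group-name> \
    --name <app-name> \
    --query "[?name=='WEBSITE_RUN_FROM_PACKAGE']"
```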
app-service | Deploy Github Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-github-actions.md | Title: Configure CI/CD with GitHub Actions description: Learn how to deploy your code to Azure App Service from a CI/CD pipeline with GitHub Actions. Customize the build tasks and execute complex deployments. Previously updated : 12/14/2023 Last updated : 01/16/2024 Get started with [GitHub Actions](https://docs.github.com/en/actions/learn-githu ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- A GitHub account. If you don't have one, sign up for [free](https://github.com/join). -- A working Azure App Service app. - - .NET: [Create an ASP.NET Core web app in Azure](quickstart-dotnetcore.md) - - ASP.NET: [Create an ASP.NET Framework web app in Azure](./quickstart-dotnetcore.md?tabs=netframework48) - - JavaScript: [Create a Node.js web app in Azure App Service](quickstart-nodejs.md) - - Java: [Create a Java app on Azure App Service](quickstart-java.md) - - Python: [Create a Python app in Azure App Service](quickstart-python.md) +- A GitHub account. If you don't have one, sign up for [free](https://github.com/join). -## Workflow file overview +## Set up GitHub Actions deployment when creating the app -A workflow is defined by a YAML (.yml) file in the `/.github/workflows/` path in your repository. This definition contains the various steps and parameters that make up the workflow. +GitHub Actions deployment is integrated into the default [app creation wizard](https://portal.azure.com/#create/Microsoft.WebSite). You just need to set **Continuous deployment** to **Enable** in the Deployment tab, and configure the organization, repository, and branch you want. -The file has three sections: -|Section |Tasks | -||| -|**Authentication** | 1. Define a service principal or publish profile. <br /> 2. Create a GitHub secret. | -|**Build** | 1. Set up the environment. <br /> 2. Build the web app. | -|**Deploy** | 1. Deploy the web app. | +When you enable continuous deployment, the app creation wizard automatically picks the authentication method based on the basic authentication selection and configures your app and your GitHub repository accordingly: -## Use the Deployment Center +| Basic authentication selection | Authentication method | +|-|-| +|Disable| [User-assigned identity (OpenID Connect)](deploy-continuous-deployment.md#what-does-the-user-assigned-identity-option-do-for-github-actions) | +|Enable| [Basic authentication](configure-basic-auth-disable.md) | -You can quickly get started with GitHub Actions by using the App Service Deployment Center. This turn-key method automatically generates a workflow file based on your application stack and commits it to your GitHub repository in the correct directory. For more information, see [Continuous deployment to Azure App Service](deploy-continuous-deployment.md). +> [!NOTE] +> If you receive an error when creating your app saying that your Azure account doesn't have certain permissions, it may not have [the required permissions to create and configure the user-assigned identity](deploy-continuous-deployment.md#why-do-i-see-the-error-you-do-not-have-sufficient-permissions-on-this-app-to-assign-role-based-access-to-a-managed-identity-and-configure-federated-credentials). 
For an alternative, see [Set up GitHub Actions deployment from the Deployment Center](#set-up-github-actions-deployment-from-the-deployment-center). ++## Set up GitHub Actions deployment from the Deployment Center ++For an existing app, you can get started quickly with GitHub Actions by using the App Service Deployment Center. This turn-key method automatically generates a GitHub Actions workflow file based on your application stack and commits it to your GitHub repository. ++The Deployment Center also lets you easily configure the more secure OpenID Connect authentication with [the **user-assigned identity** option](deploy-continuous-deployment.md#what-does-the-user-assigned-identity-option-do-for-github-actions). ++If your Azure account has the [needed permissions](deploy-continuous-deployment.md#why-do-i-see-the-error-you-do-not-have-sufficient-permissions-on-this-app-to-assign-role-based-access-to-a-managed-identity-and-configure-federated-credentials), you can select to create a user-assigned identity. Otherwise, you can select an existing user-assigned managed identity in the **Identity** dropdown. You can work with your Azure administrator to create a user-assigned managed identity with the [Website Contributor role](deploy-continuous-deployment.md#why-do-i-see-the-error-this-identity-does-not-have-write-permissions-on-this-app-please-select-a-different-identity-or-work-with-your-admin-to-grant-the-website-contributor-role-to-your-identity-on-this-app). ++For more information, see [Continuous deployment to Azure App Service](deploy-continuous-deployment.md?tabs=github). -## Set up a workflow manually +## Set up a GitHub Actions workflow manually -You can also deploy a workflow without using the Deployment Center. To do so, you need to first generate deployment credentials. +You can also deploy a workflow without using the Deployment Center. -## Generate deployment credentials +1. [Generate deployment credentials](#1-generate-deployment-credentials) +1. [Configure the GitHub secret](#2-configure-the-github-secret) +1. [Add the workflow file to your GitHub repository](#3-add-the-workflow-file-to-your-github-repository) ++### 1. Generate deployment credentials The recommended way to authenticate with Azure App Services for GitHub Actions is with a user-defined managed identity, and the easiest way for that is by [configuring GitHub Actions deployment directly in the portal](deploy-continuous-deployment.md) instead and selecting **User-assigned managed identity**. In the previous example, replace the placeholders with your subscription ID, res # [OpenID Connect](#tab/openid) -OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is more complex process that offers hardened security. +OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](/azure/developer/github/connect-from-azure) is more complex but offers hardened security. 1. If you don't have an existing application, register a [new Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application. To learn how to create a Create an active directory application, service princip -## Configure the GitHub secret +### 2. 
Configure the GitHub secret # [Publish profile](#tab/applevel) In [GitHub](https://github.com/), browse your repository. Select **Settings > Security > Secrets and variables > Actions > New repository secret**. -To use [app-level credentials](#generate-deployment-credentials), paste the contents of the downloaded publish profile file into the secret's value field. Name the secret `AZURE_WEBAPP_PUBLISH_PROFILE`. +To use [app-level credentials](#1-generate-deployment-credentials), paste the contents of the downloaded publish profile file into the secret's value field. Name the secret `AZURE_WEBAPP_PUBLISH_PROFILE`. -When you configure your GitHub workflow, you use the `AZURE_WEBAPP_PUBLISH_PROFILE` in the deploy Azure Web App action. For example: +When you configure the GitHub workflow file later, you use the `AZURE_WEBAPP_PUBLISH_PROFILE` in the deploy Azure Web App action. For example: ```yaml - uses: azure/webapps-deploy@v2 When you configure your GitHub workflow, you use the `AZURE_WEBAPP_PUBLISH_PROFI In [GitHub](https://github.com/), browse your repository. Select **Settings > Security > Secrets and variables > Actions > New repository secret**. -To use [user-level credentials](#generate-deployment-credentials), paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret the name `AZURE_CREDENTIALS`. +To use [user-level credentials](#1-generate-deployment-credentials), paste the entire JSON output from the Azure CLI command into the secret's value field. Name the secret `AZURE_CREDENTIALS`. -When you configure the workflow file later, you use the secret for the input `creds` of the Azure Login action. For example: +When you configure the GitHub workflow file later, you use the secret for the input `creds` of the [Azure/login](https://github.com/marketplace/actions/azure-login). For example: ```yaml - uses: azure/login@v1 When you configure the workflow file later, you use the secret for the input `cr # [OpenID Connect](#tab/openid) -You need to provide your application's **Client ID**, **Tenant ID** and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option. +You need to provide your application's **Client ID**, **Tenant ID** and **Subscription ID** to the [Azure/login](https://github.com/marketplace/actions/azure-login) action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option. 1. Open your GitHub repository and go to **Settings > Security > Secrets and variables > Actions > New repository secret**. You need to provide your application's **Client ID**, **Tenant ID** and **Subscr -## Set up the environment --Setting up the environment can be done using one of the setup actions. 
--|**Language** |**Setup Action** | -||| -|**.NET** | `actions/setup-dotnet` | -|**ASP.NET** | `actions/setup-dotnet` | -|**Java** | `actions/setup-java` | -|**JavaScript** | `actions/setup-node` | -|**Python** | `actions/setup-python` | --The following examples show how to set up the environment for the different supported languages: --**.NET** --```yaml - - name: Setup Dotnet 6.0.x - uses: actions/setup-dotnet@v3 - with: - dotnet-version: '6.0.x' -``` --**ASP.NET** --```yaml - - name: Install Nuget - uses: nuget/setup-nuget@v1 - with: - nuget-version: ${{ env.NUGET_VERSION}} -``` --**Java** --```yaml - - name: Setup Java 1.8.x - uses: actions/setup-java@v3 - with: - # If your pom.xml <maven.compiler.source> version is not in 1.8.x, - # change the Java version to match the version in pom.xml <maven.compiler.source> - java-version: '1.8.x' -``` --**JavaScript** +### 3. Add the workflow file to your GitHub repository -```yaml -env: - NODE_VERSION: '18.x' # set this to the node version to use --jobs: - build-and-deploy: - name: Build and Deploy - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@main - - name: Use Node.js ${{ env.NODE_VERSION }} - uses: actions/setup-node@v4 - with: - node-version: ${{ env.NODE_VERSION }} -``` -**Python** +A workflow is defined by a YAML (.yml) file in the `/.github/workflows/` path in your GitHub repository. This definition contains the various steps and parameters that make up the workflow. -```yaml - - name: Setup Python 3.x - uses: actions/setup-python@v4 - with: - python-version: 3.x -``` +At a minimum, the workflow file would have the following distinct steps: -## Build the web app +1. Authenticate with App Service using the GitHub secret you created. +1. Build the web app. +1. Deploy the web app. -The process of building a web app and deploying to Azure App Service changes depending on the language. +To deploy your code to an App Service app, you use the [azure/webapps-deploy@v3](https://github.com/Azure/webapps-deploy/tree/releases/v3) action. The action requires the name of your web app in `app-name` and, depending on your language stack, the path of a *.zip, *.war, *.jar, or folder to deploy in `package`. For a complete list of possible inputs for the `azure/webapps-deploy@v3` action, see the [action.yml](https://github.com/Azure/webapps-deploy/blob/releases/v3/action.yml) definition. The following examples show the part of the workflow that builds the web app, in different supported languages. -For all languages, you can set the web app root directory with `working-directory`. --**.NET** --The environment variable `AZURE_WEBAPP_PACKAGE_PATH` sets the path to your web app project. --```yaml -- name: dotnet build and publish- run: | - dotnet restore - dotnet build --configuration Release - dotnet publish -c Release --property:PublishDir='${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp' -``` -**ASP.NET** --You can restore NuGet dependencies and run msbuild with `run`. --```yaml -- name: NuGet to restore dependencies as well as project-specific tools that are specified in the project file- run: nuget restore --- name: Add msbuild to PATH- uses: microsoft/setup-msbuild@v1.0.2 --- name: Run msbuild- run: msbuild .\SampleWebApplication.sln -``` --**Java** --```yaml -- name: Build with Maven- run: mvn package --file pom.xml -``` --**JavaScript** --For Node.js, you can set `working-directory` or change for npm directory in `pushd`. 
--```yaml -- name: npm install, build, and test- run: | - npm install - npm run build --if-present - npm run test --if-present - working-directory: my-app-folder # set to the folder with your app if it is not the root directory -``` --**Python** --```yaml -- name: Install dependencies- run: | - python -m pip install --upgrade pip - pip install -r requirements.txt -``` ---## Deploy to App Service --To deploy your code to an App Service app, use the `azure/webapps-deploy@v2` action. This action has four parameters: --| **Parameter** | **Explanation** | -||| -| **app-name** | (Required) Name of the App Service app | -| **publish-profile** | (Optional) Publish profile file contents with Web Deploy secrets | -| **package** | (Optional) Path to package or folder. The path can include *.zip, *.war, *.jar, or a folder to deploy | -| **slot-name** | (Optional) Enter an existing slot other than the production [slot](deploy-staging-slots.md) | -- # [Publish profile](#tab/applevel) -### .NET Core --Build and deploy a .NET Core app to Azure using an Azure publish profile. The `publish-profile` input references the `AZURE_WEBAPP_PUBLISH_PROFILE` secret that you created earlier. --```yaml -name: .NET Core CI --on: [push] --env: - AZURE_WEBAPP_NAME: my-app-name # set this to your application's name - AZURE_WEBAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root - DOTNET_VERSION: '6.0.x' # set this to the dot net version to use --jobs: - build: - runs-on: ubuntu-latest -- steps: - # Checkout the repo - - uses: actions/checkout@main - - # Setup .NET Core SDK - - name: Setup .NET Core - uses: actions/setup-dotnet@v3 - with: - dotnet-version: ${{ env.DOTNET_VERSION }} - - # Run dotnet build and publish - - name: dotnet build and publish - run: | - dotnet restore - dotnet build --configuration Release - dotnet publish -c Release --property:PublishDir='${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp' - - # Deploy to Azure Web apps - - name: 'Run Azure webapp deploy action using publish profile credentials' - uses: azure/webapps-deploy@v2 - with: - app-name: ${{ env.AZURE_WEBAPP_NAME }} # Replace with your app name - publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }} # Define secret variable in repository settings as per action documentation - package: '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp' -``` --### ASP.NET --Build and deploy an ASP.NET MVC app that uses NuGet and `publish-profile` for authentication. ---```yaml -name: Deploy ASP.NET MVC App deploy to Azure Web App --on: [push] --env: - AZURE_WEBAPP_NAME: my-app # set this to your application's name - AZURE_WEBAPP_PACKAGE_PATH: '.' 
# set this to the path to your web app project, defaults to the repository root - NUGET_VERSION: '5.3.x' # set this to the dot net version to use --jobs: - build-and-deploy: - runs-on: windows-latest - steps: -- - uses: actions/checkout@main - - - name: Install Nuget - uses: nuget/setup-nuget@v1 - with: - nuget-version: ${{ env.NUGET_VERSION}} - - name: NuGet to restore dependencies as well as project-specific tools that are specified in the project file - run: nuget restore - - - name: Add msbuild to PATH - uses: microsoft/setup-msbuild@v1.0.2 -- - name: Run MSBuild - run: msbuild .\SampleWebApplication.sln - - - name: 'Run Azure webapp deploy action using publish profile credentials' - uses: azure/webapps-deploy@v2 - with: - app-name: ${{ env.AZURE_WEBAPP_NAME }} # Replace with your app name - publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }} # Define secret variable in repository settings as per action documentation - package: '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/SampleWebApplication/' -``` --### Java --Build and deploy a Java Spring app to Azure using an Azure publish profile. The `publish-profile` input references the `AZURE_WEBAPP_PUBLISH_PROFILE` secret that you created earlier. --```yaml -name: Java CI with Maven --on: [push] --jobs: - build: -- runs-on: ubuntu-latest -- steps: - - uses: actions/checkout@v4 - - name: Set up JDK 1.8 - uses: actions/setup-java@v3 - with: - java-version: 1.8 - - name: Build with Maven - run: mvn -B package --file pom.xml - working-directory: my-app-path - - name: Azure WebApp - uses: Azure/webapps-deploy@v2 - with: - app-name: my-app-name - publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }} - package: my/target/*.jar -``` --To deploy a `war` instead of a `jar`, change the `package` value. ---```yaml - - name: Azure WebApp - uses: Azure/webapps-deploy@v2 - with: - app-name: my-app-name - publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }} - package: my/target/*.war -``` --### JavaScript --Build and deploy a Node.js app to Azure using the app's publish profile. The `publish-profile` input references the `AZURE_WEBAPP_PUBLISH_PROFILE` secret that you created earlier. --```yaml -# File: .github/workflows/workflow.yml -name: JavaScript CI --on: [push] --env: - AZURE_WEBAPP_NAME: my-app-name # set this to your application's name - AZURE_WEBAPP_PACKAGE_PATH: 'my-app-path' # set this to the path to your web app project, defaults to the repository root - NODE_VERSION: '18.x' # set this to the node version to use --jobs: - build-and-deploy: - name: Build and Deploy - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@main - - name: Use Node.js ${{ env.NODE_VERSION }} - uses: actions/setup-node@v4 - with: - node-version: ${{ env.NODE_VERSION }} - - name: npm install, build, and test - run: | - # Build and test the project, then - # deploy to Azure Web App. - npm install - npm run build --if-present - npm run test --if-present - working-directory: my-app-path - - name: 'Deploy to Azure WebApp' - uses: azure/webapps-deploy@v2 - with: - app-name: ${{ env.AZURE_WEBAPP_NAME }} - publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }} - package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }} -``` --### Python --Build and deploy a Python app to Azure using the app's publish profile. Note how the `publish-profile` input references the `AZURE_WEBAPP_PUBLISH_PROFILE` secret that you created earlier. 
--```yaml -name: Python CI --on: - [push] --env: - AZURE_WEBAPP_NAME: my-web-app # set this to your application's name - AZURE_WEBAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root --jobs: - build: - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v4 - - name: Set up Python 3.x - uses: actions/setup-python@v4 - with: - python-version: 3.x - - name: Install dependencies - run: | - python -m pip install --upgrade pip - pip install -r requirements.txt - - name: Building web app - uses: azure/appservice-build@v2 - - name: Deploy web App using GH Action azure/webapps-deploy - uses: azure/webapps-deploy@v2 - with: - app-name: ${{ env.AZURE_WEBAPP_NAME }} - publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }} - package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }} -``` # [Service principal](#tab/userlevel) -### .NET Core --Build and deploy a .NET Core app to Azure using an Azure service principal. Note how the `creds` input references the `AZURE_CREDENTIALS` secret that you created earlier. ---```yaml -name: .NET Core --on: [push] --env: - AZURE_WEBAPP_NAME: my-app # set this to your application's name - AZURE_WEBAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root - DOTNET_VERSION: '6.0.x' # set this to the dot net version to use --jobs: - build: - runs-on: ubuntu-latest -- steps: - # Checkout the repo - - uses: actions/checkout@main - - uses: azure/login@v1 - with: - creds: ${{ secrets.AZURE_CREDENTIALS }} -- - # Setup .NET Core SDK - - name: Setup .NET Core - uses: actions/setup-dotnet@v3 - with: - dotnet-version: ${{ env.DOTNET_VERSION }} - - # Run dotnet build and publish - - name: dotnet build and publish - run: | - dotnet restore - dotnet build --configuration Release - dotnet publish -c Release --property:PublishDir='${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp' - - # Deploy to Azure Web apps - - name: 'Run Azure webapp deploy action using Azure Credentials' - uses: azure/webapps-deploy@v2 - with: - app-name: ${{ env.AZURE_WEBAPP_NAME }} # Replace with your app name - package: '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp' - - - name: logout - run: | - az logout -``` --### ASP.NET --Build and deploy a ASP.NET MVC app to Azure using an Azure service principal. Note how the `creds` input references the `AZURE_CREDENTIALS` secret that you created earlier. --```yaml -name: Deploy ASP.NET MVC App deploy to Azure Web App --on: [push] --env: - AZURE_WEBAPP_NAME: my-app # set this to your application's name - AZURE_WEBAPP_PACKAGE_PATH: '.' 
# set this to the path to your web app project, defaults to the repository root - NUGET_VERSION: '5.3.x' # set this to the dot net version to use --jobs: - build-and-deploy: - runs-on: windows-latest - steps: -- # checkout the repo - - uses: actions/checkout@main - - - uses: azure/login@v1 - with: - creds: ${{ secrets.AZURE_CREDENTIALS }} -- - name: Install Nuget - uses: nuget/setup-nuget@v1 - with: - nuget-version: ${{ env.NUGET_VERSION}} - - name: NuGet to restore dependencies as well as project-specific tools that are specified in the project file - run: nuget restore - - - name: Add msbuild to PATH - uses: microsoft/setup-msbuild@v1.0.2 -- - name: Run MSBuild - run: msbuild .\SampleWebApplication.sln - - - name: 'Run Azure webapp deploy action using Azure Credentials' - uses: azure/webapps-deploy@v2 - with: - app-name: ${{ env.AZURE_WEBAPP_NAME }} # Replace with your app name - package: '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/SampleWebApplication/' - - # Azure logout - - name: logout - run: | - az logout -``` --### Java --Build and deploy a Java Spring app to Azure using an Azure service principal. Note how the `creds` input references the `AZURE_CREDENTIALS` secret that you created earlier. --```yaml -name: Java CI with Maven --on: [push] --jobs: - build: -- runs-on: ubuntu-latest -- steps: - - uses: actions/checkout@v4 - - uses: azure/login@v1 - with: - creds: ${{ secrets.AZURE_CREDENTIALS }} - - name: Set up JDK 1.8 - uses: actions/setup-java@v3 - with: - java-version: 1.8 - - name: Build with Maven - run: mvn -B package --file pom.xml - working-directory: complete - - name: Azure WebApp - uses: Azure/webapps-deploy@v2 - with: - app-name: my-app-name - package: my/target/*.jar -- # Azure logout - - name: logout - run: | - az logout -``` --### JavaScript --Build and deploy a Node.js app to Azure using an Azure service principal. Note how the `creds` input references the `AZURE_CREDENTIALS` secret that you created earlier. --```yaml -name: JavaScript CI --on: [push] --name: Node.js --env: - AZURE_WEBAPP_NAME: my-app # set this to your application's name - AZURE_WEBAPP_PACKAGE_PATH: 'my-app-path' # set this to the path to your web app project, defaults to the repository root - NODE_VERSION: '18.x' # set this to the node version to use --jobs: - build-and-deploy: - runs-on: ubuntu-latest - steps: - # checkout the repo - - name: 'Checkout GitHub Action' - uses: actions/checkout@main - - - uses: azure/login@v1 - with: - creds: ${{ secrets.AZURE_CREDENTIALS }} - - - name: Setup Node ${{ env.NODE_VERSION }} - uses: actions/setup-node@v4 - with: - node-version: ${{ env.NODE_VERSION }} - - - name: 'npm install, build, and test' - run: | - npm install - npm run build --if-present - npm run test --if-present - working-directory: my-app-path - - # deploy web app using Azure credentials - - uses: azure/webapps-deploy@v2 - with: - app-name: ${{ env.AZURE_WEBAPP_NAME }} - package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }} -- # Azure logout - - name: logout - run: | - az logout -``` --### Python --Build and deploy a Python app to Azure using an Azure service principal. Note how the `creds` input references the `AZURE_CREDENTIALS` secret that you created earlier. --```yaml -name: Python application --on: - [push] --env: - AZURE_WEBAPP_NAME: my-app # set this to your application's name - AZURE_WEBAPP_PACKAGE_PATH: '.' 
# set this to the path to your web app project, defaults to the repository root --jobs: - build: - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v4 - - - uses: azure/login@v1 - with: - creds: ${{ secrets.AZURE_CREDENTIALS }} -- - name: Set up Python 3.x - uses: actions/setup-python@v4 - with: - python-version: 3.x - - name: Install dependencies - run: | - python -m pip install --upgrade pip - pip install -r requirements.txt - - name: Deploy web App using GH Action azure/webapps-deploy - uses: azure/webapps-deploy@v2 - with: - app-name: ${{ env.AZURE_WEBAPP_NAME }} - package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }} - - name: logout - run: | - az logout -``` # [OpenID Connect](#tab/openid) -### .NET Core --Build and deploy a .NET Core app to Azure using an Azure service principal. The example uses GitHub secrets for the `client-id`, `tenant-id`, and `subscription-id` values. You can also pass these values directly in the login action. ---```yaml -name: .NET Core --on: [push] --permissions: - id-token: write - contents: read --env: - AZURE_WEBAPP_NAME: my-app # set this to your application's name - AZURE_WEBAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root - DOTNET_VERSION: '6.0.x' # set this to the dot net version to use --jobs: - build: - runs-on: ubuntu-latest -- steps: - # Checkout the repo - - uses: actions/checkout@main - - uses: azure/login@v1 - with: - client-id: ${{ secrets.AZURE_CLIENT_ID }} - tenant-id: ${{ secrets.AZURE_TENANT_ID }} - subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }} -- - # Setup .NET Core SDK - - name: Setup .NET Core - uses: actions/setup-dotnet@v3 - with: - dotnet-version: ${{ env.DOTNET_VERSION }} - - # Run dotnet build and publish - - name: dotnet build and publish - run: | - dotnet restore - dotnet build --configuration Release - dotnet publish -c Release --property:PublishDir='${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp' - - # Deploy to Azure Web apps - - name: 'Run Azure webapp deploy action using publish profile credentials' - uses: azure/webapps-deploy@v2 - with: - app-name: ${{ env.AZURE_WEBAPP_NAME }} # Replace with your app name - package: '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp' - - - name: logout - run: | - az logout -``` --### ASP.NET --Build and deploy a ASP.NET MVC app to Azure using an Azure service principal. The example uses GitHub secrets for the `client-id`, `tenant-id`, and `subscription-id` values. You can also pass these values directly in the login action. --```yaml -name: Deploy ASP.NET MVC App deploy to Azure Web App -on: [push] --permissions: - id-token: write - contents: read --env: - AZURE_WEBAPP_NAME: my-app # set this to your application's name - AZURE_WEBAPP_PACKAGE_PATH: '.' 
# set this to the path to your web app project, defaults to the repository root - NUGET_VERSION: '5.3.x' # set this to the dot net version to use --jobs: - build-and-deploy: - runs-on: windows-latest - steps: -- # checkout the repo - - uses: actions/checkout@main - - - uses: azure/login@v1 - with: - client-id: ${{ secrets.AZURE_CLIENT_ID }} - tenant-id: ${{ secrets.AZURE_TENANT_ID }} - subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }} -- - name: Install Nuget - uses: nuget/setup-nuget@v1 - with: - nuget-version: ${{ env.NUGET_VERSION}} - - name: NuGet to restore dependencies as well as project-specific tools that are specified in the project file - run: nuget restore - - - name: Add msbuild to PATH - uses: microsoft/setup-msbuild@v1.0.2 -- - name: Run MSBuild - run: msbuild .\SampleWebApplication.sln - - - name: 'Run Azure webapp deploy action using publish profile credentials' - uses: azure/webapps-deploy@v2 - with: - app-name: ${{ env.AZURE_WEBAPP_NAME }} # Replace with your app name - package: '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/SampleWebApplication/' - - # Azure logout - - name: logout - run: | - az logout -``` --### Java --Build and deploy a Java Spring app to Azure using an Azure service principal. The example uses GitHub secrets for the `client-id`, `tenant-id`, and `subscription-id` values. You can also pass these values directly in the login action. --```yaml -name: Java CI with Maven --on: [push] --permissions: - id-token: write - contents: read --jobs: - build: -- runs-on: ubuntu-latest -- steps: - - uses: actions/checkout@v4 - - uses: azure/login@v1 - with: - client-id: ${{ secrets.AZURE_CLIENT_ID }} - tenant-id: ${{ secrets.AZURE_TENANT_ID }} - subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }} - - name: Set up JDK 1.8 - uses: actions/setup-java@v3 - with: - java-version: 1.8 - - name: Build with Maven - run: mvn -B package --file pom.xml - working-directory: complete - - name: Azure WebApp - uses: Azure/webapps-deploy@v2 - with: - app-name: my-app-name - package: my/target/*.jar -- # Azure logout - - name: logout - run: | - az logout -``` --### JavaScript --Build and deploy a Node.js app to Azure using an Azure service principal. The example uses GitHub secrets for the `client-id`, `tenant-id`, and `subscription-id` values. You can also pass these values directly in the login action. 
---```yaml -name: JavaScript CI --on: [push] --permissions: - id-token: write - contents: read --name: Node.js --env: - AZURE_WEBAPP_NAME: my-app # set this to your application's name - AZURE_WEBAPP_PACKAGE_PATH: 'my-app-path' # set this to the path to your web app project, defaults to the repository root - NODE_VERSION: '18.x' # set this to the node version to use --jobs: - build-and-deploy: - runs-on: ubuntu-latest - steps: - # checkout the repo - - name: 'Checkout GitHub Action' - uses: actions/checkout@main - - - uses: azure/login@v1 - with: - client-id: ${{ secrets.AZURE_CLIENT_ID }} - tenant-id: ${{ secrets.AZURE_TENANT_ID }} - subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }} - - - name: Setup Node ${{ env.NODE_VERSION }} - uses: actions/setup-node@v4 - with: - node-version: ${{ env.NODE_VERSION }} - - - name: 'npm install, build, and test' - run: | - npm install - npm run build --if-present - npm run test --if-present - working-directory: my-app-path - - # deploy web app using Azure credentials - - uses: azure/webapps-deploy@v2 - with: - app-name: ${{ env.AZURE_WEBAPP_NAME }} - package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }} -- # Azure logout - - name: logout - run: | - az logout -``` --### Python --Build and deploy a Python app to Azure using an Azure service principal. The example uses GitHub secrets for the `client-id`, `tenant-id`, and `subscription-id` values. You can also pass these values directly in the login action. --```yaml -name: Python application --on: - [push] --permissions: - id-token: write - contents: read --env: - AZURE_WEBAPP_NAME: my-app # set this to your application's name - AZURE_WEBAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root --jobs: - build: - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v4 - - - uses: azure/login@v1 - with: - client-id: ${{ secrets.AZURE_CLIENT_ID }} - tenant-id: ${{ secrets.AZURE_TENANT_ID }} - subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }} -- - name: Set up Python 3.x - uses: actions/setup-python@v4 - with: - python-version: 3.x - - name: Install dependencies - run: | - python -m pip install --upgrade pip - pip install -r requirements.txt - - name: Deploy web App using GH Action azure/webapps-deploy - uses: azure/webapps-deploy@v2 - with: - app-name: ${{ env.AZURE_WEBAPP_NAME }} - package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }} - - name: logout - run: | - az logout -``` --+-- ## Next steps Check out references on Azure GitHub Actions and workflows: -- [Azure login](https://github.com/Azure/login)-- [Azure WebApp](https://github.com/Azure/webapps-deploy)-- [Azure WebApp for containers](https://github.com/Azure/webapps-container-deploy)-- [Docker login/logout](https://github.com/Azure/docker-login)-- [K8s deploy](https://github.com/Azure/k8s-deploy)+- [Azure/login action](https://github.com/Azure/login) +- [Azure/webapps-deploy action](https://github.com/Azure/webapps-deploy) +- [Docker/login action](https://github.com/Azure/docker-login) +- [Azure/k8s-deploy action](https://github.com/Azure/k8s-deploy) - [Actions workflows to deploy to Azure](https://github.com/Azure/actions-workflow-samples) - [Starter Workflows](https://github.com/actions/starter-workflows) - [Events that trigger workflows](https://docs.github.com/en/actions/reference/events-that-trigger-workflows) |
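As a minimal sketch of the workflow file described in "Add the workflow file to your GitHub repository", assuming OpenID Connect secrets and a prebuilt ZIP package (the app name and package path are placeholders):

```yaml
# Minimal sketch: authenticate with OpenID Connect, then deploy a prebuilt package
# with azure/webapps-deploy@v3. App name and package path are placeholders.
name: Deploy to Azure App Service

on: [push]

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

      - uses: azure/webapps-deploy@v3
        with:
          app-name: <app-name>
          package: ./app.zip
```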
app-service | Deploy Local Git | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-local-git.md | Title: Deploy from local Git repo description: Learn how to enable local Git deployment to Azure App Service. One of the simplest ways to deploy code from your local machine. ms.assetid: ac50a623-c4b8-4dfd-96b2-a09420770063 Previously updated : 02/16/2021 Last updated : 01/26/2024 +> [!NOTE] +> When [basic authentication is disabled](configure-basic-auth-disable.md), Local Git deployment doesn't work, and you can't configure Local Git deployment in the app's Deployment Center. + ## Prerequisites To follow the steps in this how-to guide: In the portal, you need to create an app first, then configure deployment for it ## Configure an existing app -If you haven't created an app yet, see [Create a Git enabled app](#create-a-git-enabled-app) instead. +If you don't have an app yet, see [Create a Git enabled app](#create-a-git-enabled-app) instead. # [Azure CLI](#tab/cli) Set-AzResource -PropertyObject $PropertiesObject -ResourceGroupName <group-name> 1. In the [Azure portal](https://portal.azure.com), navigate to your app's management page. -1. From the left menu, select **Deployment Center** > **Settings**. Select **Local Git** in **Source**, then click **Save**. +1. From the left menu, select **Deployment Center** > **Settings**. Select **Local Git** in **Source**, then select **Save**. ![Shows how to enable local Git deployment for App Service in the Azure portal](./media/deploy-local-git/enable-portal.png) Set-AzResource -PropertyObject $PropertiesObject -ResourceGroupName <group-name> If your Git remote URL already contains the username and password, you won't be prompted. -1. Review the output. You may see runtime-specific automation, such as MSBuild for ASP.NET, `npm install` for Node.js, and `pip install` for Python. +1. Review the output. You might see runtime-specific automation, such as MSBuild for ASP.NET, `npm install` for Node.js, and `pip install` for Python. 1. Browse to your app in the Azure portal to verify that the content is deployed. When you push commits to your App Service repository, App Service deploys the fi git push azure main ``` - You can also change the `DEPLOYMENT_BRANCH` app setting in the Azure Portal, by selecting **Configuration** under **Settings** and adding a new Application Setting with a name of `DEPLOYMENT_BRANCH` and value of `main`. + You can also change the `DEPLOYMENT_BRANCH` app setting in the Azure portal, by selecting **Configuration** under **Settings** and adding a new Application Setting with a name of `DEPLOYMENT_BRANCH` and value of `main`. ## Troubleshoot deployment -You may see the following common error messages when you use Git to publish to an App Service app in Azure: +You might see the following common error messages when you use Git to publish to an App Service app in Azure: |Message|Cause|Resolution ||| |`Unable to access '[siteURL]': Failed to connect to [scmAddress]`|The app isn't up and running.|Start the app in the Azure portal. Git deployment isn't available when the web app is stopped.|-|`Couldn't resolve host 'hostname'`|The address information for the 'azure' remote is incorrect.|Use the `git remote -v` command to list all remotes, along with the associated URL. Verify that the URL for the 'azure' remote is correct. 
If needed, remove and recreate this remote using the correct URL.| +|`Couldn't resolve host 'hostname'`|The address information for the `azure` remote is incorrect.|Use the `git remote -v` command to list all remotes, along with the associated URL. Verify that the URL for the `azure` remote is correct. If needed, remove and recreate this remote using the correct URL.| |`No refs in common and none specified; doing nothing. Perhaps you should specify a branch such as 'main'.`|You didn't specify a branch during `git push`, or you haven't set the `push.default` value in `.gitconfig`.|Run `git push` again, specifying the main branch: `git push azure main`.|-|`Error - Changes committed to remote repository but deployment to website failed.`|You pushed a local branch that doesn't match the app deployment branch on 'azure'.|Verify that current branch is `master`. To change the default branch, use `DEPLOYMENT_BRANCH` application setting (see [Change deployment branch](#change-deployment-branch)). | -|`src refspec [branchname] does not match any.`|You tried to push to a branch other than main on the 'azure' remote.|Run `git push` again, specifying the main branch: `git push azure main`.| +|`Error - Changes committed to remote repository but deployment to website failed.`|You pushed a local branch that doesn't match the app deployment branch on `azure`.|Verify that current branch is `master`. To change the default branch, use `DEPLOYMENT_BRANCH` application setting (see [Change deployment branch](#change-deployment-branch)). | +|`src refspec [branchname] does not match any.`|You tried to push to a branch other than main on the `azure` remote.|Run `git push` again, specifying the main branch: `git push azure main`.| |`RPC failed; result=22, HTTP code = 5xx.`|This error can happen if you try to push a large git repository over HTTPS.|Change the git configuration on the local machine to make the `postBuffer` bigger. For example: `git config --global http.postBuffer 524288000`.| |`Error - Changes committed to remote repository but your web app not updated.`|You deployed a Node.js app with a _package.json_ file that specifies additional required modules.|Review the `npm ERR!` error messages before this error for more context on the failure. The following are the known causes of this error, and the corresponding `npm ERR!` messages:<br /><br />**Malformed package.json file**: `npm ERR! Couldn't read dependencies.`<br /><br />**Native module doesn't have a binary distribution for Windows**:<br />`npm ERR! \cmd "/c" "node-gyp rebuild"\ failed with 1` <br />or <br />`npm ERR! [modulename@version] preinstall: \make || gmake\ `| -## Additional resources +## More resources - [App Service build server (Project Kudu documentation)](https://github.com/projectkudu/kudu/wiki) - [Continuous deployment to Azure App Service](deploy-continuous-deployment.md) |
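For reference, the Local Git flow described in this article reduces to a few Git commands (a sketch; the remote URL is a placeholder for the Git clone URI shown for your app):

```bash
# Sketch: add the App Service repository as the 'azure' remote and push the main branch.
# The remote URL is a placeholder; use the Git clone URI shown for your app.
git remote add azure https://<app-name>.scm.azurewebsites.net/<app-name>.git
git push azure main

# If your local branch has a different name, push it to the configured deployment branch.
git push azure <local-branch>:main
```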
app-service | Deploy Zip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-zip.md | Title: Deploy files to App Service description: Learn to deploy various app packages or discrete libraries, static files, or startup scripts to Azure App Service Previously updated : 07/21/2023 Last updated : 01/26/2024 This ZIP package deployment uses the same Kudu service that powers continuous in - Deployment logs. - A package size limit of 2048 MB. -For more information, see [Kudu documentation](https://github.com/projectkudu/kudu/wiki/Deploying-from-a-zip-file). - > [!NOTE]-> Files in the ZIP package are copied only if their timestamps don't match what is already deployed. Generating a zip using a build process that caches outputs can result in faster deployments. See [Deploying from a zip file or url](https://github.com/projectkudu/kudu/wiki/Deploying-from-a-zip-file-or-url), for more information. +> Files in the ZIP package are copied only if their timestamps don't match what is already deployed. ++#### With zip deploy UI in Kudu ++In the browser, navigate to `https://<app_name>.scm.azurewebsites.net/ZipDeployUI`. ++Upload the ZIP package you created in [Create a project ZIP package](#create-a-project-zip-package) by dragging it to the file explorer area on the web page. ++When deployment is in progress, an icon in the top right corner shows you the progress in percentage. The page also shows verbose messages for the operation below the explorer area. When deployment completes, the last message should say `Deployment successful`. ++The above endpoint doesn't work for Linux App Services at this time. Consider using FTP or the [ZIP deploy API](./faq-app-service-linux.yml) instead. ++#### Without zip deploy UI in Kudu # [Azure CLI](#tab/cli) az webapp deploy --resource-group <group-name> --name <app-name> --src-path <zip This command restarts the app after deploying the ZIP package. --The following example uses the `--src-url` parameter to specify the URL of an Azure Storage account that the site should pull the ZIP from. --```azurecli-interactive -az webapp deploy --resource-group <group-name> --name <app-name> --src-url "https://storagesample.blob.core.windows.net/sample-container/myapp.zip?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3 -``` - # [Azure PowerShell](#tab/powershell) -The following example uses [Publish-AzWebapp](/powershell/module/az.websites/publish-azwebapp) to upload the ZIP package. Replace the placeholders `<group-name>`, `<app-name>`, and `<zip-package-path>`. +The following example uses [Publish-AzWebapp](/powershell/module/az.websites/publish-azwebapp) to upload the ZIP package. Replace the placeholders `<group-name>`, `<app-name>`, and `<zip-package-path>` -```powershell +```azurepowershell-interactive Publish-AzWebApp -ResourceGroupName Default-Web-WestUS -Name MyApp -ArchivePath <zip-package-path> ``` # [Kudu API](#tab/api) -The following example uses the cURL tool to deploy a ZIP package. Replace the placeholders `<username>`, `<password>`, `<zip-package-path>`, and `<app-name>`. Use the [deployment credentials](deploy-configure-credentials.md) for authentication. +The following example uses the cURL tool to deploy a ZIP package. Replace the placeholders `<zip-package-path>` and `<app-name>`. If you choose basic authentication, supply the [deployment credentials](deploy-configure-credentials.md) in `<username>` and `<password>`. 
```bash-curl -X POST \ - -H "Content-Type: application/octet-stream" \ - -u '<username>:<password>' \ - -T "<zip-package-path>" \ - "https://<app-name>.scm.azurewebsites.net/api/zipdeploy" -``` -+# Microsoft Entra authentication +TOKEN=$(az account get-access-token --query accessToken | tr -d '"') -The following example uses the `packageUri` parameter to specify the URL of an Azure Storage account that the web app should pull the ZIP from. +curl -X POST \ + -H "Authorization: Bearer $TOKEN" \ + -T @"<zip-package-path>" \ + "https://<app-name>.scm.azurewebsites.net/api/publish?type=zip" -```bash -curl -X PUT \ - -H "Content-Type: application/json" \ +# Basic authentication +curl -X POST \ -u '<username>:<password>' \- -d '{"packageUri": "https://storagesample.blob.core.windows.net/sample-container/myapp.zip?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3"}' \ - "https://<app-name>.scm.azurewebsites.net/api/zipdeploy" + -T "<zip-package-path>" \ + "https://<app-name>.scm.azurewebsites.net/api/publish?type=zip" ``` -# [Kudu UI](#tab/kudu-ui) +# [ARM template](#tab/arm) -In the browser, navigate to `https://<app_name>.scm.azurewebsites.net/ZipDeployUI`. --Upload the ZIP package you created in [Create a project ZIP package](#create-a-project-zip-package) by dragging it to the file explorer area on the web page. --When deployment is in progress, an icon in the top right corner shows you the progress in percentage. The page also shows verbose messages for the operation below the explorer area. When it is finished, the last deployment message should say `Deployment successful`. --The above endpoint does not work for Linux App Services at this time. Consider using FTP or the [ZIP deploy API](./faq-app-service-linux.yml) instead. +ARM templates only support [deployments from remotely hosted packages](#deploy-to-network-secured-apps). -- -> [!NOTE] -> To deploy a ZIP package in an [ARM template](), upload the ZIP package to an internet-accessible location, then add a `onedeploy` resource like the following JSON. Replace the placeholders `<app-name>` and `<zip-package-uri>`. -> -> ```ARM template -> { -> "type": "Microsoft.Web/sites/extensions", -> "apiVersion": "2021-03-01", -> "name": "onedeploy", -> "dependsOn": [ -> "[resourceId('Microsoft.Web/Sites', <app-name>')]" -> ], -> "properties": { -> "packageUri": "<zip-package-uri>", -> "type":"zip" -> } -> } -> ``` -> -> The \<zip-package-uri> can be a public endpoint, but it's best to use blob storage with a SAS key to protect it. For more information, see [Microsoft.Web sites/extensions 'onedeploy' 2021-03-01](/azure/templates/microsoft.web/2021-03-01/sites/extensions-onedeploy?pivots=deployment-language-arm-template). -> -> --## Enable build automation for ZIP deploy +## Enable build automation for zip deploy By default, the deployment engine assumes that a ZIP package is ready to run as-is and doesn't run any build automation. To enable the same build automation as in a [Git deployment](deploy-local-git.md), set the `SCM_DO_BUILD_DURING_DEPLOYMENT` app setting by running the following command in the [Cloud Shell](https://shell.azure.com): az webapp config appsettings set --resource-group <group-name> --name <app-name> For more information, see [Kudu documentation](https://github.com/projectkudu/kudu/wiki/Deploying-from-a-zip-file-or-url). 
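For reference, a sketch of that command with the setting applied (resource names are placeholders; the `--settings` value reflects the `SCM_DO_BUILD_DURING_DEPLOYMENT` setting named above):

```azurecli-interactive
# Sketch: turn on build automation for zip deploy. Resource names are placeholders.
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings SCM_DO_BUILD_DURING_DEPLOYMENT=true
```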
- ## Deploy WAR/JAR/EAR packages You can deploy your [WAR](https://wikipedia.org/wiki/WAR_(file_format)), [JAR](https://wikipedia.org/wiki/JAR_(file_format)), or [EAR](https://wikipedia.org/wiki/EAR_(file_format)) package to App Service to run your Java web app using the Azure CLI, PowerShell, or the Kudu publish API. -The deployment process used by the steps here places the package on the app's content share with the right naming convention and directory structure (see [Kudu publish API reference](#kudu-publish-api-reference)), and it's the recommended approach. If you deploy WAR/JAR/EAR packages using [FTP](deploy-ftp.md) or WebDeploy instead, you may see unkown failures due to mistakes in the naming or structure. +The deployment process shown here puts the package on the app's content share with the right naming convention and directory structure (see [Kudu publish API reference](#kudu-publish-api-reference)), and it's the recommended approach. If you deploy WAR/JAR/EAR packages using [FTP](deploy-ftp.md) or WebDeploy instead, you might see unknown failures due to mistakes in the naming or structure. # [Azure CLI](#tab/cli) Deploy a WAR package to Tomcat or JBoss EAP by using the [az webapp deploy](/cli az webapp deploy --resource-group <group-name> --name <app-name> --src-path ./<package-name>.war ``` --The following example uses the `--src-url` parameter to specify the URL of an Azure Storage account that the web app should pull the WAR from. --```azurecli-interactive -az webapp deploy --resource-group <group-name> --name <app-name> --src-url "https://storagesample.blob.core.windows.net/sample-container/myapp.war?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3 --type war -``` - The CLI command uses the [Kudu publish API](#kudu-publish-api-reference) to deploy the package and can be fully customized. # [Azure PowerShell](#tab/powershell) Publish-AzWebapp -ResourceGroupName <group-name> -Name <app-name> -ArchivePath < # [Kudu API](#tab/api) -The following example uses the cURL tool to deploy a .war, .jar, or .ear file. Replace the placeholders `<username>`, `<file-path>`, `<app-name>`, and `<package-type>` (`war`, `jar`, or `ear`, accordingly). When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md). +The following example uses the cURL tool to deploy a .war, .jar, or .ear file. Replace the placeholders `<file-path>`, `<app-name>`, and `<package-type>` (`war`, `jar`, or `ear`, accordingly). If you choose basic authentication, supply the [deployment credentials](deploy-configure-credentials.md) in `<username>` and `<password>`. ```bash-curl -X POST -u <username> -T @"<file-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=<package-type> -``` +# Microsoft Entra authentication +TOKEN=$(az account get-access-token --query accessToken | tr -d '"') --The following example uses the `packageUri` parameter to specify the URL of an Azure Storage account that the web app should pull the WAR from. The WAR file could also be a JAR or EAR file. 
+curl -X POST \ + -H "Authorization: Bearer $TOKEN" \ + -T @"<file-path>" \ + "https://<app-name>.scm.azurewebsites.net/api/publish?type=<package-type>" -```bash -curl -X POST -u <username> https://<app-name>.scm.azurewebsites.net/api/publish -d '{"packageUri": "https://storagesample.blob.core.windows.net/sample-container/myapp.war?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3"}' +# Basic authentication +curl -X POST \ + -u <username>:<password> \ + -T @"<file-path>" \ + "https://<app-name>.scm.azurewebsites.net/api/publish?type=<package-type>" ``` For more information, see [Kudu publish API reference](#kudu-publish-api-reference) -# [Kudu UI](#tab/kudu-ui) +# [ARM template](#tab/arm) -The Kudu UI does not support deploying JAR, WAR, or EAR applications. Please use one of the other options. +ARM templates only support [deployments from remotely hosted packages](#deploy-to-network-secured-apps). -- Not supported. See Azure CLI or Kudu API. ### Deploy a startup script -The following example uses the cURL tool to deploy a startup file for their application.Replace the placeholders `<username>`, `<startup-file-path>`, and `<app-name>`. When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md). +The following example uses the cURL tool to deploy a startup file for the application. Replace the placeholders `<startup-file-path>` and `<app-name>`. If you choose basic authentication, supply the [deployment credentials](deploy-configure-credentials.md) in `<username>` and `<password>`. ```bash-curl -X POST -u <username> -T @"<startup-file-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=startup +# Microsoft Entra authentication +TOKEN=$(az account get-access-token --query accessToken | tr -d '"') ++curl -X POST \ + -H "Authorization: Bearer $TOKEN" \ + -T @"<startup-file-path>" \ + "https://<app-name>.scm.azurewebsites.net/api/publish?type=startup" ++# Basic authentication +curl -X POST \ + -u <username>:<password> \ + -T @"<startup-file-path>" \ + "https://<app-name>.scm.azurewebsites.net/api/publish?type=startup" ``` ### Deploy a library file -The following example uses the cURL tool to deploy a library file for their application. Replace the placeholders `<username>`, `<lib-file-path>`, and `<app-name>`. When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md). +The following example uses the cURL tool to deploy a library file for the application. Replace the placeholders `<lib-file-path>` and `<app-name>`. If you choose basic authentication, supply the [deployment credentials](deploy-configure-credentials.md) in `<username>` and `<password>`. ```bash-curl -X POST -u <username> -T @"<lib-file-path>" "https://<app-name>.scm.azurewebsites.net/api/publish?type=lib&path=/home/site/deployments/tools/my-lib.jar" +# Microsoft Entra authentication +TOKEN=$(az account get-access-token --query accessToken | tr -d '"') ++curl -X POST \ + -H "Authorization: Bearer $TOKEN" \ + -T @"<lib-file-path>" \ + "https://<app-name>.scm.azurewebsites.net/api/publish?type=lib&path=/home/site/deployments/tools/my-lib.jar" ++# Basic authentication +curl -X POST \ + -u <username>:<password> \ + -T @"<lib-file-path>" \ + "https://<app-name>.scm.azurewebsites.net/api/publish?type=lib&path=/home/site/deployments/tools/my-lib.jar" ``` ### Deploy a static file -The following example uses the cURL tool to deploy a config file for their application. Replace the placeholders `<username>`, `<config-file-path>`, and `<app-name>`. 
When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md). +The following example uses the cURL tool to deploy a config file for the application. Replace the placeholders `<config-file-path>` and `<app-name>`. If you choose basic authentication, supply the [deployment credentials](deploy-configure-credentials.md) in `<username>` and `<password>`. ++```bash
+# Microsoft Entra authentication
+TOKEN=$(az account get-access-token --query accessToken | tr -d '"')
++curl -X POST \
+    -H "Authorization: Bearer $TOKEN" \
+    -T @"<config-file-path>" \
+    "https://<app-name>.scm.azurewebsites.net/api/publish?type=static&path=/home/site/deployments/tools/my-config.json"
++# Basic authentication
+curl -X POST \
+    -u <username>:<password> \
+    -T @"<config-file-path>" \
+    "https://<app-name>.scm.azurewebsites.net/api/publish?type=static&path=/home/site/deployments/tools/my-config.json"
+```
++# [ARM template](#tab/arm)
++ARM templates only support [deployments from remotely hosted packages](#deploy-to-network-secured-apps).
++--
++## Deploy to network-secured apps
++Depending on your web app's networking configuration, direct access to the app from your development environment might be blocked (see [Deploying to Network-secured sites](https://azure.github.io/AppService/2021/01/04/deploying-to-network-secured-sites.html) and [Deploying to Network-secured sites, Part 2](https://azure.github.io/AppService/2021/03/01/deploying-to-network-secured-sites-2.html)). Instead of pushing the package or file to the web app directly, you can publish it to a storage system accessible from the web app and trigger the app to pull the ZIP from the storage location.
++The remote URL can be any publicly accessible location, but it's best to use a blob storage container with a SAS key to protect it.
++# [Azure CLI](#tab/cli)
++Use the `az webapp deploy` command like you would in the other sections, but use `--src-url` instead of `--src-path`. The following example uses the `--src-url` parameter to specify the URL of a ZIP file hosted in an Azure Storage account.
++```azurecli-interactive
+az webapp deploy --resource-group <group-name> --name <app-name> --src-url "https://storagesample.blob.core.windows.net/sample-container/myapp.zip?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3" --type zip
+```
++# [Azure PowerShell](#tab/powershell)
++Not supported. See Azure CLI, Kudu API, or ARM template.
++# [Kudu API](#tab/api)
++Invoke the [Kudu publish API](#kudu-publish-api-reference) like you would in the other sections, but instead of uploading a file, pass in a JSON object with `packageUri` in the request body. The following examples use this method to specify the URL of a ZIP file hosted in an Azure Storage account. Note the type is still specified as a query string. If you choose basic authentication, supply the [deployment credentials](deploy-configure-credentials.md) in `<username>` and `<password>`.
```bash-curl -X POST -u <username> -T @"<config-file-path>" "https://<app-name>.scm.azurewebsites.net/api/publish?type=static&path=/home/site/deployments/tools/my-config.json"
+# Microsoft Entra authentication
+TOKEN=$(az account get-access-token --query accessToken | tr -d '"')
++curl -X POST \
+    -H "Authorization: Bearer $TOKEN" \
+    -H "Content-Type: application/json" \
+    -d '{"packageUri": "https://storagesample.blob.core.windows.net/sample-container/myapp.zip?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3"}' \
+    "https://<app-name>.scm.azurewebsites.net/api/publish?type=zip"
++# Basic authentication
+curl -X POST \
+    -u '<username>:<password>' \
+    -H "Content-Type: application/json" \
+    -d '{"packageUri": "https://storagesample.blob.core.windows.net/sample-container/myapp.zip?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3"}' \
+    "https://<app-name>.scm.azurewebsites.net/api/publish?type=zip"
+```
++# [ARM template](#tab/arm)
++Add the following JSON to your ARM template. Replace the placeholder `<app-name>`.
++```json
+{
+    "type": "Microsoft.Web/sites/extensions",
+    "apiVersion": "2021-03-01",
+    "name": "onedeploy",
+    "dependsOn": [
+        "[resourceId('Microsoft.Web/Sites', '<app-name>')]"
+    ],
+    "properties": {
+        "packageUri": "<zip-package-uri>",
+        "type": "<type>",
+        "path": "<target-path>"
+    }
+}
```
-# [Kudu UI](#tab/kudu-ui)
+Use the following reference to help you configure the properties:
-The Kudu UI does not support deploying individual files. Please use the Azure CLI or Kudu REST API.
+|Property | Description | Required |
+|-|-|-|
+| `packageUri` | The URI of the package or file. For more information, see [Microsoft.Web sites/extensions 'onedeploy'](/azure/templates/microsoft.web/2021-03-01/sites/extensions-onedeploy?pivots=deployment-language-arm-template). | Yes |
+| `type` | See the `type` parameter in [Kudu publish API reference](#kudu-publish-api-reference). | Yes |
+| `path` | See the `target-path` parameter in [Kudu publish API reference](#kudu-publish-api-reference). | No |
-- ## Kudu publish API reference
-The `publish` Kudu API allows you to specify the same parameters from the CLI command as URL query parameters. To authenticate with the Kudu API, you can use basic authentication with your app's [deployment credentials](deploy-configure-credentials.md#userscope).
+The `publish` Kudu API allows you to specify the same parameters from the CLI command as URL query parameters. To authenticate with the Kudu REST API, it's best to use token authentication, but you can also use basic authentication with your app's [deployment credentials](deploy-configure-credentials.md#userscope).
-The table below shows the available query parameters, their allowed values, and descriptions.
+The following table shows the available query parameters, their allowed values, and descriptions.
| Key | Allowed values | Description | Required | Type |
|-|-|-|-|-|-| `type` | `war`\|`jar`\|`ear`\|`lib`\|`startup`\|`static`\|`zip` | The type of the artifact being deployed, this sets the default target path and informs the web app how the deployment should be handled. <br/> - `type=zip`: Deploy a ZIP package by unzipping the content to `/home/site/wwwroot`. `target-path` parameter is optional. <br/> - `type=war`: Deploy a WAR package. By default, the WAR package is deployed to `/home/site/wwwroot/app.war`. The target path can be specified with `target-path`. <br/> - `type=jar`: Deploy a JAR package to `/home/site/wwwroot/app.jar`.
The `target-path` parameter is ignored <br/> - `type=ear`: Deploy an EAR package to `/home/site/wwwroot/app.ear`. The `target-path` parameter is ignored <br/> - `type=lib`: Deploy a JAR library file. By default, the file is deployed to `/home/site/libs`. The target path can be specified with `target-path`. <br/> - `type=static`: Deploy a static file (e.g. a script). By default, the file is deployed to `/home/site/wwwroot`. <br/> - `type=startup`: Deploy a script that App Service automatically uses as the startup script for your app. By default, the script is deployed to `D:\home\site\scripts\<name-of-source>` for Windows and `home/site/wwwroot/startup.sh` for Linux. The target path can be specified with `target-path`. | Yes | String | +| `type` | `war`\|`jar`\|`ear`\|`lib`\|`startup`\|`static`\|`zip` | The type of the artifact being deployed, this sets the default target path and informs the web app how the deployment should be handled. <br/> - `type=zip`: Deploy a ZIP package by unzipping the content to `/home/site/wwwroot`. `target-path` parameter is optional. <br/> - `type=war`: Deploy a WAR package. By default, the WAR package is deployed to `/home/site/wwwroot/app.war`. The target path can be specified with `target-path`. <br/> - `type=jar`: Deploy a JAR package to `/home/site/wwwroot/app.jar`. The `target-path` parameter is ignored <br/> - `type=ear`: Deploy an EAR package to `/home/site/wwwroot/app.ear`. The `target-path` parameter is ignored <br/> - `type=lib`: Deploy a JAR library file. By default, the file is deployed to `/home/site/libs`. The target path can be specified with `target-path`. <br/> - `type=static`: Deploy a static file (such as a script). By default, the file is deployed to `/home/site/wwwroot`. <br/> - `type=startup`: Deploy a script that App Service automatically uses as the startup script for your app. By default, the script is deployed to `D:\home\site\scripts\<name-of-source>` for Windows and `home/site/wwwroot/startup.sh` for Linux. The target path can be specified with `target-path`. | Yes | String | | `restart` | `true`\|`false` | By default, the API restarts the app following the deployment operation (`restart=true`). To deploy multiple artifacts, prevent restarts on the all but the final deployment by setting `restart=false`. | No | Boolean | | `clean` | `true`\|`false` | Specifies whether to clean (delete) the target deployment before deploying the artifact there. | No | Boolean | | `ignorestack` | `true`\|`false` | The publish API uses the `WEBSITE_STACK` environment variable to choose safe defaults depending on your site's language stack. Setting this parameter to `false` disables any language-specific defaults. | No | Boolean |-| `target-path` | `"<absolute-path>"` | The absolute path to deploy the artifact to. For example, `"/home/site/deployments/tools/driver.jar"`, `"/home/site/scripts/helper.sh"`. | No | String | +| `target-path` | An absolute path | The absolute path to deploy the artifact to. For example, `"/home/site/deployments/tools/driver.jar"`, `"/home/site/scripts/helper.sh"`. | No | String | ## Next steps For more advanced deployment scenarios, try [deploying to Azure with Git](deploy ## More resources * [Kudu: Deploying from a zip file](https://github.com/projectkudu/kudu/wiki/Deploying-from-a-zip-file)-* [Azure App Service Deployment Credentials](deploy-ftp.md) * [Environment variables and app settings reference](reference-app-settings.md) |
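The `restart` and `clean` query parameters described in the preceding reference can be combined in the same calls shown earlier in this entry. The following is a minimal sketch only: it assumes the same `<app-name>` placeholder and the Microsoft Entra token pattern used in the examples above, the file names (`driver.jar`, `app.war`) are illustrative, and the `path` query string mirrors the library example earlier in this entry.

```bash
# Acquire a token as in the earlier examples (assumes the Azure CLI is signed in).
TOKEN=$(az account get-access-token --query accessToken | tr -d '"')

# Deploy a library first and suppress the app restart (restart=false).
curl -X POST \
    -H "Authorization: Bearer $TOKEN" \
    -T @"driver.jar" \
    "https://<app-name>.scm.azurewebsites.net/api/publish?type=lib&restart=false&path=/home/site/deployments/tools/driver.jar"

# Deploy the WAR last; clean the target first (clean=true) and let this final call restart the app.
curl -X POST \
    -H "Authorization: Bearer $TOKEN" \
    -T @"app.war" \
    "https://<app-name>.scm.azurewebsites.net/api/publish?type=war&clean=true"
```

Suppressing the restart on all but the final call keeps the app from cycling between related deployments.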
app-service | Quickstart Dotnetcore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore.md | Title: "Quickstart: Deploy an ASP.NET web app"
description: Learn how to run web apps in Azure App Service by deploying your first ASP.NET app.
ms.assetid: b1e6bd58-48d1-4007-9d6c-53fd6db061e3
Previously updated : 05/03/2023
Last updated : 01/26/2024
zone_pivot_groups: app-service-ide
adobe-target: true
adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021
Follow these steps to create your App Service resources and publish your project -- -1. Select the **Next: Deployment >** button at the bottom of the page. +1. Select the **Deployment** tab at the top of the page. -1. In the **Deployment** tab, under **GitHub Actions settings** make sure **Continuous deployment** is *Enable*. +1. Under **GitHub Actions settings**, set **Continuous deployment** to *Enable*. 1. Under **GitHub Actions details**, authenticate with your GitHub account, and select the following options: Follow these steps to create your App Service resources and publish your project -- + > [!NOTE] + > By default, the creation wizard [disables basic authentication](configure-basic-auth-disable.md) and GitHub Actions deployment is created [using a user-assigned identity](deploy-continuous-deployment.md#what-does-the-user-assigned-identity-option-do-for-github-actions). If you get a permissions error during resource creation, your Azure account might not have [enough permissions](deploy-continuous-deployment.md#why-do-i-see-the-error-you-do-not-have-sufficient-permissions-on-this-app-to-assign-role-based-access-to-a-managed-identity-and-configure-federated-credentials). You can [configure GitHub Actions deployment later](deploy-continuous-deployment.md) with an identity generated for you by an Azure administrator, or you can also enable basic authentication instead. + 1. Select the **Review + create** button at the bottom of the page. 1. After validation runs, select the **Create** button at the bottom of the page. |
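If you do choose to re-enable basic authentication, as the note in the preceding entry allows, one possible approach is the same `az resource update` pattern shown later in this digest for disabling it, with `properties.allow` set to `true`. This is a sketch only; `<group-name>` and `<app-name>` are placeholders for the resource group and app created by the wizard.

```azurecli-interactive
# Re-enable basic authentication for FTP and for the SCM (Kudu) site. Placeholders are illustrative.
az resource update --resource-group <group-name> --name ftp --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<app-name> --set properties.allow=true
az resource update --resource-group <group-name> --name scm --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<app-name> --set properties.allow=true
```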
app-service | Quickstart Golang | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-golang.md | - Title: 'Quickstart: Create a Go web app' -description: Deploy your first Go (GoLang) Hello World to Azure App Service in minutes. - Previously updated : 10/13/2022-----# Deploy a Go web app to Azure App Service --> [!IMPORTANT] -> Go on App Service on Linux is _experimental_. -> --In this quickstart, you'll deploy a Go web app to Azure App Service. Azure App Service is a fully managed web hosting service that supports Go 1.19 and higher apps hosted in a Linux server environment. --To complete this quickstart, you need: --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs).-- [Go 1.19](https://go.dev/dl/) or higher installed locally.--## 1 - Sample application --First, create a folder for your project. --Go to the terminal window, change into the folder you created and run `go mod init <ModuleName>`. The ModuleName could just be the folder name at this point. --The `go mod init` command creates a go.mod file to track your code's dependencies. So far, the file includes only the name of your module and the Go version your code supports. But as you add dependencies, the go.mod file will list the versions your code depends on. --Create a file called main.go. We'll be doing most of our coding here. --```go -package main -import ( - "fmt" - "net/http" -) -func main() { - http.HandleFunc("/", HelloServer) - http.ListenAndServe(":8080", nil) -} -func HelloServer(w http.ResponseWriter, r *http.Request) { - fmt.Fprintf(w, "Hello, %s!", r.URL.Path[1:]) -} -``` --This program uses the `net.http` package to handle all requests to the web root with the HelloServer function. The call to `http.ListenAndServe` tells the server to listen on the TCP network address `:8080`. --Using a terminal, go to your project’s directory and run `go run main.go`. Now open a browser window and type the URL `http://localhost:8080/world`. You should see the message `Hello, world!`. --## 2 - Create a web app in Azure --To host your application in Azure, you need to create Azure App Service web app in Azure. You can create a web app using the Azure CLI. --Azure CLI commands can be run on a computer with the [Azure CLI installed](/cli/azure/install-azure-cli). --Azure CLI has a command `az webapp up` that will create the necessary resources and deploy your application in a single step. --If necessary, log in to Azure using [az login](/cli/azure/authenticate-azure-cli). --```azurecli -az login -``` --Create the webapp and other resources, then deploy your code to Azure using [az webapp up](/cli/azure/webapp#az-webapp-up). --```azurecli -az webapp up --runtime GO:1.19 --os linux --sku B1 -``` --* The `--runtime` parameter specifies what version of Go your app is running. This example uses Go 1.18. To list all available runtimes, use the command `az webapp list-runtimes --os linux --output table`. -* The `--sku` parameter defines the size (CPU, memory) and cost of the app service plan. This example uses the B1 (Basic) service plan, which will incur a small cost in your Azure subscription. For a full list of App Service plans, view the [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/) page. -* You can optionally specify a name with the argument `--name <app-name>`. If you don't provide one, then a name will be automatically generated. 
-* You can optionally include the argument `--location <location-name>` where `<location_name>` is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the [`az account list-locations`](/cli/azure/appservice#az-appservice-list-locations) command. --The command may take a few minutes to complete. While the command is running, it provides messages about creating the resource group, the App Service plan, and the app resource, configuring logging, and doing ZIP deployment. It then gives the message, "You can launch the app at http://<app-name>.azurewebsites.net", which is the app's URL on Azure. --<pre> -The webapp '<app-name>' doesn't exist -Creating Resource group '<group-name>' ... -Resource group creation complete -Creating AppServicePlan '<app-service-plan-name>' ... -Creating webapp '<app-name>' ... -Creating zip with contents of dir /home/tulika/myGoApp ... -Getting scm site credentials for zip deployment -Starting zip deployment. This operation can take a while to complete ... -Deployment endpoint responded with status code 202 -You can launch the app at http://<app-name>.azurewebsites.net -{ - "URL": "http://<app-name>.azurewebsites.net", - "appserviceplan": "<app-service-plan-name>", - "location": "centralus", - "name": "<app-name>", - "os": "<os-type>", - "resourcegroup": "<group-name>", - "runtime_version": "go|1.19", - "runtime_version_detected": "0.0", - "sku": "FREE", - "src_path": "<your-folder-location>" -} -</pre> ---## 3 - Browse to the app --Browse to the deployed application in your web browser at the URL `http://<app-name>.azurewebsites.net`. If you see a default app page, wait a minute and refresh the browser. --The Go sample code is running a Linux container in App Service using a built-in image. --**Congratulations!** You've deployed your Go app to App Service. --## 4 - Clean up resources --When no longer needed, you can use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, and all related resources: --```azurecli-interactive -az group delete --resource-group <resource-group-name> -``` -## Next steps --> [!div class="nextstepaction"] -> [Configure an App Service app](./configure-common.md) --> [!div class="nextstepaction"] -> [Tutorial: Deploy from Azure Container Registry](./tutorial-custom-container.md) --> [!div class="nextstepaction"] -> [Secure with custom domain and certificate](tutorial-secure-domain-certificate.md) |
app-service | Quickstart Php | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-php.md | ms.assetid: 6feac128-c728-4491-8b79-962da9a40788 Previously updated : 03/10/2022 Last updated : 01/26/2024 ms.devlang: php zone_pivot_groups: app-service-platform-windows-linux To complete this quickstart, you need: * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/). * [Git](https://git-scm.com/) * [PHP](https://php.net/manual/install.php)-* [Azure CLI](/cli/azure/install-azure-cli) to run commands in any shell to provision and configure Azure resources. +* [Azure CLI](/cli/azure/install-azure-cli) to run commands in any shell to create and configure Azure resources. ## 1 - Get the sample repository To complete this quickstart, you need: You can create the web app using the [Azure CLI](/cli/azure/get-started-with-azure-cli) in Cloud Shell, and you use Git to deploy sample PHP code to the web app. -1. In a terminal window, run the following commands. It will clone the sample application to your local machine, and navigate to the directory containing the sample code. +1. In a terminal window, run the following commands to clone the sample application to your local machine and navigate to the project root. ```bash git clone https://github.com/Azure-Samples/php-docs-hello-world You can create the web app using the [Azure CLI](/cli/azure/get-started-with-azu ### [Azure CLI](#tab/cli) -Azure CLI has a command [`az webapp up`](/cli/azure/webapp#az-webapp-up) that will create the necessary resources and deploy your application in a single step. +Azure CLI has a command [`az webapp up`](/cli/azure/webapp#az-webapp-up) that creates the necessary resources and deploys your application in a single step. In the terminal, deploy the code in your local folder using the [`az webapp up`](/cli/azure/webapp#az-webapp-up) command: ```azurecli-az webapp up --runtime "PHP:8.0" --os-type=linux +az webapp up --runtime "PHP:8.2" --os-type=linux ``` - If the `az` command isn't recognized, be sure you have [Azure CLI](/cli/azure/install-azure-cli) installed.-- The `--runtime "PHP:8.0"` argument creates the web app with PHP version 8.0.+- The `--runtime "PHP:8.2"` argument creates the web app with PHP version 8.2. - The `--os-type=linux` argument creates the web app on App Service on Linux.-- You can optionally specify a name with the argument `--name <app-name>`. If you don't provide one, then a name will be automatically generated.+- You can optionally specify a name with the argument `--name <app-name>`. If you don't provide one, then a name is automatically generated. - You can optionally include the argument `--location <location-name>` where `<location_name>` is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the [`az account list-locations`](/cli/azure/appservice#az_appservice_list_locations) command. - If you see the error, "Could not auto-detect the runtime stack of your app," make sure you're running the command in the code directory (See [Troubleshooting auto-detect issues with az webapp up](https://github.com/Azure/app-service-linux-docs/blob/master/AzWebAppUP/runtime_detection.md)). -The command may take a few minutes to complete. While running, it provides messages about creating the resource group, the App Service plan, and the app resource, configuring logging, and doing ZIP deployment. 
It then gives the message, "You can launch the app at http://<app-name>.azurewebsites.net", which is the app's URL on Azure. +The command can take a few minutes to complete. While it's running, it provides messages about creating the resource group, the App Service plan, and the app resource, configuring logging, and doing ZIP deployment. It then gives the message, "You can launch the app at http://<app-name>.azurewebsites.net", which is the app's URL on Azure. <pre> The webapp '<app-name>' doesn't exist You can launch the app at http://<app-name>.azurewebsites.net "name": "<app-name>", "os": "linux", "resourcegroup": "<group-name>",- "runtime_version": "php|8.0", + "runtime_version": "php|8.2", "runtime_version_detected": "0.0", "sku": "FREE", "src_path": "//home//msangapu//myPhpApp" Browse to the deployed application in your web browser at the URL `http://<app-n ![Screenshot of the Azure portal with 'app services' typed in the search text box. In the results, the App Services option under Services is highlighted.](media/quickstart-php/azure-portal-search-for-app-services.png) -1. On the **App Services** page, select **Create**. +1. In the **App Services** page, select **+ Create**. - ![Screenshot of the App Services page in the Azure portal. The Create button in the action bar is highlighted.](media/quickstart-php/azure-portal-create-app-service.png) +1. In the **Basics** tab: -1. Fill out the **Create Web App** page as follows. - - **Resource Group**: Create a resource group named *myResourceGroup*. - - **Name**: Type a globally unique name for your web app. - - **Publish**: Select *Code*. - - **Runtime stack**: Select *PHP 8.0*. - - **Operating system**: Select *Linux*. - - **Region**: Select an Azure region close to you. - - **App Service Plan**: Create an app service plan named *myAppServicePlan*. --1. To change to the Free tier, next to **Sku and size**, select **Change size**. - -1. In the Spec Picker, select **Dev/Test** tab, select **F1**, and select the **Apply** button at the bottom of the page. -- ![Screenshot of the Spec Picker for the App Service Plan pricing tiers in the Azure portal. Dev/Test, F1, and Apply are highlighted.](media/quickstart-php/azure-portal-create-app-service-select-free-tier.png) --1. Select the **Review + create** button at the bottom of the page. --1. After validation runs, select the **Create** button at the bottom of the page. This will create an Azure resource group, app service plan, and app service. + - Under **Resource group**, select **Create new**. Type *myResourceGroup* for the name. + - Under **Name**, type a globally unique name for your web app. + - Under **Publish**, select *Code*. + - Under **Runtime stack** select *PHP 8.2*. + - Under **Operating System**, select *Linux*. + - Under **Region**, select an Azure region close to you. + - Under **App Service Plan**, create an app service plan named *myAppServicePlan*. + - Under **Pricing plan**, select **Free F1**. + + :::image type="content" source="./media/quickstart-php/app-service-details-php.png" lightbox="./media/quickstart-php/app-service-details-php.png" alt-text="Screenshot of new App Service app configuration for PHP in the Azure portal."::: -1. After the Azure resources are created, select **Go to resource**. - -1. From the left navigation, select **Deployment Center**. +1. Select the **Deployment** tab at the top of the page. - ![Screenshot of the App Service in the Azure portal. 
The Deployment Center option in the Deployment section of the left navigation is highlighted.](media/quickstart-php/azure-portal-configure-app-service-deployment-center.png) +1. Under **GitHub Actions settings**, set **Continuous deployment** to *Enable*. -1. Under **Settings**, select a **Source**. For this quickstart, select *GitHub*. +1. Under **GitHub Actions details**, authenticate with your GitHub account, and select the following options: -1. In the section under **GitHub**, select the following settings: - - Organization: Select your organization. - - Repository: Select *php-docs-hello-world*. - - Branch: Select the default branch for your repository. + - For **Organization** select the organization where you forked the demo project. + - For **Repository** select the *php-docs-hello-world* project. + - For **Branch** select *master*. -1. Select **Save**. + :::image type="content" source="media/quickstart-php/app-service-deploy-php.png" lightbox="media/quickstart-php/app-service-deploy-php.png" border="true" alt-text="Screenshot of the deployment options for a PHP app."::: + + > [!NOTE] + > By default, the creation wizard [disables basic authentication](configure-basic-auth-disable.md) and GitHub Actions deployment is created [using a user-assigned identity](deploy-continuous-deployment.md#what-does-the-user-assigned-identity-option-do-for-github-actions). If you get a permissions error during resource creation, your Azure account might not have [enough permissions](deploy-continuous-deployment.md#why-do-i-see-the-error-you-do-not-have-sufficient-permissions-on-this-app-to-assign-role-based-access-to-a-managed-identity-and-configure-federated-credentials). You can [configure GitHub Actions deployment later](deploy-continuous-deployment.md) with an identity generated for you by an Azure administrator, or you can also enable basic authentication instead. - ![Screenshot of the Deployment Center for the App Service, focusing on the GitHub integration settings. The Save button in the action bar is highlighted.](media/quickstart-php/azure-portal-configure-app-service-github-integration.png) +1. Select the **Review + create** button at the bottom of the page. - > [!TIP] - > This quickstart uses GitHub. Additional continuous deployment sources include Bitbucket, Local Git, Azure Repos, and External Git. FTPS is also a supported deployment method. - -1. Once the GitHub integration is saved, from the left navigation of your app, select **Overview** > **URL**. +1. After validation runs, select the **Create** button at the bottom of the page. - ![Screenshot of the App Service resource with the URL field highlighted.](media/quickstart-php/azure-portal-app-service-url.png) +1. After deployment is completed, select **Go to resource**. + +1. Browse to the deployed application in your web browser at the URL `http://<app-name>.azurewebsites.net`. The PHP sample code is running in an Azure App Service. ![Screenshot of the sample app running in Azure, showing 'Hello World!'.](media/quickstart-php/php-8-hello-world-in-browser.png) -**Congratulations!** You've deployed your first PHP app to App Service using the Azure portal. +**Congratulations!** You deployed your first PHP app to App Service using the Azure portal. ## 3 - Update and redeploy the app The PHP sample code is running in an Azure App Service. 1. 
Save your changes, then redeploy the app using the [az webapp up](/cli/azure/webapp#az-webapp-up) command again with these arguments: ```azurecli- az webapp up --runtime "PHP:8.0" --os-type=linux + az webapp up --runtime "PHP:8.2" --os-type=linux ``` -1. Once deployment has completed, return to the browser window that opened during the **Browse to the app** step, and refresh the page. +1. Once deployment is completed, return to the browser window that opened during the **Browse to the app** step, and refresh the page. ![Screenshot of the updated sample app running in Azure.](media/quickstart-php/hello-azure-in-browser.png) The PHP sample code is running in an Azure App Service. 1. Browse to your GitHub fork of php-docs-hello-world. -1. On your repo page, press `.` to start Visual Studio code within your browser. +1. On your repo page, press `.` to start Visual Studio Code within your browser. -![Screenshot of the forked php-docs-hello-world repo in GitHub with instructions to press the period key on this screen.](media/quickstart-php/forked-github-repo-press-period.png) + ![Screenshot of the forked php-docs-hello-world repo in GitHub with instructions to press the period key on this screen.](media/quickstart-php/forked-github-repo-press-period.png) -> [!NOTE] -> The URL will change from GitHub.com to GitHub.dev. This feature only works with repos that have files. This does not work on empty repos. + > [!NOTE] + > The URL will change from GitHub.com to GitHub.dev. This feature only works with repos that have files. This does not work on empty repos. 1. Edit **index.php** so that it shows "Hello Azure!" instead of "Hello World!" The PHP sample code is running in an Azure App Service. ![Screenshot of Visual Studio Code in the browser, Source Control panel with a commit message of 'Hello Azure' and the Commit and Push button highlighted.](media/quickstart-php/visual-studio-code-in-browser-commit-push.png) -1. Once deployment has completed, return to the browser window that opened during the **Browse to the app** step, and refresh the page. +1. Once deployment is completed, return to the browser window that opened during the **Browse to the app** step, and refresh the page. ![Screenshot of the updated sample app running in Azure, showing 'Hello Azure!'.](media/quickstart-php/php-8-hello-azure-in-browser.png) The PHP sample code is running in an Azure App Service. ![Screenshot of the App Services list in Azure. The name of the demo app service is highlighted.](media/quickstart-php/app-service-list.png) - Your web app's **Overview** page will be displayed. Here, you can perform basic management tasks like **Browse**, **Stop**, **Restart**, and **Delete**. + Your web app's **Overview** page should be displayed. Here, you can perform basic management tasks like **Browse**, **Stop**, **Restart**, and **Delete**. ![Screenshot of the App Service overview page in Azure portal. In the action bar, the Browse, Stop, Swap (disabled), Restart, and Delete button group is highlighted.](media/quickstart-php/app-service-detail.png) The PHP sample code is running in an Azure App Service. ## 5 - Clean up resources -When you're finished with the sample app, you can remove all of the resources for the app from Azure. It will not incur extra charges and keep your Azure subscription uncluttered. Removing the resource group also removes all resources in the resource group and is the fastest way to remove all Azure resources for your app. 
+When you're finished with the sample app, you can remove all of the resources for the app from Azure. It helps you avoid extra charges and keeps your Azure subscription uncluttered. Removing the resource group also removes all resources in the resource group and is the fastest way to remove all Azure resources for your app. ### [Azure CLI](#tab/cli) Delete the resource group by using the [az group delete](/cli/azure/group#az-gro az group delete --name myResourceGroup ``` -This command may take a minute to run. +This command takes a minute to run. ### [Portal](#tab/portal) |
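To make the optional arguments described in the preceding entry explicit, the `az webapp up` call can name the app and region directly, and the configured runtime can be checked afterward. This is a sketch only; `<app-name>`, `<location-name>`, and `<group-name>` are placeholders for values you choose or that the command generates.

```azurecli-interactive
# Deploy with an explicit app name and region (placeholders are illustrative).
az webapp up --runtime "PHP:8.2" --os-type=linux --name <app-name> --location <location-name>

# Confirm the runtime configured for the app; expect "PHP|8.2" for this quickstart.
az webapp config show --resource-group <group-name> --name <app-name> --query linuxFxVersion
```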
app-service | Tutorial Multi Region App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-multi-region-app.md | -When you deploy your application to the cloud, you choose a region in that cloud where your application infrastructure is based. If your application is deployed to a single region, and the region becomes unavailable, your application will also be unavailable. This lack of availability may be unacceptable under the terms of your application's SLA. If so, deploying your application and its services across multiple regions is a good solution. +When you deploy your application to the cloud, you choose a region in that cloud where your application infrastructure is based. If your application is deployed to a single region and the region becomes unavailable, your application will also be unavailable. This lack of availability might be unacceptable under the terms of your application's SLA. If so, deploying your application and its services across multiple regions is a good solution. -In this tutorial, you'll learn how to deploy a highly available multi-region web app. This scenario will be kept simple by restricting the application components to just a web app and [Azure Front Door](../frontdoor/front-door-overview.md), but the concepts can be expanded and applied to other infrastructure patterns. For example, if your application connects to an Azure database offering or storage account, see [active geo-replication for SQL databases](/azure/azure-sql/database/active-geo-replication-overview) and [redundancy options for storage accounts](../storage/common/storage-redundancy.md). For a reference architecture for a more detailed scenario, see [Highly available multi-region web application](/azure/architecture/reference-architectures/app-service-web-app/multi-region). +In this tutorial, you learn how to deploy a highly available multi-region web app. This scenario is kept simple by restricting the application components to just a web app and [Azure Front Door](../frontdoor/front-door-overview.md), but the concepts can be expanded and applied to other infrastructure patterns. For example, if your application connects to an Azure database offering or storage account, see [active geo-replication for SQL databases](/azure/azure-sql/database/active-geo-replication-overview) and [redundancy options for storage accounts](../storage/common/storage-redundancy.md). For a reference architecture for a more detailed scenario, see [Highly available multi-region web application](/azure/architecture/reference-architectures/app-service-web-app/multi-region). -The following architecture diagram shows the infrastructure you'll be creating during this tutorial. It consists of two identical App Services in separate regions, one being the active or primary region, and the other is the standby or secondary region. Azure Front Door is used to route traffic to the App Services and access restrictions are configured so that direct access to the apps from the internet is blocked. The dotted line indicates that traffic will only be sent to the standby region if the active region goes down. +The following architecture diagram shows the infrastructure you create in this tutorial. It consists of two identical App Services in separate regions, one being the active or primary region, and the other is the standby or secondary region. 
Azure Front Door is used to route traffic to the App Services and access restrictions are configured so that direct access to the apps from the internet is blocked. The dotted line indicates that traffic is sent to the standby region only if the active region goes down. Azure provides various options for load balancing and traffic routing. Azure Front Door was selected for this use case because it involves internet facing web apps hosted on Azure App Service deployed in multiple regions. To help you decide what to use for your use case if it differs from this tutorial, see the [decision tree for load balancing in Azure](/azure/architecture/guide/technology-choices/load-balancing-overview). To complete this tutorial: ## Create two instances of a web app -You'll need two instances of a web app that run in different Azure regions for this tutorial. You'll use the [region pair](../availability-zones/cross-region-replication-azure.md#azure-paired-regions) East US/West US as your two regions and create two empty web apps. Feel free to choose you're own regions if needed. +You need two instances of a web app that run in different Azure regions for this tutorial. You use the [region pair](../availability-zones/cross-region-replication-azure.md#azure-paired-regions) East US/West US as your two regions and create two empty web apps. Feel free to choose your own regions if needed. -To make management and clean-up simpler, you'll use a single resource group for all resources in this tutorial. Consider using separate resource groups for each region/resource to further isolate your resources in a disaster recovery situation. +To make management and clean-up simpler, you use a single resource group for all resources in this tutorial. Consider using separate resource groups for each region/resource to further isolate your resources in a disaster recovery situation. Run the following command to create your resource group. az appservice plan create --name <app-service-plan-west-us> --resource-group myr ### Create web apps -Once the App Service plans are created, run the following commands to create the web apps. Replace the placeholders for `<web-app-east-us>` and `<web-app-west-us>` with two globally unique names (valid characters are `a-z`, `0-9`, and `-`) and be sure to pay attention to the `--plan` parameter so that you place one app in each plan (and therefore in each region). Replace the `<runtime>` parameter with the language version of your app. Run `az webapp list-runtimes` for the list of available runtimes. If you plan on using the sample Node.js app given in this tutorial in the following sections, use "NODE:18-lts" as your runtime. +Once the App Service plans are created, run the following commands to create the web apps. Replace the placeholders for `<web-app-east-us>` and `<web-app-west-us>` with two globally unique names (valid characters are `a-z`, `0-9`, and `-`) and be sure to pay attention to the `--plan` parameter so that you place one app in each plan (and therefore in each region). Replace the `<runtime>` parameter with the language version of your app. Run `az webapp list-runtimes` for the list of available runtimes. If you plan on using the sample Node.js app given in this tutorial in the following sections, use `NODE:18-lts` as your runtime. 
```azurecli-interactive az webapp create --name <web-app-east-us> --resource-group myresourcegroup --plan <app-service-plan-east-us> --runtime <runtime> A multi-region deployment can use an active-active or active-passive configurati ### Create an Azure Front Door profile -You'll now create an [Azure Front Door Premium](../frontdoor/front-door-overview.md) to route traffic to your apps. +You now create an [Azure Front Door Premium](../frontdoor/front-door-overview.md) to route traffic to your apps. -Run [az afd profile create](/cli/azure/afd/profile#az-afd-profile-create) to create an Azure Front Door profile. +Run [`az afd profile create`](/cli/azure/afd/profile#az-afd-profile-create) to create an Azure Front Door profile. > [!NOTE]-> If you want to deploy Azure Front Door Standard instead of Premium, substitute the value of the `--sku` parameter with Standard_AzureFrontDoor. You won't be able to deploy managed rules with WAF Policy if you choose the Standard SKU. For a detailed comparison of the SKUs, see [Azure Front Door tier comparison](../frontdoor/standard-premium/tier-comparison.md). +> If you want to deploy Azure Front Door Standard instead of Premium, substitute the value of the `--sku` parameter with Standard_AzureFrontDoor. You can't deploy managed rules with WAF Policy if you choose the Standard tier. For a detailed comparison of the pricing tiers, see [Azure Front Door tier comparison](../frontdoor/standard-premium/tier-comparison.md). ```azurecli-interactive az afd profile create --profile-name myfrontdoorprofile --resource-group myresourcegroup --sku Premium_AzureFrontDoor az afd profile create --profile-name myfrontdoorprofile --resource-group myresou |Parameter |Value |Description | ||||-|profile-name |myfrontdoorprofile |Name for the Azure Front Door profile, which is unique within the resource group. | -|resource-group |myresourcegroup |The resource group that contains the resources from this tutorial. | -|sku |Premium_AzureFrontDoor |The pricing tier of the Azure Front Door profile. | +|`profile-name` |`myfrontdoorprofile` |Name for the Azure Front Door profile, which is unique within the resource group. | +|`resource-group` |`myresourcegroup` |The resource group that contains the resources from this tutorial. | +|`sku` |`Premium_AzureFrontDoor` |The pricing tier of the Azure Front Door profile. | ### Add an endpoint -Run [az afd endpoint create](/cli/azure/afd/endpoint#az-afd-endpoint-create) to create an endpoint in your profile. You can create multiple endpoints in your profile after finishing the create experience. +Run [`az afd endpoint create`](/cli/azure/afd/endpoint#az-afd-endpoint-create) to create an endpoint in your profile. You can create multiple endpoints in your profile after finishing the create experience. ```azurecli-interactive az afd endpoint create --resource-group myresourcegroup --endpoint-name myendpoint --profile-name myfrontdoorprofile --enabled-state Enabled az afd endpoint create --resource-group myresourcegroup --endpoint-name myendpoi |Parameter |Value |Description | ||||-|endpoint-name |myendpoint |Name of the endpoint under the profile, which is unique globally. | -|enabled-state |Enabled |Whether to enable this endpoint. | +|`endpoint-name` |`myendpoint` |Name of the endpoint under the profile, which is unique globally. | +|`enabled-state` |`Enabled` |Whether to enable this endpoint. 
| ### Create an origin group -Run [az afd origin-group create](/cli/azure/afd/origin-group#az-afd-origin-group-create) to create an origin group that contains your two web apps. +Run [`az afd origin-group create`](/cli/azure/afd/origin-group#az-afd-origin-group-create) to create an origin group that contains your two web apps. ```azurecli-interactive az afd origin-group create --resource-group myresourcegroup --origin-group-name myorigingroup --profile-name myfrontdoorprofile --probe-request-type GET --probe-protocol Http --probe-interval-in-seconds 60 --probe-path / --sample-size 4 --successful-samples-required 3 --additional-latency-in-milliseconds 50 az afd origin-group create --resource-group myresourcegroup --origin-group-name |Parameter |Value |Description | ||||-|origin-group-name |myorigingroup |Name of the origin group. | -|probe-request-type |GET |The type of health probe request that is made. | -|probe-protocol |Http |Protocol to use for health probe. | -|probe-interval-in-seconds |60 |The number of seconds between health probes. | -|probe-path |/ |The path relative to the origin that is used to determine the health of the origin. | -|sample-size |4 |The number of samples to consider for load balancing decisions. | -|successful-samples-required |3 |The number of samples within the sample period that must succeed. | -|additional-latency-in-milliseconds |50 |The additional latency in milliseconds for probes to fall into the lowest latency bucket. | +|`origin-group-name` |`myorigingroup` |Name of the origin group. | +|`probe-request-type` |`GET` |The type of health probe request that is made. | +|`probe-protocol` |`Http` |Protocol to use for health probe. | +|`probe-interval-in-seconds` |`60` |The number of seconds between health probes. | +|`probe-path` |`/` |The path relative to the origin that is used to determine the health of the origin. | +|`sample-size` |`4` |The number of samples to consider for load balancing decisions. | +|`successful-samples-required` |`3` |The number of samples within the sample period that must succeed. | +|`additional-latency-in-milliseconds` |`50` |The extra latency in milliseconds for probes to fall into the lowest latency bucket. | ### Add an origin to the group -Run [az afd origin create](/cli/azure/afd/origin#az-afd-origin-create) to add an origin to your origin group. For the `--host-name` parameter, replace the placeholder for `<web-app-east-us>` with your app name in that region. Notice the `--priority` parameter is set to "1", which indicates all traffic will be sent to your primary app. +Run [`az afd origin create`](/cli/azure/afd/origin#az-afd-origin-create) to add an origin to your origin group. For the `--host-name` parameter, replace the placeholder for `<web-app-east-us>` with your app name in that region. Notice the `--priority` parameter is set to `1`, which indicates all traffic is sent to your primary app. ```azurecli-interactive az afd origin create --resource-group myresourcegroup --host-name <web-app-east-us>.azurewebsites.net --profile-name myfrontdoorprofile --origin-group-name myorigingroup --origin-name primaryapp --origin-host-header <web-app-east-us>.azurewebsites.net --priority 1 --weight 1000 --enabled-state Enabled --http-port 80 --https-port 443 az afd origin create --resource-group myresourcegroup --host-name <web-app-east- |Parameter |Value |Description | ||||-|host-name |`<web-app-east-us>.azurewebsites.net` |The hostname of the primary web app. | -|origin-name |primaryapp |Name of the origin. 
| -|origin-host-header |`<web-app-east-us>.azurewebsites.net` |The host header to send for requests to this origin. If you leave this blank, the request hostname determines this value. Azure CDN origins, such as Web Apps, Blob Storage, and Cloud Services require this host header value to match the origin hostname by default. | -|priority |1 |Set this parameter to 1 to direct all traffic to the primary web app. | -|weight |1000 |Weight of the origin in given origin group for load balancing. Must be between 1 and 1000. | -|enabled-state |Enabled |Whether to enable this origin. | -|http-port |80 |The port used for HTTP requests to the origin. | -|https-port |443 |The port used for HTTPS requests to the origin. | +|`host-name` |`<web-app-east-us>.azurewebsites.net` |The hostname of the primary web app. | +|`origin-name` |`primaryapp` |Name of the origin. | +|`origin-host-header` |`<web-app-east-us>.azurewebsites.net` |The host header to send for requests to this origin. If you leave this blank, the request hostname determines this value. Azure CDN origins, such as Web Apps, Blob Storage, and Cloud Services require this host header value to match the origin hostname by default. | +|`priority` |`1` |Set this parameter to 1 to direct all traffic to the primary web app. | +|`weight` |`1000` |Weight of the origin in given origin group for load balancing. Must be between 1 and 1000. | +|`enabled-state` |`Enabled` |Whether to enable this origin. | +|`http-port` |`80` |The port used for HTTP requests to the origin. | +|`https-port` |`443` |The port used for HTTPS requests to the origin. | -Repeat this step to add your second origin. Pay attention to the `--priority` parameter. For this origin, it's set to "2". This priority setting tells Azure Front Door to direct all traffic to the primary origin unless the primary goes down. If you set the priority for this origin to "1", Azure Front Door will treat both origins as active and direct traffic to both regions. Be sure to replace both instances of the placeholder for `<web-app-west-us>` with the name of that web app. +Repeat this step to add your second origin. Pay attention to the `--priority` parameter. For this origin, it's set to `2`. This priority setting tells Azure Front Door to direct all traffic to the primary origin unless the primary goes down. If you set the priority for this origin to `1`, Azure Front Door treats both origins as active and direct traffic to both regions. Be sure to replace both instances of the placeholder for `<web-app-west-us>` with the name of that web app. ```azurecli-interactive az afd origin create --resource-group myresourcegroup --host-name <web-app-west-us>.azurewebsites.net --profile-name myfrontdoorprofile --origin-group-name myorigingroup --origin-name secondaryapp --origin-host-header <web-app-west-us>.azurewebsites.net --priority 2 --weight 1000 --enabled-state Enabled --http-port 80 --https-port 443 az afd origin create --resource-group myresourcegroup --host-name <web-app-west- ### Add a route -Run [az afd route create](/cli/azure/afd/route#az-afd-route-create) to map your endpoint to the origin group. This route forwards requests from the endpoint to your origin group. +Run [`az afd route create`](/cli/azure/afd/route#az-afd-route-create) to map your endpoint to the origin group. This route forwards requests from the endpoint to your origin group. 
```azurecli-interactive
az afd route create --resource-group myresourcegroup --profile-name myfrontdoorprofile --endpoint-name myendpoint --forwarding-protocol MatchRequest --route-name route --https-redirect Enabled --origin-group myorigingroup --supported-protocols Http Https --link-to-default-domain Enabled
az afd route create --resource-group myresourcegroup --profile-name myfrontdoorp |Parameter |Value |Description | ||||-|endpoint-name |myendpoint |Name of the endpoint. | -|forwarding-protocol |MatchRequest |Protocol this rule will use when forwarding traffic to backends. | -|route-name |route |Name of the route. | -|https-redirect |Enabled |Whether to automatically redirect HTTP traffic to HTTPS traffic. | -|supported-protocols |Http Https |List of supported protocols for this route. | -|link-to-default-domain |Enabled |Whether this route will be linked to the default endpoint domain. | +|`endpoint-name` |`myendpoint` |Name of the endpoint. | +|`forwarding-protocol` |`MatchRequest` |Protocol this rule uses when forwarding traffic to backends. | +|`route-name` |`route` |Name of the route. | +|`https-redirect` |`Enabled` |Whether to automatically redirect HTTP traffic to HTTPS traffic. | +|`supported-protocols` |`Http Https` |List of supported protocols for this route. | +|`link-to-default-domain` |`Enabled` |Whether this route is linked to the default endpoint domain. | -Allow about 15 minutes for this step to complete as it takes some time for this change to propagate globally. After this period, your Azure Front Door will be fully functional. +Allow about 15 minutes for this step to complete as it takes some time for this change to propagate globally. After this period, your Azure Front Door is fully functional. ### Restrict access to web apps to the Azure Front Door instance -If you try to access your apps directly using their URLs at this point, you'll still be able to. To ensure traffic can only reach your apps through Azure Front Door, you'll set access restrictions on each of your apps. Front Door's features work best when traffic only flows through Front Door. You should configure your origins to block traffic that hasn't been sent through Front Door. Otherwise, traffic might bypass Front Door's web application firewall, DDoS protection, and other security features. Traffic from Azure Front Door to your applications originates from a well known set of IP ranges defined in the AzureFrontDoor.Backend service tag. By using a service tag restriction rule, you can [restrict traffic to only originate from Azure Front Door](../frontdoor/origin-security.md). +At this point, you can still access your apps directly using their URLs. To ensure traffic can only reach your apps through Azure Front Door, you set access restrictions on each of your apps. Front Door's features work best when traffic only flows through Front Door. You should configure your origins to block traffic that isn't sent through Front Door. Otherwise, traffic might bypass Front Door's web application firewall, DDoS protection, and other security features. Traffic from Azure Front Door to your applications originates from a well-known set of IP ranges defined in the `AzureFrontDoor.Backend` service tag. By using a service tag restriction rule, you can [restrict traffic to only originate from Azure Front Door](../frontdoor/origin-security.md). -Before setting up the App Service access restrictions, take note of the *Front Door ID* by running the following command.
This ID will be needed to ensure traffic only originates from your specific Front Door instance. The access restriction further filters the incoming requests based on the unique HTTP header that your Azure Front Door sends. +Before setting up the App Service access restrictions, take note of the *Front Door ID* by running the following command. This ID is needed to ensure traffic only originates from your specific Front Door instance. The access restriction further filters the incoming requests based on the unique HTTP header that your Azure Front Door sends. ```azurecli-interactive az afd profile show --resource-group myresourcegroup --profile-name myfrontdoorprofile --query "frontDoorId" az webapp config access-restriction add --resource-group myresourcegroup -n <web When you create the Azure Front Door Standard/Premium profile, it takes a few minutes for the configuration to be deployed globally. Once completed, you can access the frontend host you created. -Run [az afd endpoint show](/cli/azure/afd/endpoint#az-afd-endpoint-show) to get the hostname of the Front Door endpoint. +Run [`az afd endpoint show`](/cli/azure/afd/endpoint#az-afd-endpoint-show) to get the hostname of the Front Door endpoint. ```azurecli-interactive az afd endpoint show --resource-group myresourcegroup --profile-name myfrontdoorprofile --endpoint-name myendpoint --query "hostName" ``` -In a browser, go to the endpoint hostname that the previous command returned: `<myendpoint>-<hash>.z01.azurefd.net`. Your request will automatically get routed to the primary app in East US. +In a browser, go to the endpoint hostname that the previous command returned: `<myendpoint>-<hash>.z01.azurefd.net`. Your request should automatically get routed to the primary app in East US. To test instant global failover: To test instant global failover: 1. Refresh your browser. You should see the same information page because traffic is now directed to the running app in West US. > [!TIP]- > You might need to refresh the page a couple times as failover may take a couple seconds. + > You might need to refresh the page a few times for the failover to complete. 1. Now stop the secondary app. In the preceding steps, you created Azure resources in a resource group. If you az group delete --name myresourcegroup ``` -This command may take a few minutes to run. +This command might take a few minutes to run. ## Deploy from ARM/Bicep This section contains frequently asked questions that can help you further secur ### What is the recommended method for managing and deploying application infrastructure and Azure resources? -For this tutorial, you used the Azure CLI to deploy your infrastructure resources. Consider configuring a continuous deployment mechanism to manage your application infrastructure. Since you're deploying resources in different regions, you'll need to independently manage those resources across the regions. To ensure the resources are identical across each region, infrastructure as code (IaC) such as [Azure Resource Manager templates](../azure-resource-manager/management/overview.md) or [Terraform](/azure/developer/terraform/overview) should be used with deployment pipelines such as [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) or [GitHub Actions](https://docs.github.com/actions). This way, if configured appropriately, any change to resources would trigger updates across all regions you're deployed to. 
For more information, see [Continuous deployment to Azure App Service](deploy-continuous-deployment.md). +For this tutorial, you used the Azure CLI to deploy your infrastructure resources. Consider configuring a continuous deployment mechanism to manage your application infrastructure. Since you're deploying resources in different regions, you need to independently manage those resources across the regions. To ensure the resources are identical across each region, infrastructure as code (IaC) such as [Azure Resource Manager templates](../azure-resource-manager/management/overview.md) or [Terraform](/azure/developer/terraform/overview) should be used with deployment pipelines such as [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) or [GitHub Actions](https://docs.github.com/actions). This way, if configured appropriately, any change to resources would trigger updates across all regions you're deployed to. For more information, see [Continuous deployment to Azure App Service](deploy-continuous-deployment.md). ### How can I use staging slots to practice safe deployment to production? -Deploying your application code directly to production apps/slots isn't recommended. This is because you'd want to have a safe place to test your apps and validate changes you've made before pushing to production. Use a combination of staging slots and slot swap to move code from your testing environment to production. +Deploying your application code directly to production apps/slots isn't recommended. This is because you want to have a safe place to test your apps and validate changes you make before pushing to production. Use a combination of staging slots and slot swap to move code from your testing environment to production. -You already created the baseline infrastructure for this scenario. You'll now create deployment slots for each instance of your app and configure continuous deployment to these staging slots with GitHub Actions. As with infrastructure management, configuring continuous deployment for your application source code is also recommended to ensure changes across regions are in sync. If you donΓÇÖt configure continuous deployment, youΓÇÖll need to manually update each app in each region every time there's a code change. +You already created the baseline infrastructure for this scenario. Now, you create deployment slots for each instance of your app and configure continuous deployment to these staging slots with GitHub Actions. As with infrastructure management, configuring continuous deployment for your application source code is also recommended to ensure changes across regions are in sync. If you donΓÇÖt configure continuous deployment, youΓÇÖll need to manually update each app in each region every time there's a code change. For the remaining steps in this tutorial, you should have an app ready to deploy to your App Services. If you need a sample app, you can use the [Node.js Hello World sample app](https://github.com/Azure-Samples/nodejs-docs-hello-world). Fork that repository so you have your own copy. -Be sure to set the App Service stack settings for your apps. Stack settings refer to the language or runtime used for your app. This setting can be configured using the Azure CLI with the `az webapp config set` command or in the portal with the following steps. If you use the Node.js sample, set the stack settings to "Node 18 LTS". +Be sure to set the App Service stack settings for your apps. Stack settings refer to the language or runtime used for your app. 
This setting can be configured using the Azure CLI with the `az webapp config set` command or in the portal with the following steps. If you use the Node.js sample, set the stack settings to **Node 18 LTS**.

1. Go to your app and select **Configuration** in the left-hand table of contents.
1. Select the **General settings** tab.

To configure continuous deployment with GitHub Actions, complete the following s
A default workflow file that uses a publish profile to authenticate to App Service is added to your GitHub repository. You can view this file by going to the `<repo-name>/.github/workflows/` directory.

-### How do I disable basic auth on App Service?
+### How do I disable basic authentication on App Service?

-Consider [disabling basic auth on App Service](https://azure.github.io/AppService/2020/08/10/securing-data-plane-access.html), which limits access to the FTP and SCM endpoints to users that are backed by Microsoft Entra ID. If using a continuous deployment tool to deploy your application source code, disabling basic auth will require [extra steps to configure continuous deployment](deploy-github-actions.md). For example, you won't be able to use a publish profile since that authentication mechanism doesn't use Microsoft Entra backed credentials. Instead, you'll need to use either a [service principal or OpenID Connect](deploy-github-actions.md#generate-deployment-credentials).
+Consider [disabling basic authentication](configure-basic-auth-disable.md), which limits access to the FTP and SCM endpoints to users that are backed by Microsoft Entra ID. If using a continuous deployment tool to deploy your application source code, disabling basic authentication requires [extra steps to configure continuous deployment](deploy-github-actions.md). For example, you can't use a publish profile since it doesn't use Microsoft Entra credentials. Instead, you need to use either a [service principal or OpenID Connect](deploy-github-actions.md#1-generate-deployment-credentials).

-To disable basic auth for your App Service, run the following commands for each app and slot by replacing the placeholders for `<web-app-east-us>` and `<web-app-west-us>` with your app names. The first set of commands disables FTP access for the production sites and staging slots, and the second set of commands disables basic auth access to the WebDeploy port and SCM site for the production sites and staging slots.
+To disable basic authentication for your App Service, run the following commands for each app and slot by replacing the placeholders for `<web-app-east-us>` and `<web-app-west-us>` with your app names. The first set of commands disables FTP access for the production sites and staging slots, and the second set of commands disables basic auth access to the WebDeploy port and SCM site for the production sites and staging slots. 
```azurecli-interactive az resource update --resource-group myresourcegroup --name ftp --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<web-app-east-us> --set properties.allow=false az resource update --resource-group myresourcegroup --name scm --namespace Micro az resource update --resource-group myresourcegroup --name scm --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<web-app-west-us>/slots/stage --set properties.allow=false ``` -For more information on disabling basic auth including how to test and monitor logins, see [Disabling basic auth on App Service](https://azure.github.io/AppService/2020/08/10/securing-data-plane-access.html). +For more information on disabling basic auth including how to test and monitor sign-ins, see [Disable basic authentication in App Service deployments](configure-basic-auth-disable.md). ### How do I deploy my code using continuous deployment if I disabled basic auth? If you disable basic auth for your App Services, continuous deployment requires To configure continuous deployment with GitHub Actions and a service principal, use the following steps. -1. Run the following command to create the [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object). Replace the placeholders with your `<subscription-id>` and app names. The output is a JSON object with the role assignment credentials that provide access to your App Service apps. Copy this JSON object for the next step. It will include your client secret, which will only be visible at this time. It's always a good practice to grant minimum access. The scope in this example is limited to just the apps, not the entire resource group. +1. Run the following command to create the [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object). Replace the placeholders with your `<subscription-id>` and app names. The output is a JSON object with the role assignment credentials that provide access to your App Service apps. Copy this JSON object for the next step. It includes your client secret, which is visible only at this time. It's always a good practice to grant minimum access. The scope in this example is limited to just the apps, not the entire resource group. ```bash az ad sp create-for-rbac --name "myApp" --role contributor --scopes /subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.Web/sites/<web-app-east-us> /subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.Web/sites/<web-app-west-us> --sdk-auth ``` -1. You need to provide your service principal's credentials to the Azure Login action as part of the GitHub Action workflow you'll be using. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option. +1. You need to provide your service principal's credentials to the [Azure/login](https://github.com/Azure/login) action as part of the GitHub Action workflow you're using. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option. 1. Open your GitHub repository and go to **Settings** > **Security** > **Secrets and variables** > **Actions** 1. 
Select **New repository secret** and create a secret for each of the following values. The values can be found in the json output you copied earlier. To configure continuous deployment with GitHub Actions and a service principal, #### Create the GitHub Actions workflow -Now that you have a service principal that can access your App Service apps, edit the default workflows that were created for your apps when you configured continuous deployment. Authentication must be done using your service principal instead of the publish profile. For sample workflows, see the "Service principal" tab in [Deploy to App Service](deploy-github-actions.md#deploy-to-app-service). The following sample workflow can be used for the Node.js sample app that was provided. +Now that you have a service principal that can access your App Service apps, edit the default workflows that were created for your apps when you configured continuous deployment. Authentication must be done using your service principal instead of the publish profile. For sample workflows, see the "Service principal" tab in [Add the workflow file to your GitHub repository](deploy-github-actions.md?tabs=userlevel#3-add-the-workflow-file-to-your-github-repository). The following sample workflow can be used for the Node.js sample app that was provided. -1. Open your app's GitHub repository and go to the `<repo-name>/.github/workflows/` directory. You'll see the autogenerated workflows. -1. For each workflow file, select the "pencil" button in the top right to edit the file. Replace the contents with the following text, which assumes you created the GitHub secrets earlier for your credential. Update the placeholder for `<web-app-name>` under the "env" section, and then commit directly to the main branch. This commit will trigger the GitHub Action to run again and deploy your code, this time using the service principal to authenticate. +1. Open your app's GitHub repository and go to the `<repo-name>/.github/workflows/` directory. You should see the autogenerated workflows. +1. For each workflow file, select the "pencil" button in the top right to edit the file. Replace the contents with the following text, which assumes you created the GitHub secrets earlier for your credential. Update the placeholder for `<web-app-name>` under the "env" section, and then commit directly to the main branch. This commit triggers the GitHub Action to run again and deploy your code, this time using the service principal to authenticate. ```yml Now that you have a service principal that can access your App Service apps, edi ### How does slot traffic routing allow me to test updates that I make to my apps? -Traffic routing with slots allows you to direct a pre-defined portion of your user traffic to each slot. Initially, 100% of traffic is directed to the production site. However, you have the ability, for example, to send 10% of your traffic to your staging slot. If you configure slot traffic routing in this way, when users try to access your app, 10% of them will automatically be routed to the staging slot with no changes to your Front Door instance. To learn more about slot swaps and staging environments in App Service, see [Set up staging environments in Azure App Service](deploy-staging-slots.md). +Traffic routing with slots allows you to direct a predefined portion of your user traffic to each slot. Initially, 100% of traffic is directed to the production site. However, you have the ability, for example, to send 10% of your traffic to your staging slot. 
If you configure slot traffic routing in this way, when users try to access your app, 10% of them are automatically routed to the staging slot with no changes to your Front Door instance. To learn more about slot swaps and staging environments in App Service, see [Set up staging environments in Azure App Service](deploy-staging-slots.md). ### How do I move my code from my staging slot to my production slot? -Once you're done testing and validating in your staging slots, you can perform a [slot swap](deploy-staging-slots.md#swap-two-slots) from your staging slot to your production site. You'll need to do this swap for all instances of your app in each region. During a slot swap, the App Service platform [ensures the target slot doesn't experience downtime](deploy-staging-slots.md#swap-operation-steps). +Once you're done testing and validating in your staging slots, you can perform a [slot swap](deploy-staging-slots.md#swap-two-slots) from your staging slot to your production site. You need to do this swap for all instances of your app in each region. During a slot swap, the App Service platform [ensures the target slot doesn't experience downtime](deploy-staging-slots.md#swap-operation-steps). To perform the swap, run the following command for each app. Replace the placeholder for `<web-app-name>`. az webapp deployment slot swap --resource-group MyResourceGroup -name <web-app-n After a few minutes, you can navigate to your Front Door's endpoint to validate the slot swap succeeded. -At this point, your apps are up and running and any changes you make to your application source code will automatically trigger an update to both of your staging slots. You can then repeat the slot swap process when you're ready to move that code into production. +At this point, your apps are up and running and any changes you make to your application source code automatically trigger an update to both of your staging slots. You can then repeat the slot swap process when you're ready to move that code into production. ### How else can I use Azure Front Door in my multi-region deployments? -If you're concerned about potential disruptions or issues with continuity across regions, as in some customers seeing one version of your app while others see another version, or if you're making significant changes to your apps, you can temporarily remove the site that's undergoing the slot swap from your Front Door's origin group. All traffic will then be directed to the other origin. Navigate to the **Update origin group** pane and **Delete** the origin that is undergoing the change. Once you've made all of your changes and are ready to serve traffic there again, you can return to the same pane and select **+ Add an origin** to readd the origin. +If you're concerned about potential disruptions or issues with continuity across regions, as in some customers seeing one version of your app while others see another version, or if you're making significant changes to your apps, you can temporarily remove the site that's undergoing the slot swap from your Front Door's origin group. All traffic is then directed to the other origin. Navigate to the **Update origin group** pane and **Delete** the origin that is undergoing the change. Once you've made all of your changes and are ready to serve traffic there again, you can return to the same pane and select **+ Add an origin** to readd the origin. 
:::image type="content" source="./media/tutorial-multi-region-app/remove-origin.png" alt-text="Screenshot showing how to remove an Azure Front Door origin."::: If you'd prefer to not delete and then readd origins, you can create extra origin groups for your Front Door instance. You can then associate the route to the origin group pointing to the intended origin. For example, you can create two new origin groups, one for your primary region, and one for your secondary region. When your primary region is undergoing a change, associate the route with your secondary region and vice versa when your secondary region is undergoing a change. When all changes are complete, you can associate the route with your original origin group that contains both regions. This method works because a route can only be associated with one origin group at a time. -To demonstrate working with multiple origins, in the following screenshot, there are three origin groups. "MyOriginGroup" consists of both web apps, and the other two origin groups each consist of the web app in their respective region. In the example, the app in the primary region is undergoing a change. Before that change was started, the route was associated with "MySecondaryRegion" so all traffic would be sent to the app in the secondary region during the change period. You can update the route by selecting "Unassociated", which will bring up the **Associate routes** pane. +To demonstrate working with multiple origins, in the following screenshot, there are three origin groups. "MyOriginGroup" consists of both web apps, and the other two origin groups each consist of the web app in their respective region. In the example, the app in the primary region is undergoing a change. Before that change was started, the route was associated with "MySecondaryRegion" so all traffic would be sent to the app in the secondary region during the change period. You can update the route by selecting **Unassociated**, which brings up the **Associate routes** pane. :::image type="content" source="./media/tutorial-multi-region-app/associate-routes.png" alt-text="Screenshot showing how to associate routes with Azure Front Door."::: ### How do I restrict access to the advanced tools site? -With Azure App service, the SCM/advanced tools site is used to manage your apps and deploy application source code. Consider [locking down the SCM/advanced tools site](app-service-ip-restrictions.md#restrict-access-to-an-scm-site) since this site will most likely not need to be reached through Front Door. For example, you can set up access restrictions that only allow you to conduct your testing and enable continuous deployment from your tool of choice. If you're using deployment slots, for production slots specifically, you can deny almost all access to the SCM site since your testing and validation will be done with your staging slots. +With Azure App service, the SCM/advanced tools site is used to manage your apps and deploy application source code. Consider [locking down the SCM/advanced tools site](app-service-ip-restrictions.md#restrict-access-to-an-scm-site) since this site most likely doesn't need to be reached through Front Door. For example, you can set up access restrictions that only allow you to conduct your testing and enable continuous deployment from your tool of choice. If you're using deployment slots, for production slots specifically, you can deny almost all access to the SCM site since your testing and validation is done with your staging slots. ## Next steps |
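To complement the slot traffic routing guidance in the entry above, here's a minimal Azure CLI sketch for splitting traffic and then returning it all to production. It assumes the `myresourcegroup` resource group and `stage` slot used elsewhere in this tutorial; the app name placeholders match the tutorial's, and the 10% split mirrors the example in the text.

```azurecli
# Send 10% of traffic to the staging slot of each regional app.
az webapp traffic-routing set --resource-group myresourcegroup --name <web-app-east-us> --distribution stage=10
az webapp traffic-routing set --resource-group myresourcegroup --name <web-app-west-us> --distribution stage=10

# Route all traffic back to production once testing is complete.
az webapp traffic-routing clear --resource-group myresourcegroup --name <web-app-east-us>
az webapp traffic-routing clear --resource-group myresourcegroup --name <web-app-west-us>
```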
app-service | Tutorial Secure Ntier App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-secure-ntier-app.md | Now that the back-end SCM site is publicly accessible, you need to lock it down ## 7. Use a service principal for GitHub Actions deployment -Your Deployment Center configuration has created a default workflow file in each of your sample repositories, but it uses a publish profile by default, which uses basic auth. Since you've disabled basic auth, if you check the **Logs** tab in Deployment Center, you'll see that the automatically triggered deployment results in an error. You must modify the workflow file to use the service principal to authenticate with App Service. For sample workflows, see [Deploy to App Service](deploy-github-actions.md?tabs=userlevel#deploy-to-app-service). +Your Deployment Center configuration has created a default workflow file in each of your sample repositories, but it uses a publish profile by default, which uses basic auth. Since you've disabled basic auth, if you check the **Logs** tab in Deployment Center, you'll see that the automatically triggered deployment results in an error. You must modify the workflow file to use the service principal to authenticate with App Service. For sample workflows, see [Add the workflow file to your GitHub repository](deploy-github-actions.md?tabs=userlevel#3-add-the-workflow-file-to-your-github-repository). 1. Open one of your forked GitHub repositories and go to the `<repo-name>/.github/workflows/` directory. This command may take a few minutes to run. #### Is there an alternative to deployment using a service principal? -Since in this tutorial you've [disabled basic auth](#5-lock-down-ftp-and-scm-access), you can't authenticate with the back end SCM site with a username and password, and neither can you with a publish profile. Instead of a service principal, you can also use [OpenID Connect](deploy-github-actions.md?tabs=openid#deploy-to-app-service). +Since in this tutorial you've [disabled basic auth](#5-lock-down-ftp-and-scm-access), you can't authenticate with the back end SCM site with a username and password, and neither can you with a publish profile. Instead of a service principal, you can also use [OpenID Connect](deploy-github-actions.md?tabs=openid). #### What happens when I configure GitHub Actions deployment in App Service? |
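For the OpenID Connect alternative mentioned in the entry above, here's a hedged Azure CLI sketch of adding a federated credential to an existing app registration. The credential name, app registration ID, and GitHub organization/repository values are placeholders, and the subject must match the branch or environment your workflow runs from.

```azurecli
# Add a GitHub OIDC federated credential to the app registration used by the workflow.
az ad app federated-credential create --id <app-registration-id> --parameters '{
  "name": "github-oidc-main",
  "issuer": "https://token.actions.githubusercontent.com",
  "subject": "repo:<github-org>/<repo-name>:ref:refs/heads/main",
  "audiences": ["api://AzureADTokenExchange"]
}'
```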
application-gateway | Migrate V1 V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/migrate-v1-v2.md | This article primarily helps with the configuration migration. Client traffic mi [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
> [!IMPORTANT]
->FRun the `Set-AzContext -Subscription <V1 application gateway SubscriptionId>` cmdlet every time before running the migration script. This is necessary to set the active Azure context to the correct subscription, because the migration script might clean up the existing resource group if it doesn't exist in current subscription context.This is not a mandatory step for version 1.0.11 & above of the migration script.
+>Run the `Set-AzContext -Subscription <V1 application gateway SubscriptionId>` cmdlet every time before running the migration script. This is necessary to set the active Azure context to the correct subscription, because the migration script might clean up the existing resource group if it doesn't exist in the current subscription context. This is not a mandatory step for version 1.0.11 and later of the migration script.

> [!IMPORTANT]
>A new stable version of the migration script, version 1.0.11, is now available. It contains important bug fixes and updates. Use this version to avoid potential issues. |
automation | Manage Sql Server In Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-sql-server-in-automation.md | To allow access from the Automation system managed identity to the Azure SQL dat 1. In the **SQL server** page, under **Settings**, select **SQL Databases**. 1. Select your database to go to the SQL database page and select **Query editor (preview)** and execute the following two queries: - CREATE USER "AutomationAccount" FROM EXTERNAL PROVIDER WITH OBJECT_ID= `ObjectID`- - EXEC sp_addrolemember `dbowner`, "AutomationAccount" + - EXEC sp_addrolemember `db_owner`, "AutomationAccount" - Automation account - replace with your Automation account's name - Object ID - replace with object (principal) ID for your system managed identity principal from step 1. |
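As a small companion to the entry above, this Azure CLI sketch shows one way to look up the object (principal) ID of the Automation account's system-assigned managed identity referenced in the queries; the resource group and account names are placeholders.

```azurecli
# Return the principal (object) ID of the Automation account's system-assigned identity.
az resource show \
  --resource-group <resource-group> \
  --name <automation-account-name> \
  --resource-type "Microsoft.Automation/automationAccounts" \
  --query identity.principalId --output tsv
```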
azure-functions | Dotnet Isolated Process Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md | The following snippet shows this configuration in the context of a project file: ```xml <ItemGroup> <FrameworkReference Include="Microsoft.AspNetCore.App" />- <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.20.1" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.21.0" /> <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.16.4" /> </ItemGroup> ``` |
azure-functions | Durable Functions Isolated Create First Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-isolated-create-first-csharp.md | Add the following to your app project: ```xml <ItemGroup>- <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.20.1" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.21.0" /> <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.DurableTask" Version="1.1.1" /> <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Http" Version="3.1.0" /> <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.16.4" OutputItemType="Analyzer" /> Add the following to your app project: ```xml <ItemGroup>- <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.20.1" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.21.0" /> <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.DurableTask" Version="1.1.1" /> <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Http" Version="3.1.0" /> <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.16.4" OutputItemType="Analyzer" /> |
azure-functions | Functions Deployment Technologies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-technologies.md | Each plan has different behaviors. Not all deployment technologies are available | [Web Deploy](#web-deploy-msdeploy) |✔|✔|✔| | | |
| [Source control](#source-control) |✔|✔|✔| |✔|✔|
| [Local Git](#local-git)<sup>1</sup> |✔|✔|✔| |✔|✔|-| [Cloud sync](#cloud-sync)<sup>1</sup> |✔|✔|✔| |✔|✔|
| [FTPS](#ftps)<sup>1</sup> |✔|✔|✔| |✔|✔|
| [In-portal editing](#portal-editing)<sup>2</sup> |✔|✔|✔|✔|✔<sup>3</sup>|✔<sup>3</sup>|

You can use local Git to push code from your local machine to Azure Functions by
>__Where app content is stored:__ App content is stored on the file system, which may be backed by Azure Files from the storage account specified when the function app was created.

-### Cloud sync
--Use cloud sync to sync your content from Dropbox and OneDrive to Azure Functions.
-->__How to use it:__ Follow the instructions in [Sync content from a cloud folder](../app-service/deploy-content-sync.md).
-->__When to use it:__ To reduce the chance of errors, you should avoid using deployment methods that require the additional step of [manually syncing triggers](#trigger-syncing). Use [zip deployment](run-functions-from-deployment-package.md) when possible.
-->__Where app content is stored:__ The app content is in the cloud store, but a local copy is stored on the app file system, which may be backed by Azure Files from the storage account specified when the function app was created.
-
 ### FTP/S

You can use FTP/S to directly transfer files to Azure Functions, although this deployment method isn't recommended. When you're not planning on using FTP, you should disable it. If you do choose to use FTP, you should enforce FTPS. To learn how in the Azure portal, see [Enforce FTPS](../app-service/deploy-ftp.md#enforce-ftps). |
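For the FTPS guidance in the entry above, the same setting can also be applied with the Azure CLI rather than the portal; the app and resource group names are placeholders.

```azurecli
# Require FTPS for deployments.
az functionapp config set --name <function-app-name> --resource-group <resource-group> --ftps-state FtpsOnly

# Or disable FTP entirely if you don't plan to use it.
az functionapp config set --name <function-app-name> --resource-group <resource-group> --ftps-state Disabled
```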
azure-functions | Functions Reference Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md | The main project folder, *<project_root>*, can contain the following files: When you deploy your project to a function app in Azure, the entire contents of the main project folder, *<project_root>*, should be included in the package, but not the folder itself, which means that *host.json* should be in the package root. We recommend that you maintain your tests in a folder along with other functions (in this example, *tests/*). For more information, see [Unit testing](#unit-testing).
+## Connect to a database
++[Azure Cosmos DB](../cosmos-db/introduction.md) is a fully managed NoSQL and relational database for modern app development including AI, digital commerce, Internet of Things, booking management, and other types of solutions. It offers single-digit millisecond response times, automatic and instant scalability, and guaranteed speed at any scale. Its various APIs can accommodate all your operational data models, including relational, document, vector, key-value, graph, and table.
++To connect to Cosmos DB, first [create an account, database, and container](../cosmos-db/nosql/quickstart-portal.md). Then you may connect Functions to Cosmos DB using [triggers and bindings](functions-bindings-cosmosdb-v2.md), like this [example](functions-add-output-binding-cosmos-db-vs-code.md). You may also use the Python library for Cosmos DB, like so:
++```python
+# Install the client library first by running "pip install azure-cosmos" in a terminal.
++from azure.cosmos import CosmosClient, exceptions
+from azure.cosmos.partition_key import PartitionKey
++# Replace these values with your Cosmos DB connection information
+endpoint = "https://azure-cosmos-nosql.documents.azure.com:443/"
+key = "master_key"
+database_id = "cosmicwerx"
+container_id = "cosmicontainer"
+partition_key = "/partition_key"
++# Set the total throughput (RU/s) for the database and container
+database_throughput = 1000
++# Initialize the Cosmos client
+client = CosmosClient(endpoint, key)
++# Create or get a reference to a database
+try:
+    database = client.create_database_if_not_exists(id=database_id, offer_throughput=database_throughput)
+    print(f'Database "{database_id}" created or retrieved successfully.')
++except exceptions.CosmosResourceExistsError:
+    database = client.get_database_client(database_id)
+    print('Database with id \'{0}\' was found'.format(database_id))
++# Create or get a reference to a container
+try:
+    container = database.create_container(id=container_id, partition_key=PartitionKey(path=partition_key))
+    print('Container with id \'{0}\' created'.format(container_id))
++except exceptions.CosmosResourceExistsError:
+    container = database.get_container_client(container_id)
+    print('Container with id \'{0}\' was found'.format(container_id))
++# Sample document data
+sample_document = {
+    "id": "1",
+    "name": "Doe Smith",
+    "city": "New York",
+    "partition_key": "NY"
+}
++# Insert a document
+container.create_item(body=sample_document)
++# Query for documents
+query = "SELECT * FROM c WHERE c.id = '1'"
+items = list(container.query_items(query, enable_cross_partition_query=True))
+```
+
 ::: zone pivot="python-mode-decorators"

## Blueprints |
azure-functions | Migrate Cosmos Db Version 3 Version 4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-cosmos-db-version-3-version-4.md | Update your `.csproj` project file to use the latest extension version for your <OutputType>Exe</OutputType> </PropertyGroup> <ItemGroup>- <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.20.1" /> - <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.CosmosDB" Version="4.5.1" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.21.0" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.CosmosDB" Version="4.6.0" /> <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.16.4" /> </ItemGroup> <ItemGroup> |
azure-functions | Migrate Service Bus Version 4 Version 5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-service-bus-version-4-version-5.md | Update your `.csproj` project file to use the latest extension version for your <OutputType>Exe</OutputType> </PropertyGroup> <ItemGroup>- <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.20.1" /> - <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.ServiceBus" Version="5.15.0" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.21.0" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.ServiceBus" Version="5.16.0" /> <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.16.4" /> </ItemGroup> <ItemGroup> |
azure-monitor | Troubleshooter Ama Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/troubleshooter-ama-linux.md | Check for the existence of the AMA Agent Troubleshooter directory on the machine ***/var/lib/waagent/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-{version}*** -To verify the Azure Monitor Agent Troubleshooter is presence, copy the following command and run in Bash as root: +To verify the Azure Monitor Agent Troubleshooter is present, copy the following command and run in Bash as root: ```Bash ls -ltr /var/lib/waagent | grep "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-*" |
azure-monitor | Alerts Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md | For stateful alerts, while the alert itself is deleted after 30 days, the alert Stateful log alerts have these limitations: - they can trigger up to 300 alerts per evaluation.-- you can have a maximum of 5000 alerts with the `fired` alert condition.+- you can have a maximum of 6000 alerts with the `fired` alert condition. This table describes when a stateful alert is considered resolved: |
azure-monitor | Alerts Troubleshoot Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-log.md | When you author an alert rule, Log Analytics creates a permission snapshot for y ### The alert rule uses a system-assigned managed identity
-When you create a log alert rule with system-assigned managed identity, the identity is created without any permissions. After you create the rule, you need to assign the appropriate roles to the rule's identity so that it can access the data you want to query. For example, you might need to give it a Reader role for the relevant Log Analytics workspaces, or a Reader role and a Database Viewer role for the relevant ADX cluster. See [managed identities](alerts-create-new-alert-rule.md#managed-id) for more information about using managed identities in log alerts.
+When you create a log alert rule with system-assigned managed identity, the identity is created without any permissions. After you create the rule, you need to assign the appropriate roles to the rule's identity so that it can access the data you want to query. For example, you might need to give it a Reader role for the relevant Log Analytics workspaces, or a Reader role and a Database Viewer role for the relevant ADX cluster. See [managed identities](https://learn.microsoft.com/azure/azure-monitor/alerts/alerts-create-log-alert-rule#configure-the-alert-rule-details) for more information about using managed identities in log alerts.

 ### Metric measurement alert rule with splitting using the legacy Log Analytics API |
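For the role assignment step described in the entry above, a minimal Azure CLI sketch follows; the principal ID is the alert rule's system-assigned identity, and the subscription, resource group, and workspace names are placeholders.

```azurecli
# Grant the alert rule's identity read access to the Log Analytics workspace it queries.
az role assignment create \
  --assignee <principal-id> \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```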
azure-monitor | Itsm Convert Servicenow To Webhook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsm-convert-servicenow-to-webhook.md | Title: Convert ITSM actions that send events to ServiceNow to secure webhook actions description: Learn how to convert ITSM actions that send events to ServiceNow to secure webhook actions. Previously updated : 09/20/2022 Last updated : 01/30/2024 |
azure-monitor | Itsmc Definition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-definition.md | Title: IT Service Management Connector in Log Analytics description: This article provides an overview of IT Service Management Connector (ITSMC) and information about using it to monitor and manage ITSM work items in Log Analytics and resolve problems quickly. Previously updated : 10/03/2022 Last updated : 01/30/2022 |
azure-monitor | Container Insights Metric Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md | The methods currently available for creating Prometheus alert rules are Azure Re 1. Download the template that includes the set of alert rules you want to enable. For a list of the rules for each, see [Alert rule details](#alert-rule-details). - - [Community alerts](https://aka.ms/azureprometheus-communityalerts) - - [Recommended alerts](https://aka.ms/azureprometheus-recommendedalerts) + [Recommended metric alerts](https://aka.ms/azureprometheus-recommendedmetricalerts) 2. Deploy the template by using any standard methods for installing ARM templates. For guidance, see [ARM template samples for Azure Monitor](../resource-manager-samples.md#deploy-the-sample-templates). ### [Bicep template](#tab/bicep) -1. To deploy community and recommended alerts, follow this [template](https://aka.ms/azureprometheus-alerts-bicep) and follow the README.md file in the same folder for how to deploy. +1. To deploy recommended metric alerts, follow this [template](https://aka.ms/azureprometheus-recommendedmetricalertsbicep) and follow the README.md file in the same folder for how to deploy. The configuration change can take a few minutes to finish before it takes effect ### Prerequisites You might need to enable collection of custom metrics for your cluster. See [Metrics collected by Container insights](container-insights-custom-metrics.md).- + ### Enable and configure metric alert rules #### [Azure portal](#tab/azure-portal) The following sections present information on the alert rules provided by Contai ### Community alert rules -These handpicked alerts come from the Prometheus community. Source code for these mixin alerts can be found in [GitHub](https://aka.ms/azureprometheus-communityalerts): +These handpicked alerts come from the Prometheus community. Source code for these mixin alerts can be found in [GitHub](https://aka.ms/azureprometheus-recommendedmetricalerts): | Alert name | Description | Default threshold | |:|:|:| These handpicked alerts come from the Prometheus community. Source code for thes ### Recommended alert rules The following table lists the recommended alert rules that you can enable for either Prometheus metrics or custom metrics.-Source code for the recommended alerts can be found in [GitHub](https://github.com/Azure/prometheus-collector/blob/68ab5b195a77d72b0b8e36e5565b645c3d1e2d5d/mixins/kubernetes/rules/recording_and_alerting_rules/templates/ci_recommended_alerts.json): +Source code for the recommended alerts can be found in [GitHub](https://aka.ms/azureprometheus-recommendedmetricalerts): | Prometheus alert name | Custom metric alert name | Description | Default threshold | |:|:|:|:| |
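To go with the ARM template option in the entry above, here's a hedged sketch of deploying the downloaded alert-rule template with a standard group deployment; the file names are placeholders, and the parameters the template expects (such as the cluster and action group resource IDs) depend on the template you downloaded.

```azurecli
# Deploy the downloaded recommended metric alerts template to the cluster's resource group.
az deployment group create \
  --resource-group <resource-group> \
  --template-file recommended-metric-alerts.json \
  --parameters @recommended-metric-alerts.parameters.json
```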
azure-monitor | Prometheus Metrics Scrape Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration.md | The following table has a list of all the default targets that the Azure Monitor If you want to turn on the scraping of the default targets that aren't enabled by default, edit the [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) `ama-metrics-settings-configmap` to update the targets listed under `default-scrape-settings-enabled` to `true`. Apply the configmap to your cluster.

+### Enable pod annotation-based scraping
+To scrape application pods without needing to create a custom Prometheus config, annotations can be added to the pods. The annotation `prometheus.io/scrape: "true"` is required for the pod to be scraped. The annotations `prometheus.io/path` and `prometheus.io/port` indicate the path and port that the metrics are hosted at on the pod. The annotations for a pod that is hosting metrics at `<pod IP>:8080/metrics` would be:
++```yaml
+metadata:
+  annotations:
+    prometheus.io/scrape: 'true'
+    prometheus.io/path: '/metrics'
+    prometheus.io/port: '8080'
+```
++Scraping these pods with specific annotations is disabled by default. To enable, in the `ama-metrics-settings-configmap`, add the regex for the namespace(s) of the pods with annotations you wish to scrape as the value of the field `podannotationnamespaceregex`.
++For example, the following setting scrapes pods with annotations only in the namespaces `kube-system` and `my-namespace`:
++```yaml
+pod-annotation-based-scraping: |-
+    podannotationnamespaceregex = "kube-system|my-namespace"
+```
++To enable scraping for pods with annotations in all namespaces, use:
++```yaml
+pod-annotation-based-scraping: |-
+    podannotationnamespaceregex = ".*"
+```
++
 ### Customize metrics collected by default targets By default, for all the default targets, only minimal metrics used in the default recording rules, alerts, and Grafana dashboards are ingested as described in [minimal-ingestion-profile](prometheus-metrics-scrape-configuration-minimal.md). To collect all metrics from default targets, update the keep-lists in the settings configmap under `default-targets-metrics-keep-list`, and set `minimalingestionprofile` to `false`.

metric_relabel_configs:
  regex: '.+'
```

-### Pod annotation-based scraping
--The following scrape config uses the `__meta_*` labels added from the `kubernetes_sd_configs` for the `pod` role to filter for pods with certain annotations. 
--To scrape certain pods, specify the port, path, and scheme through annotations for the pod and the following job scrapes only the address specified by the annotation: --- `prometheus.io/scrape`: Enable scraping for this pod.-- `prometheus.io/scheme`: If the metrics endpoint is secured, you need to set scheme to `https` and most likely set the TLS config.-- `prometheus.io/path`: If the metrics path isn't /metrics, define it with this annotation.-- `prometheus.io/port`: Specify a single port that you want to scrape.--```yaml -scrape_configs: - - job_name: 'kubernetespods-sample' -- kubernetes_sd_configs: - - role: pod -- relabel_configs: - # Scrape only pods with the annotation: prometheus.io/scrape = true - - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] - action: keep - regex: true -- # If prometheus.io/path is specified, scrape this path instead of /metrics - - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] - action: replace - target_label: __metrics_path__ - regex: (.+) -- # If prometheus.io/port is specified, scrape this port instead of the default - - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] - action: replace - regex: ([^:]+)(?::\d+)?;(\d+) - replacement: $1:$2 - target_label: __address__ -- # If prometheus.io/scheme is specified, scrape with this scheme instead of http - - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme] - action: replace - regex: (http|https) - target_label: __scheme__ -- # Include the pod namespace as a label for each metric - - source_labels: [__meta_kubernetes_namespace] - action: replace - target_label: kubernetes_namespace -- # Include the pod name as a label for each metric - - source_labels: [__meta_kubernetes_pod_name] - action: replace - target_label: kubernetes_pod_name -- # [Optional] Include all pod labels as labels for each metric - - action: labelmap - regex: __meta_kubernetes_pod_label_(.+) -``` --See the [Apply config file](prometheus-metrics-scrape-validate.md#deploy-config-file-as-configmap) section to create a configmap from the Prometheus config. - ## Next steps [Setup Alerts on Prometheus metrics](./container-insights-metric-alerts.md)<br> |
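As a companion to the configmap guidance in the entry above, here's a minimal sketch for applying the edited settings. It assumes you saved the downloaded configmap as `ama-metrics-settings-configmap.yaml` and that the file's metadata targets the `kube-system` namespace used by the metrics add-on.

```bash
# Apply the edited settings configmap; the namespace comes from the file's metadata.
kubectl apply -f ama-metrics-settings-configmap.yaml

# Confirm the settings were picked up.
kubectl get configmap ama-metrics-settings-configmap -n kube-system -o yaml
```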
azure-monitor | Data Collection Endpoint Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-endpoint-overview.md | This table describes the components of a data collection endpoint, related regio :::image type="content" source="media/data-collection-endpoint-overview/data-collection-endpoint-regionality-multiple-workspaces.png" alt-text="A diagram that shows monitored resources in multiple regions sending data to multiple Log Analytics workspaces in different regions using data collection endpoints." lightbox="media/data-collection-endpoint-overview/data-collection-endpoint-regionality-multiple-workspaces.png":::

+> [!NOTE]
+> By default, the Microsoft.Insights resource provider isn't registered in a subscription. Be sure to register it before you try to create a data collection endpoint.
+
 ## Create a data collection endpoint

# [Azure portal](#tab/portal)

-1. On the **Azure Monitor** menu in the Azure portal, select **Data Collection Endpoints** under the **Settings** section. Select **Create** to create a new DCR and assignment.
+1. On the **Azure Monitor** menu in the Azure portal, select **Data Collection Endpoints** under the **Settings** section. Select **Create** to create a new Data Collection Endpoint.
 <!-- convertborder later -->
 :::image type="content" source="media/data-collection-endpoint-overview/data-collection-endpoint-overview.png" lightbox="media/data-collection-endpoint-overview/data-collection-endpoint-overview.png" alt-text="Screenshot that shows data collection endpoints." border="false"::: |
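For the resource provider note added in the entry above, the registration can be checked and performed with the Azure CLI:

```azurecli
# Check whether the Microsoft.Insights resource provider is registered in the subscription.
az provider show --namespace Microsoft.Insights --query registrationState --output tsv

# Register it if the state isn't "Registered".
az provider register --namespace Microsoft.Insights
```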
azure-monitor | Prometheus Api Promql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-api-promql.md | The following limitations are in addition to those detailed in the Prometheus sp For more information on Prometheus metrics limits, see [Prometheus metrics](../../azure-monitor/service-limits.md#prometheus-metrics) ## Frequently asked questions |
azure-monitor | Search Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md | Search jobs are intended to scan large volumes of data in a specific table. Ther - [project-keep](/azure/data-explorer/kusto/query/project-keep-operator) - [project-rename](/azure/data-explorer/kusto/query/projectrenameoperator) - [project-reorder](/azure/data-explorer/kusto/query/projectreorderoperator)-- [parse](/azure/data-explorer/kusto/query/whereoperator)-- [parse-where](/azure/data-explorer/kusto/query/whereoperator)+- [parse](/azure/data-explorer/kusto/query/parse-operator) +- [parse-where](/azure/data-explorer/kusto/query/parse-where-operator) You can use all functions and binary operators within these operators. |
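As an illustration of the operator limits in the entry above, here's a hedged Azure CLI sketch of creating a search job whose query sticks to supported operators; the workspace, table, query, and time range values are placeholders, and the destination table name must end with `_SRCH`.

```azurecli
# Create a search job that scans the Syslog table using only where/project.
az monitor log-analytics workspace table search-job create \
  --resource-group <resource-group> \
  --workspace-name <workspace-name> \
  --name Syslog_SRCH \
  --search-query "Syslog | where SyslogMessage has 'error' | project TimeGenerated, Computer, SyslogMessage" \
  --start-search-time "2024-01-01T00:00:00Z" \
  --end-search-time "2024-01-31T00:00:00Z"
```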
azure-resource-manager | Bicep Extensibility Kubernetes Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-extensibility-kubernetes-provider.md | Last updated 04/18/2023 The Kubernetes provider allows you to create Kubernetes resources directly with Bicep. Bicep can deploy anything that can be deployed with the [Kubernetes command-line client (kubectl)](https://kubernetes.io/docs/reference/kubectl/kubectl/) and a [Kubernetes manifest file](../../aks/concepts-clusters-workloads.md#deployments-and-yaml-manifests).

+> [!NOTE]
+> The Kubernetes provider isn't currently supported for private clusters:
+>
+> ```bicep
+> resource AKS 'Microsoft.ContainerService/managedClusters@2023-01-02-preview' = {
+>   properties: {
+>     apiServerAccessProfile: {
+>       enablePrivateCluster: true
+>     }
+>   }
+> }
+> ```
+>
+
 ## Enable the preview feature

 This preview feature can be enabled by configuring the [bicepconfig.json](./bicep-config.md): |
backup | Backup Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-overview.md | Title: What is Azure Backup? description: Provides an overview of the Azure Backup service, and how it contributes to your business continuity and disaster recovery (BCDR) strategy. Previously updated : 01/05/2024 Last updated : 01/30/2024 Azure Backup delivers these key benefits: - [Locally redundant storage (LRS)](../storage/common/storage-redundancy.md#locally-redundant-storage) replicates your data three times (it creates three copies of your data) in a storage scale unit in a datacenter. All copies of the data exist within the same region. LRS is a low-cost option for protecting your data from local hardware failures. - [Geo-redundant storage (GRS)](../storage/common/storage-redundancy.md#geo-redundant-storage) is the default and recommended replication option. GRS replicates your data to a secondary region (hundreds of miles away from the primary location of the source data). GRS costs more than LRS, but GRS provides a higher level of durability for your data, even if there's a regional outage. - [Zone-redundant storage (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage) replicates your data in [availability zones](../availability-zones/az-overview.md#availability-zones), guaranteeing data residency and resiliency in the same region. ZRS has no downtime. So your critical workloads that require [data residency](https://azure.microsoft.com/resources/achieving-compliant-data-residency-and-security-with-azure/), and must have no downtime, can be backed up in ZRS.-- **Zone-redundancy** for Recovery Services vault and Backup Vault, as well as optional zone-redundancy for backup data. For more information on availability zone support and disaster recovery options see, [Reliability for Azure Backup](../reliability/reliability-backup.md).++ **Zone-redundancy** for Recovery Services vault and Backup vault, as well as optional zone-redundancy for backup data. Learn about [Reliability for Azure Backup](../reliability/reliability-backup.md). ## How Azure Backup protects from ransomware? |
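To accompany the storage redundancy options in the entry above, here's a minimal Azure CLI sketch for setting a Recovery Services vault's backup storage redundancy before any items are protected; the vault and resource group names are placeholders, and `ZoneRedundant` is only an example value (GeoRedundant is the default).

```azurecli
# Set the vault's backup storage redundancy (GeoRedundant, LocallyRedundant, or ZoneRedundant).
az backup vault backup-properties set \
  --name <vault-name> \
  --resource-group <resource-group> \
  --backup-storage-redundancy ZoneRedundant
```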
backup | Sap Hana Backup Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-backup-support-matrix.md | Title: SAP HANA Backup support matrix description: In this article, learn about the supported scenarios and limitations when you use Azure Backup to back up SAP HANA databases on Azure VMs. Previously updated : 12/06/2023 Last updated : 01/30/2024 Azure Backup supports the backup of SAP HANA databases to Azure. This article su | -- | | |
| **Topology** | SAP HANA running in Azure Linux VMs only | HANA Large Instances (HLI) |
| **Regions** | **Americas** – Central US, East US 2, East US, North Central US, South Central US, West US 2, West US 3, West Central US, West US, Canada Central, Canada East, Brazil South <br> **Asia Pacific** – Australia Central, Australia Central 2, Australia East, Australia Southeast, Japan East, Japan West, Korea Central, Korea South, East Asia, Southeast Asia, Central India, South India, West India, China East, China East 2, China East 3, China North, China North 2, China North 3 <br> **Europe** – West Europe, North Europe, France Central, UK South, UK West, Germany North, Germany West Central, Switzerland North, Switzerland West, Central Switzerland North, Norway East, Norway West, Sweden Central, Sweden South <br> **Africa / ME** - South Africa North, South Africa West, UAE North, UAE Central <BR> **Azure Government regions** | France South, Germany Central, Germany Northeast, US Gov IOWA |-| **OS versions** | SLES 12 with SP2, SP3, SP4 and SP5; SLES 15 with SP0, SP1, SP2, SP3, SP4, and SP5 <br><br> RHEL 7.4, 7.6, 7.7, 7.9, 8.1, 8.2, 8.4, 8.6, 8.8, and 9.0. | |
+| **OS versions** | SLES 12 with SP2, SP3, SP4 and SP5; SLES 15 with SP0, SP1, SP2, SP3, SP4, and SP5 <br><br> RHEL 7.4, 7.6, 7.7, 7.9, 8.1, 8.2, 8.4, 8.6, 8.8, 9.0, and 9.2. | |
| **HANA versions** | SDC on HANA 1.x, MDC on HANA 2.x SPS 04, SPS 05 Rev <= 59, SPS 06 (validated for encryption enabled scenarios as well), and SPS 07. | |
| **Encryption** | SSLEnforce, HANA data encryption | |
| **HANA Instances** | A single SAP HANA instance on a single Azure VM – scale up only | Multiple SAP HANA instances on a single VM. You can protect only one of these multiple instances at a time. |
Azure Backup supports the backup of SAP HANA databases to Azure. This article su | **Number of full backups per day** | One scheduled backup. <br><br> Three on-demand backups. <br><br> We recommend not to trigger more than three backups per day. However, to allow user retries in case of failed attempts, hard limit for on-demand backups is set to nine attempts. |
| **HANA deployments** | HANA System Replication (HSR) | |
| **Special configurations** | | SAP HANA + Dynamic Tiering <br> Cloning through LaMa |+| **Compression** | You can enable HANA Native compression via the Backup policy. [See the SAP HANA document](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/86943e9f8d5343c59577755edff8296b.html). | |
+| **Multi-streaming backup** | You can increase your streaming backup throughput from *420 MBps* to *1.5 GBps*. [Learn more](#support-for-multistreaming-data-backups). | |
Azure Backup supports the backup of SAP HANA databases to Azure. This article su | **VM configuration applicable for multistreaming**: To utilize the benefits of multistreaming, the VM needs to have a minimum configuration of *16 vCPUs* and *128 GB* of RAM. - **Limiting factors**: Throughput of *total disk LVM striping* and *VM network*, whichever hits first. 
-Learn more about [SAP HANA Azure Virtual Machine storage](/azure/sap/workloads/hana-vm-operations-storage) and [SAP HANA Azure virtual machine Premium SSD storage configurations](/azure/sap/workloads/hana-vm-premium-ssd-v1) configurations.
+Learn more about [SAP HANA Azure Virtual Machine storage](/azure/sap/workloads/hana-vm-operations-storage) and [SAP HANA Azure virtual machine Premium SSD storage configurations](/azure/sap/workloads/hana-vm-premium-ssd-v1). To configure multistreaming data backups, see the [SAP documentation](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/18db704959a24809be8d01cc0a409681.html). |
communication-services | Get Phone Number | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/get-phone-number.md | -zone_pivot_groups: acs-azcli-azp-java-net-python-csharp-js +zone_pivot_groups: acs-azcli-azp-azpnew-java-net-python-csharp-js # Quickstart: Get and manage phone numbers zone_pivot_groups: acs-azcli-azp-java-net-python-csharp-js [!INCLUDE [Azure portal](./includes/phone-numbers-portal.md)] ::: zone-end + ::: zone pivot="programming-language-csharp" [!INCLUDE [Azure portal](./includes/phone-numbers-net.md)] ::: zone-end |
communication-services | Migrating To Azure Communication Services Calling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/migrating-to-azure-communication-services-calling.md | + + Title: Tutorial - Migrating from Twilio video to ACS ++description: In this tutorial, you learn how to migrate your calling product from Twilio to Azure Communication Services. ++++ Last updated : 01/26/2024+++++++# Migration Guide from Twilio Video to Azure Communication Services ++This article provides guidance on how to migrate your existing Twilio Video implementation to the [Azure Communication Services' Calling SDK](../concepts/voice-video-calling/calling-sdk-features.md) for WebJS. Twilio Video and Azure Communication Services' calling SDK for WebJS are both cloud-based platforms that enable developers to add voice and video calling features to their web applications. However, there are some key differences between them that may affect your choice of platform or require some changes to your existing code if you decide to migrate. In this article, we will compare the main features and functionalities of both platforms and provide some guidance on how to migrate your existing Twilio Video implementation to Azure Communication Services' Calling SDK for WebJS. ++## Key features of the Azure Communication Services calling SDK ++- Addressing - Azure Communication Services provides [identities](../concepts/identity-model.md) for authentication and addressing communication endpoints. These identities are used within Calling APIs, providing clients with a clear view of who is connected to a call (the roster). +- Encryption - The Calling SDK safeguards traffic by encrypting it and preventing tampering along the way. +- Device Management and Media - The SDK handles the management of audio and video devices, efficiently encodes content for transmission, and supports both screen and application sharing. +- PSTN - The SDK can initiate voice calls with the traditional Public Switched Telephone Network (PSTN), [using phone numbers acquired either in the Azure portal](../quickstarts/telephony/get-phone-number.md) or programmatically. +- Teams Meetings ΓÇô Azure Communication Services is equipped to [join Teams meetings](../quickstarts/voice-video-calling/get-started-teams-interop.md) and interact with Teams voice and its video calls. +- Notifications - Azure Communication Services provides APIs for notifying clients of incoming calls, allowing your application to listen to events (for example, incoming calls) even when your application is not running in the foreground. +- User Facing Diagnostics (UFD) - Azure Communication Services utilizes [events](../concepts/voice-video-calling/user-facing-diagnostics.md) designed to provide insights into underlying issues that could affect call quality, allowing developers to subscribe to triggers such as weak network signals or muted microphones for proactive issue awareness. +- Media Statics - Provides comprehensive insights into VoIP and video call [metrics](../concepts/voice-video-calling/media-quality-sdk.md), including call quality information, empowering developers to enhance communication experiences. +- Video Constraints - Azure Communication Services offers APIs that control [video quality among other parameters](../quickstarts/voice-video-calling/get-started-video-constraints.md) during video calls. 
By adjusting parameters like resolution and frame rate, the SDK supports different call situations for varied levels of video quality. ++**For a more detailed understanding of the capabilities of the Calling SDK for different platforms, consult** [**this document**](../concepts/voice-video-calling/calling-sdk-features.md#detailed-capabilities)**.** ++If you're embarking on a new project from the ground up, see the [Quickstarts of the Calling SDK](../quickstarts/voice-video-calling/get-started-with-video-calling.md?pivots=platform-web). ++**Prerequisites:** ++1. **Azure Account:** Confirm that you have an active subscription in your Azure account. New users can create a free Azure account [here](https://azure.microsoft.com/free/). +2. **Node.js 18:** Ensure Node.js 18 is installed on your system; download can be found right [here](https://nodejs.org/en). +3. **Communication Services Resource:** Set up a [Communication Services Resource](../quickstarts/create-communication-resource.md?tabs=windows&pivots=platform-azp) via your Azure portal and note down your connection string. +4. **Azure CLI:** You can get the Azure CLI installer from [here](/cli/azure/install-azure-cli-windows?tabs=azure-cli).. +5. **User Access Token:** Generate a user access token to instantiate the call client. You can create one using the Azure CLI as follows: +```console +az communication identity token issue --scope voip --connection-string "yourConnectionString" +``` ++For more information, see the guide on how to [Use Azure CLI to Create and Manage Access Tokens](../quickstarts/identity/access-tokens.md?pivots=platform-azcli). ++For Video Calling as a Teams user: ++- You also can use Teams identity. For instructions on how to generate an access token for a Teams User, [follow this guide](../quickstarts/manage-teams-identity.md?pivots=programming-language-javascript). +- Obtain the Teams thread ID for call operations using the [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). Additional information on how to create a chat thread ID can be found [here](/graph/api/chat-post?preserve-view=true&tabs=javascript&view=graph-rest-1.0#example-2-create-a-group-chat). ++### UI library ++The UI Library simplifies the process of creating modern communication user interfaces using Azure Communication Services. It offers a collection of ready-to-use UI components that you can easily integrate into your application. ++This prebuilt set of controls facilitates the creation of aesthetically pleasing designs using [Fluent UI SDK](https://developer.microsoft.com/en-us/fluentui#/) components and the development of audio/video communication experiences. If you wish to explore more about the UI Library, check out the [overview page](../concepts/ui-library/ui-library-overview.md), where you find comprehensive information about both web and mobile platforms. ++### Calling support ++The Azure Communication Services Calling SDK supports the following streaming configurations: ++| Limit | Web | Windows/Android/iOS | +||-|--| +| Maximum \# of outgoing local streams that can be sent simultaneously | 1 video and 1 screen sharing | 1 video + 1 screen sharing | +| Maximum \# of incoming remote streams that can be rendered simultaneously | 9 videos + 1 screen sharing on desktop browsers\*, 4 videos + 1 screen sharing on web mobile browsers | 9 videos + 1 screen sharing | ++## Call Types in Azure Communication Services ++Azure Communication Services offers various call types. 
The type of call you choose impacts your signaling schema, the flow of media traffic, and your pricing model. Further details can be found [here](../concepts/voice-video-calling/about-call-types.md). ++- Voice Over IP (VoIP) - This type of call involves one user of your application calling another over an internet or data connection. Both signaling and media traffic are routed over the internet. +- Public Switched Telephone Network (PSTN) - When your users interact with a traditional telephone number, calls are facilitated via PSTN voice calling. In order to make and receive PSTN calls, you need to introduce telephony capabilities to your Azure Communication Services resource. Here, signaling and media employ a mix of IP-based and PSTN-based technologies to connect your users. +- One-to-One Call - When one of your users connects with another through our SDKs. The call can be established via either VoIP or PSTN. +- Group Call - Involved when three or more participants connect. Any combination of VoIP and PSTN-connected users can partake in a group call. A one-to-one call can evolve into a group call by adding more participants to the call, and one of these participants can be a bot. +- Rooms Call - A Room acts as a container that manages activity between end-users of Azure Communication Services. It provides application developers with enhanced control over who can join a call, when they can meet, and how they collaborate. For a more comprehensive understanding of Rooms, please refer to the [conceptual documentation](../concepts/rooms/room-concept.md). ++## Installation ++### Install the Azure Communication Services calling SDK ++Use the `npm install` command to install the Azure Communication Services Calling SDK for JavaScript. +```console +npm install @azure/communication-common @azure/communication-calling +``` ++### Remove the Twilio SDK from the project ++You can remove the Twilio SDK from your project by uninstalling the package: +```console +npm uninstall twilio-video +``` ++## Object model ++The following classes and interfaces handle some of the main features of the Azure Communication Services Calling SDK: ++| **Name** | **Description** | +|--|-| +| CallClient | The main entry point to the Calling SDK. | +| AzureCommunicationTokenCredential | Implements the CommunicationTokenCredential interface, which is used to instantiate the CallAgent. | +| CallAgent | Used to start and manage calls. | +| DeviceManager | Used to manage media devices. | +| Call | Used for representing a Call. | +| LocalVideoStream | Used for creating a local video stream for a camera device on the local system. | +| RemoteParticipant | Used for representing a remote participant in the Call. | +| RemoteVideoStream | Used for representing a remote video stream from a Remote Participant. | +| LocalAudioStream | Represents a local audio stream for a local microphone device | +| AudioOptions | Audio options, which are provided when making an outgoing call or joining a group call | +| AudioIssue | Represents end-of-call survey audio issues; example responses are NoLocalAudio (the other participants were unable to hear me) or LowVolume (the call's audio volume was low) | ++When using the SDK in a Teams implementation, there are a few differences: ++- Instead of `CallAgent` - use `TeamsCallAgent` for starting and managing Teams calls. +- Instead of `Call` - use `TeamsCall` for representing a Teams Call. 
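++To make the Teams-specific difference concrete, the following is a minimal sketch contrasting the two entry points. It's a sketch under assumptions, not the tutorial's own sample: it assumes top-level `await` (as in the other snippets here), placeholder token values, and that your SDK version exposes `createTeamsCallAgent` (the Teams variant requires a Teams user access token rather than a standard Communication Services token). ++```javascript +const { CallClient } = require('@azure/communication-calling'); +const { AzureCommunicationTokenCredential } = require('@azure/communication-common'); ++const callClient = new CallClient(); ++// Standard Azure Communication Services identity -> CallAgent / Call +const acsTokenCredential = new AzureCommunicationTokenCredential('<ACS_USER_TOKEN>'); +const callAgent = await callClient.createCallAgent(acsTokenCredential, { displayName: 'optional user name' }); ++// Teams identity (assumes a Teams user access token) -> TeamsCallAgent / TeamsCall +const teamsTokenCredential = new AzureCommunicationTokenCredential('<TEAMS_USER_TOKEN>'); +const teamsCallAgent = await callClient.createTeamsCallAgent(teamsTokenCredential); +``` 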
++## Initialize the Calling SDK (CallClient/CallAgent) ++Using the `CallClient`, initialize a `CallAgent` instance. The `createCallAgent` method uses `CommunicationTokenCredential` as an argument. It accepts a [user access token](../quickstarts/identity/access-tokens.md?tabs=windows&pivots=programming-language-javascript). ++### Device manager ++#### Twilio ++Twilio doesn't have a Device Manager analog; tracks are created using the system's default device. For customization, you should obtain the desired source track via: +```javascript +navigator.mediaDevices.getUserMedia() +``` ++And pass it to the track creation method. ++#### Azure Communication Services +```javascript +const { CallClient } = require('@azure/communication-calling'); +const { AzureCommunicationTokenCredential} = require('@azure/communication-common'); ++const userToken = '<USER_TOKEN>'; +const tokenCredential = new AzureCommunicationTokenCredential(userToken); ++const callClient = new CallClient(); +const callAgent = await callClient.createCallAgent(tokenCredential, {displayName: 'optional user name'}); +``` ++You can use the `getDeviceManager` method on the `CallClient` instance to access the `deviceManager`. ++```javascript +const deviceManager = await callClient.getDeviceManager(); +// Get a list of available video devices for use. +const localCameras = await deviceManager.getCameras(); ++// Get a list of available microphone devices for use. +const localMicrophones = await deviceManager.getMicrophones(); ++// Get a list of available speaker devices for use. +const localSpeakers = await deviceManager.getSpeakers(); +``` ++### Get device permissions ++#### Twilio ++Twilio Video asks for device permissions on track creation. ++#### Azure Communication Services ++Prompt a user to grant camera and/or microphone permissions: +```javascript +const result = await deviceManager.askDevicePermission({audio: true, video: true}); +``` ++The output is an object that indicates whether audio and video permissions were granted: +```javascript +console.log(result.audio); console.log(result.video); +``` ++## Starting a call ++### Twilio ++```javascript +import * as TwilioVideo from 'twilio-video'; ++const twilioVideo = TwilioVideo; +let twilioRoom; ++twilioRoom = await twilioVideo.connect('token', { name: 'roomName', audio: false, video: false }); +``` ++### Azure Communication Services ++To create and start a call, use one of the APIs on `callAgent` and provide a user that you created through the Communication Services identity SDK. ++Call creation and start are synchronous. The `call` instance allows you to subscribe to call events - subscribe to the `stateChanged` event for value changes. +```javascript +call.on('stateChanged', async () => { console.log(`Call state changed: ${call.state}`) }); +``` ++### Azure Communication Services 1:1 Call ++To call another Communication Services user, use the `startCall` method on `callAgent` and pass the recipient's CommunicationUserIdentifier that you [created with the Communication Services administration library](../quickstarts/identity/access-tokens.md). +```javascript +const userCallee = { communicationUserId: '<Azure_Communication_Services_USER_ID>' }; +const oneToOneCall = callAgent.startCall([userCallee]); +``` ++### Azure Communication Services Room Call ++To join a `room` call, you can instantiate a context object with the `roomId` property as the room identifier. To join the call, use the `join` method and pass the context instance. 
+```javascript +const context = { roomId: '<RoomId>' }; +const call = callAgent.join(context); +``` +A **room** offers application developers better control over who can join a call, when they meet and how they collaborate. To learn more about **rooms**, you can read the [conceptual documentation](../concepts/rooms/room-concept.md) or follow the [quick start guide](../quickstarts/rooms/join-rooms-call.md). ++### Azure Communication Services group Call ++To start a new group call or join an ongoing group call, use the `join` method and pass an object with a `groupId` property. The `groupId` value has to be a GUID. +```javascript +const context = { groupId: '<GUID>' }; +const call = callAgent.join(context); +``` ++### Azure Communication Services Teams call ++Start a synchronous one-to-one or group call with the `startCall` API on `teamsCallAgent`. You can provide `MicrosoftTeamsUserIdentifier` or `PhoneNumberIdentifier` as a parameter to define the target of the call. The method returns the `TeamsCall` instance that allows you to subscribe to call events. +```javascript +const userCallee = { microsoftTeamsUserId: '<MICROSOFT_TEAMS_USER_ID>' }; +const oneToOneCall = teamsCallAgent.startCall(userCallee); +``` ++## Accepting and joining a call ++### Twilio ++In the Twilio Video SDK, the Participant is created after joining the room, and it doesn't have any information about other rooms. ++### Azure Communication Services ++Azure Communication Services has the `CallAgent` instance, which emits an `incomingCall` event when the logged-in identity receives an incoming call. +```javascript +callAgent.on('incomingCall', async (call) => { + // Incoming call + }); +``` ++The `incomingCall` event includes an `incomingCall` instance that you can accept or reject. ++When starting/joining/accepting a call with video on, if the specified video camera device is being used by another process or if it's disabled in the system, the call starts with video off, and a `cameraStartFailed: true` call diagnostic is raised. ++```javascript +const incomingCallHandler = async (args: { incomingCall: IncomingCall }) => { + const incomingCall = args.incomingCall; ++ // Get incoming call ID + var incomingCallId = incomingCall.id ++ // Get information about this Call. This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment. To use this API, use the 'beta' release of the Azure Communication Services Calling Web SDK. + var callInfo = incomingCall.info; ++ // Get information about caller + var callerInfo = incomingCall.callerInfo + + // Accept the call + var call = await incomingCall.accept(); ++ // Reject the call + incomingCall.reject(); ++ // Subscribe to callEnded event and get the call end reason + incomingCall.on('callEnded', args => + { console.log(args.callEndReason); + }); ++ // callEndReason is also a property of IncomingCall + var callEndReason = incomingCall.callEndReason; +}; ++callAgentInstance.on('incomingCall', incomingCallHandler); ++``` ++After starting a call, joining a call, or accepting a call, you can also use the callAgent's `callsUpdated` event to be notified of the new Call object and start subscribing to it. 
+```javascript +callAgent.on('callsUpdated', (event) => { + event.added.forEach((call) => { + // User joined call + }); + + event.removed.forEach((call) => { + // User left call + }); +}); +``` ++For Azure Communication Services Teams implementation, check how to [Receive a Teams Incoming Call](../how-tos/cte-calling-sdk/manage-calls.md#receive-a-teams-incoming-call). ++## Adding participants to call ++### Twilio ++Participants can't be added or removed from a Twilio Room; they need to join the Room or disconnect from it themselves. ++The Local Participant in a Twilio Room can be accessed this way: +```javascript +let localParticipant = twilioRoom.localParticipant; +``` ++Remote Participants in a Twilio Room are represented with a map that has the unique Participant SID as a key: +```javascript +twilioRoom.participants; +``` ++### Azure Communication Services ++All remote participants are represented by the `RemoteParticipant` type and available through the `remoteParticipants` collection on a call instance. ++The `remoteParticipants` collection returns a list of remote participants in a call: +```javascript +call.remoteParticipants; // [remoteParticipant, remoteParticipant....] +``` ++**Add participant:** ++To add a participant to a call, you can use `addParticipant`. Provide one of the Identifier types. It synchronously returns the `remoteParticipant` instance. ++The `remoteParticipantsUpdated` event from Call is raised when a participant is successfully added to the call. +```javascript +const userIdentifier = { communicationUserId: '<Azure_Communication_Services_USER_ID>' }; +const remoteParticipant = call.addParticipant(userIdentifier); +``` ++**Remove participant:** ++To remove a participant from a call, you can invoke `removeParticipant`. You have to pass one of the Identifier types. This method resolves asynchronously after the participant is removed from the call. The participant is also removed from the `remoteParticipants` collection. +```javascript +const userIdentifier = { communicationUserId: '<Azure_Communication_Services_USER_ID>' }; +await call.removeParticipant(userIdentifier); ++``` ++Subscribe to the call's `remoteParticipantsUpdated` event to be notified when new participants are added to the call or removed from the call. ++```javascript +call.on('remoteParticipantsUpdated', e => { + e.added.forEach(remoteParticipant => { + // Subscribe to new remote participants that are added to the call + }); + + e.removed.forEach(remoteParticipant => { + // Unsubscribe from participants that are removed from the call + }) ++}); +``` ++Subscribe to remote participant's `stateChanged` event for value changes. +```javascript +remoteParticipant.on('stateChanged', () => { + console.log(`Remote participants state changed: ${remoteParticipant.state}`) +}); +``` ++## Video ++### Starting and stopping video ++#### Twilio ++```javascript +const videoTrack = await twilioVideo.createLocalVideoTrack({ constraints }); +const videoTrackPublication = await localParticipant.publishTrack(videoTrack, { options }); +``` ++The camera is enabled by default; however, it can be disabled and re-enabled if necessary: +```javascript +videoTrack.disable(); +``` +Or +```javascript +videoTrack.enable(); +``` ++The created video track should then be attached locally: ++```javascript +const videoElement = videoTrack.attach(); +const localVideoContainer = document.getElementById( localVideoContainerId ); +localVideoContainer.appendChild(videoElement); ++``` ++Twilio Tracks rely on default input devices and reflect the changes in defaults. 
However, to change an input device, the previous Video Track should be unpublished: ++```javascript +localParticipant.unpublishTrack(videoTrack); +``` ++And a new Video Track with the correct constraints should be created. ++#### Azure Communication Services +To start a video while on a call, you have to enumerate cameras using the `getCameras` method on the `deviceManager` object. Then create a new instance of `LocalVideoStream` with the desired camera and pass the `LocalVideoStream` object into the `startVideo` method of an existing call object: ++```javascript +const deviceManager = await callClient.getDeviceManager(); +const cameras = await deviceManager.getCameras(); +const camera = cameras[0]; +const localVideoStream = new LocalVideoStream(camera); +await call.startVideo(localVideoStream); +``` ++After you successfully start sending video, a `LocalVideoStream` instance of type `Video` is added to the `localVideoStreams` collection on a call instance. +```javascript +const localVideoStream = call.localVideoStreams.find( (stream) => { return stream.mediaStreamType === 'Video'} ); +``` ++To stop local video while on a call, pass the `localVideoStream` instance that's being used for video: +```javascript +await call.stopVideo(localVideoStream); +``` ++You can switch to a different camera device while video is being sent by invoking `switchSource` on a `localVideoStream` instance: ++```javascript +const deviceManager = await callClient.getDeviceManager(); +const cameras = await deviceManager.getCameras(); +const camera = cameras[1]; +localVideoStream.switchSource(camera); +``` ++If the specified video device is being used by another process, or if it's disabled in the system: ++- While in a call, if your video is off and you start video using `call.startVideo()`, this method throws a `SourceUnavailableError` and `cameraStartFailed` will be set to true. +- A call to the `localVideoStream.switchSource()` method causes `cameraStartFailed` to be set to true. Our [Call Diagnostics guide](../concepts/voice-video-calling/call-diagnostics.md) provides additional information on how to diagnose call related issues. ++To verify if the local video is on or off, you can use the `isLocalVideoStarted` API, which returns true or false: +```javascript +call.isLocalVideoStarted; +``` ++To listen for changes to the local video, you can subscribe and unsubscribe to the `isLocalVideoStartedChanged` event: ++```javascript +// Subscribe to local video event +call.on('isLocalVideoStartedChanged', () => { + // Callback(); +}); +// Unsubscribe from local video event +call.off('isLocalVideoStartedChanged', () => { + // Callback(); +}); ++``` ++### Rendering a remote user video ++#### Twilio ++As soon as a Remote Participant publishes a Video Track, it needs to be attached. 
The `trackSubscribed` event on the Room or Remote Participant allows you to detect when the track can be attached: ++```javascript +twilioRoom.on('participantConnected', (participant) => { + participant.on('trackSubscribed', (track) => { + const remoteVideoElement = track.attach(); + const remoteVideoContainer = document.getElementById(remoteVideoContainerId + participant.identity); + remoteVideoContainer.appendChild(remoteVideoElement); + }); +}); +``` ++Or ++```javascript +twilioRoom.on('trackSubscribed', (track, publication, participant) => { + const remoteVideoElement = track.attach(); + const remoteVideoContainer = document.getElementById(remoteVideoContainerId + participant.identity); + remoteVideoContainer.appendChild(remoteVideoElement); +}); +``` ++#### Azure Communication Services ++To list the video streams and screen sharing streams of remote participants, inspect the `videoStreams` collections: +```javascript +const remoteVideoStream: RemoteVideoStream = call.remoteParticipants[0].videoStreams[0]; +const streamType: MediaStreamType = remoteVideoStream.mediaStreamType; +``` ++To render `RemoteVideoStream`, you have to subscribe to its `isAvailableChanged` event. If the `isAvailable` property changes to true, a remote participant is sending a stream. After that happens, create a new instance of `VideoStreamRenderer`, and then create a new `VideoStreamRendererView` instance by using the asynchronous `createView` method. You can then attach `view.target` to any UI element. ++Whenever the availability of a remote stream changes, you can destroy the whole `VideoStreamRenderer` or a specific `VideoStreamRendererView`. If you decide to keep them, a blank video frame is displayed. ++```javascript +// Reference to the html's div where we would display a grid of all remote video streams from all participants. +let remoteVideosGallery = document.getElementById('remoteVideosGallery'); ++subscribeToRemoteVideoStream = async (remoteVideoStream) => { + let renderer = new VideoStreamRenderer(remoteVideoStream); + let view; + let remoteVideoContainer = document.createElement('div'); + remoteVideoContainer.className = 'remote-video-container'; ++ let loadingSpinner = document.createElement('div'); + // See the css example below for styling the loading spinner. + loadingSpinner.className = 'loading-spinner'; + remoteVideoStream.on('isReceivingChanged', () => { + try { + if (remoteVideoStream.isAvailable) { + const isReceiving = remoteVideoStream.isReceiving; + const isLoadingSpinnerActive = remoteVideoContainer.contains(loadingSpinner); + if (!isReceiving && !isLoadingSpinnerActive) { + remoteVideoContainer.appendChild(loadingSpinner); + } else if (isReceiving && isLoadingSpinnerActive) { + remoteVideoContainer.removeChild(loadingSpinner); + } + } + } catch (e) { + console.error(e); + } + }); ++ const createView = async () => { + // Create a renderer view for the remote video stream. + view = await renderer.createView(); + // Attach the renderer view to the UI. + remoteVideoContainer.appendChild(view.target); + remoteVideosGallery.appendChild(remoteVideoContainer); + } ++ // Remote participant has switched video on/off + remoteVideoStream.on('isAvailableChanged', async () => { + try { + if (remoteVideoStream.isAvailable) { + await createView(); + } else { + view.dispose(); + remoteVideosGallery.removeChild(remoteVideoContainer); + } + } catch (e) { + console.error(e); + } + }); ++ // Remote participant has video on initially. 
+ if (remoteVideoStream.isAvailable) { + try { + await createView(); + } catch (e) { + console.error(e); + } + } + + console.log(`Initial stream size: height: ${remoteVideoStream.size.height}, width: ${remoteVideoStream.size.width}`); + remoteVideoStream.on('sizeChanged', () => { + console.log(`Remote video stream size changed: new height: ${remoteVideoStream.size.height}, new width: ${remoteVideoStream.size.width}`); + }); +} ++``` ++Subscribe to the remote participant's `videoStreamsUpdated` event to be notified when the remote participant adds new video streams and removes video streams. ++```javascript +remoteParticipant.on('videoStreamsUpdated', e => { + e.added.forEach(remoteVideoStream => { + // Subscribe to new remote participant's video streams + }); ++ e.removed.forEach(remoteVideoStream => { + // Unsubscribe from remote participant's video streams + }); +}); ++``` ++### Virtual background ++#### Twilio ++To use Virtual Background, the Twilio helper library should be installed: +```console +npm install @twilio/video-processors +``` ++A new Processor instance should be created and loaded: ++```javascript +import { GaussianBlurBackgroundProcessor } from '@twilio/video-processors'; ++const blurProcessor = new GaussianBlurBackgroundProcessor({ assetsPath: virtualBackgroundAssets }); ++await blurProcessor.loadModel(); +``` +As soon as the model is loaded, the background can be added to the video track via the `addProcessor` method: +```javascript +videoTrack.addProcessor(processor, { inputFrameBufferType: 'video', outputFrameBufferContextType: 'webgl2' }); +``` +++#### Azure Communication Services ++Use the `npm install` command to install the Azure Communication Services Effects SDK for JavaScript. +```console +npm install @azure/communication-calling-effects --save +``` ++> [!NOTE] +> To use video effects with the Azure Communication Calling SDK, once you've created a `LocalVideoStream`, you need to get the `VideoEffects` feature API of the `LocalVideoStream` to start/stop video effects: ++```javascript +import * as AzureCommunicationCallingSDK from '@azure/communication-calling'; ++import { BackgroundBlurEffect, BackgroundReplacementEffect } from '@azure/communication-calling-effects'; ++// Get the video effects feature API on the LocalVideoStream +// (here, localVideoStream is the LocalVideoStream object you created while setting up video calling) +const videoEffectsFeatureApi = localVideoStream.feature(AzureCommunicationCallingSDK.Features.VideoEffects); ++// Subscribe to useful events +videoEffectsFeatureApi.on('effectsStarted', () => { + // Effects started +}); ++videoEffectsFeatureApi.on('effectsStopped', () => { + // Effects stopped +}); ++videoEffectsFeatureApi.on('effectsError', (error) => { + // Effects error +}); +``` ++To blur the background: ++```javascript +// Create the effect instance +const backgroundBlurEffect = new BackgroundBlurEffect(); ++// Recommended: Check support +const backgroundBlurSupported = await backgroundBlurEffect.isSupported(); ++if (backgroundBlurSupported) { + // Use the video effects feature API we created to start effects + await videoEffectsFeatureApi.startEffects(backgroundBlurEffect); +} +``` ++For background replacement with an image, you need to provide the URL of the image you want as the background for this effect. 
The currently supported image formats are png, jpg, jpeg, tiff, and bmp, and the currently supported aspect ratio is 16:9. ++```javascript +const backgroundImage = 'https://linkToImageFile'; ++// Create the effect instance +const backgroundReplacementEffect = new BackgroundReplacementEffect({ + backgroundImageUrl: backgroundImage +}); ++// Recommended: Check support +const backgroundReplacementSupported = await backgroundReplacementEffect.isSupported(); ++if (backgroundReplacementSupported) { + // Use the video effects feature API as before to start/stop effects + await videoEffectsFeatureApi.startEffects(backgroundReplacementEffect); +} +``` ++Change the image for this effect by passing it to the `configure` method: +```javascript +const newBackgroundImage = 'https://linkToNewImageFile'; ++await backgroundReplacementEffect.configure({ + backgroundImageUrl: newBackgroundImage +}); +``` ++Switching effects can be done using the same method on the video effects feature API: ++```javascript +// Switch to background blur +await videoEffectsFeatureApi.startEffects(backgroundBlurEffect); ++// Switch to background replacement +await videoEffectsFeatureApi.startEffects(backgroundReplacementEffect); +``` ++At any time if you want to check what effects are active, you can use the `activeEffects` property. The `activeEffects` property returns an array with the names of the currently active effects and returns an empty array if there are no effects active. +```javascript +// Using the video effects feature API +const currentActiveEffects = videoEffectsFeatureApi.activeEffects; +``` ++To stop effects: +```javascript +await videoEffectsFeatureApi.stopEffects(); +``` +++## Audio ++### Starting and stopping audio ++#### Twilio ++```javascript +const audioTrack = await twilioVideo.createLocalAudioTrack({ constraints }); +const audioTrackPublication = await localParticipant.publishTrack(audioTrack, { options }); +``` ++The microphone is enabled by default; however, it can be disabled and re-enabled if necessary: +```javascript +audioTrack.disable(); +``` ++Or +```javascript +audioTrack.enable(); +``` ++The created Audio Track should be attached by the Local Participant the same way as a Video Track: ++```javascript +const audioElement = audioTrack.attach(); +const localAudioContainer = document.getElementById(localAudioContainerId); +localAudioContainer.appendChild(audioElement); +``` ++And by the Remote Participant: ++```javascript +twilioRoom.on('participantConnected', (participant) => { + participant.on('trackSubscribed', (track) => { + const remoteAudioElement = track.attach(); + const remoteAudioContainer = document.getElementById(remoteAudioContainerId + participant.identity); + remoteAudioContainer.appendChild(remoteAudioElement); + }); +}); +``` ++Or ++```javascript +twilioRoom.on('trackSubscribed', (track, publication, participant) => { + const remoteAudioElement = track.attach(); + const remoteAudioContainer = document.getElementById(remoteAudioContainerId + participant.identity); + remoteAudioContainer.appendChild(remoteAudioElement); +}); ++``` ++It isn't possible to mute incoming audio in the Twilio Video SDK. ++#### Azure Communication Services ++```javascript +await call.startAudio(); +``` ++To mute or unmute the local endpoint, you can use the `mute` and `unmute` asynchronous APIs: ++```javascript +//mute local device (microphone / sent audio) +await call.mute(); ++//unmute local device (microphone / sent audio) +await call.unmute(); +``` ++Muting incoming audio sets the call volume to 0. 
To mute or unmute the incoming audio, use the `muteIncomingAudio` and `unmuteIncomingAudio` asynchronous APIs: ++```javascript +//mute local device (speaker) +await call.muteIncomingAudio(); ++//unmute local device (speaker) +await call.unmuteIncomingAudio(); ++``` ++### Detecting Dominant speaker ++#### Twilio ++To detect the loudest Participant in the Room, the Dominant Speaker API can be used. It can be enabled in the connection options when joining the Group Room with at least 2 participants: +```javascript +twilioRoom = await twilioVideo.connect('token', { +name: 'roomName', +audio: false, +video: false, +dominantSpeaker: true +}); +``` ++When the loudest speaker in the Room changes, the `dominantSpeakerChanged` event is emitted: ++```javascript +twilioRoom.on('dominantSpeakerChanged', (participant) => { + // Highlighting the loudest speaker +}); +``` ++#### Azure Communication Services ++Dominant speakers is an extended feature of the core Call API that allows you to obtain a list of the active speakers in the call. This is a ranked list, where the first element in the list represents the last active speaker on the call, and so on. ++In order to obtain the dominant speakers in a call, you first need to obtain the call dominant speakers feature API object: +```javascript +const callDominantSpeakersApi = call.feature(Features.CallDominantSpeakers); +``` ++Next, you can obtain the list of the dominant speakers by calling `dominantSpeakers`. This has a type of `DominantSpeakersInfo`, which has the following members: ++- `speakersList` contains the list of the ranked dominant speakers in the call. These are represented by their participant ID. +- `timestamp` is the latest update time for the dominant speakers in the call. +```javascript +let dominantSpeakers: DominantSpeakersInfo = callDominantSpeakersApi.dominantSpeakers; +``` ++Also, you can subscribe to the `dominantSpeakersChanged` event to know when the dominant speakers list has changed. ++```javascript +const dominantSpeakersChangedHandler = () => { + // Get the most up-to-date list of dominant speakers + let dominantSpeakers = callDominantSpeakersApi.dominantSpeakers; +}; +callDominantSpeakersApi.on('dominantSpeakersChanged', dominantSpeakersChangedHandler); ++``` ++## Enabling screen sharing +### Twilio ++To share the screen in Twilio Video, the source track should be obtained via `navigator.mediaDevices`: ++Chromium-based browsers: +```javascript +const stream = await navigator.mediaDevices.getDisplayMedia({ + audio: false, + video: true + }); +const track = stream.getTracks()[0]; +``` ++Firefox and Safari: +```javascript +const stream = await navigator.mediaDevices.getUserMedia({ mediaSource: 'screen' }); +const track = stream.getTracks()[0]; +``` ++The obtained screen share track can then be published and managed the same way as a regular Video Track (see the "Video" section). ++### Azure Communication Services ++To start screen sharing while on a call, you can use the asynchronous `startScreenSharing` API: +```javascript +await call.startScreenSharing(); +``` ++After you successfully start screen sharing, a `LocalVideoStream` instance of type `ScreenSharing` is created and added to the `localVideoStreams` collection on the call instance. 
++```javascript +const localVideoStream = call.localVideoStreams.find( (stream) => { return stream.mediaStreamType === 'ScreenSharing'} ); +``` ++To stop screen sharing while on a call, you can use the asynchronous `stopScreenSharing` API: +```javascript +await call.stopScreenSharing(); +``` ++To verify if screen sharing is on or off, you can use the `isScreenSharingOn` API, which returns true or false: +```javascript +call.isScreenSharingOn; +``` ++To listen for changes to the screen share, you can subscribe and unsubscribe to the `isScreenSharingOnChanged` event: ++```javascript +// Subscribe to screen share event +call.on('isScreenSharingOnChanged', () => { + // Callback(); +}); +// Unsubscribe from screen share event +call.off('isScreenSharingOnChanged', () => { + // Callback(); +}); ++``` ++## Media quality statistics ++### Twilio ++To collect real-time media stats, the `getStats` method can be used. +```javascript +const stats = twilioRoom.getStats(); +``` ++### Azure Communication Services ++Media quality statistics is an extended feature of the core Call API. You first need to obtain the `MediaStats` feature API object: ++```javascript +const mediaStatsFeature = call.feature(Features.MediaStats); +``` +++To receive the media statistics data, you can subscribe to the `sampleReported` event or the `summaryReported` event: ++- The `sampleReported` event triggers every second. It's suitable as a data source for UI display or your own data pipeline. +- The `summaryReported` event contains the aggregated values of the data over intervals, which is useful when you just need a summary. ++If you want control over the interval of the `summaryReported` event, you need to define `mediaStatsCollectorOptions` of type `MediaStatsCollectorOptions`. Otherwise, the SDK uses default values. +```javascript +const mediaStatsCollectorOptions: SDK.MediaStatsCollectorOptions = { + aggregationInterval: 10, + dataPointsPerAggregation: 6 +}; ++const mediaStatsCollector = mediaStatsFeature.createCollector(mediaStatsCollectorOptions); ++mediaStatsCollector.on('sampleReported', (sample) => { + console.log('media stats sample', sample); +}); ++mediaStatsCollector.on('summaryReported', (summary) => { + console.log('media stats summary', summary); +}); +``` ++If you don't need to use the media statistics collector, you can call the `dispose` method of `mediaStatsCollector`. ++```javascript +mediaStatsCollector.dispose(); +``` +++It's not necessary to call the `dispose` method of `mediaStatsCollector` every time the call ends, as the collectors are reclaimed internally when the call ends. ++You can learn more about media quality statistics [here](../concepts/voice-video-calling/media-quality-sdk.md?pivots=platform-web). ++## Diagnostics ++### Twilio ++To test connectivity, Twilio offers the Preflight API - a test call is performed to identify signaling and media connectivity issues. 
++To launch the test, an access token is required: ++```javascript +const preflightTest = twilioVideo.runPreflight(token); ++// Emits when particular call step completes +preflightTest.on('progress', (progress) => { + console.log(`Preflight progress: ${progress}`); +}); ++// Emits if the test has failed and returns error and partial test results +preflightTest.on('failed', (error, report) => { + console.error(`Preflight error: ${error}`); + console.log(`Partial preflight test report: ${report}`); +}); ++// Emits when the test has been completed successfully and returns the report +preflightTest.on('completed', (report) => { + console.log(`Preflight test report: ${report}`); +}); ++``` ++Another way to identify network issues during the call is the Network Quality API, which monitors the Participant's network and provides quality metrics. It can be enabled in the connection options when joining the Group Room: ++```javascript +twilioRoom = await twilioVideo.connect('token', { + name: 'roomName', + audio: false, + video: false, + networkQuality: { + local: 3, // Local Participant's Network Quality verbosity + remote: 1 // Remote Participants' Network Quality verbosity + } +}); +``` ++When the network quality for a Participant changes, a `networkQualityLevelChanged` event is emitted: +```javascript +participant.on('networkQualityLevelChanged', (networkQualityLevel, networkQualityStats) => { + // Processing Network Quality stats +}); +``` ++### Azure Communication Services +Azure Communication Services provides a feature called User Facing Diagnostics (UFD) that can be used to examine various properties of a call to determine what the issue might be. User Facing Diagnostics are events that are fired when an underlying issue (for example, a poor network or a muted microphone) could cause a user to have a poor experience. ++User-facing diagnostics is an extended feature of the core Call API and allows you to diagnose an active call. +```javascript +const userFacingDiagnostics = call.feature(Features.UserFacingDiagnostics); +``` ++Subscribe to the `diagnosticChanged` event to monitor when any user-facing diagnostic changes: +```javascript +/** + * Each diagnostic has the following data: + * - diagnostic is the type of diagnostic, e.g. NetworkSendQuality, DeviceSpeakWhileMuted + * - value is DiagnosticQuality or DiagnosticFlag: + * - DiagnosticQuality = enum { Good = 1, Poor = 2, Bad = 3 }. + * - DiagnosticFlag = true | false. 
+ * - valueType = 'DiagnosticQuality' | 'DiagnosticFlag' + */ +const diagnosticChangedListener = (diagnosticInfo: NetworkDiagnosticChangedEventArgs | MediaDiagnosticChangedEventArgs) => { + console.log(`Diagnostic changed: ` + + `Diagnostic: ${diagnosticInfo.diagnostic}` + + `Value: ${diagnosticInfo.value}` + + `Value type: ${diagnosticInfo.valueType}`); ++ if (diagnosticInfo.valueType === 'DiagnosticQuality') { + if (diagnosticInfo.value === DiagnosticQuality.Bad) { + console.error(`${diagnosticInfo.diagnostic} is bad quality`); ++ } else if (diagnosticInfo.value === DiagnosticQuality.Poor) { + console.error(`${diagnosticInfo.diagnostic} is poor quality`); + } ++ } else if (diagnosticInfo.valueType === 'DiagnosticFlag') { + if (diagnosticInfo.value === true) { + console.error(`${diagnosticInfo.diagnostic}`); + } + } +}; ++userFacingDiagnostics.network.on('diagnosticChanged', diagnosticChangedListener); +userFacingDiagnostics.media.on('diagnosticChanged', diagnosticChangedListener); ++``` ++You can learn more about User Facing Diagnostics and the different diagnostic values available in [this article](../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web). ++Azure Communication Services also provides a pre-call diagnostics API. To access the Pre-Call API, you need to initialize a `callClient` and provision an Azure Communication Services access token. Then you can access the `PreCallDiagnostics` feature and the `startTest` method. ++```javascript +import { CallClient, Features} from "@azure/communication-calling"; +import { AzureCommunicationTokenCredential } from '@azure/communication-common'; ++const callClient = new CallClient(); +const tokenCredential = new AzureCommunicationTokenCredential("INSERT ACCESS TOKEN"); +const preCallDiagnosticsResult = await callClient.feature(Features.PreCallDiagnostics).startTest(tokenCredential); +``` ++The Pre-Call API returns a full diagnostic of the device including details like device permissions, availability and compatibility, call quality stats, and in-call diagnostics. The results are returned as a `PreCallDiagnosticsResult` object. ++```javascript +export declare type PreCallDiagnosticsResult = { + deviceAccess: Promise<DeviceAccess>; + deviceEnumeration: Promise<DeviceEnumeration>; + inCallDiagnostics: Promise<InCallDiagnostics>; + browserSupport?: Promise<DeviceCompatibility>; + id: string; + callMediaStatistics?: Promise<MediaStatsCallFeature>; +}; +``` ++You can learn more about ensuring pre-call readiness [here](../concepts/voice-video-calling/pre-call-diagnostics.md). +++## Event listeners ++### Twilio ++```javascript +twilioRoom.on('participantConnected', (participant) => { +// Participant connected +}); ++twilioRoom.on('participantDisconnected', (participant) => { +// Participant disconnected +}); ++``` ++### Azure Communication Services ++Each object in the JavaScript Calling SDK has properties and collections. Their values change throughout the lifetime of the object. Use the `on()` method to subscribe to objects' events, and use the `off()` method to unsubscribe from objects' events. ++**Properties** ++- You must inspect their initial values, and subscribe to the `'<property>Changed'` event for future value updates. ++**Collections** ++- You must inspect their initial values, and subscribe to the `'<collection>Updated'` event for future value updates. +- The `'<collection>Updated'` event's payload has an `added` array that contains values that were added to the collection. 
+- The `'<collection>Updated'` event's payload also has a `removed` array that contains values that were removed from the collection. ++## Leaving and ending sessions ++### Twilio +```javascript +twilioVideo.disconnect(); +``` +++### Azure Communication Services +```javascript +call.hangUp(); ++// Set the 'forEveryone' property to true to end call for all participants +call.hangUp({ forEveryone: true }); ++``` ++## Cleaning Up ++If you want to [clean up and remove a Communication Services subscription](../quickstarts/create-communication-resource.md?tabs=windows&pivots=platform-azp#clean-up-resources), you can delete the resource or resource group. |
container-apps | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/whats-new.md | This article lists significant updates and new features available in Azure Conta | - | -- | | [Generally Available: Inbound IP restrictions](./ingress-overview.md#ip-restrictions) | Enables container apps to restrict inbound HTTP or TCP traffic by allowing or denying access to a specific list of IP address ranges. | | [Generally Available: TCP support](./ingress-overview.md#tcp) | Azure Container Apps now supports using TCP-based protocols other than HTTP or HTTPS for ingress. | -| [Generally Available: Github Actions for Azure Container Apps](./github-actions.md) | Azure Container Apps allows you to use GitHub Actions to publish revisions to your container app. | +| [Generally Available: GitHub Actions for Azure Container Apps](./github-actions.md) | Azure Container Apps allows you to use GitHub Actions to publish revisions to your container app. | | [Generally Available: Azure Pipelines for Azure Container Apps](./azure-pipelines.md) | Azure Container Apps allows you to use Azure Pipelines to publish revisions to your container app. | | [Dapr: Easy component creation](./dapr-component-connection.md) | You can now configure and secure dependent Azure services to use Dapr APIs in the portal using the Service Connector feature. Learn how to [connect to Azure services via Dapr components in the Azure portal](./dapr-component-connection.md). | |
container-registry | Container Registry Tutorial Sign Build Push | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md | Title: Sign container images with Notation and Azure Key Vault using a self-signed certificate (Preview) + Title: Sign container images with Notation and Azure Key Vault using a self-signed certificate description: In this tutorial you'll learn to create a self-signed certificate in Azure Key Vault (AKV), build and sign a container image stored in Azure Container Registry (ACR) with notation and AKV, and then verify the container image with notation. -# Sign container images with Notation and Azure Key Vault using a self-signed certificate (Preview) +# Sign container images with Notation and Azure Key Vault using a self-signed certificate Signing container images is a process that ensures their authenticity and integrity. This is achieved by adding a digital signature to the container image, which can be validated during deployment. The signature helps to verify that the image is from a trusted publisher and has not been modified. [Notation](https://github.com/notaryproject/notation) is an open source supply chain tool developed by the [Notary Project](https://notaryproject.dev/), which supports signing and verifying container images and other artifacts. The Azure Key Vault (AKV) is used to store certificates with signing keys that can be used by Notation with the Notation AKV plugin (azure-kv) to sign and verify container images and other artifacts. The Azure Container Registry (ACR) allows you to attach signatures to container images and other artifacts as well as view those signatures. -> [!IMPORTANT] -> This feature is currently in preview. Previews are made available to you on the condition that you agree to the [supplemental terms of use][terms-of-use]. Some aspects of this feature may change prior to general availability (GA). - In this tutorial: > [!div class="checklist"] |
container-registry | Container Registry Tutorial Sign Trusted Ca | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-trusted-ca.md | Title: Sign container images with Notation and Azure Key vault using a CA-issued certificate (Preview) + Title: Sign container images with Notation and Azure Key vault using a CA-issued certificate description: In this tutorial learn to create a CA-issued certificate in Azure Key Vault, build and sign a container image stored in Azure Container Registry (ACR) with notation and AKV, and then verify the container image using notation. -# Sign container images with Notation and Azure Key Vault using a CA-issued certificate (Preview) +# Sign container images with Notation and Azure Key Vault using a CA-issued certificate Signing and verifying container images with a certificate issued by a trusted Certificate Authority (CA) is a valuable security practice. This security measure will help you to responsibly identify, authorize, and validate the identity of both the publisher of the container image and the container image itself. The Trusted Certificate Authorities (CAs) such as GlobalSign, DigiCert, and others play a crucial role in the validation of a user's or organization's identity, maintaining the security of digital certificates, and revoking the certificate immediately upon any risk or misuse. Here are some essential components that help you to sign and verify container im When you verify the image, the signature is used to validate the integrity of the image and the identity of the signer. This helps to ensure that the container images are not tampered with and are from a trusted source. -> [!IMPORTANT] -> This feature is currently in preview. Previews are made available to you on the condition that you agree to the [supplemental terms of use][terms-of-use]. Some aspects of this feature may change prior to general availability (GA). - In this article: > [!div class="checklist"] |
copilot | Build Infrastructure Deploy Workloads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/build-infrastructure-deploy-workloads.md | Microsoft Copilot for Azure (preview) can help you quickly build custom infrastr Throughout a conversation, Microsoft Copilot for Azure (preview) asks you questions to better understand your requirements and applications. Based on the provided information, it then provides several architecture options suitable for deploying that infrastructure. After you select an option, Microsoft Copilot for Azure (preview) provides detailed descriptions of the infrastructure, including how it can be configured. Finally, Microsoft Copilot for Azure provides templates and scripts using the language of your choice to deploy your infrastructure. -To get help building infrastructure and deploying workloads, start on the [More virtual machines and related solutions](https://portal.azure.com/?feature.customportal=false#view/Microsoft_Azure_SolutionCenter/SolutionGroup.ReactView/groupid/defaultLandingVmBrowse) page in the Azure portal. +To get help building infrastructure and deploying workloads, start on the [More virtual machines and related solutions](https://portal.azure.com/#view/Microsoft_Azure_SolutionCenter/SolutionGroup.ReactView/groupid/defaultLandingVmBrowse) page in the Azure portal. You can reach this page from **Virtual machines** by selecting the arrow next to **Create**, then selecting **More VMs and related solutions**. + Once you're there, start the conversation by letting Microsoft Copilot for Azure (preview) know what you want to build and deploy. The prompts you use can vary depending on the type of workload you want to deplo ## Examples -From the **More virtual machines and related solutions** page, you can tell Microsoft Copilot for Azure (preview) "**I want to deploy a website on Azure**." Microsoft Copilot for Azure (preview) responds with a series of questions to better understand your scenario. +From the **[More virtual machines and related solutions](https://portal.azure.com/#view/Microsoft_Azure_SolutionCenter/SolutionGroup.ReactView/groupid/defaultLandingVmBrowse)** page, you can tell Microsoft Copilot for Azure (preview) "**I want to deploy a website on Azure**." Microsoft Copilot for Azure (preview) responds with a series of questions to better understand your scenario. :::image type="content" source="media/build-infrastructure-deploy-workloads/workloads-deploy-website.png" lightbox="media/build-infrastructure-deploy-workloads/workloads-deploy-website.png" alt-text="Screenshot showing Microsoft Copilot for Azure (preview) providing options to deploy a website."::: |
defender-for-cloud | Agentless Vulnerability Assessment Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-vulnerability-assessment-aws.md | Container vulnerability assessment powered by Microsoft Defender Vulnerability M | Recommendation | Description | Assessment Key| |--|--|--|- | [AWS registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AwsContainerRegistryRecommendationDetailsBlade/assessmentKey/c27441ae-775c-45be-8ffa-655de37362ce) | Scans your AWS registries container images for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c27441ae-775c-45be-8ffa-655de37362ce | - | [AWS running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AwsContainersRuntimeRecommendationDetailsBlade/assessmentKey/682b2595-d045-4cff-b5aa-46624eb2dd8f) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Elastic Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | 682b2595-d045-4cff-b5aa-46624eb2dd8f | + | [AWS registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AwsContainerRegistryRecommendationDetailsBlade/assessmentKey/c27441ae-775c-45be-8ffa-655de37362ce) | Scans your AWS registries container images for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c27441ae-775c-45be-8ffa-655de37362ce | + | [AWS running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AwsContainersRuntimeRecommendationDetailsBlade/assessmentKey/682b2595-d045-4cff-b5aa-46624eb2dd8f) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Elastic Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | 682b2595-d045-4cff-b5aa-46624eb2dd8f | - **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). 
Learn how to [query recommendations via ARG](review-security-recommendations.md). |
defender-for-cloud | Endpoint Protection Recommendations Technical | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/endpoint-protection-recommendations-technical.md | Microsoft Antimalware extension logs are available at: ### Support -For more help, contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Or file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select Get support. For information about using Azure Support, read the [Microsoft Azure support common questions](https://azure.microsoft.com/support/faq/). +For more help, contact the Azure experts in [Azure Community Support](https://azure.microsoft.com/support/community/). Or file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select Get support. For information about using Azure Support, read the [Microsoft Azure support common questions](https://azure.microsoft.com/support/faq/). |
defender-for-cloud | How To Test Attack Path And Security Explorer With Vulnerable Container Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-test-attack-path-and-security-explorer-with-vulnerable-container-image.md | If there are no entries in the list of attack paths, you can still test this fea az aks get-credentials --subscription <cluster-suid> --resource-group <your-rg> --name <your-cluster-name> ``` -1. Install [ngnix ingress Controller](https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/) : +1. Install the [NGINX ingress controller](https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/): ```azurecli helm install ingress-controller oci://ghcr.io/nginxinc/charts/nginx-ingress --version 1.0.1 After you completed testing the attack path, investigate the created attack path ## AWS: Testing the attack path and security explorer using a mock vulnerable container image -1. Create ECR repository named *mdc-mock-0001* +1. Create an ECR repository named *mdc-mock-0001* 1. Go to your AWS account and choose **Command line or programmatic access**. 1. Open a command line and choose **Option 1: Set AWS environment variables (Short-term credentials)**. Copy the credentials of the *AWS_ACCESS_KEY_ID*, *AWS_SECRET_ACCESS_KEY*, and *AWS_SESSION_TOKEN* environment variables. 1. Run the following command to get the authentication token for your Amazon ECR registry. Replace `<REGION>` with the region of your registry. Replace `<ACCOUNT>` with your AWS account ID. After you completed testing the attack path, investigate the created attack path kubectl get nodes ``` -1. Install [ngnix ingress Controller](https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/) : +1. Install the [NGINX ingress controller](https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/): ```azurecli helm install ingress-controller oci://ghcr.io/nginxinc/charts/nginx-ingress --version 1.0.1 The Helm chart deploys resources onto your cluster that can be used to infer att After you completed testing the attack path, investigate the created attack path by going to **Attack path analysis**, and search for the attack path you created. For more information, see [Identify and remediate attack paths](how-to-manage-attack-path.md). +## GCP: Testing the attack path and security explorer using a mock vulnerable container image ++1. In the GCP portal, search for **Artifact Registry**, and then create a GCP repository named *mdc-mock-0001* +1. Follow [these instructions](https://cloud.google.com/artifact-registry/docs/docker/pushing-and-pulling) to push the image to your repository. Run these commands: ++ ```docker + docker pull alpine + docker tag alpine <LOCATION>-docker.pkg.dev/<PROJECT_ID>/<REGISTRY>/<REPOSITORY>/mdc-mock-0001 + docker push <LOCATION>-docker.pkg.dev/<PROJECT_ID>/<REGISTRY>/<REPOSITORY>/mdc-mock-0001 + ``` ++1. Go to the GCP portal. Then go to **Kubernetes Engine** > **Clusters**. Select the **Connect** button. +1. Once connected, either run the command in the Cloud Shell or copy the connection command and run it on your machine: ++ ```gcloud-cli + gcloud container clusters get-credentials contra-bugbash-gcp --zone us-central1-c --project onboardingc-demo-gcp-1 + ``` ++1. Verify the configuration. You can check if `kubectl` is correctly configured by running: ++ ```gcloud-cli + kubectl get nodes + ``` ++1. 
To install the Helm chart, follow these steps: ++ 1. Under **Artifact Registry** in the portal, go to the repository, and find the image URI under **Pull by digest**. + 1. Use the following command to install the Helm chart: ++ ```gcloud-cli + helm install dcspmcharts oci://mcr.microsoft.com/mdc/stable/dcspmcharts --version 1.0.0 --namespace mdc-dcspm-demo --create-namespace --set image=<IMAGE_URI> --set distribution=GCP + ``` ++The Helm chart deploys resources onto your cluster that can be used to infer attack paths. It also includes the vulnerable image. ++> [!NOTE] +> After completing the above flow, it can take up to 24 hours to see results in the cloud security explorer and attack path. ++After you completed testing the attack path, investigate the created attack path by going to **Attack path analysis**, and search for the attack path you created. For more information, see [Identify and remediate attack paths](how-to-manage-attack-path.md). + ## Find container posture issues with cloud security explorer ++You can build queries in one of the following ways: |
defender-for-cloud | Transition To Defender Vulnerability Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/transition-to-defender-vulnerability-management.md | securityresources severity = properties.additionalData.vulnerabilityDetails.severity, status = properties.status.code, VulnId = properties.id, - description = properties.displayName, + description = properties.description, fixStatus = properties.additionalData.softwareDetails.fixStatus, - cve = properties.additionalData.cve, Repo = properties.additionalData.artifactDetails.repositoryName, imageUri = properties.resourceDetails.id | where status == 'Unhealthy' |
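If you want to pull the same findings programmatically instead of running the query in the portal, Azure Resource Graph queries like the one above can be submitted with the Python SDK. The following sketch is illustrative only and isn't part of the article's guidance: it assumes the `azure-identity` and `azure-mgmt-resourcegraph` packages, a placeholder subscription ID, and a simplified variant of the query rather than the full query from the article.

```python
# Illustrative sketch: run a simplified container vulnerability-findings query
# against Azure Resource Graph with the Python SDK.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

# Simplified variant of the query shown above; field names follow the diff,
# but the full query in the article includes additional columns.
QUERY = """
securityresources
| where type == 'microsoft.security/assessments/subassessments'
| extend severity = properties.additionalData.vulnerabilityDetails.severity,
         status = properties.status.code,
         description = properties.description,
         imageUri = properties.resourceDetails.id
| where status == 'Unhealthy'
| project severity, description, imageUri
"""

client = ResourceGraphClient(DefaultAzureCredential())
response = client.resources(QueryRequest(subscriptions=["<subscription-id>"], query=QUERY))

for row in response.data:
    print(row.get("severity"), row.get("imageUri"))
```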
defender-for-cloud | Upcoming Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md | If you're looking for the latest release notes, you can find them in the [What's | Planned change | Announcement date | Estimated date for change | |--|--|--|+| [Change in pricing for multicloud container threat detection](#change-in-pricing-for-multicloud-container-threat-detection) | January 30, 2024 | April 2024 | +| [Enforcement of Defender CSPM for Premium DevOps Security Capabilities](#enforcement-of-defender-cspm-for-premium-devops-security-value) | January 29, 2024 | March 2024 | | [Update to agentless VM scanning built-in Azure role](#update-to-agentless-vm-scanning-built-in-azure-role) |January 14, 2024 | February 2024 | | [Deprecation of two recommendations related to PCI](#deprecation-of-two-recommendations-related-to-pci) |January 14, 2024 | February 2024 | | [Four new recommendations for Azure Stack HCI resource type](#four-new-recommendations-for-azure-stack-hci-resource-type) | January 11, 2024 | February 2024 | | [Defender for Servers built-in vulnerability assessment (Qualys) retirement path](#defender-for-servers-built-in-vulnerability-assessment-qualys-retirement-path) | January 9, 2024 | May 2024 | | [Retirement of the Defender for Cloud Containers Vulnerability Assessment powered by Qualys](#retirement-of-the-defender-for-cloud-containers-vulnerability-assessment-powered-by-qualys) | January 9, 2023 | March 2024 |-| [Enforcement of Defender CSPM for Premium DevOps Security Capabilities](#enforcement-of-defender-cspm-for-premium-devops-security-value) | January 29, 2024 | March 2024 | | [New version of Defender Agent for Defender for Containers](#new-version-of-defender-agent-for-defender-for-containers) | January 4, 2024 | February 2024 | | [Upcoming change for the Defender for Cloud's multicloud network requirements](#upcoming-change-for-the-defender-for-clouds-multicloud-network-requirements) | January 3, 2024 | May 2024 | | [Deprecation of two DevOps security recommendations](#deprecation-of-two-devops-security-recommendations) | November 30, 2023 | January 2024 | If you're looking for the latest release notes, you can find them in the [What's | [Deprecating two security incidents](#deprecating-two-security-incidents) | | November 2023 | | [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | | August 2024 | +## Change in pricing for multicloud container threat detection ++**Announcement date: January 30, 2024** ++**Estimated date for change: April 2024** ++When [multicloud container threat detection](support-matrix-defender-for-containers.md) moves to GA, it will no longer be free of charge. For more information, see [Microsoft Defender for Cloud pricing](https://azure.microsoft.com/pricing/details/defender-for-cloud/). ++## Enforcement of Defender CSPM for Premium DevOps Security Value ++**Announcement date: January 29, 2024** ++**Estimated date for change: March 7, 2024** ++Defender for Cloud will begin enforcing the Defender CSPM plan check for premium DevOps security value on **March 7th, 2024**. If you have the Defender CSPM plan enabled on a cloud environment (Azure, AWS, GCP) within the same tenant your DevOps connectors are created in, you'll continue to receive premium DevOps capabilities at no additional cost.
If you aren't a Defender CSPM customer, you have until **March 7th, 2024** to enable Defender CSPM before losing access to these security features. To enable Defender CSPM on a connected cloud environment before March 7, 2024, follow the enablement documentation outlined [here](tutorial-enable-cspm-plan.md#enable-the-components-of-the-defender-cspm-plan). ++For more information about which DevOps security features are available across both the Foundational CSPM and Defender CSPM plans, see [our documentation outlining feature availability](devops-support.md#feature-availability). ++For more information about DevOps Security in Defender for Cloud, see the [overview documentation](defender-for-devops-introduction.md). ++For more information on the code to cloud security capabilities in Defender CSPM, see [how to protect your resources with Defender CSPM](tutorial-enable-cspm-plan.md). + ## Update to agentless VM scanning built-in Azure role **Announcement date: January 14, 2024** **Estimated date of change: February 2024** -In Azure, agentless scanning for VMs uses a built-in role (called [VM scanner operator](/azure/defender-for-cloud/faq-permissions)) with the minimum necessary permissions required to scan and assess your VMs for security issues. To continuously provide relevant scan health and configuration recommendations for VMs with encrypted volumes, an update to this role's permissions is planned. The update includes the addition of the ```Microsoft.Compute/DiskEncryptionSets/read``` permission. This permission solely enables improved identification of encrypted disk usage in VMs. It does not provide Defender for Cloud any additional capabilities to decrypt or access the content of these encrypted volumes beyond the encryption methods [already supported](/azure/defender-for-cloud/concept-agentless-data-collection#availability) prior to this change. This change is expected to take place during February 2024 and no action is required on your end. +In Azure, agentless scanning for VMs uses a built-in role (called [VM scanner operator](/azure/defender-for-cloud/faq-permissions)) with the minimum necessary permissions required to scan and assess your VMs for security issues. To continuously provide relevant scan health and configuration recommendations for VMs with encrypted volumes, an update to this role's permissions is planned. The update includes the addition of the ```Microsoft.Compute/DiskEncryptionSets/read``` permission. This permission solely enables improved identification of encrypted disk usage in VMs. It doesn't provide Defender for Cloud any additional capabilities to decrypt or access the content of these encrypted volumes beyond the encryption methods [already supported](/azure/defender-for-cloud/concept-agentless-data-collection#availability) prior to this change. This change is expected to take place during February 2024 and no action is required on your end. ## Deprecation of two recommendations related to PCI Azure Stack HCI is set to be a new resource type that can be managed through Mic **Estimated date for change: May 2024** -The Defender for Servers built-in vulnerability assessment solution powered by Qualys is on a retirement path which is estimated to complete on **May 1st, 2024**. If you're currently using the vulnerability assessment solution powered by Qualys, you should plan your [transition to the integrated Microsoft Defender vulnerability management solution](how-to-transition-to-built-in.md). 
+The Defender for Servers built-in vulnerability assessment solution powered by Qualys is on a retirement path, which is estimated to complete on **May 1st, 2024**. If you're currently using the vulnerability assessment solution powered by Qualys, you should plan your [transition to the integrated Microsoft Defender vulnerability management solution](how-to-transition-to-built-in.md). For more information about our decision to unify our vulnerability assessment offering with Microsoft Defender Vulnerability Management, you can read [this blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-cloud-unified-vulnerability-assessment-powered-by/ba-p/3990112). For more information about transitioning to our new container vulnerability asse For common questions about the transition to Microsoft Defender Vulnerability Management, see [Common questions about the Microsoft Defender Vulnerability Management solution](common-questions-microsoft-defender-vulnerability-management.md). -## Enforcement of Defender CSPM for Premium DevOps Security Value --**Announcement date: January 29, 2023** --**Estimated date for change: March 7, 2024** --Defender for Cloud will begin enforcing the Defender CSPM plan check for premium DevOps security value beginning **March 7th, 2024**. If you have the Defender CSPM plan enabled on a cloud environment (Azure, AWS, GCP) within the same tenant your DevOps connectors are created in, you will continue to receive premium DevOps capabilities at no additional cost. If you are not a Defender CSPM customer, you have until **March 7th, 2024** to enable Defender CSPM before losing access to these security features. To enable Defender CSPM on a connected cloud environment before March 7th, 2024, follow the enablement documentation outlined [here](tutorial-enable-cspm-plan.md#enable-the-components-of-the-defender-cspm-plan). --For more information about which DevOps security features are available across both the Foundational CSPM and Defender CSPM plans, see [our documentation outlining feature availability](devops-support.md#feature-availability). --For more information about DevOps Security in Defender for Cloud, see the [overview documentation](defender-for-devops-introduction.md). --For more information on the code to cloud security capabilities in Defender CSPM, see [how to protect your resources with Defender CSPM](tutorial-enable-cspm-plan.md). - ## New version of Defender Agent for Defender for Containers **Announcement date: January 4, 2024** The following table lists the alerts to be deprecated: | AlertDisplayName | AlertType | |--|--|-| Communication with suspicious random domain name (Preview) | DNS_RandomizedDomain +| Communication with suspicious random domain name (Preview) | DNS_RandomizedDomain | | Communication with suspicious domain identified by threat intelligence (Preview) | DNS_ThreatIntelSuspectDomain | | Digital currency mining activity (Preview) | DNS_CurrencyMining | | Network intrusion detection signature activation (Preview) | DNS_SuspiciousDomain | |
expressroute | Expressroute Howto Circuit Portal Resource Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-circuit-portal-resource-manager.md | After you create your circuit, continue with the following next step: > [!div class="nextstepaction"] > [Create and modify routing for your ExpressRoute circuit](expressroute-howto-routing-portal-resource-manager.md)+> [Create a connection to a virtual network gateway (Preview)](expressroute-howto-linkvnet-portal-resource-manager.md?pivots=expressroute-preview) |
frontdoor | Front Door Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-overview.md | For a comparison of supported features in Azure Front Door, see [Tier comparison ## Where is the service available? -Azure Front Door Classic/Standard/Premium SKUs are available in Microsoft Azure (Commercial) and Azure Front Door Classic SKU is available in Microsoft Azure Government (US). +Azure Front Door Standard, Premium and Classic tiers are available in Microsoft Azure (Commercial) and Microsoft Azure Government (US). ## Pricing |
governance | Supported Tables Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/reference/supported-tables-resources.md | Title: Supported Azure Resource Manager resource types description: Provide a list of the Azure Resource Manager resource types supported by Azure Resource Graph and Change History. Previously updated : 01/22/2023 Last updated : 01/29/2023 For sample queries for this table, see [Resource Graph sample queries for resour - microsoft.sqlvirtualmachine/sqlvirtualmachinegroups - microsoft.SqlVirtualMachine/SqlVirtualMachines (SQL virtual machines) - microsoft.sqlvm/dwvm-- microsoft.storage/datamovers - microsoft.Storage/StorageAccounts (Storage accounts) - Sample query: [Find storage accounts with a specific case-insensitive tag on the resource group](../samples/samples-by-category.md#find-storage-accounts-with-a-specific-case-insensitive-tag-on-the-resource-group) - Sample query: [Find storage accounts with a specific case-sensitive tag on the resource group](../samples/samples-by-category.md#find-storage-accounts-with-a-specific-case-sensitive-tag-on-the-resource-group) |
iot-hub-device-update | Device Update Error Codes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-error-codes.md | There are two primary client-side components that may throw error codes: the Dev ### ResultCode and ExtendedResultCode -The Device Update for IoT Hub Core PnP interface reports `ResultCode` and `ExtendedResultCode`, which can be used to diagnose failures. For more information about the Device Update Core PnP interface, see [Device Update and Plug and Play](device-update-plug-and-play.md). For more details regarding the default meanings of Device Update agent ResultCode and ExtendedResultCodes, see the [Device Update Github repository](https://aka.ms/du-resultcodes). +The Device Update for IoT Hub Core PnP interface reports `ResultCode` and `ExtendedResultCode`, which can be used to diagnose failures. For more information about the Device Update Core PnP interface, see [Device Update and Plug and Play](device-update-plug-and-play.md). For more details regarding the default meanings of Device Update agent ResultCode and ExtendedResultCodes, see the [Device Update GitHub repository](https://aka.ms/du-resultcodes). `ResultCode` is a general status code and `ExtendedResultCode` is an integer with encoded error information. |
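Because `ExtendedResultCode` surfaces through the PnP interface as a signed decimal integer while the result codes in the Device Update GitHub repository are listed in hexadecimal, converting the value before looking it up saves a step. The snippet below is a minimal sketch; the sample value is hypothetical, and the meaning of any given code should be checked against the repository's result code definitions.

```python
# Minimal sketch: reinterpret a signed 32-bit ExtendedResultCode as the unsigned
# hexadecimal form used in the Device Update result code listings.
def extended_result_code_to_hex(code: int) -> str:
    return f"0x{code & 0xFFFFFFFF:08X}"

# Hypothetical value for illustration only.
print(extended_result_code_to_hex(-1073741624))  # prints 0xC00000C8
```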
iot-operations | Howto Deploy Iot Operations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-deploy-iot-operations.md | Use the Azure portal to deploy Azure IoT Operations components to your Arc-enabl ``` > [!NOTE]- > If you're using Github Codespaces in a browser, `az login` returns a localhost error in the browser window after logging in. To fix, either: + > If you're using GitHub Codespaces in a browser, `az login` returns a localhost error in the browser window after logging in. To fix, either: > > * Open the codespace in VS Code desktop, and then run `az login` again in the browser terminal. > * After you get the localhost error on the browser, copy the URL from the browser and run `curl "<URL>"` in a new terminal tab. You should see a JSON response with the message "You have logged into Microsoft Azure!." |
iot-operations | Howto Configure Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-reference.md | To add a dataset to the data store, you have two options: | Keys | See keys configuration in the following table. | | | Timestamps referenced should be in RFC3339, ISO 8601, or Unix timestamp format.-By default, the Expiration time for a dataset is set to `12h`. This default ensures that no stale data is enriched beyond 12 hours (if the data is not updated) or grow unbounded which can fill up the disk. +By default, the expiration time for a dataset is set to `24h`. This default ensures that no stale data is enriched beyond 24 hours (if the data is not updated) and that the dataset doesn't grow unbounded, which can fill up the disk. Each key includes: The two keys: | Field | Example | |||-| Property name | `asset` | +| Property name | `equipment name` | | Property path | `.equipment` | | Primary key | Yes | |
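To make the role of the keys concrete: the property path identifies which field of each stored reference record the enrich stage matches against, and marking it as the primary key makes that field the lookup key. The following is only a conceptual sketch of that join in plain Python; the field names and values are hypothetical, and the data processor performs the equivalent lookup internally.

```python
# Conceptual sketch of enriching an incoming message from a reference dataset
# keyed on the `.equipment` property (hypothetical field names and values).
reference_dataset = {
    "mixer-01": {"equipment": "mixer-01", "installationDate": "2021-06-01", "site": "plant-a"},
    "oven-02": {"equipment": "oven-02", "installationDate": "2019-11-15", "site": "plant-b"},
}

message = {"equipment": "mixer-01", "temperature": 104.2}

# Look up the reference record whose equipment value matches the message,
# then merge its properties into the message payload.
enrichment = reference_dataset.get(message["equipment"], {})
enriched_message = {**message, **enrichment}
print(enriched_message)
```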
machine-learning | Concept Model Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-monitoring.md | Title: Monitoring models in production (preview) + Title: Monitoring models in production description: Monitor the performance of models deployed to production on Azure Machine Learning. --++ reviewer: msakande Previously updated : 09/15/2023 Last updated : 01/29/2024 -# Model monitoring with Azure Machine Learning (preview) +# Model monitoring with Azure Machine Learning In this article, you learn about model monitoring in Azure Machine Learning, the signals and metrics you can monitor, and the recommended practices for using model monitoring. +## The case for model monitoring -Model monitoring is the last step in the machine learning end-to-end lifecycle. This step tracks model performance in production and aims to understand it from both data science and operational perspectives. Unlike traditional software systems, the behavior of machine learning systems is governed not just by rules specified in code, but also by model behavior learned from data. Data distribution changes, training-serving skew, data quality issues, shift in environment, or consumer behavior changes can all cause models to become stale and their performance to degrade to the point that they fail to add business value or start to cause serious compliance issues in highly regulated environments. +Model monitoring is the last step in the machine learning end-to-end lifecycle. This step tracks model performance in production and aims to understand the performance from both data science and operational perspectives. -To implement monitoring, Azure Machine Learning acquires monitoring signals through data analysis on streamed production inference data and reference data. The reference data can include historical training data, validation data, or ground truth data. Each monitoring signal has one or more metrics. Users can set thresholds for these metrics in order to receive alerts via Azure Machine Learning or Azure Monitor about model or data anomalies. These alerts can prompt users to analyze or troubleshoot monitoring signals in Azure Machine Learning studio for continuous model quality improvement. +Unlike traditional software systems, the behavior of machine learning systems is governed not just by rules specified in code, but also by model behavior learned from data. Therefore, data distribution changes, training-serving skew, data quality issues, shifts in environments, or consumer behavior changes can all cause a model to become stale. When a model becomes stale, its performance can degrade to the point that it fails to add business value or starts to cause serious compliance issues in highly regulated environments. ++## How model monitoring works in Azure Machine Learning ++To implement monitoring, Azure Machine Learning acquires monitoring signals by performing statistical computations on streamed production inference data and reference data. The reference data can be historical training data, validation data, or ground truth data. On the other hand, the production inference data refers to the model's input and output data collected in production. ++Each monitoring signal has one or more metrics. Users can set thresholds for these metrics in order to receive alerts via Azure Machine Learning or Azure Event Grid about model or data anomalies. 
These alerts can prompt users to analyze or troubleshoot monitoring signals in Azure Machine Learning studio for continuous model quality improvement. ++The following steps describe an example of the statistical computation used to acquire a built-in monitoring signal, such as data drift, for a model that's in production. ++* For a feature in the training data, calculate the statistical distribution of its values. This distribution is the baseline distribution for the feature. +* Calculate the statistical distribution of the feature's latest values that are seen in production. +* Compare the distribution of the feature's latest values in production with the baseline distribution by performing a statistical test or calculating a distance score. +* When the test statistic or the distance score between the two distributions exceeds a user-specified threshold, Azure Machine Learning identifies the anomaly and notifies the user. ++### Model monitoring setup ++To enable and use model monitoring in Azure Machine Learning: ++1. **Enable production inference data collection.** If you deploy a model to an Azure Machine Learning online endpoint, you can enable production inference data collection by using Azure Machine Learning [model data collection](concept-data-collection.md). However, if you deploy a model outside of Azure Machine Learning or to an Azure Machine Learning batch endpoint, you're responsible for collecting production inference data. You can then use this data for Azure Machine Learning model monitoring. +1. **Set up model monitoring.** You can use Azure Machine Learning SDK/CLI 2.0 or the studio UI to easily set up model monitoring. During the setup, you can specify your preferred monitoring signals and customize metrics and thresholds for each signal. +1. **View and analyze model monitoring results.** Once model monitoring is set up, Azure Machine Learning schedules a monitoring job to run at your specified frequency. Each run computes and evaluates metrics for all selected monitoring signals and triggers alert notifications when any specified threshold is exceeded. You can follow the link in the alert notification to view and analyze monitoring results in your Azure Machine Learning workspace. ## Capabilities of model monitoring Azure Machine Learning provides the following capabilities for continuous model monitoring: -* **Built-in monitoring signals**. Model monitoring provides built-in monitoring signals for tabular data. These monitoring signals include data drift, prediction drift, data quality, and feature attribution drift. +* **Built-in monitoring signals**. Model monitoring provides built-in monitoring signals for tabular data. These monitoring signals include data drift, prediction drift, data quality, feature attribution drift, and model performance. * **Out-of-box model monitoring setup with Azure Machine Learning online endpoint**. If you deploy your model to production in an Azure Machine Learning online endpoint, Azure Machine Learning collects production inference data automatically and uses it for continuous monitoring. * **Use of multiple monitoring signals for a broad view**. You can easily include several monitoring signals in one monitoring setup. For each monitoring signal, you can select your preferred metric(s) and fine-tune an alert threshold.-* **Use of recent past production data or training data as reference data for comparison**. 
For monitoring signals, Azure Machine Learning lets you set reference data using recent past production data or training data. -* **Monitoring of top N features for data drift or data quality**. If you use training data as the reference data, you can define data drift or data quality signals layering over feature importance. +* **Use of training data or recent, past production data as reference data for comparison**. For monitoring signals, Azure Machine Learning lets you set reference data using training data or recent, past production data. +* **Monitoring of top N features for data drift or data quality**. If you use training data as the reference data, you can define data drift or data quality signals layered over feature importance. * **Flexibility to define your monitoring signal**. If the built-in monitoring signals aren't suitable for your business scenario, you can define your own monitoring signal with a custom monitoring signal component.-* **Flexibility to use production inference data from any source**. If you deploy models outside of Azure Machine Learning, or if you deploy models to Azure Machine Learning batch endpoints, you can collect production inference data. You can then use the inference data in Azure Machine Learning for model monitoring. -* **Flexibility to select data window**. You have the flexibility to select a data window for both the production data and the reference data. - * By default, the data window for production data is your monitoring frequency. That is, all data collected in the past monitoring period before the monitoring job is run will be analyzed. You can use the `production_data.data_window_size` property to adjust the data window for the production data, if needed. - * By default, the data window for the reference data is the full dataset. You can adjust the reference data window with the `reference_data.data_window` property. Both rolling data window and fixed data window are supported. +* **Flexibility to use production inference data from any source**. If you deploy models outside of Azure Machine Learning, or if you deploy models to Azure Machine Learning batch endpoints, you can collect production inference data to use in Azure Machine Learning for model monitoring. ++## Lookback window size and offset ++The **lookback window size** is the duration of time (in ISO 8601 format) for your production or reference data window, looking back from the date of your monitoring run. ++The **lookback window offset** is the duration of time (in ISO 8601 format) to offset the end of your data window from the date of your monitoring run. ++For example, suppose your model is in production and you have a monitor set to run on January 31 at 3:15pm UTC, if you set a production lookback window size of `P7D` (seven days) for the monitor and a production lookback window offset of `P0D` (zero days), the monitor uses data from January 24 at 3:15pm UTC up until January 31 at 3:15pm UTC (the time your monitor runs) in the data window. ++Furthermore, for the reference data, if you set the lookback window offset to `P7D` (seven days), the reference data window ends right before the production data window starts, so that there's no overlap. You can then set your reference data lookback window size to be as large as you like. For example, by setting the reference data lookback window size to `P24D` (24 days), the reference data window includes data from January 1 at 3:15pm UTC up until January 24 at 3:15pm UTC. The following figure illustrates this example. 
+++In some cases, you might find it useful to set the _lookback window offset_ for your production data to a number greater than zero days. For example, if your monitor is scheduled to run weekly on Mondays at 3:15pm UTC, but you don't want to use data from the weekend in your monitoring run, you can use a _lookback window size_ of `P5D` (five days) and a _lookback window offset_ of `P2D` (two days). Then, your data window starts on the prior Monday at 3:15pm UTC and ends on Friday at 3:15pm UTC. ++In practice, you should ensure that the reference data window and the production data window don't overlap. As shown in the following figure, you can ensure non-overlapping windows by making sure that the reference data lookback window offset (`P10D` or 10 days, in this example) is greater or equal to the sum of the production data's lookback window size and its lookback window offset (seven days total). +++With Azure Machine Learning model monitoring, you can use smart defaults for your lookback window size and lookback window offset, or you can customize them to meet your needs. Also, both rolling windows and fixed windows are supported. ++### Customize lookback window size ++You have the flexibility to select a lookback window size for both the production data and the reference data. ++* By default, the lookback window size for production data is your monitoring frequency. That is, all data collected in the monitoring period before the monitoring job is run will be analyzed. You can use the `production_data.data_window.lookback_window_size` property to adjust the rolling data window for the production data. ++* By default, the lookback window for the reference data is the full dataset. You can use the `reference_data.data_window.lookback_window_size` property to adjust the reference lookback window size. ++* To specify a fixed data window for the reference data, you can use the properties `reference_data.data_window.window_start_date` and `reference_data.data_window.window_end_date`. ++### Customize lookback window offset ++You have the flexibility to select a lookback window offset for your data window for both the production data and the reference data. You can use the offset for granular control over the data your monitor uses. The offset only applies to rolling data windows. ++* By default, the offset for production data is `P0D` (zero days). You can modify this offset with the `production_data.data_window.lookback_window_offset` property. ++* By default, the offset for reference data is twice the `production_data.data_window.lookback_window_size`. This setting ensures that there's enough reference data for statistically meaningful monitoring results. You can modify this offset with the `reference_data.data_window.lookback_window_offset` property. ## Monitoring signals and metrics -Azure Machine Learning model monitoring (preview) supports the following list of monitoring signals and metrics: +Azure Machine Learning model monitoring supports the following list of monitoring signals and metrics: + |Monitoring signal | Description | Metrics | Model tasks (supported data format) | Production data | Reference data | |--|--|--|--|--|--|-| Data drift | Data drift tracks changes in the distribution of a model's input data by comparing it to the model's training data or recent past production data. 
| Jensen-Shannon Distance, Population Stability Index, Normalized Wasserstein Distance, Two-Sample Kolmogorov-Smirnov Test, Pearson's Chi-Squared Test | Classification (tabular data), Regression (tabular data) | Production data - model inputs | Recent past production data or training data | -| Prediction drift | Prediction drift tracks changes in the distribution of a model's prediction outputs by comparing it to validation or test labeled data or recent past production data. | Jensen-Shannon Distance, Population Stability Index, Normalized Wasserstein Distance, Chebyshev Distance, Two-Sample Kolmogorov-Smirnov Test, Pearson's Chi-Squared Test | Classification (tabular data), Regression (tabular data) | Production data - model outputs | Recent past production data or validation data | -| Data quality | Data quality tracks the data integrity of a model's input by comparing it to the model's training data or recent past production data. The data quality checks include checking for null values, type mismatch, or out-of-bounds of values. | Null value rate, data type error rate, out-of-bounds rate | Classification (tabular data), Regression (tabular data) | production data - model inputs | Recent past production data or training data | -| Feature attribution drift | Feature attribution drift tracks the contribution of features to predictions (also known as feature importance) during production by comparing it with feature importance during training.| Normalized discounted cumulative gain | Classification (tabular data), Regression (tabular data) | Production data - model inputs & outputs | Training data (required) | -|[Generative AI: Generation safety and quality](./prompt-flow/how-to-monitor-generative-ai-applications.md)|Evaluates generative AI applications for safety & quality using GPT-assisted metrics.| Groundedness, relevance, fluency, similarity, coherence|text_question_answering| prompt, completion, context, and annotation template |N/A| +| Data drift | Data drift tracks changes in the distribution of a model's input data by comparing the distribution to the model's training data or recent, past production data. | Jensen-Shannon Distance, Population Stability Index, Normalized Wasserstein Distance, Two-Sample Kolmogorov-Smirnov Test, Pearson's Chi-Squared Test | Classification (tabular data), Regression (tabular data) | Production data - model inputs | Recent past production data or training data | +| Prediction drift | Prediction drift tracks changes in the distribution of a model's predicted outputs, by comparing the distribution to validation data, labeled test data, or recent past production data. | Jensen-Shannon Distance, Population Stability Index, Normalized Wasserstein Distance, Chebyshev Distance, Two-Sample Kolmogorov-Smirnov Test, Pearson's Chi-Squared Test | Classification (tabular data), Regression (tabular data) | Production data - model outputs | Recent past production data or validation data | +| Data quality | Data quality tracks the data integrity of a model's input by comparing it to the model's training data or recent, past production data. The data quality checks include checking for null values, type mismatch, or out-of-bounds values. 
| Null value rate, data type error rate, out-of-bounds rate | Classification (tabular data), Regression (tabular data) | production data - model inputs | Recent past production data or training data | +| Feature attribution drift (preview) | Feature attribution drift is based on the contribution of features to predictions (also known as feature importance). Feature attribution drift tracks feature importance during production by comparing it with feature importance during training.| Normalized discounted cumulative gain | Classification (tabular data), Regression (tabular data) | Production data - model inputs & outputs | Training data (required) | +| Model performance - Classification (preview) | Model performance tracks the objective performance of a model's output in production by comparing it to collected ground truth data. | Accuracy, Precision, and Recall | Classification (tabular data) | Production data - model outputs | Ground truth data (required) | +| Model performance - Regression (preview) | Model performance tracks the objective performance of a model's output in production by comparing it to collected ground truth data. | Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE) | Regression (tabular data) | Production data - model outputs | Ground truth data (required) | +|[Generative AI: Generation safety and quality](./prompt-flow/how-to-monitor-generative-ai-applications.md) (preview)|Evaluates generative AI applications for safety and quality, using GPT-assisted metrics.| Groundedness, relevance, fluency, similarity, coherence| Question & Answering | prompt, completion, context, and annotation template |N/A| +### Metrics for the data quality monitoring signal - -## How model monitoring works in Azure Machine Learning +The data quality monitoring signal tracks the integrity of a model's input data by calculating the three metrics: -Azure Machine Learning acquires monitoring signals by performing statistical computations on production inference data and reference data. This reference data can include the model's training data or validation data, while the production inference data refers to the model's input and output data collected in production. +- Null value rate +- Data type error rate +- Out-of-bounds rate -The following steps describe an example of the statistical computation used to acquire a data drift signal for a model that's in production. -* For a feature in the training data, calculate the statistical distribution of its values. This distribution is the baseline distribution. -* Calculate the statistical distribution of the feature's latest values that are seen in production. -* Compare the distribution of the feature's latest values in production against the baseline distribution by performing a statistical test or calculating a distance score. -* When the test statistic or the distance score between the two distributions exceeds a user-specified threshold, Azure Machine Learning identifies the anomaly and notifies the user. +#### Null value rate -### Enabling model monitoring +The _null value rate_ is the rate of null values in the model's input for each feature. For example, if the monitoring production data window contains 100 rows and the value for a specific feature `temperature` is null for 10 of those rows, the null value rate for `temperature` is 10%. 
-Take the following steps to enable model monitoring in Azure Machine Learning: +- Azure Machine Learning supports calculating the **Null value rate** for all feature data types. -* **Enable production inference data collection.** If you deploy a model to an Azure Machine Learning online endpoint, you can enable production inference data collection by using Azure Machine Learning [Model Data Collection](concept-data-collection.md). However, if you deploy a model outside of Azure Machine Learning or to an Azure Machine Learning batch endpoint, you're responsible for collecting production inference data. You can then use this data for Azure Machine Learning model monitoring. -* **Set up model monitoring.** You can use SDK/CLI 2.0 or the studio UI to easily set up model monitoring. During the setup, you can specify your preferred monitoring signals and customize metrics and thresholds for each signal. -* **View and analyze model monitoring results.** Once model monitoring is set up, a monitoring job is scheduled to run at your specified frequency. Each run computes and evaluates metrics for all selected monitoring signals and triggers alert notifications when any specified threshold is exceeded. You can follow the link in the alert notification to your Azure Machine Learning workspace to view and analyze monitoring results. +#### Data type error rate ++The _data type error rate_ is the rate of data type differences between the current production data window and the reference data. During each monitoring run, Azure Machine Learning model monitoring infers the data type for each feature from the reference data. For example, if the data type for a feature `temperature` is inferred to be `IntegerType` from the reference data, but in the production data window, 10 out of 100 values for `temperature` aren't IntegerType (perhaps they're strings), then the data type error rate for `temperature` is 10%. ++- Azure Machine Learning supports calculating the data type error rate for the following data types that are available in PySpark: `ShortType`, `BooleanType`, `BinaryType`, `DoubleType`, `TimestampType`, `StringType`, `IntegerType`, `FloatType`, `ByteType`, `LongType`, and `DateType`. +- If the data type for a feature isn't contained in this list, Azure Machine Learning model monitoring still runs but won't compute the data type error rate for that specific feature. ++#### Out-of-bounds rate ++The _out-of-bounds rate_ is the rate of values for each feature, which fall outside of the appropriate range or set determined by the reference data. During each monitoring run, Azure Machine Learning model monitoring determines the acceptable range or set for each feature from the reference data. ++- For a numerical feature, the appropriate range is a numerical interval of the minimum value in the reference dataset to the maximum value, such as [0, 100]. +- For a categorical feature, such as `color`, the appropriate range is a set of all values contained in the reference dataset, such as [`red`, `yellow`, `green`]. ++For example, if you have a numerical feature `temperature` where all values fall within the range [37, 77] in the reference dataset, but in the production data window, 10 out of 100 values for `temperature` fall outside of the range [37, 77], then the out-of-bounds rate for `temperature` is 10%. 
++- Azure Machine Learning supports calculating the out-of-bounds rate for these data types that are available in PySpark: `StringType`, `IntegerType`, `DoubleType`, `ByteType`, `LongType`, and `FloatType`. +- If the data type for a feature isn't contained in this list, Azure Machine Learning model monitoring still runs but won't compute the out-of-bounds rate for that specific feature. ++Azure Machine Learning model monitoring supports up to 0.00001 precision for calculations of the null value rate, data type error rate, and out-of-bounds rate. ## Recommended best practices for model monitoring Each machine learning model and its use cases are unique. Therefore, model monitoring is unique for each situation. The following is a list of recommended best practices for model monitoring:-* **Start model monitoring as soon as your model is deployed to production.** -* **Work with data scientists that are familiar with the model to set up model monitoring.** Data scientists who have insight into the model and its use cases are in the best position to recommend monitoring signals and metrics as well as set the right alert thresholds for each metric (to avoid alert fatigue). -* **Include multiple monitoring signals in your monitoring setup.** With multiple monitoring signals, you get both a broad view and granular view of monitoring. For example, you can combine both data drift and feature attribution drift signals to get an early warning about your model performance issue. +* **Start model monitoring immediately after you deploy a model to production.** +* **Work with data scientists that are familiar with the model to set up model monitoring.** Data scientists who have insight into the model and its use cases are in the best position to recommend monitoring signals and metrics and set the right alert thresholds for each metric (to avoid alert fatigue). +* **Include multiple monitoring signals in your monitoring setup.** With multiple monitoring signals, you get both a broad view and granular view of monitoring. For example, you can combine data drift and feature attribution drift signals to get an early warning about your model performance issues. * **Use model training data as the reference data.** For reference data used as the comparison baseline, Azure Machine Learning allows you to use the recent past production data or historical data (such as training data or validation data). For a meaningful comparison, we recommend that you use the training data as the comparison baseline for data drift and data quality. For prediction drift, use the validation data as the comparison baseline.-* **Specify the monitoring frequency based on how your production data will grow over time**. For example, if your production model has much traffic daily, and the daily data accumulation is sufficient for you to monitor, then you can set the monitoring frequency to daily. Otherwise, you can consider a weekly or monthly monitoring frequency, based on the growth of your production data over time. +* **Specify the monitoring frequency, based on how your production data will grow over time**. For example, if your production model has much traffic daily, and the daily data accumulation is sufficient for you to monitor, then you can set the monitoring frequency to daily. Otherwise, you can consider a weekly or monthly monitoring frequency, based on the growth of your production data over time. 
* **Monitor the top N important features or a subset of features.** If you use training data as the comparison baseline, you can easily configure data drift monitoring or data quality monitoring for the top N features. For models that have a large number of features, consider monitoring a subset of those features to reduce computation cost and monitoring noise.+* **Use the model performance signal when you have access to ground truth data.** If you have access to ground truth data (also known as actuals) based on the particulars of your machine learning application, we recommended that you use the model performance signal to compare the ground truth data to your model's output. This comparison provides an objective view into the performance of your model in production. ++## Model monitoring integration with Azure Event Grid ++You can use events generated by Azure Machine Learning model monitoring runs to set up event-driven applications, processes, or CI/CD workflows with [Azure Event Grid](how-to-use-event-grid.md). ++When your model monitor detects drift, data quality issues, or model performance degradation, you can track these events with Event Grid and take action programmatically. For example, if the accuracy of your classification model in production dips below a certain threshold, you can use Event Grid to begin a retraining job that uses collected ground truth data. To learn how to integrate Azure Machine Learning with Event Grid, see [Perform continuous model monitoring in Azure Machine Learning](how-to-monitor-model-performance.md). -## Next steps +## Related content -- [Perform continuous model monitoring in Azure Machine Learning](how-to-monitor-model-performance.md) - [Model data collection](concept-data-collection.md) - [Collect production inference data](how-to-collect-production-data.md) - [Model monitoring for generative AI applications](./prompt-flow/how-to-monitor-generative-ai-applications.md) |
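The built-in drift and data quality signals described above map onto familiar statistics. The following standalone sketch, which is not the Azure Machine Learning monitoring implementation itself, shows how a two-sample Kolmogorov-Smirnov drift test, a null value rate, and an out-of-bounds rate could be computed for a single numerical feature using pandas and SciPy on synthetic data:

```python
# Standalone illustration of the statistics behind the data drift and data
# quality signals for one numerical feature (synthetic data).
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = pd.Series(rng.normal(loc=50, scale=5, size=1000), name="temperature")   # training data
production = pd.Series(rng.normal(loc=53, scale=5, size=500), name="temperature")   # latest production window
production.iloc[:25] = np.nan                                                       # simulate missing values

# Data drift: two-sample Kolmogorov-Smirnov test between the baseline and
# production distributions, compared against a user-specified threshold.
statistic, p_value = ks_2samp(reference, production.dropna())
drift_detected = statistic > 0.1

# Data quality: null value rate and out-of-bounds rate for the production window,
# with the acceptable range inferred from the reference data.
null_value_rate = production.isna().mean()
lower, upper = reference.min(), reference.max()
out_of_bounds_rate = (~production.dropna().between(lower, upper)).mean()

print(f"KS statistic={statistic:.3f}, drift detected={drift_detected}")
print(f"null value rate={null_value_rate:.2%}, out-of-bounds rate={out_of_bounds_rate:.2%}")
```

In the managed experience, these computations run inside the scheduled monitoring job, and the thresholds are the per-metric values you configure for each signal.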
machine-learning | How To Auto Train Forecast | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md | az ml job create --file automl-hts-forecasting-pipeline.yml -w <Workspace> -g <R After the job finishes, the evaluation metrics can be downloaded locally using the same procedure as in the [single training run pipeline](#orchestrating-training-inference-and-evaluation-with-components-and-pipelines). -Also see the [demand forecasting with hierarchical time series notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1k_demand_forecasting_with_pipeline_components/automl-forecasting-demand-hierarchical-timeseries-in-pipeline/automl-forecasting-demand-hierarchical-timeseries-in-pipeline.ipynb) for a more detailed example. +Also see the [demand forecasting with hierarchical time series notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1k_demand_forecasting_with_pipeline_components/automl-forecasting-demand-hierarchical-timeseries-in-pipeline/automl-forecasting-demand-hts.ipynb) for a more detailed example. > [!NOTE] > The HTS training and inference components conditionally partition your data according to the `hierarchy_column_names` setting so that each partition is in its own file. This process can be very slow or fail when data is very large. In this case, we recommend partitioning your data manually before running HTS training or inference. |
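As the note above recommends, large datasets can be partitioned manually before HTS training or inference so that each combination of the hierarchy columns lands in its own file. The following is a minimal pandas sketch that assumes hypothetical hierarchy columns `state` and `store_id` and a local `train.csv`; adjust the column names and paths to match your `hierarchy_column_names` setting and data location.

```python
# Minimal sketch: write one file per unique combination of the hierarchy columns
# before submitting the HTS training job (hypothetical column and file names).
from pathlib import Path
import pandas as pd

hierarchy_column_names = ["state", "store_id"]
df = pd.read_csv("train.csv")

out_dir = Path("partitioned_train")
out_dir.mkdir(exist_ok=True)

for keys, partition in df.groupby(hierarchy_column_names):
    keys = keys if isinstance(keys, tuple) else (keys,)
    # Assumes the key values are safe to use in file names.
    file_name = "_".join(str(k) for k in keys) + ".csv"
    partition.to_csv(out_dir / file_name, index=False)
```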
machine-learning | How To Collect Production Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-collect-production-data.md | -In this article, you'll learn how to collect production inference data from a model deployed to an Azure Machine Learning managed online endpoint or Kubernetes online endpoint. +In this article, you learn how to use Azure Machine Learning **Data collector** to collect production inference data from a model that is deployed to an Azure Machine Learning managed online endpoint or a Kubernetes online endpoint. [!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)] -Azure Machine Learning **Data collector** logs inference data in Azure blob storage. You can enable data collection for new or existing online endpoint deployments. +You can enable data collection for new or existing online endpoint deployments. Azure Machine Learning data collector logs inference data in Azure Blob Storage. Data collected with the Python SDK is automatically registered as a data asset in your Azure Machine Learning workspace. This data asset can be used for model monitoring. -Data collected with the provided Python SDK is automatically registered as a data asset in your Azure Machine Learning workspace. This data asset can be used for model monitoring. --If you're interested in collecting production inference data for a MLFlow model deployed to a real-time endpoint, doing so can be done with a single toggle. To learn how to do this, see [Data collection for MLFlow models](#collect-data-for-mlflow-models). +If you're interested in collecting production inference data for an MLflow model that is deployed to a real-time endpoint, see [Data collection for MLflow models](#collect-data-for-mlflow-models). ## Prerequisites If you're interested in collecting production inference data for a MLFlow model * Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md). -# [Python](#tab/python) -+# [Python SDK](#tab/python) [!INCLUDE [basic prereqs sdk](includes/machine-learning-sdk-v2-prereqs.md)] If you're interested in collecting production inference data for a MLFlow model -* Have a registered model that you can use for deployment. If you haven't already registered a model, see [Register your model as an asset in Machine Learning](how-to-manage-models.md#register-your-model-as-an-asset-in-machine-learning-by-using-the-cli). +* Have a registered model that you can use for deployment. If you don't have a registered model, see [Register your model as an asset in Machine Learning](how-to-manage-models.md#register-your-model-as-an-asset-in-machine-learning-by-using-the-cli). * Create an Azure Machine Learning online endpoint. If you don't have an existing online endpoint, see [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md). ## Perform custom logging for model monitoring -Data collection with custom logging allows you to log pandas DataFrames directly from your scoring script before, during, and after any data transformations. 
With custom logging, tabular data is logged in real-time to your workspace Blob storage or a custom Blob storage container. From storage, it can be consumed by your model monitors. +Data collection with custom logging allows you to log pandas DataFrames directly from your scoring script before, during, and after any data transformations. With custom logging, tabular data is logged in real time to your workspace Blob Storage or a custom blob storage container. Your model monitors can consume the data from storage. ### Update your scoring script with custom logging code -First, you'll need to add custom logging code to your scoring script (`score.py`). For custom logging, you'll need the `azureml-ai-monitoring` package. For more information, see the comprehensive [PyPI page for the data collector SDK](https://pypi.org/project/azureml-ai-monitoring/). +To begin, add custom logging code to your scoring script (`score.py`). For custom logging, you need the `azureml-ai-monitoring` package. For more information on this package, see the comprehensive [PyPI page for the data collector SDK](https://pypi.org/project/azureml-ai-monitoring/). 1. Import the `azureml-ai-monitoring` package by adding the following line to the top of the scoring script: First, you'll need to add custom logging code to your scoring script (`score.py` 1. Declare your data collection variables (up to five of them) in your `init()` function: > [!NOTE]- > If you use the names `model_inputs` and `model_outputs` for your `Collector` objects, the model monitoring system will automatically recognize the automatically registered data assets, which will provide for a more seamless model monitoring experience. + > If you use the names `model_inputs` and `model_outputs` for your `Collector` objects, the model monitoring system automatically recognizes the automatically registered data assets to provide for a more seamless model monitoring experience. ```python global inputs_collector, outputs_collector inputs_collector = Collector(name='model_inputs') outputs_collector = Collector(name='model_outputs')- inputs_outputs_collector = Collector(name='model_inputs_outputs') ``` By default, Azure Machine Learning raises an exception if there's a failure during data collection. Optionally, you can use the `on_error` parameter to specify a function to run if logging failure happens. For instance, using the `on_error` parameter in the following code, Azure Machine Learning logs the error rather than throwing an exception: First, you'll need to add custom logging code to your scoring script (`score.py` ``` > [!NOTE]- > Currently, only pandas DataFrames can be logged with the `collect()` API. If the data is not in a DataFrame when passed to `collect()`, it will not be logged to storage and an error will be reported. + > Currently, the `collect()` API logs only pandas DataFrames. If the data is not in a DataFrame when passed to `collect()`, it won't get logged to storage and an error will be reported. -The following code is an example of a full scoring script (`score.py`) that uses the custom logging Python SDK. In this example, a third `Collector` called `inputs_outputs_collector` logs a joined DataFrame of the `model_inputs` and the `model_outputs`. This joined DataFrame enables additional monitoring signals (feature attribution drift, etc.). If you are not interested in those monitoring signals, please feel free to remove this `Collector`. 
+The following code is an example of a full scoring script (`score.py`) that uses the custom logging Python SDK. In this example, a third `Collector` called `inputs_outputs_collector` logs a joined DataFrame of the `model_inputs` and the `model_outputs`. This joined DataFrame enables more monitoring signals such as feature attribution drift. If you're not interested in these monitoring signals, you can remove this `Collector`. ```python import pandas as pd def init(): # instantiate collectors with appropriate names, make sure align with deployment spec inputs_collector = Collector(name='model_inputs') outputs_collector = Collector(name='model_outputs')- inputs_outputs_collector = Collector(name='model_inputs_outputs') #note: this is used to enable Feature Attribution Drift def run(data): # json data: { "data" : { "col1": [1,2,3], "col2": [2,3,4] } } def run(data): # collect outputs data, pass in correlation_context so inputs and outputs data can be correlated later outputs_collector.collect(output_df, context)-- # create a dataframe with inputs/outputs joined - this creates a URI folder (not mltable) - # input_output_df = input_df.merge(output_df, context) - input_output_df = input_df.join(output_df) -- # collect both your inputs and output - inputs_outputs_collector.collect(input_output_df, context) return output_df.to_dict() def predict(input_df): ### Update your dependencies -Before you create your deployment with the updated scoring script, you'll create your environment with the base image `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04` and the appropriate conda dependencies, then you'll build the environment using the specification in the following YAML. +Before you can create your deployment with the updated scoring script, you need to create your environment with the base image `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04` and the appropriate conda dependencies. Thereafter, you can build the environment using the specification in the following YAML. ```yml channels: name: model-env ### Update your deployment YAML -Next, we'll create the deployment YAML. Include the `data_collector` attribute and enable collection for `model_inputs` and `model_outputs`, which are the names we gave our `Collector` objects earlier via the custom logging Python SDK: +Next, you create the deployment YAML. To create the deployment YAML, include the `data_collector` attribute and enable data collection for the `Collector` objects, `model_inputs` and `model_outputs`, that you instantiated earlier via the custom logging Python SDK: ```yml data_collector: data_collector: enabled: 'True' model_outputs: enabled: 'True'- model_inputs_outputs: - enabled: 'True' ``` -The following code is an example of a comprehensive deployment YAML for a managed online endpoint deployment. You should update the deployment YAML according to your scenario. For more examples on how to format your deployment YAML for inference data logging, see [https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/data-collector](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/data-collector). +The following code is an example of a comprehensive deployment YAML for a managed online endpoint deployment. You should update the deployment YAML according to your scenario. For more examples on how to format your deployment YAML for inference data logging, see [Azure model data collector examples](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/data-collector). 
```yml $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json data_collector: enabled: 'True' model_outputs: enabled: 'True'- model_inputs_outputs: - enabled: 'True' ``` -Optionally, you can adjust the following additional parameters for your `data_collector`: +Optionally, you can adjust the following extra parameters for your `data_collector`: -- `data_collector.rolling_rate`: The rate to partition the data in storage. Value can be: Minute, Hour, Day, Month, or Year.-- `data_collector.sampling_rate`: The percentage, represented as a decimal rate, of data to collect. For instance, a value of 1.0 represents collecting 100% of data.+- `data_collector.rolling_rate`: The rate to partition the data in storage. Choose from the values: `Minute`, `Hour`, `Day`, `Month`, or `Year`. +- `data_collector.sampling_rate`: The percentage, represented as a decimal rate, of data to collect. For instance, a value of `1.0` represents collecting 100% of data. - `data_collector.collections.<collection_name>.data.name`: The name of the data asset to register with the collected data. - `data_collector.collections.<collection_name>.data.path`: The full Azure Machine Learning datastore path where the collected data should be registered as a data asset. - `data_collector.collections.<collection_name>.data.version`: The version of the data asset to be registered with the collected data in blob storage. -#### Collect data to a custom Blob storage container +#### Collect data to a custom blob storage container -If you need to collect your production inference data to a custom Blob storage container, you can do so with the data collector. +You can use the data collector to collect your production inference data to a custom blob storage container by following these steps: -To use the data collector with a custom Blob storage container, connect the storage container to an Azure Machine Learning datastore. To learn how to do so, see [create datastores](how-to-datastore.md). +1. Connect the storage container to an Azure Machine Learning datastore. For more information on connecting the storage container to the Azure Machine Learning datastore, see [create datastores](how-to-datastore.md). -Next, ensure that your Azure Machine Learning endpoint has the necessary permissions to write to the datastore destination. The data collector supports both system assigned managed identities (SAMIs) and user assigned managed identities (UAMIs). Add the identity to your endpoint. Assign the role `Storage Blob Data Contributor` to this identity with the Blob storage container which will be used as the data destination. To learn how to use managed identities in Azure, see [assign Azure roles to a managed identity](/azure/role-based-access-control/role-assignments-portal-managed-identity). +1. Check that your Azure Machine Learning endpoint has the necessary permissions to write to the datastore destination. -Then, update your deployment YAML to include the `data` property within each collection. The `data.name` is a required parameter used to specify the name of the data asset to be registered with the collected data. The `data.path` is a required parameter used to specify the fully-formed Azure Machine Learning datastore path, which is connected to your Azure Blob storage container. The `data.version` is an optional parameter used to specify the version of the data asset (defaults to 1). 
+ The data collector supports both system assigned managed identities (SAMIs) and user assigned managed identities (UAMIs). Add the identity to your endpoint. Assign the `Storage Blob Data Contributor` role to this identity with the Blob storage container to be used as the data destination. To learn how to use managed identities in Azure, see [assign Azure roles to a managed identity](/azure/role-based-access-control/role-assignments-portal-managed-identity). -Here is an example YAML configuration of how you would do so: +1. Update your deployment YAML to include the `data` property within each collection. -```yml -data_collector: - collections: - model_inputs: - enabled: 'True' - data: - name: my_model_inputs_data_asset - path: azureml://datastores/workspaceblobstore/paths/modelDataCollector/my_endpoint/blue/model_inputs - version: 1 - model_outputs: - enabled: 'True' - data: - name: my_model_outputs_data_asset - path: azureml://datastores/workspaceblobstore/paths/modelDataCollector/my_endpoint/blue/model_outputs - version: 1 -``` + - The _required_ parameter, `data.name`, specifies the name of the data asset to be registered with the collected data. + - The _required_ parameter, `data.path`, specifies the fully formed Azure Machine Learning datastore path, which is connected to your Azure Blob Storage container. + - The _optional_ parameter, `data.version`, specifies the version of the data asset (defaults to 1). ++ The following YAML configuration shows an example of how to include the `data` property within each collection. + + ```yml + data_collector: + collections: + model_inputs: + enabled: 'True' + data: + name: my_model_inputs_data_asset + path: azureml://datastores/workspaceblobstore/paths/modelDataCollector/my_endpoint/blue/model_inputs + version: 1 + model_outputs: + enabled: 'True' + data: + name: my_model_outputs_data_asset + path: azureml://datastores/workspaceblobstore/paths/modelDataCollector/my_endpoint/blue/model_outputs + version: 1 + ``` -**Note**: You can also use the `data.path` parameter to point to datastores in different Azure subscriptions. To do so, ensure your path looks like this: `azureml://subscriptions/<sub_id>/resourcegroups/<rg_name>/workspaces/<ws_name>/datastores/<datastore_name>/paths/<path>` + > [!NOTE] + > You can also use the `data.path` parameter to point to datastores in different Azure subscriptions by providing a path that follows the format: `azureml://subscriptions/<sub_id>/resourcegroups/<rg_name>/workspaces/<ws_name>/datastores/<datastore_name>/paths/<path>` ### Create your deployment with data collection Deploy the model with custom logging enabled: $ az ml online-deployment create -f deployment.YAML ``` -For more information on how to format your deployment YAML for data collection (along with default values) with kubernetes online endpoints, see the [CLI (v2) Azure Arc-enabled Kubernetes online deployment YAML schema](reference-yaml-deployment-kubernetes-online.md). For more information on how to format your deployment YAML for data collection with managed online endpoints, see [CLI (v2) managed online deployment YAML schema](reference-yaml-deployment-managed-online.md). +For more information on how to format your deployment YAML for data collection with Kubernetes online endpoints, see the [CLI (v2) Azure Arc-enabled Kubernetes online deployment YAML schema](reference-yaml-deployment-kubernetes-online.md). 
++For more information on how to format your deployment YAML for data collection with managed online endpoints, see [CLI (v2) managed online deployment YAML schema](reference-yaml-deployment-managed-online.md). ### Store collected data in a blob-Blob storage output/format -By default, the collected data will be stored at the following path in your workspace Blob storage: `azureml://datastores/workspaceblobstore/paths/modelDataCollector`. The final path in Blob will be appended with `{endpoint_name}/{deployment_name}/{collection_name}/{yyyy}/{MM}/{dd}/{HH}/{instance_id}.jsonl`. Each line in the file is a JSON object representing a single inference request/response that was logged. +__Blob storage output/format__: ++- By default, the collected data is stored at the following path in your workspace Blob Storage: `azureml://datastores/workspaceblobstore/paths/modelDataCollector`. ++- The final path in the blob will be appended with `{endpoint_name}/{deployment_name}/{collection_name}/{yyyy}/{MM}/{dd}/{HH}/{instance_id}.jsonl`. ++- Each line in the file is a JSON object representing a single inference request/response that was logged. > [!NOTE]-> `collection_name` refers to the MDC data collection name (e.g., "model_inputs" or "model_outputs"). `instance_id` is a unique id identifying the grouping of data which was logged. +> `collection_name` refers to the data collection name (e.g., `model_inputs` or `model_outputs`). +> `instance_id` is a unique id identifying the grouping of data which was logged. -The collected data will follow the following json schema. The collected data is available from the `data` key and additional metadata is provided. +The collected data follows the following JSON schema. The collected data is available from the `data` key and additional metadata is provided. ```json {"specversion":"1.0", The collected data will follow the following json schema. The collected data is "contentrange":"bytes 0-116/117"} ``` -> [!NOTE] +> [!TIP] > Line breaks are shown only for readability. In your collected .jsonl files, there won't be any line breaks. #### Store large payloads -If the payload of your data is greater than 256 kb, there will be an event in the `{instance_id}.jsonl` file contained within the `{endpoint_name}/{deployment_name}/request/.../{instance_id}.jsonl` path that points to a raw file path, which should have the following path: `blob_url/{blob_container}/{blob_path}/{endpoint_name}/{deployment_name}/{rolled_time}/{instance_id}.jsonl`. The collected data will exist at this path. +If the payload of your data is greater than 256 KB, there will be an event in the `{instance_id}.jsonl` file contained within the `{endpoint_name}/{deployment_name}/request/.../{instance_id}.jsonl` path that points to a raw file path, which should have the following path: `blob_url/{blob_container}/{blob_path}/{endpoint_name}/{deployment_name}/{rolled_time}/{instance_id}.jsonl`. The collected data will exist at this path. #### Store binary data With collected binary data, we show the raw file directly, with `instance_id` as } ``` -#### Viewing the data in the studio UI +#### View the data in the studio UI -To view the collected data in Blob storage from the studio UI: +To view the collected data in Blob Storage from the studio UI: 1. 
Go to thee **Data** tab in your Azure Machine Learning workspace: To view the collected data in Blob storage from the studio UI: ## Log payload -In addition to custom logging with the provided Python SDK, you can collect request and response HTTP payload data directly without the need to augment your scoring script (`score.py`). To enable payload logging, in your deployment YAML, use the names `request` and `response`: +In addition to custom logging with the provided Python SDK, you can collect request and response HTTP payload data directly without the need to augment your scoring script (`score.py`). -```yml -$schema: http://azureml/sdk-2-0/OnlineDeployment.json +1. To enable payload logging, in your deployment YAML, use the names `request` and `response`: -endpoint_name: my_endpoint -name: blue -model: azureml:my-model-m1:1 -environment: azureml:env-m1:1 -data_collector: - collections: - request: - enabled: 'True' - response: - enabled: 'True' -``` + ```yml + $schema: http://azureml/sdk-2-0/OnlineDeployment.json + + endpoint_name: my_endpoint + name: blue + model: azureml:my-model-m1:1 + environment: azureml:env-m1:1 + data_collector: + collections: + request: + enabled: 'True' + response: + enabled: 'True' + ``` -Deploy the model with payload logging enabled: +1. Deploy the model with payload logging enabled: -```bash -$ az ml online-deployment create -f deployment.YAML -``` + ```bash + $ az ml online-deployment create -f deployment.YAML + ``` -> [!NOTE] -> With payload logging, the collected data is not guaranteed to be in tabular format. Because of this, if you want to use collected payload data with model monitoring, you'll be required to provide a pre-processing component to make the data tabular. If you're interested in a seamless model monitoring experience, we recommend using the [custom logging Python SDK](#perform-custom-logging-for-model-monitoring). +With payload logging, the collected data is not guaranteed to be in tabular format. Therefore, if you want to use collected payload data with model monitoring, you'll be required to provide a preprocessing component to make the data tabular. If you're interested in a seamless model monitoring experience, we recommend using the [custom logging Python SDK](#perform-custom-logging-for-model-monitoring). -As your deployment is used, the collected data will flow to your workspace Blob storage. The following code is an example of an HTTP _request_ collected JSON: +As your deployment is used, the collected data flows to your workspace Blob storage. The following JSON code is an example of an HTTP _request_ collected: ```json {"specversion":"1.0", As your deployment is used, the collected data will flow to your workspace Blob "correlationid":"f6e806c9-1a9a-446b-baa2-901373162105","xrequestid":"f6e806c9-1a9a-446b-baa2-901373162105"} ``` -And the following code is an example of an HTTP _response_ collected JSON: +And the following JSON code is another example of an HTTP _response_ collected: ```json {"specversion":"1.0", And the following code is an example of an HTTP _response_ collected JSON: "correlationid":"f6e806c9-1a9a-446b-baa2-901373162105","xrequestid":"f6e806c9-1a9a-446b-baa2-901373162105"} ``` -## Collect data for MLFlow models --If you're deploying an MLFlow model to an Azure Machine Learning online endpoint, you can enable production inference data collection with single toggle in the studio UI. 
If data collection is toggled on, we'll auto-instrument your scoring script with custom logging code to ensure that the production data is logged to your workspace Blob storage. The data can then be used by your model monitors to monitor the performance of your MLFlow model in production. +## Collect data for MLflow models -To enable production data collection, while you're deploying your model, under the **Deployment** tab, select **Enabled** for **Data collection (preview)**. +If you're deploying an MLflow model to an Azure Machine Learning online endpoint, you can enable production inference data collection with single toggle in the studio UI. If data collection is toggled on, Azure Machine Learning auto-instruments your scoring script with custom logging code to ensure that the production data is logged to your workspace Blob Storage. Your model monitors can then use the data to monitor the performance of your MLflow model in production. -After enabling data collection, production inference data will be logged to your Azure Machine Learning workspace blob storage and two data assets will be created with names `<endpoint_name>-<deployment_name>-model_inputs` and `<endpoint_name>-<deployment_name>-model_outputs`. These data assets will be updated in real-time as your deployment is used in production. The data assets can then be used by your model monitors to monitor the performance of your model in production. +While you're configuring the deployment of your model, you can enable production data collection. Under the **Deployment** tab, select **Enabled** for **Data collection (preview)**. -## Next steps +After you've enabled data collection, production inference data will be logged to your Azure Machine Learning workspace Blob Storage and two data assets will be created with names `<endpoint_name>-<deployment_name>-model_inputs` and `<endpoint_name>-<deployment_name>-model_outputs`. These data assets are updated in real time as you use your deployment in production. Your model monitors can then use the data assets to monitor the performance of your model in production. -To learn how to monitor the performance of your models with the collected production inference data, see the following articles: +## Related content - [What is Azure Machine Learning model monitoring?](concept-model-monitoring.md) - [Monitor performance of models deployed to production](how-to-monitor-model-performance.md) |
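To tie the custom-logging pieces in this entry together, here is a minimal sketch of a scoring script wired up with the `azureml-ai-monitoring` `Collector` objects described above. The payload shape and the `predict()` body are placeholders, not part of the original article:

```python
# score.py - minimal custom-logging sketch using the azureml-ai-monitoring SDK.
import json
import logging
import pandas as pd
from azureml.ai.monitoring import Collector

def init():
    global inputs_collector, outputs_collector
    # The names must match the collections enabled in the deployment YAML.
    # on_error logs collection failures instead of raising an exception.
    inputs_collector = Collector(
        name="model_inputs",
        on_error=lambda e: logging.info("collection error: %s", e),
    )
    outputs_collector = Collector(
        name="model_outputs",
        on_error=lambda e: logging.info("collection error: %s", e),
    )

def run(data):
    # expected payload shape: { "data": { "col1": [1, 2, 3], "col2": [2, 3, 4] } }
    input_df = pd.DataFrame(json.loads(data)["data"])

    # log model inputs; keep the returned context so outputs can be correlated later
    context = inputs_collector.collect(input_df)

    output_df = predict(input_df)  # placeholder for your own scoring logic

    # log model outputs with the same correlation context
    outputs_collector.collect(output_df, context)
    return output_df.to_dict()

def predict(input_df):
    # placeholder: return a constant prediction column
    return pd.DataFrame({"prediction": [0] * len(input_df)})
```

The `model_inputs` and `model_outputs` names matter because they are what the `data_collector.collections` section of the deployment YAML later enables, so the two must match.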
machine-learning | How To Devops Machine Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-devops-machine-learning.md | jobs: pool: vmImage: ubuntu-latest steps:- - checkout: none - task: UsePythonVersion@0 displayName: Use Python >=3.8 inputs: jobs: pool: vmImage: ubuntu-latest steps:- - checkout: none - task: UsePythonVersion@0 displayName: Use Python >=3.8 inputs: The task has four inputs: `Service Connection`, `Azure Resource Group Name`, `Az dependsOn: SubmitAzureMLJob variables: # We are saving the name of the Azure ML job submitted in the previous step to a variable, and it will be used as an input to the AzureML Job Wait task- azureml_job_name_from_submit_job: $[ dependencies.SubmitAzureMLJob.outputs['submit_azureml_job_task.AZUREML_JOB_NAME'] ] + azureml_job_name_from_submit_job: $[ dependencies.SubmitAzureMLJob.outputs['submit_azureml_job_task.JOB_NAME'] ] steps: - task: AzureMLJobWaitTask@1 inputs: |
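The corrected variable mapping above relies on the standard Azure Pipelines cross-job output-variable pattern. As a generic illustration (the `script` step below is only a stand-in for the actual AzureML submit task, and the job and step names are assumptions):

```yaml
# Sketch of the cross-job output-variable pattern, not the actual AzureML pipeline.
jobs:
- job: SubmitAzureMLJob
  pool:
    vmImage: ubuntu-latest
  steps:
  # Stand-in for the AzureML submit task; it publishes JOB_NAME as an output variable.
  - script: echo "##vso[task.setvariable variable=JOB_NAME;isOutput=true]my-azureml-job"
    name: submit_azureml_job_task

- job: WaitForAzureMLJob
  dependsOn: SubmitAzureMLJob
  pool:
    vmImage: ubuntu-latest
  variables:
    # maps the output variable set by the named step in the previous job
    azureml_job_name_from_submit_job: $[ dependencies.SubmitAzureMLJob.outputs['submit_azureml_job_task.JOB_NAME'] ]
  steps:
  - script: echo "Waiting on AzureML job $(azureml_job_name_from_submit_job)"
```

The step `name` (`submit_azureml_job_task` here) matters because it forms part of the `dependencies.<job>.outputs['<step>.<variable>']` reference in the dependent job.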
machine-learning | How To Enable Preview Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-enable-preview-features.md | You can enable or disable preview features anytime in [Azure Machine Learning st :::image type="content" source="./media/how-to-enable-preview-features/megaphone-icon.png" alt-text="Screenshot of the megaphone icon in Azure Machine Learning studio."::: -1. Find the feature you would like to try out and select the toggle next to it to enable or disable the feature. +1. Find the feature you would like to try out and select the toggle next to it to enable or disable the feature. If you know the feature's name, use the search field. > [!TIP] > When you disable a feature, a text box will appear that can be used to provide feedback on the feature. To learn how to provide feedback without disabling a feature, see [How do I provide feedback?](#how-do-i-provide-feedback). |
machine-learning | How To Managed Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-managed-network.md | The following diagram shows a managed VNet configured to __allow only approved o :::image type="content" source="./media/how-to-managed-network/only-approved-outbound.svg" alt-text="Diagram of managed VNet isolation configured for allow only approved outbound." lightbox="./media/how-to-managed-network/only-approved-outbound.svg"::: > [!NOTE]-> Once a managed VNet workspace is configured to __allow only approved outbound__, the workspace cannot be reconfigured to __allow internet outbound__. Please keep this in mind when configuring managed VNet for your workspace. +> Once a managed VNet workspace is configured to __allow internet outbound__, the workspace cannot be reconfigured to __disabled__. Similarly, once a managed VNet workspace is configured to __allow only approved outbound__, the workspace cannot be reconfigured to __allow internet outbound__. Please keep this in mind when selecting the isolation mode for managed VNet in your workspace. ### Azure Machine Learning studio Before following the steps in this article, make sure you have the following pre * The Azure CLI examples in this article use `ws` to represent the name of the workspace, and `rg` to represent the name of the resource group. Change these values as needed when using the commands with your Azure subscription. +* With Azure CLI and managed VNet, SSH using public IP works, but SSH using private IP doesn't work. + # [Python SDK](#tab/python) * An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). If you plan to use __Visual Studio Code__ with Azure Machine Learning, add outbo * `update.code.visualstudio.com` * `*.vo.msecnd.net` * `marketplace.visualstudio.com`+* `vscode.download.prss.microsoft.com` ### Scenario: Use batch endpoints |
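For reference, FQDN outbound rules such as the Visual Studio Code hosts listed above are typically expressed in the workspace YAML's `managed_network` section. A sketch follows; the rule names are illustrative, and FQDN rules apply when the isolation mode is allow-only-approved-outbound:

```yaml
# Sketch of a workspace managed_network section with FQDN outbound rules.
managed_network:
  isolation_mode: allow_only_approved_outbound
  outbound_rules:
  - name: allow-vscode-marketplace   # illustrative rule name
    type: fqdn
    destination: 'marketplace.visualstudio.com'
  - name: allow-vscode-download      # illustrative rule name
    type: fqdn
    destination: 'vscode.download.prss.microsoft.com'
```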
machine-learning | How To Monitor Model Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-model-performance.md | Title: Monitor performance of models deployed to production (preview) + Title: Monitor performance of models deployed to production -description: Monitor the performance of models deployed to production on Azure Machine Learning +description: Monitor the performance of models deployed to production in Azure Machine Learning --++ reviewer: msakande Previously updated : 09/15/2023 Last updated : 01/29/2024 -# Monitor performance of models deployed to production (preview) +# Monitor performance of models deployed to production -Once a machine learning model is in production, it's important to critically evaluate the inherent risks associated with it and identify blind spots that could adversely affect your business. Azure Machine Learning's model monitoring continuously tracks the performance of models in production by providing a broad view of monitoring signals and alerting you to potential issues. In this article, you learn to perform out-of box and advanced monitoring setup for models that are deployed to Azure Machine Learning online endpoints. You also learn to set up model monitoring for models that are deployed outside Azure Machine Learning or deployed to Azure Machine Learning batch endpoints. +In this article, you learn to perform out-of box and advanced monitoring setup for models that are deployed to Azure Machine Learning online endpoints. You also learn to set up monitoring for models that are deployed outside Azure Machine Learning or deployed to Azure Machine Learning batch endpoints. ++Once a machine learning model is in production, it's important to critically evaluate the inherent risks associated with it and identify blind spots that could adversely affect your business. Azure Machine Learning's model monitoring continuously tracks the performance of models in production by providing a broad view of monitoring signals and alerting you to potential issues. ## Prerequisites Once a machine learning model is in production, it's important to critically eva [!INCLUDE [basic prereqs cli](includes/machine-learning-cli-prereqs.md)] -* Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md). --# [Python](#tab/python) -+# [Python SDK](#tab/python) [!INCLUDE [basic prereqs sdk](includes/machine-learning-sdk-v2-prereqs.md)] -* Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md). - # [Studio](#tab/azure-studio) Before following the steps in this article, make sure you have the following prerequisites: * An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. 
Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). -* An Azure Machine Learning workspace and a compute instance. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them. --* Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md). +* An Azure Machine Learning workspace and a compute instance. If you don't have these resources, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them. -* For monitoring a model that is deployed to an Azure Machine Learning online endpoint (Managed Online Endpoint or Kubernetes Online Endpoint): -- * A model deployed to an Azure Machine Learning online endpoint. Both Managed Online Endpoint and Kubernetes Online Endpoint are supported. If you don't have a model deployed to an Azure Machine Learning online endpoint, see [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md). +* Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md). - * Data collection enabled for your model deployment. You can enable data collection during the deployment step for Azure Machine Learning online endpoints. For more information, see [Collect production data from models deployed to a real-time endpoint](how-to-collect-production-data.md). +* For monitoring a model that is deployed to an Azure Machine Learning online endpoint (managed online endpoint or Kubernetes online endpoint), be sure to: -* For monitoring a model that is deployed to an Azure Machine Learning batch endpoint or deployed outside of Azure Machine Learning: + * Have a model already deployed to an Azure Machine Learning online endpoint. Both managed online endpoint and Kubernetes online endpoint are supported. If you don't have a model deployed to an Azure Machine Learning online endpoint, see [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md). - * A way to collect production data and register it as an Azure Machine Learning data asset. - * The registered Azure Machine Learning data asset is continuously updated for model monitoring. - * (Recommended) The model should be registered in Azure Machine Learning workspace, for lineage tracking. + * Enable data collection for your model deployment. You can enable data collection during the deployment step for Azure Machine Learning online endpoints. For more information, see [Collect production data from models deployed to a real-time endpoint](how-to-collect-production-data.md). 
+* For monitoring a model that is deployed to an Azure Machine Learning batch endpoint or deployed outside of Azure Machine Learning, be sure to: + * Have a means to collect production data and register it as an Azure Machine Learning data asset. + * Update the registered data asset continuously for model monitoring. + * (Recommended) Register the model in an Azure Machine Learning workspace, for lineage tracking. > [!IMPORTANT] >-> Model monitoring jobs are scheduled to run on serverless Spark compute pool with `Standard_E4s_v3` VM instance type support only. More VM instance type support will come in the future roadmap. +> Model monitoring jobs are scheduled to run on serverless Spark compute pools with support for the following VM instance types: `Standard_E4s_v3`, `Standard_E8s_v3`, `Standard_E16s_v3`, `Standard_E32s_v3`, and `Standard_E64s_v3`. You can select the VM instance type with the `create_monitor.compute.instance_type` property in your YAML configuration or from the dropdown in the Azure Machine Learning studio. -## Set up out-of-the-box model monitoring +## Set up out-of-box model monitoring -If you deploy your model to production in an Azure Machine Learning online endpoint, Azure Machine Learning collects production inference data automatically and uses it for continuous monitoring. +Suppose you deploy your model to production in an Azure Machine Learning online endpoint and enable [data collection](how-to-collect-production-data.md) at deployment time. In this scenario, Azure Machine Learning collects production inference data, and automatically stores it in Microsoft Azure Blob Storage. You can then use Azure Machine Learning model monitoring to continuously monitor this production inference data. -You can use Azure CLI, the Python SDK, or Azure Machine Learning studio for out-of-box setup of model monitoring. The out-of-box model monitoring provides following monitoring capabilities: +You can use the Azure CLI, the Python SDK, or the studio for an out-of-box setup of model monitoring. The out-of-box model monitoring configuration provides the following monitoring capabilities: -* Azure Machine Learning will automatically detect the production inference dataset associated with a deployment to an Azure Machine Learning online endpoint and use the dataset for model monitoring. -* The recent past production inference dataset is used as the comparison baseline dataset. +* Azure Machine Learning automatically detects the production inference dataset associated with an Azure Machine Learning online deployment and uses the dataset for model monitoring. +* The comparison reference dataset is set as the recent, past production inference dataset. * Monitoring setup automatically includes and tracks the built-in monitoring signals: **data drift**, **prediction drift**, and **data quality**. For each monitoring signal, Azure Machine Learning uses:- * the recent past production inference dataset as the comparison baseline dataset. + * the recent, past production inference dataset as the comparison reference dataset. * smart defaults for metrics and thresholds.-* A monitoring job is scheduled to run daily at 3:15am (for this example) to acquire monitoring signals and evaluate each metric result against its corresponding threshold. By default, when any threshold is exceeded, an alert email is sent to the user who set up the monitoring. 
--## Configure feature importance --For feature importance to be enabled with any of your signals (such as data drift or data quality,) you need to provide both the 'baseline_dataset' (typically training) dataset and 'target_column_name' fields. +* A monitoring job is scheduled to run daily at 3:15am (for this example) to acquire monitoring signals and evaluate each metric result against its corresponding threshold. By default, when any threshold is exceeded, Azure Machine Learning sends an alert email to the user that set up the monitor. # [Azure CLI](#tab/azure-cli) -Azure Machine Learning model monitoring uses `az ml schedule` for model monitoring setup. You can create out-of-box model monitoring setup with the following CLI command and YAML definition: +Azure Machine Learning model monitoring uses `az ml schedule` to schedule a monitoring job. You can create the out-of-box model monitor with the following CLI command and YAML definition: ```azurecli az ml schedule create -f ./out-of-box-monitoring.yaml ``` -The following YAML contains the definition for out-of-the-box model monitoring. --```yaml -# out-of-box-monitoring.yaml -$schema: http://azureml/sdk-2-0/Schedule.json -name: fraud_detection_model_monitoring -display_name: Fraud detection model monitoring -description: Loan approval model monitoring setup with minimal configurations --trigger: - # perform model monitoring activity daily at 3:15am - type: recurrence - frequency: day #can be minute, hour, day, week, month - interval: 1 # #every day - schedule: - hours: 3 # at 3am - minutes: 15 # at 15 mins after 3am --create_monitor: - compute: # specify a spark compute for monitoring job - instance_type: standard_e4s_v3 - runtime_version: 3.2 - monitoring_target: - endpoint_deployment_id: azureml:fraud-detection-endpoint:fraud-detection-deployment -``` +The following YAML contains the definition for the out-of-box model monitoring. 
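A sketch of that definition, based on the inline example this article previously showed; the resource names follow the `credit-default` deployment used in the SDK example later in this entry and are placeholders for your own endpoint and deployment:

```yaml
# out-of-box-monitoring.yaml - sketch based on the earlier inline example
$schema: http://azureml/sdk-2-0/Schedule.json
name: credit_default_monitor_basic
display_name: Credit default model monitoring
description: Out-of-box model monitoring setup with minimal configurations

trigger:
  # run the monitoring job daily at 3:15am
  type: recurrence
  frequency: day      # can be minute, hour, day, week, month
  interval: 1
  schedule:
    hours: 3
    minutes: 15

create_monitor:
  compute:            # serverless Spark compute for the monitoring job
    instance_type: standard_e4s_v3
    runtime_version: "3.3"
  monitoring_target:
    ml_task: classification   # follows the SDK example; adjust for your model
    endpoint_deployment_id: azureml:credit-default:main
```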
-# [Python](#tab/python) +# [Python SDK](#tab/python) -You can use the following code to set up out-of-the-box model monitoring: +You can use the following code to set up the out-of-box model monitoring: ```python--from azure.identity import InteractiveBrowserCredential +from azure.identity import DefaultAzureCredential from azure.ai.ml import MLClient from azure.ai.ml.entities import (+ AlertNotification, MonitoringTarget, MonitorDefinition, MonitorSchedule, RecurrencePattern, RecurrenceTrigger,- SparkResourceConfiguration, + ServerlessSparkCompute ) # get a handle to the workspace-ml_client = MLClient(InteractiveBrowserCredential(), subscription_id, resource_group, workspace) +ml_client = MLClient( + DefaultAzureCredential(), + subscription_id="subscription_id", + resource_group_name="resource_group_name", + workspace_name="workspace_name", +) +# create the compute spark_compute = ServerlessSparkCompute( instance_type="standard_e4s_v3",- runtime_version="3.2" + runtime_version="3.3" +) ++# specify your online endpoint deployment +monitoring_target = MonitoringTarget( + ml_task="classification", + endpoint_deployment_id="azureml:credit-default:main" ) -monitoring_target = MonitoringTarget(endpoint_deployment_id="azureml:fraud_detection_endpoint:fraund_detection_deployment") -monitor_definition = MonitorDefinition(compute=spark_compute, monitoring_target=monitoring_target) +# create alert notification object +alert_notification = AlertNotification( + emails=['abc@example.com', 'def@example.com'] +) ++# create the monitor definition +monitor_definition = MonitorDefinition( + compute=spark_compute, + monitoring_target=monitoring_target, + alert_notification=alert_notification +) +# specify the schedule frequency recurrence_trigger = RecurrenceTrigger( frequency="day", interval=1, schedule=RecurrencePattern(hours=3, minutes=15) ) -model_monitor = MonitorSchedule(name="fraud_detection_model_monitoring", - trigger=recurrence_trigger, - create_monitor=monitor_definition) +# create the monitor +model_monitor = MonitorSchedule( + name="credit_default_monitor_basic", + trigger=recurrence_trigger, + create_monitor=monitor_definition +) poller = ml_client.schedules.begin_create_or_update(model_monitor) created_monitor = poller.result()- ``` # [Studio](#tab/azure-studio) 1. Navigate to [Azure Machine Learning studio](https://ml.azure.com).-1. Under **Manage**, select **Monitoring**. +1. Go to your workspace. +1. Select **Monitoring** from the **Manage** section 1. Select **Add**. :::image type="content" source="media/how-to-monitor-models/add-model-monitoring.png" alt-text="Screenshot showing how to add model monitoring." lightbox="media/how-to-monitor-models/add-model-monitoring.png"::: -1. Select the model to monitor. The "Select deployment" dropdown list should be automatically populated if the model is deployed to an Azure Machine Learning online endpoint. -1. Select the deployment in the **Select deployment** box. -1. Select the training data to use as the comparison baseline in the **(Optional) Select training data** box. -1. Enter a name for the monitoring in **Monitor name**. -1. Select VM instance type for Spark pool in the **Select compute type** box. -1. Select "Spark 3.2" for the **Spark runtime version**. -1. Select your **Time zone** for monitoring the job run. -1. Select "Recurrence" or "Cron expression" scheduling. -1. For "Recurrence" scheduling, specify the repeat frequency, day, and time. For "Cron expression" scheduling, you would have to enter cron expression for monitoring run. 
-1. Select **Finish**. +1. On the **Basic settings** page, use **(Optional) Select model** to choose the model to monitor. +1. The **(Optional) Select deployment with data collection enabled** dropdown list should be automatically populated if the model is deployed to an Azure Machine Learning online endpoint. Select the deployment from the dropdown list. +1. Select the training data to use as the comparison reference in the **(Optional) Select training data** box. +1. Enter a name for the monitoring in **Monitor name** or keep the default name. +1. Notice that the virtual machine size is already selected for you. +1. Select your **Time zone**. +1. Select **Recurrence** or **Cron expression** scheduling. +1. For **Recurrence** scheduling, specify the repeat frequency, day, and time. For **Cron expression** scheduling, enter a cron expression for monitoring run. ++ :::image type="content" source="media/how-to-monitor-models/model-monitoring-basic-setup.png" alt-text="Screenshot of basic settings page for model monitoring." lightbox="media/how-to-monitor-models/model-monitoring-basic-setup.png"::: - :::image type="content" source="media/how-to-monitor-models/model-monitoring-basic-setup.png" alt-text="Screenshot of settings for model monitoring." lightbox="media/how-to-monitor-models/model-monitoring-basic-setup.png"::: +1. Select **Next** to go to the **Advanced settings** section. +1. Select **Next** on the **Configure data asset** page to keep the default datasets. +1. Select **Next** to go to the **Select monitoring signals** page. +1. Select **Next** to go to the **Notifications** page. Add your email to receive email notifications. +1. Review your monitoring details and select **Create** to create the monitor. ## Set up advanced model monitoring -Azure Machine Learning provides many capabilities for continuous model monitoring. See [Capabilities of model monitoring](concept-model-monitoring.md#capabilities-of-model-monitoring) for a list of these capabilities. In many cases, you need to set up model monitoring with advanced monitoring capabilities. In the following example, we set up model monitoring with these capabilities: +Azure Machine Learning provides many capabilities for continuous model monitoring. See [Capabilities of model monitoring](concept-model-monitoring.md#capabilities-of-model-monitoring) for a comprehensive list of these capabilities. In many cases, you need to set up model monitoring with advanced monitoring capabilities. In the following sections, you set up model monitoring with these capabilities: ++* Use of multiple monitoring signals for a broad view. +* Use of historical model training data or validation data as the comparison reference dataset. +* Monitoring of top N most important features and individual features. ++### Configure feature importance ++Feature importance represents the relative importance of each input feature to a model's output. For example, `temperature` might be more important to a model's prediction compared to `elevation`. Enabling feature importance can give you visibility into which features you don't want drifting or having data quality issues in production. 
-* Use of multiple monitoring signals for a broad view -* Use of historical model training data or validation data as the comparison baseline dataset -* Monitoring of top N features and individual features +To enable feature importance with any of your signals (such as data drift or data quality), you need to provide: -You can use Azure CLI, the Python SDK, or Azure Machine Learning studio for advanced setup of model monitoring. +- Your training dataset as the `reference_data` dataset. +- The `reference_data.data_column_names.target_column` property, which is the name of your model's output/prediction column. + +After enabling feature importance, you'll see a feature importance for each feature you're monitoring in the Azure Machine Learning model monitoring studio UI. ++You can use Azure CLI, the Python SDK, or the studio for advanced setup of model monitoring. # [Azure CLI](#tab/azure-cli) -You can create advanced model monitoring setup with the following CLI command and YAML definition: +Create advanced model monitoring setup with the following CLI command and YAML definition: ```azurecli az ml schedule create -f ./advanced-model-monitoring.yaml az ml schedule create -f ./advanced-model-monitoring.yaml The following YAML contains the definition for advanced model monitoring. -```yaml -# advanced-model-monitoring.yaml -$schema: http://azureml/sdk-2-0/Schedule.json -name: fraud_detection_model_monitoring -display_name: Fraud detection model monitoring -description: Fraud detection model monitoring with advanced configurations --trigger: - # perform model monitoring activity daily at 3:15am - type: recurrence - frequency: day #can be minute, hour, day, week, month - interval: 1 # #every day - schedule: - hours: 3 # at 3am - minutes: 15 # at 15 mins after 3am --create_monitor: - compute: - instance_type: standard_e4s_v3 - runtime_version: 3.2 - monitoring_target: - ml_task: classfiication - endpoint_deployment_id: azureml:fraud-detection-endpoint:fraud-detection-deployment - - monitoring_signals: - advanced_data_drift: # monitoring signal name, any user defined name works - type: data_drift - # target_dataset is optional. By default target dataset is the production inference data associated with Azure Machine Learning online endpoint - reference_data: - input_data: - path: azureml:my_model_training_data:1 # use training data as comparison baseline - type: mltable - data_context: training - target_column_name: fraud_detected - features: - top_n_feature_importance: 20 # monitor drift for top 20 features - metric_thresholds: - numerical: - jensen_shannon_distance: 0.01 - categorical: - pearsons_chi_squared_test: 0.02 - advanced_data_quality: - type: data_quality - # target_dataset is optional. 
By default target dataset is the production inference data associated with Azure Machine Learning online depoint - reference_data: - input_data: - path: azureml:my_model_training_data:1 - type: mltable - data_context: training - features: # monitor data quality for 3 individual features only - - feature_A - - feature_B - - feature_C - metric_thresholds: - numerical: - null_value_rate: 0.05 - categorical: - out_of_bounds_rate: 0.03 -- feature_attribution_drift_signal: - type: feature_attribution_drift - # production_data: is not required input here - # Please ensure Azure Machine Learning online endpoint is enabled to collected both model_inputs and model_outputs data - # Azure Machine Learning model monitoring will automatically join both model_inputs and model_outputs data and used it for computation - reference_data: - input_data: - path: azureml:my_model_training_data:1 - type: mltable - data_context: training - target_column_name: is_fraud - metric_thresholds: - normalized_discounted_cumulative_gain: 0.9 - - alert_notification: - emails: - - abc@example.com - - def@example.com -``` -# [Python](#tab/python) +# [Python SDK](#tab/python) -You can use the following code for advanced model monitoring setup: +Use the following code for advanced model monitoring setup: ```python-from azure.identity import InteractiveBrowserCredential +from azure.identity import DefaultAzureCredential from azure.ai.ml import Input, MLClient from azure.ai.ml.constants import (- MonitorFeatureType, - MonitorMetricName, MonitorDatasetContext, ) from azure.ai.ml.entities import ( AlertNotification,- FeatureAttributionDriftSignal, - FeatureAttributionDriftMetricThreshold, DataDriftSignal, DataQualitySignal,+ PredictionDriftSignal, DataDriftMetricThreshold, DataQualityMetricThreshold,+ PredictionDriftMetricThreshold, NumericalDriftMetrics, CategoricalDriftMetrics, DataQualityMetricsNumerical, DataQualityMetricsCategorical, MonitorFeatureFilter,- MonitorInputData, MonitoringTarget, MonitorDefinition, MonitorSchedule, from azure.ai.ml.entities import ( ) # get a handle to the workspace-ml_client = MLClient(InteractiveBrowserCredential(), subscription_id, resource_group, workspace) +ml_client = MLClient( + DefaultAzureCredential(), + subscription_id="subscription_id", + resource_group_name="resource_group_name", + workspace_name="workspace_name", +) +# create your compute spark_compute = ServerlessSparkCompute( instance_type="standard_e4s_v3",- runtime_version="3.2" + runtime_version="3.3" ) +# specify the online deployment (if you have one) monitoring_target = MonitoringTarget( ml_task="classification",- endpoint_deployment_id="azureml:fraud_detection_endpoint:fraund_detection_deployment" + endpoint_deployment_id="azureml:credit-default:main" ) -# training data to be used as baseline dataset +# training data to be used as reference dataset reference_data_training = ReferenceData( input_data=Input( type="mltable",- path="azureml:my_model_training_data:1" + path="azureml:credit-default-reference:1" ),- target_column_name="is_fraud", + target_column_name="DEFAULT_NEXT_MONTH", data_context=MonitorDatasetContext.TRAINING, ) # create an advanced data drift signal-features = MonitorFeatureFilter(top_n_feature_importance=20) +features = MonitorFeatureFilter(top_n_feature_importance=10) + metric_thresholds = DataDriftMetricThreshold( numerical=NumericalDriftMetrics( jensen_shannon_distance=0.01 advanced_data_drift = DataDriftSignal( metric_thresholds=metric_thresholds ) +# create an advanced prediction drift signal 
+metric_thresholds = PredictionDriftMetricThreshold( + categorical=CategoricalDriftMetrics( + jensen_shannon_distance=0.01 + ) +) ++advanced_prediction_drift = PredictionDriftSignal( + reference_data=reference_data_training, + metric_thresholds=metric_thresholds +) # create an advanced data quality signal-features = ['feature_A', 'feature_B', 'feature_C'] +features = ['SEX', 'EDUCATION', 'AGE'] metric_thresholds = DataQualityMetricThreshold( numerical=DataQualityMetricsNumerical( advanced_data_quality = DataQualitySignal( alert_enabled=False ) -# create feature attribution drift signal -metric_thresholds = FeatureAttributionDriftMetricThreshold(normalized_discounted_cumulative_gain=0.9) --feature_attribution_drift = FeatureAttributionDriftSignal( - reference_data=reference_data_training, - metric_thresholds=metric_thresholds, - alert_enabled=False -) - # put all monitoring signals in a dictionary monitoring_signals = { 'data_drift_advanced':advanced_data_drift,- 'data_quality_advanced':advanced_data_quality, - 'feature_attribution_drift':feature_attribution_drift + 'data_quality_advanced':advanced_data_quality } # create alert notification object alert_notification = AlertNotification( emails=['abc@example.com', 'def@example.com'] ) -# Finally monitor definition +# create the monitor definition monitor_definition = MonitorDefinition( compute=spark_compute, monitoring_target=monitoring_target, monitor_definition = MonitorDefinition( alert_notification=alert_notification ) +# specify the frequency on which to run your monitor recurrence_trigger = RecurrenceTrigger( frequency="day", interval=1, schedule=RecurrencePattern(hours=3, minutes=15) ) +# create your monitor model_monitor = MonitorSchedule(- name="fraud_detection_model_monitoring_complex", + name="credit_default_monitor_advanced", trigger=recurrence_trigger, create_monitor=monitor_definition ) poller = ml_client.schedules.begin_create_or_update(model_monitor) created_monitor = poller.result()- ``` # [Studio](#tab/azure-studio) -1. Complete the entires on the basic settings page as described in the [Set up out-of-box model monitoring](#set-up-out-of-the-box-model-monitoring) section. -1. Select **More options** to open the advanced setup wizard. +To set up advanced monitoring: -1. In the "Configure dataset" section, add a dataset to be used as the comparison baseline. We recommend using the model training data as the comparison baseline for data drift and data quality, and using the model validation data as the comparison baseline for prediction drift. --1. Select **Next**. +1. Complete the entires on the **Basic settings** page as described earlier in the [Set up out-of-box model monitoring](#set-up-out-of-box-model-monitoring) section. +1. Select **Next** to open the **Configure data asset** page of the **Advanced settings** section. +1. **Add** a dataset to be used as the reference dataset. We recommend that you use the model training data as the comparison reference dataset for data drift and data quality. Also, use the model validation data as the comparison reference dataset for prediction drift. :::image type="content" source="media/how-to-monitor-models/model-monitoring-advanced-config-data.png" alt-text="Screenshot showing how to add datasets for the monitoring signals to use." lightbox="media/how-to-monitor-models/model-monitoring-advanced-config-data.png"::: -1. In the "Select monitoring signals" section, you see three monitoring signals already added if you have selected Azure Machine Learning online deployment earlier. 
These signals are: data drift, prediction drift, and data quality. All these prepopulated monitoring signals use recent past production data as the comparison baseline and use smart defaults for metrics and threshold. +1. Select **Next** to go to the **Select monitoring signals** page. On this page, you see some monitoring signals already added (if you selected an Azure Machine Learning online deployment earlier). The signals (data drift, prediction drift, and data quality) use recent, past production data as the comparison reference dataset and use smart defaults for metrics and thresholds. ++ :::image type="content" source="media/how-to-monitor-models/model-monitoring-monitoring-signals.png" alt-text="Screenshot showing default monitoring signals." lightbox="media/how-to-monitor-models/model-monitoring-monitoring-signals.png"::: + 1. Select **Edit** next to the data drift signal.+1. In the data drift **Edit signal** window, configure the following: - :::image type="content" source="media/how-to-monitor-models/model-monitoring-advanced-select-signals.png" alt-text="Screenshot showing how to select monitoring signals." lightbox="media/how-to-monitor-models/model-monitoring-advanced-select-signals.png"::: + 1. For the production data asset, select your model inputs with the desired lookback window size. + 1. Select your training dataset to use as the reference dataset. + 1. Select the target (output) column. + 1. Select to monitor drift for the top N most important features, or monitor drift for a specific set of features. + 1. Select your preferred metrics and thresholds. -1. In the data drift **Edit signal** window, configure following: - 1. Change the baseline dataset to use training data. - 1. Monitor drift for top 1-20 important features, or monitor drift for specific set of features. - 1. Select your preferred metrics and set thresholds. -1. Select **Save** to return to the "Select monitoring signals" section. + :::image type="content" source="media/how-to-monitor-models/model-monitoring-configure-signals.png" alt-text="Screenshot showing how to configure selected monitoring signals." lightbox="media/how-to-monitor-models/model-monitoring-configure-signals.png"::: - :::image type="content" source="media/how-to-monitor-models/model-monitoring-advanced-config-edit-signal.png" alt-text="Screenshot showing how to edit signal settings for model monitoring." lightbox="media/how-to-monitor-models/model-monitoring-advanced-config-edit-signal.png"::: +1. Select **Save** to return to the **Select monitoring signals** page. +1. Select **Add** to open the **Edit Signal** window. +1. Select **Feature attribution drift (preview)** to configure the feature attribution drift signal as follows: -1. Select **Add** to add another signal. -1. In the "Add Signal" screen, select the **Feature Attribution Drift** panel. -1. Enter a name for Feature Attribution Drift signal. Feature attribution drift currently requires a few additional steps: -1. Configure your data assets for Feature Attribution Drift - 1. In your model creation wizard, add your custom data asset from your [custom data collection](how-to-collect-production-data.md) called 'model inputs and outputs' which combines your joined model inputs and data assets as a separate data context. - - :::image type="content" source="media/how-to-monitor-models/feature-attribution-drift-inputs-outputs.png" alt-text="Screenshot showing how to configure a custom data asset with inputs and outputs joined." 
lightbox="media/how-to-monitor-models/feature-attribution-drift-inputs-outputs.png"::: - - 1. Specify your training reference dataset that is used in the feature attribution drift component, and select your 'target column name' field, which is required to enable feature importance. - 1. Confirm your parameters are correct -1. Adjust the data window size according to your business case. -1. Adjust the threshold according to your need. -1. Select **Save** to return to the "Select monitoring signals" section. -1. If you're done with editing or adding signals, select **Next**. + 1. Select the production data asset with your model inputs and the desired lookback window size. + 1. Select the production data asset with your model outputs. + 1. Select the common column between these data assets to join them on. If the data was collected with the [data collector](how-to-collect-production-data.md), the common column is `correlationid`. + 1. (Optional) If you used the data collector to collect data where your model inputs and outputs are already joined, select the joined dataset as your production data asset and **Remove** step 2 in the configuration panel. + 1. Select your training dataset to use as the reference dataset. + 1. Select the target (output) column for your training dataset. + 1. Select your preferred metric and threshold. - :::image type="content" source="media/how-to-monitor-models/model-monitoring-advanced-config-add-signal.png" alt-text="Screenshot showing settings for adding signals." lightbox="media/how-to-monitor-models/model-monitoring-advanced-config-add-signal.png"::: + :::image type="content" source="media/how-to-monitor-models/model-monitoring-configure-feature-attribution-drift.png" alt-text="Screenshot showing how to configure feature attribution drift signal." lightbox="media/how-to-monitor-models/model-monitoring-configure-feature-attribution-drift.png"::: -1. In the "Notification" screen, enable alert notification for each signal. -1. Select **Next**. +1. Select **Save** to return to the **Select monitoring signals** page. - :::image type="content" source="media/how-to-monitor-models/model-monitoring-advanced-config-notification.png" alt-text="Screenshot of settings on the notification screen." lightbox="media/how-to-monitor-models/model-monitoring-advanced-config-notification.png"::: + :::image type="content" source="media/how-to-monitor-models/model-monitoring-configured-signals.png" alt-text="Screenshot showing the configured signals." lightbox="media/how-to-monitor-models/model-monitoring-configured-signals.png"::: -1. Review your settings on the "Review monitoring settings" page. -1. Select **Create** to confirm your settings for advanced model monitoring. +1. When you're finished with your monitoring signals configuration, select **Next** to go to the **Notifications** page. +1. On the **Notifications** page, enable alert notifications for each signal and select **Next**. +1. Review your settings on the **Review monitoring settings** page. :::image type="content" source="media/how-to-monitor-models/model-monitoring-advanced-config-review.png" alt-text="Screenshot showing review page of the advanced configuration for model monitoring." lightbox="media/how-to-monitor-models/model-monitoring-advanced-config-review.png"::: +1. Select **Create** to create your advanced model monitor. 
+ -## Set up model monitoring by bringing your own production data to Azure Machine Learning +## Set up model monitoring by bringing your production data to Azure Machine Learning ++You can also set up model monitoring for models deployed to Azure Machine Learning batch endpoints or deployed outside of Azure Machine Learning. If you don't have a deployment, but you have production data, you can use the data to perform continuous model monitoring. To monitor these models, you must be able to: -You can also set up model monitoring for models deployed to Azure Machine Learning batch endpoints or deployed outside of Azure Machine Learning. If you have production data but no deployment, you can use the data to perform continuous model monitoring. To monitor these models, you must meet the following requirements: +* Collect production inference data from models deployed in production. +* Register the production inference data as an Azure Machine Learning data asset, and ensure continuous updates of the data. +* Provide a custom data preprocessing component and register it as an Azure Machine Learning component. -* You have a way to collect production inference data from models deployed in production. -* You can register the collected production inference data as an Azure Machine Learning data asset, and ensure continuous updates of the data. -* You can provide a data preprocessing component and register it as an Azure Machine Learning component. The Azure Machine Learning component must have these input and output signatures: +You must provide a custom data preprocessing component if your data isn't collected with the [data collector](how-to-collect-production-data.md). Without this custom data preprocessing component, the Azure Machine Learning model monitoring system won't know how to process your data into tabular form with support for time windowing. - | input/output | signature name | type | description | example value | +Your custom preprocessing component must have these input and output signatures: ++ | Input/Output | Signature name | Type | Description | Example value | ||||||- | input | data_window_start | literal, string | data window start-time in ISO8601 format. | 2023-05-01T04:31:57.012Z | - | input | data_window_end | literal, string | data window end-time in ISO8601 format. | 2023-05-01T04:31:57.012Z | - | input | input_data | uri_folder | The collected production inference data, which is registered as Azure Machine Learning data asset. | azureml:myproduction_inference_data:1 | - | output | preprocessed_data | mltable | A tabular dataset, which matches a subset of baseline data schema. | | + | input | `data_window_start` | literal, string | data window start-time in ISO8601 format. | 2023-05-01T04:31:57.012Z | + | input | `data_window_end` | literal, string | data window end-time in ISO8601 format. | 2023-05-01T04:31:57.012Z | + | input | `input_data` | uri_folder | The collected production inference data, which is registered as an Azure Machine Learning data asset. | azureml:myproduction_inference_data:1 | + | output | `preprocessed_data` | mltable | A tabular dataset, which matches a subset of the reference data schema. | | ++For an example of a custom data preprocessing component, see [custom_preprocessing in the azuremml-examples GitHub repo](https://github.com/Azure/azureml-examples/tree/main/cli/monitoring/components/custom_preprocessing). 
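As a rough, illustrative sketch of the signature described above (not the linked `custom_preprocessing` sample itself), a preprocessing script could take the following shape. The `.jsonl` layout, the `time` column, and the parquet intermediate are assumptions about how your production data was collected and stored:

```python
# preprocess.py - illustrative sketch honoring the custom preprocessing signature.
import argparse
import os
import pandas as pd
import mltable

parser = argparse.ArgumentParser()
parser.add_argument("--data_window_start", type=str)   # ISO8601 start of the window
parser.add_argument("--data_window_end", type=str)     # ISO8601 end of the window
parser.add_argument("--input_data", type=str)          # uri_folder with collected .jsonl files
parser.add_argument("--preprocessed_data", type=str)   # output folder registered as mltable
args = parser.parse_args()

start = pd.Timestamp(args.data_window_start)
end = pd.Timestamp(args.data_window_end)

# read every .jsonl file in the collected-data folder
frames = []
for root, _, files in os.walk(args.input_data):
    for name in files:
        if name.endswith(".jsonl"):
            frames.append(pd.read_json(os.path.join(root, name), lines=True))
df = pd.concat(frames, ignore_index=True)

# keep only rows inside the requested time window (assumes a 'time' column exists)
df["time"] = pd.to_datetime(df["time"], utc=True)
df = df[(df["time"] >= start) & (df["time"] <= end)]

# write the tabular result and register it as an MLTable
os.makedirs(args.preprocessed_data, exist_ok=True)
parquet_path = os.path.join(args.preprocessed_data, "preprocessed.parquet")
df.to_parquet(parquet_path, index=False)
tbl = mltable.from_parquet_files(paths=[{"file": parquet_path}])
tbl.save(args.preprocessed_data)
```

The three inputs and one output correspond one-to-one to the signature table above; wrapping the script as an Azure Machine Learning command component is left out of this sketch.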
# [Azure CLI](#tab/azure-cli) az ml schedule create -f ./model-monitoring-with-collected-data.yaml The following YAML contains the definition for model monitoring with production inference data that you've collected. -```yaml -# model-monitoring-with-collected-data.yaml -$schema: http://azureml/sdk-2-0/Schedule.json -name: fraud_detection_model_monitoring -display_name: Fraud detection model monitoring -description: Fraud detection model monitoring with your own production data --trigger: - # perform model monitoring activity daily at 3:15am - type: recurrence - frequency: day #can be minute, hour, day, week, month - interval: 1 # #every day - schedule: - hours: 3 # at 3am - minutes: 15 # at 15 mins after 3am --create_monitor: - compute: - instance_type: standard_e4s_v3 - runtime_version: 3.2 - monitoring_target: - ml_task: classification - endpoint_deployment_id: azureml:fraud-detection-endpoint:fraud-detection-deployment - - monitoring_signals: - advanced_data_drift: # monitoring signal name, any user defined name works - type: data_drift - # define target dataset with your collected data - production_data: - input_data: - path: azureml:my_production_inference_data_model_inputs:1 # your collected data is registered as Azure Machine Learning asset - type: uri_folder - data_context: model_inputs - pre_processing_component: azureml:production_data_preprocessing:1 - reference_data: - input_data: - path: azureml:my_model_training_data:1 # use training data as comparison baseline - type: mltable - data_context: training - target_column_name: is_fraud - features: - top_n_feature_importance: 20 # monitor drift for top 20 features - metric_thresholds: - numberical: - jensen_shannon_distance: 0.01 - categorical: - pearsons_chi_squared_test: 0.02 - advanced_prediction_drift: # monitoring signal name, any user defined name works - type: prediction_drift - # define target dataset with your collected data - production_data: - input_data: - path: azureml:my_production_inference_data_model_outputs:1 # your collected data is registered as Azure Machine Learning asset - type: uri_folder - data_context: model_outputs - pre_processing_component: azureml:production_data_preprocessing:1 - reference_data: - input_data: - path: azureml:my_model_validation_data:1 # use training data as comparison baseline - type: mltable - data_context: validation - metric_thresholds: - categorical: - pearsons_chi_squared_test: 0.02 - advanced_data_quality: - type: data_quality - production_data: - input_data: - path: azureml:my_production_inference_data_model_inputs:1 # your collected data is registered as Azure Machine Learning asset - type: uri_folder - data_context: model_inputs - pre_processing_component: azureml:production_data_preprocessing:1 - reference_data: - input_data: - path: azureml:my_model_training_data:1 - type: mltable - data_context: training - metric_thresholds: - numerical: - null_value_rate: 0.03 - categorical: - out_of_bounds_rate: 0.03 - feature_attribution_drift_signal: - type: feature_attribution_drift - production_data: - # using production_data collected outside of Azure Machine Learning - - input_data: - path: azureml:my_model_inputs:1 - type: uri_folder - data_context: model_inputs - data_column_names: - correlation_id: correlation_id - pre_processing_component: azureml:model_inputs_preprocessing - data_window_size: P30D - - input_data: - path: azureml:my_model_outputs:1 - type: uri_folder - data_context: model_outputs - data_column_names: - correlation_id: correlation_id - prediction: is_fraund - 
prediction_probability: is_fraund_probability - pre_processing_component: azureml:model_outputs_preprocessing - data_window_size: P30D - reference_data: - input_data: - path: azureml:my_model_training_data:1 - type: mltable - data_context: training - target_column_name: is_fraud - metric_thresholds: - normalized_discounted_cumulative_gain: 0.9 - - alert_notification: - emails: - - abc@example.com - - def@example.com --``` -# [Python](#tab/python) +# [Python SDK](#tab/python) -Once you've satisfied the previous requirements, you can set up model monitoring using the following Python code: +Once you've satisfied the previous requirements, you can set up model monitoring with the following Python code: ```python from azure.identity import InteractiveBrowserCredential production_data = ProductionData( ) -# training data to be used as baseline dataset +# training data to be used as reference dataset reference_data_training = ReferenceData( input_data=Input( type="mltable", created_monitor = poller.result() # [Studio](#tab/azure-studio) -The studio currently doesn't support monitoring for models that are deployed outside of Azure Machine Learning. See the Azure CLI or Python tabs instead. +The studio currently doesn't support configuring monitoring for models that are deployed outside of Azure Machine Learning. See the Azure CLI or Python SDK tabs instead. ++Once you've configured your monitor with the CLI or SDK, you can view the monitoring results in the studio. For more information on interpreting monitoring results, see [Interpreting monitoring results](how-to-monitor-model-performance.md#interpret-monitoring-results). ## Set up model monitoring with custom signals and metrics -With Azure Machine Learning model monitoring, you have the option to define your own custom signal and implement any metric of your choice to monitor your model. You can register this signal as an Azure Machine Learning component. When your Azure Machine Learning model monitoring job runs on the specified schedule, it computes the metric(s) you have defined within your custom signal, just as it does for the prebuilt signals (data drift, prediction drift, data quality, & feature attribution drift). To get started with defining your own custom signal, you must meet the following requirement: +With Azure Machine Learning model monitoring, you can define your own custom signal and implement any metric of your choice to monitor your model. You can register this signal as an Azure Machine Learning component. When your Azure Machine Learning model monitoring job runs on the specified schedule, it computes the metric(s) you've defined within your custom signal, just as it does for the prebuilt signals (data drift, prediction drift, and data quality). -* You must define your custom signal and register it as an Azure Machine Learning component. The Azure Machine Learning component must have these input and output signatures: +To set up a custom signal to use for model monitoring, you must first define the custom signal and register it as an Azure Machine Learning component. The Azure Machine Learning component must have these input and output signatures: ### Component input signature -The component input DataFrame should contain a `mltable` with the processed data from the preprocessing component and any number of literals, each representing an implemented metric as part of the custom signal component. For example, if you have implemented one metric, `std_deviation`, then you'll need an input for `std_deviation_threshold`. 
Generally, there should be one input per metric with the name {metric_name}_threshold. +The component input DataFrame should contain the following items: - | signature name | type | description | example value | - ||||| - | production_data | mltable | A tabular dataset, which matches a subset of baseline data schema. | | - | std_deviation_threshold | literal, string | Respective threshold for the implemented metric. | 2 | +- An `mltable` with the processed data from the preprocessing component +- Any number of literals, each representing an implemented metric as part of the custom signal component. For example, if you've implemented the metric, `std_deviation`, then you'll need an input for `std_deviation_threshold`. Generally, there should be one input per metric with the name `<metric_name>_threshold`. ++| Signature name | Type | Description | Example value | +||||| +| production_data | mltable | A tabular dataset that matches a subset of the reference data schema. | | +| std_deviation_threshold | literal, string | Respective threshold for the implemented metric. | 2 | ### Component output signature The component output port should have the following signature. - | signature name | type | description | + | Signature name | Type | Description | ||||- | signal_metrics | mltable | The ml table that contains the computed metrics. The schema is defined in the signal_metrics schema section in the next section. | + | signal_metrics | mltable | The mltable that contains the computed metrics. The schema is defined in the next section [signal_metrics schema](#signal_metrics-schema). | #### signal_metrics schema-The component output DataFrame should contain four columns: `group`, `metric_name`, `metric_value`, and `threshold_value`: - | signature name | type | description | example value | +The component output DataFrame should contain four columns: `group`, `metric_name`, `metric_value`, and `threshold_value`. ++ | Signature name | Type | Description | Example value | |||||- | group | literal, string | Top level logical grouping to be applied to this custom metric. | TRANSACTIONAMOUNT | + | group | literal, string | Top-level logical grouping to be applied to this custom metric. | TRANSACTIONAMOUNT | | metric_name | literal, string | The name of the custom metric. | std_deviation |- | metric_value | mltable | The value of the custom metric. | 44,896.082 | - | threshold_value | | The threshold for the custom metric. | 2 | + | metric_value | numerical | The value of the custom metric. | 44,896.082 | + | threshold_value | numerical | The threshold for the custom metric. | 2 | -Here's an example output from a custom signal component computing the metric, `std_deviation`: +The following table shows an example output from a custom signal component that computes the `std_deviation` metric: | group | metric_value | metric_name | threshold_value | ||||| Here's an example output from a custom signal component computing the metric, `s | DIGITALITEMCOUNT | 7.238 | std_deviation | 2 | | PHYSICALITEMCOUNT | 5.509 | std_deviation | 2 | -An example custom signal component definition and metric computation code can be found in our GitHub repo at [https://github.com/Azure/azureml-examples/tree/main/cli/monitoring/components/custom_signal](https://github.com/Azure/azureml-examples/tree/main/cli/monitoring/components/custom_signal). 
+To see an example custom signal component definition and metric computation code, see [custom_signal in the azureml-examples repo](https://github.com/Azure/azureml-examples/tree/main/cli/monitoring/components/custom_signal). # [Azure CLI](#tab/azure-cli) -Once you've satisfied the previous requirements, you can set up model monitoring with the following CLI command and YAML definition: +Once you've satisfied the requirements for using custom signals and metrics, you can set up model monitoring with the following CLI command and YAML definition: ```azurecli az ml schedule create -f ./custom-monitoring.yaml ``` -The following YAML contains the definition for model monitoring with a custom signal. It's assumed that you have already created and registered your component with the custom signal definition to Azure Machine Learning. In this example, the `component_id` of the registered custom signal component is `azureml:my_custom_signal:1.0.0`: +The following YAML contains the definition for model monitoring with a custom signal. Some things to notice about the code: ++- It assumes that you've already created and registered your component with the custom signal definition in Azure Machine Learning. +- The `component_id` of the registered custom signal component is `azureml:my_custom_signal:1.0.0`. +- If you've collected your data with the [data collector](how-to-collect-production-data.md), you can omit the `pre_processing_component` property. If you wish to use a preprocessing component to preprocess production data not collected by the data collector, you can specify it. ```yaml # custom-monitoring.yaml trigger: interval: 7 # #every day create_monitor: compute:- instance_type: "standard_e8s_v3" - runtime_version: "3.2" + instance_type: "standard_e4s_v3" + runtime_version: "3.3" monitoring_signals: customSignal: type: custom create_monitor: path: azureml:my_production_data:1 data_context: test data_window:- trailing_window_size: P30D - trailing_window_offset: P7D + lookback_window_size: P30D + lookback_window_offset: P7D pre_processing_component: azureml:custom_preprocessor:1.0.0 metric_thresholds: - metric_name: std_deviation create_monitor: - abc@example.com ``` -# [Python](#tab/python) +# [Python SDK](#tab/python) The Python SDK currently doesn't support monitoring for custom signals. See the Azure CLI tab instead. The studio currently doesn't support monitoring for custom signals. See the Azur -## Next steps +## Interpret monitoring results ++After you've configured your model monitor and the first run has completed, you can navigate back to the **Monitoring** tab in Azure Machine Learning studio to view the results. ++- From the main **Monitoring** view, select the name of your model monitor to see the Monitor overview page. This page shows the corresponding model, endpoint, and deployment, along with details regarding the signals you configured. The next image shows a monitoring dashboard that includes data drift and data quality signals. Depending on the monitoring signals you configured, your dashboard might look different. ++ :::image type="content" source="media/how-to-monitor-models/monitoring-dashboard.png" alt-text="Screenshot showing a monitoring dashboard." lightbox="media/how-to-monitor-models/monitoring-dashboard.png"::: ++- Look in the **Notifications** section of the dashboard to see, for each signal, which features breached the configured threshold for their respective metrics: ++- Select the **data_drift** to go to the data drift details page. 
On the details page, you can see the data drift metric value for each numerical and categorical feature that you included in your monitoring configuration. When your monitor has more than one run, you'll see a trendline for each feature. ++ :::image type="content" source="media/how-to-monitor-models/data-drift-details-page.png" alt-text="Screenshot showing the details page of the data drift signal." lightbox="media/how-to-monitor-models/data-drift-details-page.png"::: ++- To view an individual feature in detail, select the name of the feature to view the production distribution compared to the reference distribution. This view also allows you to track drift over time for that specific feature. ++ :::image type="content" source="media/how-to-monitor-models/data-drift-individual-feature.png" alt-text="Screenshot showing the data drift details for an individual feature." lightbox="media/how-to-monitor-models/data-drift-individual-feature.png"::: ++- Return to the monitoring dashboard and select **data_quality** to view the data quality signal page. On this page, you can see the null value rates, out-of-bounds rates, and data type error rates for each feature you're monitoring. ++ :::image type="content" source="media/how-to-monitor-models/data-quality-details-page.png" alt-text="Screenshot showing the details page of the data quality signal." lightbox="media/how-to-monitor-models/data-quality-details-page.png"::: ++Model monitoring is a continuous process. With Azure Machine Learning model monitoring, you can configure multiple monitoring signals to obtain a broad view into the performance of your models in production. +++## Integrate Azure Machine Learning model monitoring with Azure Event Grid ++You can use events generated by Azure Machine Learning model monitoring to set up event-driven applications, processes, or CI/CD workflows with [Azure Event Grid](how-to-use-event-grid.md). You can consume events through various event handlers, such as Azure Event Hubs, Azure functions, and logic apps. Based on the drift detected by your monitors, you can take action programmatically, such as by setting up a machine learning pipeline to re-train a model and re-deploy it. ++To get started with integrating Azure Machine Learning model monitoring with Event Grid: ++1. Follow the steps in see [Set up in Azure portal](how-to-use-event-grid.md#set-up-in-azure-portal). Give your **Event Subscription** a name, such as MonitoringEvent, and select only the **Run status changed** box under **Event Types**. ++ > [!WARNING] + > + > Be sure to select **Run status changed** for the event type. Don't select **Dataset drift detected**, as it applies to data drift v1, rather than Azure Machine Learning model monitoring. ++1. Follow the steps in [Filter & subscribe to events](how-to-use-event-grid.md#filter--subscribe-to-events) to set up event filtering for your scenario. Navigate to the **Filters** tab and add the following **Key**, **Operator**, and **Value** under **Advanced Filters**: ++ - **Key**: `data.RunTags.azureml_modelmonitor_threshold_breached` + - **Value**: has failed due to one or more features violating metric thresholds + - **Operator**: String contains ++ With this filter, events are generated when the run status changes (from Completed to Failed, or from Failed to Completed) for any monitor within your Azure Machine Learning workspace. ++1. 
To filter at the monitoring level, use the following **Key**, **Operator**, and **Value** under **Advanced Filters**: ++ - **Key**: `data.RunTags.azureml_modelmonitor_threshold_breached` + - **Value**: `your_monitor_name_signal_name` + - **Operator**: String contains ++ Ensure that `your_monitor_name_signal_name` is the name of a signal in the specific monitor you want to filter events for. For example, `credit_card_fraud_monitor_data_drift`. For this filter to work, this string must match the name of your monitoring signal. You should name your signal with both the monitor name and the signal name for this case. ++1. When you've completed your **Event Subscription** configuration, select the desired endpoint to serve as your event handler, such as Azure Event Hubs. +1. After events have been captured, you can view them from the endpoint page: ++ :::image type="content" source="media/how-to-monitor-models/events-on-endpoint-page.png" alt-text="Screenshot showing events viewed from the endpoint page." lightbox="media/how-to-monitor-models/events-on-endpoint-page.png"::: ++You can also view events in the Azure Monitor **Metrics** tab: ++ :::image type="content" source="media/how-to-monitor-models/events-in-azure-monitor-metrics-tab.png" alt-text="Screenshot showing events viewed from the Azure monitor metrics tab." lightbox="media/how-to-monitor-models/events-in-azure-monitor-metrics-tab.png"::: ++++## Related content - [Data collection from models in production (preview)](concept-data-collection.md) - [Collect production data from models deployed for real-time inferencing](how-to-collect-production-data.md) |
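To illustrate the advanced filter described in the Event Grid steps above, here's a rough sketch (not an official sample) of how a downstream handler might inspect an incoming event and react only when a specific monitoring signal breaches its thresholds. The signal name and the way the event JSON arrives are assumptions.

```python
# Sketch: inspect an Event Grid event emitted for a model monitoring run and
# act only when the run failed because a signal breached its metric thresholds.
# The payload fields mirror the advanced filter keys described above; the
# handler wiring (Event Hubs, Functions, and so on) is intentionally omitted.
import json

MONITOR_SIGNAL = "credit_card_fraud_monitor_data_drift"  # hypothetical monitor + signal name


def handle_event(event_json: str) -> None:
    event = json.loads(event_json)
    run_tags = event.get("data", {}).get("RunTags", {})
    breach_message = run_tags.get("azureml_modelmonitor_threshold_breached", "")

    if MONITOR_SIGNAL in breach_message and "violating metric thresholds" in breach_message:
        # Kick off a downstream action here, for example trigger a retraining pipeline.
        print(f"Threshold breached for {MONITOR_SIGNAL}: {breach_message}")
```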
machine-learning | How To R Modify Script For Production | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-modify-script-for-production.md | mlflow_log_param(<key-name>, <value>) ## Create an environment -To run your R script, you'll use the `ml` extension for Azure CLI, also referred to as CLI v2. The `ml` command uses a YAML job definitions file. For more information about submitting jobs with `az ml`, see [Train models with Azure Machine Learning CLI](how-to-train-model.md?tabs=azurecli#4-submit-the-training-job). +To run your R script, you'll use the `ml` extension for Azure CLI, also referred to as CLI v2. The `ml` command uses a YAML job definitions file. For more information about submitting jobs with `az ml`, see [Train models with Azure Machine Learning CLI](how-to-train-model.md?tabs=azurecli#3-submit-the-training-job). The YAML job file specifies an [environment](concept-environments.md). You'll need to create this environment in your workspace before you can run the job. |
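As a loose sketch of that prerequisite (this isn't the article's own sample; the names, the local Docker context path, and the use of the Python SDK rather than the CLI are all assumptions), registering a custom environment might look like the following.

```python
# Sketch: register a custom environment (for example, one whose Dockerfile
# installs R and the required packages) before submitting the job that
# references it. All names and the "docker-context" path are placeholders.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import BuildContext, Environment
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

r_environment = Environment(
    name="r-training-environment",
    description="Custom environment built from a local Docker build context.",
    build=BuildContext(path="docker-context"),  # folder containing the Dockerfile
)

ml_client.environments.create_or_update(r_environment)
```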
machine-learning | How To Registry Network Isolation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-registry-network-isolation.md | If you don't have a secure workspace configuration, you can create it using the :::image type="content" source="./media/how-to-registry-network-isolation/basic-network-isolation-registry.png" alt-text="Diagram of registry connected to Virtual network containing workspace and associated resources using private endpoint."::: -+## Limitations +If you are using an Azure Machine Learning registry with network isolation, you won't be able to see the assets in Studio. You also won't be able to perform any operations on Azure Machine Learning registry or assets under it using Studio. Please use the Azure Machine Learning CLI or SDK instead. ## Scenario: workspace configuration is secure and Azure Machine Learning registry is public This section describes the scenarios and required network configuration if you have a secure workspace configuration but using a public registry. |
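Relating to the limitation noted above, a minimal Python SDK sketch for browsing a registry without the studio might look like the following. The registry name is a placeholder, and the listing call is an assumption based on the workspace-scoped SDK pattern rather than content from this article.

```python
# Sketch: connect to an Azure Machine Learning registry with the Python SDK and
# enumerate its model assets, since the studio can't browse a network-isolated
# registry. The registry name is a placeholder.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

registry_client = MLClient(
    credential=DefaultAzureCredential(),
    registry_name="<REGISTRY_NAME>",
)

for model in registry_client.models.list():
    print(model.name)
```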
machine-learning | How To Train Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-model.md | When you train using the REST API, data and training scripts must be uploaded to ### 2. Create a compute resource for training > [!NOTE]-> To try [serverless compute](./how-to-use-serverless-compute.md), skip this step and proceed to [ 4. Submit the training job](#4-submit-the-training-job). +> To try [serverless compute](./how-to-use-serverless-compute.md), skip this step and proceed to [ 3. Submit the training job](#3-submit-the-training-job). An Azure Machine Learning compute cluster is a fully managed compute resource that can be used to run the training job. In the following examples, a compute cluster named `cpu-compute` is created. curl -X PUT \ -### 4. Submit the training job +### 3. Submit the training job # [Python SDK](#tab/python) |
machine-learning | How To Troubleshoot Batch Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-batch-endpoints.md | __Reason__: The access token used to invoke the REST API for the endpoint/deploy __Solution__: When generating an authentication token to be used with the Batch Endpoint REST API, ensure the `resource` parameter is set to `https://ml.azure.com`. Please notice that this resource is different from the resource you need to indicate to manage the endpoint using the REST API. All Azure resources (including batch endpoints) use the resource `https://management.azure.com` for managing them. Ensure you use the right resource URI on each case. Notice that if you want to use the management API and the job invocation API at the same time, you'll need two tokens. For details see: [Authentication on batch endpoints (REST)](how-to-authenticate-batch-endpoint.md?tabs=rest). +### No valid deployments to route to. Please check that the endpoint has at least one deployment with positive weight values or use a deployment specific header to route. ++__Reason__: Default Batch Deployment isn't set correctly. ++__Solution__: ensure the default batch deployment is set correctly. You may need to update the default batch deployment. For details see: [Update the default batch deployment](how-to-use-batch-model-deployments.md?tabs=cli&#update-the-default-batch-deployment) + ## Limitations and not supported scenarios When designing machine learning solutions that rely on batch endpoints, some configurations and scenarios may not be supported. |
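To make the token guidance above concrete, here's a minimal sketch that requests the two different resources with `azure-identity` (the use of that library is an assumption; the article's own examples may acquire tokens differently).

```python
# Sketch: acquire separate tokens for invoking a batch endpoint (ml.azure.com)
# and for managing it through ARM (management.azure.com), as the troubleshooting
# guidance above distinguishes between the two resources.
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()

# Token for the job invocation REST API of the batch endpoint.
invoke_token = credential.get_token("https://ml.azure.com/.default").token

# Token for the management REST API (create, update, delete operations).
management_token = credential.get_token("https://management.azure.com/.default").token
```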
machine-learning | How To Custom Tool Package Creation And Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-custom-tool-package-creation-and-usage.md | Last updated 09/12/2023 # Custom tool package creation and usage -When developing flows, you can not only use the built-in tools provided by prompt flow, but also develop your own custom tool. In this document, we guide you through the process of developing your own tool package, offering detailed steps and advice on how to utilize your creation. +When developing flows, you can not only use the built-in tools provided by prompt flow, but also develop your own custom tool. In this document, we guide you through the process of developing your own tool package, offering detailed steps and advice on how to utilize the custom tool package. After successful installation, your custom "tool" can show up in the tool list: :::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/test-customer-tool-on-ui.png" alt-text="Screenshot of custom tools in the UI tool list."lightbox = "./media/how-to-custom-tool-package-creation-and-usage/test-customer-tool-on-ui.png"::: ## Create your own tool package -Your tool package should be a python package. To develop your custom tool, follow the steps **Create your own tool package** and **build and share the tool package** in [Create and Use Tool package](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/create-and-use-tool-package.html). You can also [Add a tool icon](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/add-a-tool-icon.html) and [Add Category and tags](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/add-category-and-tags-for-tool.html) for your tool. +Your tool package should be a python package. To develop your custom tool, follow the steps **Create your own tool package** and **build and share the tool package** in [Create and Use Tool Package](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/create-and-use-tool-package.html). You can find more advanced development guidance in [How to develop a tool](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/https://docsupdatetracker.net/index.html). ## Prepare runtime -To add the custom tool to your tool list, it's necessary to create a runtime, which is based on a customized environment where your custom tool is preinstalled. Here we use [my-tools-package](https://pypi.org/project/my-tools-package/) as an example to prepare the runtime. +In order to add the custom tool to your tool list for use, it's necessary to prepare the runtime. Here we use [my-tools-package](https://pypi.org/project/my-tools-package/) as an example. ++**If you use the automatic runtime**, you can readily install the package by adding the custom tool package name into the `requirements.txt` file in the flow folder. Then select the 'Save and install' button to start installation. After completion, you can see the custom tools displayed in the tool list. To learn more, see [How to create and manage runtime](./how-to-create-manage-runtime.md). ++**If you use the compute instance runtime**, which should be based on a customized environment where your custom tool is preinstalled, please take the following steps: ### Create customized environment To add the custom tool to your tool list, it's necessary to create a runtime, wh 3. Change flow based on your requirements and run flow in the selected runtime. 
:::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/test-customer-tool-on-ui-step-2.png" alt-text="Screenshot of flow in Azure Machine Learning studio showing adding a tool."lightbox ="./media/how-to-custom-tool-package-creation-and-usage/test-customer-tool-on-ui-step-2.png"::: -## Test from VS Code extension -+## FAQ +### How to install the custom tool package in the VS Code extension? 1. Install prompt flow for VS Code extension :::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/prompt-flow-vs-code-extension.png" alt-text="Screenshot of prompt flow VS Code extension."lightbox ="./media/how-to-custom-tool-package-creation-and-usage/prompt-flow-vs-code-extension.png":::-2. Go to terminal and install your tool package in conda environment of the extension. Assume your conda env name is `prompt-flow`. +2. Go to terminal and install the tool package in conda environment of the extension. Assume your conda env name is `prompt-flow`. ```sh (local_test) PS D:\projects\promptflow\tool-package-quickstart> conda activate prompt-flow- (prompt-flow) PS D:\projects\promptflow\tool-package-quickstart> pip install .\dist\my_tools_package-0.0.1-py3-none-any.whl + (prompt-flow) PS D:\projects\promptflow\tool-package-quickstart> pip install my-tools-package==0.0.1 ``` 3. Go to the extension and open one flow folder. Select 'flow.dag.yaml' and preview the flow. Next, select `+` button and you can see your tools. You need to **reload the windows** to clean previous cache if you don't see your tool in the list. :::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/auto-list-tool-in-extension.png" alt-text="Screenshot of the VS Code showing the tools." lightbox ="./media/how-to-custom-tool-package-creation-and-usage/auto-list-tool-in-extension.png"::: -## FAQ - ### Why is my custom tool not showing up in the UI? You can test your tool package using the following script to ensure that you've packaged your tool YAML files and configured the package tool entry point correctly. |
machine-learning | How To Monitor Generative Ai Applications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-monitor-generative-ai-applications.md | -> Monitoring is currently in public preview. These previews are provided without a service-level agreement, and are not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. +> Model monitoring for generative AI applications is currently in public preview. These previews are provided without a service-level agreement, and are not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Azure Machine Learning model monitoring for generative AI applications makes it easier for you to monitor your LLM applications in production for safety and quality on a cadence to ensure it's delivering maximum business impact. Monitoring ultimately helps maintain the quality and safety of your generative AI applications. Capabilities and integrations include: |
machine-learning | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/overview.md | Last updated 10/24/2023 # Overview of tools in prompt flow+This page provides an overview of the tools that are available in prompt flow. It also offers instructions on how to create your own custom tool and how to install custom tools. -The following table provides an index of tools in prompt flow. If existing tools don't meet your requirements, you can [develop your own custom tool and make a tool package](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/create-and-use-tool-package.html). ++## An index of tools +The following table shows an index of tools in prompt flow. | Tool name | Description | Environment | Package name | ||--|-|--| The following table provides an index of tools in prompt flow. If existing tools | [Vector DB Lookup](./vector-db-lookup-tool.md) | Searches a vector-based query from existing vector database. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) | | [Vector Index Lookup](./vector-index-lookup-tool.md) | Searches text or a vector-based query from Azure Machine Learning vector index. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) | -To discover more custom tools developed by the open-source community, see [More custom tools](https://microsoft.github.io/promptflow/integrations/tools/https://docsupdatetracker.net/index.html). --For the tools to use in the custom environment, see [Custom tool package creation and usage](../how-to-custom-tool-package-creation-and-usage.md#prepare-runtime) to prepare the runtime. Then the tools can be displayed in the tool list. +To discover more custom tools developed by the open-source community, see [More custom tools](https://microsoft.github.io/promptflow/integrations/tools/https://docsupdatetracker.net/index.html). + + +## Remarks +- If existing tools don't meet your requirements, you can [develop your own custom tool and make a tool package](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/create-and-use-tool-package.html). +- To install custom tools or add more tools to the custom environment, see [Custom tool package creation and usage](../how-to-custom-tool-package-creation-and-usage.md#prepare-runtime) to prepare the runtime. Then the tools can be displayed in the tool list. |
machine-learning | Reference Yaml Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-monitor.md | As the data used to train the model evolves in production, the distribution of t | `production_data` | Object | **Optional**. Description of production data to be analyzed for monitoring signal. | | | | `production_data.input_data` | Object | **Optional**. Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification. | | | | `production_data.data_context` | String | The context of data, it refers model production data and could be model inputs or model outputs | `model_inputs` | |-| `production_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `production_data.data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-own-production-data-to-azure-machine-learning). | | | +| `production_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `production_data.data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-production-data-to-azure-machine-learning). | | | | `production_data.data_window_size` | ISO8601 format |**Optional**. Data window size in days with ISO8601 format, for example `P7D`. This is the production data window to be computed for data drift. | By default the data window size is the last monitoring period. | | | `reference_data` | Object | **Optional**. Recent past production data is used as comparison baseline data if this isn't specified. Recommendation is to use training data as comparison baseline. | | | | `reference_data.input_data` | Object | Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification. | | | | `reference_data.data_context` | String | The context of data, it refers to the context that dataset was used before | `model_inputs`, `training`, `test`, `validation` | | | `reference_data.target_column_name` | Object | **Optional**. If the 'reference_data' is training data, this property is required for monitoring top N features for data drift. | | | | `reference_data.data_window` | Object | **Optional**. Data window of the reference data to be used as comparison baseline data. | Allow either rolling data window or fixed data window only. For using rolling data window, please specify `reference_data.data_window.trailing_window_offset` and `reference_data.data_window.trailing_window_size` properties. For using fixed data windows, please specify `reference_data.data_window.window_start` and `reference_data.data_window.window_end` properties. All property values must be in ISO8601 format | |-| `reference_data_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is **required** if `reference_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-own-production-data-to-azure-machine-learning). 
| | | +| `reference_data_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is **required** if `reference_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-production-data-to-azure-machine-learning). | | | | `features` | Object | **Optional**. Target features to be monitored for data drift. Some models might have hundreds or thousands of features, it's always recommended to specify interested features for monitoring. | One of following values: list of feature names, `features.top_n_feature_importance`, or `all_features` | Default `features.top_n_feature_importance = 10` if `production_data.data_context` is `training`, otherwise, default is `all_features` | | `alert_enabled` | Boolean | Turn on/off alert notification for the monitoring signal. `True` or `False` | | | | `metric_thresholds` | Object | List of metrics and thresholds properties for the monitoring signal. When threshold is exceeded and `alert_enabled` is `true`, user will receive alert notification. | | | Prediction drift tracks changes in the distribution of a model's prediction outp | `production_data` | Object | **Optional**. Description of production data to be analyzed for monitoring signal. | | | | `production_data.input_data` | Object | **Optional**. Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification.| | | | `production_data.data_context` | String | The context of data, it refers model production data and could be model inputs or model outputs | `model_outputs` | |-| `production_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `production_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-own-production-data-to-azure-machine-learning). | | | +| `production_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `production_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-production-data-to-azure-machine-learning). | | | | `production_data.data_window_size` | ISO8601 format |**Optional**. Data window size in days with ISO8601 format, for example `P7D`. This is the production data window to be computed for prediction drift. | By default the data window size is the last monitoring period.| | | `reference_data` | Object | **Optional**. Recent past production data is used as comparison baseline data if this isn't specified. Recommendation is to use validation or testing data as comparison baseline. | | | | `reference_data.input_data` | Object | Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification. | | | | `reference_data.data_context` | String | The context of data, it refers to the context that dataset come from. | `model_outputs`, `testing`, `validation` | | | `reference_data.target_column_name` | String | The name of target column, **Required** if the `reference_data.data_context` is `testing` or `validation` | | | | `reference_data.data_window` | Object | **Optional**. 
Data window of the reference data to be used as comparison baseline data. | Allow either rolling data window or fixed data window only. For using rolling data window, please specify `reference_data.data_window.trailing_window_offset` and `reference_data.data_window.trailing_window_size` properties. For using fixed data windows, please specify `reference_data.data_window.window_start` and `reference_data.data_window.window_end` properties. All property values must be in ISO8601 format | |-| `reference_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. **Required** if `reference_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-own-production-data-to-azure-machine-learning). | | | +| `reference_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. **Required** if `reference_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-production-data-to-azure-machine-learning). | | | | `alert_enabled` | Boolean | Turn on/off alert notification for the monitoring signal. `True` or `False` | | | | `metric_thresholds` | Object | List of metrics and thresholds properties for the monitoring signal. When threshold is exceeded and `alert_enabled` is `true`, user will receive alert notification. | | | | `metric_thresholds.numerical` | Object | Optional. List of metrics and thresholds in `key:value` format, `key` is the metric name, `value` is the threshold. | Allowed numerical metric names: `jensen_shannon_distance`, `normalized_wasserstein_distance`, `population_stability_index`, `two_sample_kolmogorov_smirnov_test`| | Data quality signal tracks data quality issues in production by comparing to tra | `production_data` | Object | **Optional**. Description of production data to be analyzed for monitoring signal. | | | | `production_data.input_data` | Object | **Optional**. Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification.| | | | `production_data.data_context` | String | The context of data, it refers model production data and could be model inputs or model outputs | `model_inputs`, `model_outputs` | |-| `production_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `production_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-own-production-data-to-azure-machine-learning). | | | +| `production_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `production_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-production-data-to-azure-machine-learning). | | | | `production_data.data_window_size` | ISO8601 format |**Optional**. Data window size in days with ISO8601 format, for example `P7D`. This is the production data window to be computed for data quality issues. 
| By default the data window size is the last monitoring period.| | | `reference_data` | Object | **Optional**. Recent past production data is used as comparison baseline data if this isn't specified. Recommendation is to use training data as comparison baseline. | | | | `reference_data.input_data` | Object | Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification. | | | | `reference_data.data_context` | String | The context of data, it refers to the context that dataset was used before | `model_inputs`, `model_outputs`, `training`, `test`, `validation` | | | `reference_data.target_column_name` | Object | **Optional**. If the 'reference_data' is training data, this property is required for monitoring top N features for data drift. | | | | `reference_data.data_window` | Object | **Optional**. Data window of the reference data to be used as comparison baseline data. | Allow either rolling data window or fixed data window only. For using rolling data window, please specify `reference_data.data_window.trailing_window_offset` and `reference_data.data_window.trailing_window_size` properties. For using fixed data windows, please specify `reference_data.data_window.window_start` and `reference_data.data_window.window_end` properties. All property values must be in ISO8601 format | |-| `reference_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `reference_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-own-production-data-to-azure-machine-learning). | | | +| `reference_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `reference_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-production-data-to-azure-machine-learning). | | | | `features` | Object | **Optional**. Target features to be monitored for data quality. Some models might have hundreds or thousands of features. It's always recommended to specify interested features for monitoring. | One of following values: list of feature names, `features.top_n_feature_importance`, or `all_features` | Default to `features.top_n_feature_importance = 10` if `reference_data.data_context` is `training`, otherwise default is `all_features` | | `alert_enabled` | Boolean | Turn on/off alert notification for the monitoring signal. `True` or `False` | | | | `metric_thresholds` | Object | List of metrics and thresholds properties for the monitoring signal. When threshold is exceeded and `alert_enabled` is `true`, user will receive alert notification. | | | The feature attribution of a model may change over time due to changes in the di | `production_data.input_data` | Object | **Optional**. Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification.| | | | `production_data.data_context` | String | The context of data. It refers to production model inputs data. | `model_inputs`, `model_outputs`, `model_inputs_outputs` | | | `production_data.data_column_names` | Object | Correlation column name and prediction column names in `key:value` format, needed for data joining. 
| Allowed keys are: `correlation_id`, `prediction`, `prediction_probability` |-| `production_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `production_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-own-production-data-to-azure-machine-learning). | | | +| `production_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `production_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-production-data-to-azure-machine-learning). | | | | `production_data.data_window_size` | String |**Optional**. Data window size in days with ISO8601 format, for example `P7D`. This is the production data window to be computed for data quality issues. | By default the data window size is the last monitoring period.| | | `reference_data` | Object | **Optional**. Recent past production data is used as comparison baseline data if this isn't specified. Recommendation is to use training data as comparison baseline. | | | | `reference_data.input_data` | Object | Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification. | | | | `reference_data.data_context` | String | The context of data, it refers to the context that dataset was used before. Fro feature attribution drift, only `training` data allowed. | `training` | | | `reference_data.target_column_name` | String | **Required**. | | |-| `reference_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `reference_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-own-production-data-to-azure-machine-learning). | | | +| `reference_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `reference_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-production-data-to-azure-machine-learning). | | | | `alert_enabled` | Boolean | Turn on/off alert notification for the monitoring signal. `True` or `False` | | | | `metric_thresholds` | Object | Metric name and threshold for feature attribution drift in `key:value` format, where `key` is the metric name, and `value` is the threshold. When threshold is exceeded and `alert_enabled` is on, user will receive alert notification. | Allowed metric name: `normalized_discounted_cumulative_gain` | | |
mysql | Concepts Backup Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-backup-restore.md | After a restore from either **latest restore point** or **custom restore point** ### Backup-related questions - **How do I back up my server?**-By default, Azure Database for MySQL flexible server enables automated backups of your entire server (encompassing all databases created) with a default 7-day retention period. You can also trigger a manual backup using On-Demand backup feature. The other way to manually take a backup is by using community tools such as mysqldump as documented [here](../concepts-migrate-dump-restore.md#dump-and-restore-using-mysqldump-utility) or mydumper as documented [here](../concepts-migrate-mydumper-myloader.md#create-a-backup-using-mydumper). If you wish to back up an Azure Database for MySQL flexible server instance to a Blob storage, refer to our tech community blog [Backup Azure Database for MySQL flexible server to a Blob Storage](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/backup-azure-database-for-mysql-to-a-blob-storage/ba-p/803830). ++ By default, Azure Database for MySQL flexible server enables automated backups of your entire server (encompassing all databases created) with a default 7-day retention period. You can also trigger a manual backup using On-Demand backup feature. The other way to manually take a backup is by using community tools such as mysqldump as documented [here](../concepts-migrate-dump-restore.md#dump-and-restore-using-mysqldump-utility) or mydumper as documented [here](../concepts-migrate-mydumper-myloader.md#create-a-backup-using-mydumper). If you wish to back up an Azure Database for MySQL flexible server instance to a Blob storage, refer to our tech community blog [Backup Azure Database for MySQL flexible server to a Blob Storage](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/backup-azure-database-for-mysql-to-a-blob-storage/ba-p/803830). + - **Can I configure automatic backups to be retained for long term?**-No, currently we only support a maximum of 35 days of automated backup retention. You can take manual backups and use that for long-term retention requirement. ++ No, currently we only support a maximum of 35 days of automated backup retention. You can take manual backups and use that for long-term retention requirement. + - **What are the backup windows for my server? Can I customize it?**-The first snapshot backup is scheduled immediately after a server is created. Snapshot backups are taken once daily. Transaction log backups occur every five minutes. Backup windows are inherently managed by Azure and can't be customized. ++ The first snapshot backup is scheduled immediately after a server is created. Snapshot backups are taken once daily. Transaction log backups occur every five minutes. Backup windows are inherently managed by Azure and can't be customized. + - **Are my backups encrypted?**-All Azure Database for MySQL flexible server data, backups, and temporary files created during query execution are encrypted using AES 256-bit encryption. The storage encryption is always on and can't be disabled. ++ All Azure Database for MySQL flexible server data, backups, and temporary files created during query execution are encrypted using AES 256-bit encryption. The storage encryption is always on and can't be disabled. + - **Can I restore a single/few database(s)?**-Restoring a single/few database(s) or tables isn't supported. 
In case you want to restore specific databases, perform a Point in Time Restore and then extract the table(s) or database(s) needed. + + Restoring a single/few database(s) or tables isn't supported. In case you want to restore specific databases, perform a Point in Time Restore and then extract the table(s) or database(s) needed. + - **Is my server available during the backup window?**-Yes. Backups are online operations and are snapshot-based. The snapshot operation only takes few seconds and doesn't interfere with production workloads, ensuring high availability of the server. + + Yes. Backups are online operations and are snapshot-based. The snapshot operation only takes few seconds and doesn't interfere with production workloads, ensuring high availability of the server. + - **When setting up the maintenance window for the server, do we need to account for the backup window?**-No, backups are triggered internally as part of the managed service and have no bearing on the Managed Maintenance Window. + + No, backups are triggered internally as part of the managed service and have no bearing on the Managed Maintenance Window. - **Where are my automated backups stored and how do I manage their retention?**-Azure Database for MySQL flexible server automatically creates server backups and stores them in user-configured, locally redundant storage or in geo-redundant storage. These backup files can't be exported. The default backup retention period is seven days. You can optionally configure the database backup from 1 to 35 days. + + Azure Database for MySQL flexible server automatically creates server backups and stores them in user-configured, locally redundant storage or in geo-redundant storage. These backup files can't be exported. The default backup retention period is seven days. You can optionally configure the database backup from 1 to 35 days. - **How can I validate my backups?**-The best way to validate availability of successfully completed backups is to view the full-automated backups taken within the retention period in the Backup and Restore blade. If a backup fails, it isn't listed in the available backups list, and the backup service will try every 20 minutes to take a backup until a successful backup is taken. These backup failures are due to heavy transactional production loads on the server. + + The best way to validate availability of successfully completed backups is to view the full-automated backups taken within the retention period in the Backup and Restore blade. If a backup fails, it isn't listed in the available backups list, and the backup service will try every 20 minutes to take a backup until a successful backup is taken. These backup failures are due to heavy transactional production loads on the server. + - **Where can I see the backup usage?**-In the Azure portal, under the Monitoring tab - Metrics section, you can find the [Backup Storage Used](./concepts-monitoring.md) metric, which can help you monitor the total backup usage. + + In the Azure portal, under the Monitoring tab - Metrics section, you can find the [Backup Storage Used](./concepts-monitoring.md) metric, which can help you monitor the total backup usage. + - **What happens to my backups if I delete my server?**-If you delete the server, all backups that belong to the server are also deleted and can't be recovered. 
To protect server resources post deployment from accidental deletion or unexpected changes, administrators can use [management locks](../../azure-resource-manager/management/lock-resources.md). ++ If you delete the server, all backups that belong to the server are also deleted and can't be recovered. To protect server resources post deployment from accidental deletion or unexpected changes, administrators can use [management locks](../../azure-resource-manager/management/lock-resources.md). - **What happens to my backups when I restore a server?**-If you restore a server, then it always results in a creation of a net new server that has been restored using original server's backups. The old backup from the original server is not copied over to the newly restored server and it remains with the original server. However, for the newly created server the first snapshot backup is scheduled immediately after a server is created and the service ensures daily automated backups are taken and stored as per configured server retention period. ++ If you restore a server, then it always results in a creation of a net new server that has been restored using original server's backups. The old backup from the original server is not copied over to the newly restored server and it remains with the original server. However, for the newly created server the first snapshot backup is scheduled immediately after a server is created and the service ensures daily automated backups are taken and stored as per configured server retention period. - **How am I charged and billed for my use of backups?**-Azure Database for MySQL flexible server provides up to 100% of your provisioned server storage as backup storage at no added cost. Any more backup storage used is charged in GB per month as per the [pricing model](https://azure.microsoft.com/pricing/details/mysql/server/). Backup storage billing is also governed by the backup retention period selected and backup redundancy option chosen, apart from the transactional activity on the server, which impacts the total backup storage used directly. ++ Azure Database for MySQL flexible server provides up to 100% of your provisioned server storage as backup storage at no added cost. Any more backup storage used is charged in GB per month as per the [pricing model](https://azure.microsoft.com/pricing/details/mysql/server/). Backup storage billing is also governed by the backup retention period selected and backup redundancy option chosen, apart from the transactional activity on the server, which impacts the total backup storage used directly. + - **How are backups retained for stopped servers?**-No new backups are performed for stopped servers. All older backups (within the retention window) at the time of stopping the server are retained until the server is restarted, post which backup retention for the active server is governed by its backup retention window. ++ No new backups are performed for stopped servers. All older backups (within the retention window) at the time of stopping the server are retained until the server is restarted, post which backup retention for the active server is governed by its backup retention window. - **How will I be billed for backups for a stopped server?**-While your server instance is stopped, you're charged for provisioned storage (including Provisioned IOPS) and backup storage (backups stored within your specified retention window). Free backup storage is limited to the size of your provisioned database and only applies to active servers. 
++ While your server instance is stopped, you're charged for provisioned storage (including Provisioned IOPS) and backup storage (backups stored within your specified retention window). Free backup storage is limited to the size of your provisioned database and only applies to active servers. + - **How is my backup data protected?**-Azure database for MySQL Flexible server protects your backup data by blocking any operations that could lead to loss of recovery points for the duration of the configured retention period. Backups taken during the retention period can only be read for the purpose of restoration and are deleted post retention period. Also, all backups in Azure Database for MySQL flexible server are encrypted using AES 256-bit encryption for the data stored at rest. ++ Azure database for MySQL Flexible server protects your backup data by blocking any operations that could lead to loss of recovery points for the duration of the configured retention period. Backups taken during the retention period can only be read for the purpose of restoration and are deleted post retention period. Also, all backups in Azure Database for MySQL flexible server are encrypted using AES 256-bit encryption for the data stored at rest. ++- **How does a Point-In-Time Restore (PITR) operation affect IOPS usage?** ++ During a PITR operation in Azure Database for MySQL - Flexible Server, a new server is created and data is copied from the source server's storage to the new server's storage. This process results in an increased IOPS usage on the source server. This increase in IOPS usage is a normal occurrence and does not indicate any issues with the source server or the PITR operation. Once the PITR operation is complete, the IOPS usage on the source server will return to its usual levels. ### Restore-related questions Azure database for MySQL Flexible server protects your backup data by blocking a Azure portal supports Point In Time Restore for all servers, allowing users to restore to latest or custom restore points. To manually restore your server from the backups taken by mysqldump/myDumper, see [Restore your database using myLoader](../concepts-migrate-mydumper-myloader.md#restore-your-database-using-myloader). - **Why is my restore taking so much time?**-The estimated time for the recovery of the server depends on several factors: ++ The estimated time for the recovery of the server depends on several factors: - The size of the databases. As a part of the recovery process, the database needs to be hydrated from the last physical backup and hence the time taken to recover will be proportional to the size of the database. - The active portion of transaction activity that needs to be replayed to recover. Recovery can take longer depending on the added transaction activity from the last successful checkpoint. - The network bandwidth if the restore is to a different region. The estimated time for the recovery of the server depends on several factors: - The presence of primary keys in the tables in the database. For faster recovery, consider adding primary keys for all the tables in your database. - **Will modifying session level database variables impact restoration?**+ Modifying session level variables and running DML statements in a MySQL client session can impact the PITR (point in time restore) operation, as these modifications don't get recorded in the binary log that is used for backup and restore operation. 
For example, [foreign_key_checks](http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_foreign_key_checks) is one such session-level variable: if it's disabled while running a DML statement that violates a foreign key constraint, the PITR (point in time restore) operation fails. The only workaround in such a scenario is to select a PITR time earlier than the time at which [foreign_key_checks](http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_foreign_key_checks) was disabled. Our recommendation is to NOT modify any session variables if you want a successful PITR operation. ## Next steps |
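To make the session-variable pitfall concrete, here's a minimal, hypothetical SQL sketch (the table and values are illustrative, not from the article) of the kind of session that can invalidate later PITR points:

```sql
-- Hypothetical example: table and row names are illustrative only.
-- Disabling foreign_key_checks is a session-level change, so it isn't
-- recorded in the binary log that a point-in-time restore replays.
SET SESSION foreign_key_checks = 0;

-- This DELETE violates a foreign key that child rows still reference.
-- It succeeds in this session, but replaying the binary log with
-- foreign_key_checks at its default can make a later PITR fail.
DELETE FROM parent_table WHERE id = 42;

SET SESSION foreign_key_checks = 1;
```

If a statement like this has to be run, note when it happened; if a restore is needed later, choose a PITR point earlier than that change, as the article recommends.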
mysql | Concepts Server Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-server-parameters.md | Last updated 04/26/2023 This article provides considerations and guidelines for configuring server parameters in Azure Database for MySQL flexible server. +> [!NOTE] +> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. ++ ## What are server variables? The MySQL engine provides many different [server variables/parameters](https://dev.mysql.com/doc/refman/5.7/en/server-option-variable-reference.html) that can be used to configure and tune engine behavior. Some parameters can be set dynamically during runtime while others are "static", requiring a server restart in order to apply. Refer to the following sections to learn more about the limits of the several co ### lower_case_table_names -For MySQL version 5.7, default value is 1 in Azure Database for MySQL flexible server. It is important to note that while it is possible to change the supported value to 2, reverting from 2 back to 1 is not allowed. Please contact our [support team](https://azure.microsoft.com/support/create-ticket/) for assistance in changing the default value. -For [MySQl version 8.0+](https://dev.mysql.com/doc/refman/8.0/en/identifier-case-sensitivity.html) lower_case_table_names can only be configured when initializing the server. [Learn more](https://dev.mysql.com/doc/refman/8.0/en/identifier-case-sensitivity.html). Changing the lower_case_table_names setting after the server is initialized is prohibited. For MySQL version 8.0, default value is 1 in Azure Database for MySQL flexible server. Supported value for MySQL version 8.0 are 1 and 2 in Azure Database for MySQL flexible server. Please contact our [support team](https://azure.microsoft.com/support/create-ticket/) for assistance in changing the default value during server creation. +For MySQL version 5.7, default value is 1 in Azure Database for MySQL flexible server. It's important to note that while it is possible to change the supported value to 2, reverting from 2 back to 1 isn't allowed. Contact our [support team](https://azure.microsoft.com/support/create-ticket/) for assistance in changing the default value. +For [MySQL version 8.0+](https://dev.mysql.com/doc/refman/8.0/en/identifier-case-sensitivity.html) lower_case_table_names can only be configured when initializing the server. [Learn more](https://dev.mysql.com/doc/refman/8.0/en/identifier-case-sensitivity.html). Changing the lower_case_table_names setting after the server is initialized is prohibited. For MySQL version 8.0, default value is 1 in Azure Database for MySQL flexible server. Supported value for MySQL version 8.0 are 1 and 2 in Azure Database for MySQL flexible server. Contact our [support team](https://azure.microsoft.com/support/create-ticket/) for assistance in changing the default value during server creation. +++### innodb_tmpdir ++The [innodb_tmpdir](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_tmpdir) parameter in Azure Database for MySQL Flexible Server is used to define directory for temporary sort files created during online ALTER TABLE operations that rebuild. The default value of innodb_tmpdir is `/mnt/temp`. 
This location corresponds to the [temporary storage SSD](./concepts-service-tiers-storage.md#service-tiers-size-and-server-types), available in GiB with each server compute size. This location is ideal for operations that don't require a large amount of space. +If more space is needed, you can set innodb_tmpdir to `/app/work/tmpdir`. This utilizes the storage capacity available on your Azure Database for MySQL Flexible Server. This can be useful for larger operations that require more temporary storage. +It's important to note that utilizing `/app/work/tmpdir` results in slower performance compared to the [default temp storage (SSD)](./concepts-service-tiers-storage.md#service-tiers-size-and-server-types) `/mnt/temp`. The choice should be made based on the specific requirements of the operations. +The information provided for the `innodb_tmpdir` is applicable to the parameters [innodb_temp_tablespaces_dir](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_temp_tablespaces_dir), [tmpdir](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_tmpdir), and [slave_load_tmpdir](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#sysvar_replica_load_tmpdir) where the default value `/mnt/temp` is common, and the alternative directory `/app/work/tmpdir` is available for configuring increased temporary storage, with a trade-off in performance based on specific operational requirements. ### log_bin_trust_function_creators In Azure Database for MySQL flexible server, binary logs are always enabled (tha The binary logging format is always **ROW** and all connections to the server **ALWAYS** use row-based binary logging. With row-based binary logging, security issues don't exist and binary logging can't break, so you can safely allow [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) to remain **ON**. -If [`log_bin_trust_function_creators`] is set to OFF, if you try to create triggers you may get errors similar to *you do not have the SUPER privilege and binary logging is enabled (you might want to use the less safe `log_bin_trust_function_creators` variable)*. +If [`log_bin_trust_function_creators`] is set to OFF, if you try to create triggers you may get errors similar to *you don't have the SUPER privilege, and binary logging is enabled (you might want to use the less safe `log_bin_trust_function_creators` variable)*. ### innodb_buffer_pool_size The binary log contains "events" that describe database changes such as table cr ### event_scheduler -In Azure Database for MySQL flexible server, the `event_schedule` server parameter manages creating, scheduling, and running events, i.e., tasks that run according to a schedule, and they're run by a special event scheduler thread. When the `event_scheduler` parameter is set to ON, the event scheduler thread is listed as a daemon process in the output of SHOW PROCESSLIST. You can create and schedule events using the following SQL syntax: +In Azure Database for MySQL flexible server, the `event_scheduler` server parameter manages creating, scheduling, and running events, that is, tasks that run according to a schedule, and they're run by a special event scheduler thread. When the `event_scheduler` parameter is set to ON, the event scheduler thread is listed as a daemon process in the output of SHOW PROCESSLIST. 
You can create and schedule events using the following SQL syntax: ```sql CREATE EVENT <event name> |
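Beyond the portal, the dynamic parameters discussed in this article can also be changed with the Azure CLI. The following is a sketch only, assuming the parameter (here `innodb_tmpdir`) is exposed for your server and your CLI version supports the command; resource names are placeholders:

```azurecli
# Sketch only: resource group and server names are placeholders.
# Point temporary sort files at the server's storage instead of the temp SSD.
az mysql flexible-server parameter set \
  --resource-group <resource-group-name> \
  --server-name <server-name> \
  --name innodb_tmpdir \
  --value /app/work/tmpdir
```

Static parameters still require a server restart before a new value takes effect.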
mysql | Concepts Service Tiers Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-service-tiers-storage.md | You can create an Azure Database for MySQL flexible server instance in one of th | Storage size | 20 GiB to 16 TiB | 20 GiB to 16 TiB | 20 GiB to 16 TiB | | Database backup retention period | 1 to 35 days | 1 to 35 days | 1 to 35 days | -\** With the exception of 64,80, and 96 vCores, which has 504, 504 and 672 GiB of memory respectively. +\** With the exception of 64,80, and 96 vCores, which has 504 GiB, 504 GiB and 672 GiB of memory respectively. \* Ev5 compute provides best performance among other VM series in terms of QPS and latency. learn more about performance and region availability of Ev5 compute from [here](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/boost-azure-mysql-business-critical-flexible-server-performance/ba-p/3603698). The unique feature of the burstable compute tier is its ability to "burst." However, it's important to note that once a burstable instance exhausts its CPU credits, it operates at its base CPU performance. For example, the base CPU performance of a Standard_B1s is 20% that is, 0.2 Vcore. If a Burstable tier server is running a workload that requires more CPU performance than the base level, and it has exhausted its CPU credits, the server may experience performance limitations and eventually could affect various system operations for your server. -Therefore, while the Burstable compute tier offers significant cost and flexibility advantages for certain types of workloads, **it is not recommended for production workloads** that require consistent CPU performance. Note that the Burstable tier doesn't support functionality of creating [Read Replicas](./concepts-read-replicas.md) and [High availability](./concepts-high-availability.md) feature. For such workloads and features, other compute tiers, such as the General Purpose or Business Critical are more appropriate. +Therefore, while the Burstable compute tier offers significant cost and flexibility advantages for certain types of workloads, **it is not recommended for production workloads** that require consistent CPU performance. The Burstable tier doesn't support functionality of creating [Read Replicas](./concepts-read-replicas.md) and [High availability](./concepts-high-availability.md) feature. For such workloads and features, other compute tiers, such as the General Purpose or Business Critical are more appropriate. -For more information on the Azure's B-series CPU credit model, refer to the [B-series burstable instances](../../virtual-machines/sizes-b-series-burstable.md) and [B-series CPU credit model](../../virtual-machines/b-series-cpu-credit-model/b-series-cpu-credit-model.md#b-series-cpu-credit-model). +For more information on the Azure's B-series CPU credit model, see the [B-series burstable instances](../../virtual-machines/sizes-b-series-burstable.md) and [B-series CPU credit model](../../virtual-machines/b-series-cpu-credit-model/b-series-cpu-credit-model.md#b-series-cpu-credit-model). ### Monitoring CPU credits in burstable tier Monitoring your CPU credit balance is crucial for maintaining optimal performanc [CPU Credit Consumed](./concepts-monitoring.md): This metric indicates the number of CPU credits consumed by your instance. Monitoring this metric can help you understand your instance's CPU usage patterns and manage its performance effectively. 
-[CPU Credit Remaining](./concepts-monitoring.md): This metric shows the number of CPU credits remaining for your instance. Keeping an eye on this metric can help you prevent your instance from degrading in performance due to exhausting its CPU credits. +[CPU Credit Remaining](./concepts-monitoring.md): This metric shows the number of CPU credits remaining for your instance. Keeping an eye on this metric can help you prevent your instance from degrading in performance due to exhausting its CPU credits. If the CPU Credit Remaining metric drops below a certain level (for example, less than 30% of the total available credits), the instance is at risk of exhausting its CPU credits if the current CPU load continues. For more information on how to set up alerts on metrics, see [this guide](./how-to-alert-on-metric.md). |
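As a hedged sketch of that guidance, an alert on the remaining-credits metric can be created with the Azure CLI. The metric name (`cpu_credits_remaining`) and the threshold below are assumptions to verify against your server's metrics list:

```azurecli
# Sketch only: names, IDs, and the threshold are placeholders/assumptions.
az monitor metrics alert create \
  --name "low-cpu-credits" \
  --resource-group <resource-group-name> \
  --scopes <mysql-flexible-server-resource-id> \
  --condition "avg cpu_credits_remaining < 30" \
  --window-size 5m \
  --evaluation-frequency 5m \
  --description "Burstable server is close to exhausting its CPU credits"
```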
mysql | Concepts Storage Iops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-storage-iops.md | Moreover, Additional IOPS with pre-provisioned refers to the flexibility of incr Autoscale IOPS offer the flexibility to scale IOPS on demand, eliminating the need to pre-provision a specific amount of IO per second. By enabling Autoscale IOPS, your server will automatically adjust IOPS based on workload requirements. With the Autoscale IOPS feature enabled, you can now enjoy worry-free IO management in Azure Database for MySQL flexible server because the server scales IOPS up or down automatically depending on workload needs. +**Dynamic Scaling**: Autoscale IOPS dynamically adjust the IOPS limit of your database server based on the actual demand of your workload. This ensures optimal performance without manual intervention or configuration. +**Handling Workload Spikes**: Autoscale IOPS enable your database to seamlessly handle workload spikes or fluctuations without compromising the performance of your applications. This feature ensures consistent responsiveness even during peak usage periods. +**Cost Savings**: Unlike the Pre-provisioned IOPS where a fixed IOPS limit is specified and paid for regardless of usage, Autoscale IOPS lets you pay only for the number of I/O operations that you consume. With this feature, you'll only be charged for the IO your server actually utilizes, avoiding unnecessary provisioning and expenses for underutilized resources. This ensures both cost savings and optimal performance, making it a smart choice for managing your database workload efficiently. Autoscale IOPS: The Autoscale feature might not provide significant advantages i ## Frequently Asked Questions #### How to move from pre-provisioned IOPS to Autoscale IOPS?-- Access your Azure portal and locate the relevant Azure database for Azure Database for MySQL flexible server.+- Access your Azure portal and locate the relevant Azure Database for MySQL flexible server. - Go to the Settings blade and choose the Compute + Storage section. - Within the IOPS section, opt for Auto Scale IOPS and save the settings to apply the modifications. #### How soon does Autoscale IOPS take effect after making the change? Once you enable Autoscale IOPS for Azure Database for MySQL flexible server and save the settings, the changes take effect immediately after the deployment to the resource has completed successfully. This means that the Autoscale IOPS feature will be applied to your database without any delay. +#### How does a Point-In-Time Restore (PITR) operation affect IOPS usage? +During a PITR operation in Azure Database for MySQL - Flexible Server, a new server is created and data is copied from the source server's storage to the new server's storage. This process results in an increased IOPS usage on the source server. This increase in IOPS usage is a normal occurrence and doesn't indicate any issues with the source server or the PITR operation. Once the PITR operation is complete, the IOPS usage on the source server returns to its usual levels. For more information on PITR, you can refer to the +[Backup and Restore section](./concepts-backup-restore.md) in the Azure Database for MySQL - Flexible Server documentation. + #### How to know when IOPS have scaled up and scaled down when the server is using Autoscale IOPS feature? Or Can I monitor IOPS usage for my server? 
Refer to the ["Monitor Storage performance"](#monitor-storage-performance) section, which helps you identify whether your server has scaled up or scaled down during a specific time window. |
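If you prefer the CLI to the portal steps above, the switch to Autoscale IOPS can likely be made with `az mysql flexible-server update`. The `--auto-scale-iops` flag name is an assumption based on recent CLI versions, so confirm it with `az mysql flexible-server update --help` first:

```azurecli
# Sketch only: resource group and server names are placeholders, and the
# --auto-scale-iops flag should be confirmed for your CLI version.
az mysql flexible-server update \
  --resource-group <resource-group-name> \
  --name <server-name> \
  --auto-scale-iops Enabled
```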
mysql | Create Automation Tasks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/create-automation-tasks.md | Creating an automation task doesn't immediately incur charges. The automation ta ||| ## Stop server task-Here's an example to configure stop tasks for a Azure Database for MySQL flexible server instance. +Here's an example to configure stop tasks for an Azure Database for MySQL flexible server instance. 1. Select **Stop MySQL Flexible server** task. |
openshift | Howto Deploy Java Jboss Enterprise Application Platform App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-jboss-enterprise-application-platform-app.md | -This article uses the Azure Marketplace offer for JBoss EAP to accelerate your journey to ARO. The offer automatically provisions a number of resources including an ARO cluster with a built-in OpenShift Container Registry (OCR), the JBoss EAP Operator, and optionally a container image including JBoss EAP and your application using Source-to-Image (S2I). To see the offer, visit the [Azure portal](https://aka.ms/eap-aro-portal). If you prefer manual step-by-step guidance for running JBoss EAP on ARO that doesn't utilize the automation enabled by the offer, see [Deploy a Java application with Red Hat JBoss Enterprise Application Platform (JBoss EAP) on an Azure Red Hat OpenShift 4 cluster](/azure/developer/java/ee/jboss-eap-on-aro). +This article uses the Azure Marketplace offer for JBoss EAP to accelerate your journey to ARO. The offer automatically provisions resources including an ARO cluster with a built-in OpenShift Container Registry (OCR), the JBoss EAP Operator, and optionally a container image including JBoss EAP and your application using Source-to-Image (S2I). To see the offer, visit the [Azure portal](https://aka.ms/eap-aro-portal). If you prefer manual step-by-step guidance for running JBoss EAP on ARO that doesn't utilize the automation enabled by the offer, see [Deploy a Java application with Red Hat JBoss Enterprise Application Platform (JBoss EAP) on an Azure Red Hat OpenShift 4 cluster](/azure/developer/java/ee/jboss-eap-on-aro). ## Prerequisites This article uses the Azure Marketplace offer for JBoss EAP to accelerate your j - A Red Hat account with complete profile. If you don't have one, you can sign up for a free developer subscription through the [Red Hat Developer Subscription for Individuals](https://developers.redhat.com/register). -- Use [Azure Cloud Shell](/azure/cloud-shell/quickstart) using the Bash environment. Be sure the Azure CLI version is 2.43.0 or higher.+- A local developer command line with a UNIX command environment and Azure CLI installed. To learn how to install the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli). - [![Image of button to launch Cloud Shell in a new window.](../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com) +- The `mysql` CLI. For instructions see [How To Install MySQL](https://www.digitalocean.com/community/tutorials/how-to-install-mysql-on-ubuntu-20-04). - > [!NOTE] - > You can also execute this guidance from a local developer command line with the Azure CLI installed. To learn how to install the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli). - > - > If you are using a local developer command line, you must install the `mysql` CLI. For instructions see [How To Install MySQL](https://www.digitalocean.com/community/tutorials/how-to-install-mysql-on-ubuntu-20-04). +> [!NOTE] +> You can also execute this guidance from the [Azure Cloud Shell](/azure/cloud-shell/quickstart). This approach has all the prerequisite tools pre-installed. 
+> +> [![Image of button to launch Cloud Shell in a new window.](../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com) - Ensure the Azure identity you use to sign in has either the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role and the [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) role or the [Owner](/azure/role-based-access-control/built-in-roles#owner) role in the current subscription. For an overview of Azure roles, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview) This article uses the Azure Marketplace offer for JBoss EAP to accelerate your j ## Get a Red Hat pull secret -The Azure Marketplace offer you use in this article requires a Red Hat pull secret. This section shows you how to get a Red Hat pull secret for Azure Red Hat OpenShift. To learn about what a Red Hat pull secret is and why you need it, see the [Get a Red Hat pull secret](/azure/openshift/tutorial-create-cluster#get-a-red-hat-pull-secret-optional) section in [Tutorial: Create an Azure Red Hat OpenShift 4 cluster](/azure/openshift/tutorial-create-cluster). +The Azure Marketplace offer used in this article requires a Red Hat pull secret. This section shows you how to get a Red Hat pull secret for Azure Red Hat OpenShift. To learn about what a Red Hat pull secret is and why you need it, see the [Get a Red Hat pull secret](/azure/openshift/tutorial-create-cluster#get-a-red-hat-pull-secret-optional) section in [Tutorial: Create an Azure Red Hat OpenShift 4 cluster](/azure/openshift/tutorial-create-cluster). Use the following steps to get the pull secret. Use the following steps to deploy a service principal and get its Application (c 1. Provide a description of the secret and a duration. When you're done, select **Add**. 1. After the client secret is added, the value of the client secret is displayed. Copy this value because you can't retrieve it later. Be sure to copy the **Value** and not the **Secret ID**. -You've now created your Microsoft Entra application, service principal, and client secret. +You've created your Microsoft Entra application, service principal, and client secret. ## Create a Red Hat Container Registry service account Use the following steps to create a Red Hat Container Registry service account a - Note down the **username**, including the prepended string (that is, `XXXXXXX|username`). Use this username when you sign in to `registry.redhat.io`. - Note down the **password**. Use this password when you sign in to `registry.redhat.io`. -You've now created your Red Hat Container Registry service account. +You've created your Red Hat Container Registry service account. ## Deploy JBoss EAP on Azure Red Hat OpenShift While you wait, you can set up the database. The following sections show you how to set up Azure Database for MySQL - Flexible Server. -### Set environment variables in the Azure Cloud Shell +### Set environment variables in the command line shell The application is a Jakarta EE application backed by a MySQL database, and is deployed to the OpenShift cluster using Source-to-Image (S2I). For more information about S2I, see the [S2I Documentation](http://red.ht/eap-aro-s2i). -Continuing in the Azure Cloud Shell, use the following command to set up some environment variables: +Open a shell, or Cloud Shell, and set the following environment variables. Replace the substitutions as appropriate. 
```azurecli-interactive RG_NAME=<resource-group-name> SERVER_NAME=<database-server-name> DB_DATABASE_NAME=testdb ADMIN_USERNAME=myadmin-ADMIN_PASSWORD=<mysql-admin-password> +ADMIN_PASSWORD=Secret123456 DB_USERNAME=testuser DB_PASSWORD=Secret123456 PROJECT_NAME=eaparo-sample Replace the placeholders with the following values, which are used throughout th - `<resource-group-name>`: The name of resource group you created previously - for example, `eaparo033123rg`. - `<database-server-name>`: The name of your MySQL server, which should be unique across Azure - for example, `eaparo033123mysql`.-- `<mysql-admin-password>`: The admin password of your MySQL database server. That password should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).+- `ADMIN_PASSWORD`: The admin password of your MySQL database server. This article was tested using the password shown. Consult the database documentation for password rules. - `<red-hat-container-registry-service-account-username>` and `<red-hat-container-registry-service-account-password>`: The username and password of the Red Hat Container Registry service account you created before. -It's a good idea to save the fully filled out name/value pairs in a text file, in case the Azure Cloud Shell times out before you're done executing the commands. That way, you can paste them into a new instance of the Cloud Shell and easily continue. +It's a good idea to save the fully filled out name/value pairs in a text file, in case the shell exits or the Azure Cloud Shell times out before you're done executing the commands. That way, you can paste them into a new instance of the shell or Cloud Shell and easily continue. -These name/value pairs are essentially "secrets". For a production-ready way to secure Azure Red Hat OpenShift, including secret management, see [Security for the Azure Red Hat OpenShift landing zone accelerator](/azure/cloud-adoption-framework/scenarios/app-platform/azure-red-hat-openshift/security). +These name/value pairs are essentially "secrets." For a production-ready way to secure Azure Red Hat OpenShift, including secret management, see [Security for the Azure Red Hat OpenShift landing zone accelerator](/azure/cloud-adoption-framework/scenarios/app-platform/azure-red-hat-openshift/security). ### Create and initialize the database You now have a MySQL database server running and ready to connect with your app. ## Verify the functionality of the deployment -The steps in this section show you how to verify that the deployment has successfully completed. +The steps in this section show you how to verify that the deployment completes successfully. If you navigated away from the **Deployment is in progress** page, the following steps show you how to get back to that page. If you're still on the page that shows **Your deployment is complete**, you can skip to step 5. If you navigated away from the **Deployment is in progress** page, the following 1. In the navigation pane, select **Outputs**. This list shows the output values from the deployment, which includes some useful information. -1. Open Azure Cloud Shell, paste the value from the **cmdToGetKubeadminCredentials** field, and execute it. You see the admin account and credential for signing in to the OpenShift cluster console portal. The following example shows an admin account: +1. 
Open the shell or Azure Cloud Shell, paste the value from the **cmdToGetKubeadminCredentials** field, and execute it. You see the admin account and credential for signing in to the OpenShift cluster console portal. The following example shows an admin account: ```azurecli az aro list-credentials --resource-group eaparo033123rg --name clusterf9e8b9 If you navigated away from the **Deployment is in progress** page, the following Next, use the following steps to connect to the OpenShift cluster using the OpenShift CLI: -1. In the Azure Cloud Shell, use the following commands to download the latest OpenShift 4 CLI for GNU/Linux. If running on an OS other than GNU/Linux, download the appropriate binary for that OS. +1. In the shell or Azure Cloud Shell, use the following commands to download the latest OpenShift 4 CLI for GNU/Linux. If running on an OS other than GNU/Linux, download the appropriate binary for that OS. ```azurecli-interactive cd ~ Next, use the following steps to connect to the OpenShift cluster using the Open echo 'export PATH=$PATH:~/openshift' >> ~/.bashrc && source ~/.bashrc ``` -1. Paste the value from the **cmdToLoginWithKubeadmin** field into the Azure Cloud Shell, and execute it. You should see the `login successful` message and the project you're using. The following content is an example of the command to connect to the OpenShift cluster using the OpenShift CLI. +1. Paste the value from the **cmdToLoginWithKubeadmin** field into the shell or Azure Cloud Shell, and execute it. You should see the `login successful` message and the project you're using. The following content is an example of the command to connect to the OpenShift cluster using the OpenShift CLI. ```azurecli-interactive oc login \ The steps in this section show you how to deploy an app on the cluster. Use the following steps to deploy the app to the cluster. The app is hosted in the GitHub repo [rhel-jboss-templates/eap-coffee-app](https://github.com/Azure/rhel-jboss-templates/tree/main/eap-coffee-app). -1. In the Azure Cloud Shell, run the following commands to create a project, apply a permission to enable S2I to work, image the pull secret, and link the secret to the relative service accounts in the project for image pulling. Disregard the git warning about "'detached HEAD' state". +1. In the shell or Azure Cloud Shell, run the following commands. The commands create a project, apply a permission to enable S2I to work, image the pull secret, and link the secret to the relative service accounts in the project to enable the image pull. Disregard the git warning about "'detached HEAD' state." ```azurecli-interactive git clone https://github.com/Azure/rhel-jboss-templates.git |
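The commands for the "Create and initialize the database" section are elided in this digest. A minimal sketch, reusing the environment variables set earlier and assuming default `az mysql flexible-server create` behavior is acceptable for a demo, might look like the following; the article's actual commands may differ:

```azurecli
# Sketch only: assumes RG_NAME, SERVER_NAME, ADMIN_USERNAME, ADMIN_PASSWORD,
# and DB_DATABASE_NAME were set earlier; flags and network settings may differ
# from the article's actual steps.
az mysql flexible-server create \
  --resource-group ${RG_NAME} \
  --name ${SERVER_NAME} \
  --admin-user ${ADMIN_USERNAME} \
  --admin-password ${ADMIN_PASSWORD} \
  --public-access 0.0.0.0

# Create the application database on the new server.
az mysql flexible-server db create \
  --resource-group ${RG_NAME} \
  --server-name ${SERVER_NAME} \
  --database-name ${DB_DATABASE_NAME}
```

Whatever networking option you choose still has to allow the ARO cluster to reach the server; the article's networking guidance takes precedence over this sketch.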
operator-insights | Concept Data Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/concept-data-types.md | A Data Product ingests data from one or more sources, digests and enriches this A data type is used to refer to an individual data source. Data types can be from outside the Data Product, such as from a network element. Data types can also be created within the Data Product itself by aggregating or enriching information from other data types. -Data Product operators can choose which data types to use and the data retention period for each data type. +Data Product operators can choose the data retention period for each data type. ## Data type contents Data types are presented as child resources of the Data Product within the Azure :::image type="content" source="media/concept-data-types/data-types.png" alt-text="Screenshot of Data Types portal page."::: -- Data Product operators can turn off individual data types to avoid incurring processing and storage costs associated with a data type that isn't valuable for their specific use cases.-- Data Product operators can configure different data retention periods for each data type as shown in the Data Retention page. For example, data types containing personal data are typically configured with a shorter retention period to comply with privacy legislation.+Data Product operators can configure different data retention periods for each data type as shown in the Data Retention page. For example, data types containing personal data are typically configured with a shorter retention period to comply with privacy legislation. :::image type="content" source="media/concept-data-types/data-types-data-retention.png" alt-text="Screenshot of Data Types Data Retention portal page."::: |
operator-nexus | Howto Credential Rotation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-credential-rotation.md | + + Title: Azure Operator Nexus credential rotation +description: Instructions on Credential Rotation Lifecycle Management requests. +++ Last updated : 01/29/2024+++++# Credential rotation management for on-premises devices ++This document provides an overview of the support request that needs to be raised to request credential rotation on the Nexus instance. ++## Prerequisites ++- Target cluster and fabric must be in a running and healthy state. ++## Create support request ++Raise a credential rotation request by [contacting support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). The following details are required to perform the credential rotation on the target instance: + - Type of credential that needs to be rotated. Specify whether the request is for a fabric device, BMC, Storage, Console User, or all four types. + - Provide the Tenant ID. + - Provide the Subscription ID. + - Provide the Resource Group Name in which the target cluster or fabric resides, based on the type of credential that needs to be rotated. + - Provide the Target Cluster or Fabric Name, based on the type of credential that needs to be rotated. + - Provide the Target Cluster or Fabric ARM ID, based on the type of credential that needs to be rotated. + - Provide the Customer Key Vault ID to which the rotated credentials of the target cluster instance need to be updated. ++For more information about Support plans, see [Azure Support plans](https://azure.microsoft.com/support/plans/response/). |
operator-nexus | Troubleshoot Hardware Validation Failure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/troubleshoot-hardware-validation-failure.md | + + Title: Azure Operator Nexus troubleshooting hardware validation failure +description: Troubleshoot Hardware Validation Failure for Azure Operator Nexus. +++ Last updated : 01/26/2024+++++# Troubleshoot hardware validation failure in Nexus Cluster ++This article describes how to troubleshoot a failed server hardware validation. Hardware validation is run as part of cluster deploy action. ++## Prerequisites ++- Gather the following information: + - Subscription ID + - Cluster name and resource group +- The user needs access to the Cluster's Log Analytics Workspace (LAW) ++## Locating hardware validation results ++1. Navigate to cluster resource group in the subscription +2. Expand the cluster Log Analytics Workspace (LAW) resource for the cluster +3. Navigate to the Logs tab +4. Hardware validation results can be fetched with a query against the HWVal_CL table as per the following example +++## Examining hardware validation results ++The Hardware Validation result for a given server includes the following categories. ++- system_info +- drive_info +- network_info +- health_info +- boot_info ++Expanding `result_detail` for a given category shows detailed results. ++## Troubleshooting specific failures ++### System info category ++* Memory/RAM related failure (memory_capacity_GB) + * Memory specs are defined in the SKU. + * Memory below threshold value indicates missing or failed DIMM(s). Failed DIMM(s) would also be reflected in the `health_info` category. ++ ```json + { + "field_name": "memory_capacity_GB", + "comparison_result": "Fail", + "expected": "512", + "fetched": "480" + } + ``` ++* CPU Related Failure (cpu_sockets) + * CPU specs are defined in the SKU. + * Failed `cpu_sockets` check indicates a failed CPU or CPU count mismatch. ++ ```json + { + "field_name": "cpu_sockets", + "comparison_result": "Fail", + "expected": "2", + "fetched": "1" + } + ``` ++* Model Check Failure (Model) + * Failed `Model` check indicates that wrong server is racked in the slot or there's a cabling mismatch. ++ ```json + { + "field_name": "Model", + "comparison_result": "Fail", + "expected": "R750", + "fetched": "R650" + } + ``` ++### Drive info category ++* Disk Check Failure + * Drive specs are defined in the SKU + * Mismatched capacity values indicate incorrect drives or drives inserted in to incorrect slots. + * Missing capacity and type fetched values indicate drives that are failed, missing or inserted in to incorrect slots. ++ ```json + { + "field_name": "Disk_0_Capacity_GB", + "comparison_result": "Fail", + "expected": "893", + "fetched": "3576" + } + ``` ++ ```json + { + "field_name": "Disk_0_Capacity_GB", + "comparison_result": "Fail", + "expected": "893", + "fetched": "" + } + ``` ++ ```json + { + "field_name": "Disk_0_Type", + "comparison_result": "Fail", + "expected": "SSD", + "fetched": "" + } + ``` ++### Network info category ++* NIC Check Failure + * Dell server NIC specs are defined in the SKU. + * Mismatched link status indicates loose or faulty cabling or crossed cables. + * Mismatched model indicates incorrect NIC card is inserted in to slot. + * Missing link/model fetched values indicate NICs that are failed, missing or inserted in to incorrect slots. 
++ ```json + { + "field_name": "NIC.Slot.3-1-1_LinkStatus", + "comparison_result": "Fail", + "expected": "Up", + "fetched": "Down" + } + ``` ++ ```json + { + "field_name": "NIC.Embedded.2-1-1_LinkStatus", + "comparison_result": "Fail", + "expected": "Down", + "fetched": "Up" + } + ``` ++ ```json + { + "field_name": "NIC.Slot.3-1-1_Model", + "comparison_result": "Fail", + "expected": "ConnectX-6", + "fetched": "BCM5720" + } + ``` ++ ```json + { + "field_name": "NIC.Slot.3-1-1_LinkStatus", + "comparison_result": "Fail", + "expected": "Up", + "fetched": "" + } + ``` ++ ```json + { + "field_name": "NIC.Slot.3-1-1_Model", + "comparison_result": "Fail", + "expected": "ConnectX-6", + "fetched": "" + } + ``` ++* NIC Check L2 Switch Information + * HW Validation reports L2 switch information for each of the server interfaces. + * The switch connection ID (switch interface MAC) and switch port connection ID (switch interface label) are informational. ++ ```json + { + "field_name": "NIC.Slot.3-1-1_SwitchConnectionID", + "expected": "unknown", + "fetched": "c0:d6:82:23:0c:7d", + "comparison_result": "Info" + } + ``` ++ ```json + { + "field_name": "NIC.Slot.3-1-1_SwitchPortConnectionID", + "expected": "unknown", + "fetched": "Ethernet10/1", + "comparison_result": "Info" + } + ``` ++* Release 3.6 introduced cable checks for bonded interfaces. + * Mismatched cabling is reported in the result_log. + * Cable check validates that that bonded NICs connect to switch ports with same Port ID. In the following example PCI 3/1 and 3/2 connect to "Ethernet1/1" and "Ethernet1/3" respectively on TOR, triggering a failure for HWV. ++ ```json + { + "network_info": { + "network_info_result": "Fail", + "result_detail": [ + { + "field_name": "NIC.Slot.3-1-1_SwitchPortConnectionID", + "fetched": "Ethernet1/1", + }, + { + "field_name": "NIC.Slot.3-2-1_SwitchPortConnectionID", + "fetched": "Ethernet1/3", + } + ], + "result_log": [ + "Cabling problem detected on PCI Slot 3" + ] + }, + } + ``` ++### Health info category ++* Health Check Sensor Failure + * Server health checks cover various hardware component sensors. + * A failed health sensor indicates a problem with the corresponding hardware component. + * The following examples indicate fan, drive and CPU failures respectively. ++ ```json + { + "field_name": "System Board Fan1A", + "comparison_result": "Fail", + "expected": "Enabled-OK", + "fetched": "Enabled-Critical" + } + ``` ++ ```json + { + "field_name": "Solid State Disk 0:1:1", + "comparison_result": "Fail", + "expected": "Enabled-OK", + "fetched": "Enabled-Critical" + } + ``` ++ ```json + { + "field_name": "CPU.Socket.1", + "comparison_result": "Fail", + "expected": "Enabled-OK", + "fetched": "Enabled-Critical" + } + ``` ++* Health Check Lifecycle Log (LC Log) Failures + * Dell server health checks fail for recent Critical LC Log Alarms. + * The hardware validation plugin logs the alarm ID, name, and timestamp. + * Recent LC Log critical alarms indicate need for further investigation. + * The following example shows a failure for a critical Backplane voltage alarm. ++ ```json + { + "field_name": "LCLog_Critical_Alarms", + "expected": "No Critical Errors", + "fetched": "53539 2023-07-22T23:44:06-05:00 The system board BP1 PG voltage is outside of range.", + "comparison_result": "Fail" + } + ``` ++* Health Check Server Power Action Failures + * Dell server health check fail for failed server power-up or failed iDRAC reset. + * A failed server control action indicates an underlying hardware issue. 
+ * The following example shows failed power on attempt. ++ ```json + { + "field_name": "Server Control Actions", + "expected": "Success", + "fetched": "Failed", + "comparison_result": "Fail" + } + ``` ++ ```json + "result_log": [ + "Server power up failed with: server OS is powered off after successful power on attempt", + ] + ``` ++* Health Check Power Supply Failure and Redundancy Considerations + * Dell server health checks warn when one power supply is missing or failed. + * Power supply "field_name" might be displayed as 0/PS0/Power Supply 0 and 1/PS1/Power Supply 1 for the first and second power supplies respectively. + * A failure of one power supply doesn't trigger an HW validation device failure. ++ ```json + { + "field_name": "Power Supply 1", + "expected": "Enabled-OK", + "fetched": "UnavailableOffline-Critical", + "comparison_result": "Warning" + } + ``` ++ ```json + { + "field_name": "System Board PS Redundancy", + "expected": "Enabled-OK", + "fetched": "Enabled-Critical", + "comparison_result": "Warning" + } + ``` ++### Boot info category ++* Boot Device Check Considerations + * The `boot_device_name` check is currently informational. + * Mismatched boot device name shouldn't trigger a device failure. ++ ```json + { + "comparison_result": "Info", + "expected": "NIC.PxeDevice.1-1", + "fetched": "NIC.PxeDevice.1-1", + "field_name": "boot_device_name" + } + ``` ++* PXE Device Check Considerations + * This check validates the PXE device settings. + * Failed `pxe_device_1_name` or `pxe_device_1_state` checks indicate a problem with the PXE configuration. + * Failed settings need to be fixed to enable system boot during deployment. ++ ```json + { + "field_name": "pxe_device_1_name", + "expected": "NIC.Embedded.1-1-1", + "fetched": "NIC.Embedded.1-2-1", + "comparison_result": "Fail" + } + ``` ++ ```json + { + "field_name": "pxe_device_1_state", + "expected": "Enabled", + "fetched": "Disabled", + "comparison_result": "Fail" + } + ``` ++## Adding servers back into the Cluster after a repair ++After Hardware is fixed, run BMM Replace following instructions from the following page [BMM actions](howto-baremetal-functions.md). +++ |
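The query example referenced in the "Locating hardware validation results" steps isn't reproduced in this digest. Here's a minimal sketch using the Azure CLI Log Analytics extension; only the `HWVal_CL` table name comes from the article, so treat the workspace ID, time range, and row limit as placeholders:

```azurecli
# Sketch only: <workspace-guid> is the Log Analytics workspace (customer) ID.
az monitor log-analytics query \
  --workspace <workspace-guid> \
  --analytics-query "HWVal_CL | where TimeGenerated > ago(7d) | sort by TimeGenerated desc | take 10"
```

You can also paste the same Kusto query directly into the Logs tab of the workspace.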
payment-hsm | Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/getting-started.md | tags: azure-resource-manager Previously updated : 01/25/2022 Last updated : 01/25/2024 -## Availability --Azure Payment HSM is currently available in the following regions: --- East US-- West US-- South Central US-- Central US-- North Europe-- West Europe--## Prerequisites --Azure Payment HSM customers must have: --- Access to the Thales Customer Portal (Customer ID)-- Thales smart cards and card reader for payShield Manager--## Cost --The HSM devices will be charged based on the [Azure Payment HSM pricing page](https://azure.microsoft.com/pricing/details/payment-hsm/). All other Azure resources for networking and virtual machines will incur regular Azure costs too. --## payShield customization considerations --If you are using payShield on-premises today with a custom firmware, a porting exercise is required to update the firmware to a version compatible with the Azure deployment. Please contact your Thales account manager to request a quote. --Ensure that the following information is provided: --- Customization hardware platform (e.g., payShield 9000 or payShield 10K)-- Customization firmware number--## Support --For details on Azure Payment HSM prerequisites, support channels, and division of support responsibility between Microsoft, Thales, and the customer, see the [Azure Payment HSM service support guide](support-guide.md). +1. First, engage with your Microsoft account manager and get your business cases approved by Azure Payment HSM Product Manager. See [Getting started with Azure Payment HSM](getting-started.md). Ask your Microsoft account manager and CSA to send a request [via email](mailto:paymentHSMRequest@microsoft.com). +2. The Azure Payment HSM comes with payShield Manager license so you can remotely manage the HSM; you must have Thales smart cards and card readers for payShield Manager before onboarding Azure payment HSM. The minimum requirement is one compatible USB Smartcard reader with at least 5 payShield Manager Smartcards. Contact your Thales sales representative for the purchase or using existing compatible smart cards and readers. For more information, see the [Payment HSM support: Prerequisites](support-guide.md#prerequisites). ++3. Provide your contact information to the Microsoft account team and the Azure Payment HSM Product Manager [via email](mailto:paymentHSMRequest@microsoft.com), so they can set up your Thales support account. + + A Thales Customer ID will be created, so you can submit payShield 10K support issues as well as download documentation, software and firmware from Thales portal. The Thales Customer ID can be used by customer team to create individual account access to Thales support portal. ++ | Email Form | + |--| + |Trading Name:| + |Full Address:<br><br><br> + |Country:| + |Post Code:| + |Contact:| + | Address Type: Civil / Military | + | Telephone No. (with Country Code): | + | Is it state owned/governmental: Y / N + |Located in a Free trade zone: Y / N| + +4. You must next engage with the Microsoft CSAs to plan your deployment, and to understand the networking requirements and constraints/workarounds before onboarding the service. 
For details, see: + - [Azure Payment HSM deployment scenarios](deployment-scenarios.md) + - [Solution design for Azure Payment HSM](solution-design.md) + - [Azure Payment HSM "fastpathenabled" feature flag and tag](fastpathenabled.md) + - [Azure Payment HSM traffic inspection](inspect-traffic.md) + +5. Contact Microsoft support to get your subscription approved and to receive the feature registration needed to access the Azure Payment HSM service. See [Register the Azure Payment HSM resource providers](register-payment-hsm-resource-providers.md?tabs=azure-cli); a hedged CLI sketch follows this entry. You will not be charged at this step. +6. Follow the [Tutorials](create-payment-hsm.md) and [How-To Guides](register-payment-hsm-resource-providers.md) to create payment HSMs. Customer billing will start when the HSM resource is created. +7. Upgrade the payShield 10K firmware to the desired version. +8. Review the support process and the division of responsibility between Microsoft support and Thales support in the [Azure Payment HSM service support guide](support-guide.md). +9. Monitor your payShield 10K using standard SNMP V3 tools. payShield Monitor is an additional product that provides continuous monitoring of HSMs. Contact your Thales sales representative for licensing information. ## Next steps |
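For step 5, the linked registration article covers the provider and feature registration with the Azure CLI. A hedged sketch is shown here; confirm the exact namespaces and feature names against that article before running, as they're assumptions in this sketch:

```azurecli
# Sketch only: confirm feature names in the linked registration article.
az feature register --namespace "Microsoft.HardwareSecurityModules" --name "AzureDedicatedHsm"
az feature register --namespace "Microsoft.Network" --name "FastPathEnabled"

# Re-register the providers once the features report as "Registered".
az provider register --namespace "Microsoft.HardwareSecurityModules"
az provider register --namespace "Microsoft.Network"
```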
payment-hsm | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/overview.md | Azure Payment HSM is a "BareMetal" service delivered using [Thales payShield 10K Payment HSMs are provisioned and connected directly to users' virtual network, and HSMs are under users' sole administration control. HSMs can be easily provisioned as a pair of devices and configured for high availability. Users of the service utilize [Thales payShield Manager](https://cpl.thalesgroup.com/encryption/hardware-security-modules/payment-hsms/payshield-manager) for secure remote access to the HSMs as part of their Azure-based subscription. Multiple subscription options are available to satisfy a broad range of performance and multiple application requirements that can be upgraded quickly in line with end-user business growth. The Azure Payment HSM service offers the highest performance level, 2500 CPS. -Payment HSM devices are a variation of [Dedicated HSM](../dedicated-hsm/index.yml) devices, with more advanced cryptographic modules and features; for example, a payment HSM never decrypts the PIN value in transit. - The Azure Payment HSM solution uses hardware from [Thales](https://cpl.thalesgroup.com/encryption/hardware-security-modules/payment-hsms/payshield-10k) as a vendor. Customers have [full control and exclusive access](overview.md#customer-managed-hsm-in-azure) to the Payment HSM. > [!IMPORTANT]-> Azure Payment HSM a highly specialized service. We highly recommend that you review the [Azure Payment HSM pricing page](https://azure.microsoft.com/services/payment-hsm/) and [Getting started with Azure Payment HSM](getting-started.md#support). +> Azure Payment HSM is a highly specialized service. We highly recommend that you review the [Azure Payment HSM pricing page](https://azure.microsoft.com/pricing/details/payment-hsm/#pricing) and [Getting started with Azure Payment HSM](getting-started.md). ++## Azure payment HSM high-level architecture ++After a Payment HSM is provisioned, the HSM device is connected directly to a customer's virtual network, with full remote HSM management capabilities, through Thales payShield Manager and the payShield Trusted Management Device (TMD). ++Two host network interfaces and one management network interface are created when the HSM is provisioned. + ## Why use Azure Payment HSM? |
payment-hsm | Support Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/support-guide.md | This article outlines the Azure Payment HSM prerequisites, support channels, and ## Prerequisites -Microsoft will work with Thales to ensure that customers meet the prerequisites before starting the onboarding process. +Microsoft works with Thales to ensure that customers meet the prerequisites before starting the onboarding process. - Customers must have access to the [Thales CPL Customer Support Portal](https://supportportal.thalesgroup.com/csm) (Customer ID).-- Customers must have Thales smart cards and card readers for payShield Manager. If a customer need to purchase smart cards or card readers they should contact their Thales representatives, or find their contacts through the [Thales contact page](https://cpl.thalesgroup.com/contact-us).+- Customers must have Thales smart cards and card readers for payShield Manager. If a customer need to purchase smart cards or card readers they should contact their Thales representatives, or find their contacts through the [Thales contact page](https://cpl.thalesgroup.com/contact-us): + - **Item**: 971-000135-001-000 + - **Description**: PS10-RMGT-KIT2 - payShield Manager Starter Kit - for software V1.4A (1.8.3) and above + - **Items Included**: 2 Thales Card Readers, 30 PayShield Manager Smartcards + + Compatible smart cards have a blue band and are labeled "payShield Manager Card". These are the only smart cards compatible with the ciphers used to enable over-network use. - If a customer need to purchase a payShield Trusted Management Device (TMD), they should contact their Thales representatives or find their contacts through the [Thales contact page](https://cpl.thalesgroup.com/contact-us).-- Customers must download and review the "Hosted HSM End User Guide", which is available through the Thales CPL Customer Support Portal. The Hosted HSM End User Guide will provide more details on the changes to payShield to this service.+- Customers must download and review the "Hosted HSM End User Guide", which is available through the Thales CPL Customer Support Portal. The Hosted HSM End User Guide provides more details on the changes to payShield to this service. - Customers must review the "Azure Payment HSM - Get Ready for payShield 10K" guide that they received from Microsoft. (Customers who do not have the guide may request it from [Microsoft Support](#microsoft-support).) - If a customer is new to payShield or the remote management option, they should take the formal training courses available from Thales and its approved partners. - If a customer is using payShield on premises today with custom firmware, they must conduct a porting exercise to update the firmware to a version compatible with the Azure deployment. Contact a Thales account manager to request a quote. ## Firmware and license support -The HSM base firmware installed is Thales payShield10K base software version 1.4a 1.8.3 with the Premium Package license. Versions below 1.4a 1.8.3. are not supported. Customers must ensure that they only upgrade to a firmware version that meets their compliance requirements. +The HSM base firmware installed is Thales payShield10K base software version 1.4a 1.8.3. Versions below 1.4a 1.8.3. are not supported. Customers must ensure that they only upgrade to a firmware version that meets their compliance requirements. 
++The licenses included in Azure payment HSM: -The Premium Package license included in Azure payment HSM features: - Premium Key Management-- Magnetic Stripe Issuing +- Magnetic Stripe Issuing - Magnetic Stripe Transaction Processing-- EMV Chip, Contactless & Mobile Issuing -- EMV Transaction Processing -- Premium Data Protection +- EMV Chip, Contactless & Mobile Issuing +- EMV Transaction Processing +- User Authentication +- Data Protection +- Legacy Commands - Remote payShield Manager - Hosted HSM -Customers are responsible for applying payShield security patches and upgrading payShield firmware for their provisioned HSMs, as needed. If customers have questions or require assistance, they should work with Thales support. +Customer set up the performance level (60CPS, 250 CPS, 2500 CPS) and LMK (1 LMK, 2LMK) when HSM is created. ++Customers are responsible for applying payShield security patches and upgrading payShield firmware for their provisioned HSMs as needed. If customers have questions or require assistance, they should work with Thales support. ++Microsoft is responsible for applying payShield security patches to unallocated HSMs. ++## Availability ++Azure Payment HSM is currently available in the following regions: -Microsoft is responsible for applying payShield security patches to unallocated HSMs. +- East US +- West US +- South Central US +- Central US +- North Europe +- West Europe ## Microsoft support -Microsoft will provide support for hardware issues, networking issues, and provisioning issues. Enterprise customers should contact their CSAM to find out details of their support contract . +Microsoft provides support for hardware issues, networking issues, and provisioning issues. Enterprise customers should contact their CSAM to find out details of their support contract. Microsoft support can be contacted by creating a support ticket through the Azure portal: - From the Azure portal homepage, select the "Support + troubleshooting" icon (a question mark in a circle) in the upper-right. - Select the "Help + Support" button.-- Select "Create a support request".+- Select "Create a support request." - On the "New support request" screen, select "Technical" as your issue type, and then "Payment HSM" as the service type. ## Thales support |
postgresql | Concepts Networking Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-private-link.md | For a list of PaaS services that support Private Link functionality, review the The same public service instance can be referenced by multiple private endpoints in different VNets/subnets, even if they belong to different users/subscriptions (including within differing Microsoft Entra ID tenants) or if they have overlapping address spaces. +> [!NOTE] +> **Important Prerequisite:** Azure Database for PostgreSQL flexible server support for Private Endpoints in Preview requires enablement of [**Azure Database for PostgreSQL flexible server Private Endpoint capability** preview feature in your subscription](../../azure-resource-manager/management/preview-features.md). +> Only **after the preview feature is enabled** can you create servers that are PE capable, that is, servers that can be networked by using Private Link. ++ ## Key Benefits of Azure Private Link **Azure Private Link** provides the following benefits: Cross Feature Availability Matrix for preview of Private Endpoint in Azure Datab | Connection pooling with PGBouncer | Yes | Works as designed | | Private Endpoint DNS | Yes | Works as designed and [documented](../../private-link/private-endpoint-dns.md) | -> [!NOTE] -> Azure Database for PostgreSQL flexible server support for Private Endpoints in Preview requires enablement of [**Azure Database for PostgreSQL flexible server Private Endpoint capability** preview feature in your subscription](../../azure-resource-manager/management/preview-features.md). -> Only **after preview feature is enabled** you can create servers which are PE capable, i.e. can be networked using Private Link. --- ### Connect from an Azure VM in Peered Virtual Network |
postgresql | Concepts Scaling Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-scaling-resources.md | Title: Scaling resources -description: This article describes the resource scaling in Azure Database for PostgreSQL - Flexible Server. +description: This article describes the resource scaling in Azure Database for PostgreSQL flexible server. -# Scaling resources in Azure Database for PostgreSQL - Flexible Server +# Scaling resources in Azure Database for PostgreSQL flexible server [!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)] -Azure Database for PostgreSQL flexible server supports both **vertical** and **horizontal** scaling options. +Azure Database for PostgreSQL flexible server supports both vertical and horizontal scaling options. -You can scale **vertically** by adding more resources to the Azure Database for PostgreSQL flexible server instance, such as increasing the instance-assigned number of CPUs and memory. Network throughput of your instance depends on the values you choose for CPU and memory. Once an Azure Database for PostgreSQL flexible server instance is created, you can independently change the CPU (vCores), the amount of storage, and the backup retention period. The number of vCores can be scaled up or down. However, the storage size however can only be increased. In addition, you can scale the backup retention period, up or down, from 7 to 35 days. The resources can be scaled using multiple tools, for instance [Azure portal](./quickstart-create-server-portal.md) or the [Azure CLI](./quickstart-create-server-cli.md). +**Vertical scaling**: You can scale vertically by adding more resources to the Azure Database for PostgreSQL flexible server instance, such as increasing the instance-assigned number of CPUs and memory. Network throughput of your instance depends on the values you choose for CPU and memory. -> [!NOTE] +After an Azure Database for PostgreSQL flexible server instance is created, you can independently change the: ++- CPU (vCores). +- Amount of storage. +- Backup retention period. ++The number of vCores can be scaled up or down, but the storage size can only be increased. You can also scale the backup retention period, up or down, from 7 to 35 days. The resources can be scaled by using multiple tools, for instance, the [Azure portal](./quickstart-create-server-portal.md) or the [Azure CLI](./quickstart-create-server-cli.md). ++> [!NOTE] > After you increase the storage size, you can't go back to a smaller storage size. -You can scale **horizontally** by creating [read replicas](./concepts-read-replicas.md). Read replicas let you scale your read workloads onto separate Azure Database for PostgreSQL flexible server instances, without affecting the performance and availability of the primary instance. +**Horizontal scaling**: You can scale horizontally by creating [read replicas](./concepts-read-replicas.md). Read replicas let you scale your read workloads onto separate Azure Database for PostgreSQL flexible server instances. They don't affect the performance and availability of the primary instance. -When you change the number of vCores or the compute tier, the instance is restarted for the new server type to take effect. During this time the system is switching over to the new server type, no new connections can be established, and all uncommitted transactions are rolled back. 
The overall time it takes to restart your server depends on the crash recovery process and database activity at the time of the restart. Restart typically takes a minute or less, but it can be higher and can take several minutes, depending on transactional activity at the time the restart was initiated. +When you change the number of vCores or the compute tier, the instance is restarted for the new server type to take effect. During this time, the system switches over to the new server type. No new connections can be established, and all uncommitted transactions are rolled back. -If your application is sensitive to loss of in-flight transactions that may occur during compute scaling, we recommend implementing transaction [retry pattern](../single-server/concepts-connectivity.md#handling-transient-errors). +The overall time it takes to restart your server depends on the crash recovery process and database activity at the time of the restart. Restart typically takes a minute or less, but it can be several minutes. Timing depends on the transactional activity when the restart was initiated. ++If your application is sensitive to loss of in-flight transactions that might occur during compute scaling, we recommend implementing a transaction [retry pattern](../single-server/concepts-connectivity.md#handling-transient-errors). Scaling the storage doesn't require a server restart in most cases. Similarly, backup retention period changes are an online operation. To improve the restart time, we recommend that you perform scale operations during off-peak hours. That approach reduces the time needed to restart the database server. -## Near-zero downtime scaling +## Near-zero downtime scaling ++Near-zero downtime scaling is a feature designed to minimize downtime when you modify storage and compute tiers. If you modify the number of vCores or change the compute tier, the server undergoes a restart to apply the new configuration. During this transition to the new server, no new connections can be established. -Near-zero downtime scaling is a feature designed to minimize downtime when modifying storage and compute tiers. If you modify the number of vCores or change the compute tier, the server undergoes a restart to apply the new configuration. During this transition to the new server, no new connections can be established. Typically, this process with regular scaling could take anywhere between 2 to 10 minutes. However, with the new 'Near-zero downtime' scaling feature this duration is reduced to less than 30 seconds. This significant reduction in downtime during scaling resources greatly improves the overall availability of your database instance. +Typically, this process could take anywhere between 2 to 10 minutes with regular scaling. With the new near-zero downtime scaling feature, this duration is reduced to less than 30 seconds. This reduction in downtime during scaling resources improves the overall availability of your database instance. ### How it works -When updating your Azure Database for PostgreSQL flexible server instance in scaling scenarios, we create a new copy of your server (VM) with the updated configuration, synchronize it with your current one, briefly switch to the new copy with a 30-second interruption, and retire the old server, all at no extra cost to you. This process allows for seamless updates while minimizing downtime and ensuring cost-efficiency. 
This scaling process is triggered when changes are made to the storage and compute tiers, and the experience remains consistent for both HA and non-HA servers. This feature is enabled in all Azure regions and there's **no customer action required** to use this capability. +When you update your Azure Database for PostgreSQL flexible server instance in scaling scenarios, we create a new copy of your server (VM) with the updated configuration. We synchronize it with your current one, and switch to the new copy with a 30-second interruption. Then we retire the old server. The process occurs at no extra cost to you. ++This process allows for seamless updates while minimizing downtime and ensuring cost-efficiency. This scaling process is triggered when changes are made to the storage and compute tiers. The experience remains consistent for both high-availability (HA) and non-HA servers. This feature is enabled in all Azure regions. *No customer action is required* to use this capability. > [!NOTE]-> Near-zero downtime scaling process is the _default_ operation. However, in cases where the following limitations are encountered, the system switches to regular scaling, which involves more downtime compared to the near-zero downtime scaling. +> The near-zero downtime scaling process is the _default_ operation. When the following limitations are encountered, the system switches to regular scaling, which involves more downtime compared to the near-zero downtime scaling. -### Precise Downtime Expectations +### Precise downtime expectations -* **Downtime Duration**: In most cases, the downtime ranges from 10 to 30 seconds. -* **Additional Considerations**: After a scaling event, there's an inherent DNS `Time-To-Live` (TTL) period of approximately 30 seconds. This period isn't directly controlled by the scaling process but is a standard part of DNS behavior. So, from an application perspective, the total downtime experienced during scaling could be in the range of **40 to 60 seconds**. +* **Downtime duration**: In most cases, the downtime ranges from 10 to 30 seconds. +* **Other considerations**: After a scaling event, there's an inherent DNS `Time-To-Live` (TTL) period of approximately 30 seconds. This period isn't directly controlled by the scaling process. It's a standard part of DNS behavior. From an application perspective, the total downtime experienced during scaling could be in the range of 40 to 60 seconds. -#### Considerations and limitations +#### Considerations and limitations -- In order for near-zero downtime scaling to work, you should enable all [inbound/outbound connections between the IPs in the delegated subnet when using VNET integrated networking](../flexible-server/concepts-networking-private.md#virtual-network-concepts). If these aren't enabled near zero downtime scaling process will not work and scaling will occur through the standard scaling workflow.-- Near-zero downtime scaling won't work if there are regional capacity constraints or quota limits on customer subscriptions.-- Near-zero downtime scaling doesn't work for replica server, as it is only supported on the primary server. For replica server it will automatically go through regular scaling process.-- Near-zero downtime scaling won't work if a [virtual network injected server with delegated subnet](../flexible-server/concepts-networking-private.md#virtual-network-concepts) doesn't have sufficient usable IP addresses. 
If you have a standalone server, one extra IP address is necessary, and for a HA-enabled server, two extra IP addresses are required.-- Replication Slots - Be aware that logical replication slots aren't preserved during near-zero downtime failover event. To maintain logical replication slots and ensure data consistency after a scale operation, it is recommended to use the [pg_failover_slot](https://github.com/EnterpriseDB/pg_failover_slots) extension. For more details, refer [Enabling extension in Flexible Server](../flexible-server/concepts-extensions.md#pg_failover_slots-preview).-- ^ For HA enabled servers, near-zero downtime scaling is currently enabled for a limited set of regions. We will be enabling this to more regions in a phased manner based upon the regional capacity.+- For near-zero downtime scaling to work, enable all [inbound/outbound connections between the IPs in the delegated subnet when you use virtual network integrated networking](../flexible-server/concepts-networking-private.md#virtual-network-concepts). If these connections aren't enabled, the near-zero downtime scaling process doesn't work and scaling occurs through the standard scaling workflow. +- Near-zero downtime scaling doesn't work if there are regional capacity constraints or quota limits on customer subscriptions. +- Near-zero downtime scaling doesn't work for a replica server because it's only supported on the primary server. For a replica server, it automatically goes through a regular scaling process. +- Near-zero downtime scaling doesn't work if a [virtual network-injected server with a delegated subnet](../flexible-server/concepts-networking-private.md#virtual-network-concepts) doesn't have sufficient usable IP addresses. If you have a standalone server, one extra IP address is necessary. For an HA-enabled server, two extra IP addresses are required. +- Logical replication slots aren't preserved during a near-zero downtime failover event. To maintain logical replication slots and ensure data consistency after a scale operation, use the [pg_failover_slot](https://github.com/EnterpriseDB/pg_failover_slots) extension. For more information, see [Enabling extension in a flexible server](../flexible-server/concepts-extensions.md#pg_failover_slots-preview). +- For HA-enabled servers, near-zero downtime scaling is currently enabled for a limited set of regions. More regions will be enabled in a phased manner based on regional capacity. ## Related content |
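Because a compute scale operation restarts the instance, it can help to confirm from a client session that the restart completed and the new configuration is active. The following is a minimal SQL sketch, not part of the original article; it assumes only a standard connection to the flexible server.

```sql
-- Run after reconnecting once the scale operation finishes.
-- pg_postmaster_start_time() reports the most recent server start,
-- so a recent timestamp confirms the restart that applied the new compute tier.
SELECT pg_postmaster_start_time() AS last_restart,
       now() - pg_postmaster_start_time() AS uptime;

-- Spot-check settings that are typically derived from the compute size.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'max_connections');
```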
postgresql | Concepts Server Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-server-parameters.md | Title: Server parameters -description: Describes the server parameters in Azure Database for PostgreSQL - Flexible Server. --+ Title: Server parameters - Azure Database for PostgreSQL - Flexible Server +description: Describes the server parameters in Azure Database for PostgreSQL - Flexible Server ++ Previously updated : 1/25/2024 Last updated : 01/30/2024 # Server parameters in Azure Database for PostgreSQL - Flexible Server [!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)] -Azure Database for PostgreSQL flexible server provides a subset of configurable parameters for each server. For more information on Postgres parameters, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/config-setting.html). +Azure Database for PostgreSQL provides a subset of configurable parameters for each server. For more information on +Postgres parameters, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/runtime-config.html). -## An overview of PostgreSQL parameters +## An overview of PostgreSQL parameters -Azure Database for PostgreSQL server is pre-configured with optimal default values for each parameter on creation. Static parameters require a server restart. Further parameters that require superuser access cannot be configured by the user. +Azure Database for PostgreSQL - Flexible Server comes preconfigured with optimal default settings for each parameter. Parameters are categorized into one of the following types: -In order to review which parameters are available to view or to modify, we recommend going into the Azure portal, and to the Server Parameters page. You can also configure parameters on a per-user or per-database basis using `ALTER DATABASE` or `ALTER ROLE` commands. +* **Static parameters**: Parameters of this type require a server restart to implement any changes. +* **Dynamic parameters**: Parameters in this category can be altered without needing to restart the server instance; + however, changes will only apply to new connections established after the modification. +* **Read-only parameters**: Parameters within this grouping aren't user-configurable due to their critical role in + maintaining the reliability, security, or other operational aspects of the service. ->[!NOTE] -> Since Azure Database for PostgreSQL is a managed database service, users are not provided host or OS access to view or modify configuration files such as `postgresql.conf`. The content of the file is automatically updated based on parameter changes in the Server Parameters page. +To determine the category to which a parameter belongs, you can check the Azure portal under the **Server parameters** blade, where they're grouped into respective tabs for easy identification. +### Modification of server parameters -Here's the list of some of the parameters: +Various methods and levels are available to customize your parameters according to your specific needs. +#### Global - server level - | Parameter Name | Description | -|-|--| -| **max_connections** | You can tune max_connections on Azure Database for PostgreSQL flexible server, where it can be set to 5000 connections. See the [limits documentation](concepts-limits.md) for more details. Although it is not the best practice to set this value higher than several hundreds. 
See [Postgres Wiki](https://wiki.postgresql.org/wiki/Number_Of_Database_Connections) for more details. If you are considering higher values, consider using [connection pooling](concepts-pgbouncer.md) instead. | -| **shared_buffers** | The 'shared_buffers' setting changes depending on the selected SKU (SKU determines the memory available). General Purpose servers have 2 GB shared_buffers for 2 vCores; Memory Optimized servers have 4 GB shared_buffers for 2 vCores. The shared_buffers setting scales linearly (approximately) as vCores increase in a tier. | -| **shared_preload_libraries** | This parameter is available for configuration with a predefined set of supported extensions. We always load the `azure` extension (used for maintenance tasks), and the `pg_stat_statements` extension (you can use the pg_stat_statements.track parameter to control whether the extension is active). | -| **connection_throttling** | You can enable or disable temporary connection throttling per IP for too many invalid password login failures. | - | **work_mem** | This parameter specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. Increasing this parameter value can help Azure Database for PostgreSQL flexible server perform larger in-memory scans instead of spilling to disk, which is faster. This configuration is beneficial if your workload contains few queries but with many complex sorting tasks, and you have ample available memory. Be careful however, as one complex query may have number of sort, hash operations running concurrently. Each one of those operations uses as much memory as it value allows before it starts writing to disk based temporary files. Therefore on a relatively busy system total memory usage is many times of individual work_mem parameter. If you do decide to tune this value globally, you can use formula Total RAM * 0.25 / max_connections as initial value. Azure Database for PostgreSQL flexible server supports range of 4096-2097152 kilobytes for this parameter. | -| **effective_cache_size** | The effective_cache_size parameter estimates how much memory is available for disk caching by the operating system and within the database shared_buffers itself. This parameter is just a planner "hint" and does not allocate or reserve any memory. Index scans are most likely to be used against higher values; otherwise, sequential scans are used if the value is low. Recommendations are to set effective_cache_size at 50%-75% of the machineΓÇÖs total RAM. | -| **maintenance_work_mem** | The maintenance_work_mem parameter basically provides the maximum amount of memory to be used by maintenance operations like vacuum, create index, and alter table add foreign key operations. Default value for that parameter is 64 KB. ItΓÇÖs recommended to set this value higher than work_mem. | -| **effective_io_concurrency** | Sets the number of concurrent disk I/O operations that Azure Database for PostgreSQL flexible server expects can be executed simultaneously. Raising this value increases the number of I/O operations that any individual Azure Database for PostgreSQL flexible server session attempts to initiate in parallel. The allowed range is 1 to 1000, or zero to disable issuance of asynchronous I/O requests. Currently, this setting only affects bitmap heap scans. 
| - |**require_secure_transport** | If your application doesn't support SSL connectivity to the server, you can optionally disable secured transport from your client by turning `OFF` this parameter value. | - |**log_connections** | This parameter may be read-only, as on Azure Database for PostgreSQL flexible server all connections are logged and intercepted to make sure connections are coming in from right sources for security reasons. | -|**log_disconnections** | This parameter may be read-only, as on Azure Database for PostgreSQL flexible server all disconnections are logged. | +For altering settings globally at the instance or server level, navigate to the **Server parameters** blade in the Azure portal, or use other available tools such as Azure CLI, REST API, ARM templates, and third-party tools. ++> [!NOTE] +> Since Azure Database for PostgreSQL is a managed database service, users are not provided host or operating system access to view or modify configuration files such as `postgresql.conf`. The content of the file is automatically updated based on parameter changes made using one of the methods described above. +++#### Granular levels ++You can adjust parameters at more granular levels, thereby overriding globally set values. The scope and duration of +these modifications depend on the level at which they're made: ++* **Database level**: Utilize the `ALTER DATABASE` command for database-specific configurations. +* **Role or user level**: Use the `ALTER USER` command for user-centric settings. +* **Function, procedure level**: When defining a function or procedure, you can specify or alter the configuration parameters that will be set when the function is called. +* **Table level**: As an example, you can modify parameters related to autovacuum at this level. +* **Session level**: For the duration of an individual database session, you can adjust specific parameters. PostgreSQL facilitates this with the following SQL commands: + * The `SET` command lets you make session-specific adjustments. These changes serve as the default settings during the current session. Access to these changes may require specific `SET` privileges, and the limitations about modifiable and read-only parameters described above do apply. The corresponding SQL function is `set_config(setting_name, new_value, is_local)`. + * The `SHOW` command allows you to examine existing parameter settings. Its SQL function equivalent is `current_setting(setting_name text)`. ++Here's the list of some of the parameters. ++## Memory ++### shared_buffers ++| Attribute | Value | +|:|-:| +| Default value | 25% of total RAM | +| Allowed value | 10-75% of total RAM | +| Type | Static | +| Level | Global | +| Azure-Specific Notes | The `shared_buffers` setting scales linearly (approximately) as vCores increase in a tier. | ++#### Description ++The `shared_buffers` configuration parameter determines the amount of system memory allocated to the PostgreSQL database for buffering data. It serves as a centralized memory pool that's accessible to all database processes. When data is needed, the database process first checks the shared buffer. If the required data is present, it's quickly retrieved, thereby bypassing a more time-consuming disk read. By serving as an intermediary between the database processes and the disk, `shared_buffers` effectively reduces the number of required I/O operations. 
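As a quick way to see what this default works out to on a given compute size, the following SQL sketch (not from the original article) reads `shared_buffers` from `pg_settings`; the byte conversion assumes the default 8 kB block size reported in the `unit` column.

```sql
-- Current shared_buffers setting for this server.
SHOW shared_buffers;

-- Same value from pg_settings, converted to a readable size
-- (assumes the default 8 kB block size).
SELECT name,
       setting,
       unit,
       pg_size_pretty(setting::bigint * 8192) AS approximate_size
FROM pg_settings
WHERE name = 'shared_buffers';
```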
++### huge_pages ++| Attribute | Value | +|:|-:| +| Default value | TRY | +| Allowed value | TRY, ON, OFF | +| Type | Static | +| Level | Global | +| Azure-Specific Notes | For servers with 4 or more vCores, huge pages are automatically allocated from the underlying operating system. Feature isn't available for servers with fewer than 4 vCores. The number of huge pages is automatically adjusted if any shared memory settings are changed, including alterations to `shared_buffers`. | ++#### Description ++Huge pages are a feature that allows for memory to be managed in larger blocks - typically 2 MB, as opposed to the "classic" 4 KB pages. Utilizing huge pages can offer performance advantages in several ways: they reduce the overhead associated with memory management tasks like fewer Translation Lookaside Buffer (TLB) misses and shorten the time needed for memory management, effectively offloading the CPU. Specifically, in PostgreSQL, huge pages can only be utilized for the shared memory area, a significant part of which is allocated for shared buffers. Another advantage is that huge pages prevent the swapping of the shared memory area out to disk, further stabilizing performance. ++#### Recommendations ++* For servers with significant memory resources, it's advisable to avoid disabling huge pages, as doing so could compromise performance. +* If you start with a smaller server that doesn't support huge pages but anticipate scaling up to a server that does, keeping the `huge_pages` setting at `TRY` is recommended for seamless transition and optimal performance. ++### work_mem ++| Attribute | Value | +|:--|--:| +| Default value | 4MB | +| Allowed value | 4MB-2GB | +| Type | Dynamic | +| Level | Global and granular | ++#### Description ++The `work_mem` parameter in PostgreSQL controls the amount of memory allocated for certain internal operations, such as sorting and hashing, within each database session's private memory area. Unlike shared buffers, which are in the shared memory area, `work_mem` is allocated in a per-session or per-query private memory space. By setting an adequate `work_mem` size, you can significantly improve the efficiency of these operations, reducing the need to write temporary data to disk. ++#### Key points ++* **Private connection memory**: `work_mem` is part of the private memory used by each database session, distinct from the shared memory area used by `shared_buffers`. +* **Query-specific usage**: Not all sessions or queries use `work_mem`. Simple queries like `SELECT 1` are unlikely to require any `work_mem`. However, more complex queries involving operations like sorting or hashing can consume one or multiple chunks of `work_mem`. +* **Parallel operations**: For queries that span multiple parallel backends, each backend could potentially utilize one or multiple chunks of `work_mem`. ++#### Monitoring and adjusting `work_mem` ++It's essential to continuously monitor your system's performance and adjust `work_mem` as necessary, primarily if slow query execution times related to sorting or hashing operations occur. Here are ways you can monitor it using tools available in the Azure portal: ++* **[Query performance insight](concepts-query-performance-insight.md)**: Check the **Top queries by temporary files** tab to identify queries that are generating temporary files, suggesting a potential need to increase the `work_mem`. 
+* **[Troubleshooting guides](concepts-troubleshooting-guides.md)**: Utilize the **High temporary files** tab in the troubleshooting guides to identify problematic queries. ++##### Granular adjustment +While managing the `work_mem` parameter, it's often more efficient to adopt a granular adjustment approach rather than setting a global value. This approach not only ensures that you allocate memory judiciously based on the specific needs of different processes and users but also minimizes the risk of encountering out-of-memory issues. Here's how you can go about it: ++* **User-Level**: If a specific user is primarily involved in aggregation or reporting tasks, which are memory-intensive, consider customizing the `work_mem` value for that user using the `ALTER ROLE` command to enhance the performance of their operations. ++* **Function/Procedure Level**: In cases where specific functions or procedures are generating substantial temporary files, increasing the `work_mem` at the specific function or procedure level can be beneficial. This can be done using the `ALTER FUNCTION` or `ALTER PROCEDURE` command to specifically allocate more memory to these operations. ++* **Database Level**: Alter `work_mem` at the database level if only specific databases are generating high amounts of temporary files. ++* **Global Level**: If an analysis of your system reveals that most queries are generating small temporary files, while only a few are creating large ones, it may be prudent to globally increase the `work_mem` value. This would facilitate most queries to process in memory, thus avoiding disk-based operations and improving efficiency. However, always be cautious and monitor the memory utilization on your server to ensure it can handle the increased `work_mem`. ++##### Determining the minimum `work_mem` value for sorting operations ++To find the minimum `work_mem` value for a specific query, especially one generating temporary disk files during the sorting process, you would start by considering the temporary file size generated during the query execution. For instance, if a query is generating a 20 MB temporary file: ++1. Connect to your database using psql or your preferred PostgreSQL client. +2. Set an initial `work_mem` value slightly higher than 20 MB to account for additional headers when processing in memory, using a command such as: `SET work_mem TO '25MB'`. +3. Execute `EXPLAIN ANALYZE` on the problematic query in the same session. +4. Review the output for `"Sort Method: quicksort Memory: xkB"`. If it indicates `"external merge Disk: xkB"`, raise the `work_mem` value incrementally and retest until `"quicksort Memory"` appears, signaling that the query is now operating in memory. +5. After determining the value through this method, it can be applied either globally or on more granular levels as described above to suit your operational needs (see the SQL sketch after this section). +++### maintenance_work_mem ++| Attribute | Value | +|:|--:| +| Default value | 211MB | +| Allowed value | 1MB-2GB | +| Type | Dynamic | +| Level | Global and granular | +| Azure-Specific Notes | | ++#### Description +`maintenance_work_mem` is a configuration parameter in PostgreSQL that governs the amount of memory allocated for maintenance operations, such as `VACUUM`, `CREATE INDEX`, and `ALTER TABLE`. Unlike `work_mem`, which affects memory allocation for query operations, `maintenance_work_mem` is reserved for tasks that maintain and optimize the database structure. 
Adjusting this parameter appropriately can help enhance the efficiency and speed of database maintenance operations. ->[!NOTE] -> As you scale Azure Database for PostgreSQL flexible server SKUs up or down, affecting available memory to the server, you may wish to tune your memory global parameters, such as `work_mem` or `effective_cache_size` accordingly based on information shared in the article. - ## Next steps For information on supported PostgreSQL extensions, see [the extensions document](concepts-extensions.md). |
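The `work_mem` workflow described in this article can be condensed into a short SQL sketch. It isn't part of the original article, and the role, database, function, and table names are placeholders used only for illustration.

```sql
-- 1. Session-level experiment: raise work_mem slightly above the observed
--    temporary-file size, then confirm the sort runs in memory.
SET work_mem TO '25MB';
EXPLAIN ANALYZE
SELECT *
FROM orders              -- placeholder table
ORDER BY order_date;     -- look for "Sort Method: quicksort" in the plan output

-- 2. Apply the value at a granular level instead of changing the global default.
ALTER ROLE reporting_user SET work_mem = '32MB';         -- user level
ALTER DATABASE analytics SET work_mem = '32MB';          -- database level
ALTER FUNCTION monthly_rollup() SET work_mem = '64MB';   -- function level
```

Settings applied this way take effect for new sessions (or, for the function-level setting, whenever the function runs) without touching the server-wide default.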
postgresql | How To Server Logs Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-server-logs-cli.md | Title: Download server logs for Azure Database for PostgreSQL - Flexible Server with Azure CLI -description: This article describes how to download server logs using Azure CLI. + Title: Download server logs for Azure Database for PostgreSQL flexible server with Azure CLI +description: This article describes how to download server logs by using the Azure CLI. -# List and download Azure Database for PostgreSQL - Flexible Server logs by using the Azure CLI +# List and download Azure Database for PostgreSQL flexible server logs by using the Azure CLI [!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)] -This article shows you how to list and download Azure Database for PostgreSQL flexible server logs using Azure CLI. +This article shows you how to list and download Azure Database for PostgreSQL flexible server logs by using the Azure CLI. ## Prerequisites -This article requires that you're running the Azure CLI version 2.39.0 or later locally. To see the version installed, run the `az --version` command. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). --You need to sign-in to your account using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to **Subscription ID** for your Azure account. +- You must be running the Azure CLI version 2.39.0 or later locally. To see the version installed, run the `az --version` command. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). +- Sign in to your account by using the [az login](/cli/azure/reference-index#az-login) command. The `id` property refers to the **Subscription ID** for your Azure account. ```azurecli-interactive az login ``` -Select the specific subscription under your account using [az account set](/cli/azure/account) command. Make a note of the **id** value from the **az login** output to use as the value for **subscription** argument in the command. If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. To get all your subscription, use [az account list](/cli/azure/account#az-account-list). +Select the specific subscription under your account by using the [az account set](/cli/azure/account) command. Make a note of the `id` value from the `az login` output to use as the value for the `subscription` argument in the command. If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. To get all your subscriptions, use [az account list](/cli/azure/account#az-account-list). ```azurecli az account set --subscription <subscription id> ``` -## List server logs using Azure CLI +## List server logs by using the Azure CLI -Once you're configured the prerequisites and connected to your required subscription. You can list the server logs from your Azure Database for PostgreSQL flexible server instance by using the following command. +After you configure the prerequisites and connect to your required subscription, you can list the server logs from your Azure Database for PostgreSQL flexible server instance by using the following command. 
-> [!Note] -> You can configure your server logs in the same way as above using the [Server Parameters](./howto-configure-server-parameters-using-portal.md), setting the appropriate values for these parameters: _logfiles.download_enable_ to ON to enable this feature, and _logfiles.retention_days_ to define retention in days. Initially, server logs occupy data disk space for about an hour before moving to backup storage for the set retention period. +> [!NOTE] +> You can configure your server logs in the same way as just shown by using the [server parameters](./howto-configure-server-parameters-using-portal.md). Set the appropriate values for these parameters. Set `logfiles.download_enable` to ON to enable this feature. Set `logfiles.retention_days` to define retention in days. Initially, server logs occupy data disk space for about an hour before moving to backup storage for the set retention period. ```azurecli az postgres flexible-server server-logs list --resource-group <myresourcegroup> --server-name <serverlogdemo> --out <table> ``` -Here are the details for the above command +Here are the details for the preceding command. -|**LastModifiedTime** |**Name** |**ResourceGroup**|**SizeInKb**|**TypePropertiesType**|**Url** | +|LastModifiedTime |Name |ResourceGroup|SizeInKb|TypePropertiesType|URL | |-|||--||| |2024-01-10T13:20:15+00:00|serverlogs/postgresql_2024_01_10_12_00_00.log|myresourcegroup|242 |LOG |`https://00000000000.blob.core.windows.net/serverlogs/postgresql_2024_01_10_12_00_00.log?`| |2024-01-10T14:20:37+00:00|serverlogs/postgresql_2024_01_10_13_00_00.log|myresourcegroup|237 |LOG |`https://00000000000.blob.core.windows.net/serverlogs/postgresql_2024_01_10_13_00_00.log?`| |2024-01-10T15:20:58+00:00|serverlogs/postgresql_2024_01_10_14_00_00.log|myresourcegroup|237 |LOG |`https://00000000000.blob.core.windows.net/serverlogs/postgresql_2024_01_10_14_00_00.log?`| |2024-01-10T16:21:17+00:00|serverlogs/postgresql_2024_01_10_15_00_00.log|myresourcegroup|240 |LOG |`https://00000000000.blob.core.windows.net/serverlogs/postgresql_2024_01_10_15_00_00.log?`| +The output table here lists `LastModifiedTime`, `Name`, `ResourceGroup`, `SizeInKb`, and `Download Url` of the server logs. -The output table here lists `LastModifiedTime`, `Name`, `ResourceGroup`, `SizeInKb` and `Download Url` of the Server Logs. --By default `LastModifiedTime` is set to 72 hours, for listing files older than 72 hours, use flag `--file-last-written <Time:HH>` +By default, `LastModifiedTime` is set to 72 hours. For listing files older than 72 hours, use the flag `--file-last-written <Time:HH>`. ```azurecli az postgres flexible-server server-logs list --resource-group <myresourcegroup> --server-name <serverlogdemo> --out table --file-last-written <144> ``` -## Download server logs using Azure CLI +## Download server logs by using the Azure CLI The following command downloads the preceding server logs to your current directory. az postgres flexible-server server-logs download --resource-group <myresourcegro ``` ## Next steps-- To enable and disable Server logs from portal, you can refer to the [article.](./how-to-server-logs-portal.md)-- Learn more about [Logging](./concepts-logging.md)++- To enable and disable server logs from the portal, see [Enable, list, and download server logs for Azure Database for PostgreSQL flexible server](./how-to-server-logs-portal.md). +- Learn more about [logging](./concepts-logging.md). |
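To cross-check the logging configuration from inside the database rather than through the CLI, a small SQL sketch such as the following can help. It isn't part of the original article, and whether the Azure-specific `logfiles.*` parameters are exposed in `pg_settings` can vary, so treat the `LIKE` filter as a best-effort check.

```sql
-- Best-effort look at log-related parameters visible to the session.
SELECT name, setting
FROM pg_settings
WHERE name LIKE 'logfiles.%'
   OR name IN ('log_connections', 'log_disconnections', 'logging_collector')
ORDER BY name;
```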
postgresql | Troubleshoot Password Authentication Failed For User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/troubleshoot-password-authentication-failed-for-user.md | + + Title: Password authentication failed for user - Azure Database for PostgreSQL - Flexible Server +description: Provides resolutions for a connection error - password authentication failed for user `<user-name>`. +++++ Last updated : 01/30/2024+++# Password authentication failed for user `<user-name>` +This article helps you solve a problem that might occur when connecting to Azure Database for PostgreSQL - Flexible Server. +++## Symptoms +When attempting to connect to Azure Database for PostgreSQL - Flexible Server, you may encounter the following error message: ++> psql: error: connection to server at "\<server-name\>.postgres.database.azure.com" (x.x.x.x), port 5432 failed: FATAL: password authentication failed for user "\<user-name\>" ++This error indicates that the password provided for the user `<user-name>` is incorrect. ++Following the initial password authentication error, you might see another error message indicating that the client is trying to reconnect to the server, this time without SSL encryption. The failure here's due to the server's `pg_hba.conf` configuration not permitting unencrypted connections. +++> connection to server at "\<server-name\>.postgres.database.azure.com" (x.x.x.x), port 5432 failed: FATAL: no pg_hba.conf entry for host "y.y.y.y", user "\<user-name\>", database "postgres", no encryption +++When using a `libpq` client that supports SSL, such as tools like `psql`, `pg_dump`, or `pgbench`, it's standard behavior to try connecting once with SSL and once without. The reason for this approach is that the server can have different `pg_hba` rules for SSL and non-SSL connections. +The combined error message you receive in this scenario looks like this: +++> psql: error: connection to server at "\<server-name\>.postgres.database.azure.com" (x.x.x.x), port 5432 failed: FATAL: password authentication failed for user "\<user-name\>" +connection to server at "\<server-name\>.postgres.database.azure.com" (x.x.x.x), port 5432 failed: FATAL: no pg_hba.conf entry for host "y.y.y.y", user "\<user-name\>", database "postgres", no encryption +++To avoid this dual attempt and specify the desired SSL mode, you can use the `sslmode` connection option in your client configuration. For instance, if you're using `libpq` variables in the bash shell, you can set the SSL mode by using the following command: ++```bash +export PGSSLMODE=require +``` +++## Cause +The error encountered when connecting to Azure Database for PostgreSQL - Flexible Server primarily stems from issues related to password authentication: ++* **Incorrect password** +The password authentication failed for user `<user-name>` error occurs when the password for the user is incorrect. This could happen due to a mistyped password, a recent password change that hasn't been updated in the connection settings, or other similar issues. ++* **User or role created without a password** +Another possible cause of this error is creating a user or role in PostgreSQL without specifying a password. Executing commands like `CREATE USER <user-name>` or `CREATE ROLE <role-name>` without an accompanying password statement results in a user or role with no password set. 
Attempting to connect with such a user or role without setting a password will lead to authentication failure with password authentication failed error. ++* **Potential security breach** +If the authentication failure is unexpected, particularly if there are multiple failed attempts recorded, it could indicate a potential security breach. Unauthorized access attempts might trigger such errors. ++## Resolution +If you're encountering the "password authentication failed for user `<user-name>`" error, follow these steps to resolve the issue. ++* **Try connecting with a different tool** ++ If the error comes from an application, attempt to connect to the database using a different tool, such as `psql` or pgAdmin, with the same username and password. This step helps determine if the issue is specific to the client or a broader authentication problem. Keep in mind any relevant firewall rules that might affect connectivity. For instructions on connecting using different tools, refer to the "Connect" blade in the Azure portal. ++* **Change the password** ++ If you still encounter password authentication issues after trying a different tool, consider changing the password for the user. For the administrator user, you can change the password directly in the Azure portal as described in this [link](how-to-manage-server-portal.md#reset-admin-password). For other users, or the administrator user under certain conditions, you can change the password from the command line. Ensure that you're logged in to the database as a user with the `CREATEROLE` attribute and the `ADMIN` option on their role. The command to change the password is: ++ ```sql + ALTER USER <user-name> PASSWORD '<new-password>'; + ``` ++* **Set password for user or role created without one** ++ If the cause of the error is the creation of a user or role without a password, log in to your PostgreSQL instance and set the password for the role. For roles created without the `LOGIN` privilege, make sure to grant this privilege along with setting the password: ++ ```sql + ALTER ROLE <role-name> LOGIN; + ALTER ROLE <role-name> PASSWORD '<new-password>'; + ``` + +* **Identify the Attacker's IP Address and Secure Your Database** ++ If you suspect a potential security breach is causing unauthorized access to your Azure Database for PostgreSQL - Flexible Server, follow these steps to address the issue: ++ 1. **Enable log capturing** + If log capturing isn't already on, get it set up now. It's key for keeping an eye on database activities and catching any odd access patterns. There are several ways to do this, including Azure Monitor Log Analytics and server logs, which help store and analyze database event logs. + * **Log Analytics**, Check out the setup instructions for Azure Monitor Log Analytics here: [Configure and access logs in Azure Database for PostgreSQL - Flexible Server](how-to-configure-and-access-logs.md). + * **Server logs**, For hands-on log management, head over to the Azure portal's server logs section here: [Enable, list and download server logs for Azure Database for PostgreSQL - Flexible Server](how-to-server-logs-portal.md). ++ 2. **Identify the attacker's IP address** + * Review the logs to find the IP address from which the unauthorized access attempts are being made. 
If the attacker is using a `libpq`-based tool, you'll see the IP address in the log entry associated with the failed connection attempt: + > connection to server at "\<server-name\>.postgres.database.azure.com" (x.x.x.x), port 5432 failed: FATAL: no pg_hba.conf entry for host "y.y.y.y", user "\<user-name\>", database "postgres", no encryption + + In this example, `y.y.y.y` is the IP address from which the attacker is trying to connect. ++ * **Modify the `log_line_prefix`** + To improve logging and make it easier to troubleshoot, you should modify the `log_line_prefix` parameter in your PostgreSQL configuration to include the remote host's IP address. To log the remote host name or IP address, add the `%h` escape code to your `log_line_prefix`. + + For instance, you can change your `log_line_prefix` to the following format for comprehensive logging: ++ ```bash + log_line_prefix = '%t [%p]: [%l-1] db=%d,user=%u,app=%a,client=%h ' + ``` + + This format includes: + + * `%t` for the timestamp of the event + * `%p` for the process ID + * `[%l-1]` for the session line number + * `%d` for the database name + * `%u` for the user name + * `%a` for the application name + * `%h` for the client IP address + + + By using this log line prefix, you're able to track the time, process ID, user, application, and client IP address associated with each log entry, providing valuable context for each event in the server log. ++ 3. **Block the attacker's IP address** + Dig into the logs to spot any suspicious IP addresses that keep showing up in unauthorized access attempts. Once you find these IPs, immediately block them in your firewall settings. This cuts off their access and prevents any more unauthorized attempts. Additionally, review your firewall rules to ensure they're not too permissive. Overly broad rules can expose your database to potential attacks. Limit access to only known and necessary IP ranges. + +++By following these steps, you should be able to resolve the authentication issues and successfully connect to your Azure Database for PostgreSQL - Flexible Server. +++++++ |
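To complement the resolutions above, a quick diagnostic from `psql` can confirm whether the failing role exists, can log in, and has an unexpired password validity period. This is an illustrative sketch, not from the original article; `app_user` is a placeholder for the failing user name.

```sql
-- Confirm the role exists and has the LOGIN attribute.
SELECT rolname,
       rolcanlogin,        -- false: the role was created without LOGIN
       rolvaliduntil       -- an expired VALID UNTIL timestamp also blocks login
FROM pg_roles
WHERE rolname = 'app_user';

-- If the role was created without LOGIN or a password, fix both
-- (the same commands shown in the resolution steps).
ALTER ROLE app_user LOGIN;
ALTER ROLE app_user PASSWORD '<new-password>';
```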
postgresql | Concepts Single To Flexible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md | description: Concepts about migrating your Single server to Azure Database for P Previously updated : 03/31/2023 Last updated : 01/25/2024 - ++ - references_regions -# Migration tool - Azure database for PostgreSQL Single Server to Flexible Server +# Migration tool - Azure Database for PostgreSQL - Single Server to Flexible Server Azure Database for PostgreSQL powered by the PostgreSQL community edition is available in two deployment modes: - Flexible Server The following table lists the different tools available for performing the migra | Tool | Mode | Pros | Cons | | : | : | : | : |-| Single to Flex Migration tool (**Recommended**) | Offline | - Managed migration service.<br />- No complex setup/pre-requisites required<br />- Simple to use portal-based migration experience<br />- Fast offline migration tool<br />- No limitations in terms of size of databases it can handle. | Downtime to applications. | +| Single to Flex Migration tool (**Recommended**) | Offline or Online* | - Managed migration service.<br />- No complex setup/pre-requisites required<br />- Simple to use portal-based migration experience<br />- Fast offline migration tool<br />- No limitations in terms of size of databases it can handle. | Downtime to applications. | | pg_dump and pg_restore | Offline | - Tried and tested tool that is in use for a long time<br />- Suited for databases of size less than 10 GB<br />| - Need prior knowledge of setting up and using this tool<br />- Slow when compared to other tools<br />Significant downtime to your application. |-| Azure DMS | Online | - Minimal downtime to your application<br />- Free of cost | - Complex setup<br />- High chances of migration failures<br />- Can't handle database of sizes > 1 TB<br />- Can't handle write-intensive workload | --The next section of the document gives an overview of the Single to Flex Migration tool, its implementation, limitations, and the experience that makes it the recommended tool to perform migrations from single to flexible server. > [!NOTE] > The Single to Flex Migration tool is available in all Azure regions and currently supports **Offline** migrations. Support for **Online** migrations is currently available in Central US, France Central, Germany West Central, North Central US, South Central US, North Europe, all West US regions, UK South, South Africa North, UAE North, and all regions across Asia and Australia. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image. ++The next section of the document gives an overview of the Single to Flex Migration tool, its implementation, limitations, and the experience that makes it the recommended tool to perform migrations from single to flexible server. ## Single to Flexible Migration tool - Overview The single to flex migration tool is a hosted solution where we spin up a purpose-built docker container in the target Flexible server VM and drive the incoming migrations. This docker container spins up on-demand when a migration is initiated from a single server and gets decommissioned once the migration is completed. 
The migration container uses a new binary called [pgcopydb](https://github.com/dimitri/pgcopydb) that provides a fast and efficient way of copying databases from one server to another. Though pgcopydb uses the traditional pg_dump and pg_restore for schema migration, it implements its own data migration mechanism that involves multi-process streaming parts from source to target. Also, pgcopydb bypasses pg_restore way of index building and drives that internally in a way that all indexes are built concurrently. So, the data migration process is quicker with pgcopydb. Following is the process diagram of the new version of the migration tool. ## Pre-migration validations-We noticed many migrations fail due to setup issues on source and target server. Most of the issues can be categorized into the following buckets: -* Issues related to authentication/permissions for the migration user on source and target server. -* [Prerequisites](#migration-prerequisites) not being taken care of, before running the migration. -* Unsupported features/configurations between the source and target. ++We noticed many migrations fail due to setup issues on source and target server. Most of the issues can be categorized into the following buckets: +- Issues related to authentication/permissions for the migration user on source and target server. +- [Prerequisites](#migration-prerequisites) not being taken care of, before running the migration. +- Unsupported features/configurations between the source and target. Pre-migration validation helps you verify if the migration setup is intact to perform a successful migration. Checks are done against a rule set and any potential problems along with the remedial actions are shown to take corrective measures. ### How to use pre-migration validation?+ A new parameter called **Migration option** is introduced while creating a migration. You can pick any of the following options-* **Validate** - Use this option to check your server and database readiness for migration to the target. **This option will not start data migration and will not require any downtime to your servers.** -The result of the Validate option can be - - **Succeeded** - No issues were found and you can plan for the migration - - **Failed** - There were errors found during validation, which can fail the migration. Go through the list of errors along with their suggested workarounds and take corrective measures before planning the migration. - - **Warning** - Warnings are informative messages that you need to keep in mind while planning the migration. +- **Validate** - Use this option to check your server and database readiness for migration to the target. **This option will not start data migration and will not require any downtime to your servers.** +The result of the Validate option can be + - **Succeeded** - No issues were found and you can plan for the migration + - **Failed** - There were errors found during validation, which can fail the migration. Go through the list of errors along with their suggested workarounds and take corrective measures before planning the migration. + - **Warning** - Warnings are informative messages that you need to keep in mind while planning the migration. - Plan your migrations better by performing pre-migration validations in advance to know the potential issues you might encounter while performing migrations. + Plan your migrations better by performing pre-migration validations in advance to know the potential issues you might encounter while performing migrations. 
-* **Migrate** - Use this option to kickstart the migration without going through validation process. It's recommended to perform validation before triggering a migration to increase the chances for a successful migration. Once validation is done, you can use this option to start the migration process. +- **Migrate** - Use this option to kickstart the migration without going through validation process. It's recommended to perform validation before triggering a migration to increase the chances for a successful migration. Once validation is done, you can use this option to start the migration process. -* **Validate and Migrate** - In this option, validations are performed and then migration gets triggered if all checks are in **succeeded** or **warning** state. Validation failures don't start the migration between source and target servers. +- **Validate and Migrate** - In this option, validations are performed and then migration gets triggered if all checks are in **succeeded** or **warning** state. Validation failures don't start the migration between source and target servers. We recommend customers to use pre-migration validations in the following way: 1) Choose **Validate** option and run pre-migration validation on an advanced date of your planned migration. We recommend customers to use pre-migration validations in the following way: > Pre-migration validations is generally available in all public regions. Support for CLI will be introduced at a later point in time. ## Migration of users/roles, ownerships and privileges+ Along with data migration, the tool automatically provides the following built-in capabilities: - Migration of users/roles present on your source server to the target server. - Migration of ownership of all the database objects on your source server to the target server. - Migration of permissions of database objects on your source server such as GRANTS/REVOKES to the target server. > [!NOTE] -> This functionality is enabled by default for flexible servers in all Azure public regions. It will be enabled for flexible servers in gov clouds and China regions soon. +> This functionality is enabled by default for flexible servers in all Azure public regions. It will be enabled for flexible servers in gov clouds and China regions soon. ## Limitations - You can have only one active migration or validation to your Flexible server.-- The source and target server must be in the same Azure region. Cross region migrations are enabled only for servers in India, China and UAE as Flexible server may not be available in all regions within these geographies.+- The source and target server must be in the same Azure region. Cross region migrations are enabled only for servers in India, China and UAE as Flexible server might not be available in all regions within these geographies. - The tool takes care of the migration of data and schema. It doesn't migrate managed service features such as server parameters, connection security details and firewall rules. - The migration tool shows the number of tables copied from source to target server. You need to manually validate the data in target server post migration.-- The tool migrates only user databases. System databases like azure_sys, azure_maintenance or template databases such as template0, template1 will not be migrated. +- The tool migrates only user databases. System databases like azure_sys, azure_maintenance or template databases such as template0, template1 will not be migrated. 
> [!NOTE] > The following limitations are applicable only for flexible servers on which the migration of users/roles functionality is enabled. - Azure Active Directory users present on your source server are not migrated to the target server. To mitigate this limitation, manually create all Azure Active Directory users on your target server using this [link](../flexible-server/how-to-manage-azure-ad-users.md) before triggering a migration. If Azure Active Directory users aren't created on target server, migration fails with appropriate error message. - If the target flexible server uses SCRAM-SHA-256 password encryption method, connection to flexible server using the users/roles on single server fails since the passwords are encrypted using md5 algorithm. To mitigate this limitation, choose the option **MD5** for **password_encryption** server parameter on your flexible server.+ ## Experience Get started with the Single to Flex migration tool by using any of the following methods: Get started with the Single to Flex migration tool by using any of the following Here, we go through the phases of an overall database migration journey, with guidance on how to use Single to Flex migration tool in the process. -### Getting started +### Get started #### Application compatibility The following table summarizes the list of networking scenarios supported by the ##### Allowlist required extensions -The migration tool automatically allowlists all extensions used by your single server databases on your flexible server except for the ones whose libraries need to be loaded at the server start. +The migration tool automatically allowlists all extensions used by your single server databases on your flexible server except for the ones whose libraries need to be loaded at the server start. Use the following select command to list all the extensions used on your Single server databases. If yes, then follow the below steps. Go to the server parameters blade and search for **shared_preload_libraries** parameter. PG_CRON and PG_STAT_STATEMENTS extensions are selected by default. Select the list of above extensions used by your single server databases to this parameter and select Save. For the changes to take effect, server restart would be required. Use the **Save and Restart** option and wait for the flexible server to restart. Use the **Save and Restart** option and wait for the flexible server to restart. > If TIMESCALEDB, POSTGIS_TOPOLOGY, POSTGIS_TIGER_GEOCODER, POSTGRES_FDW or PG_PARTMAN extensions are used in your single server, please raise a support request since the migration tool does not handle these extensions. ##### Create Azure Active Directory users on target server+ > [!NOTE] > This pre-requisite is applicable only for flexible servers on which the migration of users/roles functionality is enabled. Execute the following query on your source server to get the list of Azure Active Directory users.+ ```sql SELECT r.rolname- FROM - pg_roles r - JOIN pg_auth_members am ON r.oid = am.member - JOIN pg_roles m ON am.roleid = m.oid - WHERE - m.rolname IN ( - 'azure_ad_admin', - 'azure_ad_user', - 'azure_ad_mfa' - ); -``` + FROM + pg_roles r + JOIN pg_auth_members am ON r.oid = am.member + JOIN pg_roles m ON am.roleid = m.oid + WHERE + m.rolname IN ( + 'azure_ad_admin', + 'azure_ad_user', + 'azure_ad_mfa' + ); +``` Create the Azure Active Directory users on your target flexible server using this [link](../flexible-server/how-to-manage-azure-ad-users.md) before creating a migration. 
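A quick way to confirm this prerequisite is to re-run a role query on the target flexible server after creating the users; the following is a minimal sketch with hypothetical role names, which you would replace with the output of the query above.

```sql
-- Run on the target flexible server after creating the Azure Active Directory users.
-- Replace the role names with the ones returned by the query on your source server.
SELECT rolname
FROM pg_roles
WHERE rolname IN ('aad_admin@contoso.com', 'aad_user@contoso.com');
```

If the query returns fewer rows than expected, create the missing users before triggering the migration.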
### Migration Trigger the migration of your production databases using the **Migrate** or **Va - Make changes to your application to point the connection strings to flexible server. - Monitor the database performance closely to see if it requires performance tuning. -## Next steps +## Related content - [Migrate to Flexible Server by using the Azure portal](../migrate/how-to-migrate-single-to-flexible-portal.md) - [Migrate to Flexible Server by using the Azure CLI](../migrate/how-to-migrate-single-to-flexible-cli.md) |
postgresql | How To Migrate Single To Flexible Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-portal.md | The migration tool comes with a simple, wizard-based experience on the Azure por :::image type="content" source="./media/concepts-single-to-flexible/flexible-overview.png" alt-text="Screenshot of the flexible Overview page." lightbox="./media/concepts-single-to-flexible/flexible-overview.png"::: -4. Select the **Migrate from Single Server** button to start a migration from Single Server to Flexible Server. If this is the first time you're using the migration tool, an empty grid appears with a prompt to begin your first migration. +4. Select the **Create** button to start a migration from Single Server to Flexible Server. If this is the first time you're using the migration tool, an empty grid appears with a prompt to begin your first migration. :::image type="content" source="./media/concepts-single-to-flexible/flexible-migration-grid.png" alt-text="Screenshot of the Migration tab in flexible." lightbox="./media/concepts-single-to-flexible/flexible-migration-grid.png"::: The first tab is **Setup**. Just in case you missed it, allowlist necessary exte **Migration name** is the unique identifier for each migration to this Flexible Server target. This field accepts only alphanumeric characters and doesn't accept any special characters except a hyphen (-). The name can't start with a hyphen and should be unique for a target server. No two migrations to the same Flexible Server target can have the same name. +**Source server type** indicates the source. In this case, it is Azure Database for PostgreSQL Single server + **Migration Option** gives you the option to perform validations before triggering a migration. You can pick any of the following options - **Validate** - Checks your server and database readiness for migration to the target. - **Migrate** - Skips validations and starts migrations. It's always a good practice to choose **Validate** or **Validate and Migrate** o If **Online** migration is selected, it requires Logical replication to be turned on in the source Single server. If it's not turned on, the migration tool automatically turns on logical replication at the source Single server. Replication can also be set up manually under **Replication** tab in the Single server side pane by setting the Azure replication support level to **Logical**. Either approach restarts the source single server. -Select the **Next** button. +Select the **Next : Connect to Source** button. ### Source tab The **Source** tab prompts you to give details related to the Single Server that is the source of the databases. - After you make the **Subscription** and **Resource Group** selections, the dropdown list for server names shows Single Servers under that resource group across regions. Select the source that you want to migrate databases from. You can migrate databases from a Single Server to a target Flexible Server in the same region. Cross region migrations are enabled only for servers in India, China and UAE. After you choose the Single Server source, the **Location**, **PostgreSQL version**, and **Server admin login name** boxes are populated automatically. The server admin login name is the admin username used to create the Single Server. In the **Password** box, enter the password for that admin user. The migration tool performs the migration of single server databases as the admin user. 
-After filling out all the fields, select the **Next** button. +After filling out all the fields, click the **Connect to source** link. This validates that the source server details entered are correct and source server is reachable. +++Select the **Next : Select migration target** button to continue. ### Target tab -The **Target** tab displays metadata for the Flexible Server target, like subscription name, resource group, server name, location, and PostgreSQL version. +The **Target** tab displays metadata for the Flexible Server target, such as subscription name, resource group, server name, location, and PostgreSQL version. :::image type="content" source="./media/concepts-single-to-flexible/flexible-migration-target.png" alt-text="Screenshot of target database server details." lightbox="./media/concepts-single-to-flexible/flexible-migration-target.png"::: -For **Server admin login name**, the tab displays the admin username used during the creation of the Flexible Server target. Enter the corresponding password for the admin user. +For **Server admin login name**, the tab displays the admin username used during the creation of the Flexible Server target. Enter the corresponding password for the admin user. After filling out the password, click the **Connect to target** link. This validates that the target server details entered are correct and target server is reachable. -Select the **Next** button. +Click the **Next** button to select the databases to migrate. ### Select Database(s) for Migration tab -Under this tab, there's a list of user databases inside the Single Server. You can select and migrate up to eight databases in a single migration attempt. If there are more than eight user databases, the migration process is repeated between the source and target servers for the next set of databases. +Under this tab, there's a list of user databases inside the Single Server. You can select and migrate up to eight databases in a single migration attempt. If there are more than eight user databases, the migration process is repeated between the source and target servers for the next set of databases. By default, selected databases with the same name on the target are overwritten. :::image type="content" source="./media/concepts-single-to-flexible/flexible-migration-database.png" alt-text="Screenshot of Databases to migrate." lightbox="./media/concepts-single-to-flexible/flexible-migration-database.png"::: >[!NOTE] > The tool migrates only user databases. System databases or template databases such as template0, template1 will not be migrated. -### Review +Click the **Next** button to review the details. ++### Summary -The **Review** tab summarizes all the details for creating the validation or migration. Review the details and click on the start button. +The **Summary** tab summarizes all the details for creating the validation or migration. Review the details and click on the start button. :::image type="content" source="./media/concepts-single-to-flexible/flexible-migration-review.png" alt-text="Screenshot of details to review for the migration." lightbox="./media/concepts-single-to-flexible/flexible-migration-review.png"::: After you click the start button, a notification appears in a few seconds to say :::image type="content" source="./media/concepts-single-to-flexible/flexible-migration-monitor.png" alt-text="Screenshot of recently created migration details." 
lightbox="./media/concepts-single-to-flexible/flexible-migration-monitor.png"::: -The grid that displays the migrations has these columns: **Name**, **Status**, **Source DB server**, **Resource group**, **Region**, **Databases**, and **Start time**. The entries are displayed in the descending order of the start time with the most recent entry on the top. +The grid that displays the migrations has these columns: **Name**, **Status**, **Migration type**, **Migration mode**, **Source server**, **Source server type**, **Databases**, **Start time** and **Duration**. The entries are displayed in the descending order of the start time with the most recent entry on the top. You can use the refresh button to refresh the status of the validation or migration. You can also select the migration name in the grid to see the associated details. The validation moves to the **Succeeded** state if all validations are either in :::image type="content" source="./media/concepts-single-to-flexible/validation-successful.png" alt-text="Screenshot of the validation grid." lightbox="./media/concepts-single-to-flexible/validation-successful.png"::: -The validation grid has the following columns -- **Finding** - Represents the validation rules that are used to check readiness for migration.-- **Finding Status** - Represents the result for each rule and can have any of the three values+The validation grid has the +- **Validation details for instance** and **Validation details for databases** sections which represent the validation rules that are used to check readiness for migration. +- **Validation Status** - Represents the result for each rule and can have any of the three values - **Succeeded** - If no errors were found. - **Failed** - If there are validation errors. - **Warning** - If there are validation warnings.-- **Impacted Object** - Represents the object name for which the errors or warnings are raised. -- **Object Type** - This can have the value **Database** for database level validations and **Instance** for server level validations.+- **Duration** - Time taken for the Validation operation. +- **Start and End time** - Start and end time of the validation operation in UTC. -The validation moves to **Validation Failed** state if there are any errors in the validation. Click on the **Finding** in the grid whose status is **Failed** and a fan-out pane gives the details and the corrective action you should take to avoid this error. +The **Validation status** moves to **Failed** state if there are any errors in the validation. Click on the **Validation name** or **Database name** validation that has failed and a fan-out pane gives the details and the corrective action you should take to avoid this error. :::image type="content" source="./media/concepts-single-to-flexible/validation-failed.png" alt-text="Screenshot of the validation grid with failed status." lightbox="./media/concepts-single-to-flexible/validation-failed.png"::: In this option, validations are performed first before migration starts. After t - If validation has errors, the migration moves into a **Failed** state. - If validation completes without any error, the migration starts and the workflow will move into the sub state of **Migrating Data**. -You can see the results of validation under the **Validation** tab and monitor the migration under the **Migration** tab. +You can see the results of **Validate and Migrate** once the operation is complete. 
:::image type="content" source="./media/concepts-single-to-flexible/validate-and-migrate-1.png" alt-text="Screenshot showing validations tab in details page." lightbox="./media/concepts-single-to-flexible/validate-and-migrate-1.png"::: - ### Online migration > [!NOTE] -> Support for **Online** migrations is currently available in UK South, South Africa North, UAE North, and all regions across Asia and Australia. +> Support for **Online** migrations is currently available in Central US, France Central, Germany West Central, North Central US, South Central US, North Europe, all West US regions, UK South, South Africa North, UAE North, and all regions across Asia and Australia. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image. + In case of both **Migrate** as well as **Validate and Migrate**, completion of the Online migration requires another step - a Cutover action is required from the user. After the copy/clone of the base data is complete, the migration moves to `WaitingForUserAction` state and `WaitingForCutoverTrigger` substate. In this state, user can trigger cutover from the portal by selecting the migration. |
postgresql | Best Practices Migration Service Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/best-practices-migration-service-postgresql.md | + + Title: Best practices to migrate into Flexible Server +description: Best practices for a seamless migration into Azure database for PostgreSQL. +++ Last updated : 01/30/2024++++++# Best practices for seamless migration into Azure Database for PostgreSQL Preview +++This article explains common pitfalls encountered and best practices to ensure a smooth and successful migration to Azure Database for PostgreSQL. ++## Premigration validation ++As a first step in the migration, run the premigration validation before you perform a migration. You can use the **Validate** and **Validate and Migrate** options on the migration setup page. Premigration validation conducts thorough checks against a predefined rule set. The goal is to identify potential problems and provide actionable insights for remedial actions. Keep running premigration validation until it results in a **Succeeded** state. Select [premigration validations](concepts-premigration-migration-service.md) to know more. ++## Target Flexible server configuration ++During the initial base copy of data, multiple insert statements are executed on the target, which generates WALs (Write Ahead Logs). Until these WALs are archived, the logs consume storage at the target and the storage required by the database. ++To calculate the number, sign in to the source instance and execute this command for all the Database(s) to be migrated: ++`SELECT pg_size_pretty( pg_database_size('dbname') );` ++It's advisable to allocate sufficient storage on the Flexible server, equivalent to 1.25 times or 25% more storage than what is being used per the output to the command above. [Storage Autogrow](../../flexible-server/how-to-auto-grow-storage-portal.md) can also be used. ++> [!IMPORTANT] +> Storage size can't be reduced in manual configuration or Storage Autogrow. Each step in the Storage configuration spectrum doubles in size, so estimating the required storage beforehand is prudent. ++The quickstart to [Create an Azure Database for PostgreSQL flexible server using the portal](../../flexible-server/quickstart-create-server-portal.md) is an excellent place to begin. [Compute and storage options in Azure Database for PostgreSQL - Flexible Server](../../flexible-server/concepts-compute-storage.md) also gives detailed information about each server configuration. ++## Migration timeline ++Each migration has a maximum lifetime of seven days (168 hours) once it starts and will time out after seven days. You can complete your migration and application cutover once the data validation and all checks are complete to avoid the migration from timing out. In Online migrations, after the initial base copy is complete, the cutover window lasts three days (72 hours) before timing out. In Offline migrations, the applications should stop writing to the Database to prevent data loss. Similarly, for Online migration, keep traffic low throughout the migration. ++Most nonprod servers (dev, UAT, test, staging) are migrated using offline migrations. Since these servers have less data than the production servers, the migration completes fast. For production server migration, you need to know the time it would take to complete the migration to plan for it in advance. ++The time taken for a migration to complete depends on several factors. 
These include the number of databases, their size, the number of tables inside each database, the number of indexes, and the data distribution across tables. It also depends on the SKU of the target server and the IOPS available on the source instance and target server. Given the many factors that can affect the migration time, it's hard to estimate the total time for the migration to complete. The best approach would be to perform a test migration with your workload. ++The following phases are considered for calculating the total downtime to perform production server migration. ++- **Migration of PITR** - The best way to get a good estimate on the time taken to migrate your production database server would be to take a point-in-time restore of your production server and run the offline migration on this newly restored server. ++- **Migration of Buffer** - After completing the above step, you can plan for actual production migration during a time period when the application traffic is low. This migration can be planned on the same day or probably a week away. By this time, the size of the source server might have increased. Update your estimated migration time for your production server based on the amount of this increase. If the increase is significant, you can consider doing another test using the PITR server. But for most servers the size increase shouldn't be significant enough. ++- **Data Validation** ++Once the migration is completed for the production server, you need to verify if the data in the flexible server is an exact copy of the source instance. Customers can use open-source/third-party tools or can do the validation manually. Prepare the validation steps you would like to do before the actual migration. Validation can include: + +- Row count match for all the tables involved in the migration. ++- Matching counts for all the database objects (tables, sequences, extensions, procedures, indexes). ++- Comparing max or min IDs of key application-related columns. ++ > [!NOTE] + > The size of the databases isn't a reliable metric for validation. The source instance might have bloat/dead tuples, which can bump up the size of the source instance. It's completely normal to have size differences between source instances and target servers. If there's an issue in the first three steps of validation, it indicates a problem with the migration. ++- **Migration of server settings** - Any custom server parameters, firewall rules (if applicable), tags, and alerts must be manually copied from the source instance to the target. ++- **Changing connection strings** - The application should change its connection strings to the flexible server after successful validation. This activity is coordinated with the application team to change all the references of connection strings pointing to the source instance. In the flexible server, the user parameter can be used in the **user=username** format in the connection string. ++For example: psql -h **myflexserver**.postgres.database.azure.com -U user1 -d db1 ++While a migration often runs without a hitch, it's good practice to plan for contingencies if more time is required for debugging or if a migration needs to be restarted. ++## Migration speed benchmarking ++The following table shows the time it takes to perform migrations for databases of various sizes using the migration service.
The migration was performed using a flexible server with the SKU – **Standard_D4ds_v4 (4 cores, 16 GB memory, 128 GB disk, and 500 IOPS)** ++| Database size | Approximate time taken (HH:MM) | +| : | : | +| 1 GB | 00:01 | +| 5 GB | 00:03 | +| 10 GB | 00:08 | +| 50 GB | 00:35 | +| 100 GB | 01:00 | +| 500 GB | 04:00 | +| 1,000 GB | 07:00 | ++> [!NOTE] +> The above numbers give you an approximation of the time taken to complete the migration. We strongly recommend running a test migration with your workload to get a precise value for migrating your server. ++> [!IMPORTANT] +> Pick a higher SKU for your flexible server to perform faster migrations. Azure Database for PostgreSQL Flexible server supports near zero downtime compute & IOPS scaling, so the SKU can be updated with minimal downtime. You can always change the SKU to match the application needs post-migration. ++### Improve migration speed - Parallel migration of tables ++A powerful SKU is recommended for the target, as the PostgreSQL migration service runs out of a container on the Flexible server. A powerful SKU enables more tables to be migrated in parallel. You can scale the SKU back to your preferred configuration after the migration. This section contains steps to improve the migration speed in case the data distribution among the tables is heavily skewed or a more powerful SKU doesn't significantly impact the migration speed. ++If the data distribution on the source is highly skewed, with most of the data present in one table, the allocated compute for migration can't be fully utilized, and the large table becomes a bottleneck. So, we split large tables into smaller chunks, which are then migrated in parallel. This feature applies to tables with more than 10,000,000 (10 million) tuples. Splitting the table into smaller chunks is possible if one of the following conditions is satisfied. ++1. The table must have a column with a simple (not composite) primary key or unique index of type int or big int. ++ > [!NOTE] + > In the case of approaches #2 or #3, the user must carefully evaluate the implications of adding a unique index column to the source schema. Only after confirmation that adding a unique index column will not affect the application should the user go ahead with the changes. ++1. If the table doesn't have a simple primary key or unique index of type int or big int but has a column that meets the data type criteria, the column can be converted into a unique index using the command below. This command doesn't require a lock on the table. ++ ```sql + create unique index concurrently partkey_idx on <table name> (<column name>); + ``` ++1. If the table doesn't have a simple int/big int primary key or unique index or any column that meets the data type criteria, you can add such a column using [ALTER](https://www.postgresql.org/docs/current/sql-altertable.html) and drop it post-migration. Running the ALTER command requires a lock on the table. ++ ```sql + alter table <table name> add column <column name> bigserial unique; + ``` ++If any of the above conditions are satisfied, the table is migrated in multiple partitions in parallel, which should provide a marked increase in the migration speed. ++#### How it works ++- The migration service looks up the maximum and minimum integer value of the table's primary key/unique index that must be split up and migrated in parallel.
+- If the difference between the minimum and maximum value is more than 10,000,000 (10 million), then the table is split into multiple parts, and each part is migrated in parallel. ++In summary, the PostgreSQL migration service migrates a table in parallel threads and reduces the migration time if: ++- The table has a column with a simple primary key or unique index of type int or big int. +- The table has at least 10,000,000 (10 million) rows so that the difference between the minimum and maximum value of the primary key is more than 10,000,000 (10 million). +- The SKU used has idle cores, which can be used for migrating the table in parallel. ++## Vacuum bloat in the PostgreSQL database ++Over time, as data is added, updated, and deleted, PostgreSQL might accumulate dead rows and wasted storage space. This bloat can lead to increased storage requirements and decreased query performance. Vacuuming is a crucial maintenance task that helps reclaim this wasted space and ensures the database operates efficiently. Vacuuming addresses issues such as dead rows and table bloat, ensuring efficient use of storage. More importantly, it helps ensure a quicker migration, as the migration time is a function of the database size. ++PostgreSQL provides the VACUUM command to reclaim storage occupied by dead rows. The `ANALYZE` option also gathers statistics, further optimizing query planning. For tables with heavy write activity, the `VACUUM` process can be more aggressive using `VACUUM FULL`, but it requires more time to execute. ++- Standard Vacuum ++```sql +VACUUM your_table; +``` ++- Vacuum with Analyze ++```sql +VACUUM ANALYZE your_table; +``` ++- Aggressive Vacuum for Heavy Write Tables ++```sql +VACUUM FULL your_table; +``` ++In this example, replace your_table with the actual table name. The `VACUUM` command without **FULL** reclaims space efficiently, while `VACUUM ANALYZE` optimizes query planning. The `VACUUM FULL` option should be used judiciously due to its heavier performance impact. ++Some databases store large objects, such as images or documents, that can contribute to database bloat over time. The `vacuumlo` client utility that ships with PostgreSQL removes orphaned large objects from a database and is run from the shell rather than from a SQL session. ++- Vacuum Large Objects ++```bash +vacuumlo -v your_database +``` ++Regularly incorporating these vacuuming strategies ensures a well-maintained PostgreSQL database. ++## Special consideration ++Special conditions typically refer to unique circumstances, configurations, or prerequisites that you need to be aware of before proceeding with a migration. These conditions could include specific software versions, hardware requirements, or additional tools that are necessary for a successful migration. ++### Database with postgres_fdw extension ++The [postgres_fdw module](https://www.postgresql.org/docs/current/postgres-fdw.html) provides the foreign-data wrapper postgres_fdw, which can be used to access data stored in external PostgreSQL servers. If your database uses this extension, the following steps must be performed to ensure a successful migration. ++1. Temporarily remove (unlink) the foreign data wrappers on the source instance. +1. Perform the data migration of the rest using the migration service. +1. Restore the foreign data wrapper roles, users, and links on the target after migration. ++### Database with postGIS extension ++The PostGIS extension has breaking changes/compatibility issues between different versions.
If you migrate to a flexible server, the application should be checked against the newer PostGIS version to ensure that the application isn't impacted or that any necessary changes are made. The [postGIS news](https://postgis.net/news/) and [release notes](https://postgis.net/docs/release_notes.html#idm45191) are a good starting point to understand the breaking changes across versions. ++### Database connection cleanup ++Sometimes, you might encounter this error when starting a migration: ++`CL003:Target database cleanup failed in the pre-migration step. Reason: Unable to kill active connections on the target database created by other users. Please add the pg_signal_backend role to the migration user using the command 'GRANT pg_signal_backend to <migrationuser>' and try a new migration.` ++In this case, you can grant the migration user the `pg_signal_backend` role so that it can close the active connections to the database, or close the connections manually before retrying the migration, as sketched below. ++## Related content ++- [Migration service](concepts-migration-service-postgresql.md) +- [Known issues and limitations](concepts-known-issues-migration-service.md) |
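For the connection cleanup error described above, the following is a minimal sketch of both remediation paths; `migration_user` and `your_database` are placeholder names.

```sql
-- Option 1: allow the migration user to signal other users' backends,
-- as suggested by the CL003 error text.
GRANT pg_signal_backend TO migration_user;

-- Option 2: terminate the remaining connections to the target database manually.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'your_database'
  AND pid <> pg_backend_pid();
```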
postgresql | Concepts Known Issues Migration Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/concepts-known-issues-migration-service.md | + + Title: "Migration service - known issues and limitations" +description: Provides the limitations of the migration service in Azure Database for PostgreSQL +++ Last updated : 01/30/2024+++++# Known issues and limitations - migration service in Azure Database for PostgreSQL Preview +++This article describes the known issues and limitations associated with the migration service in Azure Database for PostgreSQL. ++## Common limitations ++Here are common limitations that apply to migration scenarios: ++- You can have only one active migration or validation to your Flexible server. ++- The migration service doesn't migrate users and roles. ++- The migration service shows the number of tables copied from source to target. You must manually check the data and PostgreSQL objects on the target server post-migration. ++- The migration service only migrates user databases, not system databases such as template0 and template1. ++- The migration service doesn't support moving POSTGIS, TIMESCALEDB, POSTGIS_TOPOLOGY, POSTGIS_TIGER_GEOCODER, PG_PARTMAN extensions from source to target. ++- You can't move extensions not supported by the Azure Database for PostgreSQL – Flexible server. The supported extensions are in [Extensions - Azure Database for PostgreSQL](/azure/postgresql/flexible-server/concepts-extensions). ++- User-defined collations can't be migrated into Azure Database for PostgreSQL – flexible server. ++- You can't migrate to an older version. For instance, you can't migrate from PostgreSQL 15 to Azure Database for PostgreSQL version 14. ++- The migration service only works with preferred or required SSLMODE values. ++- The migration service doesn't support superuser privileges and objects. ++- The following PostgreSQL objects can't be migrated into the PostgreSQL flexible server target: + - Create casts + - Creation of FTS parsers and FTS templates + - Users with superuser roles + - Create TYPE ++- The migration service doesn't support migration at the object level, that is, at the table level or schema level. ++- The migration service is unable to perform migration when the source database is Azure Database for PostgreSQL single server with no public access or is an on-premises/AWS using a private IP, and the target Azure Database for PostgreSQL Flexible Server is accessible only through a private endpoint. ++- Migration to burstable SKUs is not supported; databases must first be migrated to a non-burstable SKU and then scaled down if needed. ++## Related content ++- [Migration service](concepts-migration-service-postgresql.md) +- [Network setup](how-to-network-setup-migration-service.md) +- [Premigration validations](concepts-premigration-migration-service.md) |
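Because user-defined collations listed in the limitations above can't be migrated, it can help to check for them on the source before planning the migration; this is a sketch of such a check, run per database.

```sql
-- List collations defined outside the system schemas; these need to be
-- removed or handled separately before migrating.
SELECT n.nspname AS schema_name, c.collname AS collation_name
FROM pg_collation c
JOIN pg_namespace n ON n.oid = c.collnamespace
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema');
```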
postgresql | Concepts Migration Service Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/concepts-migration-service-postgresql.md | + + Title: Migration service in Azure Database for PostgreSQL +description: Concepts about migrating into Azure database for PostgreSQL - Flexible Server. +++ Last updated : 01/30/2024++++++# Migration service in Azure Database for PostgreSQL Preview +++The migration service in Azure Database for PostgreSQL simplifies the process of moving your PostgreSQL databases to Azure, offering migration options from an Azure Database for PostgreSQL single server, AWS RDS for PostgreSQL, on-premises servers, and Azure virtual machines (VMs). The migration service is designed to help you move to Azure Database for PostgreSQL - Flexible Server with ease and confidence. ++Some advantages for using the migration service include: ++- Managed migration service. +- Support for schema and data migrations. +- No complex setup. +- Simple to use portal/cli based migration experience. +- No limitations in terms of size of databases it can handle. ++For information about migration from single server to flexible server, visit [Migration tool](../concepts-single-to-flexible.md). +++## Why choose flexible server? ++Azure Database for PostgreSQL powered by the PostgreSQL community edition is available in this deployment mode: flexible server is the next-generation managed PostgreSQL service in Azure that provides maximum flexibility over your database and built-in cost-optimizations and offers several advantages over peer products. ++- **[Superior performance](../../flexible-server/overview.md)** - Flexible server runs on Linux VM that is best suited to run PostgreSQL engine. ++- **[Cost Savings](../../flexible-server/how-to-deploy-on-azure-free-account.md)** ΓÇô Flexible server allows you to stop and start an on-demand server to lower your TCO. Your compute tier billing is stopped immediately, which allows you to have significant cost savings during development and testing and for time-bound predictable production workloads. ++- **[Support for new PG versions](../../flexible-server/concepts-supported-versions.md)** - Flexible server supports all major PostgreSQL versions beginning with version 11. ++- **Minimized Latency** ΓÇô You can collocate your flexible server in the same availability zone as the application server, resulting in a minimal latency. ++- **[Connection Pooling](../../flexible-server/concepts-pgbouncer.md)** - Flexible server has a built-in connection pooling mechanism using **pgBouncer** to support thousands of active connections with low overhead. ++- **[Server Parameters](../../flexible-server/concepts-server-parameters.md)** - Flexible server offers a rich set of server parameters for configuration and tuning. ++- **[Custom Maintenance Window](../../flexible-server/concepts-maintenance.md)** - You can schedule the maintenance window of the flexible server for a specific day and time of the week. ++- **[High Availability](../../flexible-server/concepts-high-availability.md)** - Flexible server supports HA within the same availability zone and across availability zones by configuring a warm standby server in sync with the primary. ++- **[Security](../../flexible-server/concepts-security.md)** - Flexible server offers multiple layers of information protection and encryption to protect your data. ++## How to migrate to Azure Database for PostgreSQL flexible server? 
++The options you can consider migrating from the source PostgreSQL instance to the Flexible server are: ++**Offline migration** ΓÇô In an offline migration, all applications connecting to your source instance are stopped, and the database(s) are copied to a flexible server. ++**Online migration** - In an online migration, applications connecting to your source instance aren't stopped while database(s) are copied to a flexible server. The initial copy of the databases is followed by replication to keep the flexible server in sync with the source instance. A cutover is performed when the flexible server completely syncs with the source instance, resulting in minimal downtime. ++The following table gives an overview of offline and online options. ++| Option | PROs | CONs | Recommended For +||||| +| Offline | - Simple, easy, and less complex to execute.<br />- Very fewer chances of failure.<br />- No restrictions regarding database objects it can handle | Downtime to applications. | - Best for scenarios where simplicity and a high success rate are essential.<br>- Ideal for scenarios where the database can be taken offline without significant impact on business operations.<br>- Suitable for databases when the migration process can be completed within a planned maintenance window. | +| Online | - Very minimal downtime to application. <br /> - Ideal for large databases and customers having limited downtime requirements. | - Replication used in online migration has multiple [restrictions](https://www.postgresql.org/docs/current/logical-replication-restrictions.html) (for example, Primary Keys needed in all tables). <br /> - Tough and more complex to execute than offline migration. <br /> - Greater chances of failure due to the complexity of migration. <br /> - There's an impact on the source instance's storage and computing if the migration runs for a long time. The impact needs to be monitored closely during migration. | - Best suited for businesses where continuity is critical and downtime must be kept to an absolute minimum.<br>- Recommended for databases when the migration process needs to occur without interrupting ongoing operations. | ++The following table lists the various sources supported by the migration service. ++| PostgreSQL Source Type | Offline Migration | Online Migration | +||-|| +| [Azure Database for PostgreSQL ΓÇô Single server](../how-to-migrate-single-to-flexible-portal.md) | Supported | Supported | +| [AWS RDS for PostgreSQL](tutorial-migration-service-offline-aws.md) | Supported | Planned for future release | +| [On-premises](tutorial-migration-service-offline-iaas.md) | Supported | Planned for future release | +| [Azure VM](tutorial-migration-service-offline-iaas.md) | Supported | Planned for future release | ++++## Advantages of the migration service in Azure Database for PostgreSQL Over Azure DMS (Classic) ++Below are the key benefits of using this service for your PostgreSQL migrations: +- **Fully Managed Service**: The migration Service in Azure Database for PostgreSQL is a fully managed service, meaning that we handle the complexities of the migration process. +- **Comprehensive Migration**: Supports both schema and data migrations, ensuring a complete and accurate transfer of your entire database environment to Azure +- **Ease of Setup**: Designed to be user-friendly, eliminating complex setup procedures that can often be a barrier to starting a migration project. 
+- **No Data Size Constraints**: With the ability to handle databases of any size, the service surpasses the 1TB data migration limit of Azure DMS(Classic), making it suitable for all types of database migrations. +- **Addressing DMS(Classic) Limitations**: The migration service resolves many of the issues and limitations encountered with Azure DMS (Classic), leading to a more reliable migration process. +- **Interface Options**: Users can choose between a portal-based interface for an intuitive experience or a command-line interface (CLI) for automation and scripting, accommodating various user preferences. +++## Get started ++Get started with the Single to Flexible migration tool by using any of the following methods: ++- [Offline migration from on-premises or IaaS](tutorial-migration-service-offline-iaas.md) +- [Offline migration from AWS RDS for PostgreSQL](tutorial-migration-service-offline-aws.md) ++## Additional information ++The migration service is a hosted solution where we use binary called [pgcopydb](https://github.com/dimitri/pgcopydb) that provides a fast and efficient way of copying databases from the source PostgreSQL instance to the target. ++## Related content ++- [Premigration validations](concepts-premigration-migration-service.md) +- [Migrate from on-premises and Azure VMs](tutorial-migration-service-offline-iaas.md) +- [Migrate from AWS RDS for PostgreSQL](tutorial-migration-service-offline-aws.md) +- [Network setup](how-to-network-setup-migration-service.md) +- [Known issues and limitations](concepts-known-issues-migration-service.md) |
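Because the comparison table above notes that the logical replication used by online migration requires primary keys in all tables, a pre-check such as the following sketch can flag tables without one before choosing the online option; the schema filter is illustrative.

```sql
-- Find ordinary tables that have no primary key constraint.
SELECT n.nspname AS schema_name, c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND NOT EXISTS (
      SELECT 1
      FROM pg_constraint con
      WHERE con.conrelid = c.oid
        AND con.contype = 'p'
  );
```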
postgresql | Concepts Premigration Migration Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/concepts-premigration-migration-service.md | + + Title: "Migration service - premigration validations" +description: premigration validations to identify issues before running migrations +++ Last updated : 01/30/2024+++++# Premigration validations for the migrations service in Azure Database for PostgreSQL Preview +++Premigration validation is a set of rules that involves assessing and verifying the readiness of a source database system for migration to Azure Database for PostgreSQL. This process identifies and addresses potential issues affecting the database's migration or post-migration operation. ++## How do you use the premigration validation feature? ++To use premigration validation when migrating to Azure Database for PostgreSQL - flexible server, you can select the appropriate migration option either through the Azure portal during the setup or by specifying the `--migration-option` parameter in the Azure CLI when creating a migration. Here's how to do it in both methods: ++### Use the Azure portal ++- Navigate to the migration tab within the Azure Database for PostgreSQL. ++- Select the **Create** button ++- In the Setup page, choose the migration option that includes validation. This could be labeled as **validate**, **validate and migrate** ++ :::image type="content" source="media\concepts-premigration-migration-service\premigration-option.png" alt-text="Screenshot of premigration option to start migration." lightbox="media\concepts-premigration-migration-service\premigration-option.png"::: ++### Use Azure CLI ++- Open your command-line interface. ++- Ensure you have the Azure CLI installed and you're logged into your Azure account using az sign-in. ++- The version should be at least 2.56.0 or above to use the migration option. ++Construct your migration task creation command with the Azure CLI. ++```bash +az postgres flexible-server migration create --subscription <subscription ID> --resource-group <Resource group Name> --name <Flexible server Name> --migration-name <Unique migration ID> --migration-option ValidateAndMigrate --properties "Path of the JSON File" --migration-mode offline +``` ++Include the `--migration-option` parameter followed by the option validate to perform only the premigration **Validate**, **Migrate**, or **ValidateAndMigrate** to perform validation and then proceed with the migration if the validation is successful. ++## Pre-migration validation options ++You can pick any of the following options. ++- **Validate** - Use this option to check your server and database readiness for migration to the target. **This option will not start data migration and will not require any server downtime.** + - Plan your migrations better by performing premigration validations in advance to know the potential issues you might encounter while performing migrations. ++- **Migrate** - Use this option to kickstart the migration without going through a validation process. Perform validation before triggering a migration to increase the chances of success. Once validation is done, you can use this option to start the migration process. ++- **ValidateandMigrate** - This option performs validations, and migration gets triggered if all checks are in the **succeeded** or **warning** state. Validation failures don't start the migration between source and target servers. 
++We recommend that customers use premigration validations to identify issues before running migrations. This helps you to plan your migrations better and avoid any surprises during the migration process. ++1. Choose the **Validate** option and run premigration validation on an advanced date of your planned migration. ++1. Analyze the output and take any remedial actions for any errors. ++1. Rerun Step 1 until the validation is successful. ++1. Start the migration using the **Validate and Migrate** option on the planned date and time. ++## Validation states ++The result post running the validated option can be: ++- **Succeeded** - No issues were found, and you can plan for the migration +- **Failed** - There were errors found during validation, which can cause the migration to fail. Review the list of errors and their suggested workarounds and take corrective measures before planning the migration. +- **Warning** - Warnings are informative messages you must remember while planning the migration. +++## Related content ++- [Migration service](concepts-migration-service-postgresql.md) +- [Known issues and limitations](concepts-known-issues-migration-service.md) +- [Network setup](how-to-network-setup-migration-service.md) |
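To check which of these states a run ended in without opening the portal, you can query the migration from the CLI; this sketch reuses the `show` command that appears in the AWS tutorial later in this log, with illustrative placeholders.

```bash
# Prints the migration resource, including its current state and substate.
az postgres flexible-server migration show \
    --subscription "<subscription-id>" \
    --resource-group "<resource-group>" \
    --name "<flexible-server-name>" \
    --migration-name "<migration-name>"
```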
postgresql | How To Network Setup Migration Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/how-to-network-setup-migration-service.md | + + Title: "Migration service - networking scenarios" ++description: Network scenarios for connecting source and target +++ Last updated : 01/30/2024+++++# Network guide for migration service in Azure Database for PostgreSQL Preview +++This document outlines various scenarios for connecting a source database to an Azure Database for PostgreSQL using the migration service. Each scenario presents different networking requirements and configurations to establish a successful connection for migration. Specific details vary based on the actual network setup and requirements of the source and target environments. ++## Scenario 1: On-premises source to Azure Database for PostgreSQL with public access ++**Networking Steps:** ++- The source database server must have a public IP address. +- Configure the firewall to allow outbound connections on the PostgreSQL port (default 5432). +- Ensure the source database server is accessible over the internet. +- Verify the network configuration by testing connectivity from the target Azure Database for PostgreSQL to the source database, confirming that the migration service can access the source data. ++## Scenario 2: Private IP on-premises source to virtual network-Integrated Azure Database for PostgreSQL via Express Route/IPSec VPN +++**Networking Steps:** ++- Set up a Site-to-Site VPN or ExpressRoute for a secure, reliable connection between the on-premises network and Azure. +- Configure Azure's Virtual Network (virtual network) to allow access from the on-premises IP range. +- Set up Network Security Group (NSG) rules to allow traffic on the PostgreSQL port (default 5432) from the on-premises network. +- Verify the network configuration by testing connectivity from the target Azure Database for PostgreSQL to the source database, confirming that the migration service can access the source data. ++## Scenario 3: AWS RDS for PostgreSQL to Azure Database for PostgreSQL +++The source database in another cloud provider (AWS) must have a public IP or a direct connection to Azure. ++**Networking Steps:** ++- **Public Access:** + - If your AWS RDS instance isn't publicly accessible, you can modify the instance to allow connections from Azure. This can be done through the AWS Management Console by changing the Publicly Accessible setting to Yes. + - In the AWS RDS security group, add an inbound rule to allow traffic from the Azure Database for PostgreSQL's public IP address/domain. ++- **Private Access** + - Establish a secure connection using express route, or a VPN from AWS to Azure. + - In the AWS RDS security group, add an inbound rule to allow traffic from the Azure Database for PostgreSQL's public IP address/domain or the range of IP addresses in the Azure virtual network on the PostgreSQL port (default 5432). + - Create an Azure Virtual Network (virtual network) where your Azure Database for PostgreSQL resides. Configure the virtual network's Network Security Group (NSG) to allow outbound connections to the AWS RDS instance's IP address on the PostgreSQL port. + - Set up NSG rules in Azure to permit incoming connections from the cloud provider, AWS RDS IP range. + - Test the connectivity between AWS RDS and Azure Database for PostgreSQL to ensure no network issues. 
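For the connectivity checks called out in the scenarios above, a quick test from the network that hosts the target can look like the following sketch; host names and user names are placeholders.

```bash
# Check that the source answers on the PostgreSQL port (default 5432).
pg_isready -h <source-host-or-ip> -p 5432

# Confirm credentials and TLS negotiation end to end.
psql "host=<source-host-or-ip> port=5432 dbname=postgres user=<migration-user> sslmode=require" \
     -c "SELECT version();"
```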
++## Scenario 4: Azure VM to Azure Database for PostgreSQL (different virtual networks) ++This scenario describes connectivity between an Azure VM and an Azure Database for PostgreSQL located in different virtual networks. Virtual network peering and appropriate NSG rules are required to facilitate traffic between the VNets. +++**Networking Steps:** ++- Set up virtual network peering between the two VNets to enable direct network connectivity. +- Configure NSG rules to allow traffic between the VNets on the PostgreSQL port. ++## Scenario 5: Azure VM to Azure Database for PostgreSQL (same virtual network) ++When an Azure VM and Azure Database for PostgreSQL are within the same virtual network, the configuration is straightforward. NSG rules should be set to allow internal traffic on the PostgreSQL port, with no additional firewall rules necessary for the Azure Database for PostgreSQL since the traffic remains within the VNet. +++**Networking Steps:** ++- Ensure that the VM and the PostgreSQL server are in the same virtual network. +- Configure NSG rules to allow traffic within the virtual network on the PostgreSQL port. +- No other firewall rules are needed for the Azure Database for PostgreSQL since the traffic is internal to the virtual network. ++## Resources for networking setup ++- To establish an **ExpressRoute** connection, refer to the [Azure ExpressRoute Overview](/azure/expressroute/expressroute-introduction). +- For setting up an **IPsec VPN**, consult the guide on [Azure Point-to-Site VPN connections](/azure/vpn-gateway/point-to-site-about). +- For virtual network peering, see [Azure Virtual Network peering](/azure/virtual-network/virtual-network-peering-overview). ++## Related content ++- [Migration service](concepts-migration-service-postgresql.md) +- [Known issues and limitations](concepts-known-issues-migration-service.md) +- [Premigration validations](concepts-premigration-migration-service.md) |
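The NSG rules mentioned in scenarios 4 and 5 can also be scripted; this is a rough sketch using the Azure CLI with assumed resource names and a simple intra-VNet allow rule. Adjust the priority, prefixes, and rule name for your environment.

```bash
# Allow PostgreSQL traffic (port 5432) from within the virtual network.
az network nsg rule create \
    --resource-group "<resource-group>" \
    --nsg-name "<nsg-name>" \
    --name AllowPostgresWithinVnet \
    --priority 1000 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes VirtualNetwork \
    --destination-port-ranges 5432
```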
postgresql | Tutorial Migration Service Offline Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/tutorial-migration-service-offline-aws.md | + + Title: "Tutorial: Offline migration from AWS RDS using the migration service with the Azure portal and Azure CLI" +description: "Learn to migrate seamlessly from AWS RDS to Azure Database for PostgreSQL - Flexible Server using the new migration service in Azure, simplifying the transition while ensuring data integrity and efficient deployment." +++ Last updated : 01/30/2024++++++# Tutorial: Offline migration to Azure Database for PostgreSQL from AWS RDS PostgreSQL using migration service Preview +++This tutorial guides you in migrating a PostgreSQL instance from your AWS RDS to Azure Database for a PostgreSQL flexible server using the Azure portal and Azure CLI. ++The migration service in Azure Database for PostgreSQL is a fully managed service that's integrated into the Azure portal and Azure CLI. It's designed to simplify your migration journey to Azure Database for PostgreSQL flexible server. ++In this document, you learn: +> [!div class="checklist"] +> - Prerequisites +> - Configure the migration task +> - Monitor the migration +> - Post migration +++#### [Portal](#tab/portal) ++## Configure the migration task ++The migration service comes with a simple, wizard-based experience on the Azure portal. ++1. Open your web browser and go to the [portal](https://portal.azure.com/). Enter your credentials to sign in. The default view is your service dashboard. ++1. Go to your Azure Database for the PostgreSQL flexible server. ++1. In the **Overview** tab of the flexible server, on the left menu, scroll down to **Migration** and select it. ++ :::image type="content" source="media\tutorial-migration-service-offline-iaas\offline-portal-select-migration-pane.png" alt-text="Screenshot of the migration selection." lightbox="media\tutorial-migration-service-offline-iaas\offline-portal-select-migration-pane.png"::: ++1. Select the **Create** button to migrate from AWS RDS to a flexible server. ++ > [!NOTE] + > The first time you use the migration service, an empty grid appears with a prompt to begin your first migration. ++ If migrations to your flexible server target have already been created, the grid now contains information about attempted migrations. ++1. Select the **Create** button to go through a wizard-based series of tabs to perform a migration. ++ :::image type="content" source="media\tutorial-migration-service-offline-iaas\portal-offline-create-migration.png" alt-text="Screenshot of the create migration page." lightbox="media\tutorial-migration-service-offline-iaas\portal-offline-create-migration.png"::: ++## Setup ++The first tab is the setup tab. ++The user needs to provide multiple details related to the migration like the migration name, source server type, option, and mode. ++- **Migration name** is the unique identifier for each migration to this Flexible Server target. This field accepts only alphanumeric characters and doesn't accept any special characters except a hyphen (-). The name can't start with a hyphen and should be unique for a target server. No two migrations to the same Flexible Server target can have the same name. ++- **Source Server Type** - Depending on your PostgreSQL source, you can select AWS RDS for PostgreSQL. ++- **Migration Option** - Allows you to perform validations before triggering a migration. 
You can pick any of the following options: + - **Validate** - Checks your server and database readiness for migration to the target. + - **Migrate** - Skips validations and starts migrations. + - **Validate and Migrate** - Performs validation before triggering a migration. Migration gets triggered if there are no validation failures. + - Choosing the **Validate** or **Validate and Migrate** option is always a good practice to perform premigration validations before running the migration. ++To learn more about the premigration validation, visit [premigration](concepts-premigration-migration-service.md). ++- **Migration mode** allows you to pick the mode for the migration. **Offline** is the default option. ++Select the **Next: Connect to source** button. +++## Connect to the source ++The **Connect to Source** tab prompts you to give details related to the source selected in the **Setup Tab**, which is the source of the databases. ++- **Server Name** - Provide the hostname or the IP address of the source PostgreSQL instance ++- **Port** - Port number of the source server ++- **Server admin login name** - Username of the source PostgreSQL server ++- **Password** - Password of the source PostgreSQL server ++- **SSL Mode** - Supported values are preferred and required. When the SSL at the source PostgreSQL server is OFF, use SSLMODE=prefer. If the SSL at the source server is ON, use SSLMODE=require. SSL values can be determined in the postgresql.conf file. ++- **Test Connection** - Performs the connectivity test between target and source. Once the connection is successful, users can go ahead with the next step. Otherwise, they need to identify the networking issues between the target and source and verify the username/password for the source. Test connection takes a few minutes to establish a connection between the target and source. ++After the successful test connection, select the **Next: Select Migration target** button. +++## Connect to the target ++The **Select migration target** tab displays metadata for the Flexible Server target, like subscription name, resource group, server name, location, and PostgreSQL version. ++- **Admin username** - Admin username of the target PostgreSQL server ++- **Password** - Password of the target PostgreSQL server ++- **Test Connection** - Performs the connectivity test between target and source. Once the connection is successful, users can proceed with the next step. Otherwise, they need to identify the networking issues between the target and the source and verify the username/password for the target. Test connection takes a few minutes to establish a connection between the target and source. ++After the successful test connection, select the **Next: Select Database(s) for Migration** button. +++## Select databases for migration ++Under the **Select database for migration** tab, you can choose a list of user databases to migrate from your source PostgreSQL server. +After selecting the databases, select the **Next: Summary** button. +++## Summary ++The **Summary** tab summarizes all the source and target details for creating the validation or migration. Review the details and select the **Start Validation and Migration** button. +++## Monitor the migration ++After you select the **Start Validation and Migration** button, a notification appears in a few seconds to say that the validation or migration creation is successful. You're redirected automatically to the **Migration** page of flexible server.
The entry is in the **InProgress** state and **PerformingPreRequisiteSteps** substate. The workflow takes 2-3 minutes to set up the migration infrastructure and check network connections. +++The grid that displays the migrations has these columns: **Name**, **Status**, **Migration mode**, **Migration type**, **Source server**, **Source server type**, **Databases**, **Duration** and **Start time**. The entries are displayed in descending order of the start time, with the most recent entry on top. You can use the refresh button to refresh the status of the validation or migration run. ++## Migration details ++Select the migration name in the grid to see the associated details. ++In the **Setup** tab, we have selected the migration option as **Validate and Migrate**. In this scenario, validations are performed first before migration starts. After the **PerformingPreRequisiteSteps** substate is completed, the workflow moves into the substate of **Validation in Progress**. ++- If validation has errors, the migration moves into a **Failed** state. ++- If validation is complete without any error, the migration starts, and the workflow moves into the substate of **Migrating Data**. ++Validation details are available at the instance and database level. ++- **Validation at Instance level** + - Contains validations related to the connectivity check, the source version (PostgreSQL version >= 9.5), and the server parameter check (whether the required extensions are enabled in the server parameters of the Azure Database for PostgreSQL flexible server). ++- **Validation at Database level** + - Contains validations of the individual databases related to extension and collation support in Azure Database for PostgreSQL flexible server. ++You can see the **validation** and the **migration** status on the migration details page. +++Possible migration states include: ++- **InProgress**: The migration infrastructure setup is underway, or the actual data migration is in progress. +- **Canceled**: The migration is canceled or deleted. +- **Failed**: The migration has failed. +- **Validation Failed**: The validation has failed. +- **Succeeded**: The migration has succeeded and is complete. +- **WaitingForUserAction**: Applicable only for online migration. Waiting for user action to perform cutover. ++Possible migration substates include: ++- **PerformingPreRequisiteSteps**: Infrastructure setup is underway for data migration. +- **Validation in Progress**: Validation is in progress. +- **MigratingData**: Data migration is in progress. +- **CompletingMigration**: Migration is in the final stages of completion. +- **Completed**: Migration has been completed. +- **Failed**: The migration failed. ++Possible validation substates include: ++- **Failed**: Validation failed. +- **Succeeded**: Validation succeeded. +- **Warning**: Validation completed with warnings. Warnings are informative messages that you must keep in mind while planning the migration. ++## Cancel the migration ++You can cancel any ongoing validations or migrations. The workflow must be in the **InProgress** state to be canceled. You can't cancel a validation or migration that's in the **Succeeded** or **Failed** state. ++- Canceling a validation stops further validation activity, and the validation moves to a **Canceled** state. +- Canceling a migration stops further migration activity on your target server and moves to a **Canceled** state. It doesn't drop or roll back any changes on your target server. 
Be sure to drop the databases on your target server involved in a canceled migration. ++#### [CLI](#tab/cli) ++## End-to-end flow tutorial ++To begin migrating using the Azure CLI, you need to install the Azure CLI on your local machine. +++## Connect to the source ++- In this tutorial, the source AWS RDS for PostgreSQL version is 13.13. ++- For this tutorial, we're going to migrate "ticketdb," "inventorydb," and "timedb" into Azure Database for PostgreSQL flexible server. +++## Perform migration using CLI ++- Open the command prompt and sign in to Azure using the `az login` command. ++ :::image type="content" source="media\tutorial-migration-service-offline-iaas\success-az-login-cli.png" alt-text="Screenshot of the az success sign in." lightbox="media\tutorial-migration-service-offline-iaas\success-az-login-cli.png"::: ++- Replace the placeholders `<< >>` in the following JSON and save the file on the local machine, where the CLI is invoked, as `<<filename>>.json`. In this tutorial, we have saved the file in C:\migration-CLI\migration_body.json ++```json
+{
+"properties": {
+"SourceDBServerResourceId": "<<source hostname or IP address>>:<<port>>@<<username>>",
+ "SecretParameters": {
+ "AdminCredentials": {
+ "SourceServerPassword": "<<Source Password>>",
+ "TargetServerPassword": "<<Target Password>>"
+ }
+ },
+ "targetServerUserName":"<<Target username>>",
+ "DBsToMigrate": [
+ "<<comma separated list of databases like - "ticketdb","timedb","inventorydb">>"
+ ],
+ "OverwriteDBsInTarget": "true",
+ "MigrationMode": "Offline",
+ "sourceType": "AWS",
+ "sslMode": "Require"
+ }
+}
+``` ++- Run the following command to check if any migrations are running. The migration name is unique across the migrations within the Azure Database for PostgreSQL flexible server target. ++ ```bash
 + az postgres flexible-server migration list --subscription <<subscription ID>> --resource-group <<resource group name>> --name <<Name of the Flexible Server>> --filter All
 + ``` ++ :::image type="content" source="media\tutorial-migration-service-offline-iaas\list-CLI.png" alt-text="Screenshot of list the migration runs in CLI." lightbox="media\tutorial-migration-service-offline-iaas\list-CLI.png"::: ++- Because no migrations have been performed yet, start a new migration by running the following command: ++ ```bash
 + az postgres flexible-server migration create --subscription <<subscription ID>> --resource-group <<resource group name>> --name <<Name of the Flexible Server>> --migration-name <<Unique Migration Name>> --migration-option ValidateAndMigrate --properties "C:\migration-cli\migration_body.json"
 + ``` ++- Run the following command to check the status of the migration initiated in the previous step. You can check the status of the migration by providing the migration name: ++ ```bash
 + az postgres flexible-server migration show --subscription <<subscription ID>> --resource-group <<resource group name>> --name <<Name of the Flexible Server>> --migration-name <<Migration ID>>
 + ``` ++- The status of the migration progress is shown in the CLI. ++ :::image type="content" source="media\tutorial-migration-service-offline-iaas\status-migration-cli-aws.png" alt-text="Screenshot of status migration CLI." lightbox="media\tutorial-migration-service-offline-iaas\status-migration-cli-aws.png"::: ++- You can also see the status in the Azure Database for PostgreSQL flexible server portal. 
++ :::image type="content" source="media\tutorial-migration-service-offline-iaas\status-migration-portal-aws.png" alt-text="Screenshot of status migration portal." lightbox="media\tutorial-migration-service-offline-iaas\status-migration-portal-aws.png"::: +++## Post migration ++After the database migration completes, you need to manually validate the data between the source and target and verify that all the objects in the target database are successfully created. ++After migration, you can perform the following tasks: ++- Verify the data on your flexible server and ensure it's an exact copy of the source instance (a sample check is shown below). ++- Post verification, enable the high availability option on your flexible server as needed. ++- Change the SKU of the flexible server to match the application needs. This change needs a database server restart. ++- If you change any server parameters from their default values in the source instance, copy those server parameter values to the flexible server. ++- Copy other server settings like tags, alerts, and firewall rules (if applicable) from the source instance to the flexible server. ++- Make changes to your application to point the connection strings to the flexible server. ++- Monitor the database performance closely to see if it requires performance tuning. ++## Related content ++- [Migration service](concepts-migration-service-postgresql.md) +- [Migrate from on-premises and Azure VMs](tutorial-migration-service-offline-iaas.md) +- [Best practices](best-practices-migration-service-postgresql.md) +- [Known Issues and limitations](concepts-known-issues-migration-service.md) +- [Network setup](how-to-network-setup-migration-service.md) +- [Premigration validations](concepts-premigration-migration-service.md) + |
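For the "verify the data" step above, the following is a minimal sketch of a row-count spot check with `psql`. It assumes `psql` is installed on your machine, both servers are reachable from it, and all connection values shown are placeholders you replace with your own.

```bash
# Compare approximate per-table row counts between the AWS RDS source and the
# flexible server target for one database (placeholder values throughout).
# psql prompts for passwords, or set PGPASSWORD/PGPASSFILE beforehand.
SRC="host=<aws-rds-endpoint> port=5432 user=<source-user> dbname=ticketdb sslmode=require"
TGT="host=<flexible-server-name>.postgres.database.azure.com port=5432 user=<target-user> dbname=ticketdb sslmode=require"

QUERY="SELECT relname AS table_name, n_live_tup AS approx_rows FROM pg_stat_user_tables ORDER BY relname;"

psql "$SRC" --csv -c "$QUERY" > source_counts.csv
psql "$TGT" --csv -c "$QUERY" > target_counts.csv

# n_live_tup is an estimate; treat any difference as a prompt for a deeper check
# (for example, exact SELECT count(*) queries on the affected tables).
diff source_counts.csv target_counts.csv && echo "Row counts match"
```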
postgresql | Tutorial Migration Service Offline Iaas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/tutorial-migration-service-offline-iaas.md | + + Title: "Tutorial: Offline migration from on-premises and Azure virtual machines using the migration service with the Azure portal and CLI" +description: "Learn to migrate seamlessly from on-premises or an Azure VM to Azure Database for PostgreSQL - Flexible Server using the new migration service in Azure, simplifying the transition while ensuring data integrity and efficient deployment." +++ Last updated : 01/30/2024++++++# Tutorial: Offline migration to Azure Database for PostgreSQL from on-premises or Azure VM-hosted PostgreSQL using migration service Preview +++This tutorial guides you in migrating a PostgreSQL instance from your on-premises or Azure virtual machines (VMs) to Azure Database for PostgreSQL flexible server using the Azure portal and Azure CLI. ++The migration service in Azure Database for PostgreSQL is a fully managed service integrated into the Azure portal and Azure CLI. It's designed to simplify your migration journey to Azure Database for PostgreSQL flexible server. ++In this document, you learn: +> [!div class="checklist"] +> - Prerequisites +> - Configure the migration task +> - Monitor the migration +> - Post migration +++#### [Portal](#tab/portal) ++## Configure the migration task ++The migration service comes with a simple, wizard-based experience on the Azure portal. ++1. Open your web browser and go to the [portal](https://portal.azure.com/). Enter your credentials to sign in. The default view is your service dashboard. ++1. Go to your Azure Database for the PostgreSQL flexible server. ++1. In the **Overview** tab of the flexible server, on the left menu, scroll down to **Migration** and select it. ++ :::image type="content" source="media\tutorial-migration-service-offline-iaas\offline-portal-select-migration-pane.png" alt-text="Screenshot of the migration selection." lightbox="media\tutorial-migration-service-offline-iaas\offline-portal-select-migration-pane.png"::: ++1. Select the **Create** button to migrate to a flexible server from on-premises or Azure VMs. ++ > [!NOTE] + > The first time you use the migration service, an empty grid appears with a prompt to begin your first migration. ++ If migrations to your flexible server target have already been created, the grid now contains information about attempted migrations. ++1. Select the **Create** button to go through a wizard-based series of tabs to perform a migration. ++ :::image type="content" source="media\tutorial-migration-service-offline-iaas\portal-offline-create-migration.png" alt-text="Screenshot of the create migration page." lightbox="media\tutorial-migration-service-offline-iaas\portal-offline-create-migration.png"::: ++## Setup ++The first tab is the setup tab. ++The user needs to provide multiple details related to the migration, like the migration name, source server type, option, and mode. ++- **Migration name** is the unique identifier for each migration to this Flexible Server target. This field accepts only alphanumeric characters and doesn't accept any special characters except a hyphen (-). The name can't start with a hyphen and should be unique for a target server. No two migrations to the same Flexible Server target can have the same name. 
++- **Source Server Type** - Depending on your PostgreSQL source, you can select Azure Database for PostgreSQL single server, on-premises, Azure VM. ++- **Migration Option** - Allows you to perform validations before triggering a migration. You can pick any of the following options + - **Validate** - Checks your server and database readiness for migration to the target. + - **Migrate** - Skips validations and starts migrations. + - **Validate and Migrate** - Performs validation before triggering a migration. Migration gets triggered if there are no validation failures. + - Choosing the **Validate** or **Validate and Migrate** option is always a good practice to perform premigration validations before running the migration. ++To learn more about the premigration validation, visit [premigration](concepts-premigration-migration-service.md). ++- **Migration mode** allows you to pick the mode for the migration. **Offline** is the default option. ++Select the **Next: Connect to source** button. +++## Connect to the source ++The **Connect to Source** tab prompts you to give details related to the source selected in the **Setup Tab**, which is the source of the databases. ++- **Server Name** - Provide the Hostname or the IP address of the source PostgreSQL instance ++- **Port** - Port number of the Source server ++- **Server admin login name** - Username of the source PostgreSQL server ++- **Password** - Password of the source PostgreSQL server ++- **SSL Mode** - Supported values are preferred and required. When the SSL at the source PostgreSQL server is OFF, use the SSLMODE=prefer. If the SSL at the source server is ON, use the SSLMODE=require. SSL values can be determined in postgresql.conf file. ++- **Test Connection** - Performs the connectivity test between target and source. Once the connection is successful, users can go ahead with the next step; they need to identify the networking issues between the target and source and verify the username/password for the source. Test connection takes a few minutes to establish a connection between the target and source. ++After the successful test connection, select the **Next: Select Migration target** button. +++## Connect to the target ++The **select migration target** tab displays metadata for the Flexible Server target, like subscription name, resource group, server name, location, and PostgreSQL version. ++- **Admin username** - Admin username of the target PostgreSQL server ++- **Password** - Password of the target PostgreSQL server ++- **Test Connection** - Performs the connectivity test between target and source. Once the connection is successful, users can proceed with the next step. Otherwise, we need to identify the networking issues between the target and the source and verify the username/password for the target. Test connection takes a few minutes to establish a connection between the target and source ++After the successful test connection, select the **Next: Select Database(s) for Migration** +++## Select databases for migration ++Under the **Select database for migration** tab, you can choose a list of user databases to migrate from your source PostgreSQL server. ++After selecting the databases, select the **Next: Summary**. +++## Summary ++The **Summary** tab summarizes all the source and target details for creating the validation or migration. Review the details and select the **Start Validation and Migration** button. 
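As a quick way to apply the **SSL Mode** guidance above, you can check whether SSL is enabled on the source server before filling in that field. This is a sketch using `psql` (an assumption; any PostgreSQL client works), with placeholder connection values.

```bash
# Returns "on" when SSL is enabled on the source (choose required in the wizard),
# or "off" when it is disabled (choose preferred).
psql "host=<source-host> port=<port> user=<admin-user> dbname=postgres" -c "SHOW ssl;"
```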
+++## Monitor the migration ++After you select the **Start Validation and Migration** button, a notification appears in a few seconds to say that the validation or migration creation is successful. You're automatically redirected to the flexible server's **Migration** page. The entry is in the **InProgress** state and **PerformingPreRequisiteSteps** substate. The workflow takes 2-3 minutes to set up the migration infrastructure and check network connections. +++The grid that displays the migrations has these columns: **Name**, **Status**, **Migration mode**, **Migration type**, **Source server**, **Source server type**, **Databases**, **Duration** and **Start time**. The entries are displayed in descending order of the start time, with the most recent entry on top. You can use the refresh button to refresh the status of the validation or migration run. ++## Migration details ++Select the migration name in the grid to see the associated details. ++In the **Setup** tab, we have selected the migration option as **Validate and Migrate**. In this scenario, validations are performed first before migration starts. After the **PerformingPreRequisiteSteps** substate is completed, the workflow moves into the substate of **Validation in Progress**. ++- If validation has errors, the migration moves into a **Failed** state. ++- If validation is complete without any error, the migration starts, and the workflow moves into the substate of **Migrating Data**. ++Validation details are available at the instance and database level. ++- **Validation at Instance level** + - Contains validations related to the connectivity check, the source version (PostgreSQL version >= 9.5), and the server parameter check (whether the required extensions are enabled in the server parameters of the Azure Database for PostgreSQL flexible server). +- **Validation at Database level** + - Contains validations of the individual databases related to extension and collation support in Azure Database for PostgreSQL flexible server. ++You can see the **validation** and the **migration** status on the migration details page. ++Possible migration states include: ++- **InProgress**: The migration infrastructure setup is underway, or the actual data migration is in progress. +- **Canceled**: The migration is canceled or deleted. +- **Failed**: The migration has failed. +- **Validation Failed**: The validation has failed. +- **Succeeded**: The migration has succeeded and is complete. +- **WaitingForUserAction**: Applicable only for online migration. Waiting for user action to perform cutover. ++Possible migration substates include: ++- **PerformingPreRequisiteSteps**: Infrastructure setup is underway for data migration. +- **Validation in Progress**: Validation is in progress. +- **MigratingData**: Data migration is in progress. +- **CompletingMigration**: Migration is in the final stages of completion. +- **Completed**: Migration has been completed. +- **Failed**: The migration failed. ++Possible validation substates include: ++- **Failed**: Validation failed. +- **Succeeded**: Validation succeeded. +- **Warning**: Validation completed with warnings. Warnings are informative messages that you must keep in mind while planning the migration. ++## Cancel the migration ++You can cancel any ongoing validations or migrations. The workflow must be in the **InProgress** state to be canceled. You can't cancel a validation or migration that's in the **Succeeded** or **Failed** state. 
++- Canceling a validation stops further validation activity, and the validation moves to a **Canceled** state. ++- Canceling a migration stops further migration activity on your target server and moves to a **Canceled** state. It doesn't drop or roll back any changes on your target server. Be sure to drop the databases on your target server involved in a canceled migration. ++#### [CLI](#tab/cli) ++To begin migrating using the Azure CLI, you need to install the Azure CLI on your local machine. +++## Connect to the source ++In this tutorial, the source PostgreSQL version used is 14.8, and it's installed on an Azure VM running Ubuntu. ++We're going to migrate "ticketdb", "inventorydb", "salesdb" into Azure Database for PostgreSQL flexible server. +++## Perform migration using CLI ++- Open the command prompt and sign in to Azure using the `az login` command. ++ :::image type="content" source="media\tutorial-migration-service-offline-iaas\success-az-login-CLI.png" alt-text="Screenshot of the az success sign in." lightbox="media\tutorial-migration-service-offline-iaas\success-az-login-CLI.png"::: ++- Replace the placeholders `<< >>` in the following JSON and save the file on the local machine, where the CLI is invoked, as `<<filename>>.json`. In this tutorial, we have saved the file in C:\migration-CLI\migration_body.json ++ ```json
 + {
 + "properties": {
 + "SourceDBServerResourceId": "<<source hostname or IP address>>:<<port>>@<<username>>",
 + "SecretParameters": {
 + "AdminCredentials": {
 + "SourceServerPassword": "<<Source Password>>",
 + "TargetServerPassword": "<<Target Password>>"
 + }
 + },
 + "targetServerUserName":"<<Target username>>",
 + "DBsToMigrate": [
 + "<<comma separated list of databases like - "ticketdb","timedb","salesdb">>"
 + ],
 + "OverwriteDBsInTarget": "true",
 + "MigrationMode": "Offline",
 + "sourceType": "AzureVM",
 + "sslMode": "Prefer"
 + }
 + }
 + ``` ++- Run the following command to check if any migrations are running. The migration name is unique across the migrations within the Azure Database for PostgreSQL flexible server target. ++ ```bash
 + az postgres flexible-server migration list --subscription <<subscription ID>> --resource-group <<resource group name>> --name <<Name of the Flexible Server>> --filter All
 + ``` ++ :::image type="content" source="media\tutorial-migration-service-offline-iaas\list-CLI.png" alt-text="Screenshot of list the migration runs in CLI." lightbox="media\tutorial-migration-service-offline-iaas\list-CLI.png"::: ++- Because no migrations have been performed yet, start a new migration by running the following command: ++ ```bash
 + az postgres flexible-server migration create --subscription <<subscription ID>> --resource-group <<resource group name>> --name <<Name of the Flexible Server>> --migration-name <<Unique Migration Name>> --migration-option ValidateAndMigrate --properties "C:\migration-cli\migration_body.json"
 + ``` ++- Run the following command to check the status of the migration initiated in the previous step. You can check the status of the migration by providing the migration name: ++ ```bash
 + az postgres flexible-server migration show --subscription <<subscription ID>> --resource-group <<resource group name>> --name <<Name of the Flexible Server>> --migration-name <<Migration ID>>
 + ``` ++- The status of the migration progress is shown in the CLI. ++ :::image type="content" source="media\tutorial-migration-service-offline-iaas\status-migration-cli.png" alt-text="Screenshot of status migration CLI." 
lightbox="media\tutorial-migration-service-offline-iaas\status-migration-cli.png"::: ++- You can also see the status in the Azure Database for PostgreSQL flexible server portal. ++ :::image type="content" source="media\tutorial-migration-service-offline-iaas\status-migration-portal.png" alt-text="Screenshot of status migration portal." lightbox="media\tutorial-migration-service-offline-iaas\status-migration-portal.png"::: +++## Post migration ++After the database migration completes, you need to manually validate the data between the source and target and verify that all the objects in the target database are successfully created. ++After migration, you can perform the following tasks: ++- Verify the data on your flexible server and ensure it's an exact copy of the source instance. ++- Post verification, enable the high availability option on your flexible server as needed. ++- Change the SKU of the flexible server to match the application needs. This change needs a database server restart. ++- If you change any server parameters from their default values in the source instance, copy those server parameter values to the flexible server (a sample CLI command is shown below). ++- Copy other server settings like tags, alerts, and firewall rules (if applicable) from the source instance to the flexible server. ++- Make changes to your application to point the connection strings to the flexible server. ++- Monitor the database performance closely to see if it requires performance tuning. ++## Related content ++- [Migration service](concepts-migration-service-postgresql.md) +- [Migrate from AWS RDS](tutorial-migration-service-offline-aws.md) +- [Best practices](best-practices-migration-service-postgresql.md) +- [Known Issues and limitations](concepts-known-issues-migration-service.md) +- [Network setup](how-to-network-setup-migration-service.md) +- [Premigration validations](concepts-premigration-migration-service.md) + |
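For the server parameter step above, one way to copy a changed value to the flexible server is with the Azure CLI. The following sketch uses `work_mem` purely as a hypothetical example of a parameter you changed on the source; the resource names are placeholders.

```bash
# Check the current value on the flexible server, then set it to match the source.
az postgres flexible-server parameter show \
  --resource-group <resource-group-name> --server-name <flexible-server-name> --name work_mem

az postgres flexible-server parameter set \
  --resource-group <resource-group-name> --server-name <flexible-server-name> \
  --name work_mem --value <value-from-source>
```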
private-5g-core | Ue Usage Event Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/ue-usage-event-hub.md | You can monitor UE usage based on the monitoring data generated by Azure Event H ## Prerequisites -- You must already have an Event Hubs instance with a shared access policy. The shared access policy must have send and receive access configured.- > [!NOTE] - > Only the first shared access policy for the event hub will be used by this feature. Any additional shared access policies will be ignored. -- You must have a user assigned managed identity that has the Resource Policy Contributor or Owner role for the Event Hubs instance and is assigned to the Packet Core Control Plane for the site.+- You must have an Event Hubs instance with a shared access policy. The shared access policy must have send and receive access configured. +- You must have a user assigned managed identity that has the Contributor or Owner role for the Event Hubs instance and is assigned to the Packet Core Control Plane for the site. ++>[!TIP] +> A default shared access policy will be created automatically if the Packet Core Control Plane is configured with the required managed identity. ## Configure UE usage monitoring -UE usage monitoring can be configured during [site creation](create-a-site.md) or at a later stage by [modifying the packet core](modify-packet-core.md). +UE usage monitoring can be enabled during [site creation](create-a-site.md) or at a later stage by [modifying the packet core configuration](modify-packet-core.md). ++Once Event Hubs is receiving data from your AP5GC deployment, you can write an application using SDKs [such as .NET](/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send?tabs=passwordless%2Croles-azure-portal) to consume event data and produce metrics. -Once Event Hubs is receiving data from your AP5GC deployment you can write an application, using SDKs such as [.NET](/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send?tabs=passwordless%2Croles-azure-portal), to consume event data and produce useful metric data. +>[!TIP] +> If you create the managed identity after enabling UE usage monitoring, you will need to refresh the packet core configuration by making a dummy configuration change. See [Modify a packet core instance](modify-packet-core.md). ## Reported UE usage data |
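To illustrate the shared access policy prerequisite described above, the following is a sketch of creating an event hub authorization rule with send and receive (listen) rights using the Azure CLI; all names are placeholders, and your existing policy may already satisfy the requirement.

```bash
# Create a shared access policy with send and listen rights on the event hub
# used for UE usage monitoring (placeholder names).
az eventhubs eventhub authorization-rule create \
  --resource-group <resource-group-name> \
  --namespace-name <event-hubs-namespace> \
  --eventhub-name <event-hub-name> \
  --name ue-usage-policy \
  --rights Listen Send
```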
route-server | About Dual Homed Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/about-dual-homed-network.md | Title: About dual-homed network with Azure Route Server -description: Learn how Azure Route Server works in a dual-homed network. -+description: Learn how to utilize Azure Route Server in a dual-homed network where you can connect a spoke virtual network (VNet) to more than one hub VNet. -- Previously updated : 01/27/2023 -++ Last updated : 01/30/2024+#CustomerIntent: As an Azure administrator, I want to peer spoke virtual networks (VNets) to more than one hub VNet so that the resources in the spoke VNets can communicate through either of the hub VNets. # About dual-homed network with Azure Route Server In the control plane, the NVA in the hub VNet will learn about on-premises route In the data plane, the virtual machines in the spoke VNet will send all traffic destined for the on-premises network to the NVA in the hub VNet first. Then the NVA will forward the traffic to the on-premises network through ExpressRoute. Traffic from on-premises will traverse the same data path in the reverse direction. You'll notice none of the route servers are in the data path. -## Next steps +## Related content * Learn about [Azure Route Server support for ExpressRoute and Azure VPN](expressroute-vpn-support.md) * Learn how to [configure peering between Azure Route Server and Network Virtual Appliance](tutorial-configure-route-server-with-quagga.md)- |
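Related to the NVA peering tutorial linked above, the following is a minimal Azure CLI sketch (not taken from this article) of adding an NVA as a BGP peer of an existing Route Server; the resource names, ASN, and IP address are placeholders.

```bash
# Peer an NVA with an existing Azure Route Server (placeholder values).
az network routeserver peering create \
  --resource-group <resource-group-name> \
  --routeserver <route-server-name> \
  --name <nva-peer-name> \
  --peer-asn 65001 \
  --peer-ip 10.0.1.4
```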
spring-apps | Application Observability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/application-observability.md | - Title: Optimize application observability for Azure Spring Apps -description: Learn how to observe the application of Azure Spring Apps. ---- Previously updated : 10/02/2023----# Optimize application observability for Azure Spring Apps --> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. --**This article applies to:** ✔️ Java ❌ C# --**This article applies to:** <br> -❌ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ❌ Enterprise --This article shows you how to observe your production applications deployed on Azure Spring Apps and diagnose and investigate production issues. Observability is the ability to collect insights, analytics, and actionable intelligence through the logs, metrics, traces, and alerts. --To find out if your applications meet expectations and to discover and predict issues in all applications, focus on the following areas: --- **Availability**: Check that the application is available and accessible to the user.-- **Reliability**: Check that the application is reliable and can be used normally.-- **Failure**: Understand that the application isn't working properly and further fixes are required.-- **Performance**: Understand which performance issues the application encounters that need further attention and find out the root cause of the problem.-- **Alerts**: Know the current state of the application. Proactively notify others and take necessary actions when the application isn't working properly.--This article uses the well-known [PetClinic](https://github.com/azure-samples/spring-petclinic-microservices) sample app as the production application. For more information on how to deploy PetClinic to Azure Spring Apps and use MySQL as the persistent store, see the following articles: --- [Deploy microservice applications to Azure Spring Apps](./quickstart-deploy-microservice-apps.md)-- [Integrate Azure Spring Apps with Azure Database for MySQL](./quickstart-integrate-azure-database-mysql.md)--Log Analytics and Application Insights are deeply integrated with Azure Spring Apps. You can use Log Analytics to diagnose your application with various log queries and use Application Insights to investigate production issues. For more information, see the following articles: --- [Overview of Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md)-- [Azure Monitor Insights overview](../azure-monitor/insights/insights-overview.md)--## Prerequisites --- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]---## Query logs to diagnose an application problem --If you encounter production issues, you need to do a root cause analysis. Finding logs is an important part of this analysis, especially for distributed applications with logs spread across multiple applications. The trace data collected by Application Insights can help you find the log information for all related links, including the exception stack information. --This section explains how to use Log Analytics to query the application logs and use Application Insights to investigate request failures. 
For more information, see the following articles: --- [Log Analytics tutorial](../azure-monitor/logs/log-analytics-tutorial.md)-- [Application Map: Triage distributed applications](../azure-monitor/app/app-map.md)--### Log queries --This section explains how to query application logs from the `AppPlatformLogsforSpring` table hosted by Azure Spring Apps. You can use the [Kusto Query Language](/azure/data-explorer/kusto/query/) to customize your queries for application logs. --To see the built-in example query statements or to write your own queries, open the Azure Spring Apps instance and go to the **Logs** menu. --#### Show the application logs that contain the "error" or "exception" terms --To see the application logs containing the terms "error" or "exception", select **Alerts** on the **Queries** page, and then select **Run** in the **Show the application logs which contain the "error" or "exception" terms** section. --The following query shows the application logs from the last hour that contains the terms "error" or "exception". You can customize the query with any keyword you want to search for. --```sql -AppPlatformLogsforSpring -| where TimeGenerated > ago(1h) -| where Log contains "error" or Log contains "exception" -| project TimeGenerated , ServiceName , AppName , InstanceName , Log , _ResourceId -``` ---#### Show the error and exception number of each application --To see the error and exception number of an application, select **Alerts** on the **Queries** page, and then select **Run** in the **Show the error and exception number of each application** section. --The following query shows a pie chart of the number of the logs in the last 24 hours that contain the terms "error" or "exception". To view the results in a table format, select **Result**. --```sql -AppPlatformLogsforSpring -| where TimeGenerated > ago(24h) -| where Log contains "error" or Log contains "exception" -| extend FullAppName = strcat(ServiceName, "/", AppName) -| summarize count_per_app = count() by FullAppName, ServiceName, AppName, _ResourceId -| sort by count_per_app desc -| render piechart -``` ---#### Query the customers service log with a key word --Use the following query to see a list of logs in the `customers-service` app that contain the term "root cause". Update the query to use the keyword that you're looking for. --```sql -AppPlatformLogsforSpring -| where AppName == "customers-service" -| where Log contains "root cause" -| project-keep InstanceName, Log -``` ---### Investigate request failures --Use the following steps to investigate request failures in the application cluster and to view the failed request list and specific examples of the failed requests: --1. Go to the Azure Spring Apps instance overview page. --1. On the navigation menu, select **Application Insights** to go to the Application Insights overview page. Then, select **Failures**. -- :::image type="content" source="media/application-observability/application-insights-failures.png" alt-text="Screenshot of the Azure portal that shows the Application Insights Failures page." lightbox="media/application-observability/application-insights-failures.png"::: --1. On the **Failure** page, select the `PUT` operation that has the most failed requests count, select **1 Samples** to go into the details, and then select the suggested sample. 
-- :::image type="content" source="media/application-observability/application-insights-failure-suggested-sample.png" alt-text="Screenshot of the Azure portal that shows the Select a sample operation pane with the suggested failure sample." lightbox="media/application-observability/application-insights-failure-suggested-sample.png"::: --1. Go to the **End-to-end transaction details** page to view the full call stack in the right panel. -- :::image type="content" source="media/application-observability/application-insights-e2e-exception.png" alt-text="Screenshot of the Azure portal that shows the End-to-end transaction details page with Application Insights failures." lightbox="media/application-observability/application-insights-e2e-exception.png"::: --## Improve the application performance using Application Insights --If there's a performance issue, the trace data collected by Application Insights can help find the log information of all relevant links, including the execution time of each link, to help find the location of the performance bottleneck. --To use Application Insights to investigate the performance issues, use the following steps: --1. Go to the Azure Spring Apps instance overview page. --1. On the navigation menu, select **Application Insights** to go to the Application Insights overview page. Then, select **Performance**. -- :::image type="content" source="media/application-observability/application-insights-performance.png" alt-text="Screenshot of the Azure portal that shows the Application Insights Performance page." lightbox="media/application-observability/application-insights-performance.png"::: --1. On the **Performance** page, select the slowest `GET /api/gateway/owners/{ownerId}` operation, select **3 Samples** to go into the details, and then select the suggested sample. -- :::image type="content" source="media/application-observability/application-insights-performance-suggested-sample.png" alt-text="Screenshot of the Azure portal that shows the Select a sample operation pane with the suggested performance sample." lightbox="media/application-observability/application-insights-performance-suggested-sample.png"::: --1. Go to the **End-to-end transaction details** page to view the full call stack in the right panel. -- :::image type="content" source="media/application-observability/application-insights-e2e-performance.png" alt-text="Screenshot of the Azure portal that shows the End-to-end transaction details page with the Application Insights performance issue." lightbox="media/application-observability/application-insights-e2e-performance.png"::: ---## Next steps --> [!div class="nextstepaction"] -> [Set up a staging environment](../spring-apps/how-to-staging-environment.md) --> [!div class="nextstepaction"] -> [Map an existing custom domain to Azure Spring Apps](./how-to-custom-domain.md) --> [!div class="nextstepaction"] -> [Use TLS/SSL certificates](./how-to-use-tls-certificate.md) |
spring-apps | Application Observability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/application-observability.md | + + Title: Optimize application observability for Azure Spring Apps +description: Learn how to observe the application of Azure Spring Apps. ++++ Last updated : 10/02/2023++++# Optimize application observability for Azure Spring Apps ++> [!NOTE] +> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. ++**This article applies to:** ✔️ Java ❌ C# ++**This article applies to:** <br> +❌ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ❌ Enterprise ++This article shows you how to observe your production applications deployed on Azure Spring Apps and diagnose and investigate production issues. Observability is the ability to collect insights, analytics, and actionable intelligence through the logs, metrics, traces, and alerts. ++To find out if your applications meet expectations and to discover and predict issues in all applications, focus on the following areas: ++- **Availability**: Check that the application is available and accessible to the user. +- **Reliability**: Check that the application is reliable and can be used normally. +- **Failure**: Understand that the application isn't working properly and further fixes are required. +- **Performance**: Understand which performance issues the application encounters that need further attention and find out the root cause of the problem. +- **Alerts**: Know the current state of the application. Proactively notify others and take necessary actions when the application isn't working properly. ++This article uses the well-known [PetClinic](https://github.com/azure-samples/spring-petclinic-microservices) sample app as the production application. For more information on how to deploy PetClinic to Azure Spring Apps and use MySQL as the persistent store, see the following articles: ++- [Deploy microservice applications to Azure Spring Apps](../enterprise/quickstart-deploy-microservice-apps.md?pivots=sc-standard&toc=/azure/spring-apps/basic-standard/toc.json&bc=/azure/spring-apps/basic-standard/breadcrumb/toc.json) +- [Integrate Azure Spring Apps with Azure Database for MySQL](quickstart-integrate-azure-database-mysql.md) ++Log Analytics and Application Insights are deeply integrated with Azure Spring Apps. You can use Log Analytics to diagnose your application with various log queries and use Application Insights to investigate production issues. For more information, see the following articles: ++- [Overview of Log Analytics in Azure Monitor](../../azure-monitor/logs/log-analytics-overview.md) +- [Azure Monitor Insights overview](../../azure-monitor/insights/insights-overview.md) ++## Prerequisites ++- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] +++## Query logs to diagnose an application problem ++If you encounter production issues, you need to do a root cause analysis. Finding logs is an important part of this analysis, especially for distributed applications with logs spread across multiple applications. The trace data collected by Application Insights can help you find the log information for all related links, including the exception stack information. 
++This section explains how to use Log Analytics to query the application logs and use Application Insights to investigate request failures. For more information, see the following articles: ++- [Log Analytics tutorial](../../azure-monitor/logs/log-analytics-tutorial.md) +- [Application Map: Triage distributed applications](../../azure-monitor/app/app-map.md) ++### Log queries ++This section explains how to query application logs from the `AppPlatformLogsforSpring` table hosted by Azure Spring Apps. You can use the [Kusto Query Language](/azure/data-explorer/kusto/query/) to customize your queries for application logs. ++To see the built-in example query statements or to write your own queries, open the Azure Spring Apps instance and go to the **Logs** menu. ++#### Show the application logs that contain the "error" or "exception" terms ++To see the application logs containing the terms "error" or "exception", select **Alerts** on the **Queries** page, and then select **Run** in the **Show the application logs which contain the "error" or "exception" terms** section. ++The following query shows the application logs from the last hour that contains the terms "error" or "exception". You can customize the query with any keyword you want to search for. ++```sql +AppPlatformLogsforSpring +| where TimeGenerated > ago(1h) +| where Log contains "error" or Log contains "exception" +| project TimeGenerated , ServiceName , AppName , InstanceName , Log , _ResourceId +``` +++#### Show the error and exception number of each application ++To see the error and exception number of an application, select **Alerts** on the **Queries** page, and then select **Run** in the **Show the error and exception number of each application** section. ++The following query shows a pie chart of the number of the logs in the last 24 hours that contain the terms "error" or "exception". To view the results in a table format, select **Result**. ++```sql +AppPlatformLogsforSpring +| where TimeGenerated > ago(24h) +| where Log contains "error" or Log contains "exception" +| extend FullAppName = strcat(ServiceName, "/", AppName) +| summarize count_per_app = count() by FullAppName, ServiceName, AppName, _ResourceId +| sort by count_per_app desc +| render piechart +``` +++#### Query the customers service log with a key word ++Use the following query to see a list of logs in the `customers-service` app that contain the term "root cause". Update the query to use the keyword that you're looking for. ++```sql +AppPlatformLogsforSpring +| where AppName == "customers-service" +| where Log contains "root cause" +| project-keep InstanceName, Log +``` +++### Investigate request failures ++Use the following steps to investigate request failures in the application cluster and to view the failed request list and specific examples of the failed requests: ++1. Go to the Azure Spring Apps instance overview page. ++1. On the navigation menu, select **Application Insights** to go to the Application Insights overview page. Then, select **Failures**. ++ :::image type="content" source="media/application-observability/application-insights-failures.png" alt-text="Screenshot of the Azure portal that shows the Application Insights Failures page." lightbox="media/application-observability/application-insights-failures.png"::: ++1. On the **Failure** page, select the `PUT` operation that has the most failed requests count, select **1 Samples** to go into the details, and then select the suggested sample. 
++ :::image type="content" source="media/application-observability/application-insights-failure-suggested-sample.png" alt-text="Screenshot of the Azure portal that shows the Select a sample operation pane with the suggested failure sample." lightbox="media/application-observability/application-insights-failure-suggested-sample.png"::: ++1. Go to the **End-to-end transaction details** page to view the full call stack in the right panel. ++ :::image type="content" source="media/application-observability/application-insights-e2e-exception.png" alt-text="Screenshot of the Azure portal that shows the End-to-end transaction details page with Application Insights failures." lightbox="media/application-observability/application-insights-e2e-exception.png"::: ++## Improve the application performance using Application Insights ++If there's a performance issue, the trace data collected by Application Insights can help find the log information of all relevant links, including the execution time of each link, to help find the location of the performance bottleneck. ++To use Application Insights to investigate the performance issues, use the following steps: ++1. Go to the Azure Spring Apps instance overview page. ++1. On the navigation menu, select **Application Insights** to go to the Application Insights overview page. Then, select **Performance**. ++ :::image type="content" source="media/application-observability/application-insights-performance.png" alt-text="Screenshot of the Azure portal that shows the Application Insights Performance page." lightbox="media/application-observability/application-insights-performance.png"::: ++1. On the **Performance** page, select the slowest `GET /api/gateway/owners/{ownerId}` operation, select **3 Samples** to go into the details, and then select the suggested sample. ++ :::image type="content" source="media/application-observability/application-insights-performance-suggested-sample.png" alt-text="Screenshot of the Azure portal that shows the Select a sample operation pane with the suggested performance sample." lightbox="media/application-observability/application-insights-performance-suggested-sample.png"::: ++1. Go to the **End-to-end transaction details** page to view the full call stack in the right panel. ++ :::image type="content" source="media/application-observability/application-insights-e2e-performance.png" alt-text="Screenshot of the Azure portal that shows the End-to-end transaction details page with the Application Insights performance issue." lightbox="media/application-observability/application-insights-e2e-performance.png"::: +++## Next steps ++> [!div class="nextstepaction"] +> [Set up a staging environment](../enterprise/how-to-staging-environment.md) ++> [!div class="nextstepaction"] +> [Map an existing custom domain to Azure Spring Apps](../enterprise/how-to-custom-domain.md?toc=/azure/spring-apps/basic-standard/toc.json&bc=/azure/spring-apps/basic-standard/breadcrumb/toc.json) ++> [!div class="nextstepaction"] +> [Use TLS/SSL certificates](../enterprise/how-to-use-tls-certificate.md?toc=/azure/spring-apps/basic-standard/toc.json&bc=/azure/spring-apps/basic-standard/breadcrumb/toc.json) |
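In addition to running the queries above in the portal's **Logs** pane, you can run the same KQL from the command line. The following is a sketch using the Azure CLI, assuming you know the workspace GUID of the Log Analytics workspace that backs your diagnostic settings and that the `log-analytics` CLI extension is available.

```bash
# Run the "error or exception" query shown above against the workspace
# (the workspace GUID is a placeholder).
az monitor log-analytics query \
  --workspace <log-analytics-workspace-guid> \
  --analytics-query 'AppPlatformLogsforSpring
| where TimeGenerated > ago(1h)
| where Log contains "error" or Log contains "exception"
| project TimeGenerated, ServiceName, AppName, InstanceName, Log, _ResourceId' \
  --timespan PT1H
```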
spring-apps | How To Access Data Plane Azure Ad Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/how-to-access-data-plane-azure-ad-rbac.md | + + Title: "Access Config Server and Service Registry" ++description: How to access Config Server and Service Registry Endpoints with Microsoft Entra role-based access control. ++++ Last updated : 08/25/2021++++# Access Config Server and Service Registry ++> [!NOTE] +> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. ++**This article applies to:** ✔️ Basic/Standard ❌ Enterprise ++This article explains how to access the Spring Cloud Config Server and Spring Cloud Service Registry managed by Azure Spring Apps using Microsoft Entra role-based access control (RBAC). ++> [!NOTE] +> Applications deployed and running inside the Azure Spring Apps service are automatically wired up with certificate-based authentication and authorization when accessing the managed Spring Cloud Config Server and Service Registry. You don't need to follow this guidance for these applications. The related certificates are fully managed by the Azure Spring Apps platform, and are automatically injected in your application when connected to Config Server and Service Registry. ++<a name='assign-role-to-azure-ad-usergroup-msi-or-service-principal'></a> ++## Assign role to Microsoft Entra user/group, MSI, or service principal ++Assign the role to the [user | group | service-principal | managed-identity] at [management-group | subscription | resource-group | resource] scope. ++| Role name | Description | +|-|| +| Azure Spring Apps Config Server Reader | Allow read access to Azure Spring Apps Config Server. | +| Azure Spring Apps Config Server Contributor | Allow read, write, and delete access to Azure Spring Apps Config Server. | +| Azure Spring Apps Service Registry Reader | Allow read access to Azure Spring Apps Service Registry. | +| Azure Spring Apps Service Registry Contributor | Allow read, write, and delete access to Azure Spring Apps Service Registry. | ++For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). ++## Access Config Server and Service Registry Endpoints ++After the role is assigned, the assignee can access the Spring Cloud Config Server and the Spring Cloud Service Registry endpoints using the following procedures: ++1. Get an access token. After a Microsoft Entra user is assigned the role, they can use the following commands to sign in to Azure CLI with user, service principal, or managed identity to get an access token. For details, see [Authenticate Azure CLI](/cli/azure/authenticate-azure-cli). ++ ```azurecli + az login + az account get-access-token + ``` ++1. Compose the endpoint. We support the default endpoints of the Spring Cloud Config Server and Spring Cloud Service Registry managed by Azure Spring Apps. ++ * *'https://SERVICE_NAME.svc.azuremicroservices.io/eureka/{path}'* + * *'https://SERVICE_NAME.svc.azuremicroservices.io/config/{path}'* ++ >[!NOTE] + > If you're using Microsoft Azure operated by 21Vianet, replace `*.azuremicroservices.io` with `*.microservices.azure.cn`. 
For more information, see the section [Check endpoints in Azure](/azure/china/resources-developer-guide#check-endpoints-in-azure) in the [Microsoft Azure operated by 21Vianet developer guide](/azure/china/resources-developer-guide). ++1. Access the composed endpoint with the access token. Put the access token in a header to provide authorization: `--header 'Authorization: Bearer {TOKEN_FROM_PREVIOUS_STEP}'`. ++ For example: ++ a. Access an endpoint like `https://SERVICE_NAME.svc.azuremicroservices.io/config/actuator/health` to see the health status of Config Server. ++ b. Access an endpoint like `https://SERVICE_NAME.svc.azuremicroservices.io/eureka/eureka/apps` to see the registered apps in Spring Cloud Service Registry (Eureka here). ++ If the response is `401 Unauthorized`, check whether the role is successfully assigned. The role assignment can take several minutes to take effect. Also verify that the access token hasn't expired. ++For more information about actuator endpoints, see [Production ready endpoints](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#production-ready-endpoints). ++For Eureka endpoints, see [Eureka-REST-operations](https://github.com/Netflix/eureka/wiki/Eureka-REST-operations). ++For Config Server endpoints and detailed path information, see [ResourceController.java](https://github.com/spring-cloud/spring-cloud-config/blob/main/spring-cloud-config-server/src/main/java/org/springframework/cloud/config/server/resource/ResourceController.java) and [EncryptionController.java](https://github.com/spring-cloud/spring-cloud-config/blob/main/spring-cloud-config-server/src/main/java/org/springframework/cloud/config/server/encryption/EncryptionController.java). ++## Register Spring Boot apps to Spring Cloud Config Server and Service Registry managed by Azure Spring Apps ++After the role is assigned, you can register Spring Boot apps to Spring Cloud Config Server and Service Registry managed by Azure Spring Apps with Microsoft Entra token authentication. Both Config Server and Service Registry support a [custom REST template](https://cloud.spring.io/spring-cloud-config/reference/html/#custom-rest-template) to inject the bearer token for authentication. ++For more information, see the samples [Access Azure Spring Apps managed Config Server](https://github.com/Azure-Samples/azure-spring-apps-samples/tree/main/custom-config-server-client) and [Access Azure Spring Apps managed Service Registry](https://github.com/Azure-Samples/azure-spring-apps-samples/tree/main/custom-eureka-client). The following sections explain some important details in these samples. ++**In *AccessTokenManager.java*:** ++`AccessTokenManager` is responsible for getting an access token from Microsoft Entra ID. Configure the service principal's sign-in information in the *application.properties* file and initialize `ApplicationTokenCredentials` to get the token. You can find this file in both samples. 
++```java +prop.load(in); +tokenClientId = prop.getProperty("access.token.clientId"); +String tenantId = prop.getProperty("access.token.tenantId"); +String secret = prop.getProperty("access.token.secret"); +String clientId = prop.getProperty("access.token.clientId"); +credentials = new ApplicationTokenCredentials( + clientId, tenantId, secret, AzureEnvironment.AZURE); +``` ++**In *CustomConfigServiceBootstrapConfiguration.java*:** ++`CustomConfigServiceBootstrapConfiguration` implements the custom REST template for Config Server and injects the token from Microsoft Entra ID as `Authorization` headers. You can find this file in the [Config Server sample](https://github.com/Azure-Samples/azure-spring-apps-samples/tree/main/custom-config-server-client). ++```java +public class RequestResponseHandlerInterceptor implements ClientHttpRequestInterceptor { ++ @Override + public ClientHttpResponse intercept(HttpRequest request, byte[] body, ClientHttpRequestExecution execution) throws IOException { + String accessToken = AccessTokenManager.getToken(); + request.getHeaders().remove(AUTHORIZATION); + request.getHeaders().add(AUTHORIZATION, "Bearer " + accessToken); ++ ClientHttpResponse response = execution.execute(request, body); + return response; + } ++} +``` ++**In *CustomRestTemplateTransportClientFactories.java*:** ++The previous two classes are for the implementation of the custom REST template for Spring Cloud Service Registry. The `intercept` part is the same as in the Config Server above. Be sure to add `factory.mappingJacksonHttpMessageConverter()` to the message converters. You can find this file in the [Spring Cloud Service Registry sample](https://github.com/Azure-Samples/azure-spring-apps-samples/tree/main/custom-eureka-client). ++```java +private RestTemplate customRestTemplate() { + /* + * Inject your custom rest template + */ + RestTemplate restTemplate = new RestTemplate(); + restTemplate.getInterceptors() + .add(new RequestResponseHandlerInterceptor()); + RestTemplateTransportClientFactory factory = new RestTemplateTransportClientFactory(); ++ restTemplate.getMessageConverters().add(0, factory.mappingJacksonHttpMessageConverter()); ++ return restTemplate; +} +``` ++If you're running applications on a Kubernetes cluster, we recommend that you use an IP address to register Spring Cloud Service Registry for access. ++```properties +eureka.instance.prefer-ip-address=true +``` ++## Next steps ++* [Authenticate Azure CLI](/cli/azure/authenticate-azure-cli) +* [Production ready endpoints](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#production-ready-endpoints) +* [Create roles and permissions](../enterprise/how-to-permissions.md?toc=/azure/spring-apps/basic-standard/toc.json&bc=/azure/spring-apps/basic-standard/breadcrumb/toc.json) |
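As a concrete version of the "Access the composed endpoint with the access token" step above, here's a minimal sketch using the Azure CLI and `curl`; `SERVICE_NAME` is the same placeholder used throughout this article, and the role assignment described earlier must already be in place.

```bash
# Get a token for the signed-in user, service principal, or managed identity.
TOKEN=$(az account get-access-token --query accessToken --output tsv)

# Health status of the managed Config Server.
curl --header "Authorization: Bearer $TOKEN" \
  "https://SERVICE_NAME.svc.azuremicroservices.io/config/actuator/health"

# Apps registered in the managed Service Registry (Eureka).
curl --header "Authorization: Bearer $TOKEN" \
  "https://SERVICE_NAME.svc.azuremicroservices.io/eureka/eureka/apps"
```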
spring-apps | How To Appdynamics Java Agent Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/how-to-appdynamics-java-agent-monitor.md | + + Title: "How to monitor Spring Boot apps with the AppDynamics Java Agent (Preview)" ++description: How to use the AppDynamics Java agent to monitor Spring Boot applications in Azure Spring Apps. ++++ Last updated : 06/07/2022++ms.devlang: azurecli +++# How to monitor Spring Boot apps with the AppDynamics Java Agent ++> [!NOTE] +> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. ++**This article applies to:** ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ❌️ Enterprise ++This article explains how to use the AppDynamics Java Agent to monitor Spring Boot applications in Azure Spring Apps. ++With the AppDynamics Java Agent, you can: ++- Monitor applications +- Configure the AppDynamics Java Agent using environment variables +- Check all monitoring data from the AppDynamics dashboard ++The following video introduces the AppDynamics Java in-process agent. ++<br> ++> [!VIDEO https://www.youtube.com/embed/4dZuRX5bNAs] ++## Prerequisites ++* [Azure CLI](/cli/azure/install-azure-cli) +* [An AppDynamics account](https://www.appdynamics.com/) ++## Activate the AppDynamics Java in-process agent ++For the whole workflow, you need to: ++* Activate the AppDynamics Java in-process agent in Azure Spring Apps to generate application metrics data. +* Connect the AppDynamics Agent to the AppDynamics Controller to collect and visualize the data in the controller. ++![Diagram showing a Spring Boot application in 'Azure Spring Apps' box with a two-directional arrow connecting it to an 'AppDynamics Agent' box, which also has an arrow pointing to an 'AppDynamics Controller' box](media/how-to-appdynamics-java-agent-monitor/appdynamics-activation.jpg) ++### Activate an application with the AppDynamics Agent using the Azure CLI ++To activate an application through the Azure CLI, use the following steps. ++1. Create a resource group. +1. Create an instance of Azure Spring Apps. +1. Create an application using the following command. Replace the placeholders *\<...>* with your own values. ++ ```azurecli + az spring app create \ + --resource-group "<your-resource-group-name>" \ + --service "<your-Azure-Spring-Apps-instance-name>" \ + --name "<your-app-name>" \ + --is-public true + ``` ++1. Create a deployment with the AppDynamics Agent using environment variables. 
++ ```azurecli + az spring app deploy \ + --resource-group "<your-resource-group-name>" \ + --service "<your-Azure-Spring-Apps-instance-name>" \ + --name "<your-app-name>" \ + --artifact-path app.jar \ + --jvm-options="-javaagent:/opt/agents/appdynamics/java/javaagent.jar" \ + --env APPDYNAMICS_AGENT_APPLICATION_NAME=<your-app-name> \ + APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=<your-agent-access-key> \ + APPDYNAMICS_AGENT_ACCOUNT_NAME=<your-agent-account-name> \ + APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME=true \ + APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME_PREFIX=<your-agent-node-name> \ + APPDYNAMICS_AGENT_TIER_NAME=<your-agent-tier-name> \ + APPDYNAMICS_CONTROLLER_HOST_NAME=<your-AppDynamics-controller-host-name> \ + APPDYNAMICS_CONTROLLER_SSL_ENABLED=true \ + APPDYNAMICS_CONTROLLER_PORT=443 + ``` ++Azure Spring Apps pre-installs the AppDynamics Java agent to the path */opt/agents/appdynamics/java/javaagent.jar*. You can activate the agent from your applications' JVM options, then configure the agent using environment variables. You can find values for these variables at [Monitor Azure Spring Apps with Java Agent](https://docs.appdynamics.com/21.11/en/application-monitoring/install-app-server-agents/java-agent/monitor-azure-spring-cloud-with-java-agent). For more information about how these variables help to view and organize reports in the AppDynamics UI, see [Tiers and Nodes](https://docs.appdynamics.com/21.9/en/application-monitoring/tiers-and-nodes). ++### Activate an application with the AppDynamics Agent using the Azure portal ++To activate an application through the Azure portal, use the following steps. ++1. Navigate to your Azure Spring Apps instance in the Azure portal. ++1. Select **Apps** in the **Settings** section of the navigation pane. ++ :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-list.png" alt-text="Screenshot of the Azure portal showing the Apps page for an Azure Spring Apps instance." lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-list.png"::: ++1. Select the app, and then select **Configuration** in the navigation pane. ++1. Use the **General settings** tab to update values such as the JVM options. ++ :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-general.png" alt-text="Screenshot of the Azure portal showing the Configuration page for an app in an Azure Spring Apps instance, with the General settings tab selected." lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-general.png"::: ++1. Select **Environment variables** to add or update the variables used by your application. ++ :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-env.png" alt-text="Screenshot of the Azure portal showing the Configuration page for an app in an Azure Spring Apps instance, with the Environment variables tab selected." lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-env.png"::: ++## Automate provisioning ++You can also run a provisioning automation pipeline using Terraform, Bicep, or Azure Resource Manager template (ARM template). This pipeline can provide a complete hands-off experience to instrument and monitor any new applications that you create and deploy. 
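Whichever path you choose, the CLI deployment above, the portal, or the automation templates in the next sections, you may want to confirm at runtime that the agent environment variables actually reached the application. The following hedged Java sketch (not part of the official guidance) simply reports whether a few of the variables set in the deployment are visible to the app; the variable names come from the command above, and no secret values are printed.

```java
import java.util.List;

public class AppDynamicsEnvCheck {
    public static void main(String[] args) {
        // Variable names match the ones set in the deployment above; only presence is reported.
        List<String> expected = List.of(
                "APPDYNAMICS_AGENT_APPLICATION_NAME",
                "APPDYNAMICS_AGENT_ACCOUNT_NAME",
                "APPDYNAMICS_AGENT_TIER_NAME",
                "APPDYNAMICS_CONTROLLER_HOST_NAME");

        for (String name : expected) {
            String value = System.getenv(name);
            System.out.println(name + " is " + (value == null || value.isEmpty() ? "NOT set" : "set"));
        }
    }
}
```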
++### Automate provisioning using Terraform ++To configure the environment variables in a Terraform template, add the following code to the template, replacing the *\<...>* placeholders with your own values. For more information, see [Manages an Active Azure Spring Apps Deployment](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/spring_cloud_active_deployment). ++```terraform +resource "azurerm_spring_cloud_java_deployment" "example" { + ... + jvm_options = "-javaagent:/opt/agents/appdynamics/java/javaagent.jar" + ... + environment_variables = { + "APPDYNAMICS_AGENT_APPLICATION_NAME" : "<your-app-name>", + "APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY" : "<your-agent-access-key>", + "APPDYNAMICS_AGENT_ACCOUNT_NAME" : "<your-agent-account-name>", + "APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME" : "true", + "APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME_PREFIX" : "<your-agent-node-name>", + "APPDYNAMICS_AGENT_TIER_NAME" : "<your-agent-tier-name>", + "APPDYNAMICS_CONTROLLER_HOST_NAME" : "<your-AppDynamics-controller-host-name>", + "APPDYNAMICS_CONTROLLER_SSL_ENABLED" : "true", + "APPDYNAMICS_CONTROLLER_PORT" : "443" + } +} +``` ++### Automate provisioning using Bicep ++To configure the environment variables in a Bicep file, add the following code to the file, replacing the *\<...>* placeholders with your own values. For more information, see [Microsoft.AppPlatform Spring/apps/deployments](/azure/templates/microsoft.appplatform/spring/apps/deployments?tabs=bicep). ++```bicep +deploymentSettings: { + environmentVariables: { + APPDYNAMICS_AGENT_APPLICATION_NAME : '<your-app-name>' + APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY : '<your-agent-access-key>' + APPDYNAMICS_AGENT_ACCOUNT_NAME : '<your-agent-account-name>' + APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME : 'true' + APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME_PREFIX : '<your-agent-node-name>' + APPDYNAMICS_AGENT_TIER_NAME : '<your-agent-tier-name>' + APPDYNAMICS_CONTROLLER_HOST_NAME : '<your-AppDynamics-controller-host-name>' + APPDYNAMICS_CONTROLLER_SSL_ENABLED : 'true' + APPDYNAMICS_CONTROLLER_PORT : '443' + } + jvmOptions: '-javaagent:/opt/agents/appdynamics/java/javaagent.jar' +} +``` ++### Automate provisioning using an ARM template ++To configure the environment variables in an ARM template, add the following code to the template, replacing the *\<...>* placeholders with your own values. For more information, see [Microsoft.AppPlatform Spring/apps/deployments](/azure/templates/microsoft.appplatform/spring/apps/deployments?tabs=json). ++```JSON +"deploymentSettings": { + "environmentVariables": { + "APPDYNAMICS_AGENT_APPLICATION_NAME" : "<your-app-name>", + "APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY" : "<your-agent-access-key>", + "APPDYNAMICS_AGENT_ACCOUNT_NAME" : "<your-agent-account-name>", + "APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME" : "true", + "APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME_PREFIX" : "<your-agent-node-name>", + "APPDYNAMICS_AGENT_TIER_NAME" : "<your-agent-tier-name>", + "APPDYNAMICS_CONTROLLER_HOST_NAME" : "<your-AppDynamics-controller-host-name>", + "APPDYNAMICS_CONTROLLER_SSL_ENABLED" : "true", + "APPDYNAMICS_CONTROLLER_PORT" : "443" + }, + "jvmOptions": "-javaagent:/opt/agents/appdynamics/java/javaagent.jar", + ... +} +``` ++## Review reports in the AppDynamics dashboard ++This section shows various reports in AppDynamics. 
++The following screenshot shows an overview of your apps in the AppDynamics dashboard: +++The **Application Dashboard** shows the overall information for each of your apps, as shown in the following screenshots using example applications: ++- `api-gateway` ++ :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/appdynamics-dashboard-api-gateway.jpg" alt-text="AppDynamics screenshot showing the Application Dashboard for the example api-gateway app." lightbox="media/how-to-appdynamics-java-agent-monitor/appdynamics-dashboard-api-gateway.jpg"::: ++- `customers-service` ++ :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/appdynamics-dashboard-customers-service.jpg" alt-text="AppDynamics screenshot showing the Application Dashboard for the example customers-service app." lightbox="media/how-to-appdynamics-java-agent-monitor/appdynamics-dashboard-customers-service.jpg"::: ++The following screenshot shows how you can get basic information from the **Database Calls** dashboard. +++You can also get information about the slowest database calls, as shown in these screenshots: ++++The following screenshot shows memory usage analysis in the **Heap** section of the **Memory** page: +++You can also see the garbage collection process, as shown in this screenshot: +++The following screenshot shows the **Slow Transactions** page: +++You can define more metrics for the JVM, as shown in this screenshot of the **Metric Browser**: +++## View AppDynamics Agent logs ++By default, Azure Spring Apps will print the *info* level logs of the AppDynamics Agent to `STDOUT`. The logs will be mixed with the application logs. You can find the explicit agent version from the application logs. ++You can also get the logs of the AppDynamics Agent from the following locations: ++* Azure Spring Apps logs +* Azure Spring Apps Application Insights +* Azure Spring Apps LogStream ++## Learn about AppDynamics Agent upgrade ++The AppDynamics Agent will be upgraded regularly with JDK (quarterly). Agent upgrade may affect the following scenarios: ++* Existing applications using AppDynamics Agent before upgrade will be unchanged, but will require restart or redeploy to engage the new version of AppDynamics Agent. +* Applications created after upgrade will use the new version of AppDynamics Agent. ++## Configure virtual network injection instance outbound traffic ++For virtual network injection instances of Azure Spring Apps, make sure the outbound traffic is configured correctly for AppDynamics Agent. For details, see [SaaS Domains and IP Ranges](https://docs.appdynamics.com/display/PA?toc=/azure/spring-apps/basic-standard/toc.json&bc=/azure/spring-apps/basic-standard/breadcrumb/toc.json). ++## Understand the limitations ++To understand the limitations of the AppDynamics Agent, see [Monitor Azure Spring Apps with Java Agent](https://docs.appdynamics.com/21.11/en/application-monitoring/install-app-server-agents/java-agent/monitor-azure-spring-cloud-with-java-agent). ++## Next steps ++[Use Application Insights Java In-Process Agent in Azure Spring Apps](../enterprise/how-to-application-insights.md?pivots=sc-standard&toc=/azure/spring-apps/basic-standard/toc.json&bc=/azure/spring-apps/basic-standard/breadcrumb/toc.json) |
spring-apps | How To Built In Persistent Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/how-to-built-in-persistent-storage.md | + + Title: Use built-in persistent storage in Azure Spring Apps | Microsoft Docs +description: Learn how to use built-in persistent storage in Azure Spring Apps +++ Last updated : 10/28/2021+++++# Use built-in persistent storage in Azure Spring Apps ++> [!NOTE] +> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. ++**This article applies to:** ✔️ Java ✔️ C# ++**This article applies to:** ✔️ Basic/Standard ❌ Enterprise ++Azure Spring Apps provides two types of built-in storage for your application: persistent and temporary. ++By default, Azure Spring Apps provides temporary storage for each application instance. Temporary storage is limited to 5 GB per instance with */tmp* as the default mount path. ++> [!WARNING] +> If you restart an application instance, the associated temporary storage is permanently deleted. ++Persistent storage is a file-share container managed by Azure and allocated per application. All instances of an application share data stored in persistent storage. An Azure Spring Apps instance can have a maximum of 10 applications with persistent storage enabled. Each application is allocated 50 GB of persistent storage. The default mount path for persistent storage is */persistent*. ++## Enable or disable built-in persistent storage ++You can enable or disable built-in persistent storage using the Azure portal or Azure CLI. ++#### [Portal](#tab/azure-portal) ++Use the following steps to enable or disable built-in persistent storage using the Azure portal. ++1. Go to your Azure Spring Apps instance in the Azure portal. ++1. Select **Apps** to view apps for your service instance, and then select an app to display the app's **Overview** page. ++ :::image type="content" source="media/how-to-built-in-persistent-storage/app-selected.png" lightbox="media/how-to-built-in-persistent-storage/app-selected.png" alt-text="Screenshot of Azure portal showing the Apps page."::: ++1. On the **Overview** page, select **Configuration**. ++ :::image type="content" source="media/how-to-built-in-persistent-storage/select-configuration.png" lightbox="media/how-to-built-in-persistent-storage/select-configuration.png" alt-text="Screenshot of Azure portal showing details for an app."::: ++1. On the **Configuration** page, select **Persistent Storage**. ++ :::image type="content" source="media/how-to-built-in-persistent-storage/select-persistent-storage.png" lightbox="media/how-to-built-in-persistent-storage/select-persistent-storage.png" alt-text="Screenshot of Azure portal showing the Configuration page."::: ++1. On the **Persistent Storage** tab, select **Enable** to enable persistent storage, or **Disable** to disable persistent storage. ++ :::image type="content" source="media/how-to-built-in-persistent-storage/enable-persistent-storage.png" lightbox="media/how-to-built-in-persistent-storage/enable-persistent-storage.png" alt-text="Screenshot of Azure portal showing the Persistent Storage tab."::: ++If persistent storage is enabled, the **Persistent Storage** tab displays the storage size and path. 
++#### [Azure CLI](#tab/azure-cli) ++If necessary, install the Azure Spring Apps extension for the Azure CLI using this command: ++```azurecli +az extension add --name spring +``` ++Other operations: ++- To create an app with built-in persistent storage enabled: ++ ```azurecli + az spring app create -n <app> -g <resource-group> -s <service-name> --enable-persistent-storage true + ``` ++- To enable built-in persistent storage for an existing app: ++ ```azurecli + az spring app update -n <app> -g <resource-group> -s <service-name> --enable-persistent-storage true + ``` ++- To disable built-in persistent storage in an existing app: ++ ```azurecli + az spring app update -n <app> -g <resource-group> -s <service-name> --enable-persistent-storage false + ``` ++++> [!WARNING] +> If you disable an application's persistent storage, all of that storage is deallocated and all of the stored data is permanently lost. ++## Next steps ++- [Quotas and service plans for Azure Spring Apps](../enterprise/quotas.md?toc=/azure/spring-apps/basic-standard/toc.json&bc=/azure/spring-apps/basic-standard/breadcrumb/toc.json) +- [Scale an application in Azure Spring Apps](../enterprise/how-to-scale-manual.md?toc=/azure/spring-apps/basic-standard/toc.json&bc=/azure/spring-apps/basic-standard/breadcrumb/toc.json) |
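To illustrate how an application might use the mount paths described above, here's a minimal Java sketch (an assumption, not part of the documented guidance) that writes a file under the */persistent* mount path and reads it back. The same code pointed at */tmp* would lose the file when the application instance restarts.

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class PersistentStorageDemo {
    public static void main(String[] args) throws Exception {
        // /persistent is the default mount path for built-in persistent storage;
        // /tmp is per-instance temporary storage and is wiped on restart.
        Path persistentFile = Path.of("/persistent", "notes.txt");

        Files.writeString(persistentFile, "Survives app instance restarts\n", StandardCharsets.UTF_8);
        System.out.println(Files.readString(persistentFile, StandardCharsets.UTF_8));
    }
}
```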
spring-apps | How To Config Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/how-to-config-server.md | + + Title: Configure your managed Spring Cloud Config Server in Azure Spring Apps +description: Learn how to configure a managed Spring Cloud Config Server in Azure Spring Apps on the Azure portal ++++ Last updated : 12/10/2021++++# Configure a managed Spring Cloud Config Server in Azure Spring Apps ++> [!NOTE] +> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. ++**This article applies to:** ✔️ Java ✔️ C# ++**This article applies to:** ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ❌ Enterprise ++This article shows you how to configure a managed Spring Cloud Config Server in Azure Spring Apps service. ++Spring Cloud Config Server provides server and client-side support for an externalized configuration in a distributed system. The Config Server instance provides a central place to manage external properties for applications across all environments. For more information, see the [Spring Cloud Config documentation](https://spring.io/projects/spring-cloud-config). ++> [!NOTE] +> To use config server in the Standard consumption and dedicated plan, you must enable it first. For more information, see [Enable and disable Spring Cloud Config Server in Azure Spring Apps](../consumption-dedicated/quickstart-standard-consumption-config-server.md). ++## Prerequisites ++- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +- An already provisioned and running Azure Spring Apps service instance using the Basic or Standard plan. To set up and launch an Azure Spring Apps service, see [Quickstart: Deploy your first application to Azure Spring Apps](../enterprise/quickstart.md?pivots=sc-standard&toc=/azure/spring-apps/basic-standard/toc.json&bc=/azure/spring-apps/basic-standard/breadcrumb/toc.json). Spring Cloud Config Server isn't applicable to the Enterprise plan. +- [Git](https://git-scm.com/downloads). ++## Restriction ++There are some restrictions when you use Config Server with a Git back end. The following properties are automatically injected into your application environment to access Config Server and Service Discovery. If you also configure those properties from your Config Server files, you might experience conflicts and unexpected behavior. ++```yaml +eureka.client.service-url.defaultZone +eureka.client.tls.keystore +eureka.instance.preferIpAddress +eureka.instance.instance-id +server.port +spring.cloud.config.tls.keystore +spring.config.import +spring.application.name +spring.jmx.enabled +management.endpoints.jmx.exposure.include +``` ++> [!CAUTION] +> Don't put these properties in your Config Server application files. ++## Create your Config Server files ++Azure Spring Apps supports Azure DevOps Server, GitHub, GitLab, and Bitbucket for storing your Config Server files. When your repository is ready, you can create the configuration files and store them there. ++Some configurable properties are available only for certain types. The following sections describe the properties for each repository type. ++> [!NOTE] +> Config Server takes `master` (on Git) as the default label if you don't specify one. 
However, GitHub has recently changed the default branch from `master` to `main`. To avoid Azure Spring Apps Config Server failure, be sure to pay attention to the default label when setting up Config Server with GitHub, especially for newly-created repositories. ++### Public repository ++When you use a public repository, your configurable properties are more limited than with a private repository. ++The following table lists the configurable properties that you can use to set up a public Git repository. ++> [!NOTE] +> Using a hyphen (-) to separate words is the only naming convention that's currently supported. For example, you can use *default-label*, but not *defaultLabel*. ++| Property | Required | Feature | +|:-|-|| +| `uri` | Yes | The URI of the Git repository that's used as the Config Server back end. Should begin with `http://`, `https://`, `git@`, or `ssh://`. | +| `default-label` | No | The default label of the Git repository. Should be the branch name, tag name, or commit ID of the repository. | +| `search-paths` | No | An array of strings that are used to search subdirectories of the Git repository. | ++### Private repository with SSH authentication ++The following table lists the configurable properties that you can use to set up a private Git repository with SSH. ++> [!NOTE] +> Using a hyphen (-) to separate words is the only naming convention that's currently supported. For example, you can use *default-label*, but not *defaultLabel*. ++| Property | Required | Feature | +|:|-|| +| `uri` | Yes | The URI of the Git repository used as the Config Server back end. Should begin with `http://`, `https://`, `git@`, or `ssh://`. | +| `default-label` | No | The default label of the Git repository. Should be the branch name, tag name, or commit ID of the repository. | +| `search-paths` | No | An array of strings used to search subdirectories of the Git repository. | +| `private-key` | No | The SSH private key to access the Git repository. Required when the URI starts with `git@` or `ssh://`. | +| `host-key` | No | The host key of the Git repository server. Shouldn't include the algorithm prefix as covered by `host-key-algorithm`. | +| `host-key-algorithm` | No | The host key algorithm. Should be *ssh-dss*, *ssh-rsa*, *ecdsa-sha2-nistp256*, *ecdsa-sha2-nistp384*, or *ecdsa-sha2-nistp521*. Required only if `host-key` exists. | +| `strict-host-key-checking` | No | Indicates whether the Config Server instance fails to start when using the private `host-key`. Should be *true* (default value) or *false*. | ++### Private repository with basic authentication ++The following table lists the configurable properties that you can use to set up a private Git repository with basic authentication. ++> [!NOTE] +> Using a hyphen (-) to separate words is the only naming convention that's currently supported. For example, use *default-label*, not *defaultLabel*. ++| Property | Required | Feature | +|:-|-|-| +| `uri` | Yes | The URI of the Git repository that's used as the Config Server back end. Should begin with `http://`, `https://`, `git@`, or `ssh://`. | +| `default-label` | No | The default label of the Git repository. Should be the *branch name*, *tag name*, or *commit-id* of the repository. | +| `search-paths` | No | An array of strings used to search subdirectories of the Git repository. | +| `username` | No | The username that's used to access the Git repository server. Required when the Git repository server supports HTTP basic authentication. 
| +| `password` | No | The password or personal access token used to access the Git repository server. Required when the Git repository server supports HTTP basic authentication. | ++> [!NOTE] +> Many Git repository servers support the use of tokens rather than passwords for HTTP basic authentication. Some repositories allow tokens to persist indefinitely. However, some Git repository servers, including Azure DevOps Server, force tokens to expire in a few hours. Repositories that cause tokens to expire shouldn't use token-based authentication with Azure Spring Apps. If you use such a token, remember to update it before it expires. +> +> GitHub has removed support for password authentication, so you need to use a personal access token instead of password authentication for GitHub. For more information, see [Token authentication requirements for Git operations](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/). ++### Other Git repositories ++The following table lists the configurable properties you can use to set up Git repositories with a pattern. ++> [!NOTE] +> Using a hyphen (-) to separate words is the only naming convention that's currently supported. For example, use *default-label*, not *defaultLabel*. ++| Property | Required | Feature | +|:--|-|| +| `repos` | No | A map consisting of the settings for a Git repository with a given name. | +| `repos."uri"` | Yes on `repos` | The URI of the Git repository that's used as the Config Server back end. Should begin with `http://`, `https://`, `git@`, or `ssh://`. | +| `repos."name"` | Yes on `repos` | A name to identify the repository; for example, *team-A* or *team-B*. Required only if `repos` exists. | +| `repos."pattern"` | No | An array of strings used to match an application name. For each pattern, use the *{application}/{profile}* format with wildcards. | +| `repos."default-label"` | No | The default label of the Git repository. Should be the branch name, tag name, or commit ID of the repository. | +| `repos."search-paths"` | No | An array of strings used to search subdirectories of the Git repository. | +| `repos."username"` | No | The username used to access the Git repository server. Required when the Git repository server supports HTTP basic authentication. | +| `repos."password"` | No | The password or personal access token used to access the Git repository server. Required when the Git repository server supports HTTP basic authentication. | +| `repos."private-key"` | No | The SSH private key to access the Git repository. Required when the URI begins with `git@` or `ssh://`. | +| `repos."host-key"` | No | The host key of the Git repository server. Shouldn't include the algorithm prefix as covered by `host-key-algorithm`. | +| `repos."host-key-algorithm"` | No | The host key algorithm. Should be *ssh-dss*, *ssh-rsa*, *ecdsa-sha2-nistp256*, *ecdsa-sha2-nistp384*, or *ecdsa-sha2-nistp521*. Required only if `host-key` exists. | +| `repos."strict-host-key-checking"` | No | Indicates whether the Config Server instance fails to start when using the private `host-key`. Should be *true* (default value) or *false*. | ++The following table shows some examples of patterns for configuring your service with an optional extra repository. 
For more information, see the [Extra repositories](#extra-repositories) section and the [Pattern Matching and Multiple Repositories section](https://cloud.spring.io/spring-cloud-config/reference/html/#_pattern_matching_and_multiple_repositories) of the Spring documentation. ++| Patterns | Description | +|:--|| +| *test-config-server-app-0/\** | The pattern and repository URI matches a Spring boot application named `test-config-server-app-0` with any profile. | +| *test-config-server-app-1/dev* | The pattern and repository URI matches a Spring boot application named `test-config-server-app-1` with a dev profile. | +| *test-config-server-app-2/prod* | The pattern and repository URI matches a Spring boot application named `test-config-server-app-2` with a prod profile. | +++## Attach your Config Server repository to Azure Spring Apps ++Now that your configuration files are saved in a repository, use the following steps to connect Azure Spring Apps to the repository. ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. Go to your Azure Spring Apps **Overview** page. ++1. Select **Config Server** in the left navigation pane. ++1. In the **Default repository** section, set **URI** to `https://github.com/Azure-Samples/piggymetrics-config`. ++1. Select **Validate**. ++ :::image type="content" source="media/how-to-config-server/portal-config.png" lightbox="media/how-to-config-server/portal-config.png" alt-text="Screenshot of Azure portal showing the Config Server page."::: ++1. When validation is complete, select **Apply** to save your changes. ++ :::image type="content" source="media/how-to-config-server/validate-complete.png" lightbox="media/how-to-config-server/validate-complete.png" alt-text="Screenshot of Azure portal showing Config Server page with Apply button highlighted."::: ++Updating the configuration can take a few minutes. You should get a notification when the configuration is complete. ++### Enter repository information directly to the Azure portal ++You can enter repository information for the default repository and, optionally, for extra repositories. ++#### Default repository ++Use the steps in this section to enter repository information for a public or private repository. ++- **Public repository**: In the **Default repository** section, in the **Uri** box, paste the repository URI. Enter *config* for the **Label** setting. Ensure that the **Authentication** setting is *Public*, and then select **Apply**. ++- **Private repository**: Azure Spring Apps supports basic password/token-based authentication and SSH. ++ - **Basic Authentication**: In the **Default repository** section, in the **Uri** box, paste the repository URI, and then select the setting under **Authentication** to open the **Edit Authentication** pane. In the **Authentication type** drop-down list, select **HTTP Basic**, and then enter your username and password/token to grant access to Azure Spring Apps. Select **OK**, and then select **Apply** to finish setting up your Config Server instance. ++ :::image type="content" source="media/how-to-config-server/basic-auth.png" lightbox="media/how-to-config-server/basic-auth.png" alt-text="Screenshot of the Default repository section showing authentication settings for Basic authentication."::: ++ > [!NOTE] + > Many Git repository servers support the use of tokens rather than passwords for HTTP basic authentication. Some repositor |