Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
ai-services | Liveness | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/Tutorials/liveness.md | Last updated 11/06/2023 # Tutorial: Detect liveness in faces -Face Liveness detection can be used to determine if a face in an input video stream is real (live) or fake (spoof). It is a crucial building block in a biometric authentication system to prevent spoofing attacks from imposters trying to gain access to the system using a photograph, video, mask, or other means to impersonate another person. +Face Liveness detection can be used to determine if a face in an input video stream is real (live) or fake (spoofed). It's an important building block in a biometric authentication system to prevent imposters from gaining access to the system using a photograph, video, mask, or other means to impersonate another person. -The goal of liveness detection is to ensure that the system is interacting with a physically present live person at the time of authentication. Such systems have become increasingly important with the rise of digital finance, remote access control, and online identity verification processes. +The goal of liveness detection is to ensure that the system is interacting with a physically present live person at the time of authentication. Such systems are increasingly important with the rise of digital finance, remote access control, and online identity verification processes. -The liveness detection solution successfully defends against various spoof types ranging from paper printouts, 2d/3d masks, and spoof presentations on phones and laptops. Liveness detection is an active area of research, with continuous improvements being made to counteract increasingly sophisticated spoofing attacks over time. Continuous improvements will be rolled out to the client and the service components over time as the overall solution gets more robust to new types of attacks. +The Azure AI Face liveness detection solution successfully defends against various spoof types ranging from paper printouts, 2d/3d masks, and spoof presentations on phones and laptops. Liveness detection is an active area of research, with continuous improvements being made to counteract increasingly sophisticated spoofing attacks over time. Continuous improvements will be rolled out to the client and the service components over time as the overall solution gets more robust to new types of attacks. [!INCLUDE [liveness-sdk-gate](../includes/liveness-sdk-gate.md)] + ## Introduction The liveness solution integration involves two distinct components: a frontend mobile/web application and an app server/orchestrator. The liveness solution integration involves two distinct components: a frontend m Additionally, we combine face verification with liveness detection to verify whether the person is the specific person you designated. The following table help describe details of the liveness detection features: | Feature | Description |-| -- | -- | +| -- |--| | Liveness detection | Determine an input is real or fake, and only the app server has the authority to start the liveness check and query the result. | | Liveness detection with face verification | Determine an input is real or fake and verify the identity of the person based on a reference image you provided. Either the app server or the frontend application can provide a reference image. Only the app server has the authority to initial the liveness check and query the result. 
| --## Get started - This tutorial demonstrates how to operate a frontend application and an app server to perform [liveness detection](#perform-liveness-detection) and [liveness detection with face verification](#perform-liveness-detection-with-face-verification) across various language SDKs. -### Prerequisites ++## Prerequisites - Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/) - Your Azure account must have a **Cognitive Services Contributor** role assigned in order for you to agree to the responsible AI terms and create a resource. To get this role assigned to your account, follow the steps in the [Assign roles](/azure/role-based-access-control/role-assignments-steps) documentation, or contact your administrator. This tutorial demonstrates how to operate a frontend application and an app serv - You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production. - Access to the Azure AI Vision Face Client SDK for mobile (IOS and Android) and web. To get started, you need to apply for the [Face Recognition Limited Access features](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to get access to the SDK. For more information, see the [Face Limited Access](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext) page. -### Setup frontend applications and app servers to perform liveness detection +## Set up frontend applications and app servers to perform liveness detection -We provide SDKs in different languages for frontend applications and app servers. See the following instructions to setup your frontend applications and app servers. +We provide SDKs in different languages for frontend applications and app servers. See the following instructions to set up your frontend applications and app servers. -#### Integrate liveness into frontend application +### Download SDK for frontend application -Once you have access to the SDK, follow instruction in the [azure-ai-vision-sdk](https://github.com/Azure-Samples/azure-ai-vision-sdk) GitHub repository to integrate the UI and the code into your native mobile application. The liveness SDK supports Java/Kotlin for Android mobile applications, Swift for iOS mobile applications and JavaScript for web applications: +Once you have access to the SDK, follow instructions in the [azure-ai-vision-sdk](https://github.com/Azure-Samples/azure-ai-vision-sdk) GitHub repository to integrate the UI and the code into your native mobile application. The liveness SDK supports Java/Kotlin for Android mobile applications, Swift for iOS mobile applications and JavaScript for web applications: - For Swift iOS, follow the instructions in the [iOS sample](https://aka.ms/azure-ai-vision-face-liveness-client-sdk-ios-readme) - For Kotlin/Java Android, follow the instructions in the [Android sample](https://aka.ms/liveness-sample-java) - For JavaScript Web, follow the instructions in the [Web sample](https://aka.ms/liveness-sample-web) -Once you've added the code into your application, the SDK handles starting the camera, guiding the end-user to adjust their position, composing the liveness payload, and calling the Azure AI Face cloud service to process the liveness payload. 
+Once you've added the code into your application, the SDK handles starting the camera, guiding the end-user in adjusting their position, composing the liveness payload, and calling the Azure AI Face cloud service to process the liveness payload. -#### Download Azure AI Face client library for an app server +### Download Azure AI Face client library for app server The app server/orchestrator is responsible for controlling the lifecycle of a liveness session. The app server has to create a session before performing liveness detection, and then it can query the result and delete the session when the liveness check is finished. We offer a library in various languages for easily implementing your app server. Follow these steps to install the package you want: - For C#, follow the instructions in the [dotnet readme](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/face/Azure.AI.Vision.Face/README.md) The app server/orchestrator is responsible for controlling the lifecycle of a li - For Python, follow the instructions in the [Python readme](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/face/azure-ai-vision-face/README.md) - For JavaScript, follow the instructions in the [JavaScript readme](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/face/ai-vision-face-rest/README.md) -##### Create environment variables +#### Create environment variables [!INCLUDE [create environment variables](../includes/face-environment-variables.md)] -### Perform liveness detection +## Perform liveness detection The high-level steps involved in liveness orchestration are illustrated below: The high-level steps involved in liveness orchestration are illustrated below: -1. The SDK then starts the camera, guides the user to position correctly and then prepares the payload to call the liveness detection service endpoint. +1. The SDK then starts the camera, guides the user to position correctly, and then prepares the payload to call the liveness detection service endpoint. 1. The SDK calls the Azure AI Vision Face service to perform the liveness detection. Once the service responds, the SDK notifies the frontend application that the liveness check has been completed. The high-level steps involved in liveness orchestration are illustrated below: -### Perform liveness detection with face verification +## Perform liveness detection with face verification Combining face verification with liveness detection enables biometric verification of a particular person of interest with an added guarantee that the person is physically present in the system. There are two parts to integrating liveness with verification: There are two parts to integrating liveness with verification: :::image type="content" source="../media/liveness/liveness-verify-diagram.jpg" alt-text="Diagram of the liveness-with-face-verification workflow of Azure AI Face." lightbox="../media/liveness/liveness-verify-diagram.jpg"::: -#### Select a good reference image +### Select a good reference image Use the following tips to ensure that your input images give the most accurate recognition results. -##### Technical requirements: +#### Technical requirements [!INCLUDE [identity-input-technical](../includes/identity-input-technical.md)] * You can utilize the `qualityForRecognition` attribute in the [face detection](../how-to/identity-detect-faces.md) operation when using applicable detection models as a general guideline of whether the image is likely of sufficient quality to attempt face recognition on. 
Only `"high"` quality images are recommended for person enrollment and quality at or above `"medium"` is recommended for identification scenarios. -##### Composition requirements: +#### Composition requirements - Photo is clear and sharp, not blurry, pixelated, distorted, or damaged. - Photo is not altered to remove face blemishes or face appearance. - Photo must be in an RGB color supported format (JPEG, PNG, WEBP, BMP). Recommended Face size is 200 pixels x 200 pixels. Face sizes larger than 200 pixels x 200 pixels will not result in better AI quality, and no larger than 6 MB in size. Use the following tips to ensure that your input images give the most accurate r - Background should be uniform and plain, free of any shadows. - Face should be centered within the image and fill at least 50% of the image. -#### Set up the orchestration of liveness with verification. +### Set up the orchestration of liveness with verification. The high-level steps involved in liveness with verification orchestration are illustrated below: 1. Providing the verification reference image by either of the following two methods: The high-level steps involved in liveness with verification orchestration are il -### Clean up resources +## Clean up resources If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. |
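The liveness row above notes that the app server owns the session lifecycle: create a session, let the frontend SDK run the check, then query the result and delete the session when the check is finished. Below is a minimal TypeScript sketch of that lifecycle over REST; the API path and version, the request body fields, and the `FACE_ENDPOINT`/`FACE_APIKEY` environment variable names are assumptions for illustration only and should be confirmed against the Azure AI Face client library readme for your language.

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical app-server helpers for the liveness session lifecycle (path and payload
// shape are assumptions; verify against the Azure AI Face client library you install).
const endpoint = process.env["FACE_ENDPOINT"]!; // e.g. https://<resource>.cognitiveservices.azure.com
const apiKey = process.env["FACE_APIKEY"]!;
const base = `${endpoint}/face/v1.1-preview.1/detectLiveness/singleModal/sessions`;

export async function createLivenessSession() {
  const res = await fetch(base, {
    method: "POST",
    headers: { "Ocp-Apim-Subscription-Key": apiKey, "Content-Type": "application/json" },
    body: JSON.stringify({ livenessOperationMode: "Passive", deviceCorrelationId: randomUUID() }),
  });
  return res.json(); // session ID plus a short-lived auth token to hand to the frontend SDK
}

export async function getLivenessSessionResult(sessionId: string) {
  const res = await fetch(`${base}/${sessionId}`, { headers: { "Ocp-Apim-Subscription-Key": apiKey } });
  return res.json(); // inspect the liveness decision after the frontend reports completion
}

export async function deleteLivenessSession(sessionId: string) {
  await fetch(`${base}/${sessionId}`, { method: "DELETE", headers: { "Ocp-Apim-Subscription-Key": apiKey } });
}
```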
ai-services | Model Customization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/model-customization.md | Begin by going to [Vision Studio](https://portal.vision.cognitive.azure.com/) an Then, sign in with your Azure account and select your Vision resource. If you don't have one, you can create one from this screen. -> [!IMPORTANT] -> To train a custom model in Vision Studio, your Azure subscription needs to be approved for access. Please request access using [this form](https://aka.ms/visionaipublicpreview). :::image type="content" source="../media/customization/select-resource.png" alt-text="Screenshot of the select resource screen."::: |
ai-services | Overview Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview-identity.md | This documentation contains the following types of articles: * The [quickstarts](./quickstarts-sdk/identity-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time. * The [how-to guides](./how-to/identity-detect-faces.md) contain instructions for using the service in more specific or customized ways. * The [conceptual articles](./concept-face-detection.md) provide in-depth explanations of the service's functionality and features.-* The [tutorials](./enrollment-overview.md) are longer guides that show you how to use this service as a component in broader business solutions. +* The [tutorials](./Tutorials/liveness.md) are longer guides that show you how to use this service as a component in broader business solutions. For a more structured approach, follow a Training module for Face. * [Detect and analyze faces with the Face service](/training/modules/detect-analyze-faces/) |
ai-studio | Deploy Models Mistral | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-mistral.md | -Mistral AI offers two categories of models in [Azure AI Studio](https://ai.azure.com): +Mistral AI offers two categories of models in the [Azure AI Studio](https://ai.azure.com). These models are available in the [model catalog](model-catalog-overview.md): -* __Premium models__: Mistral Large and Mistral Small. These models are available as serverless APIs with pay-as-you-go token-based billing in the AI Studio model catalog. -* __Open models__: Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01. These models are also available in the AI Studio model catalog and can be deployed to managed compute in your own Azure subscription. +* __Premium models__: Mistral Large and Mistral Small. These models can be deployed as serverless APIs with pay-as-you-go token-based billing. +* __Open models__: Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01. These models can be deployed to managed computes in your own Azure subscription. -You can browse the Mistral family of models in the [Model Catalog](model-catalog-overview.md) by filtering on the Mistral collection. +You can browse the Mistral family of models in the model catalog by filtering on the Mistral collection. ## Mistral family of models Certain models in the model catalog can be deployed as a serverless API with pay ### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An [Azure AI Studio hub](../how-to/create-azure-ai-resource.md).+- An [AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for eligible models in the Mistral family is only available with hubs created in these regions: ++ - East US + - East US 2 + - North Central US + - South Central US + - West US + - West US 3 + - Sweden Central ++ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md). - > [!IMPORTANT] - > The serverless API model deployment offering for eligible models in the Mistral family is only available in hubs created in the **East US 2** and **Sweden Central** regions. - An [Azure AI Studio project](../how-to/create-projects.md). - Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md). To create a deployment: :::image type="content" source="../media/deploy-monitor/mistral/mistral-large-deploy-pay-as-you-go.png" alt-text="A screenshot showing how to deploy a model as a serverless API." lightbox="../media/deploy-monitor/mistral/mistral-large-deploy-pay-as-you-go.png"::: -1. Select the project in which you want to deploy your model. To deploy the Mistral model, your project must be in the *EastUS2* or *Sweden Central* region. +1. Select the project in which you want to deploy your model. 
To use the serverless API model deployment offering, your project must belong to one of the regions listed in the [prerequisites](#prerequisites). 1. In the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use. 1. Select the **Pricing and terms** tab to learn about pricing for the selected model. 1. Select the **Subscribe and Deploy** button. If this is your first time deploying the model in the project, you have to subscribe your project for the particular offering. This step requires that your account has the **Azure AI Developer role** permissions on the resource group, as listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering of the model, which allows you to control and monitor spending. Currently, you can have only one deployment for each model within a project. You can consume Mistral family models by using the chat API. For more information on using the APIs, see the [reference](#reference-for-mistral-family-of-models-deployed-as-a-service) section. -### Reference for Mistral family of models deployed as a service +## Reference for Mistral family of models deployed as a service Mistral models accept both the [Azure AI Model Inference API](../reference/reference-model-inference-api.md) on the route `/chat/completions` and the native [Mistral Chat API](#mistral-chat-api) on `/v1/chat/completions`. Mistral models accept both the [Azure AI Model Inference API](../reference/refer The [Azure AI Model Inference API](../reference/reference-model-inference-api.md) schema can be found in the [reference for Chat Completions](../reference/reference-model-inference-chat-completions.md) article and an [OpenAPI specification can be obtained from the endpoint itself](../reference/reference-model-inference-api.md?tabs=rest#getting-started). -#### Mistral Chat API +### Mistral Chat API Use the method `POST` to send the request to the `/v1/chat/completions` route: The `messages` object has the following fields: | `role` | `string` | The role of the message's author. One of `system`, `user`, or `assistant`. | -#### Example +#### Request example __Body__ The `logprobs` object is a dictionary with the following fields: | `tokens` | `array` of `string` | Selected tokens. | | `top_logprobs` | `array` of `dictionary` | Array of dictionary. In each dictionary, the key is the token and the value is the probability. 
| -#### Example +#### Response example The following JSON is an example response: The following JSON is an example response: } } ```+ #### More inference examples -| **Sample Type** | **Sample Notebook** | -|-|-| -| CLI using CURL and Python web requests | [webrequests.ipynb](https://aka.ms/mistral-large/webrequests-sample)| -| OpenAI SDK (experimental) | [openaisdk.ipynb](https://aka.ms/mistral-large/openaisdk) | -| LangChain | [langchain.ipynb](https://aka.ms/mistral-large/langchain-sample) | -| Mistral AI | [mistralai.ipynb](https://aka.ms/mistral-large/mistralai-sample) | -| LiteLLM | [litellm.ipynb](https://aka.ms/mistral-large/litellm-sample) +| **Sample Type** | **Sample Notebook** | +|-|-| +| CLI using CURL and Python web requests | [webrequests.ipynb](https://aka.ms/mistral-large/webrequests-sample) | +| OpenAI SDK (experimental) | [openaisdk.ipynb](https://aka.ms/mistral-large/openaisdk) | +| LangChain | [langchain.ipynb](https://aka.ms/mistral-large/langchain-sample) | +| Mistral AI | [mistralai.ipynb](https://aka.ms/mistral-large/mistralai-sample) | +| LiteLLM | [litellm.ipynb](https://aka.ms/mistral-large/litellm-sample) | ## Cost and quotas Models deployed as a serverless API with pay-as-you-go billing are protected by - [What is Azure AI Studio?](../what-is-ai-studio.md) - [Azure AI FAQ article](../faq.yml)+- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md) |
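The Mistral row above documents the native chat route at `/v1/chat/completions` and a `messages` array with `system`/`user`/`assistant` roles. The TypeScript sketch below shows one way to call a serverless API deployment; the endpoint URL shape, the bearer-token auth header, and the `MISTRAL_ENDPOINT`/`MISTRAL_KEY` variable names are assumptions — copy the real values from the deployment's details page in AI Studio.

```typescript
// Minimal sketch of calling the native Mistral Chat API route on a serverless API deployment.
const endpoint = process.env["MISTRAL_ENDPOINT"]!; // e.g. https://<deployment>.<region>.models.ai.azure.com
const apiKey = process.env["MISTRAL_KEY"]!;

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${endpoint}/v1/chat/completions`, {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: prompt },
      ],
      max_tokens: 256,
      temperature: 0.7,
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // standard chat-completions response shape
}
```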
app-service | Overview Name Resolution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-name-resolution.md | Your app uses DNS when making calls to dependent resources. Resources could be A If you aren't integrating your app with a virtual network and custom DNS servers aren't configured, your app uses [Azure DNS](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#azure-provided-name-resolution). If you integrate your app with a virtual network, your app uses the DNS configuration of the virtual network. The default for virtual network is also to use Azure DNS. Through the virtual network, it's also possible to link to [Azure DNS private zones](../dns/private-dns-overview.md) and use that for private endpoint resolution or private domain name resolution. -If you configured your virtual network with a list of custom DNS servers, name resolution uses these servers. If your virtual network is using custom DNS servers and you're using private endpoints, you should read [this article](../private-link/private-endpoint-dns.md) carefully. You also need to consider that your custom DNS servers are able to resolve any public DNS records used by your app. Your DNS configuration needs to either forward requests to a public DNS server, include a public DNS server like Azure DNS in the list of custom DNS servers or specify an alternative server at the app level. +If you configured your virtual network with a list of custom DNS servers, name resolution in App Service will use up to five custom DNS servers. If your virtual network is using custom DNS servers and you're using private endpoints, you should read [this article](../private-link/private-endpoint-dns.md) carefully. You also need to consider that your custom DNS servers are able to resolve any public DNS records used by your app. Your DNS configuration needs to either forward requests to a public DNS server, include a public DNS server like Azure DNS in the list of custom DNS servers or specify an alternative server at the app level. When your app needs to resolve a domain name using DNS, the app sends a name resolution request to all configured DNS servers. If the first server in the list returns a response within the timeout limit, you get the result returned immediately. If not, the app waits for the other servers to respond within the timeout period and evaluates the DNS server responses in the order you configured the servers. If none of the servers respond within the timeout and you configured retry, you repeat the process. |
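The App Service row above describes how name resolution walks the configured DNS servers: query them, honor the timeout, evaluate answers in the configured order, and optionally retry. The sketch below is only an illustrative model of that described behavior, not App Service code; `queryServer`, the timeout, and the retry count are placeholders.

```typescript
// Illustrative model only -- not App Service's implementation. `queryServer` is a
// placeholder for a per-server DNS lookup; answers are evaluated in configured order.
async function resolveName(
  queryServer: (server: string) => Promise<string | null>,
  servers: string[],
  timeoutMs: number,
  retries: number
): Promise<string | null> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const answers = await Promise.all(
      servers.map((server) =>
        Promise.race([
          queryServer(server),
          new Promise<null>((resolve) => setTimeout(() => resolve(null), timeoutMs)), // per-server timeout
        ])
      )
    );
    const firstAnswer = answers.find((a) => a !== null); // first configured server that answered wins
    if (firstAnswer) return firstAnswer;
  }
  return null; // no configured server answered within the timeout on any attempt
}
```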
automation | Automation Tutorial Runbook Textual | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/automation-tutorial-runbook-textual.md | Title: Tutorial - Create a PowerShell Workflow runbook in Azure Automation description: This tutorial teaches you to create, test, and publish a PowerShell Workflow runbook. Previously updated : 11/21/2022 Last updated : 07/04/2024 #Customer intent: As a developer, I want use workflow runbooks so that I can automate the parallel starting of VMs.-> This article is applicable for PowerShell 5.1; PowerShell 7.1 (preview) and PowerShell 7.2 don't support workflows. +> This article is applicable only for PowerShell 5.1. PowerShell 7+ versions do not support Workflows, and outdated runbooks cannot be updated. We recommend you to use PowerShell 7.2 textual runbooks for advanced features such as parallel job execution. [Learn more](../automation-runbook-types.md#limitations) about limitations of PowerShell Workflow runbooks. In this tutorial, you learn how to: |
azure-app-configuration | Use Key Vault References Dotnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-dotnet-core.md | Your application uses the App Configuration client provider to retrieve Key Vaul Your application is responsible for authenticating properly to both App Configuration and Key Vault. The two services don't communicate directly. -This tutorial shows you how to implement Key Vault references in your code. It builds on the web app introduced in the quickstarts. Before you continue, finish [Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md) first. +This tutorial shows you how to implement Key Vault references in your code. It builds on the web app introduced in the ASP.NET core quickstart listed in the prerequisites below. Before you continue, complete this [quickstart](./quickstart-aspnet-core-app.md). You can use any code editor to do the steps in this tutorial. For example, [Visual Studio Code](https://code.visualstudio.com/) is a cross-platform code editor that's available for the Windows, macOS, and Linux operating systems. In this tutorial, you learn how to: ## Prerequisites -Before you start this tutorial, install the [.NET SDK 6.0 or later](https://dotnet.microsoft.com/download). -+Finish the quickstart: [Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md). ## Create a vault |
azure-arc | Enable Virtual Hardware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/enable-virtual-hardware.md | Title: Enable additional capabilities on Arc-enabled Server machines by linking to vCenter description: Enable additional capabilities on Arc-enabled Server machines by linking to vCenter. Previously updated : 03/13/2024 Last updated : 07/04/2024 Follow these steps [here](./quick-start-connect-vcenter-to-arc-using-script.md) Use the following az commands to link Arc-enabled Server machines to vCenter at scale. -**Create VMware resources from the specified Arc for Server machines in the vCenter** +**Create VMware resource from the specified Arc for Server machine in the vCenter** ```azurecli-interactive-az connectedvmware vm create-from-machines --resource-group contoso-rg --name contoso-vm --vcenter-id /subscriptions/fedcba98-7654-3210-0123-456789abcdef/resourceGroups/contoso-rg-2/providers/Microsoft.HybridCompute/vcenters/contoso-vcenter +az connectedvmware vm create-from-machines --resource-group contoso-rg --name contoso-vm --vcenter-id /subscriptions/999998ee-cd13-9999-b9d4-55ca5c25496d/resourceGroups/allhands-demo/providers/microsoft.connectedvmwarevsphere/VCenters/ContosovCentervcenters/contoso-vcenter ``` **Create VMware resources from all Arc for Server machines in the specified resource group belonging to that vCenter** ```azurecli-interactive-az connectedvmware vm create-from-machines --resource-group contoso-rg --vcenter-id /subscriptions/fedcba98-7654-3210-0123-456789abcdef/resourceGroups/contoso-rg-2/providers/Microsoft.HybridCompute/vcenters/contoso-vcenter +az connectedvmware vm create-from-machines --resource-group contoso-rg --vcenter-id /subscriptions/999998ee-cd13-9999-b9d4-55ca5c25496d/resourceGroups/allhands-demo/providers/microsoft.connectedvmwarevsphere/VCenters/ContosovCentervcenters/contoso-vcenter ``` **Create VMware resources from all Arc for Server machines in the specified subscription belonging to that vCenter** ```azurecli-interactive-az connectedvmware vm create-from-machines --subscription contoso-sub --vcenter-id /subscriptions/fedcba98-7654-3210-0123-456789abcdef/resourceGroups/contoso-rg-2/providers/Microsoft.HybridCompute/vcenters/contoso-vcenter +az connectedvmware vm create-from-machines --subscription contoso-sub --vcenter-id /subscriptions/999998ee-cd13-9999-b9d4-55ca5c25496d/resourceGroups/allhands-demo/providers/microsoft.connectedvmwarevsphere/VCenters/ContosovCentervcenters/contoso-vcenter ``` ### Required Parameters |
azure-functions | Functions Container Apps Hosting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-container-apps-hosting.md | Title: Azure Container Apps hosting of Azure Functions description: Learn about how you can use Azure Functions on Azure Container Apps to host and manage containerized function apps in Azure. Previously updated : 05/07/2024 Last updated : 07/04/2024 # Customer intent: As a cloud developer, I want to learn more about hosting my function apps in Linux containers managed by Azure Container Apps. -Azure Functions provides integrated support for developing, deploying, and managing containerized function apps on [Azure Container Apps](../container-apps/overview.md). Use Azure Container Apps to host your function app containers when you need to run your event-driven functions in Azure in the same environment as other microservices, APIs, websites, workflows, or any container hosted programs. Container Apps hosting lets you run your functions in a fully managed, Kubernetes-based environment with built-in support for open-source monitoring, mTLS, Dapr, and KEDA. +Azure Functions provides integrated support for developing, deploying, and managing containerized function apps on [Azure Container Apps](../container-apps/overview.md). Use Azure Container Apps to host your function app containers when you need to run your event-driven functions in Azure in the same environment as other microservices, APIs, websites, workflows, or any container hosted programs. Container Apps hosting lets you run your functions in a fully managed, Kubernetes-based environment with built-in support for open-source monitoring, mTLS, Dapr, and Kubernetes Event-driven Autoscaling (KEDA). You can write your function code in any [language stack supported by Functions](supported-languages.md). You can use the same Functions triggers and bindings with event-driven scaling. You can also use existing Functions client tools and the Azure portal to create containers, deploy function app containers to Container Apps, and configure continuous deployment. 
When you make changes to your functions code, you must rebuild and republish you Azure Functions currently supports the following methods of deploying a containerized function app to Azure Container Apps: ++ [Apache Maven](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Functions:-Configuration-Details#properties-for-azure-container-apps-hosting-of-azure-functions)++ [ARM templates](/azure/templates/microsoft.web/sites?pivots=deployment-language-arm-template) + [Azure CLI](./functions-deploy-container-apps.md)-+ Azure portal -+ GitHub Actions -+ Azure Pipeline tasks -+ ARM templates -+ [Bicep templates](https://github.com/Azure/azure-functions-on-container-apps/tree/main/samples/Biceptemplates) ++ [Azure Developer CLI (azd)](https://github.com/Azure/azure-functions-on-container-apps/tree/main/samples/azdtemplates) + [Azure Functions Core Tools](functions-run-local.md#deploy-containers)-++ [Azure Pipeline tasks](https://github.com/Azure/azure-functions-on-container-apps/tree/main/samples/AzurePipelineTasks)++ [Azure portal](https://aka.ms/funconacablade)++ [Bicep files](https://github.com/Azure/azure-functions-on-container-apps/tree/main/samples/Biceptemplates)++ [GitHub Actions](https://github.com/Azure/azure-functions-on-container-apps/tree/main/samples/GitHubActions)++ [Visual Studio Code](https://github.com/Azure/azure-functions-on-container-apps/tree/main/samples/VSCode%20Sample) ## Virtual network integration |
azure-maps | Power Bi Visual Add Reference Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-reference-layer.md | To use a hosted spatial dataset as a reference layer: 1. Navigate to the **Format** pane. 1. Expand the **Reference Layer** section. 1. Select **URL** from the **Type** drop-down list.-1. Select **Enter a URL** and enter a valid URL pointing to your hosted file. Hosted files must be a valid spatial dataset with a `.json`, `.geojson`, `.wkt`, `.kml` or `.shp` extension. +1. Select **Enter a URL** and enter a valid URL pointing to your hosted file. Hosted files must be a valid spatial dataset with a `.json`, `.geojson`, `.wkt`, `.kml` or `.shp` extension. After the link to the hosted file is added to the reference layer, the URL appears in the **Enter a URL** field. To remove the data from the visual simply delete the URL. - :::image type="content" source="./media/power-bi-visual/reference-layer-hosted.png" alt-text="Screenshot showing the reference layers section when hosting a file control."::: + :::image type="content" source="./media/power-bi-visual/reference-layer-hosted.png" alt-text="Screenshot showing the reference layers section when using the 'Enter a URL' input control."::: -Once the link to the hosted file is added to the reference layer, the URL appears in the **Enter a URL** field. To remove the data from the visual simply delete the URL. +1. Alternatively, you can create a dynamic URL using Data Analysis Expressions ([DAX]) based on fields, variables or other programmatic elements. By utilizing DAX, the URL will dynamically change based on filters, selections, or other user interactions and configurations. For more information, see [Expression-based titles in Power BI Desktop]. ++ :::image type="content" source="./media/power-bi-visual/reference-layer-hosted-dax.png" alt-text="Screenshot showing the reference layers section when using DAX for the URL input."::: Add more context to the map: [supported style properties]: spatial-io-add-simple-data-layer.md#default-supported-style-properties [Add a tile layer]: power-bi-visual-add-tile-layer.md [Show real-time traffic]: power-bi-visual-show-real-time-traffic.md+[DAX]: /dax/ +[Expression-based titles in Power BI Desktop]: /power-bi/create-reports/desktop-conditional-format-visual-titles |
azure-monitor | Ip Addresses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/ip-addresses.md | For more information on availability tests, see [Private availability testing](. | Purpose | URI | IP | Ports | | | | | |-| API |`api.applicationinsights.io`<br/>`api1.applicationinsights.io`<br/>`api2.applicationinsights.io`<br/>`api3.applicationinsights.io`<br/>`api4.applicationinsights.io`<br/>`api5.applicationinsights.io`<br/>`dev.applicationinsights.io`<br/>`dev.applicationinsights.microsoft.com`<br/>`dev.aisvc.visualstudio.com`<br/>`www.applicationinsights.io`<br/>`www.applicationinsights.microsoft.com`<br/>`www.aisvc.visualstudio.com`<br/>`api.loganalytics.io`<br/>`*.api.loganalytics.io`<br/>`dev.loganalytics.io`<br>`docs.loganalytics.io`<br/>`www.loganalytics.io` |20.37.52.188 <br/> 20.37.53.231 <br/> 20.36.47.130 <br/> 20.40.124.0 <br/> 20.43.99.158 <br/> 20.43.98.234 <br/> 13.70.127.61 <br/> 40.81.58.225 <br/> 20.40.160.120 <br/> 23.101.225.155 <br/> 52.139.8.32 <br/> 13.88.230.43 <br/> 52.230.224.237 <br/> 52.242.230.209 <br/> 52.173.249.138 <br/> 52.229.218.221 <br/> 52.229.225.6 <br/> 23.100.94.221 <br/> 52.188.179.229 <br/> 52.226.151.250 <br/> 52.150.36.187 <br/> 40.121.135.131 <br/> 20.44.73.196 <br/> 20.41.49.208 <br/> 40.70.23.205 <br/> 20.40.137.91 <br/> 20.40.140.212 <br/> 40.89.189.61 <br/> 52.155.118.97 <br/> 52.156.40.142 <br/> 23.102.66.132 <br/> 52.231.111.52 <br/> 52.231.108.46 <br/> 52.231.64.72 <br/> 52.162.87.50 <br/> 23.100.228.32 <br/> 40.127.144.141 <br/> 52.155.162.238 <br/> 137.116.226.81 <br/> 52.185.215.171 <br/> 40.119.4.128 <br/> 52.171.56.178 <br/> 20.43.152.45 <br/> 20.44.192.217 <br/> 13.67.77.233 <br/> 51.104.255.249 <br/> 51.104.252.13 <br/> 51.143.165.22 <br/> 13.78.151.158 <br/> 51.105.248.23 <br/> 40.74.36.208 <br/> 40.74.59.40 <br/> 13.93.233.49 <br/> 52.247.202.90 |80,443 | +| API |`api.applicationinsights.io`<br/>`api1.applicationinsights.io`<br/>`api2.applicationinsights.io`<br/>`api3.applicationinsights.io`<br/>`api4.applicationinsights.io`<br/>`api5.applicationinsights.io`<br/>`dev.applicationinsights.io`<br/>`dev.applicationinsights.microsoft.com`<br/>`dev.aisvc.visualstudio.com`<br/>`www.applicationinsights.io`<br/>`www.applicationinsights.microsoft.com`<br/>`www.aisvc.visualstudio.com`<br/>`api.loganalytics.io`<br/>`*.api.loganalytics.io`<br/>`dev.loganalytics.io`<br>`docs.loganalytics.io`<br/>`www.loganalytics.io`<br/>`api.loganalytics.azure.com` |20.37.52.188 <br/> 20.37.53.231 <br/> 20.36.47.130 <br/> 20.40.124.0 <br/> 20.43.99.158 <br/> 20.43.98.234 <br/> 13.70.127.61 <br/> 40.81.58.225 <br/> 20.40.160.120 <br/> 23.101.225.155 <br/> 52.139.8.32 <br/> 13.88.230.43 <br/> 52.230.224.237 <br/> 52.242.230.209 <br/> 52.173.249.138 <br/> 52.229.218.221 <br/> 52.229.225.6 <br/> 23.100.94.221 <br/> 52.188.179.229 <br/> 52.226.151.250 <br/> 52.150.36.187 <br/> 40.121.135.131 <br/> 20.44.73.196 <br/> 20.41.49.208 <br/> 40.70.23.205 <br/> 20.40.137.91 <br/> 20.40.140.212 <br/> 40.89.189.61 <br/> 52.155.118.97 <br/> 52.156.40.142 <br/> 23.102.66.132 <br/> 52.231.111.52 <br/> 52.231.108.46 <br/> 52.231.64.72 <br/> 52.162.87.50 <br/> 23.100.228.32 <br/> 40.127.144.141 <br/> 52.155.162.238 <br/> 137.116.226.81 <br/> 52.185.215.171 <br/> 40.119.4.128 <br/> 52.171.56.178 <br/> 20.43.152.45 <br/> 20.44.192.217 <br/> 13.67.77.233 <br/> 51.104.255.249 <br/> 51.104.252.13 <br/> 51.143.165.22 <br/> 13.78.151.158 <br/> 51.105.248.23 <br/> 40.74.36.208 <br/> 40.74.59.40 <br/> 13.93.233.49 <br/> 52.247.202.90 |80,443 | | Azure 
Pipeline annotations extension | `aigs1.aisvc.visualstudio.com` |dynamic|443 | ## Application Insights analytics |
communication-services | Incoming Audio Low Volume | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/audio-issues/incoming-audio-low-volume.md | This value is derived from `audioLevel` in WebRTC Stats. [https://www.w3.org/TR/ A low `audioOutputLevel` value indicates that the volume sent by the sender is also low. ## How to mitigate or resolve-If the `audioOutputLevel` value is low, this is likely that the volume sent by the sender is also low. +If the `audioOutputLevel` value is low, it's likely that the volume sent by the sender is also low. To troubleshoot this issue, users should investigate why the audio input volume is low on the sender's side. This problem could be due to various factors, such as microphone settings, or hardware issues. -If the `audioOutputLevel` value appears normal, the issue may be related to system volume settings or speaker issues on the receiver's side. +The value of `audioOutputLevel` ranges from 0 - 65536. In practice, values lower than 60 can be considered quiet, and values lower than 150 are often considered low volume. + Users can check their device's volume settings and speaker output to ensure that they're set to an appropriate level.+If the `audioOutputLevel` value appears normal, the issue may be related to system volume settings or speaker issues on the receiver's side. ++For example, if the user uses Windows, they should check the volume mixer settings and apps volume settings. + ### Using Web Audio GainNode to increase the volume It may be possible to address this issue at the application layer using Web Audio GainNode. By using this feature with the raw audio stream, it's possible to increase the o You can also look to display a [volume level indicator](../../../../quickstarts/voice-video-calling/get-started-volume-indicator.md?pivots=platform-web) in your client user interface to let your users know what the current volume level is. +## References +### Troubleshooting process +Below is a flow diagram of the troubleshooting process for this issue. +++1. When a user reports experiencing low audio volume, the first thing to check is whether the volume of the incoming audio is low. The application can obtain this information by checking `audioOutputLevel` in the media stats. +2. If the `audioOutputLevel` value is constantly low, it indicates that the volume of audio sent by the speaking participant is low. In this case, ask the user to verify if the speaking participant has issues with their microphone device or input volume settings. +3. If the `audioOutputLevel` value isn't always low, the user may still experience low audio volume issue due to system volume settings. Ask the user to check their system volume settings. +4. If the user's system volume is set to a low value, the user should increase the volume in the settings. +5. In some systems that support app-specific volume settings, the audio volume output from the app may be low even if system volume isn't low. In this case, the user should check their volume setting of the app within the system. +6. If the volume setting of the app in the system is low, the user should increase it. +7. If you still can't determine why the audio output volume is low, ask the user to check their speaker device or select another audio output device. The issue may be due to a device problem and not related to the software or operating system. Not all platforms support speaker enumeration in the browser. 
For example, you can't select an audio output device through the JavaScript API in the Safari browser or in Chrome on Android. In these cases, you should configure the audio output device in the system settings. |
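The low-volume row above suggests raising incoming volume at the application layer with a Web Audio `GainNode`. A minimal sketch follows, assuming the application can obtain the remote audio as a `MediaStream` (for example via the SDK's raw audio access); the gain value is just an example.

```typescript
// Boost incoming audio with a Web Audio GainNode. Gain values much above 2-3 tend to clip.
function playWithGain(remoteAudioStream: MediaStream, gain = 2.0): AudioContext {
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(remoteAudioStream);
  const gainNode = ctx.createGain();
  gainNode.gain.value = gain;        // 1.0 = unchanged, 2.0 = roughly double the amplitude
  source.connect(gainNode);
  gainNode.connect(ctx.destination); // render the amplified stream to the default output
  return ctx;                        // keep a reference so the context can be closed later
}
```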
communication-services | Microphone Issue | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/audio-issues/microphone-issue.md | For example, a hardware mute button of some headset models can trigger this even The application should listen to the [User Facing Diagnostics API](../../../../concepts/voice-video-calling/user-facing-diagnostics.md) events. The application should display a warning message when receiving events. By doing so, the user is aware of the issue and can troubleshoot by switching to a different microphone device or by unplugging and plugging in their current microphone device.++## References +### Troubleshooting process +If a user can't hear sound during a call, one possibility is that the speaking participant has an issue with their microphone. +If the speaking participant is using your application, you can follow this flow diagram to troubleshoot the issue. +++1. First, check if a microphone is available. The application can obtain this information by invoking `DeviceManager.getMicrophone` API or by detecting a `noMicrophoneDevicesEnumerated` UFD Bad event. +2. If no microphone device is available, prompt the user to plug in a microphone. +3. If a microphone is available but there's no outgoing audio, consider other possibilities such as permission issues, device issues, or network problems. +4. If permission is denied, refer to [The speaking participant doesn't grant the microphone permission](./microphone-permission.md) for more information. +5. If permission is granted, consider whether the issue is due to an external problem, such as `microphoneMuteUnexpectedly` UFD. +6. The `microphoneMuteUnexpectedly` UFD Bad event is triggered when the browser mutes the audio input track. The application can monitor this UFD but isn't able to detect the reason at JavaScript layer. You can still provide instructions in the app and ask if the user is using hardware mute button on their headset. +7. If the user releases the hardware mute and the `microphoneMuteUnexpectedly` UFD recovers, the issue is resolved. +8. If the user isn't using the hardware mute, ask the user to unplug and replug the microphone, or to select another microphone. Ensure the user hasn't muted the microphone at the system level. +9. No outgoing audio issue can also happen when there's a `microphoneNotFunctioning` UFD Bad event. +10. If there's no `microphoneNotFunctioning` UFD Bad event, consider other possibilities, such as network issues. +11. If there's a `networkReconnect` Bad UFD, outgoing audio may be temporarily lost due to a network disconnection. Refer to [There's a network issue in the call](./network-issue.md) for detailed information. +12. If there are no microphone-related events and no network-related events, create a support ticket for ACS team to investigate the issue. Refer to [Reporting an issue](../general-troubleshooting-strategies/report-issue.md). +13. If a `microphoneNotFunctioning` UFD Bad event occurs, and the user has no outgoing audio, they can try to recover the stream by using ACS [mute](/javascript/api/azure-communication-services/@azure/communication-calling/call?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-call-mute) and [unmute](/javascript/api/azure-communication-services/@azure/communication-calling/call?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-call-unmute). +14. 
If the `microphoneNotFunctioning` UFD doesn't recover after the user performs ACS mute and unmute, there might be an issue with the microphone device. Ask the user to unplug and replug the microphone or select another microphone. |
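The microphone-issue row above leans on User Facing Diagnostics events such as `microphoneMuteUnexpectedly` and `microphoneNotFunctioning`. A small sketch of subscribing to those events with the Calling SDK follows; the warning handling is a placeholder for your application's UI.

```typescript
import { Call, Features } from "@azure/communication-calling";

// Listen for microphone-related User Facing Diagnostics so the UI can warn the user.
function watchMicrophoneDiagnostics(call: Call): void {
  const diagnostics = call.feature(Features.UserFacingDiagnostics);
  diagnostics.media.on("diagnosticChanged", (info) => {
    // Flag-type diagnostics report value === true when the bad state starts and false on recovery.
    if (info.diagnostic === "microphoneMuteUnexpectedly" && info.value === true) {
      console.warn("Microphone was muted outside the app (for example, a headset mute button).");
    }
    if (info.diagnostic === "microphoneNotFunctioning" && info.value === true) {
      console.warn("Microphone stopped delivering audio. Ask the user to mute/unmute or switch devices.");
    }
  });
}
```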
communication-services | Microphone Permission | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/audio-issues/microphone-permission.md | The listener should check for events with the value of `microphonePermissionDeni It's important to note that if the user revokes access permission during the call, this `microphonePermissionDenied` event also fires. ## How to mitigate or resolve-Your application should always call the `askDevicePermission` API after the `CallClient` is initialized. +Your application should always call the [askDevicePermission](/javascript/api/azure-communication-services/@azure/communication-calling/devicemanager?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-devicemanager-askdevicepermission) API after the `CallClient` is initialized. This way gives the user a chance to grant the device permission if they didn't do so before or if the permission state is `prompt`.+The application can also show a warning message if the user denies the permission, so the user can fix it before joining a call. -It's also important to listen for the `microphonePermissionDenied` event. Display a warning message if the user revokes the permission during the call. By doing so, the user is aware of the issue and can adjust their browser or system settings accordingly. +It's also important to listen for the [microphonePermissionDenied](../references/ufd/microphone-permission-denied.md) UFD event. Display a warning message if the user revokes the permission during the call. By doing so, the user is aware of the issue and can adjust their browser or system settings accordingly. +## References +### Troubleshooting process +If a user can't hear sound during a call, one possibility is that the speaking participant hasn't granted microphone permission. +If the speaking participant is using your application, you can follow this flow diagram to troubleshoot the issue. ++1. Check if there's a `microphonePermissionDenied` Bad UFD event for the speaking participant. This usually indicates that the user has denied the permission or that the permission isn't requested. +2. If a `microphonePermissionDenied` Bad UFD event occurs, verify whether the app has called `askDevicePermission` API. +3. The app must call `askDevicePermission` if this API hasn't been invoked before the user joins the call. The app can offer a smoother user experience by determining the current state of permissions. For instance, it can display a message instructing the user to adjust their permissions if necessary. +4. If the app has called `askDevicePermission` API, but the user still gets a `microphonePermissionDenied` Bad UFD event. The user has to reset or grant the microphone permission in the browser. If they have confirmed that the permission is granted in the browser, they should check if the OS is blocking mic access to the browser. +5. If there's no `microphonePermissionDenied` Bad UFD, we need to consider other possibilities. For the speaking participant, there might be other potential reasons for issues with outgoing audio, such as network reconnection, or device issues. +6. If there's a `networkReconnect` Bad UFD, the outgoing audio may be temporarily lost due to a network disconnection. See [There's a network issue in the call](./network-issue.md) for detailed information. +7. If no `networkReconnect` Bad UFD occurs, there might be a problem on the speaking participant's microphone. 
See [The speaking participant's microphone has a problem](./microphone-issue.md) for detailed information. |
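The permission row above recommends calling `askDevicePermission` right after the `CallClient` is initialized and warning the user when it's denied. A minimal sketch, with the warning left as a placeholder for your UI:

```typescript
import { CallClient } from "@azure/communication-calling";

// Request microphone (and camera) permission up front and surface a warning on denial.
async function ensureDevicePermissions(callClient: CallClient): Promise<boolean> {
  const deviceManager = await callClient.getDeviceManager();
  const result = await deviceManager.askDevicePermission({ audio: true, video: true });
  if (!result.audio) {
    // Replace with your application's UI warning.
    console.warn("Microphone permission was not granted; other participants will not hear this user.");
  }
  return result.audio;
}
```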
communication-services | Network Issue | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/audio-issues/network-issue.md | so that the user is aware of the issue and understands that the audio loss is du However, if the network reconnection occurs at the sender's side, users on the receiving end are unable to know about it because currently the SDK doesn't support notifying receivers that the sender has network issues. +## References +### Troubleshooting process +If a user can't hear sound during a call, one possibility is that the speaking participant or the receiving end has network issues. ++Below is a flow diagram of the troubleshooting process for this issue. +++1. First, check if there's a `networkReconnect` UFD. The user may experience audio loss during the network reconnection. +2. The UFD can happen on either the sender's end or the receiver's end. In both cases, packets don't flow, so the user can't hear the audio. +3. If there's no `networkReconnect` UFD, consider other potential causes, such as permission issues or device problems. +4. If the permission is denied, refer to [The speaking participant doesn't grant the microphone permission](./microphone-permission.md) for more information. +5. The issue could also be due to device problems, refer to [The speaking participant's microphone has a problem](./microphone-issue.md). |
communication-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/audio-issues/overview.md | To establish a voice call with good quality, several factors must be considered. - The user granted the microphone permission - The user's microphone is working properly - The network conditions are good enough on sending and receiving ends-- The audio output level is functioning properly+- The audio output device is functioning properly All of these factors are important from an end-to-end perspective. |
communication-services | Poor Quality | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/audio-issues/poor-quality.md | and the user starts speaking, the user's audio input in the first few seconds ma leading to distortion in the sound. This can be observed by comparing the ref\_out.wav and input.wav files in aecdump files. In this case, reducing the volume of audio playing sound may help. +## References +### Troubleshooting process +Below is a flow diagram of the troubleshooting process for this issue. ++1. When a user reports experiencing poor audio quality during a call, the first thing to check is the source of the issue. It could be coming from the sender's side or the receiver's side. If other participants on different networks also have similar issues, it's very possible that the issue comes from the sender's side. +2. Check if there's `networkSendQuality` UFD Bad event on the sender's side. +3. If there's no `networkSendQuality` UFD Bad event on the sender's side, the poor audio could be due to device issues or audio distortion caused by the browser's audio processing module. Ask the user to collect diagnostic audio recordings from the browser. Refer to [How to collect diagnostic audio recordings](../references/how-to-collect-diagnostic-audio-recordings.md) +4. If there's a `networkSendQuality` UFD Bad event, the poor audio quality might be due to the sender's network issues. Check the sender's network. +5. If the user experiences poor audio quality but no other participants have the same issue, and there are only two participants in the call, still check the sender's network. +6. If the user experiences poor audio quality but no other participants have the same issue in a group call, the issue might be due to the receiver's network. Check for a `networkReceiveQuality` UFD Bad event on the receiver's end. +7. If there's a `networkReceiveQuality` UFD Bad event, check the receiver's network. +8. If you can't find a `networkReceiveQuality` UFD Bad event, check if other media stats metrics on the receiver's end are poor, such as packetsLost, jitter, etc. +9. If you can't determine why the audio quality on the receiver's end is poor, create a support ticket for the ACS team to investigate. |
communication-services | Speaker Issue | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/audio-issues/speaker-issue.md | If the `audioOutputLevel` value isn't always low but the user can't hear audio, Speaker issues are considered external problems from the perspective of the ACS Calling SDK. Your application user interface should display a [volume level indicator](../../../../quickstarts/voice-video-calling/get-started-volume-indicator.md?pivots=platform-web) to let your users know what the current volume level of incoming audio is.+ If the incoming audio isn't silent, the user can know that the issue occurs in their speaker or output volume settings and can troubleshoot accordingly.++If the user uses Windows, they should also check the volume mixer settings and apps volume settings. +++If you're using Web Audio API in your application, you might also check if there's `AudioRenderer error with rendering audio code: 3` in the log. +This error occurs when there are too many AudioContext instances open at the same time, particularly if the application doesn't properly close the AudioContext or +if there's an AudioContext creation associated with the UI component refresh logic. ++## References +### Troubleshooting process +If a user can't hear sound during a call, one possibility is that the participant has an issue with their speaker. +The speaker issue isn't easily detected and usually requires users to check their system settings or their audio output devices. ++Below is a flow diagram of the troubleshooting process for this issue. +++1. When a user reports that they can't hear audio, the first thing we need to check is whether the incoming audio is silent. The application can obtain this information by checking `audioOutputLevel` in the media stats. +2. If the `audioOutputLevel` value is constantly 0, it indicates that the incoming audio is silent. In this case, ask the user to verify if the speaking participant is muted or experiencing other issues, such as permission issues, device problems, or network issues. +3. If the `audioOutputLevel` value isn't always 0, the user may still be unable to hear audio due to system volume settings. Ask the user to check their system volume settings. +4. If the user's system volume is set to 0 or very low, the user should increase the volume in the settings. +5. In some systems that support app-specific volume settings, the audio volume output from the app may be low even if system volume isn't low. In this case, the user should check their volume setting of the app within the system. +6. If the volume setting of the app in the system is 0 or very low, the user should increase it. +7. In certain cases, the audio element in the browser may fail to play or decode the audio, you can find an error message `AudioRenderer error with rendering audio code: 3` in the log. +8. A common case for the AudioRenderer error is that the app uses the Web Audio API but doesn't release AudioContext objects properly. Browsers have a limit on the number of AudioContext instances that can be open simultaneously. +9. If you still can't determine why the user can't hear sound during the call, ask the user to check their speaker device or select another audio output device. Note that not all platforms support speaker enumeration in the browser. For example, you can't select an audio output device through the JavaScript API in the Safari browser or in Chrome on Android. 
In these cases, you should configure the audio output device in the system settings. |
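Because the `AudioRenderer error with rendering audio code: 3` case described above is commonly caused by creating a new `AudioContext` on every UI refresh, the following minimal sketch shows the standard Web Audio API pattern of sharing a single context and closing it when it's no longer needed. The helper names are illustrative assumptions and aren't part of the ACS SDK.

```typescript
// Share one AudioContext across the app instead of creating one per component render;
// browsers cap how many contexts can be open at the same time.
let sharedAudioContext: AudioContext | undefined;

export function getSharedAudioContext(): AudioContext {
  if (!sharedAudioContext || sharedAudioContext.state === "closed") {
    sharedAudioContext = new AudioContext();
  }
  return sharedAudioContext;
}

// Call this when audio processing is no longer needed, for example when the call ends.
export async function releaseSharedAudioContext(): Promise<void> {
  if (sharedAudioContext && sharedAudioContext.state !== "closed") {
    await sharedAudioContext.close(); // releases the underlying audio resources
  }
  sharedAudioContext = undefined;
}
```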
communication-services | Call Setup Takes Too Long | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/call-setup-issues/call-setup-takes-too-long.md | The application can calculate the delay between when the call is initiated and w If a user consistently experiences long call setup times, they should check their network for issues such as slow network speed, long round trip time, or high packet loss. These issues can affect call setup time because the signaling layer uses a `TCP` connection, and factors such as retransmissions can cause delays. Additionally, if the user suspects the delay comes from stream acquisition, they should check their devices. For example, they can choose a different audio input device.+If a user consistently experiences this issue and you're unable to determine the cause, consider filing a support ticket for further assistance. ++### Check the duration of stream acquisition +Stream acquisition is part of the call setup flow. You can get this information from the webrtc-internals page. +To access the page, open a new tab and enter edge://webrtc-internals (Edge) or chrome://webrtc-internals (Chrome). +++Once you're on the webrtc-internals page, you can calculate the duration of the stream acquisition by comparing the timestamps of the getUserMedia call and its result. If the duration is abnormally long, you may need to check the devices. ++### Check the duration of HTTP requests +You can also check the Network tab of the browser's developer tools to see the size of requests and how long they take to finish. +If the issue is due to the long duration of a signaling request, you should be able to see some requests taking a very long time in the network trace. ++If you need to file a support ticket, we may request the browser HAR file. +To learn how to collect a HAR file, see [Capture a browser trace for troubleshooting](../../../../../azure-portal/capture-browser-trace.md). |
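As a rough illustration of measuring the call setup delay described above, this TypeScript sketch records the time between starting (or joining) a call and the call reaching the `Connected` state. The function name and the logging are assumptions for this example; you would normally feed the result into your own telemetry.

```typescript
import { Call } from "@azure/communication-calling";

// Call this immediately after callAgent.startCall() or callAgent.join().
export function measureCallSetupTime(call: Call): void {
  const startedAt = Date.now();

  const onStateChanged = (): void => {
    if (call.state === "Connected") {
      console.log(`Call setup took ${Date.now() - startedAt} ms`);
      call.off("stateChanged", onStateChanged);
    } else if (call.state === "Disconnected") {
      call.off("stateChanged", onStateChanged); // the call never connected
    }
  };

  call.on("stateChanged", onStateChanged);
}
```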
communication-services | How To Collect Call Info | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/how-to-collect-call-info.md | + + Title: References - How to collect call info ++description: Learn how to collect call info. ++++ Last updated : 05/22/2024++++++# How to collect call info +When you report an issue, providing important call information can help us quickly locate the problematic area and gain a deeper understanding of the issue. ++* ACS resource ID +* call ID +* participant ID ++## How to get ACS resource ID ++You can get this information from [https://portal.azure.com](https://portal.azure.com). +++## How to get call ID and participant ID +The participant ID is important when there are multiple users in the same call. +```typescript +// call ID +call.id +// local participant ID +call.info.participantId ++``` +++ |
communication-services | Application Disposes Video Renderer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/video-issues/application-disposes-video-renderer.md | -The [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API doesn't resolve immediately, as there are multiple underlying asynchronous operations involved in the video subscription process and thus this API response is an asynchronous response. +The [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API doesn't resolve immediately, as there are multiple underlying asynchronous operations involved in the video subscription process and thus this API response is an asynchronous response. -If your application disposes of the render object while the video subscription is in progress, the [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API throws an error. +If your application disposes of the render object while the video subscription is in progress, the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API throws an error. ## How to detect using the SDK |
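A minimal sketch of the ordering issue described above: wait for `createView` to settle before disposing the `VideoStreamRenderer`. The container parameter, function name, and error handling are illustrative assumptions layered on top of the Calling Web SDK's `VideoStreamRenderer` API.

```typescript
import { RemoteVideoStream, VideoStreamRenderer } from "@azure/communication-calling";

// Render a remote stream and return the renderer; dispose it only after createView has
// settled, for example when the UI component that owns it is unmounted.
export async function renderRemoteVideo(
  stream: RemoteVideoStream,
  container: HTMLElement
): Promise<VideoStreamRenderer> {
  const renderer = new VideoStreamRenderer(stream);
  try {
    const view = await renderer.createView({ scalingMode: "Crop" });
    container.appendChild(view.target);
  } catch (error) {
    // createView rejects if the renderer is disposed while the subscription is in progress.
    console.error("createView failed:", error);
  }
  return renderer;
}
```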
communication-services | Create View Timeout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/video-issues/create-view-timeout.md | the SDK detects this issue and throws an createView timeout error. This error is unexpected from SDK's perspective. This error indicates a discrepancy between signaling and media transport. ## How to detect using SDK-When there's a `create view timeout` issue, the [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API throws an error. +When there's a `create view timeout` issue, the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API throws an error. | Error | Details | ||-| When there's a `create view timeout` issue, the [`createView`](/javascript/api/% ## Reasons behind createView timeout failures and how to mitigate the issue ### The video sender's browser is in the background Some mobile devices don't send any video frames when the browser is in the background or a user locks the screen.-The [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API detects no incoming video frames and considers this situation a subscription failure, therefore, it throws a createView timeout error. +The [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API detects no incoming video frames and considers this situation a subscription failure, therefore, it throws a createView timeout error. No further detailed information is available because currently the SDK doesn't support notifying receivers that the sender's browser is in the background. Your application can implement its own detection mechanism and notify the participants in a call when the sender's browser goes back to foreground. The participants can subscribe the video again.-A feasible but less elegant approach for handling this createView timeout error is to continuously retry invoking the [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API until it succeeds. +A feasible but less elegant approach for handling this createView timeout error is to continuously retry invoking the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API until it succeeds. ### The video sender dropped from the call unexpectedly Some users might end the call by terminating the browser process instead of by hanging up. If the video sender has network issues during the time other participants are su This error is unexpected on the video receiver's side. For example, if the sender experiences a temporary network disconnection, other participants are unable to receive video frames from the sender. 
-A workaround approach for handling this createView timeout error is to continuously retry invoking [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API until it succeeds when this network event is happening. +A workaround approach for handling this createView timeout error is to continuously retry invoking [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API until it succeeds when this network event is happening. ### The video receiver has network issues Similar to the sender's network issues, if a video receiver has network issues the video subscription may fail. |
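The retry workaround mentioned above could look like the following sketch, which retries `createView` a bounded number of times while the stream is still marked available. The retry count and delay are arbitrary values chosen for illustration, not SDK recommendations.

```typescript
import {
  RemoteVideoStream,
  VideoStreamRenderer,
  VideoStreamRendererView,
} from "@azure/communication-calling";

// Retry createView until it succeeds, the stream becomes unavailable, or attempts run out.
export async function createViewWithRetry(
  stream: RemoteVideoStream,
  maxAttempts = 5,
  delayMs = 2000
): Promise<VideoStreamRendererView | undefined> {
  const renderer = new VideoStreamRenderer(stream);

  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await renderer.createView();
    } catch (error) {
      console.warn(`createView attempt ${attempt} failed`, error);
      if (!stream.isAvailable) {
        break; // the sender stopped the video; retrying won't help
      }
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }

  renderer.dispose();
  return undefined;
}
```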
communication-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/video-issues/overview.md | After the SDK completes the handshake at the signaling layer with the server, it The browser performs video encoding and packetization at the RTP (Real-time Transport Protocol) layer for transmission. The other participants in the call receive notifications from the server, indicating the availability of a video stream from the sender. Your application can decide whether to subscribe to the video stream or not. -If your application subscribes to the video stream from the server (for example, using [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API), the server forwards the sender's video packets to the receiver. +If your application subscribes to the video stream from the server (for example, using the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API), the server forwards the sender's video packets to the receiver. The receiver's browser decodes and renders the incoming video. When you use the ACS Web Calling SDK for video calls, the SDK and browser may adjust the video quality of the sender based on the available bandwidth. |
communication-services | Reaching Max Number Of Active Video Subscriptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/video-issues/reaching-max-number-of-active-video-subscriptions.md | -If the number of active video subscriptions exceeds the limit, the [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API throws an error. +If the number of active video subscriptions exceeds the limit, the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API throws an error. | Error details | Details | |
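One way to stay under the active video subscription limit mentioned above is to track your renderers and dispose the oldest before creating a new view. The cap value and helper names below are arbitrary illustrations, not the SDK's actual quota.

```typescript
import { RemoteVideoStream, VideoStreamRenderer } from "@azure/communication-calling";

const MAX_ACTIVE_RENDERERS = 4; // illustrative cap; choose a value below the documented limit
const activeRenderers: VideoStreamRenderer[] = [];

// Render a stream while keeping the number of active renderers bounded.
export async function renderWithSubscriptionCap(
  stream: RemoteVideoStream,
  container: HTMLElement
): Promise<void> {
  if (activeRenderers.length >= MAX_ACTIVE_RENDERERS) {
    const oldest = activeRenderers.shift();
    oldest?.dispose(); // frees its video subscription before a new one is created
  }

  const renderer = new VideoStreamRenderer(stream);
  const view = await renderer.createView();
  container.appendChild(view.target);
  activeRenderers.push(renderer);
}
```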
communication-services | Remote Video Becomes Unavailable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/video-issues/remote-video-becomes-unavailable.md | The SDK detects this change and throws an error. This error is expected from SDK's perspective as the remote endpoint stops sending the video. ## How to detect using the SDK-If the video becomes unavailable before the [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API finishes, the [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API throws an error. +If the video becomes unavailable before the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API finishes, the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API throws an error. | error | Details | ||-| |
communication-services | Subscribing Video Not Available | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/video-issues/subscribing-video-not-available.md | Subscribing to a video in this case results in failure. This error is expected from the SDK's perspective, as applications shouldn't subscribe to a video that is currently not available. ## How to detect using the SDK-If you subscribe to a video that is unavailable, the [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API throws an error. +If you subscribe to a video that is unavailable, the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API throws an error. | error | Details | While the SDK throws an error in this scenario, applications should refrain from subscribing to a video when the remote video isn't available, as it doesn't satisfy the precondition. The recommended practice is to monitor the isAvailable change within the `isAvailable` event callback function and to subscribe to the video when `isAvailable` changes to `true`.-However, if there's asynchronous processing in the application layer, that might cause some delay before invoking [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API. +However, if there's asynchronous processing in the application layer, that might cause some delay before invoking the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API. In such a case, applications can check `isAvailable` again before invoking the createView API. |
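As a sketch of the recommended practice above, the following TypeScript listens for the stream's availability event and re-checks `isAvailable` before and after the asynchronous `createView` call. The `isAvailableChanged` event and `isAvailable` property come from the Calling Web SDK's `RemoteVideoStream`; the function and container names are assumptions for this example.

```typescript
import { RemoteVideoStream, VideoStreamRenderer } from "@azure/communication-calling";

// Subscribe to a remote video only while it is available.
export function watchRemoteVideo(stream: RemoteVideoStream, container: HTMLElement): void {
  const tryRender = async (): Promise<void> => {
    if (!stream.isAvailable) {
      return; // precondition not met; wait for the next availability change
    }
    const renderer = new VideoStreamRenderer(stream);
    const view = await renderer.createView();
    if (stream.isAvailable) {
      container.appendChild(view.target);
    } else {
      renderer.dispose(); // availability changed while createView was in flight
    }
  };

  stream.on("isAvailableChanged", () => {
    if (stream.isAvailable) {
      void tryRender();
    }
  });

  void tryRender(); // handle the case where the stream is already available
}
```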
connectors | Enable Stateful Affinity Built In Connectors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/enable-stateful-affinity-built-in-connectors.md | To run these connector operations in stateful mode, you must enable this capabil 1. In the [Azure portal](https://portal.azure.com), open the Standard logic app resource where you want to enable stateful mode for these connector operations. -1. Enable virtual network integration for your logic app and add your logic app to the previously created subnet: +1. To enable virtual network integration for your logic app, and add your logic app to the previously created subnet, follow these steps: - 1. On your logic app menu resource, under **Settings**, select **Networking**. + 1. On the logic app menu resource, under **Settings**, select **Networking**. - 1. In the **Outbound Traffic** section, select **VNET integration** > **Add VNet**. + 1. In the **Outbound traffic configuration** section, next to **Virtual network integration**, select **Not configured** > **Add virtual network integration**. - 1. On the **Add VNet Integration** pane that opens, select your Azure subscription and your virtual network. + 1. On the **Add virtual network integration** pane that opens, select your Azure subscription and your virtual network. - 1. Under **Subnet**, select **Select existing**. From the **Subnet** list, select the subnet where you want to add your logic app. + 1. From the **Subnet** list, select the subnet where you want to add your logic app. - 1. When you're done, select **OK**. + 1. When you're done, select **Connect**, and return to the **Networking** page. - On the **Networking** page, the **VNet integration** option now appears set to **On**, for example: + The **Virtual network integration** property is now set to the selected virtual network and subnet, for example: - :::image type="content" source="media/enable-stateful-affinity-built-in-connectors/enable-virtual-network-integration.png" alt-text="Screenshot shows Azure portal, Standard logic app resource, Networking page, VNet integration set to On."::: + :::image type="content" source="media/enable-stateful-affinity-built-in-connectors/enable-virtual-network-integration.png" alt-text="Screenshot shows Azure portal, Standard logic app resource, Networking page with selected virtual network and subnet."::: For general information about enabling virtual network integration with your app, see [Enable virtual network integration in Azure App Service](../app-service/configure-vnet-integration-enable.md). Updates a resource by using the specified resource ID: #### Parameter values -| Element | Value | Description | -||--|-| +| Element | Value | +||--| | HTTP request method | **PATCH** | | <*resourceId*> | **subscriptions/{yourSubscriptionID}/resourcegroups/{yourResourceGroup}/providers/Microsoft.Web/sites/{websiteName}/config/web** | | <*yourSubscriptionId*> | The ID for your Azure subscription | Resource scale-in events might cause the loss of context for built-in connectors 1. On your logic app resource menu, under **Settings**, select **Scale out**. -1. Under **App Scale Out**, set **Enforce Scale Out Limit** to **Yes**, which shows the **Maximum Scale Out Limit**. +1. On the **Scale out** page, in the **App Scale out** section, follow these steps: ++ 1. Set **Enforce Scale Out Limit** to **Yes**, which shows the **Maximum Scale Out Limit**. -1. 
On the **Scale out** page, under **App Scale out**, set the number for **Always Ready Instances** to the same number as **Maximum Scale Out Limit** and **Maximum Burst**, which appears under **Plan Scale Out**, for example: + 1. Set **Always Ready Instances** to the same number as **Maximum Scale Out Limit** and **Maximum Burst**, which appears in the **Plan Scale out** section, for example: - :::image type="content" source="media/enable-stateful-affinity-built-in-connectors/scale-in-settings.png" alt-text="Screenshot shows Azure portal, Standard logic app resource, Scale out page, and Always Ready Instances number set to match Maximum Scale Out Limit and Maximum Burst."::: + :::image type="content" source="media/enable-stateful-affinity-built-in-connectors/scale-in-settings.png" alt-text="Screenshot shows Azure portal, Standard logic app resource, Scale out page, and Always Ready Instances number set to match Maximum Burst and Maximum Scale Out Limit."::: 1. When you're done, on the **Scale out** toolbar, select **Save**. |
container-apps | Dotnet Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dotnet-overview.md | For a chance to implement custom logic to determine the health of your applicati By default, Azure Container Apps automatically scales your ASP.NET Core apps based on the number of incoming HTTP requests. You can also configure custom autoscaling rules based on other metrics, such as CPU or memory usage. To learn more about scaling, see [Set scaling rules in Azure Container Apps](scale-app.md). +In .NET 8.0.4 and later, ASP.NET Core apps that use [data protection](/aspnet/core/security/data-protection/introduction) are automatically configured to keep protected data accessible to all replicas as the application scales. When your app begins to scale, a key manager handles writing and sharing keys across multiple revisions. As the app is deployed, the environment variable `autoConfigureDataProtection` is automatically set to `true` to enable this feature. For more information on this auto configuration, see [this GitHub pull request](https://github.com/Azure/azure-rest-api-specs/pull/28001). + Autoscaling changes the number of replicas of your app based on the rules you define. By default, Container Apps randomly routes incoming traffic to the replicas of your ASP.NET Core app. Since traffic can split among different replicas, your app should be stateless so your application doesn't experience state-related issues. Features such as anti-forgery, authentication, SignalR, Blazor Server, and Razor Pages that depend on data protection require extra configuration to work correctly when scaling to multiple replicas. |
container-registry | Container Registry Artifact Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-artifact-cache.md | Artifact cache offers faster and more *reliable pull operations* through Azure C Artifact cache allows cached registries to be accessible over *private networks* for users to align with firewall configurations and compliance standards seamlessly. -Artifact cache addresses the challenge of anonymous pull limits imposed by public registries like Docker Hub. By allowing users to pull images from the local ACR, it circumvents these limits, ensuring *uninterrupted content delivery* from upstream sources and eliminating the concern of hitting pull limits. +Artifact cache addresses the challenge of pull limits imposed by public registries. We recommend that users authenticate their cache rules with their upstream source credentials and then pull images from the local ACR to help mitigate rate limits. ## Terminology Artifact cache addresses the challenge of anonymous pull limits imposed by publi Artifact cache currently supports the following upstream registries: +>[!WARNING] +> We recommend that you [create a credential set](container-registry-artifact-cache.md#create-new-credentials) when sourcing content from Docker Hub. + | Upstream Registries | Support | Availability | |-|-|--| | Docker Hub | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal | |
container-registry | Troubleshoot Artifact Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/troubleshoot-artifact-cache.md | To resolve this issue, you need to follow these steps: Artifact cache currently supports the following upstream registries: +>[!WARNING] +> We recommend that you [create a credential set](container-registry-artifact-cache.md#create-new-credentials) when sourcing content from Docker Hub. + | Upstream Registries | Support | Availability | |-|-|--| | Docker Hub | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal | |
cosmos-db | Analytical Store Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md | + > [!IMPORTANT] + > Mirroring in Microsoft Fabric is now available in preview for the NoSQL API. This feature provides all the capabilities of Azure Synapse Link with better analytical performance, the ability to unify your data estate with Fabric OneLake, and open access to your data in OneLake in Delta Parquet format. If you are considering Azure Synapse Link, we recommend that you try mirroring to assess overall fit for your organization. To get started with mirroring, click [here](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context). ++To get started with Azure Synapse Link, please visit [“Getting started with Azure Synapse Link”](synapse-link.md). + Azure Cosmos DB analytical store is a fully isolated column store for enabling large-scale analytics against operational data in your Azure Cosmos DB, without any impact to your transactional workloads. Azure Cosmos DB transactional store is schema-agnostic, and it allows you to iterate on your transactional applications without having to deal with schema or index management. In contrast to this, Azure Cosmos DB analytical store is schematized to optimize for analytical query performance. This article describes analytical storage in detail. When you enable analytical store on an Azure Cosmos DB container, a new column-s ## Column store for analytical workloads on operational data -Analytical workloads typically involve aggregations and sequential scans of selected fields. By storing the data in a column-major order, the analytical store allows a group of values for each field to be serialized together. This format reduces the IOPS required to scan or compute statistics over specific fields. It dramatically improves the query response times for scans over large data sets. +Analytical workloads typically involve aggregations and sequential scans of selected fields. Data in the analytical store is stored in column-major order, allowing the values of each field to be serialized together, where applicable. This format reduces the IOPS required to scan or compute statistics over specific fields. It dramatically improves the query response times for scans over large data sets. For example, if your operational tables are in the following format: :::image type="content" source="./media/analytical-store-introduction/sample-operational-data-table.png" alt-text="Example operational table" border="false"::: -The row store persists the above data in a serialized format, per row, on the disk. This format allows for faster transactional reads, writes, and operational queries, such as, "Return information about Product1". However, as the dataset grows large and if you want to run complex analytical queries on the data it can be expensive. For example, if you want to get "the sales trends for a product under the category named 'Equipment' across different business units and months", you need to run a complex query. Large scans on this dataset can get expensive in terms of provisioned throughput and can also impact the performance of the transactional workloads powering your real-time applications and services. +The row store persists the above data in a serialized format, per row, on the disk. This format allows for faster transactional reads, writes, and operational queries, such as, "Return information about Product 1". 
However, as the dataset grows large and if you want to run complex analytical queries on the data it can be expensive. For example, if you want to get "the sales trends for a product under the category named 'Equipment' across different business units and months", you need to run a complex query. Large scans on this dataset can get expensive in terms of provisioned throughput and can also impact the performance of the transactional workloads powering your real-time applications and services. Analytical store, which is a column store, is better suited for such queries because it serializes similar fields of data together and reduces the disk IOPS. At the end of each execution of the automatic sync process, your transactional d ## Scalability & elasticity -By using horizontal partitioning, Azure Cosmos DB transactional store can elastically scale the storage and throughput without any downtime. Horizontal partitioning in the transactional store provides scalability & elasticity in auto-sync to ensure data is synced to the analytical store in near real time. The data sync happens regardless of the transactional traffic throughput, whether it's 1000 operations/sec or 1 million operations/sec, and it doesn't impact the provisioned throughput in the transactional store. +Azure Cosmos DB transactional store uses horizontal partitioning to elastically scale the storage and throughput without any downtime. Horizontal partitioning in the transactional store provides scalability & elasticity in auto-sync to ensure data is synced to the analytical store in near real time. The data sync happens regardless of the transactional traffic throughput, whether it's 1000 operations/sec or 1 million operations/sec, and it doesn't impact the provisioned throughput in the transactional store. ## <a id="analytical-schema"></a>Automatically handle schema updates sql_results.show() ##### Using full fidelity schema with SQL -Considering the same documents of the Spark example above, customers can use the following syntax example: +You can use the following syntax example, with the same documents of the Spark example above: ```SQL SELECT rating,timestamp_string,timestamp_utc timestamp_utc float '$.timestamp.float64' WHERE timestamp is not null or timestamp_utc is not null ``` -Starting from the query above, customers can implement transformations using `cast`, `convert` or any other T-SQL function to manipulate your data. Customers can also hide complex datatype structures by using views. +You can implement transformations using `cast`, `convert` or any other T-SQL function to manipulate your data. You can also hide complex datatype structures by using views. ```SQL create view MyView as WHERE timestamp_string is not null ``` -##### Working with the MongoDB `_id` field +##### Working with MongoDB `_id` field -the MongoDB `_id` field is fundamental to every collection in MongoDB and originally has a hexadecimal representation. As you can see in the table above, full fidelity schema will preserve its characteristics, creating a challenge for its visualization in Azure Synapse Analytics. For correct visualization, you must convert the `_id` datatype as below: +MongoDB `_id` field is fundamental to every collection in MongoDB and originally has a hexadecimal representation. As you can see in the table above, full fidelity schema will preserve its characteristics, creating a challenge for its visualization in Azure Synapse Analytics. 
For correct visualization, you must convert the `_id` datatype as below: -###### Working with the MongoDB `_id` field in Spark +###### Working with MongoDB `_id` field in Spark The example below works on Spark 2.x and 3.x versions: val dfConverted = df.withColumn("objectId", col("_id.objectId")).withColumn("con display(dfConverted) ``` -###### Working with the MongoDB `_id` field in SQL +###### Working with MongoDB `_id` field in SQL ```SQL SELECT TOP 100 id=CAST(_id as VARBINARY(1000)) It's possible to use full fidelity Schema for API for NoSQL accounts, instead of * Currently, if you enable Synapse Link in your NoSQL API account using the Azure portal, it will be enabled as well-defined schema. * Currently, if you want to use full fidelity schema with NoSQL or Gremlin API accounts, you have to set it at account level in the same CLI or PowerShell command that will enable Synapse Link at account level. * Currently Azure Cosmos DB for MongoDB isn't compatible with this possibility of changing the schema representation. All MongoDB accounts have full fidelity schema representation type.-* Full Fidelity schema data types map mentioned above isn't valid for NoSQL API accounts, that use JSON datatypes. As an example, `float` and `integer` values are represented as `num` in analytical store. +* Full Fidelity schema data types map mentioned above isn't valid for NoSQL API accounts that use JSON datatypes. As an example, `float` and `integer` values are represented as `num` in analytical store. * It's not possible to reset the schema representation type, from well-defined to full fidelity or vice-versa. * Currently, containers schemas in analytical store are defined when the container is created, even if Synapse Link has not been enabled in the database account. * Containers or graphs created before Synapse Link was enabled with full fidelity schema at account level will have well-defined schema. Data tiering refers to the separation of data between storage infrastructures op After the analytical store is enabled, based on the data retention needs of the transactional workloads, you can configure `transactional TTL` property to have records automatically deleted from the transactional store after a certain time period. Similarly, the `analytical TTL` allows you to manage the lifecycle of data retained in the analytical store, independent from the transactional store. By enabling analytical store and configuring transactional and analytical `TTL` properties, you can seamlessly tier and define the data retention period for the two stores. > [!NOTE]-> When `analytical TTL` is bigger than `transactional TTL`, your container will have data that only exists in analytical store. This data is read only and currently we don't support document level `TTL` in analytical store. If your container data may need an update or a delete at some point in time in the future, don't use `analytical TTL` bigger than `transactional TTL`. This capability is recommended for data that won't need updates or deletes in the future. +> When `analytical TTL` is set to a value larger than `transactional TTL` value, your container will have data that only exists in analytical store. This data is read only and currently we don't support document level `TTL` in analytical store. If your container data may need an update or a delete at some point in time in the future, don't use `analytical TTL` bigger than `transactional TTL`. This capability is recommended for data that won't need updates or deletes in the future. 
> [!NOTE] > If your scenario doesn't demand physical deletes, you can adopt a logical delete/update approach. Insert in transactional store another version of the same document that only exists in analytical store but needs a logical delete/update. Maybe with a flag indicating that it's a delete or an update of an expired document. Both versions of the same document will co-exist in analytical store, and your application should only consider the last one. After the analytical store is enabled, based on the data retention needs of the Analytical store relies on Azure Storage and offers the following protection against physical failure: * By default, Azure Cosmos DB database accounts allocate analytical store in Locally Redundant Storage (LRS) accounts. LRS provides at least 99.999999999% (11 nines) durability of objects over a given year.- * If any geo-region of the database account is configured for zone-redundancy, it is allocated in Zone-redundant Storage (ZRS) accounts. Customers need to enable Availability Zones on a region of their Azure Cosmos DB database account to have analytical data of that region stored in Zone-redundant Storage. ZRS offers durability for storage resources of at least 99.9999999999% (12 9's) over a given year. + * If any geo-region of the database account is configured for zone-redundancy, it is allocated in Zone-redundant Storage (ZRS) accounts. You need to enable Availability Zones on a region of their Azure Cosmos DB database account to have analytical data of that region stored in Zone-redundant Storage. ZRS offers durability for storage resources of at least 99.9999999999% (12 9's) over a given year. -For more information about Azure Storage durability, click [here](/azure/storage/common/storage-redundancy). +For more information about Azure Storage durability, see [this link.](/azure/storage/common/storage-redundancy) ## Backup Synapse Link, and analytical store by consequence, has different compatibility l * Periodic backup mode is fully compatible with Synapse Link and these 2 features can be used in the same database account. * Synapse Link for database accounts using continuous backup mode is GA.-* Continuous backup mode for Synapse Link enabled accounts is in public preview. Currently, customers that disabled Synapse Link from containers can't migrate to continuous backup. +* Continuous backup mode for Synapse Link enabled accounts is in public preview. Currently, you can't migrate to continuous backup if you disabled Synapse Link on any of your collections in a Cosmos DB account. ### Backup policies Analytical store partitioning is completely independent of partitioning in The analytical store is optimized to provide scalability, elasticity, and performance for analytical workloads without any dependency on the compute run-times. The storage technology is self-managed to optimize your analytics workloads without manual efforts. -By decoupling the analytical storage system from the analytical compute system, data in Azure Cosmos DB analytical store can be queried simultaneously from the different analytics runtimes supported by Azure Synapse Analytics. As of today, Azure Synapse Analytics supports Apache Spark and serverless SQL pool with Azure Cosmos DB analytical store. +Data in Azure Cosmos DB analytical store can be queried simultaneously from the different analytics runtimes supported by Azure Synapse Analytics. Azure Synapse Analytics supports Apache Spark and serverless SQL pool with Azure Cosmos DB analytical store. 
> [!NOTE] > You can only read from analytical store using Azure Synapse Analytics runtimes. And the opposite is also true, Azure Synapse Analytics runtimes can only read from analytical store. Only the auto-sync process can change data in analytical store. You can write data back to Azure Cosmos DB transactional store using Azure Synapse Analytics Spark pool, using the built-in Azure Cosmos DB OLTP SDK. |
cosmos-db | Analytics And Business Intelligence Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytics-and-business-intelligence-overview.md | + + Title: Analytics and BI ++description: Review Azure Cosmos DB options to enable large-scale analytics and BI reporting on your operational data. ++++ Last updated : 07/01/2024+++# Analytics and Business Intelligence (BI) on your Azure Cosmos DB data ++Azure Cosmos DB offers various options to enable large-scale analytics and BI reporting on your operational data. ++To get meaningful insights on your Azure Cosmos DB data, you may need to query across multiple partitions, collections, or databases. In some cases, you might combine this data with other data sources in your organization, such as Azure SQL Database and Azure Data Lake Storage Gen2. You might also query with aggregate functions such as sum and count. Such queries need heavy computational power, which likely consumes more request units (RUs). As a result, these queries might affect the performance of your mission-critical workloads. ++To isolate transactional workloads from the performance impact of complex analytical queries, database data is ingested nightly to a central location using complex Extract-Transform-Load (ETL) pipelines. Such ETL-based analytics are complex and costly, and they delay insights on business data. ++Azure Cosmos DB addresses these challenges by providing no-ETL, cost-effective analytics offerings. ++## No-ETL, near real-time analytics on Azure Cosmos DB +Azure Cosmos DB offers no-ETL, near real-time analytics on your data without affecting the performance of your transactional workloads or request units (RUs). These offerings remove the need for complex ETL pipelines, making your Azure Cosmos DB data seamlessly available to analytics engines. With reduced latency to insights, you can provide an enhanced customer experience and react more quickly to changes in market conditions or business environment. Here are some sample [scenarios](synapse-link-use-cases.md) that you can achieve with quick insights into your data. + + You can enable no-ETL analytics and BI reporting on Azure Cosmos DB using the following options: ++* Mirroring your data into Microsoft Fabric +* Enabling Azure Synapse Link to access data from Azure Synapse Analytics + ++### Option 1: Mirroring your Azure Cosmos DB data into Microsoft Fabric ++Mirroring enables you to seamlessly bring your Azure Cosmos DB database data into Microsoft Fabric. With no-ETL, you can get rich business insights on your Azure Cosmos DB data using Fabric's built-in analytics, BI, and AI capabilities. ++Your Cosmos DB operational data is incrementally replicated into Fabric OneLake in near real-time. Data in OneLake is stored in open-source Delta Parquet format and made available to all analytical engines in Fabric. With open access, you can use it with various Azure services such as Azure Databricks, Azure HDInsight, and more. OneLake also helps unify your data estate for your analytical needs. Mirrored data can be joined with any other data in OneLake, such as Lakehouses, Warehouses, or shortcuts. You can also join Azure Cosmos DB data with other mirrored database sources such as Azure SQL Database and Snowflake. +You can query across Azure Cosmos DB collections or databases mirrored into OneLake. ++With Mirroring in Fabric, you don't need to piece together different services from multiple vendors. 
Instead, you can enjoy a highly integrated, end-to-end, and easy-to-use product that is designed to simplify your analytics needs. +You can use T-SQL to run complex aggregate queries and Spark for data exploration. You can seamlessly access the data in notebooks, use data science to build machine learning models, and build Power BI reports using Direct Lake powered by rich Copilot integration. +++If you're looking for analytics on your operational data in Azure Cosmos DB, mirroring provides: +* No-ETL, cost-effective near real-time analytics on Azure Cosmos DB data without affecting your request unit (RU) consumption +* Ease of bringing data across various sources into Fabric OneLake. +* Improved query performance of SQL engine handling delta tables, with V-order optimizations +* Improved cold start time for Spark engine with deep integration with ML/notebooks +* One-click integration with Power BI with Direct Lake and Copilot +* Richer app integration to access queries and views with GraphQL +* Open access to and from other services such as Azure Databricks ++To get started with mirroring, visit ["Get started with mirroring tutorial"](/fabric/database/mirrored-database/azure-cosmos-db-tutorial?context=/azure/cosmos-db/context/context). +++### Option 2: Azure Synapse Link to access data from Azure Synapse Analytics +Azure Synapse Link for Azure Cosmos DB creates a tight seamless integration between Azure Cosmos DB and Azure Synapse Analytics, enabling no-ETL, near real-time analytics on your operational data. +Transactional data is seamlessly synced to Analytical store, which stores the data in columnar format optimized for analytics. ++Azure Synapse Analytics can access this data in Analytical store, without further movement, using Azure Synapse Link. Business analysts, data engineers, and data scientists can now use Synapse Spark or Synapse SQL interchangeably to run near real time business intelligence, analytics, and machine learning pipelines. ++The following image shows the Azure Synapse Link integration with Azure Cosmos DB and Azure Synapse Analytics: +++ > [!IMPORTANT] + > Mirroring in Microsoft Fabric is now available in preview for NoSql API. This feature provides all the capabilities of Azure Synapse Link with better analytical performance, ability to unify your data estate with Fabric OneLake and open access to your data in OneLake with Delta Parquet format. If you are considering Azure Synapse Link, we recommend that you try mirroring to assess overall fit for your organization. To get started with mirroring, click [here](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context). ++To get started with Azure Synapse Link, visit ["Getting started with Azure Synapse Link"](synapse-link.md). +++## Real-time analytics and BI on Azure Cosmos DB: Other options +There are a few other options to enable real-time analytics on Azure Cosmos DB data: +* Using [change feed](nosql/changefeed-ecommerce-solution.md) +* Using [Spark connector directly on Azure Cosmos DB](nosql/tutorial-spark-connector.md) +* Using Power BI connector directly on Azure Cosmos DB ++While these options are included for completeness and work well with single partition queries in real-time, these methods have the following challenges for analytical queries: +* Performance impact on your workload: ++ Analytical queries tend to be complex and consume significant compute capacity. 
When these queries are run against your Azure Cosmos DB data directly, you might experience performance degradation on your transactional queries. +* Cost impact: + + When analytical queries are run directly against your database or collections, they increase the request units required, because analytical queries tend to be complex and need more compute power. Increased RU usage will likely lead to a significant cost impact over time if you run aggregate queries. ++Instead of these options, we recommend that you use Mirroring in Microsoft Fabric or Azure Synapse Link, which provide no-ETL analytics without affecting transactional workload performance or request units. ++## Related content ++* [Mirroring Azure Cosmos DB overview](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context) ++* [Getting started with mirroring](/fabric/database/mirrored-database/azure-cosmos-db-tutorial?context=/azure/cosmos-db/context/context) ++* [Azure Synapse Link for Azure Cosmos DB](synapse-link.md) ++* [Working with Azure Synapse Link for Azure Cosmos DB](configure-synapse-link.md) ++ |
cosmos-db | Configure Synapse Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md | + > [!IMPORTANT] + > Mirroring in Microsoft Fabric is now available in preview for the NoSQL API. This feature provides all the capabilities of Azure Synapse Link with better analytical performance, the ability to unify your data estate with Fabric OneLake, and open access to your data in OneLake in Delta Parquet format. If you are considering Azure Synapse Link, we recommend that you try mirroring to assess overall fit for your organization. To get started with mirroring, click [here](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context). + Azure Synapse Link is available for Azure Cosmos DB SQL API or for Azure Cosmos DB API for MongoDB accounts. It is also in preview for the Gremlin API, with activation via CLI commands. Use the following steps to run analytical queries with Azure Synapse Link for Azure Cosmos DB: * [Enable Azure Synapse Link for your Azure Cosmos DB accounts](#enable-synapse-link) |
cosmos-db | Synapse Link Use Cases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link-use-cases.md | Title: Near real-time analytics use cases with Azure Synapse Link for Azure Cosmos DB -description: Learn how Azure Synapse Link for Azure Cosmos DB is used in Supply chain analytics, forecasting, reporting, real-time personalization, and IOT predictive maintenance. + Title: Near real-time analytics use cases for Azure Cosmos DB +description: Learn how real-time analytics is used in Supply chain analytics, forecasting, reporting, real-time personalization, and IOT predictive maintenance. - Previously updated : 09/29/2022-+ Last updated : 06/25/2024+ -# Azure Synapse Link for Azure Cosmos DB: Near real-time analytics use cases +# Azure Cosmos DB: No-ETL analytics use cases [!INCLUDE[NoSQL, MongoDB, Gremlin](includes/appliesto-nosql-mongodb-gremlin.md)] -[Azure Synapse Link](synapse-link.md) for Azure Cosmos DB is a cloud native hybrid transactional and analytical processing (HTAP) capability that enables you to run near real-time analytics over operational data. Synapse Link creates a tight seamless integration between Azure Cosmos DB and Azure Synapse Analytics. +Azure Cosmos DB provides various analytics options for no-ETL, near real-time analytics over operational data. You can enable analytics on your Azure Cosmos DB data using the following options: +* Mirroring Azure Cosmos DB in Microsoft Fabric +* Azure Synapse Link for Azure Cosmos DB -You might be curious to understand what industry use cases can leverage this cloud native HTAP capability for near real-time analytics over operational data. Here are three common use cases for Azure Synapse Link for Azure Cosmos DB: +To learn more about these options, see ["Analytics and BI on your Azure Cosmos DB data."](analytics-and-business-intelligence-overview.md) ++> [!IMPORTANT] +> Mirroring Azure Cosmos DB in Microsoft Fabric is now available in preview for the NoSQL API. This feature provides all the capabilities of Azure Synapse Link with better analytical performance, the ability to unify your data estate with Fabric OneLake, and open access to your data in OneLake in Delta Parquet format. If you are considering Azure Synapse Link, we recommend that you try mirroring to assess overall fit for your organization. To get started with mirroring, click [here](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context). ++No-ETL, near real-time analytics can open up various possibilities for your business. Here are three sample scenarios: * Supply chain analytics, forecasting & reporting * Real-time personalization * Predictive maintenance, anomaly detection in IOT scenarios -> [!NOTE] -> Azure Synapse Link for Azure Cosmos DB targets the scenario where enterprise teams are looking to run near real-time analytics. These analytics are run without ETL over operational data generated across transactional applications built on Azure Cosmos DB. This does not replace the need for a separate data warehouse when there are traditional data warehouse requirements such as workload management, high concurrency, persistence aggregates across multiple data sources. --> [!NOTE] -> Synapse Link for Gremlin API is now in preview. You can enable Synapse Link in your new or existing graphs using Azure CLI. For more information on how to configure it, click [here](configure-synapse-link.md). 
- ## Supply chain analytics, forecasting & reporting Research studies show that embedding big data analytics in supply chain operations leads to improvements in order-to-cycle delivery times and supply chain efficiency. Manufacturers are onboarding to cloud-native technologies to break out of constraints of legacy Enterprise Resource Planning (ERP) and Supply Chain Management (SCM) systems. With supply chains generating increasing volumes of operational data every minute (order, shipment, transaction data), manufacturers need an operational database. This operational database should scale to handle the data volumes as well as an analytical platform to get to a level of real-time contextual intelligence to stay ahead of the curve. -The following architecture shows the power of leveraging Azure Cosmos DB as the cloud-native operational database and Synapse Link in supply chain analytics: +The following architecture shows the power of using Azure Cosmos DB as the cloud-native operational database in supply chain analytics: -Based on previous architecture, you can achieve the following use cases with Synapse Link for Azure Cosmos DB: +Based on previous architecture, you can achieve the following use cases: * **Prepare & train predictive pipeline:** Generate insights over the operational data across the supply chain using machine learning translates. This way you can lower inventory, operations costs, and reduce the order-to-delivery times for customers. - Synapse Link allows you to analyze the changing operational data in Azure Cosmos DB without any manual ETL processes. It saves you from additional cost, latency, and operational complexity. Synapse Link enables data engineers and data scientists to build robust predictive pipelines: + Mirroring and Synapse Link allow you to analyze the changing operational data in Azure Cosmos DB without any manual ETL processes. These offerings save you from additional cost, latency, and operational complexity. They enable data engineers and data scientists to build robust predictive pipelines: - * Query operational data from Azure Cosmos DB analytical store by leveraging native integration with Apache Spark pools in Azure Synapse Analytics. You can query the data in an interactive notebook or scheduled remote jobs without complex data engineering. + * Query operational data from Azure Cosmos DB by using native integration with Apache Spark pools in Microsoft Fabric or Azure Synapse Analytics. You can query the data in an interactive notebook or scheduled remote jobs without complex data engineering. - * Build Machine Learning (ML) models with Spark ML algorithms and Azure ML integration in Azure Synapse Analytics. + * Build Machine Learning (ML) models with Spark ML algorithms and Azure Machine Learning (AML) integration in Microsoft Fabric or Azure Synapse Analytics. * Write back the results after model inference into Azure Cosmos DB for operational near-real-time scoring. * **Operational reporting:** Supply chain teams need flexible and custom reports over real-time, accurate operational data. These reports are required to obtain a snapshot view of supply chain effectiveness, profitability, and productivity. It allows data analysts and other key stakeholders to constantly reevaluate the business and identify areas to tweak to reduce operational costs. 
- Synapse Link for Azure Cosmos DB enables rich business intelligence (BI)/reporting scenarios: + Mirroring and Synapse Link for Azure Cosmos DB enable rich business intelligence (BI)/reporting scenarios: - * Query operational data from Azure Cosmos DB analytical store by using native integration with serverless SQL pool and full expressiveness of T-SQL language. + * Query operational data from Azure Cosmos DB by using native integration with full expressiveness of T-SQL language. - * Model and publish auto refreshing BI dashboards over Azure Cosmos DB through serverless SQL pool support for familiar BI tools. For example, Azure Analysis Services, Power BI Premium, etc. + * Model and publish auto refreshing BI dashboards over Azure Cosmos DB through Power BI integrated in Microsoft Fabric or Azure Synapse Analytics. The following is some guidance for data integration for batch & streaming data into Azure Cosmos DB: -* **Batch data integration & orchestration:** With supply chains getting more complex, supply chain data platforms need to integrate with variety of data sources and formats. Azure Synapse comes built-in with the same data integration engine and experiences as Azure Data Factory. This integration allows data engineers to create rich data pipelines without a separate orchestration engine: +* **Batch data integration & orchestration:** With supply chains getting more complex, supply chain data platforms need to integrate with variety of data sources and formats. Microsoft Fabric and Azure Synapse come built-in with the same data integration engine and experiences as Azure Data Factory. This integration allows data engineers to create rich data pipelines without a separate orchestration engine: - * Move data from 85+ supported data sources to [Azure Cosmos DB with Azure Data Factory](../data-factory/connector-azure-cosmos-db.md). + * Move data from 85+ supported data sources to [Azure Cosmos DB with Azure Data Factory](../data-factory/connector-azure-cosmos-db.md). * Write code-free ETL pipelines to Azure Cosmos DB including [relational-to-hierarchical and hierarchical-to-hierarchical mappings with mapping data flows](../data-factory/how-to-sqldb-to-cosmosdb.md). The following is some guidance for data integration for batch & streaming data i Retailers today must build secure and scalable e-commerce solutions that meet the demands of both customers and business. These e-commerce solutions need to engage customers through customized products and offers, process transactions quickly and securely, and focus on fulfillment and customer service. Azure Cosmos DB along with the latest Synapse Link for Azure Cosmos DB allows retailers to generate personalized recommendations for customers in real time. They use low-latency and tunable consistency settings for immediate insights as shown in the following architecture: --Synapse Link for Azure Cosmos DB use case: --* **Prepare & train predictive pipeline:** You can generate insights over the operational data across your business units or customer segments using Synapse Spark and machine learning models. This translates to personalized delivery to target customer segments, predictive end-user experiences and targeted marketing to fit your end-user requirements. +* **Prepare & train predictive pipeline:** You can generate insights over the operational data across your business units or customer segments using Fabric or Synapse Spark and machine learning models. 
This translates to personalized delivery to target customer segments, predictive end-user experiences, and targeted marketing to fit your end-user requirements. ## IoT predictive maintenance Industrial IoT innovations have drastically reduced downtimes of machinery and increased overall efficiency across all fields of industry. One such innovation is predictive maintenance analytics for machinery at the edge of the cloud. -The following is an architecture leveraging the cloud native HTAP capabilities of Azure Synapse Link for Azure Cosmos DB in IoT predictive maintenance: -+The following is an architecture using the cloud-native HTAP capabilities in IoT predictive maintenance: -Synapse Link for Azure Cosmos DB use cases: * **Prepare & train predictive pipeline:** The historical operational data from IoT device sensors could be used to train predictive models such as anomaly detectors. These anomaly detectors are then deployed back to the edge for real-time monitoring. Such a virtuous loop allows for continuous retraining of the predictive models. -* **Operational reporting:** With the growth of digital twin initiatives, companies are collecting vast amounts of operational data from large number of sensors to build a digital copy of each machine. This data powers BI needs to understand trends over historical data in addition to real-time applications over recent hot data. --## Sample scenario: HTAP for Azure Cosmos DB --For nearly a decade, Azure Cosmos DB has been used by thousands of customers for mission critical applications that require elastic scale, turnkey global distribution, multi-region write replication for low latency and high availability of both reads & writes in their transactional workloads. - -The following list shows an overview of the various workload patterns that are supported with operational data using Azure Cosmos DB: --* Real-time apps & services -* Event stream processing -* BI dashboards -* Big data analytics -* Machine learning --Azure Synapse Link enables Azure Cosmos DB to not just power transactional workloads but also perform near real-time analytical workloads over historical operational data. It happens with no ETL requirements and guaranteed performance isolation from the transactional workloads. --The following image shows workload patterns using Azure Cosmos DB: --Let us take the example of an e-commerce company CompanyXYZ with global operations across 20 countries/regions to illustrate the benefits of choosing Azure Cosmos DB as the single real-time database powering both transactional and analytical requirements of an inventory management platform. +* **Operational reporting:** With the growth of digital twin initiatives, companies are collecting vast amounts of operational data from a large number of sensors to build a digital copy of each machine. This data powers the BI needed to understand trends over historical data in addition to recent hot data. -* CompanyXYZ's core business depends on the inventory management system – hence availability & reliability are core pillar requirements. Benefits of using Azure Cosmos DB: +## Related content - * By virtue of deep integration with Azure infrastructure, and transparent multi-region writes, global replication, Azure Cosmos DB provides industry-leading [99.999% high availability](high-availability.md) against regional outages. 
-* CompanyXYZ's supply chain partners may be in separate geographic locations but they may have to see a single view of the product inventory across the globe to support their local operations. This includes the need to be able to read updates made by other supply chain partners in real time. As well as being able to make updates without worrying about conflicts with other partners at high throughput. Benefits of using Azure Cosmos DB: -- * With its unique multi-region writes replication protocol and latch-free, write-optimized transactional store, Azure Cosmos DB guarantees less than 10-ms latencies for both indexed reads and writes at the 99th percentile globally. -- * High throughput ingestion of both batch & streaming data feeds with [real-time indexing](index-policy.md) in transactional store. -- * Azure Cosmos DB transactional store provides three more options than the two extremes of strong and eventual consistency levels to achieve the [availability vs performance tradeoffs](./consistency-levels.md) closest to the business need. --* CompanyXYZ's supply chain partners have highly fluctuating traffic patterns ranging from hundreds to millions of requests/s and thus the inventory management platform needs to deal with unexpected burstiness in traffic. Benefits of using Azure Cosmos DB: -- * Azure Cosmos DB's transactional store supports elastic scalability of storage and throughput using horizontal partitioning. Containers and databases configured in Autopilot mode can automatically and instantly scale the provisioned throughput based on the application needs without impacting the availability, latency, throughput, or performance of the workload globally. --* CompanyXYZ needs to establish a secure analytics platform to house system-wide historical inventory data to enable analytics and insights across supply chain partner, business units and functions. The analytics platform needs to enable collaboration across the system, traditional BI/reporting use cases, advanced analytics use cases and predictive intelligent solutions over the operational inventory data. Benefits of using Synapse Link for Azure Cosmos DB: -- * By using [Azure Cosmos DB analytical store](analytical-store-introduction.md), a fully isolated column store, Synapse Link enables no Extract-Transform-Load (ETL) analytics in [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md) against globally distributed operational data at scale. Business analysts, data engineers and data scientists can now use Synapse Spark or Synapse SQL in an interoperable manner to run near real-time business intelligence, analytics, and machine learning pipelines without impacting the performance of their transactional workloads on Azure Cosmos DB. See the [benefits of Synapse Link in Azure Cosmos DB](synapse-link.md) for more details. 
--## Next steps --To learn more, see the following docs: +* [Mirroring Azure Cosmos DB overview](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context) +* [Getting started with mirroring](/fabric/database/mirrored-database/azure-cosmos-db-tutorial?context=/azure/cosmos-db/context/context) + * [Azure Synapse Link for Azure Cosmos DB](synapse-link.md) -* [Azure Cosmos DB Analytical Store](analytical-store-introduction.md) - * [Working with Azure Synapse Link for Azure Cosmos DB](configure-synapse-link.md) -* [Frequently asked questions about Azure Synapse Link for Azure Cosmos DB](synapse-link-frequently-asked-questions.yml) --* [Apache Spark in Azure Synapse Analytics](../synapse-analytics/spark/apache-spark-concepts.md) --* [Serverless SQL pool runtime support in Azure Synapse Analytics](../synapse-analytics/sql/on-demand-workspace-overview.md) |
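The following is a minimal PySpark sketch of the predictive-pipeline pattern described above: query the operational data, derive or score insights, and write the results back for near-real-time use. It assumes a Synapse Spark pool with an Azure Cosmos DB linked service named `CosmosDbInventory` and containers named `Orders` and `OrderScores`; those names, the column names, and the scoring rule are illustrative placeholders, and the `cosmos.olap`/`cosmos.oltp` formats refer to the Azure Synapse Spark connector for the analytical and transactional stores.

```python
# Minimal sketch of the "prepare & train predictive pipeline" pattern described above.
# Assumes a Synapse Spark pool with a Cosmos DB linked service named "CosmosDbInventory"
# and containers "Orders" (source) and "OrderScores" (write-back target).
# Linked service, container, and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read operational data from the Azure Cosmos DB analytical store (no manual ETL).
orders = (
    spark.read.format("cosmos.olap")
    .option("spark.synapse.linkedService", "CosmosDbInventory")
    .option("spark.cosmos.container", "Orders")
    .load()
)

# Stand-in for model training/inference: score each customer from recent order activity.
scores = (
    orders.groupBy("customerId")
    .agg(F.count("*").alias("orderCount"), F.sum("orderTotal").alias("totalSpend"))
    .withColumn("riskScore", F.when(F.col("orderCount") < 3, 0.8).otherwise(0.2))
    .withColumn("id", F.col("customerId").cast("string"))  # Cosmos DB items need an "id"
)

# Write the results back to the transactional store for near-real-time operational scoring.
(
    scores.write.format("cosmos.oltp")
    .option("spark.synapse.linkedService", "CosmosDbInventory")
    .option("spark.cosmos.container", "OrderScores")
    .mode("append")
    .save()
)
```

A real pipeline would replace the rule-based scoring step with a Spark ML or Azure Machine Learning model, but the read/score/write-back shape stays the same.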
cosmos-db | Synapse Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link.md | Azure Synapse Link for Azure Cosmos DB is a cloud-native hybrid transactional an [Azure Cosmos DB analytical store](analytical-store-introduction.md), a fully isolated column store, can be used with Azure Synapse Link to enable no Extract-Transform-Load (ETL) analytics in [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md) against your operational data at scale. Business analysts, data engineers, and data scientists can now use Synapse Spark or Synapse SQL interchangeably to run near real-time business intelligence, analytics, and machine learning pipelines. You can analyze real-time data without affecting the performance of your transactional workloads on Azure Cosmos DB. +> [!IMPORTANT] +> Mirroring Azure Cosmos DB in Microsoft Fabric is now available in preview for the NoSQL API. This feature provides all the capabilities of Azure Synapse Link with better analytical performance, the ability to unify your data estate with Fabric OneLake, and open access to your data in OneLake in Delta Parquet format. If you're considering Azure Synapse Link, we recommend that you try mirroring to assess overall fit for your organization. To get started with mirroring, see [Mirroring Azure Cosmos DB](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context). + The following image shows the Azure Synapse Link integration with Azure Cosmos DB and Azure Synapse Analytics: :::image type="content" source="./media/synapse-link/synapse-analytics-cosmos-db-architecture.png" alt-text="Architecture diagram for Azure Synapse Analytics integration with Azure Cosmos DB" border="false"::: |
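As a rough illustration of the mirroring path, the following sketch queries Cosmos DB data that has been mirrored into Microsoft Fabric from a Fabric Spark notebook. It assumes the mirrored database's Delta tables are reachable from the attached lakehouse (for example, through a OneLake shortcut) under a table named `orders`; the table and column names are placeholders, not values from the article.

```python
# Minimal sketch: query Cosmos DB data mirrored into Microsoft Fabric from a Spark notebook.
# Assumes the mirrored database's Delta tables are visible to Spark through the attached
# lakehouse (for example, via a OneLake shortcut) as a table named "orders" (illustrative).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

daily_revenue = spark.sql(
    """
    SELECT CAST(orderDate AS DATE) AS orderDay,
           SUM(orderTotal)         AS revenue
    FROM   orders
    GROUP  BY CAST(orderDate AS DATE)
    ORDER  BY orderDay DESC
    """
)
daily_revenue.show(10)
```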
data-factory | How To Create Custom Event Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-custom-event-trigger.md | Title: Create custom event triggers in Azure Data Factory -description: Learn how to create a trigger in Azure Data Factory that runs a pipeline in response to a custom event published to Event Grid. +description: Learn how to create a trigger in Azure Data Factory that runs a pipeline in response to a custom event published to Azure Event Grid. Last updated 01/05/2024 [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] -Event-driven architecture (EDA) is a common data integration pattern that involves production, detection, consumption, and reaction to events. Data integration scenarios often require Azure Data Factory customers to trigger pipelines when certain events occur. Data Factory native integration with [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) now covers [custom topics](../event-grid/custom-topics.md). You send events to an event grid topic. Data Factory subscribes to the topic, listens, and then triggers pipelines accordingly. --> [!NOTE] -> The integration described in this article depends on [Azure Event Grid](https://azure.microsoft.com/services/event-grid/). Make sure that your subscription is registered with the Event Grid resource provider. For more information, see [Resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). You must be able to do the `Microsoft.EventGrid/eventSubscriptions/` action. This action is part of the [EventGrid EventSubscription Contributor](../role-based-access-control/built-in-roles.md#eventgrid-eventsubscription-contributor) built-in role. +Event-driven architecture is a common data integration pattern that involves production, detection, consumption, and reaction to events. Data integration scenarios often require Azure Data Factory customers to trigger pipelines when certain events occur. Data Factory native integration with [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) now covers [custom topics](../event-grid/custom-topics.md). You send events to an Event Grid topic. Data Factory subscribes to the topic, listens, and then triggers pipelines accordingly. +The integration described in this article depends on [Azure Event Grid](https://azure.microsoft.com/services/event-grid/). Make sure that your subscription is registered with the Event Grid resource provider. For more information, see [Resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). You must be able to do the `Microsoft.EventGrid/eventSubscriptions/` action. This action is part of the [EventGrid EventSubscription Contributor](../role-based-access-control/built-in-roles.md#eventgrid-eventsubscription-contributor) built-in role. > [!IMPORTANT]-> If you are using this feature in Azure Synapse Analytics, please ensure that your subscription is also registered with Data Factory resource provider, or otherwise you will get an error stating that _the creation of an "Event Subscription" failed_. -+> If you're using this feature in Azure Synapse Analytics, ensure that your subscription is also registered with a Data Factory resource provider. Otherwise, you get a message stating that "the creation of an Event Subscription failed." 
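To make the publishing step concrete, the following is a minimal Python sketch of sending a custom event to the Event Grid custom topic that the trigger subscribes to. The topic endpoint, access key, subject, event type, and `data` payload are illustrative placeholders rather than values from the article, and the snippet assumes the `azure-eventgrid` package is installed.

```python
# Minimal sketch: publish a custom event to the Event Grid custom topic that a
# Data Factory custom event trigger listens to. Endpoint, key, subject, event type,
# and payload are illustrative placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.eventgrid import EventGridPublisherClient, EventGridEvent

client = EventGridPublisherClient(
    "https://<your-topic>.<region>-1.eventgrid.azure.net/api/events",
    AzureKeyCredential("<topic-access-key>"),
)

event = EventGridEvent(
    subject="factories/contoso/orders",  # matched by "Subject begins with"/"Subject ends with"
    event_type="copycompleted",          # matched against the trigger's event types (case insensitive)
    data={"Department": "Finance"},      # free-form payload that pipeline parameters can reference
    data_version="1.0",
)

client.send(event)
```

When a subscribed trigger's subject and event-type filters match this event, the associated pipeline runs and can read `data` values through trigger parameterization.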
-If you combine pipeline parameters and a custom event trigger, you can parse and reference custom `data` payloads in pipeline runs. Because the `data` field in a custom event payload is a free-form, JSON key-value structure, you can control event-driven pipeline runs. +If you combine pipeline parameters and a custom event trigger, you can parse and reference custom `data` payloads in pipeline runs. Because the `data` field in a custom event payload is a freeform, JSON key-value structure, you can control event-driven pipeline runs. > [!IMPORTANT]-> If a key referenced in parameterization is missing in the custom event payload, `trigger run` will fail. You'll get an error that states the expression cannot be evaluated because property `keyName` doesn't exist. In this case, **no** `pipeline run` will be triggered by the event. +> If a key referenced in parameterization is missing in the custom event payload, `trigger run` fails. You get a message that states the expression can't be evaluated because the `keyName` property doesn't exist. In this case, **no** `pipeline run` is triggered by the event. ## Set up a custom topic in Event Grid To use the custom event trigger in Data Factory, you need to *first* set up a [custom topic in Event Grid](../event-grid/custom-topics.md). -Go to Azure Event Grid and create the topic yourself. For more information on how to create the custom topic, see Azure Event Grid [portal tutorials](../event-grid/custom-topics.md#azure-portal-tutorials) and [CLI tutorials](../event-grid/custom-topics.md#azure-cli-tutorials). +Go to Event Grid and create the topic yourself. For more information on how to create the custom topic, see Event Grid [portal tutorials](../event-grid/custom-topics.md#azure-portal-tutorials) and [Azure CLI tutorials](../event-grid/custom-topics.md#azure-cli-tutorials). > [!NOTE]-> The workflow is different from Storage Event Trigger. Here, Data Factory doesn't set up the topic for you. +> The workflow is different from a storage event trigger. Here, Data Factory doesn't set up the topic for you. -Data Factory expects events to follow the [Event Grid event schema](../event-grid/event-schema.md). Make sure event payloads have the following fields: +Data Factory expects events to follow the [Event Grid event schema](../event-grid/event-schema.md). Make sure that event payloads have the following fields: ```json [ Data Factory expects events to follow the [Event Grid event schema](../event-gri ## Use Data Factory to create a custom event trigger -1. Go to Azure Data Factory and sign in. +1. Go to Data Factory and sign in. 1. Switch to the **Edit** tab. Look for the pencil icon. 1. Select **Trigger** on the menu and then select **New/Edit**. -1. On the **Add Triggers** page, select **Choose trigger**, and then select **+New**. +1. On the **Add Triggers** page, select **Choose trigger**, and then select **+ New**. -1. Select **Custom events** for **Type**. +1. Under **Type**, select **Custom events**. - :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-1-creation.png" alt-text="Screenshot of Author page to create a new custom event trigger in Data Factory UI." lightbox="media/how-to-create-custom-event-trigger/custom-event-1-creation-expanded.png"::: + :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-1-creation.png" alt-text="Screenshot that shows creating a new custom event trigger in the Data Factory UI." 
lightbox="media/how-to-create-custom-event-trigger/custom-event-1-creation-expanded.png"::: -1. Select your custom topic from the Azure subscription dropdown or manually enter the event topic scope. +1. Select your custom topic from the Azure subscription dropdown list or manually enter the event topic scope. > [!NOTE]- > To create or modify a custom event trigger in Data Factory, you need to use an Azure account with appropriate role-based access control (Azure RBAC). No additional permission is required. The Data Factory service principal does *not* require special permission to your Event Grid. For more information about access control, see the [Role-based access control](#role-based-access-control) section. + > To create or modify a custom event trigger in Data Factory, you need to use an Azure account with appropriate Azure role-based access control (Azure RBAC). No other permission is required. The Data Factory service principal does *not* require special permission to your Event Grid. For more information about access control, see the [Role-based access control](#role-based-access-control) section. -1. The **Subject begins with** and **Subject ends with** properties allow you to filter for trigger events. Both properties are optional. +1. The `Subject begins with` and `Subject ends with` properties allow you to filter for trigger events. Both properties are optional. -1. Use **+ New** to add **Event Types** to filter on. The list of custom event triggers uses an OR relationship. When a custom event with an `eventType` property that matches one on the list, a pipeline run is triggered. The event type is case insensitive. For example, in the following screenshot, the trigger matches all `copycompleted` or `copysucceeded` events that have a subject that begins with *factories*. +1. Use **+ New** to add **Event types** to filter on. The list of custom event triggers uses an OR relationship. When a custom event with an `eventType` property matches one on the list, a pipeline run is triggered. The event type is case insensitive. For example, in the following screenshot, the trigger matches all `copycompleted` or `copysucceeded` events that have a subject that begins with *factories*. - :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-2-properties.png" alt-text="Screenshot of Edit Trigger page to explain Event Types and Subject filtering in Data Factory UI."::: + :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-2-properties.png" alt-text="Screenshot that shows the Edit trigger page to explain Event types and Subject filtering in the Data Factory UI."::: ++1. A custom event trigger can parse and send a custom `data` payload to your pipeline. You create the pipeline parameters and then fill in the values on the **Parameters** page. Use the format `@triggerBody().event.data._keyName_` to parse the data payload and pass values to the pipeline parameters. ++ For a detailed explanation, see: -1. A custom event trigger can parse and send a custom `data` payload to your pipeline. You create the pipeline parameters, and then fill in the values on the **Parameters** page. Use the format `@triggerBody().event.data._keyName_` to parse the data payload and pass values to the pipeline parameters. 
- - For a detailed explanation, see the following articles: - [Reference trigger metadata in pipelines](how-to-use-trigger-parameterization.md) - [System variables in custom event trigger](control-flow-system-variables.md#custom-event-trigger-scope) - :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-4-trigger-values.png" alt-text="Screenshot of pipeline parameters settings."::: + :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-4-trigger-values.png" alt-text="Screenshot that shows pipeline parameters settings."::: - :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-3-parameters.png" alt-text="Screenshot of the parameters page to reference data payload in custom event."::: + :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-3-parameters.png" alt-text="Screenshot that shows the parameters page to reference data payload in a custom event."::: -1. After you've entered the parameters, select **OK**. +1. After you enter the parameters, select **OK**. ## Advanced filtering -Custom event trigger supports advanced filtering capabilities, similar to [Event Grid Advanced Filtering](../event-grid/event-filtering.md#advanced-filtering). These conditional filters allow pipelines to trigger based upon the _values_ of event payload. For instance, you may have a field in the event payload, named _Department_, and pipeline should only trigger if _Department_ equals to _Finance_. You may also specify complex logic, such as _date_ field in list [1, 2, 3, 4, 5], _month_ field __not__ in list [11, 12], _tag_ field contains any of ['Fiscal Year 2021', 'FiscalYear2021', 'FY2021']. +Custom event triggers support advanced filtering capabilities, similar to [Event Grid advanced filtering](../event-grid/event-filtering.md#advanced-filtering). These conditional filters allow pipelines to trigger based on the _values_ of the event payload. For instance, you might have a field in the event payload named _Department_, and the pipeline should only trigger if _Department_ equals _Finance_. You might also specify complex logic, such as the _date_ field in list [1, 2, 3, 4, 5], the _month_ field *not* in the list [11, 12], and if the _tag_ field contains [Fiscal Year 2021, FiscalYear2021, or FY2021]. - :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-5-advanced-filters.png" alt-text="Screenshot of setting advanced filters for customer event trigger"::: + :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-5-advanced-filters.png" alt-text="Screenshot that shows setting advanced filters for a customer event trigger."::: -As of today custom event trigger supports a __subset__ of [advanced filtering operators](../event-grid/event-filtering.md#advanced-filtering) in Event Grid. Following filter conditions are supported: +As of today, custom event triggers support a *subset* of [advanced filtering operators](../event-grid/event-filtering.md#advanced-filtering) in Event Grid. 
The following filter conditions are supported: -* NumberIn -* NumberNotIn -* NumberLessThan -* NumberGreaterThan -* NumberLessThanOrEquals -* NumberGreaterThanOrEquals -* BoolEquals -* StringContains -* StringBeginsWith -* StringEndsWith -* StringIn -* StringNotIn +* `NumberIn` +* `NumberNotIn` +* `NumberLessThan` +* `NumberGreaterThan` +* `NumberLessThanOrEquals` +* `NumberGreaterThanOrEquals` +* `BoolEquals` +* `StringContains` +* `StringBeginsWith` +* `StringEndsWith` +* `StringIn` +* `StringNotIn` -Select **+New** to add new filter conditions. +Select **+ New** to add new filter conditions. -Additionally, custom event triggers obey the [same limitations as Event Grid](../event-grid/event-filtering.md#limitations), including: +Custom event triggers also obey the [same limitations as Event Grid](../event-grid/event-filtering.md#limitations), such as: -* 5 advanced filters and 25 filter values across all the filters per custom event trigger -* 512 characters per string value -* 5 values for in and not in operators -* keys cannot have `.` (dot) character in them, for example, `john.doe@contoso.com`. Currently, there's no support for escape characters in keys. +* 5 advanced filters and 25 filter values across all the filters per custom event trigger. +* 512 characters per string value. +* 5 values for `in` and `not in` operators. +* Keys can't have the `.` (dot) character in them, for example, `john.doe@contoso.com`. Currently, there's no support for escape characters in keys. * The same key can be used in more than one filter. -Data Factory relies upon the latest _GA_ version of [Event Grid API](../event-grid/whats-new.md). As new API versions get to GA stage, Data Factory will expand its support for more advanced filtering operators. +Data Factory relies on the latest general availability (GA) version of the [Event Grid API](../event-grid/whats-new.md). As new API versions get to the GA stage, Data Factory expands its support for more advanced filtering operators. ## JSON schema -The following table provides an overview of the schema elements that are related to custom event triggers: +The following table provides an overview of the schema elements that are related to custom event triggers. | JSON element | Description | Type | Allowed values | Required | ||-||||-| `scope` | The Azure Resource Manager resource ID of the Event Grid topic. | String | Azure Resource Manager ID | Yes | +| `scope` | The Azure Resource Manager resource ID of the Event Grid topic. | String | Azure Resource Manager ID | Yes. | | `events` | The type of events that cause this trigger to fire. | Array of strings | | Yes, at least one value is expected. |-| `subjectBeginsWith` | The `subject` field must begin with the provided pattern for the trigger to fire. For example, _factories_ only fire the trigger for event subjects that start with *factories*. | String | | No | -| `subjectEndsWith` | The `subject` field must end with the provided pattern for the trigger to fire. | String | | No | -| `advancedFilters` | List of JSON blobs, each specifying a filter condition. Each blob specifies `key`, `operatorType`, and `values`. | List of JSON blob | | No | +| `subjectBeginsWith` | The `subject` field must begin with the provided pattern for the trigger to fire. For example, *factories* only fire the trigger for event subjects that start with *factories*. | String | | No. | +| `subjectEndsWith` | The `subject` field must end with the provided pattern for the trigger to fire. | String | | No. 
| +| `advancedFilters` | List of JSON blobs, each specifying a filter condition. Each blob specifies `key`, `operatorType`, and `values`. | List of JSON blobs | | No. | ## Role-based access control -Azure Data Factory uses Azure role-based access control (RBAC) to prohibit unauthorized access. To function properly, Data Factory requires access to: +Data Factory uses Azure RBAC to prohibit unauthorized access. To function properly, Data Factory requires access to: + - Listen to events. - Subscribe to updates from events. - Trigger pipelines linked to custom events. -To successfully create or update a custom event trigger, you need to sign in to Data Factory with an Azure account that has appropriate access. Otherwise, the operation will fail with an _Access Denied_ error. +To successfully create or update a custom event trigger, you need to sign in to Data Factory with an Azure account that has appropriate access. Otherwise, the operation fails with the message "Access Denied." -Data Factory doesn't require special permission to your Event Grid. You also do *not* need to assign special Azure RBAC role permission to the Data Factory service principal for the operation. +Data Factory doesn't require special permission to your instance of Event Grid. You also do *not* need to assign special Azure RBAC role permission to the Data Factory service principal for the operation. Specifically, you need `Microsoft.EventGrid/EventSubscriptions/Write` permission on `/subscriptions/####/resourceGroups//####/providers/Microsoft.EventGrid/topics/someTopics`. -- When authoring in the data factory (in the development environment for instance), the Azure account signed in needs to have the above permission-- When publishing through [CI/CD](continuous-integration-delivery.md), the account used to publish the ARM template into the testing or production factory needs to have the above permission.+- When you author in the data factory (in the development environment, for instance), the Azure account signed in needs to have the preceding permission. +- When you publish through [continuous integration and continuous delivery](continuous-integration-delivery.md), the account used to publish the Azure Resource Manager template into the testing or production factory needs to have the preceding permission. ## Related content |
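For reference, here's an illustrative sketch that assembles the schema elements from the preceding table into a trigger definition, expressed as a Python dictionary. The subscription, resource group, topic name, event types, and filter values are placeholders, and the full trigger resource includes additional properties (such as the pipelines it starts) that are omitted here.

```python
# Illustrative custom event trigger properties based on the JSON schema table above.
# All identifiers and values are placeholders; the complete trigger resource also
# references the pipelines to run, which is omitted from this sketch.
custom_event_trigger_properties = {
    "scope": (
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
        "/providers/Microsoft.EventGrid/topics/<custom-topic>"
    ),
    "events": ["copycompleted", "copysucceeded"],  # OR relationship, case insensitive
    "subjectBeginsWith": "factories",
    "advancedFilters": [
        {   # fire only when data.Department equals "Finance"
            "key": "data.Department",
            "operatorType": "StringIn",
            "values": ["Finance"],
        }
    ],
}
```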
data-factory | How To Create Event Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-event-trigger.md | Title: Create event-based triggers- -description: Learn how to create a trigger in an Azure Data Factory or Azure Synapse Analytics that runs a pipeline in response to an event. ++description: Learn how to create a trigger in Azure Data Factory or Azure Synapse Analytics that runs a pipeline in response to an event. Last updated 05/24/2024 [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] -This article describes the Storage Event Triggers that you can create in your Data Factory or Synapse pipelines. +This article describes the storage event triggers that you can create in your Azure Data Factory or Azure Synapse Analytics pipelines. -Event-driven architecture (EDA) is a common data integration pattern that involves production, detection, consumption, and reaction to events. Data integration scenarios often require customers to trigger pipelines that are triggered from events on a storage account, such as the arrival or deletion of a file in Azure Blob Storage account. Data Factory and Synapse pipelines natively integrate with [Azure Event Grid](https://azure.microsoft.com/services/event-grid/), which lets you trigger pipelines on such events. +Event-driven architecture is a common data integration pattern that involves production, detection, consumption, and reaction to events. Data integration scenarios often require customers to trigger pipelines that are triggered from events on an Azure Storage account, such as the arrival or deletion of a file in Azure Blob Storage account. Data Factory and Azure Synapse Analytics pipelines natively integrate with [Azure Event Grid](https://azure.microsoft.com/services/event-grid/), which lets you trigger pipelines on such events. ## Storage event trigger considerations -There are several things to consider when using storage event triggers: +Consider the following points when you use storage event triggers: -- The integration described in this article depends on [Azure Event Grid](https://azure.microsoft.com/services/event-grid/). Make sure that your subscription is registered with the Event Grid resource provider. For more info, see [Resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). You must be able to do the *Microsoft.EventGrid/eventSubscriptions/** action. This action is part of the EventGrid EventSubscription Contributor built-in role.-- If you're using this feature in Azure Synapse Analytics, ensure that you also register your subscription with the Data Factory resource provider. Otherwise you get an error stating that _the creation of an "Event Subscription" failed_.-- If the blob storage account resides behind a [private endpoint](../storage/common/storage-private-endpoints.md) and blocks public network access, you need to configure network rules to allow communications from blob storage to Azure Event Grid. You can either grant storage access to trusted Azure services, such as Event Grid, following [Storage documentation](../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services), or configure private endpoints for Event Grid that map to VNet address space, following [Event Grid documentation](../event-grid/configure-private-endpoints.md)-- The Storage Event Trigger currently supports only Azure Data Lake Storage Gen2 and General-purpose version 2 storage accounts. 
If you're working with SFTP Storage Events you need to specify the SFTP Data API under the filtering section too. Due to an Azure Event Grid limitation, Azure Data Factory only supports a maximum of 500 storage event triggers per storage account.-- To create a new or modify an existing Storage Event Trigger, the Azure account used to log into the service and publish the storage event trigger must have appropriate role based access control (Azure RBAC) permission on the storage account. No other permissions are required: Service Principal for the Azure Data Factory and Azure Synapse does _not_ need special permission to either the Storage account or Event Grid. For more information about access control, see [Role based access control](#role-based-access-control) section.-- If you applied an ARM lock to your Storage Account, it might impact the blob trigger's ability to create or delete blobs. A **ReadOnly** lock prevents both creation and deletion, while a **DoNotDelete** lock prevents deletion. Ensure you account for these restrictions to avoid any issues with your triggers.-- File arrival triggers are not recommended as a triggering mechanism from data flow sinks. Data flows perform a number of file renaming and partition file shuffling tasks in the target folder that can inadvertently trigger a file arrival event before the complete processing of your data.+- The integration described in this article depends on [Azure Event Grid](https://azure.microsoft.com/services/event-grid/). Make sure that your subscription is registered with the Event Grid resource provider. For more information, see [Resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). You must be able to do the `Microsoft.EventGrid/eventSubscriptions/` action. This action is part of the `EventGrid EventSubscription Contributor` built-in role. +- If you're using this feature in Azure Synapse Analytics, ensure that you also register your subscription with the Data Factory resource provider. Otherwise, you get a message stating that "the creation of an Event Subscription failed." +- If the Blob Storage account resides behind a [private endpoint](../storage/common/storage-private-endpoints.md) and blocks public network access, you need to configure network rules to allow communications from Blob Storage to Event Grid. You can either grant storage access to trusted Azure services, such as Event Grid, following [Storage documentation](../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services), or configure private endpoints for Event Grid that map to a virtual network address space, following [Event Grid documentation](../event-grid/configure-private-endpoints.md). +- The storage event trigger currently supports only Azure Data Lake Storage Gen2 and General-purpose version 2 storage accounts. If you're working with Secure File Transfer Protocol (SFTP) storage events, you need to specify the SFTP Data API under the filtering section too. Because of an Event Grid limitation, Data Factory only supports a maximum of 500 storage event triggers per storage account. +- To create a new storage event trigger or modify an existing one, the Azure account you use to sign in to the service and publish the storage event trigger must have appropriate role-based access control (Azure RBAC) permission on the storage account. No other permissions are required. 
Service principal for the Azure Data Factory and Azure Synapse Analytics does _not_ need special permission to either the storage account or Event Grid. For more information about access control, see the [Role-based access control](#role-based-access-control) section. +- If you applied an Azure Resource Manager lock to your storage account, it might affect the blob trigger's ability to create or delete blobs. A `ReadOnly` lock prevents both creation and deletion, while a `DoNotDelete` lock prevents deletion. Ensure that you account for these restrictions to avoid any issues with your triggers. +- We don't recommend file arrival triggers as a triggering mechanism from data flow sinks. Data flows perform a number of file renaming and partition file shuffling tasks in the target folder that can inadvertently trigger a file arrival event before the complete processing of your data. -## Create a trigger with UI +## Create a trigger with the UI -This section shows you how to create a storage event trigger within the Azure Data Factory and Synapse pipeline User Interface. +This section shows you how to create a storage event trigger within the Azure Data Factory and Azure Synapse Analytics pipeline user interface (UI). -1. Switch to the **Edit** tab in Data Factory, or the **Integrate** tab in Azure Synapse. +1. Switch to the **Edit** tab in Data Factory or the **Integrate** tab in Azure Synapse Analytics. -1. Select **Trigger** on the menu, then select **New/Edit**. +1. On the menu, select **Trigger**, and then select **New/Edit**. -1. On the **Add Triggers** page, select **Choose trigger...**, then select **+New**. +1. On the **Add Triggers** page, select **Choose trigger**, and then select **+ New**. -1. Select trigger type **Storage Event** +1. Select the trigger type **Storage events**. # [Azure Data Factory](#tab/data-factory)- :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-1.png" lightbox="media/how-to-create-event-trigger/event-based-trigger-image-1.png" alt-text="Screenshot of Author page to create a new storage event trigger in Data Factory UI." ::: - # [Azure Synapse](#tab/synapse-analytics) - :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-1-synapse.png" lightbox="media/how-to-create-event-trigger/event-based-trigger-image-1-synapse.png" alt-text="Screenshot of Author page to create a new storage event trigger in the Azure Synapse UI."::: + :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-1.png" lightbox="media/how-to-create-event-trigger/event-based-trigger-image-1.png" alt-text="Screenshot that shows creating a new storage event trigger in the Data Factory UI." ::: + # [Azure Synapse Analytics](#tab/synapse-analytics) + :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-1-synapse.png" lightbox="media/how-to-create-event-trigger/event-based-trigger-image-1-synapse.png" alt-text="Screenshot that shows creating a new storage event trigger in the Azure Synapse Analytics UI."::: -1. Select your storage account from the Azure subscription dropdown or manually using its Storage account resource ID. Choose which container you wish the events to occur on. Container selection is required, but be mindful that selecting all containers can lead to a large number of events. +1. Select your storage account from the Azure subscription dropdown list or manually by using its storage account resource ID. 
Choose the container on which you want the events to occur. Container selection is required, but selecting all containers can lead to a large number of events. -1. The **Blob path begins with** and **Blob path ends with** properties allow you to specify the containers, folders, and blob names for which you want to receive events. Your storage event trigger requires at least one of these properties to be defined. You can use variety of patterns for both **Blob path begins with** and **Blob path ends with** properties, as shown in the examples later in this article. +1. The `Blob path begins with` and `Blob path ends with` properties allow you to specify the containers, folders, and blob names for which you want to receive events. Your storage event trigger requires at least one of these properties to be defined. You can use various patterns for both `Blob path begins with` and `Blob path ends with` properties, as shown in the examples later in this article. - * **Blob path begins with:** The blob path must start with a folder path. Valid values include `2018/` and `2018/april/shoes.csv`. This field can't be selected if a container isn't selected. - * **Blob path ends with:** The blob path must end with a file name or extension. Valid values include `shoes.csv` and `.csv`. Container and folder names, when specified, they must be separated by a `/blobs/` segment. For example, a container named 'orders' can have a value of `/orders/blobs/2018/april/shoes.csv`. To specify a folder in any container, omit the leading '/' character. For example, `april/shoes.csv` triggers an event on any file named `shoes.csv` in folder a called 'april' in any container. - * Note that Blob path **begins with** and **ends with** are the only pattern matching allowed in Storage Event Trigger. Other types of wildcard matching aren't supported for the trigger type. + * `Blob path begins with`: The blob path must start with a folder path. Valid values include `2018/` and `2018/april/shoes.csv`. This field can't be selected if a container isn't selected. + * `Blob path ends with`: The blob path must end with a file name or extension. Valid values include `shoes.csv` and `.csv`. Container and folder names, when specified, must be separated by a `/blobs/` segment. For example, a container named `orders` can have a value of `/orders/blobs/2018/april/shoes.csv`. To specify a folder in any container, omit the leading `/` character. For example, `april/shoes.csv` triggers an event on any file named `shoes.csv` in a folder called `april` in any container. + + Note that `Blob path begins with` and `Blob path ends with` are the only pattern matching allowed in a storage event trigger. Other types of wildcard matching aren't supported for the trigger type. -1. Select whether your trigger responds to a **Blob created** event, **Blob deleted** event, or both. In your specified storage location, each event triggers the Data Factory and Synapse pipelines associated with the trigger. +1. Select whether your trigger responds to a **Blob created** event, a **Blob deleted** event, or both. In your specified storage location, each event triggers the Data Factory and Azure Synapse Analytics pipelines associated with the trigger. 
- :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-2.png" alt-text="Screenshot of storage event trigger creation page."::: + :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-2.png" alt-text="Screenshot that shows a storage event trigger creation page."::: 1. Select whether or not your trigger ignores blobs with zero bytes. -1. After you configure your trigger, click on **Next: Data preview**. This screen shows the existing blobs matched by your storage event trigger configuration. Make sure you have specific filters. Configuring filters that are too broad can match a large number of files created/deleted and may significantly impact your cost. Once your filter conditions are verified, click **Finish**. +1. After you configure your trigger, select **Next: Data preview**. This screen shows the existing blobs matched by your storage event trigger configuration. Make sure you have specific filters. Configuring filters that are too broad can match a large number of files that were created or deleted and might significantly affect your cost. After your filter conditions are verified, select **Finish**. - :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-3.png" alt-text="Screenshot of storage event trigger preview page."::: + :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-3.png" alt-text="Screenshot that shows the storage event trigger preview page."::: -1. To attach a pipeline to this trigger, go to the pipeline canvas and click **Trigger** and select **New/Edit**. When the side nav appears, click on the **Choose trigger...** dropdown and select the trigger you created. Click **Next: Data preview** to confirm the configuration is correct and then **Next** to validate the Data preview is correct. +1. To attach a pipeline to this trigger, go to the pipeline canvas and select **Trigger** > **New/Edit**. When the side pane appears, select the **Choose trigger** dropdown list and select the trigger you created. Select **Next: Data preview** to confirm that the configuration is correct. Then select **Next** to validate that the data preview is correct. -1. If your pipeline has parameters, you can specify them on the trigger runs parameter side nav. The storage event trigger captures the folder path and file name of the blob into the properties `@triggerBody().folderPath` and `@triggerBody().fileName`. To use the values of these properties in a pipeline, you must map the properties to pipeline parameters. After mapping the properties to parameters, you can access the values captured by the trigger through the `@pipeline().parameters.parameterName` expression throughout the pipeline. For detailed explanation, see [Reference Trigger Metadata in Pipelines](how-to-use-trigger-parameterization.md) +1. If your pipeline has parameters, you can specify them on the **Trigger Run Parameters** side pane. The storage event trigger captures the folder path and file name of the blob into the properties `@triggerBody().folderPath` and `@triggerBody().fileName`. To use the values of these properties in a pipeline, you must map the properties to pipeline parameters. After you map the properties to parameters, you can access the values captured by the trigger through the `@pipeline().parameters.parameterName` expression throughout the pipeline. 
For a detailed explanation, see [Reference trigger metadata in pipelines](how-to-use-trigger-parameterization.md). - :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-4.png" alt-text="Screenshot of storage event trigger mapping properties to pipeline parameters."::: + :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-4.png" alt-text="Screenshot that shows storage event trigger mapping properties to pipeline parameters."::: - In the preceding example, the trigger is configured to fire when a blob path ending in .csv is created in the folder _event-testing_ in the container _sample-data_. The **folderPath** and **fileName** properties capture the location of the new blob. For example, when MoviesDB.csv is added to the path sample-data/event-testing, `@triggerBody().folderPath` has a value of `sample-data/event-testing` and `@triggerBody().fileName` has a value of `moviesDB.csv`. These values are mapped, in the example, to the pipeline parameters `sourceFolder` and `sourceFile`, which can be used throughout the pipeline as `@pipeline().parameters.sourceFolder` and `@pipeline().parameters.sourceFile` respectively. + In the preceding example, the trigger is configured to fire when a blob path ending in .csv is created in the folder _event-testing_ in the container _sample-data_. The `folderPath` and `fileName` properties capture the location of the new blob. For example, when MoviesDB.csv is added to the path _sample-data/event-testing_, `@triggerBody().folderPath` has a value of `sample-data/event-testing` and `@triggerBody().fileName` has a value of `moviesDB.csv`. These values are mapped, in the example, to the pipeline parameters `sourceFolder` and `sourceFile`, which can be used throughout the pipeline as `@pipeline().parameters.sourceFolder` and `@pipeline().parameters.sourceFile`, respectively. -1. Click **Finish** once you're done. +1. After you're finished, select **Finish**. ## JSON schema -The following table provides an overview of the schema elements that are related to storage event triggers: +The following table provides an overview of the schema elements that are related to storage event triggers. -| **JSON Element** | **Description** | **Type** | **Allowed Values** | **Required** | +| JSON element | Description | Type | Allowed values | Required | | - | | -- | | |-| **scope** | The Azure Resource Manager resource ID of the Storage Account. | String | Azure Resource Manager ID | Yes | -| **events** | The type of events that cause this trigger to fire. | Array | Microsoft.Storage.BlobCreated, Microsoft.Storage.BlobDeleted | Yes, any combination of these values. | -| **blobPathBeginsWith** | The blob path must begin with the pattern provided for the trigger to fire. For example, `/records/blobs/december/` only fires the trigger for blobs in the `december` folder under the `records` container. | String | | Provide a value for at least one of these properties: `blobPathBeginsWith` or `blobPathEndsWith`. | -| **blobPathEndsWith** | The blob path must end with the pattern provided for the trigger to fire. For example, `december/boxes.csv` only fires the trigger for blobs named `boxes` in a `december` folder. | String | | Provide a value for at least one of these properties: `blobPathBeginsWith` or `blobPathEndsWith`. | -| **ignoreEmptyBlobs** | Whether or not zero-byte blobs triggers a pipeline run. By default, this is set to true. 
| Boolean | true or false | No | +| scope | The Azure Resource Manager resource ID of the storage account. | String | Azure Resource Manager ID | Yes. | +| events | The type of events that cause this trigger to fire. | Array | `Microsoft.Storage.BlobCreated`, `Microsoft.Storage.BlobDeleted` | Yes, any combination of these values. | +| `blobPathBeginsWith` | The blob path must begin with the pattern provided for the trigger to fire. For example, `/records/blobs/december/` only fires the trigger for blobs in the `december` folder under the `records` container. | String | | Provide a value for at least one of these properties: `blobPathBeginsWith` or `blobPathEndsWith`. | +| `blobPathEndsWith` | The blob path must end with the pattern provided for the trigger to fire. For example, `december/boxes.csv` only fires the trigger for blobs named `boxes` in a `december` folder. | String | | Provide a value for at least one of these properties: `blobPathBeginsWith` or `blobPathEndsWith`. | +| `ignoreEmptyBlobs` | Whether or not zero-byte blobs triggers a pipeline run. By default, this is set to `true`. | Boolean | true or false | No. | ## Examples of storage event triggers This section provides examples of storage event trigger settings. > [!IMPORTANT]-> You have to include the `/blobs/` segment of the path, as shown in the following examples, whenever you specify container and folder, container and file, or container, folder, and file. For **blobPathBeginsWith**, the UI automatically adds `/blobs/` between the folder and container name in the trigger JSON. +> You have to include the `/blobs/` segment of the path, as shown in the following examples, whenever you specify container and folder, container and file, or container, folder, and file. For `blobPathBeginsWith`, the UI automatically adds `/blobs/` between the folder and container name in the trigger JSON. | Property | Example | Description | ||||-| **Blob path begins with** | `/containername/` | Receives events for any blob in the container. | -| **Blob path begins with** | `/containername/blobs/foldername/` | Receives events for any blobs in the `containername` container and `foldername` folder. | -| **Blob path begins with** | `/containername/blobs/foldername/subfoldername/` | You can also reference a subfolder. | -| **Blob path begins with** | `/containername/blobs/foldername/file.txt` | Receives events for a blob named `file.txt` in the `foldername` folder under the `containername` container. | -| **Blob path ends with** | `file.txt` | Receives events for a blob named `file.txt` in any path. | -| **Blob path ends with** | `/containername/blobs/file.txt` | Receives events for a blob named `file.txt` under container `containername`. | -| **Blob path ends with** | `foldername/file.txt` | Receives events for a blob named `file.txt` in `foldername` folder under any container. | +| `Blob path begins with` | `/containername/` | Receives events for any blob in the container. | +| `Blob path begins with` | `/containername/blobs/foldername/` | Receives events for any blobs in the `containername` container and `foldername` folder. | +| `Blob path begins with` | `/containername/blobs/foldername/subfoldername/` | You can also reference a subfolder. | +| `Blob path begins with` | `/containername/blobs/foldername/file.txt` | Receives events for a blob named `file.txt` in the `foldername` folder under the `containername` container. | +| `Blob path ends with` | `file.txt` | Receives events for a blob named `file.txt` in any path. 
| +| `Blob path ends with` | `/containername/blobs/file.txt` | Receives events for a blob named `file.txt` under the container `containername`. | +| `Blob path ends with` | `foldername/file.txt` | Receives events for a blob named `file.txt` in the `foldername` folder under any container. | ## Role-based access control -Azure Data Factory and Synapse pipelines use Azure role-based access control (Azure RBAC) to ensure that unauthorized access to listen to, subscribe to updates from, and trigger pipelines linked to blob events, are strictly prohibited. +Data Factory and Azure Synapse Analytics pipelines use Azure role-based access control (Azure RBAC) to ensure that unauthorized access to listen to, subscribe to updates from, and trigger pipelines linked to blob events are strictly prohibited. -* To successfully create a new or update an existing Storage Event Trigger, the Azure account signed into the service needs to have appropriate access to the relevant storage account. Otherwise, the operation fails with _Access Denied_. -* Azure Data Factory and Azure Synapse need no special permission to your Event Grid, and you do _not_ need to assign special RBAC permission to the Data Factory or Azure Synapse service principal for the operation. +* To successfully create a new storage event trigger or update an existing one, the Azure account signed in to the service needs to have appropriate access to the relevant storage account. Otherwise, the operation fails with the message "Access Denied." +* Data Factory and Azure Synapse Analytics need no special permission to your Event Grid instance, and you do *not* need to assign special RBAC permission to the Data Factory or Azure Synapse Analytics service principal for the operation. -Any of following RBAC settings works for storage event trigger: +Any of the following RBAC settings work for storage event triggers: * Owner role to the storage account * Contributor role to the storage account-* _Microsoft.EventGrid/EventSubscriptions/Write_ permission to storage account _/subscriptions/####/resourceGroups/####/providers/Microsoft.Storage/storageAccounts/storageAccountName_ +* `Microsoft.EventGrid/EventSubscriptions/Write` permission to the storage account `/subscriptions/####/resourceGroups/####/providers/Microsoft.Storage/storageAccounts/storageAccountName` +Specifically: -Specifically, +- When you author in the data factory (in the development environment for instance), the Azure account signed in needs to have the preceding permission. +- When you publish through [continuous integration and continuous delivery](continuous-integration-delivery.md), the account used to publish the Azure Resource Manager template into the testing or production factory needs to have the preceding permission. -- When you author in the data factory (in the development environment for instance), the Azure account signed in needs to have the above permission-- When you publish through [CI/CD](continuous-integration-delivery.md), the account used to publish the ARM template into the testing or production factory needs to have the above permission.+To understand how the service delivers the two promises, let's take a step back and peek behind the scenes. Here are the high-level workflows for integration between Data Factory/Azure Synapse Analytics, Storage, and Event Grid. -In order to understand how the service delivers the two promises, let's take back a step and take a peek behind the scenes. 
Here are the high-level work flows for integration between Azure Data Factory/Azure Synapse, Storage, and Event Grid. +### Create a new storage event trigger -### Create a new Storage Event Trigger +This high-level workflow describes how Data Factory interacts with Event Grid to create a storage event trigger. The data flow is the same in Azure Synapse Analytics, with Azure Synapse Analytics pipelines taking the role of the data factory in the following diagram. -This high-level work flow describes how Azure Data Factory interacts with Event Grid to create a Storage Event Trigger. The data flow is the same in Azure Synapse, with Synapse pipelines taking the role of the Data Factory in the following diagram. +Two noticeable callouts from the workflows: -Two noticeable call outs from the work flows: --* Azure Data Factory and Azure Synapse make _no_ direct contact with Storage account. Request to create a subscription is instead relayed and processed by Event Grid. Hence, the service needs no permission to Storage account for this step. --* Access control and permission checking happen within the service. Before the service sends a request to subscribe to storage event, it checks the permission for the user. More specifically, it checks whether the Azure account signed in and attempting to create the Storage Event trigger has appropriate access to the relevant storage account. If the permission check fails, trigger creation also fails. +* Data Factory and Azure Synapse Analytics make _no_ direct contact with the storage account. The request to create a subscription is instead relayed and processed by Event Grid. The service needs no permission to access the storage account for this step. +* Access control and permission checking happen within the service. Before the service sends a request to subscribe to a storage event, it checks the permission for the user. More specifically, it checks whether the Azure account that's signed in and attempting to create the storage event trigger has appropriate access to the relevant storage account. If the permission check fails, trigger creation also fails. ### Storage event trigger pipeline run -This high-level work flow describes how storage event trigger pipelines run through Event Grid. For Azure Synapse the data flow is the same, with Synapse pipelines taking the role of the Data Factory in the diagram below. +This high-level workflow describes how storage event trigger pipelines run through Event Grid. For Azure Synapse Analytics, the data flow is the same, with Azure Synapse Analytics pipelines taking the role of Data Factory in the following diagram. -There are three noticeable call outs in the workflow related to Event triggering pipelines within the service: +Three noticeable callouts in the workflow are related to event triggering pipelines within the service: -* Event Grid uses a Push model that relays the message as soon as possible when storage drops the message into the system. This is different from messaging system, such as Kafka where a Pull system is used. -* Event Trigger serves as an active listener to the incoming message and it properly triggers the associated pipeline. -* Storage Event Trigger itself makes no direct contact with Storage account +* Event Grid uses a Push model that relays the message as soon as possible when storage drops the message into the system. This approach is different from a messaging system, such as Kafka, where a Pull system is used. 
+* The event trigger serves as an active listener to the incoming message and it properly triggers the associated pipeline. +* The storage event trigger itself makes no direct contact with the storage account. - * That said, if you have a Copy or other activity inside the pipeline to process the data in Storage account, the service makes direct contact with Storage, using the credentials stored in the Linked Service. Ensure that Linked Service is set up appropriately - * However, if you make no reference to the Storage account in the pipeline, you don't need to grant permission to the service to access Storage account + * If you have a Copy activity or another activity inside the pipeline to process the data in the storage account, the service makes direct contact with the storage account by using the credentials stored in the linked service. Ensure that the linked service is set up appropriately. + * If you make no reference to the storage account in the pipeline, you don't need to grant permission to the service to access the storage account. ## Related content -* For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json). -* Learn how to reference trigger metadata in pipeline, see [Reference Trigger Metadata in Pipeline Runs](how-to-use-trigger-parameterization.md) +* For more information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json). +* To reference trigger metadata in a pipeline, see [Reference trigger metadata in pipeline runs](how-to-use-trigger-parameterization.md). |
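To make the role-based access control requirement described earlier concrete, the following Azure CLI sketch grants the authoring account the Contributor role scoped to the storage account that raises the blob events. The subscription ID, resource group, storage account, and user principal are placeholder values to replace with your own; granting only the narrower `Microsoft.EventGrid/EventSubscriptions/Write` action through a custom role works as well, as noted previously.

```azurecli
# Grant the account that authors or publishes the storage event trigger
# Contributor access on the storage account. All names and IDs are placeholders.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Contributor" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount"
```

Only the authoring or publishing account needs this access; the Data Factory or Azure Synapse Analytics service principal doesn't require any extra Event Grid permission.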
data-factory | How To Create Tumbling Window Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-tumbling-window-trigger.md | Last updated 01/05/2024 This article provides steps to create, start, and monitor a tumbling window trigger. For general information about triggers and the supported types, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md). -Tumbling window triggers are a type of trigger that fires at a periodic time interval from a specified start time, while retaining state. Tumbling windows are a series of fixed-sized, non-overlapping, and contiguous time intervals. A tumbling window trigger has a one-to-one relationship with a pipeline and can only reference a singular pipeline. Tumbling window trigger is a more heavy weight alternative for schedule trigger offering a suite of features for complex scenarios([dependency on other tumbling window triggers](#tumbling-window-trigger-dependency), [rerunning a failed job](tumbling-window-trigger-dependency.md#monitor-dependencies) and [set user retry for pipelines](#user-assigned-retries-of-pipelines)). To further understand the difference between schedule trigger and tumbling window trigger, please visit [here](concepts-pipeline-execution-triggers.md#trigger-type-comparison). +Tumbling window triggers are a type of trigger that fires at a periodic time interval from a specified start time, while retaining state. Tumbling windows are a series of fixed-sized, nonoverlapping, and contiguous time intervals. A tumbling window trigger has a one-to-one relationship with a pipeline and can only reference a singular pipeline. -## Azure Data Factory and Synapse portal experience +A tumbling window trigger is a more heavyweight alternative to a schedule trigger. It offers a suite of features for complex scenarios, like [dependency on other tumbling window triggers](#tumbling-window-trigger-dependency), [rerunning a failed job](tumbling-window-trigger-dependency.md#monitor-dependencies), and [setting user retry for pipelines](#user-assigned-retries-of-pipelines). To further understand the difference between a schedule trigger and a tumbling window trigger, see [Trigger type comparison](concepts-pipeline-execution-triggers.md#trigger-type-comparison). -1. To create a tumbling window trigger in the Azure portal, select the **Triggers** tab, and then select **New**. -1. After the trigger configuration pane opens, select **Tumbling Window**, and then define your tumbling window trigger properties. -1. When you're done, select **Save**. +## Azure Data Factory and Azure Synapse portal experience ++1. To create a tumbling window trigger in the Azure portal, select the **Triggers** tab, and then select **New**. +1. After the trigger configuration pane opens, select **Tumbling window**. Then define your tumbling window trigger properties. +1. When you're finished, select **Save**. # [Azure Data Factory](#tab/data-factory) # [Azure Synapse](#tab/synapse-analytics) A tumbling window has the following trigger type properties: } ``` -The following table provides a high-level overview of the major JSON elements that are related to recurrence and scheduling of a tumbling window trigger: +The following table provides a high-level overview of the major JSON elements that are related to recurrence and scheduling of a tumbling window trigger. | JSON element | Description | Type | Allowed values | Required | |: |: |: |: |: |-| **type** | The type of the trigger. 
The type is the fixed value "TumblingWindowTrigger". | String | "TumblingWindowTrigger" | Yes | -| **runtimeState** | The current state of the trigger run time.<br/>**Note**: This element is \<readOnly>. | String | "Started," "Stopped," "Disabled" | Yes | -| **frequency** | A string that represents the frequency unit (minutes, hours, or months) at which the trigger recurs. If the **startTime** date values are more granular than the **frequency** value, the **startTime** dates are considered when the window boundaries are computed. For example, if the **frequency** value is hourly and the **startTime** value is 2017-09-01T10:10:10Z, the first window is (2017-09-01T10:10:10Z, 2017-09-01T11:10:10Z). | String | "Minute," "Hour", "Month" | Yes | -| **interval** | A positive integer that denotes the interval for the **frequency** value, which determines how often the trigger runs. For example, if the **interval** is 3 and the **frequency** is "hour," the trigger recurs every 3 hours. <br/>**Note**: The minimum window interval is 5 minutes. | Integer | A positive integer. | Yes | -| **startTime**| The first occurrence, which can be in the past. The first trigger interval is (**startTime**, **startTime** + **interval**). | DateTime | A DateTime value. | Yes | -| **endTime**| The last occurrence, which can be in the past. | DateTime | A DateTime value. | Yes | -| **delay** | The amount of time to delay the start of data processing for the window. The pipeline run is started after the expected execution time plus the amount of **delay**. The **delay** defines how long the trigger waits past the due time before triggering a new run. The **delay** doesnΓÇÖt alter the window **startTime**. For example, a **delay** value of 00:10:00 implies a delay of 10 minutes. | Timespan<br/>(hh:mm:ss) | A timespan value where the default is 00:00:00. | No | -| **maxConcurrency** | The number of simultaneous trigger runs that are fired for windows that are ready. For example, to back fill hourly runs for yesterday results in 24 windows. If **maxConcurrency** = 10, trigger events are fired only for the first 10 windows (00:00-01:00 - 09:00-10:00). After the first 10 triggered pipeline runs are complete, trigger runs are fired for the next 10 windows (10:00-11:00 - 19:00-20:00). Continuing with this example of **maxConcurrency** = 10, if there are 10 windows ready, there are 10 total pipeline runs. If there's only 1 window ready, there's only 1 pipeline run. | Integer | An integer between 1 and 50. | Yes | -| **retryPolicy: Count** | The number of retries before the pipeline run is marked as "Failed." | Integer | An integer, where the default is 0 (no retries). | No | -| **retryPolicy: intervalInSeconds** | The delay between retry attempts specified in seconds. | Integer | The number of seconds, where the default is 30. The minimum value is 30. | No | -| **dependsOn: type** | The type of TumblingWindowTriggerReference. Required if a dependency is set. | String | "TumblingWindowTriggerDependencyReference", "SelfDependencyTumblingWindowTriggerReference" | No | -| **dependsOn: size** | The size of the dependency tumbling window. | Timespan<br/>(hh:mm:ss) | A positive timespan value where the default is the window size of the child trigger | No | -| **dependsOn: offset** | The offset of the dependency trigger. | Timespan<br/>(hh:mm:ss) | A timespan value that must be negative in a self-dependency. If no value specified, the window is the same as the trigger itself. 
| Self-Dependency: Yes<br/>Other: No | +| `type` | The type of the trigger. The `type` is the fixed value `TumblingWindowTrigger`. | `String` | `TumblingWindowTrigger` | Yes | +| `runtimeState` | The current state of the trigger run time.<br/>This element is \<readOnly>. | `String` | `Started`, `Stopped`, `Disabled` | Yes | +| `frequency` | A string that represents the frequency unit (minutes, hours, or months) at which the trigger recurs. If the `startTime` date values are more granular than the `frequency` value, the `startTime` dates are considered when the window boundaries are computed. For example, if the `frequency` value is `hourly` and the `startTime` value is 2017-09-01T10:10:10Z, the first window is (2017-09-01T10:10:10Z, 2017-09-01T11:10:10Z). | `String` | `Minute`, `Hour`, `Month` | Yes | +| `interval` | A positive integer that denotes the interval for the `frequency` value, which determines how often the trigger runs. For example, if the `interval` is `3` and the `frequency` is `hour`, the trigger recurs every 3 hours. <br/>The minimum window interval is 5 minutes. | `Integer` | A positive integer. | Yes | +| `startTime`| The first occurrence, which can be in the past. The first trigger interval is (`startTime`, `startTime + interval`). | `DateTime` | A `DateTime` value. | Yes | +| `endTime`| The last occurrence, which can be in the past. | `DateTime` | A `DateTime` value. | Yes | +| `delay` | The amount of time to delay the start of data processing for the window. The pipeline run is started after the expected execution time plus the amount of delay. The delay defines how long the trigger waits past the due time before triggering a new run. The delay doesn't alter the window `startTime`. For example, a `delay` value of 00:10:00 implies a delay of 10 minutes. | `Timespan`<br/>(hh:mm:ss) | A `timespan` value where the default is `00:00:00`. | No | +| `maxConcurrency` | The number of simultaneous trigger runs that are fired for windows that are ready. For example, to backfill hourly runs for yesterday results in 24 windows. If `maxConcurrency` = 10, trigger events are fired only for the first 10 windows (00:00-01:00 - 09:00-10:00). After the first 10 triggered pipeline runs are complete, trigger runs are fired for the next 10 windows (10:00-11:00 - 19:00-20:00). Continuing with this example of `maxConcurrency` = 10, if there are 10 windows ready, there are 10 total pipeline runs. If only one window is ready, only one pipeline runs. | `Integer` | An integer between 1 and 50. | Yes | +| `retryPolicy: Count` | The number of retries before the pipeline run is marked as `Failed`. | `Integer` | An integer, where the default is 0 (no retries). | No | +| `retryPolicy: intervalInSeconds` | The delay between retry attempts specified in seconds. | `Integer` | The number of seconds, where the default is 30. The minimum value is `30`. | No | +| `dependsOn: type` | The type of `TumblingWindowTriggerReference`. Required if a dependency is set. | `String` | `TumblingWindowTriggerDependencyReference`, `SelfDependencyTumblingWindowTriggerReference` | No | +| `dependsOn: size` | The size of the dependency tumbling window. | `Timespan`<br/>(hh:mm:ss) | A positive `timespan` value where the default is the window size of the child trigger. | No | +| `dependsOn: offset` | The offset of the dependency trigger. | `Timespan`<br/>(hh:mm:ss) | A `timespan` value that must be negative in a self-dependency. If no value is specified, the window is the same as the trigger itself. 
| Self-Dependency: Yes<br/>Other: No | > [!NOTE]-> After a tumbling window trigger is published, **interval** and **frequency** can't be edited. +> After a tumbling window trigger is published, the `interval` and `frequency` values can't be edited. ### WindowStart and WindowEnd system variables -You can use the **WindowStart** and **WindowEnd** system variables of the tumbling window trigger in your **pipeline** definition (that is, for part of a query). Pass the system variables as parameters to your pipeline in the **trigger** definition. The following example shows you how to pass these variables as parameters: +You can use the `WindowStart` and `WindowEnd` system variables of the tumbling window trigger in your **pipeline** definition (that is, for part of a query). Pass the system variables as parameters to your pipeline in the **trigger** definition. The following example shows you how to pass these variables as parameters. ```json { You can use the **WindowStart** and **WindowEnd** system variables of the tumbli } ``` -To use the **WindowStart** and **WindowEnd** system variable values in the pipeline definition, use your "MyWindowStart" and "MyWindowEnd" parameters, accordingly. +To use the `WindowStart` and `WindowEnd` system variable values in the pipeline definition, use your `MyWindowStart` and `MyWindowEnd` parameters, accordingly. ### Execution order of windows in a backfill scenario -If the startTime of trigger is in the past, then based on this formula, M=(CurrentTime- TriggerStartTime)/TumblingWindowSize, the trigger will generate {M} backfill(past) runs in parallel, honoring trigger concurrency, before executing the future runs. The order of execution for windows is deterministic, from oldest to newest intervals. Currently, this behavior can't be modified. +If the trigger `startTime` is in the past, then based on the formula M=(CurrentTime- TriggerStartTime)/TumblingWindowSize, the trigger generates {M} backfill(past) runs in parallel, honoring trigger concurrency, before executing the future runs. The order of execution for windows is deterministic, from oldest to newest intervals. Currently, this behavior can't be modified. > [!NOTE]-> Be aware that in this scenario, all runs from the selected startTime will be run before executing future runs. If you need to backfill a long period of time, doing an intial historical load is recommended. +> In this scenario, all runs from the selected `startTime` are run before executing future runs. If you need to backfill a long period of time, we recommend doing an initial historical load. ### Existing TriggerResource elements -The following points apply to update of existing **TriggerResource** elements: +The following points apply to updating existing `TriggerResource` elements: -* The value for the **frequency** element (or window size) of the trigger along with **interval** element cannot be changed once the trigger is created. This is required for proper functioning of triggerRun reruns and dependency evaluations -* If the value for the **endTime** element of the trigger changes (added or updated), the state of the windows that are already processed is *not* reset. The trigger honors the new **endTime** value. If the new **endTime** value is before the windows that are already executed, the trigger stops. Otherwise, the trigger stops when the new **endTime** value is encountered. 
+* The value for the `frequency` element (or window size) of the trigger along with the `interval` element can't be changed after the trigger is created. This restriction is required for proper functioning of `triggerRun` reruns and dependency evaluations. +* If the value for the `endTime` element of the trigger changes (by adding or updating), the state of the windows that are already processed is *not* reset. The trigger honors the new `endTime` value. If the new `endTime` value is before the windows that are already executed, the trigger stops. Otherwise, the trigger stops when the new `endTime` value is encountered. -### User assigned retries of pipelines +### User-assigned retries of pipelines -In case of pipeline failures, tumbling window trigger can retry the execution of the referenced pipeline automatically, using the same input parameters, without the user intervention. This can be specified using the property "retryPolicy" in the trigger definition. +In the case of pipeline failures, a tumbling window trigger can retry the execution of the referenced pipeline automatically by using the same input parameters, without user intervention. Use the `retryPolicy` property in the trigger definition to specify this action. ### Tumbling window trigger dependency If you want to make sure that a tumbling window trigger is executed only after the successful execution of another tumbling window trigger in the data factory, [create a tumbling window trigger dependency](tumbling-window-trigger-dependency.md). -### Cancel tumbling window run +### Cancel a tumbling window run -You can cancel runs for a tumbling window trigger, if the specific window is in _Waiting_, _Waiting on Dependency_, or _Running_ state +You can cancel runs for a tumbling window trigger if the specific window is in a **Waiting**, **Waiting on dependency**, or **Running** state: -* If the window is in **Running** state, cancel the associated _Pipeline Run_, and the trigger run will be marked as _Canceled_ afterwards -* If the window is in **Waiting** or **Waiting on Dependency** state, you can cancel the window from Monitoring: +* If the window is in a **Running** state, cancel the associated **Pipeline Run**, and the trigger run is marked as **Canceled** afterwards. +* If the window is in a **Waiting** or **Waiting on dependency** state, you can cancel the window from **Monitoring**. # [Azure Data Factory](#tab/data-factory) # [Azure Synapse](#tab/synapse-analytics) -You can also rerun a canceled window. The rerun will take the _latest_ published definitions of the trigger, and dependencies for the specified window will be _re-evaluated_ upon rerun +You can also rerun a canceled window. The rerun takes the _latest_ published definitions of the trigger. Dependencies for the specified window are _reevaluated_ upon rerun. # [Azure Data Factory](#tab/data-factory) # [Azure Synapse](#tab/synapse-analytics) -## Sample for Azure PowerShell and Azure CLI +## Sample for Azure PowerShell and the Azure CLI # [Azure PowerShell](#tab/azure-powershell) This section shows you how to use Azure PowerShell to create, start, and monitor ### Prerequisites -- **Azure subscription**. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. --- **Azure PowerShell**. Follow the instructions in [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-azure-powershell). --- **Azure Data Factory**. 
Follow the instructions in [Create an Azure Data Factory using PowerShell](./quickstart-create-data-factory-powershell.md) to create a data factory and a pipeline.+- **Azure subscription**: If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. +- **Azure PowerShell**: Follow the instructions in [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-azure-powershell). +- **Azure Data Factory**: Follow the instructions in [Create an Azure Data Factory by using PowerShell](./quickstart-create-data-factory-powershell.md) to create a data factory and a pipeline. -### Sample Code +### Sample code 1. Create a JSON file named **MyTrigger.json** in the C:\ADFv2QuickStartPSH\ folder with the following content: > [!IMPORTANT]- > Before you save the JSON file, set the value of the **startTime** element to the current UTC time. Set the value of the **endTime** element to one hour past the current UTC time. + > Before you save the JSON file, set the value of the `startTime` element to the current Coordinated Universal Time (UTC) time. Set the value of the `endTime` element to one hour past the current UTC time. ```json { This section shows you how to use Azure PowerShell to create, start, and monitor } ``` -2. Create a trigger by using the [Set-AzDataFactoryV2Trigger](/powershell/module/az.datafactory/set-azdatafactoryv2trigger) cmdlet: +1. Create a trigger by using the [Set-AzDataFactoryV2Trigger](/powershell/module/az.datafactory/set-azdatafactoryv2trigger) cmdlet: ```powershell Set-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name "MyTrigger" -DefinitionFile "C:\ADFv2QuickStartPSH\MyTrigger.json" ``` -3. Confirm that the status of the trigger is **Stopped** by using the [Get-AzDataFactoryV2Trigger](/powershell/module/az.datafactory/get-azdatafactoryv2trigger) cmdlet: +1. Confirm that the status of the trigger is **Stopped** by using the [Get-AzDataFactoryV2Trigger](/powershell/module/az.datafactory/get-azdatafactoryv2trigger) cmdlet: ```powershell Get-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name "MyTrigger" ``` -4. Start the trigger by using the [Start-AzDataFactoryV2Trigger](/powershell/module/az.datafactory/start-azdatafactoryv2trigger) cmdlet: +1. Start the trigger by using the [Start-AzDataFactoryV2Trigger](/powershell/module/az.datafactory/start-azdatafactoryv2trigger) cmdlet: ```powershell Start-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name "MyTrigger" ``` -5. Confirm that the status of the trigger is **Started** by using the [Get-AzDataFactoryV2Trigger](/powershell/module/az.datafactory/get-azdatafactoryv2trigger) cmdlet: +1. Confirm that the status of the trigger is **Started** by using the [Get-AzDataFactoryV2Trigger](/powershell/module/az.datafactory/get-azdatafactoryv2trigger) cmdlet: ```powershell Get-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name "MyTrigger" ``` -6. Get the trigger runs in Azure PowerShell by using the [Get-AzDataFactoryV2TriggerRun](/powershell/module/az.datafactory/get-azdatafactoryv2triggerrun) cmdlet. To get information about the trigger runs, execute the following command periodically. Update the **TriggerRunStartedAfter** and **TriggerRunStartedBefore** values to match the values in your trigger definition: +1. 
Get the trigger runs in Azure PowerShell by using the [Get-AzDataFactoryV2TriggerRun](/powershell/module/az.datafactory/get-azdatafactoryv2triggerrun) cmdlet. To get information about the trigger runs, execute the following command periodically. Update the `TriggerRunStartedAfter` and `TriggerRunStartedBefore` values to match the values in your trigger definition: ```powershell Get-AzDataFactoryV2TriggerRun -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -TriggerName "MyTrigger" -TriggerRunStartedAfter "2017-12-08T00:00:00" -TriggerRunStartedBefore "2017-12-08T01:00:00" This section shows you how to use Azure PowerShell to create, start, and monitor # [Azure CLI](#tab/azure-cli) -This section shows you how to use Azure CLI to create, start, and monitor a trigger. +This section shows you how to use the Azure CLI to create, start, and monitor a trigger. ### Prerequisites [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] -- Follow the instructions in [Create an Azure Data Factory using Azure CLI](./quickstart-create-data-factory-azure-cli.md) to create a data factory and a pipeline.+- Follow the instructions in [Create an Azure Data Factory by using the Azure CLI](./quickstart-create-data-factory-azure-cli.md) to create a data factory and a pipeline. -### Sample Code +### Sample code 1. In your working directory, create a JSON file named **MyTrigger.json** with the trigger's properties. For this sample, use the following content: > [!IMPORTANT]- > Before you save the JSON file, set the value of **referenceName** to your pipeline name. Set the value of the **startTime** element to the current UTC time. Set the value of the **endTime** element to one hour past the current UTC time. + > Before you save the JSON file, set the value of `referenceName` to your pipeline name. Set the value of the `startTime` element to the current UTC time. Set the value of the `endTime` element to one hour past the current UTC time. ```json { This section shows you how to use Azure CLI to create, start, and monitor a trig } ``` -2. Create a trigger by using the [az datafactory trigger create](/cli/azure/datafactory/trigger#az-datafactory-trigger-create) command: +1. Create a trigger by using the [az datafactory trigger create](/cli/azure/datafactory/trigger#az-datafactory-trigger-create) command: > [!IMPORTANT]- > For this step and all subsequent steps replace `ResourceGroupName` with your resource group name. Replace `DataFactoryName` with your data factory's name. + > For this step and all subsequent steps, replace `ResourceGroupName` with your resource group name. Replace `DataFactoryName` with your data factory's name. ```azurecli az datafactory trigger create --resource-group "ResourceGroupName" --factory-name "DataFactoryName" --name "MyTrigger" --properties @MyTrigger.json ``` -3. Confirm that the status of the trigger is **Stopped** by using the [az datafactory trigger show](/cli/azure/datafactory/trigger#az-datafactory-trigger-show) command: +1. Confirm that the status of the trigger is **Stopped** by using the [az datafactory trigger show](/cli/azure/datafactory/trigger#az-datafactory-trigger-show) command: ```azurecli az datafactory trigger show --resource-group "ResourceGroupName" --factory-name "DataFactoryName" --name "MyTrigger" ``` -4. Start the trigger by using the [az datafactory trigger start](/cli/azure/datafactory/trigger#az-datafactory-trigger-start) command: +1. 
Start the trigger by using the [az datafactory trigger start](/cli/azure/datafactory/trigger#az-datafactory-trigger-start) command: ```azurecli az datafactory trigger start --resource-group "ResourceGroupName" --factory-name "DataFactoryName" --name "MyTrigger" ``` -5. Confirm that the status of the trigger is **Started** by using the [az datafactory trigger show](/cli/azure/datafactory/trigger#az-datafactory-trigger-show) command: +1. Confirm that the status of the trigger is **Started** by using the [az datafactory trigger show](/cli/azure/datafactory/trigger#az-datafactory-trigger-show) command: ```azurecli az datafactory trigger show --resource-group "ResourceGroupName" --factory-name "DataFactoryName" --name "MyTrigger" ``` -6. Get the trigger runs in Azure CLI by using the [az datafactory trigger-run query-by-factory](/cli/azure/datafactory/trigger-run#az-datafactory-trigger-run-query-by-factory) command. To get information about the trigger runs, execute the following command periodically. Update the **last-updated-after** and **last-updated-before** values to match the values in your trigger definition: +1. Get the trigger runs in the Azure CLI by using the [az datafactory trigger-run query-by-factory](/cli/azure/datafactory/trigger-run#az-datafactory-trigger-run-query-by-factory) command. To get information about the trigger runs, execute the following command periodically. Update the `last-updated-after` and `last-updated-before` values to match the values in your trigger definition: ```azurecli az datafactory trigger-run query-by-factory --resource-group "ResourceGroupName" --factory-name "DataFactoryName" --filters operand="TriggerName" operator="Equals" values="MyTrigger" --last-updated-after "2017-12-08T00:00:00Z" --last-updated-before "2017-12-08T01:00:00Z" To monitor trigger runs and pipeline runs in the Azure portal, see [Monitor pipe ## Related content -* For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json). -* [Create a tumbling window trigger dependency](tumbling-window-trigger-dependency.md). -* Learn how to reference trigger metadata in pipeline, see [Reference Trigger Metadata in Pipeline Runs](how-to-use-trigger-parameterization.md) +* [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json) +* [Create a tumbling window trigger dependency](tumbling-window-trigger-dependency.md) +* [Reference trigger metadata in pipeline runs](how-to-use-trigger-parameterization.md) |
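To round out the preceding samples, the following Azure CLI sketch stops the trigger and reruns a single window, which is useful after a window is canceled or fails. It assumes the same placeholder resource group, data factory, and trigger names as the earlier sample, that the `datafactory` CLI extension is installed, and that you copy the run ID from the output of the earlier `az datafactory trigger-run query-by-factory` command.

```azurecli
# Stop the trigger, for example before you edit or delete it.
az datafactory trigger stop --resource-group "ResourceGroupName" --factory-name "DataFactoryName" --name "MyTrigger"

# Rerun a specific window by its trigger run ID, such as a window that was canceled or failed.
# The run ID placeholder comes from the "az datafactory trigger-run query-by-factory" output.
az datafactory trigger-run rerun --resource-group "ResourceGroupName" --factory-name "DataFactoryName" --trigger-name "MyTrigger" --run-id "<trigger-run-id>"
```

As noted earlier, a rerun uses the latest published trigger definition, and dependencies for the window are reevaluated.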
data-factory | How To Use Trigger Parameterization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-trigger-parameterization.md | Title: Pass trigger information to pipeline -description: Learn how to reference trigger metadata in pipeline +description: Learn how to reference trigger metadata in pipelines. Last updated 05/15/2024 [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] -This article describes how trigger metadata, such as trigger start time, can be used in pipeline run. +This article describes how trigger metadata, such as the trigger start time, can be used in a pipeline run. -Pipeline sometimes needs to understand and reads metadata from trigger that invokes it. For instance, with Tumbling Window Trigger run, based upon window start and end time, pipeline will process different data slices or folders. In Azure Data Factory, we use Parameterization and [System Variable](control-flow-system-variables.md) to pass meta data from trigger to pipeline. +A pipeline sometimes needs to understand and read metadata from the trigger that invokes it. For instance, with a tumbling window trigger run, based on the window start and end time, the pipeline processes different data slices or folders. In Azure Data Factory, we use parameterization and [system variables](control-flow-system-variables.md) to pass metadata from triggers to pipelines. -This pattern is especially useful for [Tumbling Window Trigger](how-to-create-tumbling-window-trigger.md), where trigger provides window start and end time, and [Custom Event Trigger](how-to-create-custom-event-trigger.md), where trigger parse and process values in [custom defined _data_ field](../event-grid/event-schema.md). +This pattern is especially useful for [tumbling window triggers](how-to-create-tumbling-window-trigger.md), where the trigger provides the window start and end time, and [custom event triggers](how-to-create-custom-event-trigger.md), where the trigger parses and processes values in a [custom-defined *data* field](../event-grid/event-schema.md). > [!NOTE]-> Different trigger type provides different meta data information. For more information, see [System Variable](control-flow-system-variables.md) +> Different trigger types provide different metadata information. For more information, see [System variables](control-flow-system-variables.md). ## Data Factory UI -This section shows you how to pass meta data information from trigger to pipeline, within the Azure Data Factory User Interface. +This section shows you how to pass metadata information from triggers to pipelines, within the Data Factory user interface (UI). -1. Go to the **Authoring Canvas** and edit a pipeline +1. Go to the **Authoring Canvas** and edit a pipeline. -1. Select on the blank canvas to bring up pipeline settings. DonΓÇÖt select any activity. You may need to pull up the setting panel from the bottom of the canvas, as it may have been collapsed +1. Select the blank canvas to bring up pipeline settings. Don't select any activity. You might need to pull up the setting pane from the bottom of the canvas because it might be collapsed. -1. Select **Parameters** section and select **+ New** to add parameters +1. Select the **Parameters** tab and select **+ New** to add parameters. 
- :::image type="content" source="media/how-to-use-trigger-parameterization/01-create-parameter.png" alt-text="Screen shot of pipeline setting showing how to define parameters in pipeline."::: + :::image type="content" source="media/how-to-use-trigger-parameterization/01-create-parameter.png" alt-text="Screenshot that shows a pipeline setting showing how to define parameters in a pipeline."::: -1. Add triggers to pipeline, by clicking on **+ Trigger**. +1. Add triggers to the pipeline by selecting **+ Trigger**. -1. Create or attach a trigger to the pipeline, and select **OK** +1. Create or attach a trigger to the pipeline and select **OK**. -1. After selecting **OK**, another **New trigger** page is presented with a list of the parameters specified for the pipeline, as shown in the following screenshot. On that page, fill in trigger meta data for each parameter. Use format defined in [System Variable](control-flow-system-variables.md) to retrieve trigger information. You don't need to fill in the information for all parameters, just the ones that will assume trigger metadata values. For instance, here we assign trigger run start time to *parameter_1*. +1. After you select **OK**, another **New trigger** page appears with a list of the parameters specified for the pipeline, as shown in the following screenshot. On that page, fill in the trigger metadata for each parameter. Use the format defined in [System variables](control-flow-system-variables.md) to retrieve trigger information. You don't need to fill in the information for all parameters. Just fill in the ones that will assume trigger metadata values. For instance, here we assign the trigger run start time to `parameter_1`. - :::image type="content" source="media/how-to-use-trigger-parameterization/02-pass-in-system-variable.png" alt-text="Screenshot of trigger definition page showing how to pass trigger information to pipeline parameters."::: + :::image type="content" source="media/how-to-use-trigger-parameterization/02-pass-in-system-variable.png" alt-text="Screenshot that shows the Trigger Run Parameters page showing how to pass trigger information to pipeline parameters."::: -1. To use the values in pipeline, utilize parameters _@pipeline().parameters.parameterName_, __not__ system variable, in pipeline definitions. For instance, in our case, to read trigger start time, we'll reference @pipeline().parameters.parameter_1. +1. To use the values in the pipeline, utilize parameters, like `@pipeline().parameters.parameterName`, *not* system variables, in pipeline definitions. For instance, in this case, to read the trigger start time, we reference `@pipeline().parameters.parameter_1`. ## JSON schema -To pass in trigger information to pipeline runs, both the trigger and the pipeline json need to be updated with _parameters_ section. +To pass in trigger information to pipeline runs, both the trigger and the pipeline JSON need to be updated with the `parameters` section. ### Pipeline definition -Under **properties** section, add parameter definitions to **parameters** section +Under the `properties` section, add parameter definitions to the `parameters` section. ```json { Under **properties** section, add parameter definitions to **parameters** sectio ### Trigger definition -Under **pipelines** section, assign parameter values in **parameters** section. You don't need to fill in the information for all parameters, just the ones that will assume trigger metadata values. 
+Under the `pipelines` section, assign parameter values in the `parameters` section. You don't need to fill in the information for all parameters. Just fill in the ones that will assume trigger metadata values. ```json { Under **pipelines** section, assign parameter values in **parameters** section. } ``` -### Use trigger information in pipeline +### Use trigger information in a pipeline -To use the values in pipeline, utilize parameters _@pipeline().parameters.parameterName_, __not__ system variable, in pipeline definitions. +To use the values in a pipeline, utilize parameters, like `@pipeline().parameters.parameterName`, *not* system variables, in pipeline definitions. ## Related content -For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json). +For more information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json). |
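As a concrete sketch of the JSON pattern above, the following Azure CLI snippet writes a trimmed tumbling window trigger definition that maps the window start and end times to two pipeline parameters, and then creates the trigger. The resource, pipeline, and parameter names plus the time window are placeholders; adjust the definition to match your own pipeline and trigger type.

```azurecli
# Write a minimal tumbling window trigger definition that passes trigger metadata
# to pipeline parameters. All names and times are placeholders.
cat > MyParameterizedTrigger.json <<'EOF'
{
    "properties": {
        "type": "TumblingWindowTrigger",
        "typeProperties": {
            "frequency": "Hour",
            "interval": 1,
            "startTime": "2024-01-01T00:00:00Z",
            "endTime": "2024-01-02T00:00:00Z",
            "maxConcurrency": 10,
            "retryPolicy": { "count": 0, "intervalInSeconds": 30 }
        },
        "pipeline": {
            "pipelineReference": {
                "type": "PipelineReference",
                "referenceName": "MyPipeline"
            },
            "parameters": {
                "parameter_1": "@trigger().outputs.windowStartTime",
                "parameter_2": "@trigger().outputs.windowEndTime"
            }
        }
    }
}
EOF

# Create the trigger from the definition file.
az datafactory trigger create --resource-group "ResourceGroupName" --factory-name "DataFactoryName" --name "MyParameterizedTrigger" --properties @MyParameterizedTrigger.json
```

Inside the pipeline, read the values through `@pipeline().parameters.parameter_1` and `@pipeline().parameters.parameter_2` rather than through trigger system variables, as described earlier.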
data-factory | Tumbling Window Trigger Dependency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tumbling-window-trigger-dependency.md | Last updated 10/20/2023 # Create a tumbling window trigger dependency [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] -This article provides steps to create a dependency on a tumbling window trigger. For general information about Tumbling Window triggers, see [How to create tumbling window trigger](how-to-create-tumbling-window-trigger.md). +This article provides steps to create a dependency on a tumbling window trigger. For general information about tumbling window triggers, see [Create a tumbling window trigger](how-to-create-tumbling-window-trigger.md). -In order to build a dependency chain and make sure that a trigger is executed only after the successful execution of another trigger within the service, use this advanced feature to create a tumbling window dependency. +To build a dependency chain and make sure that a trigger is executed only after the successful execution of another trigger within the service, use this advanced feature to create a tumbling window dependency. -For a demonstration on how to create dependent pipelines using tumbling window trigger, watch the following video: +For a demonstration on how to create dependent pipelines by using a tumbling window trigger, watch the following video: > [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Create-dependent-pipelines-in-your-Azure-Data-Factory/player] ## Create a dependency in the UI -To create dependency on a trigger, select **Trigger > Advanced > New**, and then choose the trigger to depend on with the appropriate offset and size. Select **Finish** and publish the changes for the dependencies to take effect. +To create dependency on a trigger, select **Trigger** > **Advanced** > **New**. Then choose the trigger to depend on with the appropriate offset and size. Select **Finish** and publish the changes for the dependencies to take effect. ## Tumbling window dependency properties A tumbling window trigger with a dependency has the following properties: } ``` -The following table provides the list of attributes needed to define a Tumbling Window dependency. +The following table provides the list of attributes needed to define a tumbling window dependency. -| **Property Name** | **Description** | **Type** | **Required** | +| Property name | Description | Type | Required | |||||-| type | All the existing tumbling window triggers are displayed in this drop down. Choose the trigger to take dependency on. | TumblingWindowTriggerDependencyReference or SelfDependencyTumblingWindowTriggerReference | Yes | -| offset | Offset of the dependency trigger. Provide a value in time span format and both negative and positive offsets are allowed. This property is mandatory if the trigger is depending on itself and in all other cases it is optional. Self-dependency should always be a negative offset. If no value specified, the window is the same as the trigger itself. | Timespan<br/>(hh:mm:ss) | Self-Dependency: Yes<br/>Other: No | -| size | Size of the dependency tumbling window. Provide a positive timespan value. This property is optional. | Timespan<br/>(hh:mm:ss) | No | +| `type` | All the existing tumbling window triggers are displayed in this dropdown list. Choose the trigger to take dependency on. 
| `TumblingWindowTriggerDependencyReference` or `SelfDependencyTumblingWindowTriggerReference` | Yes | +| `offset` | Offset of the dependency trigger. Provide a value in the timespan format. Both negative and positive offsets are allowed. This property is mandatory if the trigger is depending on itself. In all other cases, it's optional. Self-dependency should always be a negative offset. If no value is specified, the window is the same as the trigger itself. | Timespan<br/>(hh:mm:ss) | Self-Dependency: Yes<br/>Other: No | +| `size` | Size of the dependency tumbling window. Provide a positive timespan value. This property is optional. | Timespan<br/>(hh:mm:ss) | No | > [!NOTE] > A tumbling window trigger can depend on a maximum of five other triggers. ## Tumbling window self-dependency properties -In scenarios where the trigger shouldn't proceed to the next window until the preceding window is successfully completed, build a self-dependency. A self-dependency trigger that's dependent on the success of earlier runs of itself within the preceding hour will have the properties indicated in the following code. +In scenarios where the trigger shouldn't proceed to the next window until the preceding window is successfully completed, build a self-dependency. A self-dependency trigger that's dependent on the success of earlier runs of itself within the preceding hour has the properties indicated in the following code. > [!NOTE]-> If your triggered pipeline relies on the output of pipelines in previously triggered windows, we recommend using only tumbling window trigger self-dependency. To limit parallel trigger runs, set the maximimum trigger concurrency. +> If your triggered pipeline relies on the output of pipelines in previously triggered windows, we recommend using only tumbling window trigger self-dependency. To limit parallel trigger runs, set the maximum trigger concurrency. ```json { In scenarios where the trigger shouldn't proceed to the next window until the pr } } ```+ ## Usage scenarios and examples -Below are illustrations of scenarios and usage of tumbling window dependency properties. +The following scenarios show the use of tumbling window dependency properties. ### Dependency offset ### Dependency size ### Self-dependency ### Dependency on another tumbling window trigger -A daily telemetry processing job depending on another daily job aggregating the last seven days output and generates seven day rolling window streams: +The following example shows a daily telemetry processing job that depends on another daily job aggregating the last seven days of output and generates seven-day rolling window streams. ### Dependency on itself -A daily job with no gaps in the output streams of the job: +The following example shows a daily job with no gaps in the output streams of the job. ## Monitor dependencies -You can monitor the dependency chain and the corresponding windows from the trigger run monitoring page. Navigate to **Monitoring > Trigger Runs**. If a Tumbling Window trigger has dependencies, Trigger Name will bear a hyperlink to dependency monitoring view. +You can monitor the dependency chain and the corresponding windows from the trigger run monitoring page. Go to **Monitoring** > **Trigger Runs**. If a tumbling window trigger has dependencies, the trigger name bears a hyperlink to a dependency monitoring view. -Click through the trigger name to view trigger dependencies. Right-hand panel shows detailed trigger run information, such as RunID, window time, status, and so on. 
+Click through the trigger name to view trigger dependencies. The pane on the right shows trigger run information such as the run ID, window time, and status. -You can see the status of the dependencies, and windows for each dependent trigger. If one of the dependencies triggers fails, you must successfully rerun it in order for the dependent trigger to run. +You can see the status of the dependencies and windows for each dependent trigger. If one of the dependencies triggers fails, you must successfully rerun it for the dependent trigger to run. -A tumbling window trigger will wait on dependencies for _seven days_ before timing out. After seven days, the trigger run will fail. +A tumbling window trigger waits on dependencies for _seven days_ before timing out. After seven days, the trigger run fails. > [!NOTE]-> A tumbling window trigger cannot be cancelled while it is in the **Waiting on dependency** state. The dependent activity must finish before the tumbling window trigger can be cancelled. This is by design to ensure dependent activities can complete once started, and helps reduce the likelihood of unexpected results. +> A tumbling window trigger can't be canceled while it's in the **Waiting on dependency** state. The dependent activity must finish before the tumbling window trigger can be canceled. This restriction is by design to ensure that dependent activities can complete once they're started. It also helps to reduce the likelihood of unexpected results. -For a more visual to view the trigger dependency schedule, select the Gantt view. +For a more visual way to view the trigger dependency schedule, select the Gantt view. -Transparent boxes show the dependency windows for each down stream-dependent trigger, while solid colored boxes above show individual window runs. Here are some tips for interpreting the Gantt chart view: +Transparent boxes show the dependency windows for each downstream-dependent trigger. Solid-colored boxes shown in the preceding image show individual window runs. Here are some tips for interpreting the Gantt chart view: -* Transparent box renders blue when dependent windows are in pending or running state -* After all windows succeeds for a dependent trigger, the transparent box will turn green -* Transparent box renders red when some dependent window fails. Look for a solid red box to identify the failure window run +* Transparent boxes render blue when dependent windows are in a **Pending** or **Running** state. +* After all windows succeed for a dependent trigger, the transparent box turns green. +* Transparent boxes render red when a dependent window fails. Look for a solid red box to identify the failure window run. -To rerun a window in Gantt chart view, select the solid color box for the window, and an action panel will pop up with details and rerun options +To rerun a window in the Gantt chart view, select the solid color box for the window. An action pane pops up with information and rerun options. ## Related content -* Review [How to create a tumbling window trigger](how-to-create-tumbling-window-trigger.md) +- [Create a tumbling window trigger](how-to-create-tumbling-window-trigger.md) |
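For a scripted counterpart to the self-dependency scenario above, here's a hedged Azure CLI sketch that defines an hourly tumbling window trigger whose windows each wait for the success of the previous window. The `dependsOn` entry uses `SelfDependencyTumblingWindowTriggerReference` with a negative one-hour offset; the names and start time are placeholders, and unrelated properties are omitted.

```azurecli
# A minimal self-dependent tumbling window trigger: each hourly window depends on
# the success of the window one hour earlier. Names and times are placeholders.
cat > SelfDependentTrigger.json <<'EOF'
{
    "properties": {
        "type": "TumblingWindowTrigger",
        "typeProperties": {
            "frequency": "Hour",
            "interval": 1,
            "startTime": "2024-01-01T00:00:00Z",
            "maxConcurrency": 1,
            "dependsOn": [
                {
                    "type": "SelfDependencyTumblingWindowTriggerReference",
                    "offset": "-01:00:00",
                    "size": "01:00:00"
                }
            ]
        },
        "pipeline": {
            "pipelineReference": {
                "type": "PipelineReference",
                "referenceName": "MyPipeline"
            }
        }
    }
}
EOF

az datafactory trigger create --resource-group "ResourceGroupName" --factory-name "DataFactoryName" --name "SelfDependentTrigger" --properties @SelfDependentTrigger.json
```

Setting `maxConcurrency` to 1 follows the earlier recommendation to limit parallel trigger runs when a window relies on the output of previously triggered windows.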
defender-for-iot | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md | One common challenge when connecting sensors to Defender for IoT in the Azure po ### Security update -This update resolves six CVEs, which are listed in [software version 23.1.3 feature documentation](release-notes.md#version-2413). +This update resolves six CVEs, which are listed in [software version 24.1.3 feature documentation](release-notes.md#version-2413). ## February 2024 |
logic-apps | Deploy Single Tenant Logic Apps Private Storage Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/deploy-single-tenant-logic-apps-private-storage-account.md | ms.suite: integration Previously updated : 10/09/2023 Last updated : 07/04/2024 # Customer intent: As a developer, I want to deploy Standard logic apps to Azure storage accounts that use private endpoints. For more information, review the following documentation: This deployment method requires temporary public access to your storage account. If you can't enable public access due to your organization's policies, you can still deploy your logic app to a private storage account. However, you have to [deploy with an Azure Resource Manager template (ARM template)](#deploy-arm-template), which is described in a later section. > [!NOTE]+> > An exception to the previous rule is that you can use the Azure portal to deploy your logic app to an App Service Environment, > even if the storage account is protected with a private endpoint. However, you'll need connectivity between the > subnet used by the App Service Environment and the subnet used by the storage account's private endpoint. This deployment method requires temporary public access to your storage acc 1. Deploy your logic app resource by using either the Azure portal or Visual Studio Code. -1. After deployment finishes, enable virtual network integration between your logic app and the private endpoints on the virtual network that connects to your storage account. +1. After deployment finishes, enable virtual network integration between your logic app and the private endpoints on the virtual network connected to your storage account. 1. In the [Azure portal](https://portal.azure.com), open your logic app resource. 1. On the logic app resource menu, under **Settings**, select **Networking**. - 1. Select **VNet integration** on **Outbound Traffic** card to enable integration with a virtual network connecting to your storage account. + 1. In the **Outbound traffic configuration** section, next to **Virtual network integration**, select **Not configured** > **Add virtual network integration**. - 1. To access your logic app workflow data over the virtual network, in your logic app resource settings, set the `WEBSITE_CONTENTOVERVNET` setting to `1`. + 1. On the **Add virtual network integration** pane that opens, select your Azure subscription and your virtual network. - If you use your own domain name server (DNS) with your virtual network, set your logic app resource's `WEBSITE_DNS_SERVER` app setting to the IP address for your DNS. If you have a secondary DNS, add another app setting named `WEBSITE_DNS_ALT_SERVER`, and set the value also to the IP for your secondary DNS. + 1. From the **Subnet** list, select the subnet where you want to add your logic app. When you're done, select **Connect**. ++1. To access your logic app workflow data over the virtual network, follow these steps: ++ 1. On the logic app resource menu, under **Settings**, select **Environment variables**. ++ 1. On the **App settings** tab, add the **WEBSITE_CONTENTOVERVNET** app setting, if none exist, and set the value to **1**. ++ 1. If you use your own domain name server (DNS) with your virtual network, add the **WEBSITE_DNS_SERVER** app setting, if none exist, and set the value to the IP address for your DNS. If you have a secondary DNS, add another app setting named **WEBSITE_DNS_ALT_SERVER**, and set the value to the IP for your secondary DNS. 1. 
After you apply these app settings, you can remove public access from your storage account. This deployment method requires that temporary public access to your storage acc 1. On the **Networking** pane, on the **Firewalls and virtual networks** tab, under **Allow access from**, clear **Selected networks**, and add virtual networks as necessary. > [!NOTE]+ > > Your logic app might experience an interruption because the connectivity switch between public and private endpoints might take time. > This disruption might result in your workflows temporarily disappearing. If this behavior happens, you can try to reload your workflows > by restarting the logic app and waiting several minutes. The following errors commonly happen with a private storage account that's behin ||-| | Access to the `host.json` file is denied | `"System.Private.CoreLib: Access to the path 'C:\\home\\site\\wwwroot\\host.json' is denied."` | | Can't load workflows in the logic app resource | `"Encountered an error (ServiceUnavailable) from host runtime."` |-||| As the logic app isn't running when these errors occur, you can't use the Kudu console debugging service on the Azure platform to troubleshoot these errors. However, you can use the following methods instead: |
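Returning to the configuration steps earlier in this article: if you prefer to script them, the following Azure CLI sketch applies the app settings and then locks down the storage account firewall. The resource names and DNS IP address are placeholders, and the `az webapp` commands are used on the assumption that a Standard logic app is hosted on App Service; adjust if your tooling uses a dedicated logic app command group.

```azurecli
# Route workflow content access over the virtual network and point the app at your own DNS server.
# Resource names and the DNS IP address are placeholders; add WEBSITE_DNS_ALT_SERVER for a secondary DNS.
az webapp config appsettings set \
  --resource-group "myResourceGroup" \
  --name "myStandardLogicApp" \
  --settings WEBSITE_CONTENTOVERVNET=1 WEBSITE_DNS_SERVER=10.0.0.4

# After the app settings are applied, deny public traffic by default on the storage
# account firewall. Traffic that arrives through the private endpoint isn't affected.
az storage account update \
  --resource-group "myResourceGroup" \
  --name "mystorageaccount" \
  --default-action Deny
```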
logic-apps | Logic Apps Securing A Logic App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md | You can limit access to the inputs and outputs in the run history for your logic For example, to block anyone from accessing inputs and outputs, specify an IP address range such as `0.0.0.0-0.0.0.0`. Only a person with administrator permissions can remove this restriction, which provides the possibility for "just-in-time" access to data in your logic app workflows. A valid IP range uses these formats: *x.x.x.x/x* or *x.x.x.x-x.x.x.x* -To specify the allowed IP ranges, follow these steps for either the Azure portal or your Azure Resource Manager template: +To specify the allowed IP ranges, follow these steps for your Consumption or Standard logic app in the Azure portal or your Azure Resource Manager template: #### [Portal](#tab/azure-portal) ##### Consumption workflows -1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. +1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app workflow in the designer. 1. On your logic app's menu, under **Settings**, select **Workflow settings**. -1. Under **Access control configuration** > **Allowed inbound IP addresses**, select **Specific IP ranges**. +1. In the **Access control configuration** section, under **Allowed inbound IP addresses**, from the **Trigger access option** list, select **Specific IP ranges**. -1. Under **IP ranges for contents**, specify the IP address ranges that can access content from inputs and outputs. +1. In the **IP ranges for contents** box, specify the IP address ranges that can access content from inputs and outputs. ##### Standard workflows -1. In the [Azure portal](https://portal.azure.com), open your logic app resource. +1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource. 1. On the logic app menu, under **Settings**, select **Networking**. -1. In the **Inbound Traffic** section, select **Access restriction**. +1. In the **Inbound traffic configuration** section, next to **Public network access**, select **Enabled with no access restriction**. -1. Create one or more rules to either **Allow** or **Deny** requests from specific IP ranges. You can also use the HTTP header filter settings and forwarding settings. +1. On the **Access restrictions** page, under **App access**, select **Enabled from select virtual networks and IP addresses**. ++1. Under **Site access and rules**, on the **Main site** tab, add one or more rules to either **Allow** or **Deny** requests from specific IP ranges. You can also use the HTTP header filter settings and forwarding settings. A valid IP range uses these formats: *x.x.x.x/x* or *x.x.x.x-x.x.x.x* For more information, see [Blocking inbound IP addresses in Azure Logic Apps (Standard)](https://www.serverlessnotes.com/docs/block-inbound-ip-addresses-in-azure-logic-apps-standard). In the Azure portal, IP address restriction affects both triggers *and* actions, ##### Standard workflows -1. In the [Azure portal](https://portal.azure.com), open your logic app resource. +1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource. 1. On the logic app menu, under **Settings**, select **Networking**. -1. In the **Inbound Traffic** section, select **Access restriction**. +1. In the **Inbound traffic configuration** section, next to **Public network access**, select **Enabled with no access restriction**. 
++1. On the **Access restrictions** page, under **App access**, select **Enabled from select virtual networks and IP addresses**. -1. Create one or more rules to either **Allow** or **Deny** requests from specific IP ranges. You can also use the HTTP header filter settings and forwarding settings. A valid IP range uses these formats: *x.x.x.x/x* or *x.x.x.x-x.x.x.x* +1. Under **Site access and rules**, on the **Main site** tab, add one or more rules to either **Allow** or **Deny** requests from specific IP ranges. A valid IP range uses these formats: *x.x.x.x/x* or *x.x.x.x-x.x.x.x* For more information, see [Blocking inbound IP addresses in Azure Logic Apps (Standard)](https://www.serverlessnotes.com/docs/block-inbound-ip-addresses-in-azure-logic-apps-standard). |
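If you need to script the same inbound restriction for a Standard logic app, the following Azure CLI sketch adds a single allow rule by using the App Service access-restriction commands, on the assumption that the Standard logic app resource is App Service-based. The names, IP range, and priority are placeholders.

```azurecli
# Allow inbound requests only from the specified IP range. Adding an allow rule
# implicitly denies the remaining public traffic. Names, range, and priority are placeholders.
az webapp config access-restriction add \
  --resource-group "myResourceGroup" \
  --name "myStandardLogicApp" \
  --rule-name "AllowCorpRange" \
  --action Allow \
  --ip-address 203.0.113.0/24 \
  --priority 100
```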
logic-apps | Secure Single Tenant Workflow Virtual Network Private Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md | For more information, review [Create single-tenant logic app workflows in Azure ### Set up private endpoint connection -1. On your logic app menu, under **Settings**, select **Networking**. +1. On the logic app resource menu, under **Settings**, select **Networking**. -1. On the **Networking** page, on the **Inbound traffic** card, select **Private endpoints**. +1. On the **Networking** page, in the **Inbound traffic configuration** section, select the link next to **Private endpoints**. -1. On the **Private Endpoint connections**, select **Add**. +1. On the **Private Endpoint connections** page, select **Add** > **Express** or **Advanced**. -1. On the **Add Private Endpoint** pane that opens, provide the requested information about the endpoint. + For more information about the **Advanced** option, see [Create a private endpoint](../private-link/create-private-endpoint-portal.md#create-a-private-endpoint). ++1. On the **Add Private Endpoint** pane, provide the requested information about the endpoint. For more information, review [Private Endpoint properties](../private-link/private-endpoint-overview.md#private-endpoint-properties). For more information, review the following documentation: ### Set up virtual network integration -1. In the Azure portal, on the logic app resource menu, under **Settings**, select **Networking**. +1. In the [Azure portal](https://portal.azure.com), on the logic app resource menu, under **Settings**, select **Networking**. ++1. On the **Networking** page, in the **Outbound traffic configuration** section, select the link next to **Virtual network integration**. -1. On the **Networking** pane, on the **Outbound traffic** card, select **VNet integration**. +1. On the **Virtual network integration** page, select **Add virtual network integration**. -1. On the **VNet Integration** pane, select **Add Vnet**. +1. On the **Add virtual network integration** pane, select the subscription, the virtual network that connects to your internal service, and the subnet where to add the logic app. When you finish, select **Connect**. -1. On the **Add VNet Integration** pane, select the subscription and the virtual network that connects to your internal service. + On the **Virtual Network Integration** page, by default, the **Outbound internet traffic** setting is selected, which routes all outbound traffic through the virtual network. In this scenario, the app setting named **WEBSITE_VNET_ROUTE_ALL** is ignored. - After you add virtual network integration, on the **VNet Integration** pane, the **Route All** setting is enabled by default. This setting routes all outbound traffic through the virtual network. When this setting is enabled, the `WEBSITE_VNET_ROUTE_ALL` app setting is ignored. + To find this app setting, on the logic app resource menu, under **Settings**, select **Environment variables**. -1. If you use your own domain name server (DNS) with your virtual network, set your logic app resource's `WEBSITE_DNS_SERVER` app setting to the IP address for your DNS. If you have a secondary DNS, add another app setting named `WEBSITE_DNS_ALT_SERVER`, and set the value also to the IP for your DNS. +1. 
If you use your own domain name server (DNS) with your virtual network, add the **WEBSITE_DNS_SERVER** app setting, if it doesn't already exist, and set the value to the IP address for your DNS. If you have a secondary DNS, add another app setting named **WEBSITE_DNS_ALT_SERVER**, and set the value to the IP address for your secondary DNS. 1. After Azure successfully provisions the virtual network integration, try to run the workflow again. |
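A minimal sketch of setting the **WEBSITE_DNS_SERVER** and **WEBSITE_DNS_ALT_SERVER** app settings described above with the Azure CLI; the resource group, app name, and IP addresses are placeholders, and the `az webapp` command is assumed to work here because Logic Apps (Standard) run on the App Service platform.

```bash
# Hypothetical example: point a Logic App (Standard) named "my-logic-app" in
# resource group "my-rg" at custom primary and secondary DNS servers on the
# virtual network. Replace the IP addresses with your own DNS server IPs.
az webapp config appsettings set \
  --resource-group my-rg \
  --name my-logic-app \
  --settings WEBSITE_DNS_SERVER=10.0.0.4 WEBSITE_DNS_ALT_SERVER=10.0.0.5
```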
machine-learning | Azure Machine Learning Ci Image Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-ci-image-release-notes.md | -Azure Machine Learning checks and validates any machine learning packages that may require an upgrade. Updates incorporate the latest OS-related patches from Canonical as the original Linux OS publisher. In addition to patches applied by the original publisher, Azure Machine Learning updates system packages when updates are available. For details on the patching process, see [Vulnerability Management](./concept-vulnerability-management.md). +Azure Machine Learning checks and validates any machine learning packages that might require an upgrade. Updates incorporate the latest OS-related patches from Canonical as the original Linux OS publisher. In addition to patches applied by the original publisher, Azure Machine Learning updates system packages when updates are available. For details on the patching process, see [Vulnerability Management](./concept-vulnerability-management.md). Main updates provided with each image version are described in the below sections. Major: Image Version: `24.06.10` SDK (azureml-core): `1.56.0` Python: `3.9`+ CUDA: `12.2`-CUDnn==9.1.1 ++CUDnn==`9.1.1` + Nvidia Driver: `535.171.04`+ PyTorch: `1.13.1`+ TensorFlow: `2.15.0` -autokeras==1.0.16 -keras=2.15.0 -ray==2.2.0 -docker version==24.0.9-1 +autokeras==`1.0.16` ++keras=`2.15.0` ++ray==`2.2.0` ++docker version==`24.0.9-1` -## Feb 16, 2024 +## February 16, 2024 Version: `24.01.30` Main changes: Main changes: - `Azure Machine Learning SDK` to version `1.49.0` - `Certifi` updated to `2022.9.24`-- `.NET` updated from `3.1` (EOL) to `6.0`+- `.NET` updated from `3.1` (end-of-life) to `6.0` - `Pyspark` update to `3.3.1` (mitigating log4j 1.2.17 and common-text-1.6 vulnerabilities) - Default `intellisense` to Python `3.10` on the CI - Bug fixes and stability improvements |
machine-learning | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md | Azure portal users can find the latest image available for provisioning the Data Visit the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds. -## June 28, 2024 --Image Version: 24.06.10 --SDK Version: 1.56.0 --Issue fixed: Compute Instance 20.04 image build with SDK 1.56.0 --Major: Image Version: 24.06.10 --- SDK(azureml-core):1.56.0-- Python:3.9-- CUDA: 12.2-- CUDnn==9.1.1-- Nvidia Driver: 535.171.04-- PyTorch: 1.13.1-- TensorFlow: 2.15.0-- autokeras==1.0.16-- keras=2.15.0-- ray==2.2.0-- docker version==24.0.9-1- ## June 17, 2024 [Data Science Virtual Machine - Windows 2022](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2022?tab=Overview) |
machine-learning | How To Deploy Models Mistral | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-mistral.md | -In this article, you learn how to use Azure Machine Learning studio to deploy the Mistral family of models as a service with pay-as-you-go billing. +In this article, you learn how to use Azure Machine Learning studio to deploy the Mistral family of models as serverless APIs with pay-as-you-go token-based billing. -Mistral AI offers two categories of models in Azure Machine Learning studio: +Mistral AI offers two categories of models in Azure Machine Learning studio. These models are available in the [model catalog](concept-model-catalog.md). -- __Premium models__: Mistral Large and Mistral Small. These models are available with pay-as-you-go token based billing with Models as a Service in the studio model catalog. -- __Open models__: Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01. These models are also available in the studio model catalog and can be deployed to dedicated VM instances in your own Azure subscription with managed online endpoints.- -You can browse the Mistral family of models in the [model catalog](concept-model-catalog.md) by filtering on the Mistral collection. +- __Premium models__: Mistral Large and Mistral Small. These models can be deployed as serverless APIs with pay-as-you-go token-based billing. +- __Open models__: Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01. These models can be deployed to managed computes in your own Azure subscription. ++You can browse the Mistral family of models in the model catalog by filtering on the Mistral collection. ## Mistral family of models Additionally, Mistral Large is: - __Strong in coding.__ Code generation, review, and comments. Supports all mainstream coding languages. - __Multi-lingual by design.__ Best-in-class performance in French, German, Spanish, and Italian - in addition to English. Dozens of other languages are supported. - __Responsible AI compliant.__ Efficient guardrails baked in the model, and extra safety layer with the `safe_mode` option.- + # [Mistral Small](#tab/mistral-small) Mistral Small is Mistral AI's most efficient Large Language Model (LLM). It can be used on any language-based task that requires high efficiency and low latency. Mistral Small is: [!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)] -## Deploy Mistral family of models with pay-as-you-go +## Deploy Mistral family of models as a serverless API ++Certain models in the model catalog can be deployed as a serverless API with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription. -Certain models in the model catalog can be deployed as a service with pay-as-you-go. Pay-as-you-go deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription. +**Mistral Large** and **Mistral Small** can be deployed as a serverless API with pay-as-you-go billing and are offered by Mistral AI through the Microsoft Azure Marketplace. 
Mistral AI can change or update the terms of use and pricing of these models. -**Mistral Large** and **Mistral Small** are eligible to be deployed as a service with pay-as-you-go and are offered by Mistral AI through the Microsoft Azure Marketplace. Mistral AI can change or update the terms of use and pricing of these models. ### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An Azure Machine Learning workspace. If you don't have a workspace, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one.+- An Azure Machine Learning workspace. If you don't have a workspace, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one. The serverless API model deployment offering for eligible models in the Mistral family is only available in workspaces created in these regions: - > [!IMPORTANT] - > The pay-as-you-go model deployment offering for eligible models in the Mistral family is only available in workspaces created in the **East US 2** and **Sweden Central** regions. + - East US + - East US 2 + - North Central US + - South Central US + - West US + - West US 3 + - Sweden Central ++ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md) - Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md). The following steps demonstrate the deployment of Mistral Large, but you can use To create a deployment: 1. Go to [Azure Machine Learning studio](https://ml.azure.com/home).-1. Select the workspace in which you want to deploy your models. To use the pay-as-you-go model deployment offering, your workspace must belong to the **East US 2** or **Sweden Central** region. -1. Choose the model (Mistral-large) that you want to deploy from the [model catalog](https://ml.azure.com/model/catalog). +1. Select the workspace in which you want to deploy your model. To use the serverless API model deployment offering, your workspace must belong to one of the regions listed in the [prerequisites](#prerequisites). +1. Choose the model you want to deploy, for example Mistral-large, from the [model catalog](https://ml.azure.com/model/catalog). Alternatively, you can initiate deployment by going to your workspace and selecting **Endpoints** > **Serverless endpoints** > **Create**. -1. On the model's overview page in the model catalog, select **Deploy** and then **Pay-as-you-go**. +1. On the model's overview page in the model catalog, select **Deploy** to open a serverless API deployment window for the model. +1. Select the checkbox to acknowledge the Microsoft purchase policy. - :::image type="content" source="media/how-to-deploy-models-mistral/mistral-deploy-pay-as-you-go.png" alt-text="A screenshot showing how to deploy a model with the pay-as-you-go option." 
lightbox="media/how-to-deploy-models-mistral/mistral-deploy-pay-as-you-go.png"::: + :::image type="content" source="media/how-to-deploy-models-mistral/mistral-deploy-serverless-api.png" alt-text="A screenshot showing how to deploy a model as a serverless API." lightbox="media/how-to-deploy-models-mistral/mistral-deploy-serverless-api.png"::: 1. In the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use. -1. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model. +1. You can also select the **Pricing and terms** tab to learn about pricing for the selected model. 1. If this is your first time deploying the model in the workspace, you have to subscribe your workspace for the particular offering (for example, Mistral-large). This step requires that your account has the **Azure AI Developer role** permissions on the Resource Group, as listed in the prerequisites. Each workspace has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**. Currently you can have only one deployment for each model within a workspace. - :::image type="content" source="media/how-to-deploy-models-mistral/mistral-deploy-marketplace-terms.png" alt-text="A screenshot showing the terms and conditions of a given model." lightbox="media/how-to-deploy-models-mistral/mistral-deploy-marketplace-terms.png"::: - 1. Once you subscribe the workspace for the particular Azure Marketplace offering, subsequent deployments of the _same_ offering in the _same_ workspace don't require subscribing again. If this scenario applies to you, you'll see a **Continue to deploy** option to select. - :::image type="content" source="media/how-to-deploy-models-mistral/mistral-deploy-pay-as-you-go-project.png" alt-text="A screenshot showing a workspace that is already subscribed to the offering." lightbox="media/how-to-deploy-models-mistral/mistral-deploy-pay-as-you-go-project.png"::: + :::image type="content" source="media/how-to-deploy-models-mistral/mistral-deploy-serverless-api-project.png" alt-text="A screenshot showing a workspace that is already subscribed to the offering." lightbox="media/how-to-deploy-models-mistral/mistral-deploy-serverless-api-project.png"::: 1. Give the deployment a name. This name becomes part of the deployment API URL. This URL must be unique in each Azure region. To create a deployment: 1. Select the **Test** tab to start interacting with the model. 1. You can always find the endpoint's details, URL, and access keys by navigating to **Workspace** > **Endpoints** > **Serverless endpoints**. -To learn about billing for Mistral models deployed with pay-as-you-go, see [Cost and quota considerations for Mistral family of models deployed as a service](#cost-and-quota-considerations-for-mistral-family-of-models-deployed-as-a-service). +To learn about billing for Mistral models deployed as a serverless API with pay-as-you-go token-based billing, see [Cost and quota considerations for Mistral family of models deployed as a service](#cost-and-quota-considerations-for-mistral-family-of-models-deployed-as-a-service). ### Consume the Mistral family of models as a service You can consume Mistral Large by using the chat API. For more information on using the APIs, see the [reference](#reference-for-mistral-family-of-models-deployed-as-a-service) section. 
-### Reference for Mistral family of models deployed as a service +## Reference for Mistral family of models deployed as a service Mistral models accept both the [Azure AI Model Inference API](reference-model-inference-api.md) on the route `/chat/completions` and the native [Mistral Chat API](#mistral-chat-api) on `/v1/chat/completions`. Mistral models accept both the [Azure AI Model Inference API](reference-model-in The [Azure AI Model Inference API](reference-model-inference-api.md) schema can be found in the [reference for Chat Completions](reference-model-inference-chat-completions.md) article and an [OpenAPI specification can be obtained from the endpoint itself](reference-model-inference-api.md?tabs=rest#getting-started). -#### Mistral Chat API +### Mistral Chat API Use the method `POST` to send the request to the `/v1/chat/completions` route: The `messages` object has the following fields: | `role` | `string` | The role of the message's author. One of `system`, `user`, or `assistant`. | -#### Example +#### Request example __Body__ The `logprobs` object is a dictionary with the following fields: | `tokens` | `array` of `string` | Selected tokens. | | `top_logprobs` | `array` of `dictionary` | Array of dictionary. In each dictionary, the key is the token and the value is the prob. | -#### Example +#### Response example The following JSON is an example response: Models deployed as a service with pay-as-you-go are protected by Azure AI conten ## Related content - [Model Catalog and Collections](concept-model-catalog.md)+- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md) - [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md)-- [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md)+- [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md) |
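To make the `/v1/chat/completions` route described above more concrete, here is a minimal sketch of a request to a serverless API endpoint. The endpoint URI and key are placeholders you copy from **Workspace** > **Endpoints** > **Serverless endpoints**, and the `Authorization: Bearer` header format is an assumption; confirm the exact values and header on your endpoint's details page.

```bash
# Placeholders: copy the real endpoint URI and key from the serverless
# endpoint's details in Azure Machine Learning studio.
ENDPOINT_URI="<your-endpoint-uri>"
ENDPOINT_KEY="<your-endpoint-key>"

# Send a chat completion request on the native Mistral Chat API route.
# The messages array uses the role/content fields described in the reference.
curl --request POST "${ENDPOINT_URI}/v1/chat/completions" \
  --header "Authorization: Bearer ${ENDPOINT_KEY}" \
  --header "Content-Type: application/json" \
  --data '{
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Explain what a serverless API endpoint is."}
    ],
    "max_tokens": 256
  }'
```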
migrate | Concepts Assessment Calculation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-assessment-calculation.md | Assessments also determine readiness of the recommended target for Microsoft Def - SUSE Linux Enterprise Server 12, 15+ - Debian 9, 10, 11 - Oracle Linux 7.2+, 8- - CentOS Linux 7.2+ - Amazon Linux 2 - For other Operating Systems, the server is marked as **Ready with Conditions**. If a server is not ready to be migrated to Azure, it is marked as **Not Ready** for Microsoft Defender for Servers. |
migrate | Concepts Business Case Calculation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-business-case-calculation.md | You can override the above values in the assumptions section of the Business cas ## Next steps-- [Learn more](./migrate-services-overview.md) about Azure Migrate.+- [Review](best-practices-assessment.md) the best practices for creating assessments. +- Learn more on how to [build](how-to-build-a-business-case.md) and [view](how-to-view-a-business-case.md) a business case. |
migrate | Discover And Assess Using Private Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/discover-and-assess-using-private-endpoints.md | In the configuration manager, select **Set up prerequisites**, and then complete After the appliance is successfully registered, to see the registration details, select **View details**. -4. **Install VDDK**: _(Needed only for VMware appliance.)_ The appliance checks that the VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If it isn't installed, download VDDK 6.7 from VMware. Extract the downloaded zipped contents to the specified location on the appliance, as provided in the installation instructions. +4. **Install VDDK**: _(Needed only for VMware appliance.)_ The appliance checks that the VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If it isn't installed, download VDDK 6.7, 7, or 8 (depending on the compatibility of VDDK and ESXi versions) from VMware. Extract the downloaded zipped contents to the specified location on the appliance, as provided in the installation instructions. You can *rerun prerequisites* at any time during appliance configuration to check whether the appliance meets all the prerequisites. |
migrate | How To Scale Out For Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-scale-out-for-migration.md | To complete the registration of the scale-out appliance, click **import** to get 1. In the pop-up window opened in the previous step, select the location of the copied configuration zip file and click **Save**. Once the files have been successfully imported, the registration of the scale-out appliance will complete and it will show you the timestamp of the last successful import. You can also see the registration details by clicking **View details**.-1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If the VDDK isn't installed, download VDDK 6.7 from VMware. Extract the downloaded zip file contents to the specified location on the appliance, as indicated in the *Installation instructions*. +1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If the VDDK isn't installed, download VDDK 6.7, 7, or 8 (depending on the compatibility of VDDK and ESXi versions) from VMware. Extract the downloaded zip file contents to the specified location on the appliance, as indicated in the *Installation instructions*. The Migration and modernization tool uses the VDDK to replicate servers during migration to Azure. |
migrate | How To Set Up Appliance Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/how-to-set-up-appliance-vmware.md | In the configuration manager, select **Set up prerequisites**, and complete thes After the appliance is successfully registered, select **View details** to see the registration details. -1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If the VDDK isn't installed, download VDDK 6.7 or 7.0 from VMware. Extract the downloaded zip file contents to the specified location on the appliance, the default path is *C:\Program Files\VMware\VMware Virtual Disk Development Kit* as indicated in the *Installation instructions*. +1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If the VDDK isn't installed, download VDDK 6.7, 7.0, or 8 (depending on the compatibility of VDDK and ESXi versions) from VMware. Extract the downloaded zip file contents to the specified location on the appliance, the default path is *C:\Program Files\VMware\VMware Virtual Disk Development Kit* as indicated in the *Installation instructions*. The Migration and modernization tool uses the VDDK to replicate servers during migration to Azure. |
migrate | Migrate Support Matrix Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/migrate-support-matrix-vmware.md | Requirement | Details Before deployment | You should have a project in place with the Azure Migrate: Discovery and assessment tool added to the project.<br /><br />You deploy dependency visualization after setting up an Azure Migrate appliance to discover your on-premises servers.<br /><br />Learn how to [create a project for the first time](../create-manage-projects.md).<br /> Learn how to [add a discovery and assessment tool to an existing project](../how-to-assess.md).<br /> Learn how to set up the Azure Migrate appliance for assessment of [Hyper-V](../how-to-set-up-appliance-hyper-v.md), [VMware](how-to-set-up-appliance-vmware.md), or physical servers. Supported servers | Supported for all servers in your on-premises environment. Log Analytics | Azure Migrate and Modernize uses the [Service Map](/previous-versions/azure/azure-monitor/vm/service-map) solution in [Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md) for dependency visualization.<br /><br /> You associate a new or existing Log Analytics workspace with a project. You can't modify the workspace for a project after you add the workspace. <br /><br /> The workspace must be in the same subscription as the project.<br /><br /> The workspace must be located in the East US, Southeast Asia, or West Europe regions. Workspaces in other regions can't be associated with a project.<br /><br /> The workspace must be in a [region in which Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor®ions=all). You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br /><br /> In Log Analytics, the workspace associated with Azure Migrate is tagged with the project key and project name.-Required agents | On each server that you want to analyze, install the following agents:<br />- [Microsoft Monitoring Agent (MMA)](../../azure-monitor/agents/agent-windows.md)<br />- [Dependency agent](../../azure-monitor/vm/vminsights-dependency-agent-maintenance.md)<br /><br /> If on-premises servers aren't connected to the internet, download and install the Log Analytics gateway on them.<br /><br /> Learn more about installing the [Dependency agent](../how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and the [MMA](../how-to-create-group-machine-dependencies.md#install-the-mma). +Required agents | On each server that you want to analyze, install the following agents:<br />- Azure Monitor agent (AMA)<br />- [Dependency agent](../../azure-monitor/vm/vminsights-dependency-agent-maintenance.md)<br /><br /> If on-premises servers aren't connected to the internet, download and install the Log Analytics gateway on them.<br /><br /> Learn more about installing the [Dependency agent](../how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and the Azure Monitor agent. Log Analytics workspace | The workspace must be in the same subscription as the project.<br /><br /> Azure Migrate supports workspaces that are located in the East US, Southeast Asia, and West Europe regions.<br /><br /> The workspace must be in a region in which [Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor®ions=all). You can monitor Azure VMs in any region. 
The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br /><br /> You can't modify the workspace for a project after you add the workspace. Cost | The Service Map solution doesn't incur any charges for the first 180 days. The count starts from the day you associate the Log Analytics workspace with the project.<br /><br /> After 180 days, standard Log Analytics charges apply.<br /><br /> Using any solution other than Service Map in the associated Log Analytics workspace incurs [standard charges](https://azure.microsoft.com/pricing/details/log-analytics/) for Log Analytics.<br /><br /> When the project is deleted, the workspace isn't automatically deleted. After you delete the project, Service Map usage isn't free. Each node is charged according to the paid tier of the Log Analytics workspace.<br /><br />If you have projects that you created before Azure Migrate general availability (GA on February 28, 2018), you might incur other Service Map charges. To ensure that you're charged only after 180 days, we recommend that you create a new project. Workspaces that were created before GA are still chargeable. Management | When you register agents to the workspace, use the ID and key provided by the project.<br /><br /> You can use the Log Analytics workspace outside Azure Migrate and Modernize.<br /><br /> If you delete the associated project, the workspace isn't deleted automatically. [Delete it manually](../../azure-monitor/logs/manage-access.md).<br /><br /> Don't delete the workspace created by Azure Migrate and Modernize unless you delete the project. If you do, the dependency visualization functionality doesn't work as expected. |
migrate | Prepare For Agentless Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/prepare-for-agentless-migration.md | Azure Migrate automatically handles these configuration changes for the followin - Windows Server 2008 or later - Red Hat Enterprise Linux 9.x, 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x-- CentOS 9.x (Release and Stream), 8.x (Release and Stream), 7.9, 7.7, 7.6, 7.5, 7.4, 6.x+- CentOS 9.x (Release and Stream) - SUSE Linux Enterprise Server 15 SP4, 15 SP3, 15 SP2, 15 SP1, 15 SP0, 12, 11 SP4, 11 SP3 - Ubuntu 22.04, 21.04, 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS - Kali Linux (2016, 2017, 2018, 2019, 2020, 2021, 2022) The preparation script executes the following changes based on the OS type of th Azure Migrate will attempt to install the Microsoft Azure Linux Agent (waagent), a secure, lightweight process that manages Linux & FreeBSD provisioning, and VM interaction with the Azure Fabric Controller. [Learn more](../../virtual-machines/extensions/agent-linux.md) about the functionality enabled for Linux and FreeBSD IaaS deployments via the Linux agent. - Review the list of [required packages](../../virtual-machines/extensions/agent-linux.md#requirements) to install Linux VM agent. Azure Migrate installs the Linux VM agent automatically for RHEL 8.x/7.x/6.x, CentOS 8.x/7.x/6.x, Ubuntu 14.04/16.04/18.04/19.04/19.10/20.04, SUSE 15 SP0/15 SP1/12, Debian 9/8/7, and Oracle 7/6 when using the agentless method of VMware migration. Follow these instructions to [install the Linux Agent manually](../../virtual-machines/extensions/agent-linux.md#installation) for other OS versions. + Review the list of [required packages](../../virtual-machines/extensions/agent-linux.md#requirements) to install Linux VM agent. Azure Migrate installs the Linux VM agent automatically for RHEL 8.x/7.x/6.x, Ubuntu 14.04/16.04/18.04/19.04/19.10/20.04, SUSE 15 SP0/15 SP1/12, Debian 9/8/7, and Oracle 7/6 when using the agentless method of VMware migration. Follow these instructions to [install the Linux Agent manually](../../virtual-machines/extensions/agent-linux.md#installation) for other OS versions. You can use the command to verify the service status of the Azure Linux Agent to make sure it's running. The service name might be **walinuxagent** or **waagent**. Once the hydration changes are done, the script will unmount all the partitions mounted, deactivate volume groups, and then flush the devices. |
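Since the article notes that the Azure Linux Agent service may be registered as either **walinuxagent** or **waagent**, a quick sketch of the status check looks like this on a systemd-based distribution (on older init systems the equivalent `service` command applies):

```bash
# Check whether the Azure Linux Agent is running on the source server.
# The service name varies by distribution, so try both names.
systemctl status waagent 2>/dev/null || systemctl status walinuxagent

# On distributions without systemd, a comparable check might be:
# service waagent status
```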
migrate | Tutorial Discover Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/tutorial-discover-vmware.md | Requirement | Details **vCenter Server/ESXi host** | You need a server running vCenter Server version 8.0, 7.0, 6.7, 6.5, 6.0, or 5.5.<br /><br /> Servers must be hosted on an ESXi host running version 5.5 or later.<br /><br /> On the vCenter Server, allow inbound connections on TCP port 443 so that the appliance can collect configuration and performance metadata.<br /><br /> The appliance connects to vCenter Server on port 443 by default. If the server running vCenter Server listens on a different port, you can modify the port when you provide the vCenter Server details in the appliance configuration manager.<br /><br /> On the ESXi hosts, make sure that inbound access is allowed on TCP port 443 for discovery of installed applications and for agentless dependency analysis on servers. **Azure Migrate appliance** | vCenter Server must have these resources to allocate to a server that hosts the Azure Migrate appliance:<br /><br /> - 32 GB of RAM, 8 vCPUs, and approximately 80 GB of disk storage.<br /><br /> - An external virtual switch and internet access on the appliance server, directly or via a proxy. **Servers** | All Windows and Linux OS versions are supported for discovery of configuration and performance metadata. <br /><br /> For application discovery on servers, all Windows and Linux OS versions are supported. Check the [OS versions supported for agentless dependency analysis](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless).<br /><br /> For discovery of installed applications and for agentless dependency analysis, VMware Tools (version 10.2.1 or later) must be installed and running on servers. Windows servers must have PowerShell version 2.0 or later installed.<br /><br /> To discover SQL Server instances and databases, check [supported SQL Server and Windows OS versions and editions](migrate-support-matrix-vmware.md#sql-server-instance-and-database-discovery-requirements) and Windows authentication mechanisms.<br /><br /> To discover ASP.NET web apps running on IIS web server, check [supported Windows OS and IIS versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements).<br /><br /> To discover Java web apps running on Apache Tomcat web server, check [supported Linux OS and Tomcat versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements). -**SQL Server access** | To discover SQL Server instances and databases, the Windows or SQL Server account [requires these permissions](migrate-support-matrix-vmware.md#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance. You can use the [account provisioning utility](../least-privilege-credentials.md) to create custom accounts or use any existing account that is a member of the sysadmin server role for simplicity. +**SQL Server access** | To discover SQL Server instances and databases, the Windows account, or SQL Server account [requires these permissions](migrate-support-matrix-vmware.md#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance. You can use the [account provisioning utility](../least-privilege-credentials.md) to create custom accounts or use any existing account that is a member of the sysadmin server role for simplicity. 
## Prepare an Azure user account In the configuration manager, select **Set up prerequisites**, and then complete After the appliance is successfully registered, to see the registration details, select **View details**. -1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. Download VDDK 6.7 or 7 (depending on the compatibility of VDDK and ESXi versions) from VMware. Extract the downloaded zip file contents to the specified location on the appliance, the default path is *C:\Program Files\VMware\VMware Virtual Disk Development Kit* as indicated in the *Installation instructions*. +1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. Download VDDK 6.7, 7, or 8(depending on the compatibility of VDDK and ESXi versions) from VMware. Extract the downloaded zip file contents to the specified location on the appliance, the default path is *C:\Program Files\VMware\VMware Virtual Disk Development Kit* as indicated in the *Installation instructions*. The Migration and modernization tool uses the VDDK to replicate servers during migration to Azure. Details such as OS license support status, inventory, database instances, etc. a You can gain deeper insights into the support posture of your environment from the **Discovered servers** and **Discovered database instances** sections. -The **Operating system license support status** column displays the support status of the Operating system, whether it is in mainstream support, extended support, or out of support. Selecting the support status opens a pane on the right which provides clear guidance regarding actionable steps that can be taken to secure servers and databases in extended support or out of support. +The **Operating system license support status** column displays the support status of the Operating system, whether it is in mainstream support, extended support, or out of support. Selecting the support status opens a pane on the right, which provides clear guidance regarding actionable steps that can be taken to secure servers and databases in extended support or out of support. To view the remaining duration until end of support, that is, the number of months for which the license is valid, select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months. -The **Database instances** displays the number of instances discovered by Azure Migrate. Select the number of instances to view the database instance details. The **Database instance license support status** displays the support status of the database instance. Selecting the support status opens a pane on the right which provides clear guidance regarding actionable steps that can be taken to secure servers and databases in extended support or out of support. +The **Database instances** displays the number of instances discovered by Azure Migrate. Select the number of instances to view the database instance details. The **Database instance license support status** displays the support status of the database instance. Selecting the support status opens a pane on the right, which provides clear guidance regarding actionable steps that can be taken to secure servers and databases in extended support or out of support. To view the remaining duration until end of support, that is, the number of months for which the license is valid, select **Columns** > **Support ends in** > **Submit**. 
The **Support ends in** column displays the duration in months. |
mysql | Concepts Server Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-server-parameters.md | Refer to the following sections to learn more about the limits of the several co ### lower_case_table_names -For MySQL version 5.7, default value is 1 in Azure Database for MySQL flexible server. It's important to note that while it is possible to change the supported value to 2, reverting from 2 back to 1 isn't allowed. Contact our [support team](https://azure.microsoft.com/support/create-ticket/) for assistance in changing the default value. +For MySQL version 5.7, default value is 1 in Azure Database for MySQL flexible server. It's important to note that while it's possible to change the supported value to 2, reverting from 2 back to 1 isn't allowed. Contact our [support team](https://azure.microsoft.com/support/create-ticket/) for assistance in changing the default value. For [MySQL version 8.0+](https://dev.mysql.com/doc/refman/8.0/en/identifier-case-sensitivity.html) lower_case_table_names can only be configured when initializing the server. [Learn more](https://dev.mysql.com/doc/refman/8.0/en/identifier-case-sensitivity.html). Changing the lower_case_table_names setting after the server is initialized is prohibited. For MySQL version 8.0, default value is 1 in Azure Database for MySQL flexible server. Supported value for MySQL version 8.0 are 1 and 2 in Azure Database for MySQL flexible server. Contact our [support team](https://azure.microsoft.com/support/create-ticket/) for assistance in changing the default value during server creation. If [`log_bin_trust_function_creators`] is set to OFF, if you try to create trigg ### innodb_buffer_pool_size Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size) to learn more about this parameter.+The [Physical Memory Size](./concepts-service-tiers-storage.md#physical-memory-size-gb) (GB) in the table below represents the available random-access memory (RAM) in gigabytes (GB) on your Azure Database for MySQL flexible server. 
-|**Pricing Tier**|**vCore(s)**|**Memory Size (GiB)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| +|**Pricing Tier**|**vCore(s)**|**Physical Memory Size (GiB)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| |||||||-|Burstable (B1s)|1|1|134217728|33554432|134217728| -|Burstable (B1ms)|1|2|536870912|134217728|536870912| +|Burstable (B1s)|1|1|134217728|33554432|268435456| +|Burstable (B1ms)|1|2|536870912|134217728|1073741824| |Burstable|2|4|2147483648|134217728|2147483648|-|General Purpose|2|8|5368709120|134217728|5368709120| +|General Purpose|2|8|4294967296|134217728|5368709120| |General Purpose|4|16|12884901888|134217728|12884901888| |General Purpose|8|32|25769803776|134217728|25769803776| |General Purpose|16|64|51539607552|134217728|51539607552| Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/innodb- |Business Critical|4|32|25769803776|134217728|25769803776| |Business Critical|8|64|51539607552|134217728|51539607552| |Business Critical|16|128|103079215104|134217728|103079215104|+|Business Critical|20|160|128849018880|134217728|128849018880| |Business Critical|32|256|206158430208|134217728|206158430208| |Business Critical|48|384|309237645312|134217728|309237645312| |Business Critical|64|504|405874409472|134217728|405874409472| Azure Database for MySQL flexible server supports at largest, **4 TB**, in a sin ### innodb_log_file_size -[innodb_log_file_size](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_file_size) is the size in bytes of each [log file](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_log_file) in a [log group](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_log_group). The combined size of log files [(innodb_log_file_size](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_file_size) * [innodb_log_files_in_group](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_files_in_group)) can't exceed a maximum value that is slightly less than 512 GB). A bigger log file size is better for performance, but it has a drawback that the recovery time after a crash is high. You need to balance recovery time in the rare event of a crash recovery versus maximizing throughput during peak operations. These can also result in longer restart times. You can configure innodb_log_size to any of these values - 256 MB, 512 MB, 1 GB or 2 GB for Azure Database for MySQL flexible server. The parameter is static and requires a restart. +[innodb_log_file_size](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_file_size) is the size in bytes of each [log file](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_log_file) in a [log group](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_log_group). The combined size of log files [(innodb_log_file_size](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_file_size) * [innodb_log_files_in_group](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_files_in_group)) can't exceed a maximum value that is slightly less than 512 GB). A bigger log file size is better for performance, but it has a drawback that the recovery time after a crash is high. You need to balance recovery time in the rare event of a crash recovery versus maximizing throughput during peak operations. These can also result in longer restart times. 
You can configure innodb_log_size to any of these values - 256 MB, 512 MB, 1 GB, or 2 GB for Azure Database for MySQL flexible server. The parameter is static and requires a restart. > [!NOTE] > If you have changed the parameter innodb_log_file_size from default, check if the value of "show global status like 'innodb_buffer_pool_pages_dirty'" stays at 0 for 30 seconds to avoid restart delay. Upon initial deployment, an Azure Database for MySQL flexible server instance in In Azure Database for MySQL flexible server this parameter specifies the number of seconds the service waits before purging the binary log file. -The binary log contains "events" that describe database changes such as table creation operations or changes to table data. It also contains events for statements that potentially could have made changes. The binary log is used mainly for two purposes, replication and data recovery operations. Usually, the binary logs are purged as soon as the handle is free from service, backup or the replica set. If there are multiple replicas, the binary logs wait for the slowest replica to read the changes before it's been purged. If you want to persist binary logs for a more duration of time, you can configure the parameter binlog_expire_logs_seconds. If the binlog_expire_logs_seconds is set to 0, which is the default value, it purges as soon as the handle to the binary log is freed. If binlog_expire_logs_seconds > 0, then it would wait until the seconds configured before it purges. For Azure Database for MySQL flexible server, managed features like backup and read replica purging of binary files are handled internally. When you replicate the data-out from Azure Database for MySQL flexible server, this parameter needs to be set in primary to avoid purging of binary logs before the replica reads from the changes from the primary. If you set the binlog_expire_logs_seconds to a higher value, then the binary logs won't be purged soon enough and can lead to increase in the storage billing. +The binary log contains "events" that describe database changes such as table creation operations or changes to table data. It also contains events for statements that potentially could have made changes. The binary log is used mainly for two purposes, replication and data recovery operations. Usually, the binary logs are purged as soon as the handle is free from service, backup, or the replica set. If there are multiple replicas, the binary logs wait for the slowest replica to read the changes before it's been purged. If you want to persist binary logs for a more duration of time, you can configure the parameter binlog_expire_logs_seconds. If the binlog_expire_logs_seconds is set to 0, which is the default value, it purges as soon as the handle to the binary log is freed. If binlog_expire_logs_seconds > 0, then it would wait until the seconds configured before it purges. For Azure Database for MySQL flexible server, managed features like backup and read replica purging of binary files are handled internally. When you replicate the data-out from Azure Database for MySQL flexible server, this parameter needs to be set in primary to avoid purging of binary logs before the replica reads from the changes from the primary. If you set the binlog_expire_logs_seconds to a higher value, then the binary logs won't be purged soon enough and can lead to increase in the storage billing. ### event_scheduler |
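As a hedged illustration of working with the server parameters discussed above (for example, `binlog_expire_logs_seconds`) on an Azure Database for MySQL flexible server, the following Azure CLI sketch shows how a parameter could be inspected and changed; the resource group and server names are placeholders, and the 24-hour value is only an example, keeping in mind that longer binary log retention increases storage billing.

```bash
# Show the current binary log retention setting on a flexible server.
az mysql flexible-server parameter show \
  --resource-group my-rg \
  --server-name my-flexible-server \
  --name binlog_expire_logs_seconds

# Example only: retain binary logs for 24 hours (86400 seconds), for instance
# when replicating data out and the replica needs time to read the changes.
az mysql flexible-server parameter set \
  --resource-group my-rg \
  --server-name my-flexible-server \
  --name binlog_expire_logs_seconds \
  --value 86400
```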
mysql | Azure Pipelines Mysql Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/azure-pipelines-mysql-deploy.md | - Title: Azure Pipelines task for Azure Database for MySQL single server -description: Enable Azure Database for MySQL Single Server CLI task for using with Azure Pipelines ------ Previously updated : 09/14/2022---# Azure Pipelines for Azure Database for MySQL - Single Server ---Get started with Azure Database for MySQL by deploying a database update with Azure Pipelines. Azure Pipelines lets you build, test, and deploy with continuous integration (CI) and continuous delivery (CD) using [Azure DevOps](/azure/devops/). --You'll use the [Azure Database for MySQL Deployment task](/azure/devops/pipelines/tasks/deploy/azure-mysql-deployment). The Azure Database for MySQL Deployment task only works with Azure Database for MySQL single server. --## Prerequisites --Before you begin, you need: -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An active Azure DevOps organization. [Sign up for Azure Pipelines](/azure/devops/pipelines/get-started/pipelines-sign-up).-- A GitHub repository that you can use for your pipeline. If you donΓÇÖt have an existing repository, see [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline). --This quickstart uses the resources created in either of these guides as a starting point: -- [Create an Azure Database for MySQL server using Azure portal](/azure/mysql/quickstart-create-mysql-server-database-using-azure-portal)-- [Create an Azure Database for MySQL server using Azure CLI](/azure/mysql/quickstart-create-mysql-server-database-using-azure-cli)---## Create your pipeline --You'll use the basic starter pipeline as a basis for your pipeline. --1. Sign in to your Azure DevOps organization and go to your project. --2. In your project, navigate to the **Pipelines** page. Then choose the action to create a new pipeline. --3. Walk through the steps of the wizard by first selecting GitHub as the location of your source code. --4. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials. --5. When the list of repositories appears, select your desired repository. --6. Azure Pipelines will analyze your repository and offer configuration options. Select **Starter pipeline**. -- :::image type="content" source="media/azure-pipelines-mysql-task/configure-pipeline-option.png" alt-text="Screenshot of Select Starter pipeline."::: - -## Create a secret --You'll need to know your database server name, SQL username, and SQL password to use with the [Azure Database for MySQL Deployment task](/azure/devops/pipelines/tasks/deploy/azure-mysql-deployment). --For security, you'll want to save your SQL password as a secret variable in the pipeline settings UI for your pipeline. --1. Go to the **Pipelines** page, select the appropriate pipeline, and then select **Edit**. -1. Select **Variables**. -1. Add a new variable named `SQLpass` and select **Keep this value secret** to encrypt and save the variable. -- :::image type="content" source="media/azure-pipelines-mysql-task/save-secret-variable.png" alt-text="Screenshot of adding a secret variable."::: - -1. Select **Ok** and **Save** to add the variable. --## Verify permissions for your database --To access your MySQL database with Azure Pipelines, you need to set your database to accept connections from all Azure resources. --1. 
In the Azure portal, open your database resource. -1. Select **Connection security**. -1. Toggle **Allow access to Azure services** to **Yes**. -- :::image type="content" source="media/azure-pipelines-mysql-task/allow-azure-access-mysql.png" alt-text="Screenshot of setting MySQL to allow Azure connections."::: --## Add the Azure Database for MySQL Deployment task --In this example, we'll create a new databases named `quickstartdb` and add an inventory table. The inline SQL script will: --- Delete `quickstartdb` if it exists and create a new `quickstartdb` database.-- Delete the table `inventory` if it exists and creates a new `inventory` table.-- Insert three rows into `inventory`.-- Show all the rows.-- Update the value of the first row in `inventory`.-- Delete the second row in `inventory`.--You'll need to replace the following values in your deployment task. --|Input |Description |Example | -|||| -|`azureSubscription` | Authenticate with your Azure Subscription with a [service connection](/azure/devops/pipelines/library/connect-to-azure). | `My Subscription` | -|`ServerName` | The name of your Azure Database for MySQL server. | `fabrikam.mysql.database.azure.com` | -|`SqlUsername` | The user name of your Azure Database for MySQL. | `mysqladmin@fabrikam` | -|`SqlPassword` | The password for the username. This should be defined as a secret variable. | `$(SQLpass)` | --```yaml --trigger: -- main--pool: - vmImage: ubuntu-latest --steps: -- task: AzureMysqlDeployment@1- inputs: - azureSubscription: '<your-subscription> - ServerName: '<db>.mysql.database.azure.com' - SqlUsername: '<username>@<db>' - SqlPassword: '$(SQLpass)' - TaskNameSelector: 'InlineSqlTask' - SqlInline: | - DROP DATABASE IF EXISTS quickstartdb; - CREATE DATABASE quickstartdb; - USE quickstartdb; - - -- Create a table and insert rows - DROP TABLE IF EXISTS inventory; - CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER); - INSERT INTO inventory (name, quantity) VALUES ('banana', 150); - INSERT INTO inventory (name, quantity) VALUES ('orange', 154); - INSERT INTO inventory (name, quantity) VALUES ('apple', 100); - - -- Read - SELECT * FROM inventory; - - -- Update - UPDATE inventory SET quantity = 200 WHERE id = 1; - SELECT * FROM inventory; - - -- Delete - DELETE FROM inventory WHERE id = 2; - SELECT * FROM inventory; - IpDetectionMethod: 'AutoDetect' -``` --## Deploy and verify resources --Select **Save and run** to deploy your pipeline. The pipeline job will be launched and after few minutes, the job status should indicate `Success`. --You can verify that your pipeline ran successfully within the `AzureMysqlDeployment` task in the pipeline run. --Open the task and verify that the last two entries show two rows in `inventory`. There are two rows because the second row has been deleted. ----## Clean up resources --When youΓÇÖre done working with your pipeline, delete `quickstartdb` in your Azure Database for MySQL. You can also delete the deployment pipeline you created. --## Next steps --> [!div class="nextstepaction"] -> [Tutorial: Build an ASP.NET Core and Azure SQL Database app in Azure App Service](../../app-service/tutorial-dotnetcore-sqldb-app.md) |
mysql | Concepts Audit Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-audit-logs.md | - Title: Audit logs - Azure Database for MySQL -description: Describes the audit logs available in Azure Database for MySQL, and the available parameters for enabling logging levels. ----- Previously updated : 06/20/2022---# Audit Logs in Azure Database for MySQL ----In Azure Database for MySQL, the audit log is available to users. The audit log can be used to track database-level activity and is commonly used for compliance. --## Configure audit logging -->[!IMPORTANT] -> It is recommended to only log the event types and users required for your auditing purposes to ensure your server's performance is not heavily impacted and minimum amount of data is collected. --By default the audit log is disabled. To enable it, set `audit_log_enabled` to ON. --Other parameters you can adjust include: --- `audit_log_events`: controls the events to be logged. See below table for specific audit events.-- `audit_log_include_users`: MySQL users to be included for logging. The default value for this parameter is empty, which will include all the users for logging. This has higher priority over `audit_log_exclude_users`. Max length of the parameter is 512 characters.-- `audit_log_exclude_users`: MySQL users to be excluded from logging. Max length of the parameter is 512 characters.--> [!NOTE] -> `audit_log_include_users` has higher priority over `audit_log_exclude_users`. For example, if `audit_log_include_users` = `demouser` and `audit_log_exclude_users` = `demouser`, the user will be included in the audit logs because `audit_log_include_users` has higher priority. --| **Event** | **Description** | -||| -| `CONNECTION` | - Connection initiation (successful or unsuccessful) <br> - User reauthentication with different user/password during session <br> - Connection termination | -| `DML_SELECT`| SELECT queries | -| `DML_NONSELECT` | INSERT/DELETE/UPDATE queries | -| `DML` | DML = DML_SELECT + DML_NONSELECT | -| `DDL` | Queries like "DROP DATABASE" | -| `DCL` | Queries like "GRANT PERMISSION" | -| `ADMIN` | Queries like "SHOW STATUS" | -| `GENERAL` | All in DML_SELECT, DML_NONSELECT, DML, DDL, DCL, and ADMIN | -| `TABLE_ACCESS` | - Available for MySQL 5.7 and MySQL 8.0 <br> - Table read statements, such as SELECT or INSERT INTO ... SELECT <br> - Table delete statements, such as DELETE or TRUNCATE TABLE <br> - Table insert statements, such as INSERT or REPLACE <br> - Table update statements, such as UPDATE | --## Access audit logs --Audit logs are integrated with Azure Monitor Diagnostic Logs. Once you've enabled audit logs on your MySQL server, you can emit them to Azure Monitor logs, Event Hubs, or Azure Storage. To learn more about how to enable diagnostic logs in the Azure portal, see the [audit log portal article](how-to-configure-audit-logs-portal.md#set-up-diagnostic-logs). -->[!NOTE] ->Premium Storage accounts are not supported if you sending the logs to Azure storage via diagnostics and settings --## Diagnostic Logs Schemas --The following sections describe what's output by MySQL audit logs based on the event type. Depending on the output method, the fields included and the order in which they appear may vary. --### Connection --| **Property** | **Description** | -||| -| `TenantId` | Your tenant ID | -| `SourceSystem` | `Azure` | -| `TimeGenerated [UTC]` | Time stamp when the log was recorded in UTC | -| `Type` | Type of the log. 
Always `AzureDiagnostics` | -| `SubscriptionId` | GUID for the subscription that the server belongs to | -| `ResourceGroup` | Name of the resource group the server belongs to | -| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` | -| `ResourceType` | `Servers` | -| `ResourceId` | Resource URI | -| `Resource` | Name of the server | -| `Category` | `MySqlAuditLogs` | -| `OperationName` | `LogEvent` | -| `LogicalServerName_s` | Name of the server | -| `event_class_s` | `connection_log` | -| `event_subclass_s` | `CONNECT`, `DISCONNECT`, `CHANGE USER` (only available for MySQL 5.7) | -| `connection_id_d` | Unique connection ID generated by MySQL | -| `host_s` | Blank | -| `ip_s` | IP address of client connecting to MySQL | -| `user_s` | Name of user executing the query | -| `db_s` | Name of database connected to | -| `\_ResourceId` | Resource URI | --### General --Schema below applies to GENERAL, DML_SELECT, DML_NONSELECT, DML, DDL, DCL, and ADMIN event types. --> [!NOTE] -> For `sql_text`, log will be truncated if it exceeds 2048 characters. --| **Property** | **Description** | -||| -| `TenantId` | Your tenant ID | -| `SourceSystem` | `Azure` | -| `TimeGenerated [UTC]` | Time stamp when the log was recorded in UTC | -| `Type` | Type of the log. Always `AzureDiagnostics` | -| `SubscriptionId` | GUID for the subscription that the server belongs to | -| `ResourceGroup` | Name of the resource group the server belongs to | -| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` | -| `ResourceType` | `Servers` | -| `ResourceId` | Resource URI | -| `Resource` | Name of the server | -| `Category` | `MySqlAuditLogs` | -| `OperationName` | `LogEvent` | -| `LogicalServerName_s` | Name of the server | -| `event_class_s` | `general_log` | -| `event_subclass_s` | `LOG`, `ERROR`, `RESULT` (only available for MySQL 5.6) | -| `event_time` | Query start time in UTC timestamp | -| `error_code_d` | Error code if query failed. `0` means no error | -| `thread_id_d` | ID of thread that executed the query | -| `host_s` | Blank | -| `ip_s` | IP address of client connecting to MySQL | -| `user_s` | Name of user executing the query | -| `sql_text_s` | Full query text | -| `\_ResourceId` | Resource URI | --### Table access --> [!NOTE] -> Table access logs are only output for MySQL 5.7.<br>For `sql_text`, log will be truncated if it exceeds 2048 characters. --| **Property** | **Description** | -||| -| `TenantId` | Your tenant ID | -| `SourceSystem` | `Azure` | -| `TimeGenerated [UTC]` | Time stamp when the log was recorded in UTC | -| `Type` | Type of the log. Always `AzureDiagnostics` | -| `SubscriptionId` | GUID for the subscription that the server belongs to | -| `ResourceGroup` | Name of the resource group the server belongs to | -| `ResourceProvider` | Name of the resource provider. 
Always `MICROSOFT.DBFORMYSQL` | -| `ResourceType` | `Servers` | -| `ResourceId` | Resource URI | -| `Resource` | Name of the server | -| `Category` | `MySqlAuditLogs` | -| `OperationName` | `LogEvent` | -| `LogicalServerName_s` | Name of the server | -| `event_class_s` | `table_access_log` | -| `event_subclass_s` | `READ`, `INSERT`, `UPDATE`, or `DELETE` | -| `connection_id_d` | Unique connection ID generated by MySQL | -| `db_s` | Name of database accessed | -| `table_s` | Name of table accessed | -| `sql_text_s` | Full query text | -| `\_ResourceId` | Resource URI | --## Analyze logs in Azure Monitor Logs --Once your audit logs are piped to Azure Monitor Logs through Diagnostic Logs, you can perform further analysis of your audited events. Below are some sample queries to help you get started. Make sure to update the below with your server name. --- List GENERAL events on a particular server-- ```kusto - AzureDiagnostics - | where LogicalServerName_s == '<your server name>' - | where Category == 'MySqlAuditLogs' and event_class_s == "general_log" - | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s - | order by TimeGenerated asc nulls last - ``` --- List CONNECTION events on a particular server-- ```kusto - AzureDiagnostics - | where LogicalServerName_s == '<your server name>' - | where Category == 'MySqlAuditLogs' and event_class_s == "connection_log" - | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s - | order by TimeGenerated asc nulls last - ``` --- Summarize audited events on a particular server-- ```kusto - AzureDiagnostics - | where LogicalServerName_s == '<your server name>' - | where Category == 'MySqlAuditLogs' - | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s - | summarize count() by event_class_s, event_subclass_s, user_s, ip_s - ``` --- Graph the audit event type distribution on a particular server-- ```kusto - AzureDiagnostics - | where LogicalServerName_s == '<your server name>' - | where Category == 'MySqlAuditLogs' - | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s - | summarize count() by LogicalServerName_s, bin(TimeGenerated, 5m) - | render timechart - ``` --- List audited events across all MySQL servers with Diagnostic Logs enabled for audit logs-- ```kusto - AzureDiagnostics - | where Category == 'MySqlAuditLogs' - | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s - | order by TimeGenerated asc nulls last - ``` --## Next steps --- [How to configure audit logs in the Azure portal](how-to-configure-audit-logs-portal.md) |
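The Kusto samples above can also be run programmatically. The following is a minimal sketch, not part of the original article, assuming the audit logs are routed to a Log Analytics workspace, that the `azure-identity` and `azure-monitor-query` Python packages are installed, and that the workspace ID and server name shown are placeholders you replace with your own values.

```python
# Minimal sketch: run one of the audit-log Kusto queries above against a
# Log Analytics workspace. Assumes diagnostic settings route MySqlAuditLogs
# to that workspace and that you have signed in (e.g. via `az login`).
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<your Log Analytics workspace ID>"  # placeholder
SERVER_NAME = "<your server name>"                  # placeholder

query = f"""
AzureDiagnostics
| where LogicalServerName_s == '{SERVER_NAME}'
| where Category == 'MySqlAuditLogs' and event_class_s == 'connection_log'
| project TimeGenerated, event_subclass_s, user_s, ip_s
| order by TimeGenerated asc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=1))

for table in response.tables:
    for row in table.rows:
        print(list(row))
```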
mysql | Concepts Azure Ad Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-azure-ad-authentication.md | - Title: Active Directory authentication - Azure Database for MySQL -description: Learn about the concepts of Microsoft Entra ID for authentication with Azure Database for MySQL ----- Previously updated : 06/20/2022---# Use Microsoft Entra ID for authenticating with MySQL ----Microsoft Entra authentication is a mechanism of connecting to Azure Database for MySQL using identities defined in Microsoft Entra ID. -With Microsoft Entra authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management. --Benefits of using Microsoft Entra ID include: --- Authentication of users across Azure Services in a uniform way-- Management of password policies and password rotation in a single place-- Multiple forms of authentication supported by Microsoft Entra ID, which can eliminate the need to store passwords-- Customers can manage database permissions using external (Microsoft Entra ID) groups.-- Microsoft Entra authentication uses MySQL database users to authenticate identities at the database level-- Support of token-based authentication for applications connecting to Azure Database for MySQL--To configure and use Microsoft Entra authentication, use the following process: --1. Create and populate Microsoft Entra ID with user identities as needed. -2. Optionally associate or change the Active Directory currently associated with your Azure subscription. -3. Create a Microsoft Entra administrator for the Azure Database for MySQL server. -4. Create database users in your database mapped to Microsoft Entra identities. -5. Connect to your database by retrieving a token for a Microsoft Entra identity and logging in. --> [!NOTE] -> To learn how to create and populate Microsoft Entra ID, and then configure Microsoft Entra ID with Azure Database for MySQL, see [Configure and sign in with Microsoft Entra ID for Azure Database for MySQL](how-to-configure-sign-in-azure-ad-authentication.md). --## Architecture --The following high-level diagram summarizes how authentication works using Microsoft Entra authentication with Azure Database for MySQL. The arrows indicate communication pathways. --![authentication flow][1] --## Administrator structure --When using Microsoft Entra authentication, there are two Administrator accounts for the MySQL server; the original MySQL administrator and the Microsoft Entra administrator. Only the administrator based on a Microsoft Entra account can create the first Microsoft Entra ID contained database user in a user database. The Microsoft Entra administrator login can be a Microsoft Entra user or a Microsoft Entra group. When the administrator is a group account, it can be used by any group member, enabling multiple Microsoft Entra administrators for the MySQL server. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Microsoft Entra ID without changing the users or permissions in the MySQL server. Only one Microsoft Entra administrator (a user or group) can be configured at any time. --![admin structure][2] --## Permissions --To create new users that can authenticate with Microsoft Entra ID, you must be the designated Microsoft Entra administrator. 
This user is assigned by configuring the Microsoft Entra Administrator account for a specific Azure Database for MySQL server. --To create a new Microsoft Entra database user, you must connect as the Microsoft Entra administrator. This is demonstrated in [Configure and Login with Microsoft Entra ID for Azure Database for MySQL](how-to-configure-sign-in-azure-ad-authentication.md). --Microsoft Entra authentication is only possible if a Microsoft Entra admin was created for the Azure Database for MySQL server. If the Microsoft Entra admin was removed from the server, existing Microsoft Entra users created previously can no longer connect to the database using their Microsoft Entra credentials. --<a name='connecting-using-azure-ad-identities'></a> --## Connecting using Microsoft Entra identities --Microsoft Entra authentication supports the following methods of connecting to a database using Microsoft Entra identities: --- Microsoft Entra Password-- Microsoft Entra integrated-- Microsoft Entra Universal with MFA-- Using Active Directory Application certificates or client secrets-- [Managed Identity](how-to-connect-with-managed-identity.md)--Once you have authenticated against Active Directory, you retrieve a token. This token is your password for logging in. --Management operations, such as adding new users, are currently only supported for Microsoft Entra user roles. --> [!NOTE] -> For more details on how to connect with an Active Directory token, see [Configure and sign in with Microsoft Entra ID for Azure Database for MySQL](how-to-configure-sign-in-azure-ad-authentication.md). --## Additional considerations --- Microsoft Entra authentication is only available for MySQL 5.7 and newer.-- Only one Microsoft Entra administrator can be configured for an Azure Database for MySQL server at any time.-- Only a Microsoft Entra administrator for MySQL can initially connect to the Azure Database for MySQL using a Microsoft Entra account. The Active Directory administrator can configure subsequent Microsoft Entra database users.-- If a user is deleted from Microsoft Entra ID, that user can no longer authenticate with Microsoft Entra ID, and therefore it's no longer possible to acquire an access token for that user. In this case, although the matching user will still be in the database, it won't be possible to connect to the server with that user.-> [!NOTE] -> Sign-in with the deleted Microsoft Entra user is still possible until the token expires (up to 60 minutes from token issuance). If you also remove the user from Azure Database for MySQL, this access is revoked immediately. -- If the Microsoft Entra admin is removed from the server, the server is no longer associated with a Microsoft Entra tenant, and therefore all Microsoft Entra logins are disabled for the server. Adding a new Microsoft Entra admin from the same tenant re-enables Microsoft Entra logins.-- Azure Database for MySQL matches access tokens to the Azure Database for MySQL user using the user's unique Microsoft Entra user ID, as opposed to using the username. This means that if a Microsoft Entra user is deleted in Microsoft Entra ID and a new user is created with the same name, Azure Database for MySQL considers that a different user. 
Therefore, if a user is deleted from Microsoft Entra ID and then a new user with the same name added, the new user will not be able to connect with the existing user.--> [!NOTE] -> The subscriptions of an Azure MySQL with Microsoft Entra authentication enabled cannot be transferred to another tenant or directory. --## Next steps --- To learn how to create and populate Microsoft Entra ID, and then configure Microsoft Entra ID with Azure Database for MySQL, see [Configure and sign in with Microsoft Entra ID for Azure Database for MySQL](how-to-configure-sign-in-azure-ad-authentication.md).-- For an overview of logins, and database users for Azure Database for MySQL, see [Create users in Azure Database for MySQL](how-to-create-users.md).--<!--Image references--> --[1]: ./media/concepts-azure-ad-authentication/authentication-flow.png -[2]: ./media/concepts-azure-ad-authentication/admin-structure.png |
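To make the token-based sign-in described in step 5 above concrete, here is a minimal Python sketch, not from the original article. It assumes the `azure-identity` and `mysql-connector-python` packages, that a Microsoft Entra user is already mapped in the database, and that the token scope shown is the OSS RDBMS scope used for Azure Database for MySQL Entra authentication; the server, user, and database names are placeholders.

```python
# Minimal sketch: acquire a Microsoft Entra access token and use it as the
# password when connecting (the token is the password, as noted above).
import mysql.connector
from azure.identity import DefaultAzureCredential

# Assumption: this is the OSS RDBMS scope used to request the access token.
SCOPE = "https://ossrdbms-aad.database.windows.net/.default"

token = DefaultAzureCredential().get_token(SCOPE).token

conn = mysql.connector.connect(
    host="mydemoserver.mysql.database.azure.com",  # placeholder server
    user="entra_user@mydemoserver",                # user@servername for single server
    password=token,                                # the Entra access token is the password
    database="mydb",                               # placeholder database
    ssl_disabled=False,                            # keep TLS on
)
print(conn.is_connected())
conn.close()
```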
mysql | Concepts Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-backup.md | - Title: Backup and restore - Azure Database for MySQL -description: Learn about automatic backups and restoring your Azure Database for MySQL server. ------ Previously updated : 06/20/2022---# Backup and restore in Azure Database for MySQL ----Azure Database for MySQL automatically creates server backups and stores them in user configured locally redundant or geo-redundant storage. Backups can be used to restore your server to a point-in-time. Backup and restore are an essential part of any business continuity strategy because they protect your data from accidental corruption or deletion. --## Backups --Azure Database for MySQL takes backups of the data files and the transaction log. These backups allow you to restore a server to any point-in-time within your configured backup retention period. The default backup retention period is seven days. You can [optionally configure it](how-to-restore-server-portal.md#set-backup-configuration) up to 35 days. All backups are encrypted using AES 256-bit encryption. --These backup files are not user-exposed and cannot be exported. These backups can only be used for restore operations in Azure Database for MySQL. You can use [mysqldump](concepts-migrate-dump-restore.md) to copy a database. --The backup type and frequency is depending on the backend storage for the servers. --### Backup type and frequency --#### Basic storage servers --The Basic storage is the backend storage supporting [Basic tier servers](concepts-pricing-tiers.md). Backups on Basic storage servers are snapshot-based. A full database snapshot is performed daily. There are no differential backups performed for basic storage servers and all snapshot backups are full database backups only. --Transaction log backups occur every five minutes. --#### General purpose storage v1 servers (supports up to 4-TB storage) --The General purpose storage is the backend storage supporting [General Purpose](concepts-pricing-tiers.md) and [Memory Optimized tier](concepts-pricing-tiers.md) server. For servers with general purpose storage up to 4 TB, full backups occur once every week. Differential backups occur twice a day. Transaction log backups occur every five minutes. The backups on general purpose storage up to 4-TB storage are not snapshot-based and consumes IO bandwidth at the time of backup. For large databases (> 1 TB) on 4-TB storage, we recommend you consider --- Provisioning more IOPs to account for backup IOs OR-- Alternatively, migrate to general purpose storage that supports up to 16-TB storage if the underlying storage infrastructure is available in your preferred [Azure regions](./concepts-pricing-tiers.md#storage). There is no additional cost for general purpose storage that supports up to 16-TB storage. For assistance with migration to 16-TB storage, please open a support ticket from Azure portal.--#### General purpose storage v2 servers (supports up to 16-TB storage) --In a subset of [Azure regions](./concepts-pricing-tiers.md#storage), all newly provisioned servers can support general purpose storage up to 16-TB storage. In other words, storage up to 16-TB storage is the default general purpose storage for all the [regions](concepts-pricing-tiers.md#storage) where it is supported. Backups on these 16-TB storage servers are snapshot-based. The first snapshot backup is scheduled immediately after a server is created. 
Snapshot backups are taken daily once. Transaction log backups occur every five minutes. --For more information of Basic and General purpose storage, refer [storage documentation](./concepts-pricing-tiers.md#storage). --### Backup retention --Backups are retained based on the backup retention period setting on the server. You can select a retention period of 7 to 35 days. The default retention period is 7 days. You can set the retention period during server creation or later by updating the backup configuration using [Azure portal](./how-to-restore-server-portal.md#set-backup-configuration) or [Azure CLI](./how-to-restore-server-cli.md#set-backup-configuration). --The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. The backup retention period can also be treated as a recovery window from a restore perspective. All backups required to perform a point-in-time restore within the backup retention period are retained in backup storage. For example, if the backup retention period is set to 7 days, the recovery window is considered last 7 days. In this scenario, all the backups required to restore the server in last 7 days are retained. With a backup retention window of seven days: --- General purpose storage v1 servers (supporting up to 4-TB storage) will retain up to 2 full database backups, all the differential backups, and transaction log backups performed since the earliest full database backup.-- General purpose storage v2 servers (supporting up to 16-TB storage) will retain the full database snapshots and transaction log backups in last 8 days.--#### Long-term retention --Long-term retention of backups beyond 35 days is currently not natively supported by the service yet. You have an option to use mysqldump to take backups and store them for long-term retention. Our support team has blogged a [step by step article](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/automate-backups-of-your-azure-database-for-mysql-server-to/ba-p/1791157) to share how you can achieve it. --### Backup redundancy options --Azure Database for MySQL provides the flexibility to choose between locally redundant or geo-redundant backup storage in the General Purpose and Memory Optimized tiers. When the backups are stored in geo-redundant backup storage, they are not only stored within the region in which your server is hosted, but are also replicated to a [paired data center](../../availability-zones/cross-region-replication-azure.md). This geo-redundancy provides better protection and ability to restore your server in a different region in the event of a disaster. The Basic tier only offers locally redundant backup storage. --> [!NOTE] ->For the following regions - Central India, France Central, UAE North, South Africa North; General purpose storage v2 storage is in Public Preview. If you create a source server in General purpose storage v2 (Supporting up to 16-TB storage) in the above mentioned regions then enabling Geo-Redundant Backup is not supported. --#### Moving from locally redundant to geo-redundant backup storage --Configuring locally redundant or geo-redundant storage for backup is only allowed during server create. Once the server is provisioned, you cannot change the backup storage redundancy option. 
In order to move your backup storage from locally redundant storage to geo-redundant storage, creating a new server and migrating the data using [dump and restore](concepts-migrate-dump-restore.md) is the only supported option. --### Backup storage cost --Azure Database for MySQL provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any additional backup storage used is charged in GB per month. For example, if you have provisioned a server with 250 GB of storage, you have 250 GB of additional storage available for server backups at no additional charge. Storage consumed for backups more than 250 GB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/mysql/). --You can use the [Backup Storage used](concepts-monitoring.md) metric in Azure Monitor available via the Azure portal to monitor the backup storage consumed by a server. The Backup Storage used metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained earlier. Heavy transactional activity on the server can cause backup storage usage to increase irrespective of the total database size. For geo-redundant storage, backup storage usage is twice that of the locally redundant storage. --The primary means of controlling the backup storage cost is by setting the appropriate backup retention period and choosing the right backup redundancy options to meet your desired recovery goals. You can select a retention period from a range of 7 to 35 days. General Purpose and Memory Optimized servers can choose to have geo-redundant storage for backups. --## Restore --In Azure Database for MySQL, performing a restore creates a new server from the original server's backups and restores all databases contained in the server. Restore is currently not supported if original server is in stopped state. --There are two types of restore available: --- **Point-in-time restore** is available with either backup redundancy option and creates a new server in the same region as your original server utilizing the combination of full and transaction log backups.-- **Geo-restore** is available only if you configured your server for geo-redundant storage and it allows you to restore your server to a different region utilizing the most recent backup taken.--The estimated time for the recovery of the server depends on several factors: -* The size of the databases -* The number of transaction logs involved -* The amount of activity that needs to be replayed to recover to the restore point -* The network bandwidth if the restore is to a different region -* The number of concurrent restore requests being processed in the target region -* The presence of primary key in the tables in the database. For faster recovery, consider adding primary key for all the tables in your database. 
To check if your tables have primary key, you can use the following query: -```sql -select tab.table_schema as database_name, tab.table_name from information_schema.tables tab left join information_schema.table_constraints tco on tab.table_schema = tco.table_schema and tab.table_name = tco.table_name and tco.constraint_type = 'PRIMARY KEY' where tco.constraint_type is null and tab.table_schema not in('mysql', 'information_schema', 'performance_schema', 'sys') and tab.table_type = 'BASE TABLE' order by tab.table_schema, tab.table_name; -``` -For a large or very active database, the restore might take several hours. If there is a prolonged outage in a region, it's possible that a high number of geo-restore requests will be initiated for disaster recovery. When there are many requests, the recovery time for individual databases can increase. Most database restores finish in less than 12 hours. --> [!IMPORTANT] -> Deleted servers can be restored only within **five days** of deletion after which the backups are deleted. The database backup can be accessed and restored only from the Azure subscription hosting the server. To restore a dropped server, refer [documented steps](how-to-restore-dropped-server.md). To protect server resources, post deployment, from accidental deletion or unexpected changes, administrators can leverage [management locks](../../azure-resource-manager/management/lock-resources.md). --### Point-in-time restore --Independent of your backup redundancy option, you can perform a restore to any point in time within your backup retention period. A new server is created in the same Azure region as the original server. It is created with the original server's configuration for the pricing tier, compute generation, number of vCores, storage size, backup retention period, and backup redundancy option. --> [!NOTE] -> There are two server parameters which are reset to default values (and are not copied over from the primary server) after the restore operation -> -> - time_zone - This value to set to DEFAULT value **SYSTEM** -> - event_scheduler - The event_scheduler is set to **OFF** on the restored server -> -> You will need to set these server parameters by reconfiguring the [server parameter](how-to-server-parameters.md) --Point-in-time restore is useful in multiple scenarios. For example, when a user accidentally deletes data, drops an important table or database, or if an application accidentally overwrites good data with bad data due to an application defect. --You may need to wait for the next transaction log backup to be taken before you can restore to a point in time within the last five minutes. --### Geo-restore --You can restore a server to another Azure region where the service is available if you have configured your server for geo-redundant backups. -- General purpose storage v1 servers (supporting up to 4-TB storage) can be restored to the geo-paired region, or to any Azure region that supports Azure Database for MySQL - Single Server service.-- General purpose storage v2 servers (supporting up to 16-TB storage) can only be restored to Azure regions that support General purpose storage v2 servers infrastructure. -Review [Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md#storage) for the list of supported regions. --Geo-restore is the default recovery option when your server is unavailable because of an incident in the region where the server is hosted. 
If a large-scale incident in a region results in unavailability of your database application, you can restore a server from the geo-redundant backups to a server in any other region. Geo-restore utilizes the most recent backup of the server. There is a delay between when a backup is taken and when it is replicated to different region. This delay can be up to an hour, so, if a disaster occurs, there can be up to one hour data loss. --> [!IMPORTANT] ->If a geo-restore is performed for a newly created server, the initial backup synchronization may take more than 24 hours depending on data size as the initial full snapshot backup copy time is much higher. Subsequent snapshot backups are incremental copy and hence the restores are faster after 24 hours of server creation. If you are evaluating geo-restores to define your RTO, we recommend you to wait and evaluate geo-restore **only after 24 hours** of server creation for better estimates. --During geo-restore, the server configurations that can be changed include compute generation, vCore, backup retention period, and backup redundancy options. Changing pricing tier (Basic, General Purpose, or Memory Optimized) or storage size during geo-restore is not supported. --The estimated time of recovery depends on several factors including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time. The recovery time is usually less than 12 hours. --### Perform post-restore tasks --After a restore from either recovery mechanism, you should perform the following tasks to get your users and applications back up and running: --- If the new server is meant to replace the original server, redirect clients and client applications to the new server-- Ensure appropriate VNet rules are in place for users to connect. These rules are not copied over from the original server.-- Ensure appropriate logins and database level permissions are in place-- Configure alerts, as appropriate--## Next steps --- To learn more about business continuity, see the [business continuity overview](concepts-business-continuity.md).-- To restore to a point-in-time using the Azure portal, see [restore server to a point-in-time using the Azure portal](how-to-restore-server-portal.md).-- To restore to a point-in-time using Azure CLI, see [restore server to a point-in-time using CLI](how-to-restore-server-cli.md). |
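Because restore speed depends on the presence of primary keys, it can be useful to run the primary-key check query shown earlier in this article on a schedule. The following is a minimal sketch, assuming the `mysql-connector-python` package; the host, user, and password values are placeholders.

```python
# Minimal sketch: run the primary-key check query from this article and list
# tables without a primary key (these slow down recovery).
import mysql.connector

QUERY = """
select tab.table_schema as database_name, tab.table_name
from information_schema.tables tab
left join information_schema.table_constraints tco
  on tab.table_schema = tco.table_schema
 and tab.table_name = tco.table_name
 and tco.constraint_type = 'PRIMARY KEY'
where tco.constraint_type is null
  and tab.table_schema not in ('mysql', 'information_schema', 'performance_schema', 'sys')
  and tab.table_type = 'BASE TABLE'
order by tab.table_schema, tab.table_name
"""

conn = mysql.connector.connect(
    host="mydemoserver.mysql.database.azure.com",  # placeholder
    user="myadmin@mydemoserver",                   # placeholder
    password="<password>",                         # placeholder
)
cur = conn.cursor()
cur.execute(QUERY)
for database_name, table_name in cur.fetchall():
    print(f"No primary key: {database_name}.{table_name}")
conn.close()
```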
mysql | Concepts Business Continuity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-business-continuity.md | - Title: Business continuity - Azure Database for MySQL -description: Learn about business continuity (point-in-time restore, data center outage, geo-restore) when using Azure Database for MySQL service. ----- Previously updated : 06/20/2022---# Overview of business continuity with Azure Database for MySQL - Single Server ----This article describes the capabilities that Azure Database for MySQL provides for business continuity and disaster recovery. Learn about options for recovering from disruptive events that could cause data loss or cause your database and application to become unavailable. Learn what to do when a user or application error affects data integrity, an Azure region has an outage, or your application requires maintenance. --## Features that you can use to provide business continuity --As you develop your business continuity plan, you need to understand the maximum acceptable time before the application fully recovers after the disruptive event - this is your Recovery Time Objective (RTO). You also need to understand the maximum amount of recent data updates (time interval) the application can tolerate losing when recovering after the disruptive event - this is your Recovery Point Objective (RPO). --Azure Database for MySQL single server provides business continuity and disaster recovery features that include geo-redundant backups with the ability to initiate geo-restore, and deploying read replicas in a different region. Each has different characteristics for the recovery time and the potential data loss. With the [Geo-restore](concepts-backup.md) feature, a new server is created using the backup data that is replicated from another region. The overall time it takes to restore and recover depends on the size of the database and the amount of logs to recover. The overall time to establish the server varies from a few minutes to a few hours. With [read replicas](concepts-read-replicas.md), transaction logs from the primary are asynchronously streamed to the replica. In the event of a primary database outage due to a zone-level or a region-level fault, failing over to the replica provides a shorter RTO and reduced data loss. --> [!NOTE] -> The lag between the primary and the replica depends on the latency between the sites, the amount of data to be transmitted, and most importantly on the write workload of the primary server. Heavy write workloads can generate significant lag. -> -> Because of the asynchronous nature of replication used for read replicas, they **should not** be considered a High Availability (HA) solution, since higher lag can mean a higher RTO and RPO. Read replicas can act as an HA alternative only for workloads where the lag remains small through both peak and non-peak times. Otherwise, read replicas are intended for true read scale-out for read-heavy workloads and for disaster recovery (DR) scenarios. 
--The following table compares RTO and RPO in a **typical workload** scenario: --| **Capability** | **Basic** | **General Purpose** | **Memory optimized** | -| :: | :-: | :--: | :: | -| Point in Time Restore from backup | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min| Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min | -| Geo-restore from geo-replicated backups | Not supported | RTO - Varies <br/>RPO < 1 h | RTO - Varies <br/>RPO < 1 h | -| Read replicas | RTO - Minutes* <br/>RPO < 5 min* | RTO - Minutes* <br/>RPO < 5 min*| RTO - Minutes* <br/>RPO < 5 min*| -- \* RTO and RPO **can be much higher** in some cases depending on various factors including latency between sites, the amount of data to be transmitted, and importantly primary database write workload. --## Recover a server after a user or application error --You can use the service's backups to recover a server from various disruptive events. A user may accidentally delete some data, inadvertently drop an important table, or even drop an entire database. An application might accidentally overwrite good data with bad data due to an application defect, and so on. --You can perform a point-in-time-restore to create a copy of your server to a known good point in time. This point in time must be within the backup retention period you have configured for your server. After the data is restored to the new server, you can either replace the original server with the newly restored server or copy the needed data from the restored server into the original server. --> [!IMPORTANT] -> Deleted servers can be restored only within **five days** of deletion after which the backups are deleted. The database backup can be accessed and restored only from the Azure subscription hosting the server. To restore a dropped server, refer [documented steps](how-to-restore-dropped-server.md). To protect server resources, post deployment, from accidental deletion or unexpected changes, administrators can leverage [management locks](../../azure-resource-manager/management/lock-resources.md). --## Recover from an Azure regional data center outage --Although rare, an Azure data center can have an outage. When an outage occurs, it causes a business disruption that might only last a few minutes, but could last for hours. --One option is to wait for your server to come back online when the data center outage is over. This works for applications that can afford to have the server offline for some period of time, for example a development environment. When data center has an outage, you do not know how long the outage might last, so this option only works if you don't need your server for a while. --## Geo-restore --The geo-restore feature restores the server using geo-redundant backups. The backups are hosted in your server's [paired region](../../availability-zones/cross-region-replication-azure.md). These backups are accessible even when the region your server is hosted in is offline. You can restore from these backups to any other region and bring your server back online. Learn more about geo-restore from the [backup and restore concepts article](concepts-backup.md). --> [!IMPORTANT] -> Geo-restore is only possible if you provisioned the server with geo-redundant backup storage. 
If you wish to switch from locally redundant to geo-redundant backups for an existing server, you must take a dump using mysqldump of your existing server and restore it to a newly created server configured with geo-redundant backups. --## Cross-region read replicas --You can use cross region read replicas to enhance your business continuity and disaster recovery planning. Read replicas are updated asynchronously using MySQL's binary log replication technology. Learn more about read replicas, available regions, and how to fail over from the [read replicas concepts article](concepts-read-replicas.md). --## FAQ --### Where does Azure Database for MySQL store customer data? -By default, Azure Database for MySQL doesn't move or store customer data out of the region it is deployed in. However, customers can optionally chose to enable [geo-redundant backups](concepts-backup.md#backup-redundancy-options) or create [cross-region read replica](concepts-read-replicas.md#cross-region-replication) for storing data in another region. --## Next steps --- Learn more about the [automated backups in Azure Database for MySQL](concepts-backup.md).-- Learn how to restore using [the Azure portal](how-to-restore-server-portal.md) or [the Azure CLI](how-to-restore-server-cli.md).-- Learn about [read replicas in Azure Database for MySQL](concepts-read-replicas.md). |
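Since the note above ties replication lag directly to RTO and RPO, it can help to monitor lag on the replica itself. The following is a minimal sketch, not from the original article, assuming the `mysql-connector-python` package, that the replica permits `SHOW SLAVE STATUS`, and that the connection values shown are placeholders.

```python
# Minimal sketch: check replication lag on a read replica to gauge how much
# data loss (RPO) a failover would currently imply.
import mysql.connector

conn = mysql.connector.connect(
    host="mydemoserver-replica.mysql.database.azure.com",  # placeholder replica
    user="myadmin@mydemoserver-replica",                    # placeholder
    password="<password>",                                  # placeholder
)
cur = conn.cursor(dictionary=True)
cur.execute("SHOW SLAVE STATUS")
status = cur.fetchone()
if status is None:
    print("No replication status returned (not a replica).")
else:
    # Seconds_Behind_Master is the standard MySQL lag indicator.
    print(f"Replication lag: {status['Seconds_Behind_Master']} seconds")
conn.close()
```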
mysql | Concepts Certificate Rotation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-certificate-rotation.md | - Title: Certificate rotation for Azure Database for MySQL -description: Learn about the upcoming changes of root certificate changes that will affect Azure Database for MySQL ----- Previously updated : 06/20/2022---# Understanding the changes in the Root CA change for Azure Database for MySQL single server ----Azure Database for MySQL single server as part of standard maintenance and security best practices will complete the root certificate change starting October 2022. This article gives you more details about the changes, the resources affected, and the steps needed to ensure that your application maintains connectivity to your database server. --> [!NOTE] -> This article applies to [Azure Database for MySQL - Single Server](single-server-overview.md) ONLY. For [Azure Database for MySQL - Flexible Server](../flexible-server/overview.md), the certificate needed to communicate over SSL is [DigiCert Global Root CA](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem) -> -> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. -> --#### Why is a root certificate update required? --Azure Database for MySQL users can only use the predefined certificate to connect to their MySQL server, which is located [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem). However, [Certificate Authority (CA) Browser forum](https://cabforum.org/) recently published reports of multiple certificates issued by CA vendors to be non-compliant. --Per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs, and signed by CA certificates from those compliant CAs. Since Azure Database for MySQL used one of these non-compliant certificates, we needed to rotate the certificate to the compliant version to minimize the potential threat to your MySQL servers. --#### Do I need to make any changes on my client to maintain connectivity? --If you followed steps mentioned under [Create a combined CA certificate](#create-a-combined-ca-certificate) below, you can continue to connect as long as **BaltimoreCyberTrustRoot certificate is not removed** from the combined CA certificate. **To maintain connectivity, we recommend that you retain the BaltimoreCyberTrustRoot in your combined CA certificate until further notice.** --###### Create a combined CA certificate --To avoid interruption of your application's availability as a result of certificates being unexpectedly revoked, or to update a certificate that has been revoked, use the following steps. The idea is to create a new *.pem* file, which combines the current cert and the new one and during the SSL cert validation, one of the allowed values will be used. Refer to the following steps: --1. Download BaltimoreCyberTrustRoot & DigiCertGlobalRootG2 Root CA from the following links: -- * [https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) - * [https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) --2. 
Generate a combined CA certificate store with both **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** certificates are included. -- * For Java (MySQL Connector/J) users, execute: -- ```console - keytool -importcert -alias MySQLServerCACert -file D:\BaltimoreCyberTrustRoot.crt.pem -keystore truststore -storepass password -noprompt - ``` -- ```console - keytool -importcert -alias MySQLServerCACert2 -file D:\DigiCertGlobalRootG2.crt.pem -keystore truststore -storepass password -noprompt - ``` -- Then replace the original keystore file with the new generated one: -- * System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file"); - * System.setProperty("javax.net.ssl.trustStorePassword","password"); -- * For .NET (MySQL Connector/NET, MySQLConnector) users, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in Windows Certificate Store, Trusted Root Certification Authorities. If any certificates don't exist, import the missing certificate. -- :::image type="content" source="../flexible-server/media/overview-single/netconnecter-cert.png" alt-text="Azure Database for MySQL .NET cert diagram"::: -- * For .NET users on Linux using SSL_CERT_DIR, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in the directory indicated by SSL_CERT_DIR. If any certificates don't exist, create the missing certificate file. -- * For other (MySQL Client/MySQL Workbench/C/C++/Go/Python/Ruby/PHP/NodeJS/Perl/Swift) users, you can merge two CA certificate files into the following format: -- ``` - --BEGIN CERTIFICATE-- - (Root CA1: BaltimoreCyberTrustRoot.crt.pem) - --END CERTIFICATE-- - --BEGIN CERTIFICATE-- - (Root CA2: DigiCertGlobalRootG2.crt.pem) - --END CERTIFICATE-- - ``` --3. Replace the original root CA pem file with the combined root CA file and restart your application/client. -- In the future, after the new certificate is deployed on the server side, you can change your CA pem file to DigiCertGlobalRootG2.crt.pem. --> [!NOTE] -> Please don't drop or alter **Baltimore certificate** until the cert change is made. We'll send a communication after the change is done and then it will be safe to drop the **Baltimore certificate**. --#### What if we removed the BaltimoreCyberTrustRoot certificate? --You'll start to encounter connectivity errors while connecting to your Azure Database for MySQL server. You'll need to [configure SSL](how-to-configure-ssl.md) with the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate again to maintain connectivity. --## Frequently asked questions --#### If I'm not using SSL/TLS, do I still need to update the root CA? --No actions are required if you aren't using SSL/TLS. --#### When will my single server instance undergo root certificate change? --The migration from **BaltimoreCyberTrustRoot** to **DigiCertGlobalRootG2** will be carried out across all regions of Azure starting **October 2022** in phases. -To make sure that you do not lose connectivity to your server, follow the steps mentioned under [Create a combined CA certificate](#create-a-combined-ca-certificate). -Combined CA certificate will allow connectivity over SSL to your single server instance with either of these two certificates. ---#### When can I remove BaltimoreCyberTrustRoot certificate completely? --Once the migration is completed successfully across all Azure regions we'll send a communication post that you're safe to change single CA **DigiCertGlobalRootG2** certificate. 
---#### I don't specify any CA cert while connecting to my single server instance over SSL, do I still need to perform [the steps](#create-a-combined-ca-certificate) mentioned above? --If you have both CA root certificates in your [trusted root store](/windows-hardware/drivers/install/trusted-root-certification-authorities-certificate-store), then no further action is required. This also applies to client drivers that use a local store to access the root CA certificate. ---#### If I'm using SSL/TLS, do I need to restart my database server to update the root CA? --No, you don't need to restart the database server to start using the new certificate. This root certificate is a client-side change and the incoming client connections need to use the new certificate to ensure that they can connect to the database server. --#### How do I know if I'm using SSL/TLS with root certificate verification? --You can identify whether your connections verify the root certificate by reviewing your connection string. --* If your connection string includes `sslmode=verify-ca` or `sslmode=verify-identity`, you need to update the certificate. -* If your connection string includes `sslmode=disable`, `sslmode=allow`, `sslmode=prefer`, or `sslmode=require`, you don't need to update certificates. -* If your connection string doesn't specify sslmode, you don't need to update certificates. --If you're using a client that abstracts the connection string away, review the client's documentation to understand whether it verifies certificates. --#### What is the impact of using App Service with Azure Database for MySQL? --For Azure app services connecting to Azure Database for MySQL, there are two possible scenarios depending on how you're using SSL with your application. --* This new certificate has been added to App Service at the platform level. If you're using the SSL certificates included on the App Service platform in your application, then no action is needed. This is the most common scenario. -* If you're explicitly including the path to the SSL cert file in your code, then you need to download the new cert, produce a combined certificate as mentioned above, and use that certificate file. A good example of this scenario is when you use custom containers in App Service as shared in the [App Service documentation](../../app-service/tutorial-multi-container-app.md#configure-database-variables-in-wordpress). This is an uncommon scenario, but some users do this. --#### What is the impact of using Azure Kubernetes Services (AKS) with Azure Database for MySQL? --If you're connecting to Azure Database for MySQL using Azure Kubernetes Service (AKS), it's similar to access from a dedicated customer host environment. Refer to the steps [here](../../aks/ingress-own-tls.md). --#### What is the impact of using Azure Data Factory to connect to Azure Database for MySQL? --For a connector using Azure Integration Runtime, the connector uses certificates in the Windows Certificate Store in the Azure-hosted environment. These certificates are already compatible with the newly applied certificates, and therefore no action is needed. --For a connector using Self-hosted Integration Runtime where you explicitly include the path to the SSL cert file in your connection string, you'll need to download the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and update the connection string to use it. --#### Do I need to plan a database server maintenance downtime for this change? --No. 
Since the change is only on the client side to connect to the database server, there's no maintenance downtime needed for the database server for this change. --#### How often does Microsoft update their certificates or what is the expiry policy? --These certificates used by Azure Database for MySQL are provided by trusted Certificate Authorities (CA). So the support of these certificates is tied to the support of these certificates by CA. The [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate is scheduled to expire in 2025 so Microsoft will need to perform a certificate change before the expiry. Also in case if there are unforeseen bugs in these predefined certificates, Microsoft will need to make the certificate rotation at the earliest similar to the change performed on February 15, 2021 to ensure the service is secure and compliant at all times. --#### If I'm using read replicas, do I need to perform this update only on source server or the read replicas? --Since this update is a client-side change, if the client used to read data from the replica server, you'll need to apply the changes for those clients as well. --#### If I'm using Data-in replication, do I need to perform any action? --If you're using [Data-in replication](concepts-data-in-replication.md) to connect to Azure Database for MySQL, there are two things to consider: --* If the data-replication is from a virtual machine (on-prem or Azure virtual machine) to Azure Database for MySQL, you need to check if SSL is being used to create the replica. Run **SHOW SLAVE STATUS** and check the following setting. -- ```azurecli-interactive - Master_SSL_Allowed : Yes - Master_SSL_CA_File : ~\azure_mysqlservice.pem - Master_SSL_CA_Path : - Master_SSL_Cert : ~\azure_mysqlclient_cert.pem - Master_SSL_Cipher : - Master_SSL_Key : ~\azure_mysqlclient_key.pem - ``` -- If you see that the certificate is provided for the CA_file, SSL_Cert, and SSL_Key, you'll need to update the file by adding the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and create a combined cert file. --* If the data-replication is between two Azure Database for MySQL servers, then you'll need to reset the replica by executing **CALL mysql.az_replication_change_master** and provide the new dual root certificate as the last parameter [master_ssl_ca](how-to-data-in-replication.md#link-source-and-replica-servers-to-start-data-in-replication). --#### Is there a server-side query to determine whether SSL is being used? --To verify if you're using SSL connection to connect to the server refer [SSL verification](how-to-configure-ssl.md#step-4-verify-the-ssl-connection). --#### Is there an action needed if I already have the DigiCertGlobalRootG2 in my certificate file? --No. There's no action needed if your certificate file already has the **DigiCertGlobalRootG2**. --#### Why do I need to update my root certificate if I am using PHP driver with [enableRedirect](./how-to-redirection.md) ? -To address compliance requirements, the CA certificates of the host server were changed from BaltimoreCyberTrustRoot to DigiCertGlobalRootG2. With this update, database connections using the PHP Client driver with enableRedirect can no longer connect to the server, as the client devices are unaware of the certificate change and the new root CA details. Client devices that use PHP redirection drivers connect directly to the host server, bypassing the gateway. 
Refer to this [link](single-server-overview.md#high-availability) for more on the architecture of Azure Database for MySQL single server. --#### What if I have further questions? --For questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforMySQL@service.microsoft.com). If you have a support plan and need technical help, [contact us](mailto:AzureDatabaseforMySQL@service.microsoft.com). |
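To sanity-check a client after building the combined CA file described in the "Create a combined CA certificate" section, a short verification sketch can help. This is not from the original article; it assumes the `mysql-connector-python` package, placeholder server credentials, and that `combined-ca.pem` is the file containing both root certificates.

```python
# Minimal sketch: connect over SSL using the combined CA bundle and confirm
# the connection is encrypted and the server certificate verifies.
import mysql.connector

conn = mysql.connector.connect(
    host="mydemoserver.mysql.database.azure.com",  # placeholder
    user="myadmin@mydemoserver",                   # placeholder
    password="<password>",                         # placeholder
    ssl_ca="combined-ca.pem",                      # BaltimoreCyberTrustRoot + DigiCertGlobalRootG2
    ssl_verify_cert=True,                          # fail if the server cert doesn't chain to either root
)
cur = conn.cursor()
cur.execute("SHOW STATUS LIKE 'Ssl_cipher'")
print(cur.fetchone())  # a non-empty cipher confirms the connection is encrypted
conn.close()
```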
mysql | Concepts Compatibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-compatibility.md | - Title: Driver and tools compatibility - Azure Database for MySQL -description: This article describes the MySQL drivers and management tools that are compatible with Azure Database for MySQL. ----- Previously updated : 06/20/2022---# MySQL drivers and management tools compatible with Azure Database for MySQL ----This article describes the drivers and management tools that are compatible with Azure Database for MySQL single server. --> [!NOTE] -> This article is only applicable to Azure Database for MySQL single server to ensure drivers are compatible with [connectivity architecture](concepts-connectivity-architecture.md) of Single Server service. [Azure Database for MySQL Flexible Server](../flexible-server/overview.md) is compatible with all the drivers and tools supported and compatible with MySQL community edition. --## MySQL Drivers -Azure Database for MySQL uses the world's most popular community edition of MySQL database. As such, it's compatible with a wide variety of programming languages and drivers. The goal is to support the three most recent versions MySQL drivers, and efforts with authors from the open-source community to constantly improve the functionality and usability of MySQL drivers continue. A list of drivers that have been tested and found to be compatible with Azure Database for MySQL 5.6 and 5.7 is provided in the following table: --| **Programming Language** | **Driver** | **Links** | **Compatible Versions** | **Incompatible Versions** | **Notes** | -| :-- | : | :-- | :- | : | :-- | -| PHP | mysqli, pdo_mysql, mysqlnd | https://secure.php.net/downloads.php | 5.5, 5.6, 7.x | 5.3 | For PHP 7.0 connection with SSL MySQLi, add MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT in the connection string. <br> ```mysqli_real_connect($conn, $host, $username, $password, $db_name, 3306, NULL, MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT);```<br> PDO set: ```PDO::MYSQL_ATTR_SSL_VERIFY_SERVER_CERT``` option to false.| -| .NET | Async MySQL Connector for .NET | https://github.com/mysql-net/MySqlConnector <br> [Installation package from NuGet](https://www.nuget.org/packages/MySqlConnector/) | 0.27 and after | 0.26.5 and before | | -| .NET | MySQL Connector/NET | https://github.com/mysql/mysql-connector-net | 6.6.3, 7.0, 8.0 | | An encoding bug may cause connections to fail on some non-UTF8 Windows systems. | -| Node.js | mysqljs | https://github.com/mysqljs/mysql/ <br> Installation package from NPM:<br> Run `npm install mysql` from NPM | 2.15 | 2.14.1 and before | | -| Node.js | node-mysql2 | https://github.com/sidorares/node-mysql2 | 1.3.4+ | | | -| Go | Go MySQL Driver | https://github.com/go-sql-driver/mysql/releases | 1.3, 1.4 | 1.2 and before | Use `allowNativePasswords=true` in the connection string for version 1.3. Version 1.4 contains a fix and `allowNativePasswords=true` is no longer required. 
| -| Python | MySQL Connector/Python | https://pypi.python.org/pypi/mysql-connector-python | 1.2.3, 2.0, 2.1, 2.2, use 8.0.16+ with MySQL 8.0 | 1.2.2 and before | | -| Python | PyMySQL | https://pypi.org/project/PyMySQL/ | 0.7.11, 0.8.0, 0.8.1, 0.9.3+ | 0.9.0 - 0.9.2 (regression in web2py) | | -| Java | MariaDB Connector/J | https://downloads.mariadb.org/connector-java/ | 2.1, 2.0, 1.6 | 1.5.5 and before | | -| Java | MySQL Connector/J | https://github.com/mysql/mysql-connector-j | 5.1.21+, use 8.0.17+ with MySQL 8.0 | 5.1.20 and below | | -| C | MySQL Connector/C (libmysqlclient) | https://dev.mysql.com/doc/c-api/5.7/en/c-api-implementations.html | 6.0.2+ | | | -| C | MySQL Connector/ODBC (myodbc) | https://github.com/mysql/mysql-connector-odbc | 3.51.29+ | | | -| C++ | MySQL Connector/C++ | https://github.com/mysql/mysql-connector-cpp | 1.1.9+ | 1.1.3 and below | | -| C++ | MySQL++| https://github.com/tangentsoft/mysqlpp | 3.2.3+ | | | -| Ruby | mysql2 | https://github.com/brianmario/mysql2 | 0.4.10+ | | | -| R | RMySQL | https://github.com/rstats-db/RMySQL | 0.10.16+ | | | -| Swift | mysql-swift | https://github.com/novi/mysql-swift | 0.7.2+ | | | -| Swift | vapor/mysql | https://github.com/vapor/mysql-kit | 2.0.1+ | | | --## Management Tools -The compatibility advantage extends to database management tools as well. Your existing tools should continue to work with Azure Database for MySQL, as long as the database manipulation operates within the confines of user permissions. Three common database management tools that have been tested and found to be compatible with Azure Database for MySQL 5.6 and 5.7 are listed in the following table: --| | **MySQL Workbench 6.x and up** | **Navicat 12** | **PHPMyAdmin 4.x and up** | **dbForge Studio for MySQL 9.0** | -| :- | :-- | :- | :-| :- | -| **Create, Update, Read, Write, Delete** | X | X | X | X | -| **SSL Connection** | X | X | X | X | -| **SQL Query Auto Completion** | X | X | | X | -| **Import and Export Data** | X | X | X | X | -| **Export to Multiple Formats** | X | X | X | X | -| **Backup and Restore** | | X | | X | -| **Display Server Parameters** | X | X | X | X | -| **Display Client Connections** | X | X | X | X | --## Next steps --- [Troubleshoot connection issues to Azure Database for MySQL](how-to-troubleshoot-common-connection-issues.md) |
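As a quick illustration of the compatible Python drivers listed in the table above, here is a minimal sketch, not from the original article, using PyMySQL 0.9.3 or later. The server, credentials, database, and CA file path are placeholders; the `ssl` option can be omitted if the client trusts the root CA from its system store.

```python
# Minimal sketch: basic connection with a compatible driver version
# (pip install "pymysql>=0.9.3").
import pymysql

conn = pymysql.connect(
    host="mydemoserver.mysql.database.azure.com",  # placeholder
    user="myadmin@mydemoserver",                   # user@servername for single server
    password="<password>",                         # placeholder
    database="mydb",                               # placeholder
    ssl={"ca": "combined-ca.pem"},                 # placeholder CA bundle path
)
print(conn.get_server_info())  # reported server version string
conn.close()
```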
mysql | Concepts Connect To A Gateway Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connect-to-a-gateway-node.md | - Title: Azure Database for MySQL managing updates and upgrades -description: Learn which versions of the MySQL server are supported in the Azure Database for MySQL Service. ----- Previously updated : 06/20/2022---# Connect to a gateway node for a specific MySQL version ----In the Single Server deployment option, a gateway is used to redirect the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the `SELECT VERSION();` command at the MySQL prompt. Review [Connectivity architecture](./concepts-connectivity-architecture.md#connectivity-architecture) to learn more about gateways in the Azure Database for MySQL Service architecture. --Because Azure Database for MySQL supports major versions v5.7 and v8.0, the default port 3306 used to connect to Azure Database for MySQL runs MySQL client version 5.6 (the least common denominator) to support connections to servers of both supported major versions. However, if your application needs to connect to a specific major version, say v5.7 or v8.0, you can do so by changing the port in your server connection string. --In the Azure Database for MySQL Service, gateway nodes listen on port 3308 for v5.7 clients and port 3309 for v8.0 clients. In other words, to connect through the v5.7 gateway, use your fully qualified server name and port 3308 to connect to your server from your client application. Similarly, to connect through the v8.0 gateway, use your fully qualified server name and port 3309. See the example below for further clarity. ---> [!NOTE] -> Connecting to Azure Database for MySQL via ports 3308 and 3309 is only supported for public connectivity; Private Link and VNet service endpoints can only be used with port 3306. --Read the version support policy for retired versions in the [version support policy documentation](concepts-version-policy.md#retired-mysql-engine-versions-not-supported-in-azure-database-for-mysql). --## Managing updates and upgrades --The service automatically manages patching for bug fix version updates. For example, 5.7.20 to 5.7.21. --Major version upgrades are currently supported by the service only for upgrades from MySQL v5.6 to v5.7. For more details, see [how to perform major version upgrades](how-to-major-version-upgrade.md). If you'd like to upgrade from 5.7 to 8.0, we recommend you perform a [dump and restore](./concepts-migrate-dump-restore.md) to a server that was created with the new engine version. --## Next steps --- To see supported versions, visit [Azure Database for MySQL version support policy](../concepts-version-policy.md)-- For details around Azure Database for MySQL versioning policy, see [this document](concepts-version-policy.md).-- For information about specific resource quotas and limitations based on your **service tier**, see [Service tiers](./concepts-pricing-tiers.md) |
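The version-specific ports described above are easy to verify from a client. The following is a minimal sketch, not from the original article, assuming the `mysql-connector-python` package and placeholder server credentials; remember that ports 3308 and 3309 only work for public connectivity.

```python
# Minimal sketch: connect through the v8.0 gateway listener (port 3309) and
# confirm the actual engine version with SELECT VERSION().
import mysql.connector

conn = mysql.connector.connect(
    host="mydemoserver.mysql.database.azure.com",  # placeholder
    port=3309,                                     # v8.0 gateway; 3308 for v5.7, 3306 for the default
    user="myadmin@mydemoserver",                   # placeholder
    password="<password>",                         # placeholder
)
cur = conn.cursor()
cur.execute("SELECT VERSION();")
print(cur.fetchone()[0])  # actual server engine version, not the gateway's version
conn.close()
```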
mysql | Concepts Connectivity Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity-architecture.md | - Title: Connectivity architecture - Azure Database for MySQL -description: Describes the connectivity architecture for your Azure Database for MySQL server. ----- Previously updated : 06/20/2022---# Connectivity architecture in Azure Database for MySQL ----This article explains the Azure Database for MySQL connectivity architecture and how traffic is directed to your Azure Database for MySQL instance from clients both within and outside Azure. --## Connectivity architecture -Connection to your Azure Database for MySQL is established through a gateway that is responsible for routing incoming connections to the physical location of your server in our clusters. The following diagram illustrates the traffic flow. ---As the client connects to the database, the connection string to the server resolves to the gateway IP address. The gateway listens on the IP address on port 3306. Inside the database cluster, traffic is forwarded to the appropriate Azure Database for MySQL server. Therefore, in order to connect to your server, such as from corporate networks, it's necessary to open up the **client-side firewall to allow outbound traffic to be able to reach our gateways**. Below you can find a complete list of the IP addresses used by our gateways per region. --## Azure Database for MySQL gateway IP addresses --The gateway service is hosted on a group of stateless compute nodes sitting behind an IP address, which your client reaches first when trying to connect to an Azure Database for MySQL server. --As part of ongoing service maintenance, we'll periodically refresh the compute hardware hosting the gateways to ensure we provide the most secure and performant experience. When the gateway hardware is refreshed, a new ring of compute nodes is built out first. This new ring serves the traffic for all the newly created Azure Database for MySQL servers, and it has a different IP address from older gateway rings in the same region to differentiate the traffic. Once the new ring is fully functional, the older gateway hardware serving existing servers is planned for decommissioning. Before decommissioning gateway hardware, customers running their servers and connecting to older gateway rings are notified via email and in the Azure portal. The decommissioning of gateways can impact the connectivity to your servers if: --* You hard code the gateway IP addresses in the connection string of your application. This is **not recommended**. You should use the fully qualified domain name (FQDN) of your server, in the format `<servername>.mysql.database.azure.com`, in the connection string for your application. -* You don't update the newer gateway IP addresses in the client-side firewall to allow outbound traffic to be able to reach our new gateway rings. --> [!IMPORTANT] -> If your connectivity stack needs to connect directly to the gateway instead of the **recommended DNS name approach**, or to allow-list the gateway in firewall rules for connections to/from your infrastructure, we **strongly encourage** you to use the Gateway IP address **subnets** rather than hardcoding static IPs, so that you aren't impacted when this activity causes an IP to change within the subnet range. ---The following table lists the gateway IP addresses of the Azure Database for MySQL gateway for all data regions. 
The most up-to-date information of the gateway IP addresses for each region is maintained in the table below. In the table below, the columns represent following: --* **Gateway IP address subnets:** This column lists the IP address subnets of the gateway rings located in the particular region. As we retire older gateway hardware, we recommend that you open the client-side firewall to allow outbound traffic for the IP address subnets in the region you're operating. -* **Gateway IP addresses**: Periodically, individual **Gateway IP addresses** will be retired and traffic will be migrated to corresponding **Gateway IP address subnets**. --We strongly encourage customers to move away from relying on any individual Gateway IP address (since these will be retired in the future). Instead allow network traffic to reach both the individual Gateway IP addresses and Gateway IP address subnets in a region. --| **Region name** |**Current Gateway IP address**| **Gateway IP address subnets** | -|:-|:--|:--| -| Australia Central | 20.36.105.32 | 20.36.105.32/29, 20.53.48.96/27 | -| Australia Central2 | 20.36.113.32 | 20.36.113.32/29, 20.53.56.32/27 | -| Australia East | 13.70.112.32 | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29, 40.79.160.32/29, 20.53.46.128/27 | -| Australia South East |13.77.49.33 |13.77.49.32/29, 104.46.179.160/27| -| Brazil South | 191.233.201.8, 191.233.200.16 | 191.234.153.32/27, 191.234.152.32/27, 191.234.157.136/29, 191.233.200.32/29, 191.234.144.32/29, 191.234.142.160/27| -|Brazil Southeast|191.233.48.2|191.233.48.32/29, 191.233.15.160/27| -| Canada Central | 13.71.168.32| 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29, 20.48.196.32/27| -| Canada East |40.69.105.32 | 40.69.105.32/29, 52.139.106.192/27 | -| Central US | 52.182.136.37, 52.182.136.38 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29, 20.40.228.128/27| -| China East | 52.130.112.139 | 52.130.112.136/29, 52.130.13.96/27| -| China East 2 | 40.73.82.1, 52.130.120.89 | 52.130.120.88/29, 52.130.7.0/27| -| China North | 52.130.128.89| 52.130.128.88/29, 40.72.77.128/27 | -| China North 2 |40.73.50.0 | 52.130.40.64/29, 52.130.21.160/27| -| East Asia |13.75.33.20, 13.75.33.21 | 20.205.77.176/29, 20.205.83.224/29, 20.205.77.200/29, 13.75.32.192/29, 13.75.33.192/29, 20.195.72.32/27| -| East US | 40.71.8.203, 40.71.83.113|20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29, 20.62.132.160/27| -| East US 2 |52.167.105.38, 40.70.144.38| 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29, 20.62.58.128/27| -| France Central |40.79.129.1 | 40.79.128.32/29, 40.79.136.32/29, 40.79.144.32/29, 20.43.47.192/27 | -| France South |40.79.176.40 | 40.79.176.40/29, 40.79.177.32/29, 52.136.185.0/27| -| Germany North| 51.116.56.0| 51.116.57.32/29, 51.116.54.96/27| -| Germany West Central | 51.116.152.0 | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29, 51.116.149.32/27| -| Central India | 20.192.96.33 | 40.80.48.32/29, 104.211.86.32/29, 20.192.96.32/29, 20.192.43.160/27| -| South India | 40.78.192.32| 40.78.192.32/29, 40.78.193.32/29, 52.172.113.96/27| -| West India | 104.211.144.32| 104.211.144.32/29, 104.211.145.32/29, 52.136.53.160/27| -| Japan East | 40.79.184.8, 40.79.192.23| 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29, 20.191.165.160/27 | -| Japan West |40.74.96.6| 20.18.179.192/29, 40.74.96.32/29, 20.189.225.160/27 | -| Jio India Central| 20.192.233.32|20.192.233.32/29, 20.192.48.32/27| -| Jio India West|20.193.200.32|20.193.200.32/29, 20.192.167.224/27| -| Korea Central | 52.231.17.13 | 20.194.64.32/29, 
20.44.24.32/29, 52.231.16.32/29, 20.194.73.64/27| -| Korea South |52.231.145.3| 52.231.151.96/27, 52.231.151.88/29, 52.231.145.0/29, 52.147.112.160/27 | -| North Central US | 52.162.104.35, 52.162.104.36 | 52.162.105.200/29, 20.125.171.192/29, 52.162.105.192/29, 20.49.119.32/27| -| North Europe |52.138.224.6, 52.138.224.7 |13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29, 52.146.133.128/27 | -|Norway East|51.120.96.0|51.120.208.32/29, 51.120.104.32/29, 51.120.96.32/29, 51.120.232.192/27| -|Norway West|51.120.216.0|51.120.217.32/29, 51.13.136.224/27| -| South Africa North | 102.133.152.0 | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29, 102.133.221.224/27 | -| South Africa West |102.133.24.0 | 102.133.25.32/29, 102.37.80.96/27| -| South Central US | 20.45.120.0 |20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29, 20.65.132.160/27| -| South East Asia | 23.98.80.12, 40.78.233.2|13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29, 20.195.65.32/27 | -| Sweden Central|51.12.96.32|51.12.96.32/29, 51.12.232.32/29, 51.12.224.32/29, 51.12.46.32/27| -| Sweden South|51.12.200.32|51.12.201.32/29, 51.12.200.32/29, 51.12.198.32/27| -| Switzerland North |51.107.56.0 |51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27| -| Switzerland West | 51.107.152.0| 51.107.153.32/29, 51.107.250.64/27| -| UAE Central | 20.37.72.64| 20.37.72.96/29, 20.37.73.96/29, 20.37.71.64/27 | -| UAE North |65.52.248.0 |20.38.152.24/29, 40.120.72.32/29, 65.52.248.32/29, 20.38.143.64/27 | -| UK South | 51.105.64.0|51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29, 51.143.209.224/27| -| UK West |51.140.208.98 |51.140.208.96/29, 51.140.209.32/29, 20.58.66.128/27 | -| West Central US |13.71.193.34 | 13.71.193.32/29, 20.69.0.32/27 | -| West Europe | 13.69.105.208,104.40.169.187|104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29, 20.61.99.192/27| -| West US |13.86.216.212, 13.86.217.212 |20.168.163.192/29, 13.86.217.224/29, 20.66.3.64/27| -| West US 2 | 13.66.136.192 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29, 20.51.9.128/27| -| West US 3 |20.150.184.2 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29, 20.150.241.128/27 | ----## Connection redirection --Azure Database for MySQL supports an extra connection policy, **redirection** that helps to reduce network latency between client applications and MySQL servers. With redirection, and after the initial TCP session is established to the Azure Database for MySQL server, the server returns the backend address of the node hosting the MySQL server to the client. Thereafter, all subsequent packets flow directly to the server, bypassing the gateway. As packets flow directly to the server, latency and throughput have improved performance. --This feature is supported in Azure Database for MySQL servers with engine versions 5.7, and 8.0. --Support for redirection is available in the PHP [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension, developed by Microsoft, and is available on [PECL](https://pecl.php.net/package/mysqlnd_azure). See the [configuring redirection](./how-to-redirection.md) article for more information on how to use redirection in your applications. ---> [!IMPORTANT] -> Support for redirection in the PHP [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension is currently in preview. --## Frequently asked questions --### What you need to know about this planned maintenance? -This is a DNS change only, which makes it transparent to clients. 
While the IP address for the FQDN is changed in the DNS server, the local DNS cache is refreshed within 5 minutes; this is done automatically by the operating system. After the local DNS refresh, all new connections connect to the new IP address, while all existing connections remain connected to the old IP address with no interruption until the old IP addresses are fully decommissioned. The old IP address takes roughly three to four weeks to be decommissioned; therefore, this change should have no effect on your client applications. --### What are we decommissioning? -Only gateway nodes are decommissioned. When users connect to their servers, the first stop of the connection is the gateway node, before the connection is forwarded to the server. We're decommissioning old gateway rings (not the tenant rings where the server is running); refer to the [connectivity architecture](#connectivity-architecture) section for more clarification. --### How can you validate if your connections are going to old gateway nodes or new gateway nodes? -Ping your server's FQDN, for example ``ping xxx.mysql.database.azure.com``. If the returned IP address is one of the IPs listed under Gateway IP addresses (decommissioning) in the document above, it means your connection is going through the old gateway. Conversely, if the returned IP address is one of the IPs listed under Gateway IP addresses, it means your connection is going through the new gateway. --You may also test by [PSPing](/sysinternals/downloads/psping) or TCPPing the database server from your client application on port 3306 and ensure that the returned IP address isn't one of the decommissioning IP addresses. A lightweight DNS check is sketched in the example below. --### How do I know when the maintenance is over and will I get another notification when old IP addresses are decommissioned? -You receive an email to inform you when we start the maintenance work. The maintenance can take up to one month depending on the number of servers we need to migrate in all regions. Prepare your client to connect to the database server using the FQDN or using the new IP address from the table above. --### What do I do if my client applications are still connecting to the old gateway server? -This indicates that your applications connect to the server using a static IP address instead of the FQDN. Review connection strings, connection pooling settings, AKS settings, or even the source code. --### Is there any impact for my application connections? -This maintenance is just a DNS change, so it's transparent to the client. Once the DNS cache is refreshed in the client (done automatically by the operating system), all new connections connect to the new IP address, and all existing connections still work fine until the old IP address is fully decommissioned, which happens several weeks later. Retry logic isn't required for this case, but it's good practice to have retry logic configured in the application. Use the FQDN to connect to the database server in your application connection string. This maintenance operation won't drop the existing connections. It only makes new connection requests go to the new gateway ring. --### Can I request a specific time window for the maintenance? -As the migration should be transparent and have no impact on your connectivity, we expect there to be no issue for most users. Review your application proactively and ensure that you either use the FQDN to connect to the database server or allowlist the new 'Gateway IP addresses' in your application connection string. 
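To complement the ping/PSPing guidance above, here's a minimal sketch (the FQDN is a placeholder) that resolves your server's FQDN in Java so you can compare the returned address against the gateway IP address table earlier in this article:

```java
import java.net.InetAddress;

public class GatewayIpCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder: replace with your server's fully qualified domain name.
        String fqdn = "mydemoserver.mysql.database.azure.com";
        for (InetAddress address : InetAddress.getAllByName(fqdn)) {
            // Compare this output with the gateway IP address table to determine
            // whether you're resolving to an old or a new gateway ring.
            System.out.println(fqdn + " resolves to " + address.getHostAddress());
        }
    }
}
```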
--### I'm using Private Link; will my connections get affected? -No. This is a gateway hardware decommissioning and has no relation to Private Link or private IP addresses; it only affects the public IP addresses mentioned under the decommissioning IP addresses. ----## Next steps -* [Create and manage Azure Database for MySQL firewall rules using the Azure portal](./how-to-manage-firewall-using-portal.md) -* [Create and manage Azure Database for MySQL firewall rules using Azure CLI](./how-to-manage-firewall-using-cli.md) -* [Configure redirection with Azure Database for MySQL](./how-to-redirection.md) |
mysql | Concepts Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity.md | - Title: Transient connectivity errors - Azure Database for MySQL -description: Learn how to handle transient connectivity errors and connect efficiently to Azure Database for MySQL. -keywords: mysql connection,connection string,connectivity issues,transient error,connection error,connect efficiently ----- Previously updated : 06/20/2022---# Handle transient errors and connect efficiently to Azure Database for MySQL ----This article describes how to handle transient errors and connect efficiently to Azure Database for MySQL. --## Transient errors --A transient error, also known as a transient fault, is an error that will resolve itself. Most typically, these errors manifest as a dropped connection to the database server or an inability to open new connections to the server. Transient errors can occur, for example, when a hardware or network failure happens. Another reason could be a new version of a PaaS service that is being rolled out. Most of these events are automatically mitigated by the system in less than 60 seconds. A best practice for designing and developing applications in the cloud is to expect transient errors. Assume that they can happen in any component at any time, and have the appropriate logic in place to handle these situations. --## Handling transient errors --Transient errors should be handled using retry logic. Situations that must be considered: --* An error occurs when you try to open a connection -* An idle connection is dropped on the server side. When you try to issue a command, it can't be executed -* An active connection that currently is executing a command is dropped. --The first and second cases are fairly straightforward to handle. Try to open the connection again. When you succeed, the transient error has been mitigated by the system. You can use your Azure Database for MySQL again. We recommend waiting before retrying the connection and backing off if the initial retries fail. This way, the system can use all available resources to overcome the error situation. A good pattern to follow is: --* Wait for 5 seconds before your first retry. -* For each following retry, increase the wait exponentially, up to 60 seconds. -* Set a maximum number of retries at which point your application considers the operation failed. --When a connection with an active transaction fails, it is more difficult to handle the recovery correctly. There are two cases: If the transaction was read-only in nature, it is safe to reopen the connection and to retry the transaction. If, however, the transaction was also writing to the database, you must determine whether the transaction was rolled back, or whether it succeeded before the transient error happened. In the latter case, you might just not have received the commit acknowledgment from the database server. --One way of doing this is to generate a unique ID on the client that is used for all the retries. You pass this unique ID as part of the transaction to the server and store it in a column with a unique constraint. This way, you can safely retry the transaction: it succeeds if the previous transaction was rolled back and the client-generated unique ID does not yet exist in the system, and it fails with a duplicate key violation if the unique ID was previously stored because the previous transaction completed successfully. 
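Here's a minimal sketch of that idempotent-retry pattern in Java (JDBC). The `orders` table, its `client_request_id` column with a unique constraint, and the connection details are illustrative assumptions, not part of the original article:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.SQLIntegrityConstraintViolationException;
import java.util.UUID;

public class IdempotentWriteExample {
    // Assumes a hypothetical `orders` table whose `client_request_id` column has a UNIQUE constraint.
    static void insertOrder(String url, String user, String password, String item) throws InterruptedException {
        String requestId = UUID.randomUUID().toString(); // the same ID is reused for every retry
        long waitMillis = 5_000L;
        for (int attempt = 1; attempt <= 5; attempt++) {
            try (Connection conn = DriverManager.getConnection(url, user, password);
                 PreparedStatement ps = conn.prepareStatement(
                         "INSERT INTO orders (client_request_id, item) VALUES (?, ?)")) {
                ps.setString(1, requestId);
                ps.setString(2, item);
                ps.executeUpdate();
                return; // the write committed on this attempt
            } catch (SQLIntegrityConstraintViolationException duplicate) {
                // Duplicate key: a previous attempt actually committed before the
                // connection dropped, so there's nothing left to do.
                return;
            } catch (SQLException transientError) {
                // Treat the failure as transient: back off and retry with the same request ID.
                Thread.sleep(waitMillis);
                waitMillis = Math.min(waitMillis * 2, 60_000L);
            }
        }
        throw new IllegalStateException("Insert failed after repeated retries");
    }
}
```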
--When your program communicates with Azure Database for MySQL through third-party middleware, ask the vendor whether the middleware contains retry logic for transient errors. --Make sure to test your retry logic. For example, try to execute your code while scaling the compute resources of your Azure Database for MySQL server up or down. Your application should handle the brief downtime that is encountered during this operation without any problems. --## Connect efficiently to Azure Database for MySQL --Database connections are a limited resource, so making effective use of connection pooling to access Azure Database for MySQL optimizes performance. The sections below explain how to use connection pooling or persistent connections to more effectively access Azure Database for MySQL. --## Access databases by using connection pooling (recommended) --Managing database connections can have a significant impact on the performance of the application as a whole. To optimize the performance of your application, the goal should be to reduce the number of times connections are established and the time spent establishing connections in key code paths. We strongly recommend using database connection pooling or persistent connections to connect to Azure Database for MySQL. Database connection pooling handles the creation, management, and allocation of database connections. When a program requests a database connection, it prioritizes the allocation of existing idle database connections rather than the creation of a new connection. After the program has finished using the database connection, the connection is returned to the pool in preparation for further use, rather than simply being closed down. --For better illustration, this article provides [a piece of sample code](./sample-scripts-java-connection-pooling.md) that uses Java as an example. For more information, see [Apache Commons DBCP](https://commons.apache.org/proper/commons-dbcp/). --> [!NOTE] -> The server configures a timeout mechanism to close a connection that has been in an idle state for some time to free up resources. Be sure to set up the verification system to ensure the effectiveness of persistent connections when you are using them. For more information, see [Configure verification systems on the client side to ensure the effectiveness of persistent connections](concepts-connectivity.md#configure-verification-mechanisms-in-clients-to-confirm-the-effectiveness-of-persistent-connections). --## Access databases by using persistent connections (recommended) --The concept of persistent connections is similar to that of connection pooling. Replacing short connections with persistent connections requires only minor changes to the code, but it has a major effect in terms of improving performance in many typical application scenarios. --## Access databases by using a wait and retry mechanism with short connections --If you have resource limitations, we strongly recommend that you use database pooling or persistent connections to access databases. If your application uses short connections and experiences connection failures when you approach the upper limit on the number of concurrent connections, you can try a wait and retry mechanism, as shown in the sketch below. You can set an appropriate wait time, with a shorter wait time after the first attempt, and then increase the wait time for subsequent retries. 
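A minimal wait-and-retry sketch for opening a short connection, following the back-off pattern described earlier (5 seconds before the first retry, exponential growth up to 60 seconds, and a capped number of attempts); the connection details are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectWithRetry {
    static Connection open(String url, String user, String password) throws InterruptedException {
        long waitMillis = 5_000L; // wait 5 seconds before the first retry
        final int maxAttempts = 5;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return DriverManager.getConnection(url, user, password);
            } catch (SQLException transientError) {
                if (attempt == maxAttempts) {
                    throw new IllegalStateException("Could not connect after " + maxAttempts + " attempts", transientError);
                }
                Thread.sleep(waitMillis);
                // Increase the wait exponentially, capped at 60 seconds.
                waitMillis = Math.min(waitMillis * 2, 60_000L);
            }
        }
        throw new IllegalStateException("unreachable");
    }
}
```

Connection pooling or persistent connections (described in the previous sections) remain the preferred approach; this pattern is a fallback for workloads that must use short connections.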
--## Configure verification mechanisms in clients to confirm the effectiveness of persistent connections --The server configures a timeout mechanism to close a connection that's been in an idle state for some time to free up resources. When the client accesses the database again, it's equivalent to creating a new connection request between the client and the server. To ensure the effectiveness of connections during the process of using them, configure a verification mechanism on the client. As shown in the following example, you can use Tomcat JDBC connection pooling to configure this verification mechanism. --By setting the TestOnBorrow parameter, when there's a new request, the connection pool automatically verifies the effectiveness of any available idle connections. If such a connection is valid, it's returned directly; otherwise, the connection pool withdraws the connection. The connection pool then creates a new valid connection and returns it. This process ensures that the database is accessed efficiently. --For information on the specific settings, see the [JDBC connection pool official introduction document](https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#Common_Attributes). You mainly need to set the following three parameters: TestOnBorrow (set to true), ValidationQuery (set to SELECT 1), and ValidationQueryTimeout (set to 1). The specific sample code is shown below: --```java -import java.sql.Connection; -import org.apache.tomcat.jdbc.pool.DataSource; -import org.apache.tomcat.jdbc.pool.PoolProperties; --public class SimpleTestOnBorrowExample { - public static void main(String[] args) throws Exception { - PoolProperties p = new PoolProperties(); - p.setUrl("jdbc:mysql://localhost:3306/mysql"); - p.setDriverClassName("com.mysql.jdbc.Driver"); - p.setUsername("root"); - p.setPassword("password"); - // The indication of whether objects will be validated by the idle object evictor (if any). - // If an object fails to validate, it will be dropped from the pool. - // NOTE - for a true value to have any effect, the validationQuery or validatorClassName parameter must be set to a non-null string. - p.setTestOnBorrow(true); -- // The SQL query that will be used to validate connections from this pool before returning them to the caller. - // If specified, this query does not have to return any data, it just can't throw a SQLException. - p.setValidationQuery("SELECT 1"); -- // The timeout in seconds before a connection validation query fails. - // This works by calling java.sql.Statement.setQueryTimeout(seconds) on the statement that executes the validationQuery. - // The pool itself doesn't timeout the query, it is still up to the JDBC driver to enforce query timeouts. - // A value less than or equal to zero will disable this feature. - p.setValidationQueryTimeout(1); - // set other useful pool properties. - DataSource datasource = new DataSource(); - datasource.setPoolProperties(p); -- Connection con = null; - try { - con = datasource.getConnection(); - // execute your query here - } finally { - if (con!=null) try {con.close();}catch (Exception ignore) {} - } - } - } -``` --## Next steps --* [Troubleshoot connection issues to Azure Database for MySQL](how-to-troubleshoot-common-connection-issues.md) |
mysql | Concepts Data Access And Security Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-access-and-security-vnet.md | - Title: VNet service endpoints - Azure Database for MySQL -description: 'Describes how VNet service endpoints work for your Azure Database for MySQL server.' ----- Previously updated : 06/20/2022---# Use Virtual Network service endpoints and rules for Azure Database for MySQL ----*Virtual network rules* are one firewall security feature that controls whether your Azure Database for MySQL server accepts communications that are sent from particular subnets in virtual networks. This article explains why the virtual network rule feature is sometimes your best option for securely allowing communication to your Azure Database for MySQL server. --To create a virtual network rule, there must first be a [virtual network][vm-virtual-network-overview] (VNet) and a [virtual network service endpoint][vm-virtual-network-service-endpoints-overview-649d] for the rule to reference. The following picture illustrates how a Virtual Network service endpoint works with Azure Database for MySQL: ---> [!NOTE] -> This feature is available in all regions of Azure where Azure Database for MySQL is deployed for General Purpose and Memory Optimized servers. -> In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, please create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for MySQL server. --You can also consider using [Private Link](concepts-data-access-security-private-link.md) for connections. Private Link provides a private IP address in your VNet for the Azure Database for MySQL server. --<a name="anch-terminology-and-description-82f"></a> --## Terminology and description --**Virtual network:** You can have virtual networks associated with your Azure subscription. --**Subnet:** A virtual network contains **subnets**. Any Azure virtual machines (VMs) that you have are assigned to subnets. One subnet can contain multiple VMs or other compute nodes. Compute nodes that are outside of your virtual network cannot access your virtual network unless you configure your security to allow access. --**Virtual Network service endpoint:** A [Virtual Network service endpoint][vm-virtual-network-service-endpoints-overview-649d] is a subnet whose property values include one or more formal Azure service type names. In this article we are interested in the type name of **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure Database for MySQL and PostgreSQL services. It is important to note when applying the **Microsoft.Sql** service tag to a VNet service endpoint it will configure service endpoint traffic for all Azure SQL Database, Azure Database for MySQL and Azure Database for PostgreSQL servers on the subnet. --**Virtual network rule:** A virtual network rule for your Azure Database for MySQL server is a subnet that is listed in the access control list (ACL) of your Azure Database for MySQL server. To be in the ACL for your Azure Database for MySQL server, the subnet must contain the **Microsoft.Sql** type name. --A virtual network rule tells your Azure Database for MySQL server to accept communications from every node that is on the subnet. 
--------<a name="anch-benefits-of-a-vnet-rule-68b"></a> --## Benefits of a virtual network rule --Until you take action, the VMs on your subnets cannot communicate with your Azure Database for MySQL server. One action that establishes the communication is the creation of a virtual network rule. The rationale for choosing the VNet rule approach requires a compare-and-contrast discussion involving the competing security options offered by the firewall. --### A. Allow access to Azure services --The Connection security pane has an **ON/OFF** button that is labeled **Allow access to Azure services**. The **ON** setting allows communications from all Azure IP addresses and all Azure subnets. These Azure IPs or subnets might not be owned by you. This **ON** setting is probably more open than you want your Azure Database for MySQL Database to be. The virtual network rule feature offers much finer granular control. --### B. IP rules --The Azure Database for MySQL firewall allows you to specify IP address ranges from which communications are accepted into the Azure Database for MySQL Database. This approach is fine for stable IP addresses that are outside the Azure private network. But many nodes inside the Azure private network are configured with *dynamic* IP addresses. Dynamic IP addresses might change, such as when your VM is restarted. It would be folly to specify a dynamic IP address in a firewall rule, in a production environment. --You can salvage the IP option by obtaining a *static* IP address for your VM. For details, see [Configure private IP addresses for a virtual machine by using the Azure portal][vm-configure-private-ip-addresses-for-a-virtual-machine-using-the-azure-portal-321w]. --However, the static IP approach can become difficult to manage, and it is costly when done at scale. Virtual network rules are easier to establish and to manage. --<a name="anch-details-about-vnet-rules-38q"></a> --## Details about virtual network rules --This section describes several details about virtual network rules. --### Only one geographic region --Each Virtual Network service endpoint applies to only one Azure region. The endpoint does not enable other regions to accept communication from the subnet. --Any virtual network rule is limited to the region that its underlying endpoint applies to. --### Server-level, not database-level --Each virtual network rule applies to your whole Azure Database for MySQL server, not just to one particular database on the server. In other words, virtual network rule applies at the server-level, not at the database-level. --### Security administration roles --There is a separation of security roles in the administration of Virtual Network service endpoints. Action is required from each of the following roles: --- **Network Admin:** Turn on the endpoint.-- **Database Admin:** Update the access control list (ACL) to add the given subnet to the Azure Database for MySQL server.--*Azure RBAC alternative:* --The roles of Network Admin and Database Admin have more capabilities than are needed to manage virtual network rules. Only a subset of their capabilities is needed. --You have the option of using [Azure role-based access control (Azure RBAC)][rbac-what-is-813s] in Azure to create a single custom role that has only the necessary subset of capabilities. The custom role could be used instead of involving either the Network Admin or the Database Admin. 
The surface area of your security exposure is lower if you add a user to a custom role, versus adding the user to the other two major administrator roles. --> [!NOTE] -> In some cases the Azure Database for MySQL and the VNet-subnet are in different subscriptions. In these cases you must ensure the following configurations: -> - Both subscriptions must be in the same Microsoft Entra tenant. -> - The user has the required permissions to initiate operations, such as enabling service endpoints and adding a VNet-subnet to the given Server. -> - Make sure that both subscriptions have the **Microsoft.Sql** and **Microsoft.DBforMySQL** resource providers registered. For more information, refer to [resource-manager-registration][resource-manager-portal] --## Limitations --For Azure Database for MySQL, the virtual network rules feature has the following limitations: --- A Web App can be mapped to a private IP in a VNet/subnet. Even if service endpoints are turned ON from the given VNet/subnet, connections from the Web App to the server will have an Azure public IP source, not a VNet/subnet source. To enable connectivity from a Web App to a server that has VNet firewall rules, you must enable **Allow access to Azure services** on the server.--- In the firewall for your Azure Database for MySQL, each virtual network rule references a subnet. All these referenced subnets must be hosted in the same geographic region that hosts the Azure Database for MySQL.--- Each Azure Database for MySQL server can have up to 128 ACL entries for any given virtual network.--- Virtual network rules apply only to Azure Resource Manager virtual networks, and not to [classic deployment model][arm-deployment-model-568f] networks.--- Turning ON virtual network service endpoints to Azure Database for MySQL using the **Microsoft.Sql** service tag also enables the endpoints for Azure SQL Database, Azure Database for MySQL, and Azure Database for PostgreSQL servers on that subnet.--- Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.--- If **Microsoft.Sql** is enabled in a subnet, it indicates that you only want to use VNet rules to connect. [Non-VNet firewall rules](concepts-firewall-rules.md) of resources in that subnet will not work.--- On the firewall, IP address ranges do apply to the following networking items, but virtual network rules do not:- - [Site-to-Site (S2S) virtual private network (VPN)][vpn-gateway-indexmd-608y] - - On-premises via [ExpressRoute][expressroute-indexmd-744v] --## ExpressRoute --If your network is connected to the Azure network through the use of [ExpressRoute][expressroute-indexmd-744v], each circuit is configured with two public IP addresses at the Microsoft Edge. The two IP addresses are used to connect to Microsoft services, such as Azure Storage, by using Azure Public Peering. --To allow communication from your circuit to Azure Database for MySQL, you must create IP network rules for the public IP addresses of your circuits. In order to find the public IP addresses of your ExpressRoute circuit, open a support ticket with ExpressRoute by using the Azure portal. --## Adding a VNET Firewall rule to your server without turning on VNET Service Endpoints --Merely setting a VNet firewall rule does not help secure the server to the VNet. You must also turn VNet service endpoints **On** for the security to take effect. When you turn service endpoints **On**, your VNet-subnet experiences downtime until it completes the transition from **Off** to **On**. This is especially true in the context of large VNets. 
You can use the **IgnoreMissingServiceEndpoint** flag to reduce or eliminate the downtime during transition. --You can set the **IgnoreMissingServiceEndpoint** flag by using the Azure CLI or portal. --## Related articles -- [Azure virtual networks][vm-virtual-network-overview]-- [Azure virtual network service endpoints][vm-virtual-network-service-endpoints-overview-649d]--## Next steps -For articles on creating VNet rules, see: -- [Create and manage Azure Database for MySQL VNet rules using the Azure portal](how-to-manage-vnet-using-portal.md)-- [Create and manage Azure Database for MySQL VNet rules using Azure CLI](how-to-manage-vnet-using-cli.md)--<!-- Link references, to text, Within this same GitHub repo. --> -[arm-deployment-model-568f]: ../../azure-resource-manager/management/deployment-models.md --[vm-virtual-network-overview]: ../../virtual-network/virtual-networks-overview.md --[vm-virtual-network-service-endpoints-overview-649d]: ../../virtual-network/virtual-network-service-endpoints-overview.md --[vm-configure-private-ip-addresses-for-a-virtual-machine-using-the-azure-portal-321w]: ../../virtual-network/ip-services/virtual-networks-static-private-ip-arm-pportal.md --[rbac-what-is-813s]: ../../role-based-access-control/overview.md --[vpn-gateway-indexmd-608y]: ../../vpn-gateway/index.yml --[expressroute-indexmd-744v]: ../../expressroute/index.yml --[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md |
mysql | Concepts Data Access Security Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-access-security-private-link.md | - Title: Private Link - Azure Database for MySQL -description: Learn how Private Link works for Azure Database for MySQL. ----- Previously updated : 06/20/2022---# Private Link for Azure Database for MySQL ----Private Link allows you to connect to various PaaS services in Azure via a private endpoint. Azure Private Link essentially brings Azure services inside your private Virtual Network (VNet). The PaaS resources can be accessed using the private IP address just like any other resource in the VNet. --For a list of PaaS services that support Private Link functionality, review the Private Link [documentation](../../private-link/index.yml). A private endpoint is a private IP address within a specific [VNet](../../virtual-network/virtual-networks-overview.md) and subnet. --> [!NOTE] -> The Private Link feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers. --## Data exfiltration prevention --Data exfiltration in Azure Database for MySQL is when an authorized user, such as a database admin, is able to extract data from one system and move it to another location or system outside the organization. For example, the user moves the data to a storage account owned by a third party. --Consider a scenario with a user running MySQL Workbench inside an Azure Virtual Machine (VM) that is connecting to an Azure Database for MySQL server provisioned in West US. The example below shows how to limit access with public endpoints on Azure Database for MySQL using network access controls. --* Disable all Azure service traffic to Azure Database for MySQL via the public endpoint by setting *Allow Azure Services* to OFF. Ensure no IP addresses or ranges are allowed to access the server either via [firewall rules](./concepts-firewall-rules.md) or [virtual network service endpoints](./concepts-data-access-and-security-vnet.md). --* Only allow traffic to the Azure Database for MySQL using the private IP address of the VM. For more information, see the articles on [Service Endpoint](concepts-data-access-and-security-vnet.md) and [VNet firewall rules](how-to-manage-vnet-using-portal.md). --* On the Azure VM, narrow down the scope of outgoing connections by using Network Security Groups (NSGs) and service tags as follows -- * Specify an NSG rule to allow traffic for *Service Tag = SQL.WestUs* - only allowing connections to Azure Database for MySQL in West US - * Specify an NSG rule (with a higher priority) to deny traffic for *Service Tag = SQL* - denying connections to Azure Database for MySQL in all regions</br></br> --At the end of this setup, the Azure VM can connect only to Azure Database for MySQL in the West US region. However, the connectivity isn't restricted to a single Azure Database for MySQL. The VM can still connect to any Azure Database for MySQL in the West US region, including the databases that aren't part of the subscription. While we've reduced the scope of data exfiltration in the above scenario to a specific region, we haven't eliminated it altogether.</br> --With Private Link, you can now set up network access controls like NSGs to restrict access to the private endpoint. Individual Azure PaaS resources are then mapped to specific private endpoints. 
A malicious insider can only access the mapped PaaS resource (for example an Azure Database for MySQL) and no other resource. --## On-premises connectivity over private peering --When you connect to the public endpoint from on-premises machines, your IP address needs to be added to the IP-based firewall using a server-level firewall rule. While this model works well for allowing access to individual machines for dev or test workloads, it's difficult to manage in a production environment. --With Private Link, you can enable cross-premises access to the private endpoint using [Express Route](https://azure.microsoft.com/services/expressroute/) (ER), private peering or [VPN tunnel](../../vpn-gateway/index.yml). They can subsequently disable all access via public endpoint and not use the IP-based firewall. --> [!NOTE] -> In some cases the Azure Database for MySQL and the VNet-subnet are in different subscriptions. In these cases you must ensure the following configurations: -> - Make sure that both subscriptions have the **Microsoft.DBforMySQL** resource provider registered. For more information refer [resource-manager-registration][resource-manager-portal] --## Configure Private Link for Azure Database for MySQL --### Creation Process --Private endpoints are required to enable Private Link. This can be done using the following how-to guides. --* [Azure portal](./how-to-configure-private-link-portal.md) -* [CLI](./how-to-configure-private-link-cli.md) --### Approval Process -Once the network admin creates the private endpoint (PE), the MySQL admin can manage the private endpoint Connection (PEC) to Azure Database for MySQL. This separation of duties between the network admin and the DBA is helpful for management of the Azure Database for MySQL connectivity. --* Navigate to the Azure Database for MySQL server resource in the Azure portal. - * Select the private endpoint connections in the left pane - * Shows a list of all private endpoint Connections (PECs) - * Corresponding private endpoint (PE) created ---* Select an individual PEC from the list by selecting it. ---* The MySQL server admin can choose to approve or reject a PEC and optionally add a short text response. ---* After approval or rejection, the list will reflect the appropriate state along with the response text ---## Use cases of Private Link for Azure Database for MySQL --Clients can connect to the private endpoint from the same VNet, [peered VNet](../../virtual-network/virtual-network-peering-overview.md) in same region or across regions, or via [VNet-to-VNet connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) across regions. Additionally, clients can connect from on-premises using ExpressRoute, private peering, or VPN tunneling. Below is a simplified diagram showing the common use cases. ---### Connecting from an Azure VM in Peered Virtual Network (VNet) -Configure [VNet peering](../../virtual-network/tutorial-connect-virtual-networks-powershell.md) to establish connectivity to the Azure Database for MySQL from an Azure VM in a peered VNet. --### Connecting from an Azure VM in VNet-to-VNet environment -Configure [VNet-to-VNet VPN gateway connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) to establish connectivity to an Azure Database for MySQL from an Azure VM in a different region or subscription. 
--### Connecting from an on-premises environment over VPN -To establish connectivity from an on-premises environment to the Azure Database for MySQL, choose and implement one of the options: --* [Point-to-Site connection](../../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md) -* [Site-to-Site VPN connection](../../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md) -* [ExpressRoute circuit](../../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md) --## Private Link combined with firewall rules --The following situations and outcomes are possible when you use Private Link in combination with firewall rules: --* If you don't configure any firewall rules, then by default, no traffic will be able to access the Azure Database for MySQL. --* If you configure public traffic or a service endpoint and you create private endpoints, then different types of incoming traffic are authorized by the corresponding type of firewall rule. --* If you don't configure any public traffic or service endpoint and you create private endpoints, then the Azure Database for MySQL is accessible only through the private endpoints. If you don't configure public traffic or a service endpoint, after all approved private endpoints are rejected or deleted, no traffic will be able to access the Azure Database for MySQL. --## Deny public access for Azure Database for MySQL --If you want to rely only on private endpoints for accessing your Azure Database for MySQL, you can disable all public endpoints (that is, [firewall rules](concepts-firewall-rules.md) and [VNet service endpoints](concepts-data-access-and-security-vnet.md)) by setting the **Deny Public Network Access** configuration on the database server. --When this setting is set to *YES*, only connections via private endpoints are allowed to your Azure Database for MySQL. When this setting is set to *NO*, clients can connect to your Azure Database for MySQL based on your firewall or VNet service endpoint settings. Additionally, once the private network access value is set, customers cannot add or update existing 'Firewall rules' and 'VNet service endpoint rules'. --> [!NOTE] -> This feature is available in all Azure regions where Azure Database for MySQL - Single Server supports General Purpose and Memory Optimized pricing tiers. -> -> This setting does not have any impact on the SSL and TLS configurations for your Azure Database for MySQL. --To learn how to set the **Deny Public Network Access** setting for your Azure Database for MySQL from the Azure portal, refer to [How to configure Deny Public Network Access](how-to-deny-public-network-access.md). --## Next steps --To learn more about Azure Database for MySQL security features, see the following articles: --* To configure a firewall for Azure Database for MySQL, see [Firewall support](./concepts-firewall-rules.md). --* To learn how to configure a virtual network service endpoint for your Azure Database for MySQL, see [Configure access from virtual networks](./concepts-data-access-and-security-vnet.md). --* For an overview of Azure Database for MySQL connectivity, see [Azure Database for MySQL Connectivity Architecture](./concepts-connectivity-architecture.md) --<!-- Link references, to text, Within this same GitHub repo. --> -[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md |
mysql | Concepts Data Encryption Mysql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-encryption-mysql.md | - Title: Data encryption with customer-managed key - Azure Database for MySQL -description: Azure Database for MySQL data encryption with a customer-managed key enables you to Bring Your Own Key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. ----- Previously updated : 06/20/2022---# Azure Database for MySQL data encryption with a customer-managed key ----Data encryption with customer-managed keys for Azure Database for MySQL enables you to bring your own key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you're responsible for, and in full control of, a key's lifecycle, key usage permissions, and auditing of operations on keys. --Data encryption with customer-managed keys for Azure Database for MySQL is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault](../../key-vault/general/security-features.md) instance. The key encryption key (KEK) and data encryption key (DEK) are described in more detail later in this article. --Key Vault is a cloud-based, external key management system. It's highly available and provides scalable, secure storage for RSA cryptographic keys, optionally backed by [FIPS 140 validated](/azure/key-vault/keys/about-keys#compliance) hardware security modules (HSMs). It doesn't allow direct access to a stored key, but does provide encryption and decryption services to authorized entities. Key Vault can generate the key, import it, or [have it transferred from an on-premises HSM device](../../key-vault/keys/hsm-protected-keys.md). --> [!NOTE] -> This feature is supported only on "General Purpose storage v2 (supports up to 16 TB)" storage available in the General Purpose and Memory Optimized pricing tiers. Refer to [Storage concepts](concepts-pricing-tiers.md#storage) for more details. For other limitations, refer to the [limitations](concepts-data-encryption-mysql.md#limitations) section. --## Benefits --Data encryption with customer-managed keys for Azure Database for MySQL provides the following benefits: --* Data access is fully controlled by you, through the ability to remove the key and make the database inaccessible -* Full control over the key lifecycle, including rotation of the key to align with corporate policies -* Central management and organization of keys in Azure Key Vault -* Ability to implement separation of duties between security officers, DBAs, and system administrators ---## Terminology and description --**Data encryption key (DEK)**: A symmetric AES256 key used to encrypt a partition or block of data. Encrypting each block of data with a different key makes cryptanalysis attacks more difficult. Access to DEKs is needed by the resource provider or application instance that is encrypting and decrypting a specific block. When you replace a DEK with a new key, only the data in its associated block must be re-encrypted with the new key. --**Key encryption key (KEK)**: An encryption key used to encrypt the DEKs. 
A KEK that never leaves Key Vault allows the DEKs themselves to be encrypted and controlled. The entity that has access to the KEK might be different than the entity that requires the DEK. Since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which DEKs can be deleted, via deletion of the KEK. --The DEKs, encrypted with the KEKs, are stored separately. Only an entity with access to the KEK can decrypt these DEKs. For more information, see [Security in encryption at rest](../../security/fundamentals/encryption-atrest.md). --## How data encryption with a customer-managed key works ---For a MySQL server to use customer-managed keys stored in Key Vault for encryption of the DEK, a Key Vault administrator gives the following access rights to the server: --* **get**: For retrieving the public part and properties of the key in the key vault. -* **wrapKey**: To be able to encrypt the DEK. The encrypted DEK is stored in the Azure Database for MySQL. -* **unwrapKey**: To be able to decrypt the DEK. Azure Database for MySQL needs the decrypted DEK to encrypt/decrypt the data. --The key vault administrator can also [enable logging of Key Vault audit events](../../key-vault/key-vault-insights-overview.md), so they can be audited later. --When the server is configured to use the customer-managed key stored in the key vault, the server sends the DEK to the key vault for encryption. Key Vault returns the encrypted DEK, which is stored in the user database. Similarly, when needed, the server sends the protected DEK to the key vault for decryption. Auditors can use Azure Monitor to review Key Vault audit event logs, if logging is enabled. --## Requirements for configuring data encryption for Azure Database for MySQL --The following are requirements for configuring Key Vault: --* Key Vault and Azure Database for MySQL must belong to the same Microsoft Entra tenant. Cross-tenant Key Vault and server interactions aren't supported. Moving the Key Vault resource afterward requires you to reconfigure the data encryption. -* Enable the [soft-delete](../../key-vault/general/soft-delete-overview.md) feature on the key vault with the retention period set to **90 days**, to protect from data loss if an accidental key (or Key Vault) deletion happens. Soft-deleted resources are retained for 90 days by default, unless the retention period is explicitly set to <=90 days. The recover and purge actions have their own permissions associated in a Key Vault access policy. The soft-delete feature is off by default, but you can enable it through PowerShell or the Azure CLI (note that you can't enable it through the Azure portal). -* Enable the [Purge Protection](../../key-vault/general/soft-delete-overview.md#purge-protection) feature on the key vault with the retention period set to **90 days**. Purge protection can only be enabled once soft-delete is enabled. It can be turned on via the Azure CLI or PowerShell. When purge protection is on, a vault or an object in the deleted state cannot be purged until the retention period has passed. Soft-deleted vaults and objects can still be recovered, ensuring that the retention policy will be followed. -* Grant the Azure Database for MySQL access to the key vault with the get, wrapKey, and unwrapKey permissions by using its unique managed identity. In the Azure portal, the unique 'Service' identity is automatically created when data encryption is enabled on the MySQL server. 
See [Configure data encryption for MySQL](how-to-data-encryption-portal.md) for detailed, step-by-step instructions when you're using the Azure portal. --The following are requirements for configuring the customer-managed key: --* The customer-managed key to be used for encrypting the DEK can only be asymmetric, RSA 2048. -* The key activation date (if set) must be a date and time in the past. The expiration date must not be set. -* The key must be in the *Enabled* state. -* The key must have [soft delete](../../key-vault/general/soft-delete-overview.md) with the retention period set to **90 days**. This implicitly sets the required key attribute recoveryLevel: "Recoverable". If the retention is set to < 90 days, the recoveryLevel is "CustomizedRecoverable", which doesn't meet the requirement, so ensure that the retention period is set to **90 days**. -* The key must have [purge protection enabled](../../key-vault/general/soft-delete-overview.md#purge-protection). -* If you're [importing an existing key](/rest/api/keyvault/keys/import-key/import-key) into the key vault, make sure to provide it in one of the supported file formats (`.pfx`, `.byok`, `.backup`). --## Recommendations --When you're using data encryption with a customer-managed key, here are recommendations for configuring Key Vault: --* Set a resource lock on Key Vault to control who can delete this critical resource and prevent accidental or unauthorized deletion. -* Enable auditing and reporting on all encryption keys. Key Vault provides logs that are easy to inject into other security information and event management tools. Azure Monitor Log Analytics is one example of a service that's already integrated. -* Ensure that Key Vault and Azure Database for MySQL reside in the same region, to ensure faster access for DEK wrap and unwrap operations. -* Lock down the Azure Key Vault to only **private endpoint and selected networks** and allow only *trusted Microsoft* services to secure the resources. -- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/keyvault-trusted-service.png" alt-text="trusted-service-with-AKV"::: --Here are recommendations for configuring a customer-managed key: --* Keep a copy of the customer-managed key in a secure place, or escrow it to the escrow service. --* If Key Vault generates the key, create a key backup before using the key for the first time. You can only restore the backup to Key Vault. For more information about the backup command, see [Backup-AzKeyVaultKey](/powershell/module/az.keyVault/backup-azkeyVaultkey). --## Inaccessible customer-managed key condition --When you configure data encryption with a customer-managed key in Key Vault, continuous access to this key is required for the server to stay online. If the server loses access to the customer-managed key in Key Vault, the server begins denying all connections within 10 minutes. The server issues a corresponding error message, and changes the server state to *Inaccessible*. Some of the reasons why the server can reach this state are: --* If we create a Point In Time Restore server for your Azure Database for MySQL, which has data encryption enabled, the newly created server will be in the *Inaccessible* state. You can fix this through the [Azure portal](how-to-data-encryption-portal.md#using-data-encryption-for-restore-or-replica-servers) or the [CLI](how-to-data-encryption-cli.md#using-data-encryption-for-restore-or-replica-servers). 
-* If we create a read replica for your Azure Database for MySQL, which has data encryption enabled, the replica server will be in *Inaccessible* state. You can fix this through [Azure portal](how-to-data-encryption-portal.md#using-data-encryption-for-restore-or-replica-servers) or [CLI](how-to-data-encryption-cli.md#using-data-encryption-for-restore-or-replica-servers). -* If you delete the KeyVault, the Azure Database for MySQL will be unable to access the key and will move to *Inaccessible* state. Recover the [Key Vault](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the server *Available*. -* If we delete the key from the KeyVault, the Azure Database for MySQL will be unable to access the key and will move to *Inaccessible* state. Recover the [Key](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the server *Available*. -* If the key stored in the Azure KeyVault expires, the key will become invalid and the Azure Database for MySQL will transition into *Inaccessible* state. Extend the key expiry date using [CLI](/cli/azure/keyvault/key#az-keyvault-key-set-attributes) and then revalidate the data encryption to make the server *Available*. --### Accidental key access revocation from Key Vault --It might happen that someone with sufficient access rights to Key Vault accidentally disables server access to the key by: --* Revoking the key vault's `get`, `wrapKey`, and `unwrapKey` permissions from the server. -* Deleting the key. -* Deleting the key vault. -* Changing the key vault's firewall rules. -* Deleting the managed identity of the server in Microsoft Entra ID. --## Monitor the customer-managed key in Key Vault --To monitor the database state, and to enable alerting for the loss of transparent data encryption protector access, configure the following Azure features: --* [Azure Resource Health](../../service-health/resource-health-overview.md): An inaccessible database that has lost access to the customer key shows as "Inaccessible" after the first connection to the database has been denied. -* [Activity log](../../service-health/alerts-activity-log-service-notifications-portal.md): When access to the customer key in the customer-managed Key Vault fails, entries are added to the activity log. You can reinstate access as soon as possible, if you create alerts for these events. --* [Action groups](../../azure-monitor/alerts/action-groups.md): Define these groups to send you notifications and alerts based on your preferences. --## Restore and replicate with a customer's managed key in Key Vault --After Azure Database for MySQL is encrypted with a customer's managed key stored in Key Vault, any newly created copy of the server is also encrypted. You can make this new copy either through a local or geo-restore operation, or through read replicas. However, the copy can be changed to reflect a new customer's managed key for encryption. When the customer-managed key is changed, old backups of the server start using the latest key. --To avoid issues while setting up customer-managed data encryption during restore or read replica creation, it's important to follow these steps on the source and restored/replica servers: --* Initiate the restore or read replica creation process from the source Azure Database for MySQL. -* Keep the newly created server (restored/replica) in an inaccessible state, because its unique identity hasn't yet been given permissions to Key Vault. 
-* On the restored/replica server, revalidate the customer-managed key in the data encryption settings to ensure that the newly created server is given wrap and unwrap permissions to the key stored in Key Vault. --## Limitations --For Azure Database for MySQL, the support for encryption of data at rest using a customer-managed key (CMK) has a few limitations: --* Support for this functionality is limited to **General Purpose** and **Memory Optimized** pricing tiers. -* This feature is only supported in regions and servers that support general purpose storage v2 (up to 16 TB). For the list of Azure regions supporting storage up to 16 TB, refer to the storage section in the documentation [here](concepts-pricing-tiers.md#storage) -- > [!NOTE] - > - For all new MySQL servers created in the [Azure regions](concepts-pricing-tiers.md#storage) that support general purpose storage v2, encryption with customer-managed keys is **available**. A Point In Time Restored (PITR) server or read replica doesn't qualify, though in theory it is 'new'. - > - To validate whether your provisioned server is on general purpose storage v2, go to the pricing tier blade in the portal and check the max storage size supported by your provisioned server. If you can only move the slider up to 4 TB, your server is on general purpose storage v1 and doesn't support encryption with customer-managed keys. However, the data is encrypted using service-managed keys at all times. --* Encryption is only supported with an RSA 2048 cryptographic key. --## Next steps --* Learn how to set up data encryption with a customer-managed key for your Azure database for MySQL by using the [Azure portal](how-to-data-encryption-portal.md) and [Azure CLI](how-to-data-encryption-cli.md). -* Learn about the storage type support for [Azure Database for MySQL - Single Server](concepts-pricing-tiers.md#storage) |
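The revalidation step above can also be done from the command line. As a rough sketch only: the data encryption CLI article linked earlier uses the `az mysql server key create` command with the full Key Vault key URL; the names below are placeholders, and the exact parameter spellings should be confirmed with `az mysql server key create --help` for your CLI version.

```bash
# Placeholder values; replace with your restored/replica server, resource group,
# and the full Key Vault key identifier (including the key version).
RG="myresourcegroup"
SERVER="mydemoserver-restored"
KEY_ID="https://mydemovault.vault.azure.net/keys/mydemokey/<key-version>"

# Re-add (revalidate) the customer-managed key on the newly created server so it
# regains wrap/unwrap access to the key stored in Key Vault.
az mysql server key create --resource-group $RG --name $SERVER --kid $KEY_ID
```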
mysql | Concepts Data In Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-in-replication.md | - Title: Data-in Replication - Azure Database for MySQL -description: Learn about using Data-in Replication to synchronize from an external server into the Azure Database for MySQL service. ----- Previously updated : 06/20/2022---# Replicate data into Azure Database for MySQL ----Data-in Replication allows you to synchronize data from an external MySQL server into the Azure Database for MySQL service. The external server can be on-premises, in virtual machines, or a database service hosted by other cloud providers. Data-in Replication is based on the binary log (binlog) file position-based or GTID-based replication native to MySQL. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html). --## When to use Data-in Replication --The main scenarios to consider for using Data-in Replication are: --- **Hybrid Data Synchronization:** With Data-in Replication, you can keep data synchronized between your on-premises servers and Azure Database for MySQL. This synchronization is useful for creating hybrid applications. This method is appealing when you have an existing local database server but want to move the data to a region closer to end users.-- **Multi-Cloud Synchronization:** For complex cloud solutions, use Data-in Replication to synchronize data between Azure Database for MySQL and different cloud providers, including virtual machines and database services hosted in those clouds.--For migration scenarios, use the [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) (DMS). --## Limitations and considerations --### Data not replicated --The [*mysql system database*](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) on the source server isn't replicated. In addition, changes to accounts and permissions on the source server aren't replicated. If you create an account on the source server and this account needs to access the replica server, manually create the same account on the replica server. To understand what tables are contained in the system database, see the [MySQL manual](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html). --### Filtering --To skip replicating tables from your source server (hosted on-premises, in virtual machines, or a database service hosted by other cloud providers), the `replicate_wild_ignore_table` parameter is supported. Optionally, update this parameter on the replica server hosted in Azure using the [Azure portal](how-to-server-parameters.md) or [Azure CLI](how-to-configure-server-parameters-using-cli.md); a brief CLI sketch appears later in this article. --To learn more about this parameter, review the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#option_mysqld_replicate-wild-ignore-table). --## Supported in General Purpose or Memory Optimized tier only --Data-in Replication is only supported in the General Purpose and Memory Optimized pricing tiers. --## Private Link support --Private Link for Azure Database for MySQL supports only inbound connections. Because Data-in Replication requires an outbound connection from the service, Private Link isn't supported for the Data-in Replication traffic. -->[!NOTE] ->GTID is supported on versions 5.7 and 8.0 and only on servers that support storage up to 16 TB (General purpose storage v2). 
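As noted in the Filtering section above, `replicate_wild_ignore_table` is set as a server parameter on the Azure replica. A minimal Azure CLI sketch follows; the server and resource group names, and the ignore patterns, are placeholders.

```bash
# Skip replicating every table in db1 plus one specific table in db2.
# Server and resource group names are placeholders.
az mysql server configuration set \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name replicate_wild_ignore_table \
  --value "db1.%,db2.ignore_table"
```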
--### Requirements --- The source server version must be at least MySQL version 5.6.-- The source and replica server versions must be the same. For example, both must be MySQL version 5.6 or both must be MySQL version 5.7.-- Each table must have a primary key.-- The source server should use the MySQL InnoDB engine.-- User must have permissions to configure binary logging and create new users on the source server.-- If the source server has SSL enabled, ensure the SSL CA certificate provided for the domain has been included in the `mysql.az_replication_change_master` or `mysql.az_replication_change_master_with_gtid` stored procedure. Refer to the following [examples](./how-to-data-in-replication.md#link-source-and-replica-servers-to-start-data-in-replication) and the `master_ssl_ca` parameter.-- Ensure that the source server's IP address has been added to the Azure Database for MySQL replica server's firewall rules. Update firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md).-- Ensure that the machine hosting the source server allows both inbound and outbound traffic on port 3306.-- Ensure that the source server has a **public IP address**, that DNS is publicly accessible, or that the source server has a fully qualified domain name (FQDN).--## Next steps --- Learn how to [set up data-in replication](how-to-data-in-replication.md)-- Learn about [replicating in Azure with read replicas](concepts-read-replicas.md)-- Learn about how to [migrate data with minimal downtime using DMS](how-to-migrate-online.md) |
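To make the SSL-related requirement above concrete, here is a hedged sketch of linking the servers from the mysql command line. The host names, credentials, binlog coordinates, and CA certificate are placeholders, and the stored procedure signature should be checked against the data-in replication how-to article linked above before use.

```bash
# Run against the Azure Database for MySQL replica server. All values are placeholders.
# The CA certificate is passed as a single-line string with newlines escaped as \n.
mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p -e "
  CALL mysql.az_replication_change_master(
    'source.example.com',   -- source server host
    'syncuser',             -- replication user on the source
    '<secure-password>',    -- replication user's password
    3306,                   -- port
    'mysql-bin.000002',     -- binary log file name
    120,                    -- binary log position
    '-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----');
  CALL mysql.az_replication_start;"
```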
mysql | Concepts Database Application Development | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-database-application-development.md | - Title: Application development - Azure Database for MySQL -description: Introduces design considerations that a developer should follow when writing application code to connect to Azure Database for MySQL ----- Previously updated : 06/20/2022---# Application development overview for Azure Database for MySQL ----This article discusses design considerations that a developer should follow when writing application code to connect to Azure Database for MySQL. --> [!TIP] -> For a tutorial showing you how to create a server, create a server-based firewall, view server properties, create a database, and connect and query by using MySQL Workbench and mysql.exe, see [Design your first Azure Database for MySQL database](tutorial-design-database-using-portal.md). --## Language and platform -Code samples are available for various programming languages and platforms. You can find links to the code samples at: -[Connectivity libraries used to connect to Azure Database for MySQL](../flexible-server/concepts-connection-libraries.md) --## Tools -Azure Database for MySQL uses the MySQL community version and is compatible with common MySQL management tools such as Workbench or MySQL utilities such as mysql.exe, [phpMyAdmin](https://www.phpmyadmin.net/), [Navicat](https://www.navicat.com/products/navicat-for-mysql), [dbForge Studio for MySQL](https://www.devart.com/dbforge/mysql/studio/), and others. You can also use the Azure portal, Azure CLI, and REST APIs to interact with the database service. --## Resource limitations -Azure Database for MySQL manages the resources available to a server by using two different mechanisms: -- Resource governance.-- Enforcement of limits.--## Security -Azure Database for MySQL provides resources for limiting access, protecting data, configuring users and roles, and monitoring activities on a MySQL database. --## Authentication -Azure Database for MySQL supports server authentication of users and logins. --## Resiliency -When a transient error occurs while connecting to a MySQL database, your code should retry the call. We recommend that the retry logic use back-off so that it doesn't overwhelm the database with multiple clients retrying simultaneously. --- Code samples: For code samples that illustrate retry logic, see samples for the language of your choice at: [Connectivity libraries used to connect to Azure Database for MySQL](../flexible-server/concepts-connection-libraries.md)--## Managing connections -Database connections are a limited resource, so we recommend sensible use of connections when accessing your MySQL database to achieve better performance. -- Access the database by using connection pooling or persistent connections.-- Access the database by using a short connection life span. -- Use retry logic in your application at the point of the connection attempt, to catch failures that result from concurrent connections having reached the maximum allowed. In the retry logic, set a short delay, and then wait for a random time before additional connection attempts (see the sketch that follows). |
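As a rough illustration of the retry and back-off guidance above (not an official sample), the following shell sketch retries a probe connection with exponential back-off plus random jitter; the server name, user, and password are placeholders.

```bash
#!/bin/bash
# Placeholder connection details.
HOST="mydemoserver.mysql.database.azure.com"
USER="myadmin@mydemoserver"
PASSWORD="<secure-password>"

delay=1
for attempt in 1 2 3 4 5; do
  # A trivial probe query; replace with your real workload.
  if mysql -h "$HOST" -u "$USER" -p"$PASSWORD" --ssl-mode=REQUIRED -e "SELECT 1;"; then
    echo "Connected on attempt $attempt"
    break
  fi
  # Exponential back-off with random jitter so clients don't retry in lockstep.
  sleep $(( delay + RANDOM % 3 ))
  delay=$(( delay * 2 ))
done
```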
mysql | Concepts Firewall Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-firewall-rules.md | - Title: Firewall rules - Azure Database for MySQL -description: Learn about using firewall rules to enable connections to your Azure Database for MySQL server. ----- Previously updated : 06/20/2022---# Azure Database for MySQL server firewall rules ----Firewalls prevent all access to your database server until you specify which computers have permission. The firewall grants access to the server based on the originating IP address of each request. --To configure a firewall, create firewall rules that specify ranges of acceptable IP addresses. You can create firewall rules at the server level. --**Firewall rules:** These rules enable clients to access your entire Azure Database for MySQL server, that is, all the databases within the same logical server. Server-level firewall rules can be configured by using the Azure portal or Azure CLI commands. To create server-level firewall rules, you must be the subscription owner or a subscription contributor. --## Firewall overview -All database access to your Azure Database for MySQL server is by default blocked by the firewall. To begin using your server from another computer, you need to specify one or more server-level firewall rules to enable access to your server. Use the firewall rules to specify which IP address ranges from the Internet to allow. Access to the Azure portal website itself is not impacted by the firewall rules. --Connection attempts from the Internet and Azure must first pass through the firewall before they can reach your Azure Database for MySQL database, as shown in the following diagram: ---## Connecting from the Internet -Server-level firewall rules apply to all databases on the Azure Database for MySQL server. --If the IP address of the request is within one of the ranges specified in the server-level firewall rules, then the connection is granted. --If the IP address of the request is outside the ranges specified in any of the database-level or server-level firewall rules, then the connection request fails. --## Connecting from Azure -It is recommended that you find the outgoing IP address of any application or service and explicitly allow access to those individual IP addresses or ranges. For example, you can find the outgoing IP address of an Azure App Service or use a public IP tied to a virtual machine or other resource (see below for info on connecting with a virtual machine's private IP over service endpoints). --If a fixed outgoing IP address isn't available for your Azure service, you can consider enabling connections from all Azure datacenter IP addresses. This setting can be enabled from the Azure portal by setting the **Allow access to Azure services** option to **ON** from the **Connection security** pane and hitting **Save**. From the Azure CLI, a firewall rule setting with starting and ending address equal to 0.0.0.0 does the equivalent. If the connection attempt is not allowed, the request does not reach the Azure Database for MySQL server. --> [!IMPORTANT] -> The **Allow access to Azure services** option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users. 
-> ---### Connecting from a VNet -To connect securely to your Azure Database for MySQL server from a VNet, consider using [VNet service endpoints](./concepts-data-access-and-security-vnet.md). --## Programmatically managing firewall rules -In addition to the Azure portal, firewall rules can be managed programmatically by using the Azure CLI. See also [Create and manage Azure Database for MySQL firewall rules using Azure CLI](./how-to-manage-firewall-using-cli.md) --## Troubleshooting firewall issues -Consider the following points when access to the Microsoft Azure Database for MySQL server service does not behave as expected: --* **Changes to the allow list have not taken effect yet:** There may be as much as a five-minute delay for changes to the Azure Database for MySQL Server firewall configuration to take effect. --* **The login is not authorized or an incorrect password was used:** If a login does not have permissions on the Azure Database for MySQL server or the password used is incorrect, the connection to the Azure Database for MySQL server is denied. Creating a firewall setting only provides clients with an opportunity to attempt connecting to your server; each client must provide the necessary security credentials. --* **Dynamic IP address:** If you have an Internet connection with dynamic IP addressing and you are having trouble getting through the firewall, you can try one of the following solutions: -- * Ask your Internet Service Provider (ISP) for the IP address range assigned to your client computers that access the Azure Database for MySQL server, and then add the IP address range as a firewall rule. -- * Get static IP addressing instead for your client computers, and then add the IP addresses as firewall rules. --* **Server's IP appears to be public:** Connections to the Azure Database for MySQL server are routed through a publicly accessible Azure gateway. However, the actual server IP is protected by the firewall. For more information, visit the [connectivity architecture article](concepts-connectivity-architecture.md). --* **Cannot connect from Azure resource with allowed IP:** Check whether the **Microsoft.Sql** service endpoint is enabled for the subnet you are connecting from. If **Microsoft.Sql** is enabled, it indicates that you only want to use [VNet service endpoint rules](concepts-data-access-and-security-vnet.md) on that subnet. -- For example, you may see the following error if you are connecting from an Azure VM in a subnet that has **Microsoft.Sql** enabled but has no corresponding VNet rule: - `FATAL: Client from Azure Virtual Networks is not allowed to access the server` --* **Firewall rule is not available for IPv6 format:** The firewall rules must be in IPv4 format. If you specify firewall rules in IPv6 format, it will show the validation error. --## Next steps --* [Create and manage Azure Database for MySQL firewall rules using the Azure portal](./how-to-manage-firewall-using-portal.md) -* [Create and manage Azure Database for MySQL firewall rules using Azure CLI](./how-to-manage-firewall-using-cli.md) -* [VNet service endpoints in Azure Database for MySQL](./concepts-data-access-and-security-vnet.md) |
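As mentioned in the section on managing firewall rules programmatically, the Azure CLI can create server-level rules. A minimal sketch with placeholder names follows; the 0.0.0.0 rule at the end is the CLI equivalent of the **Allow access to Azure services** setting described earlier.

```bash
# Allow a single client IP address (all names and addresses are placeholders).
az mysql server firewall-rule create \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name AllowMyClient \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.10

# Equivalent of "Allow access to Azure services": start and end address of 0.0.0.0.
az mysql server firewall-rule create \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name AllowAllAzureIps \
  --start-ip-address 0.0.0.0 \
  --end-ip-address 0.0.0.0
```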
mysql | Concepts High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-high-availability.md | - Title: High availability - Azure Database for MySQL -description: This article provides information on high availability in Azure Database for MySQL ----- Previously updated : 06/20/2022---# High availability in Azure Database for MySQL ----The Azure Database for MySQL service provides a guaranteed high level of availability with the financially backed service level agreement (SLA) of [99.99%](https://azure.microsoft.com/support/legal/sla/mysql) uptime. Azure Database for MySQL provides high availability during planned events such as user-initiated scale compute operation, and also when unplanned events such as underlying hardware, software, or network failures occur. Azure Database for MySQL can quickly recover from most critical circumstances, ensuring virtually no application down time when using this service. --Azure Database for MySQL is suitable for running mission critical databases that require high uptime. Built on Azure architecture, the service has inherent high availability, redundancy, and resiliency capabilities to mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components. --## Components in Azure Database for MySQL --| **Component** | **Description**| -| | -- | -| <b>MySQL Database Server | Azure Database for MySQL provides security, isolation, resource safeguards, and fast restart capability for database servers. These capabilities facilitate operations such as scaling and database server recovery operation after an outage to happen in 60-120 seconds depending on the transactional activity on the database. <br/> Data modifications in the database server typically occur in the context of a database transaction. All database changes are recorded synchronously in the form of write ahead logs (ib_log) on Azure Storage ΓÇô which is attached to the database server. During the database [checkpoint](https://dev.mysql.com/doc/refman/5.7/en/innodb-checkpoints.html) process, data pages from the database server memory are also flushed to the storage. | -| <b>Remote Storage | All MySQL physical data files and log files are stored on Azure Storage, which is architected to store three copies of data within a region to ensure data redundancy, availability, and reliability. The storage layer is also independent of the database server. It can be detached from a failed database server and reattached to a new database server within 60 seconds. Also, Azure Storage continuously monitors for any storage faults. If a block corruption is detected, it is automatically fixed by instantiating a new storage copy. | -| <b>Gateway | The Gateway acts as a database proxy, routes all client connections to the database server. | --## Planned downtime mitigation -Azure Database for MySQL is architected to provide high availability during planned downtime operations. ---Here are some planned maintenance scenarios: --| **Scenario** | **Description**| -| | -- | -| <b>Compute scale up/down | When the user performs compute scale up/down operation, a new database server is provisioned using the scaled compute configuration. In the old database server, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, and then it is shut down. The storage is then detached from the old database server and attached to the new database server. 
When the client application retries the connection, or tries to make a new connection, the Gateway directs the connection request to the new database server.| -| <b>Scaling Up Storage | Scaling up the storage is an online operation and does not interrupt the database server.| -| <b>New Software Deployment (Azure) | New features rollout or bug fixes automatically happen as part of serviceΓÇÖs planned maintenance. For more information, refer to the [documentation](concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).| -| <b>Minor version upgrades | Azure Database for MySQL automatically patches database servers to the minor version determined by Azure. It happens as part of service's planned maintenance. During planned maintenance, there can be database server restarts or failovers, which might lead to brief unavailability of the database servers for end users. Azure Database for MySQL servers are running in containers so database server restarts are typically quick, expected to complete typically in 60-120 seconds. The entire planned maintenance event including each server restarts is carefully monitored by the engineering team. The server failovers time is dependent on database recovery time, which can cause the database to come online longer if you have heavy transactional activity on the server at the time of failover. To avoid longer restart time, it is recommended to avoid any long running transactions (bulk loads) during planned maintenance events. For more information, refer to the [documentation](concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).| ---## Unplanned downtime mitigation --Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in 60-120 seconds. The remote storage is automatically attached to the new database server. MySQL engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While an unplanned downtime cannot be avoided, Azure Database for MySQL mitigates the downtime by automatically performing recovery operations at both database server and storage layers without requiring human intervention. ----### Unplanned downtime: failure scenarios and service recovery -Here are some failure scenarios and how Azure Database for MySQL automatically recovers: --| **Scenario** | **Automatic recovery** | -| - | - | -| <B>Database server failure | If the database server is down because of some underlying hardware fault, active connections are dropped, and any inflight transactions are aborted. A new database server is automatically deployed, and the remote data storage is attached to the new database server. After the database recovery is complete, clients can connect to the new database server through the Gateway. <br /> <br /> Applications using the MySQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. When the application retries, the Gateway transparently redirects the connection to the newly created database server. 
| -| <B>Storage failure | Applications do not see any impact for any storage-related issues such as a disk failure or a physical block corruption. As the data is stored in 3 copies, the copy of the data is served by the surviving storage. Block corruptions are automatically corrected. If a copy of data is lost, a new copy of the data is automatically created. | --Here are some failure scenarios that require user action to recover: --| **Scenario** | **Recovery plan** | -| - | - | -| <b> Region failure | Failure of a region is a rare event. However, if you need protection from a region failure, you can configure one or more read replicas in other regions for disaster recovery (DR). (See [this article](how-to-read-replicas-portal.md) about creating and managing read replicas for details). In the event of a region-level failure, you can manually promote the read replica configured on the other region to be your production database server. | -| <b> Logical/user errors | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](concepts-backup.md) (PITR), by restoring and recovering the data until the time just before the error had occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [mysqldump](concepts-migrate-dump-restore.md), and then use [restore](concepts-migrate-dump-restore.md#restore-your-mysql-database-using-command-line) to restore those tables into your database. | ----## Summary --Azure Database for MySQL provides fast restart capability of database servers, redundant storage, and efficient routing from the Gateway. For additional data protection, you can configure backups to be geo-replicated, and also deploy one or more read replicas in other regions. With inherent high availability capabilities, Azure Database for MySQL protects your databases from most common outages, and offers an industry leading, finance-backed [99.99% of uptime SLA](https://azure.microsoft.com/support/legal/sla/mysql). All these availability and reliability capabilities enable Azure to be the ideal platform to run your mission-critical applications. --## Next steps -- Learn about [Azure regions](../../availability-zones/az-overview.md)-- Learn about [handling transient connectivity errors](concepts-connectivity.md)-- Learn how to [replicate your data with read replicas](how-to-read-replicas-portal.md) |
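For the region failure scenario in the table above, a cross-region read replica is the recommended protection, and creating one can be scripted. A hedged CLI sketch follows; all names are placeholders, and whether a separate `--location` can be specified depends on the regions and CLI version, so verify it against the read replica article linked above.

```bash
# Create a read replica of an existing server for disaster recovery.
# All names are placeholders; promoting the replica later is a separate step.
az mysql server replica create \
  --name mydemoserver-replica \
  --resource-group myresourcegroup \
  --source-server mydemoserver \
  --location westus2
```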
mysql | Concepts Infrastructure Double Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-infrastructure-double-encryption.md | - Title: Infrastructure double encryption - Azure Database for MySQL -description: Learn about using Infrastructure double encryption to add a second layer of encryption with a service managed keys. ----- Previously updated : 06/20/2022---# Azure Database for MySQL Infrastructure double encryption ----Azure Database for MySQL uses storage [encryption of data at-rest](concepts-security.md#at-rest) for data using Microsoft's managed keys. Data, including backups, are encrypted on disk and this encryption is always on and can't be disabled. The encryption uses FIPS 140-2 validated cryptographic module and an AES 256-bit cipher for the Azure storage encryption. --Infrastructure double encryption adds a second layer of encryption using service-managed keys. It uses FIPS 140-2 validated cryptographic module, but with a different encryption algorithm. This provides an additional layer of protection for your data at rest. The key used in Infrastructure double encryption is also managed by the Azure Database for MySQL service. Infrastructure double encryption is not enabled by default since the additional layer of encryption can have a performance impact. --> [!NOTE] -> Like data encryption at rest, this feature is supported only on "General Purpose storage v2 (support up to 16TB)" storage available in General Purpose and Memory Optimized pricing tiers. Refer [Storage concepts](concepts-pricing-tiers.md#storage) for more details. For other limitations, refer to the [limitation](concepts-infrastructure-double-encryption.md#limitations) section. --Infrastructure Layer encryption has the benefit of being implemented at the layer closest to the storage device or network wires. Azure Database for MySQL implements the two layers of encryption using service-managed keys. Although still technically in the service layer, it is very close to hardware that stores the data at rest. You can still optionally enable data encryption at rest using [customer managed key](concepts-data-encryption-mysql.md) for the provisioned MySQL server. --Implementation at the infrastructure layers also supports a diversity of keys. Infrastructure must be aware of different clusters of machine and networks. As such, different keys are used to minimize the blast radius of infrastructure attacks and a variety of hardware and network failures. --> [!NOTE] -> Using Infrastructure double encryption will have 5-10% impact on the throughput of your Azure Database for MySQL server due to the additional encryption process. --## Benefits --Infrastructure double encryption for Azure Database for MySQL provides the following benefits: --1. **Additional diversity of crypto implementation** - The planned move to hardware-based encryption will further diversify the implementations by providing a hardware-based implementation in addition to the software-based implementation. -2. **Implementation errors** - Two layers of encryption at infrastructure layer protects against any errors in caching or memory management in higher layers that exposes plaintext data. Additionally, the two layers also ensures against errors in the implementation of the encryption in general. --The combination of these provides strong protection against common threats and weaknesses used to attack cryptography. 
--## Supported scenarios with infrastructure double encryption --The encryption capabilities that are provided by Azure Database for MySQL can be used together. Below is a summary of the various scenarios that you can use: --| ## | Default encryption | Infrastructure double encryption | Data encryption using Customer-managed keys | -|:|::|:--:|:--:| -| 1 | *Yes* | *No* | *No* | -| 2 | *Yes* | *Yes* | *No* | -| 3 | *Yes* | *No* | *Yes* | -| 4 | *Yes* | *Yes* | *Yes* | -| | | | | --> [!Important] -> - Scenarios 2 and 4 can introduce a 5-10 percent drop in throughput for the Azure Database for MySQL server, depending on the workload type, due to the additional layer of infrastructure encryption. -> - Configuring infrastructure double encryption for Azure Database for MySQL is only allowed during server creation. Once the server is provisioned, you can't change the storage encryption. However, you can still enable data encryption using customer-managed keys for a server created with or without infrastructure double encryption. --## Limitations --For Azure Database for MySQL, the support for infrastructure double encryption has a few limitations: --* Support for this functionality is limited to **General Purpose** and **Memory Optimized** pricing tiers. -* This feature is only supported in regions and servers that support general purpose storage v2 (up to 16 TB). For the list of Azure regions supporting storage up to 16 TB, refer to the storage section in the documentation [here](concepts-pricing-tiers.md#storage) -- > [!NOTE] - > - For all new MySQL servers created in the [Azure regions](concepts-pricing-tiers.md#storage) that support general purpose storage v2, encryption with customer-managed keys is **available**. A Point In Time Restored (PITR) server or read replica doesn't qualify, though in theory it is 'new'. - > - To validate whether your provisioned server is on general purpose storage v2, go to the pricing tier blade in the portal and check the max storage size supported by your provisioned server. If you can only move the slider up to 4 TB, your server is on general purpose storage v1 and doesn't support encryption with customer-managed keys. However, the data is encrypted using service-managed keys at all times. ---## Next steps --Learn how to [set up Infrastructure double encryption for Azure database for MySQL](how-to-double-encryption.md). |
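Because infrastructure double encryption can only be chosen at server creation, a new server has to be provisioned with it enabled. The following is a rough sketch only: the `--infrastructure-encryption` flag and its accepted values are assumptions to verify against the how-to article linked in the next steps, and every other value is a placeholder.

```bash
# Provision a new General Purpose server with infrastructure double encryption.
# The --infrastructure-encryption flag is an assumption to verify; other values are placeholders.
az mysql server create \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --location eastus \
  --admin-user myadmin \
  --admin-password '<secure-password>' \
  --sku-name GP_Gen5_2 \
  --infrastructure-encryption Enabled
```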
mysql | Concepts Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-limits.md | - Title: Limitations - Azure Database for MySQL -description: This article describes limitations in Azure Database for MySQL, such as number of connection and storage engine options. ----- Previously updated : 06/20/2022---# Limitations in Azure Database for MySQL ----The following sections describe capacity, storage engine support, privilege support, data manipulation statement support, and functional limits in the database service. Also see [general limitations](https://dev.mysql.com/doc/mysql-reslimits-excerpt/5.6/en/limits.html) applicable to the MySQL database engine. --## Server parameters --> [!NOTE] -> If you are looking for min/max values for server parameters like `max_connections` and `innodb_buffer_pool_size`, this information has moved to the **[server parameters](./concepts-server-parameters.md)** article. --Azure Database for MySQL supports tuning the values of server parameters. The min and max value of some parameters (ex. `max_connections`, `join_buffer_size`, `query_cache_size`) is determined by the pricing tier and vCores of the server. Refer to [server parameters](./concepts-server-parameters.md) for more information about these limits. --Upon initial deployment, an Azure for MySQL server includes systems tables for time zone information, but these tables are not populated. The time zone tables can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench. Refer to the [Azure portal](how-to-server-parameters.md#working-with-the-time-zone-parameter) or [Azure CLI](how-to-configure-server-parameters-using-cli.md#working-with-the-time-zone-parameter) articles for how to call the stored procedure and set the global or session-level time zones. --Password plugins such as "validate_password" and "caching_sha2_password" aren't supported by the service. --## Storage engines --MySQL supports many storage engines. On Azure Database for MySQL, the following storage engines are supported and unsupported: --### Supported -- [InnoDB](https://dev.mysql.com/doc/refman/5.7/en/innodb-introduction.html)-- [MEMORY](https://dev.mysql.com/doc/refman/5.7/en/memory-storage-engine.html)--### Unsupported -- [MyISAM](https://dev.mysql.com/doc/refman/5.7/en/myisam-storage-engine.html)-- [BLACKHOLE](https://dev.mysql.com/doc/refman/5.7/en/blackhole-storage-engine.html)-- [ARCHIVE](https://dev.mysql.com/doc/refman/5.7/en/archive-storage-engine.html)-- [FEDERATED](https://dev.mysql.com/doc/refman/5.7/en/federated-storage-engine.html)--## Privileges & data manipulation support --Many server parameters and settings can inadvertently degrade server performance or negate ACID properties of the MySQL server. To maintain the service integrity and SLA at a product level, this service doesn't expose multiple roles. --The MySQL service doesn't allow direct access to the underlying file system. Some data manipulation commands aren't supported. --### Unsupported --The following are unsupported: -- DBA role: Restricted. Alternatively, you can use the administrator user (created during new server creation), allows you to perform most of DDL and DML statements. -- SUPER privilege: Similarly, [SUPER privilege](https://dev.mysql.com/doc/refman/5.7/en/privileges-provided.html#priv_super) is restricted.-- DEFINER: Requires super privileges to create and is restricted. 
If importing data using a backup, remove the `CREATE DEFINER` commands manually or by using the `--skip-definer` option when performing a [mysqlpump](https://dev.mysql.com/doc/refman/5.7/en/mysqlpump.html).-- System databases: The [mysql system database](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) is read-only and used to support various PaaS functionality. You can't make changes to the `mysql` system database.-- `SELECT ... INTO OUTFILE`: Not supported in the service.-- `LOAD_FILE(file_name)`: Not supported in the service.-- [BACKUP_ADMIN](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_backup-admin) privilege: Granting the BACKUP_ADMIN privilege is not supported for taking backups using any [utility tools](./how-to-decide-on-right-migration-tools.md).--### Supported -- `LOAD DATA INFILE` is supported, but the `[LOCAL]` parameter must be specified and directed to a UNC path (Azure storage mounted through SMB). Additionally, if you are using MySQL client version 8.0 or later, you need to include the `--local-infile=1` parameter in your connection string (see the sketch at the end of this article).---## Functional limitations --### Scale operations -- Dynamic scaling to and from the Basic pricing tiers is currently not supported.-- Decreasing the server storage size is not supported.--### Major version upgrades -- [Major version upgrade is supported for v5.6 to v5.7 upgrades only](how-to-major-version-upgrade.md). Upgrades to v8.0 are not supported yet.--### Point-in-time restore -- When using the PITR feature, the new server is created with the same configurations as the server it is based on.-- Restoring a deleted server is not supported.--### VNet service endpoints -- Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.--### Storage size -- Refer to [pricing tiers](concepts-pricing-tiers.md#storage) for the storage size limits per pricing tier.--## Current known issues -- The MySQL server instance displays the wrong server version after the connection is established. To get the correct server instance engine version, use the `select version();` command.--## Next steps -- [What's available in each service tier](concepts-pricing-tiers.md)-- [Supported MySQL database versions](concepts-supported-versions.md) |
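To illustrate the supported `LOAD DATA INFILE` path noted above, here is a hedged mysql client sketch. The server, database, table, and the mounted file path are placeholders; adjust the field and line terminators to match your file.

```bash
# Client-side load with LOCAL enabled (required for MySQL client 8.0 and later).
# Host, database, table, and the mounted file path are placeholders.
mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
  --local-infile=1 mydatabase -e \
  "LOAD DATA LOCAL INFILE '/mnt/azurefiles/data.csv'
   INTO TABLE mytable
   FIELDS TERMINATED BY ','
   LINES TERMINATED BY '\n';"
```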
mysql | Concepts Migrate Dbforge Studio For Mysql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-dbforge-studio-for-mysql.md | - Title: Use dbForge Studio for MySQL to migrate a MySQL database to Azure Database for MySQL -description: The article demonstrates how to migrate to Azure Database for MySQL by using dbForge Studio for MySQL. ----- Previously updated : 06/20/2022---# Migrate data to Azure Database for MySQL with dbForge Studio for MySQL ----Looking to move your MySQL databases to Azure Database for MySQL? Consider using the migration tools in dbForge Studio for MySQL. With it, database transfer can be configured, saved, edited, automated, and scheduled. --To complete the examples in this article, you'll need to download and install [dbForge Studio for MySQL](https://www.devart.com/dbforge/mysql/studio/). --## Connect to Azure Database for MySQL --1. In dbForge Studio for MySQL, select **New Connection** from the **Database** menu. --1. Provide a host name and sign-in credentials. --1. Select **Test Connection** to check the configuration. ---## Migrate with the Backup and Restore functionality --You can choose from many options when using dbForge Studio for MySQL to migrate databases to Azure. If you need to move the entire database, it's best to use the **Backup and Restore** functionality. --In this example, we migrate the *sakila* database from MySQL server to Azure Database for MySQL. The logic behind using the **Backup and Restore** functionality is to create a backup of the MySQL database and then restore it in Azure Database for MySQL. --### Back up the database --1. In dbForge Studio for MySQL, select **Backup Database** from the **Backup and Restore** menu. The **Database Backup Wizard** appears. --1. On the **Backup content** tab of the **Database Backup Wizard**, select database objects you want to back up. --1. On the **Options** tab, configure the backup process to fit your requirements. -- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/back-up-wizard-options.png" alt-text="Screenshot showing the options pane of the Backup wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/back-up-wizard-options.png"::: --1. Select **Next**, and then specify error processing behavior and logging options. --1. Select **Backup**. --### Restore the database --1. In dbForge Studio for MySQL, connect to Azure Database for MySQL. [Refer to the instructions](#connect-to-azure-database-for-mysql). --1. Select **Restore Database** from the **Backup and Restore** menu. The **Database Restore Wizard** appears. --1. In the **Database Restore Wizard**, select a file with a database backup. -- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/restore-step-1.png" alt-text="Screenshot showing the Restore step of the Database Restore wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/restore-step-1.png"::: --1. Select **Restore**. --1. Check the result. --## Migrate with the Copy Databases functionality --The **Copy Databases** functionality in dbForge Studio for MySQL is similar to **Backup and Restore**, except that it doesn't require two steps to migrate a database. It also lets you transfer two or more databases at once. -->[!NOTE] -> The **Copy Databases** functionality is only available in the Enterprise edition of dbForge Studio for MySQL. --In this example, we migrate the *world_x* database from MySQL server to Azure Database for MySQL. 
--To migrate a database using the Copy Databases functionality: --1. In dbForge Studio for MySQL, select **Copy Databases** from the **Database** menu. --1. On the **Copy Databases** tab, specify the source and target connection. Also select the databases to be migrated. -- We enter the Azure MySQL connection and select the *world_x* database. Select the green arrow to start the process. --1. Check the result. --You'll see that the *world_x* database has successfully appeared in Azure MySQL. ---## Migrate a database with schema and data comparison --You can choose from many options when using dbForge Studio for MySQL to migrate databases, schemas, and/or data to Azure. If you need to move selective tables from a MySQL database to Azure, it's best to use the **Schema Comparison** and the **Data Comparison** functionality. --In this example, we migrate the *world* database from MySQL server to Azure Database for MySQL. --The logic behind using the **Backup and Restore** functionality is to create a backup of the MySQL database and then restore it in Azure Database for MySQL. --The logic behind this approach is to create an empty database in Azure Database for MySQL and synchronize it with the source MySQL database. We first use the **Schema Comparison** tool, and next we use the **Data Comparison** functionality. These steps ensure that the MySQL schemas and data are accurately moved to Azure. --To complete this exercise, you'll first need to [connect to Azure Database for MySQL](#connect-to-azure-database-for-mysql) and create an empty database. --### Schema synchronization --1. On the **Comparison** menu, select **New Schema Comparison**. The **New Schema Comparison Wizard** appears. --1. Choose your source and target, and then specify the schema comparison options. Select **Compare**. --1. In the comparison results grid that appears, select objects for synchronization. Select the green arrow button to open the **Schema Synchronization Wizard**. --1. Walk through the steps of the wizard to configure synchronization. Select **Synchronize** to deploy the changes. -- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/schema-sync-wizard.png" alt-text="Screenshot showing the schema synchronization wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/schema-sync-wizard.png"::: --### Data Comparison --1. On the **Comparison** menu, select **New Data Comparison**. The **New Data Comparison Wizard** appears. --1. Choose your source and target, and then specify the data comparison options. Change mappings if necessary, and then select **Compare**. --1. In the comparison results grid that appears, select objects for synchronization. Select the green arrow button to open the **Data Synchronization Wizard**. -- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/data-comp-result.png" alt-text="Screenshot showing the results of the data comparison." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/data-comp-result.png"::: --1. Walk through the steps of the wizard configuring synchronization. Select **Synchronize** to deploy the changes. --1. Check the result. -- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/data-sync-result.png" alt-text="Screenshot showing the results of the Data Synchronization wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/data-sync-result.png"::: --## Next steps -- [MySQL overview](overview.md) |
mysql | Concepts Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-monitoring.md | - Title: Monitoring - Azure Database for MySQL -description: This article describes the metrics for monitoring and alerting for Azure Database for MySQL, including CPU, storage, and connection statistics. ------ Previously updated : 06/20/2022---# Monitoring in Azure Database for MySQL ----Monitoring data about your servers helps you troubleshoot and optimize for your workload. Azure Database for MySQL provides various metrics that give insight into the behavior of your server. --## Metrics --All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics. For step by step guidance, see [How to set up alerts](how-to-alert-on-metric.md). Other tasks include setting up automated actions, performing advanced analytics, and archiving history. For more information, see the [Azure Metrics Overview](../../azure-monitor/data-platform.md). --### List of metrics --These metrics are available for Azure Database for MySQL: --|Metric|Metric Display Name|Unit|Description| -||||| -|cpu_percent|CPU percent|Percent|The percentage of CPU in use.| -|memory_percent|Memory percent|Percent|The percentage of memory in use.| -|io_consumption_percent|IO percent|Percent|The percentage of IO in use. (Not applicable for Basic tier servers)| -|storage_percent|Storage percentage|Percent|The percentage of storage used out of the server's maximum.| -|storage_used|Storage used|Bytes|The amount of storage in use. The storage used by the service may include the database files, transaction logs, and the server logs.| -|serverlog_storage_percent|Server Log storage percent|Percent|The percentage of server log storage used out of the server's maximum server log storage.| -|serverlog_storage_usage|Server Log storage used|Bytes|The amount of server log storage in use.| -|serverlog_storage_limit|Server Log storage limit|Bytes|The maximum server log storage for this server.| -|storage_limit|Storage limit|Bytes|The maximum storage for this server.| -|active_connections|Active Connections|Count|The number of active connections to the server.| -|connections_failed|Failed Connections|Count|The number of failed connections to the server.| -|seconds_behind_master|Replication lag in seconds|Count|The number of seconds the replica server is lagging against the source server. (Not applicable for Basic tier servers)| -|network_bytes_egress|Network Out|Bytes|Network Out across active connections.| -|network_bytes_ingress|Network In|Bytes|Network In across active connections.| -|backup_storage_used|Backup Storage Used|Bytes|The amount of backup storage used. This metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained in the [concepts article](concepts-backup.md). For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.| --## Server logs --You can enable slow query and audit logging on your server. These logs are also available through Azure Diagnostic Logs in Azure Monitor logs, Event Hubs, and Storage Account. To learn more about logging, visit the [audit logs](concepts-audit-logs.md) and [slow query logs](concepts-server-logs.md) articles. 
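Building on the metrics and alerting guidance above, here is a hedged CLI sketch of a CPU alert; the resource names, threshold, and action group are placeholders, and the action group must already exist.

```bash
# Alert when average CPU exceeds 80 percent. All names and IDs are placeholders.
SERVER_ID=$(az mysql server show --resource-group myresourcegroup \
  --name mydemoserver --query id --output tsv)

az monitor metrics alert create \
  --name mysql-cpu-high \
  --resource-group myresourcegroup \
  --scopes $SERVER_ID \
  --condition "avg cpu_percent > 80" \
  --description "CPU above 80 percent" \
  --action "<action-group-id-or-name>"
```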
--## Query Store --[Query Store](concepts-query-store.md) is a feature that keeps track of query performance over time including query runtime statistics and wait events. The feature persists query runtime performance information in the **mysql** schema. You can control the collection and storage of data via various configuration knobs. --## Query Performance Insight --[Query Performance Insight](concepts-query-performance-insight.md) works in conjunction with Query Store to provide visualizations accessible from the Azure portal. These charts enable you to identify key queries that impact performance. Query Performance Insight is accessible in the **Intelligent Performance** section of your Azure Database for MySQL server's portal page. --## Performance Recommendations --The [Performance Recommendations](concepts-performance-recommendations.md) feature identifies opportunities to improve workload performance. Performance Recommendations provides you with recommendations for creating new indexes that have the potential to improve the performance of your workloads. To produce index recommendations, the feature takes into consideration various database characteristics, including its schema and the workload as reported by Query Store. After implementing any performance recommendation, customers should test performance to evaluate the impact of those changes. --## Planned maintenance notification --[Planned maintenance notifications](./concepts-planned-maintenance-notification.md) allow you to receive alerts for upcoming planned maintenance to your Azure Database for MySQL. These notifications are integrated with [Service Health's](../../service-health/overview.md) planned maintenance and allow you to view all scheduled maintenance for your subscriptions in one place. It also helps to scale the notification to the right audiences for different resource groups, as you may have different contacts responsible for different resources. You will receive the notification about the upcoming maintenance 72 hours before the event. --Learn more about how to set up notifications in the [planned maintenance notifications](./concepts-planned-maintenance-notification.md) document. --## Next steps --- See [How to set up alerts](how-to-alert-on-metric.md) for guidance on creating an alert on a metric.-- For more information on how to access and export metrics using the Azure portal, REST API, or CLI, see the [Azure Metrics Overview](../../azure-monitor/data-platform.md).-- Read our blog on [best practices for monitoring your server](https://azure.microsoft.com/blog/best-practices-for-alerting-on-metrics-with-azure-database-for-mysql-monitoring/).-- Learn more about [planned maintenance notifications](./concepts-planned-maintenance-notification.md) in Azure Database for MySQL - Single Server |
mysql | Concepts Performance Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-performance-recommendations.md | - Title: Performance recommendations - Azure Database for MySQL -description: This article describes the Performance Recommendation feature in Azure Database for MySQL ----- Previously updated : 06/20/2022---# Performance Recommendations in Azure Database for MySQL ----**Applies to:** Azure Database for MySQL 5.7, 8.0 --The Performance Recommendations feature analyzes your databases to create customized suggestions for improved performance. To produce the recommendations, the analysis looks at various database characteristics including schema. Enable [Query Store](concepts-query-store.md) on your server to fully utilize the Performance Recommendations feature. If performance schema is OFF, turning on Query Store enables performance_schema and a subset of performance schema instruments required for the feature. After implementing any performance recommendation, you should test performance to evaluate the impact of those changes. --## Permissions --**Owner** or **Contributor** permissions required to run analysis using the Performance Recommendations feature. --## Performance recommendations --The [Performance Recommendations](concepts-performance-recommendations.md) feature analyzes workloads across your server to identify indexes with the potential to improve performance. --Open **Performance Recommendations** from the **Intelligent Performance** section of the menu bar on the Azure portal page for your MySQL server. ---Select **Analyze** and choose a database, which will begin the analysis. Depending on your workload, the analysis may take several minutes to complete. Once the analysis is done, there will be a notification in the portal. Analysis performs a deep examination of your database. We recommend you perform analysis during off-peak periods. --The **Recommendations** window will show a list of recommendations if any were found and the related query ID that generated this recommendation. With the query ID, you can use the [mysql.query_store](concepts-query-store.md#mysqlquery_store) view to learn more about the query. ---Recommendations are not automatically applied. To apply the recommendation, copy the query text and run it from your client of choice. Remember to test and monitor to evaluate the recommendation. --## Recommendation types --### Index recommendations --*Create Index* recommendations suggest new indexes to speed up the most frequently run or time-consuming queries in the workload. This recommendation type requires [Query Store](concepts-query-store.md) to be enabled. Query Store collects query information and provides the detailed query runtime and frequency statistics that the analysis uses to make the recommendation. --### Query recommendations --Query recommendations suggest optimizations and rewrites for queries in the workload. By identifying MySQL query anti-patterns and fixing them syntactically, the performance of time-consuming queries can be improved. This recommendation type requires Query Store to be enabled. Query Store collects query information and provides the detailed query runtime and frequency statistics that the analysis uses to make the recommendation. --## Next steps -- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for MySQL. |
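Because recommendations are keyed to a query ID in Query Store, you can inspect the underlying query from any client. A minimal sketch follows; the server, credentials, and the query ID are placeholders, and the exact columns exposed by the `mysql.query_store` view are documented in the Query Store article linked above.

```bash
# Look up the Query Store entry for a recommendation's query ID (placeholder values).
mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p -e \
  "SELECT * FROM mysql.query_store WHERE query_id = 12345 LIMIT 5;"
```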
mysql | Concepts Planned Maintenance Notification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-planned-maintenance-notification.md | - Title: Planned maintenance notification - Azure Database for MySQL - Single Server -description: This article describes the Planned maintenance notification feature in Azure Database for MySQL - Single Server ----- Previously updated : 06/20/2022---# Planned maintenance notification in Azure Database for MySQL - Single Server ----Learn how to prepare for planned maintenance events on your Azure Database for MySQL. --## What is planned maintenance? --The Azure Database for MySQL service performs automated patching of the underlying hardware, OS, and database engine. The patch includes new service features, security, and software updates. For the MySQL engine, minor version upgrades are automatic and included as part of the patching cycle. No user action or configuration settings are required for patching. The patch is tested extensively and rolled out using safe deployment practices. --Planned maintenance is a maintenance window during which these service updates are deployed to servers in a given Azure region. During planned maintenance, a notification event is created to inform customers when the service update is deployed in the Azure region hosting their servers. The minimum duration between two planned maintenance windows is 30 days. You receive a notification of the next maintenance window 72 hours in advance. --## Planned maintenance - duration and customer impact --Planned maintenance for a given Azure region typically runs for 15 hours. The window also includes buffer time to execute a rollback plan if necessary. During planned maintenance, there can be database server restarts or failovers, which might lead to brief unavailability of the database servers for end users. Azure Database for MySQL servers run in containers, so database server restarts are typically quick and expected to complete in 60-120 seconds. The entire planned maintenance event, including each server restart, is carefully monitored by the engineering team. The server failover time depends on the database recovery time, which can delay the database coming back online if there is heavy transactional activity on the server at the time of failover. To avoid a longer restart time, avoid long-running transactions (bulk loads) during planned maintenance events. --In summary, while the planned maintenance event runs for 15 hours, the individual server impact generally lasts about 60 seconds, depending on the transactional activity on the server. A notification is sent 72 calendar hours before planned maintenance starts and another one while maintenance is in progress for a given region. --## How can I get notified of planned maintenance? --You can use the planned maintenance notifications feature to receive alerts for an upcoming planned maintenance event. You receive the notification about the upcoming maintenance 72 calendar hours before the event and another one while maintenance is in progress for a given region. --### Planned maintenance notification --> [!IMPORTANT] -> Planned maintenance notifications are currently available in preview in all regions **except** West Central US. --**Planned maintenance notifications** allow you to receive alerts for upcoming planned maintenance events for your Azure Database for MySQL. 
These notifications are integrated with [Service Health's](../../service-health/overview.md) planned maintenance and allow you to view all scheduled maintenance for your subscriptions in one place. It also helps to scale the notification to the right audiences for different resource groups, as you may have different contacts responsible for different resources. You will receive the notification about the upcoming maintenance 72 calendar hours before the event. --We will make every attempt to provide **Planned maintenance notification** 72 hours notice for all events. However, in cases of critical or security patches, notifications might be sent closer to the event or be omitted. --You can either check the planned maintenance notification on Azure portal or configure alerts to receive notification. --### Check planned maintenance notification from Azure portal --1. In the [Azure portal](https://portal.azure.com), select **Service Health**. -2. Select **Planned Maintenance** tab -3. Select **Subscription**, **Region**, and **Service** for which you want to check the planned maintenance notification. - -### To receive planned maintenance notification --1. In the [portal](https://portal.azure.com), select **Service Health**. -2. In the **Alerts** section, select **Health alerts**. -3. Select **+ Add service health alert** and fill in the fields. -4. Fill out the required fields. -5. Choose the **Event type**, select **Planned maintenance** or **Select all** -6. In **Action groups** define how you would like to receive the alert (get an email, trigger a logic app etc.) -7. Ensure Enable rule upon creation is set to Yes. -8. Select **Create alert rule** to complete your alert --For detailed steps on how to create **service health alerts**, refer to [Create activity log alerts on service notifications](../../service-health/alerts-activity-log-service-notifications-portal.md). --## Can I cancel or postpone planned maintenance? --Maintenance is needed to keep your server secure, stable, and up-to-date. The planned maintenance event cannot be canceled or postponed. Once the notification is sent to a given Azure region, the patching schedule changes cannot be made for any individual server in that region. The patch is rolled out for entire region at once. Azure Database for MySQL - Single Server service is designed for cloud native application that doesn't require granular control or customization of the service. If you are looking to have ability to schedule maintenance for your servers, we recommend you consider [Flexible servers](../flexible-server/overview.md). --## Are all the Azure regions patched at the same time? --No, all the Azure regions are patched during the deployment wise window timings. The deployment wise window generally stretches from 5 PM - 8 AM local time next day, in a given Azure region. Geo-paired Azure regions are patched on different days. For high availability and business continuity of database servers, leveraging [cross region read replicas](./concepts-read-replicas.md#cross-region-replication) is recommended. --## Retry logic --A transient error, also known as a transient fault, is an error that will resolve itself. [Transient errors](./concepts-connectivity.md#transient-errors) can occur during maintenance. Most of these events are automatically mitigated by the system in less than 60 seconds. Transient errors should be handled using [retry logic](./concepts-connectivity.md#handling-transient-errors). 
---## Next steps --- See [How to set up alerts](how-to-alert-on-metric.md) for guidance on creating an alert on a metric.-- [Troubleshoot connection issues to Azure Database for MySQL - Single Server](how-to-troubleshoot-common-connection-issues.md)-- [Handle transient errors and connect efficiently to Azure Database for MySQL - Single Server](concepts-connectivity.md) |
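The portal steps above can also be automated. The following is a minimal Azure CLI sketch that creates an activity log alert for Service Health events (which include planned maintenance); the subscription ID, resource group, and action group are placeholders, and the action group is assumed to already exist.

```bash
# Minimal sketch: raise an alert for Service Health events, including planned maintenance.
# The subscription ID, resource group, and action group below are placeholders.
az monitor activity-log alert create \
  --name mysql-planned-maintenance-alert \
  --resource-group myresourcegroup \
  --scope "/subscriptions/<subscription-id>" \
  --condition category=ServiceHealth \
  --action-group "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/microsoft.insights/actionGroups/MyActionGroup" \
  --description "Notify on planned maintenance and other service health events"
```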
mysql | Concepts Pricing Tiers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-pricing-tiers.md | - Title: Azure Database for MySQL - Single Server service tiers -description: Learn about the various service tiers for Azure Database for MySQL including compute generations, storage types, storage size, vCores, memory, and backup retention periods. ----- Previously updated : 06/20/2022---# Azure Database for MySQL - Single Server service tiers ----You can create an Azure Database for MySQL server in one of three different service tiers: Basic, General Purpose, and Memory Optimized. The service tiers are differentiated by the amount of compute in vCores that can be provisioned, memory per vCore, and the storage technology used to store the data. All resources are provisioned at the MySQL server level. A server can have one or many databases. --| Attribute | **Basic** | **General Purpose** | **Memory Optimized** | -|:|:-|:--|:| -| Compute generation | Gen 4, Gen 5 | Gen 4, Gen 5 | Gen 5 | -| vCores | 1, 2 | 2, 4, 8, 16, 32, 64 | 2, 4, 8, 16, 32 | -| Memory per vCore | 2 GB | 5 GB | 10 GB | -| Storage size | 5 GB to 1 TB | 5 GB to 16 TB | 5 GB to 16 TB | -| Database backup retention period | 7 to 35 days | 7 to 35 days | 7 to 35 days | --To choose a pricing tier, use the following table as a starting point. --| Service tier | Target workloads | -|:-|:--| -| Basic | Workloads that require light compute and I/O performance. Examples include servers used for development or testing, or small-scale, infrequently used applications. | -| General Purpose | Most business workloads that require balanced compute and memory with scalable I/O throughput. Examples include servers for hosting web and mobile apps and other enterprise applications.| -| Memory Optimized | High-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency. Examples include servers for processing real-time data and high-performance transactional or analytical apps.| --> [!NOTE] -> Dynamic scaling to and from the Basic service tier is currently not supported. Basic tier servers can't be scaled up to the General Purpose or Memory Optimized tiers. --After you create a General Purpose or Memory Optimized server, the number of vCores, hardware generation, and pricing tier can be changed up or down within seconds. You can also independently increase the amount of storage and adjust the backup retention period up or down with no application downtime. You can't change the backup storage type after a server is created. For more information, see the [Scale resources](#scale-resources) section. --## Compute generations and vCores --Compute resources are provided as vCores, which represent the logical CPU of the underlying hardware. China East 1, China North 1, US DoD Central, and US DoD East utilize Gen 4 logical CPUs that are based on Intel E5-2673 v3 (Haswell) 2.4-GHz processors. All other regions utilize Gen 5 logical CPUs that are based on Intel E5-2673 v4 (Broadwell) 2.3-GHz processors. --## Storage --The storage you provision is the amount of storage capacity available to your Azure Database for MySQL server. The storage is used for the database files, temporary files, transaction logs, and the MySQL server logs. The total amount of storage you provision also defines the I/O capacity available to your server. --Azure Database for MySQL - Single Server supports the following backend storage options for servers. 
--| Storage type | Basic | General purpose v1 | General purpose v2 | -|:|:-|:--|:| -| Storage size | 5 GB to 1 TB | 5 GB to 4 TB | 5 GB to 16 TB | -| Storage increment size | 1 GB | 1 GB | 1 GB | -| IOPS | Variable | 3 IOPS/GB<br/>Min 100 IOPS<br/>Max 6,000 IOPS | 3 IOPS/GB<br/>Min 100 IOPS<br/>Max 20,000 IOPS | -->[!NOTE] -> Basic storage does not provide an IOPS guarantee. In General Purpose storage, the IOPS scale with the provisioned storage size in a 3:1 ratio. --### Basic storage -Basic storage is the backend storage supporting Basic pricing tier servers. Basic storage uses Azure standard storage in the backend, where provisioned IOPS are not guaranteed and latency is variable. The Basic tier is best suited for workloads that require light compute and I/O performance at a low cost, such as servers used for development or small-scale, infrequently used applications. --### General purpose storage -General purpose storage is the backend storage supporting General Purpose and Memory Optimized tier servers. In General Purpose storage, the IOPS scale with the provisioned storage size in a 3:1 ratio. There are two generations of general purpose storage as described below: --#### General purpose storage v1 (supports up to 4 TB) -General purpose storage v1 is based on the legacy storage technology, which can support up to 4 TB of storage and 6,000 IOPS per server. General purpose storage v1 is optimized to use memory from the compute nodes running the MySQL engine for local caching and backups. The backup process on general purpose storage v1 reads the data and log files in the memory of the compute nodes and copies them to the target backup storage for retention of up to 35 days. As a result, memory and IO consumption of storage during backups is relatively higher. --All Azure regions support general purpose storage v1. --For a General Purpose or Memory Optimized server on general purpose storage v1, we recommend that you: --* Plan for a compute SKU tier that accounts for 10-30% excess memory for storage caching and backup buffers -* Provision 10% higher IOPS than required by the database workload to account for backup IO -* Alternatively, migrate to general purpose storage v2 (described below), which supports up to 16 TB of storage, if the underlying storage infrastructure is available in your preferred Azure region (see the region list below). --#### General purpose storage v2 (supports up to 16 TB of storage) -General purpose storage v2 is based on the latest storage infrastructure, which can support up to 16 TB of storage and 20,000 IOPS. In a subset of Azure regions where the infrastructure is available, all newly provisioned servers land on general purpose storage v2 by default. General purpose storage v2 does not consume any memory from the MySQL compute node and provides more predictable IO latencies compared to general purpose storage v1. Backups on general purpose storage v2 servers are snapshot-based, with no additional IO overhead. On general purpose storage v2, MySQL server performance is expected to be higher than on general purpose storage v1 for the same provisioned storage and IOPS. There is no additional cost for general purpose storage that supports up to 16 TB of storage. For assistance with migration to 16-TB storage, please open a support ticket from the Azure portal. 
--General purpose storage v2 is supported in the following Azure regions: --| Region | General purpose storage v2 availability | -| | | -| Australia East | :heavy_check_mark: | -| Australia South East | :heavy_check_mark: | -| Brazil South | :heavy_check_mark: | -| Canada Central | :heavy_check_mark: | -| Canada East | :heavy_check_mark: | -| Central US | :heavy_check_mark: | -| East US | :heavy_check_mark: | -| East US 2 | :heavy_check_mark: | -| East Asia | :heavy_check_mark: | -| Japan East | :heavy_check_mark: | -| Japan West | :heavy_check_mark: | -| Korea Central | :heavy_check_mark: | -| Korea South | :heavy_check_mark: | -| North Europe | :heavy_check_mark: | -| North Central US | :heavy_check_mark: | -| South Central US | :heavy_check_mark: | -| Southeast Asia | :heavy_check_mark: | -| UK South | :heavy_check_mark: | -| UK West | :heavy_check_mark: | -| West Central US | :heavy_check_mark: | -| West US | :heavy_check_mark: | -| West US 2 | :heavy_check_mark: | -| West Europe | :heavy_check_mark: | -| Central India | :heavy_check_mark: | -| France Central* | :heavy_check_mark: | -| UAE North* | :heavy_check_mark: | -| South Africa North* | :heavy_check_mark: | --> [!NOTE] -> *Regions where Azure Database for MySQL has General purpose storage v2 in Public Preview <br /> -> *For these Azure regions, you will have an option to create server in both General purpose storage v1 and v2. For the servers created with General purpose storage v2 in public preview, following are the limitations, <br /> -> * Geo-Redundant Backup will not be supported<br /> -> * The replica server should be in the regions which support General purpose storage v2. <br /> --### How can I determine which storage type my server is running on? --You can find the storage type of your server by going to **Settings** > **Compute + storage** page -* If the server is provisioned using Basic SKU, the storage type is Basic storage. -* If the server is provisioned using General Purpose or Memory Optimized SKU, the storage type is General Purpose storage - * If the maximum storage that can be provisioned on your server is up to 4-TB, the storage type is General Purpose storage v1. - * If the maximum storage that can be provisioned on your server is up to 16-TB, the storage type is General Purpose storage v2. --### Can I move from general purpose storage v1 to general purpose storage v2? if yes, how and is there any additional cost? -Yes, migration to general purpose storage v2 from v1 is supported if the underlying storage infrastructure is available in the Azure region of the source server. The migration and v2 storage is available at no additional cost. --### Can I grow storage size after server is provisioned? -You can add additional storage capacity during and after the creation of the server, and allow the system to grow storage automatically based on the storage consumption of your workload. --> [!IMPORTANT] -> Storage can only be scaled up, not down. --### Monitoring IO consumption -You can monitor your I/O consumption in the Azure portal or by using Azure CLI commands. The relevant metrics to monitor are [storage limit, storage percentage, storage used, and IO percent](concepts-monitoring.md).The monitoring metrics for the MySQL server with general purpose storage v1 reports the memory and IO consumed by the MySQL engine but may not capture the memory and IO consumption of the storage layer which is a limitation. 
--### Reaching the storage limit --Servers with less than or equal to 100 GB provisioned storage are marked read-only if the free storage is less than 5% of the provisioned storage size. Servers with more than 100 GB provisioned storage are marked read-only when the free storage is less than 5 GB. --For example, if you have provisioned 110 GB of storage, and the actual utilization goes over 105 GB, the server is marked read-only. Alternatively, if you have provisioned 5 GB of storage, the server is marked read-only when the free storage reaches less than 256 MB. --While the service attempts to make the server read-only, all new write transaction requests are blocked and existing active transactions will continue to execute. When the server is set to read-only, all subsequent write operations and transaction commits fail. Read queries will continue to work uninterrupted. After you increase the provisioned storage, the server will be ready to accept write transactions again. --We recommend that you turn on storage auto-grow or set up an alert to notify you when your server storage is approaching the threshold, so you can avoid getting into the read-only state. For more information, see the documentation on [how to set up an alert](how-to-alert-on-metric.md). --### Storage auto-grow --Storage auto-grow prevents your server from running out of storage and becoming read-only. If storage auto-grow is enabled, the storage automatically grows without impacting the workload. For servers with less than or equal to 100 GB provisioned storage, the provisioned storage size is increased by 5 GB when the free storage is below 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. Maximum storage limits as specified above apply. --For example, if you have provisioned 1000 GB of storage, and the actual utilization goes over 990 GB, the server storage size is increased to 1050 GB. Alternatively, if you have provisioned 10 GB of storage, the storage size is increased to 15 GB when less than 1 GB of storage is free. --Remember that storage can only be scaled up, not down. --## Backup storage --Azure Database for MySQL provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any backup storage you use in excess of this amount is charged in GB per month. For example, if you provision a server with 250 GB of storage, you'll have 250 GB of additional storage available for server backups at no charge. Storage for backups in excess of the 250 GB is charged per the [pricing model](https://azure.microsoft.com/pricing/details/mysql/). To understand the factors that influence backup storage usage and how to monitor and control backup storage cost, refer to the [backup documentation](concepts-backup.md). --## Scale resources --After you create your server, you can independently change the vCores, the hardware generation, the pricing tier (except to and from Basic), the amount of storage, and the backup retention period. You can't change the backup storage type after a server is created. The number of vCores can be scaled up or down. The backup retention period can be scaled up or down from 7 to 35 days. The storage size can only be increased. You can scale resources through either the Azure portal or the Azure CLI. 
For an example of scaling by using Azure CLI, see [Monitor and scale an Azure Database for MySQL server by using Azure CLI](../scripts/sample-scale-server.md). --When you change the number of vCores, the hardware generation, or the pricing tier, a copy of the original server is created with the new compute allocation. After the new server is up and running, connections are switched over to the new server. During the moment when the system switches over to the new server, no new connections can be established, and all uncommitted transactions are rolled back. This downtime during scaling can be around 60-120 seconds. The downtime during scaling is dependent on database recovery time, which can cause the database to come online longer if you have heavy transactional activity on the server at the time of scaling operation. To avoid longer restart time, it is recommended to perform scaling operations during periods of low transactional activity on the server. --Scaling storage and changing the backup retention period are true online operations. There is no downtime, and your application isn't affected. As IOPS scale with the size of the provisioned storage, you can increase the IOPS available to your server by scaling up storage. --## Pricing --For the most up-to-date pricing information, see the service [pricing page](https://azure.microsoft.com/pricing/details/mysql/). To see the cost for the configuration you want, the [Azure portal](https://portal.azure.com/#create/Microsoft.MySQLServer) shows the monthly cost on the **Pricing tier** tab based on the options you select. If you don't have an Azure subscription, you can use the Azure pricing calculator to get an estimated price. On the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) website, select **Add items**, expand the **Databases** category, and choose **Azure Database for MySQL** to customize the options. --## Next steps --- Learn how to [create a MySQL server in the portal](how-to-create-manage-server-portal.md).-- Learn about [service limits](concepts-limits.md).-- Learn how to [scale out with read replicas](how-to-read-replicas-portal.md). |
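As a companion to the scaling guidance above, the following is a minimal Azure CLI sketch that scales compute, grows storage, and extends backup retention on an existing General Purpose server; the resource group and server names are placeholders, and `--storage-size` is specified in megabytes.

```bash
# Minimal sketch: scale to 8 vCores on Gen 5, grow storage to 200 GB (value in MB),
# and extend backup retention to 14 days. Names are placeholders; storage can only grow.
az mysql server update \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --sku-name GP_Gen5_8 \
  --storage-size 204800 \
  --backup-retention 14
```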
mysql | Concepts Query Performance Insight | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-query-performance-insight.md | - Title: Query Performance Insight - Azure Database for MySQL -description: This article describes the Query Performance Insight feature in Azure Database for MySQL ----- Previously updated : 06/20/2022---# Query Performance Insight in Azure Database for MySQL ----**Applies to:** Azure Database for MySQL 5.7, 8.0 --Query Performance Insight helps you to quickly identify what your longest running queries are, how they change over time, and what waits are affecting them. --## Common scenarios --### Long running queries --- Identifying the longest running queries in the past X hours-- Identifying the top N queries that are waiting on resources- -### Wait statistics --- Understanding the nature of waits for a query-- Understanding trends for resource waits and where resource contention exists--## Prerequisites --For Query Performance Insight to function, data must exist in the [Query Store](concepts-query-store.md). --## Viewing performance insights --The [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal surfaces visualizations of key information from Query Store. --In the portal page of your Azure Database for MySQL server, select **Query Performance Insight** under the **Intelligent Performance** section of the menu bar. --### Long running queries --The **Long running queries** tab shows the top 5 Query IDs by average duration per execution, aggregated in 15-minute intervals. You can view more Query IDs by selecting from the **Number of Queries** drop-down. The chart colors may change for a specific Query ID when you do this. --> [!NOTE] -> Displaying the Query Text is no longer supported and will show as empty. The query text is removed to avoid unauthorized access to the query text or underlying schema, which can pose a security risk. --The recommended steps to view the query text are as follows: - 1. Identify the query_id of the top queries from the Query Performance Insight blade in the Azure portal. -1. Log in to your Azure Database for MySQL server from MySQL Workbench, the mysql.exe client, or your preferred query tool and execute the following queries. - -```sql - SELECT * FROM mysql.query_store where query_id = '<insert query id from Query performance insight blade in Azure portal>'; -- for queries in Query Store - SELECT * FROM mysql.query_store_wait_stats where query_id = '<insert query id from Query performance insight blade in Azure portal>'; -- for wait statistics -``` --You can click and drag in the chart to narrow down to a specific time window. Alternatively, use the zoom in and out icons to view a smaller or larger time period, respectively. ---### Wait statistics --> [!NOTE] -> Wait statistics are meant for troubleshooting query performance issues. We recommend turning them on only for troubleshooting purposes. <br>If you receive the error message in the Azure portal "*The issue encountered for 'Microsoft.DBforMySQL'; cannot fulfill the request. If this issue continues or is unexpected, please contact support with this information.*" while viewing wait statistics, use a smaller time period. --Wait statistics provides a view of the wait events that occur during the execution of a specific query. Learn more about the wait event types in the [MySQL engine documentation](https://go.microsoft.com/fwlink/?linkid=2098206). 
--Select the **Wait Statistics** tab to view the corresponding visualizations of waits on the server. --Queries displayed in the wait statistics view are grouped by the queries that exhibit the largest waits during the specified time interval. --> [!NOTE] -> Displaying the Query Text is no longer supported and will show as empty. The query text is removed to avoid unauthorized access to the query text or underlying schema, which can pose a security risk. --The recommended steps to view the query text are as follows: - 1. Identify the query_id of the top queries from the Query Performance Insight blade in the Azure portal. -1. Log in to your Azure Database for MySQL server from MySQL Workbench, the mysql.exe client, or your preferred query tool and execute the following queries. - -```sql - SELECT * FROM mysql.query_store where query_id = '<insert query id from Query performance insight blade in Azure portal>'; -- for queries in Query Store - SELECT * FROM mysql.query_store_wait_stats where query_id = '<insert query id from Query performance insight blade in Azure portal>'; -- for wait statistics
``` ---## Next steps --- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for MySQL. |
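Because the article recommends enabling wait statistics only while troubleshooting, one workable pattern is to toggle the wait sampling parameter from the Azure CLI and turn it back off afterwards. The following is a minimal sketch with placeholder resource names.

```bash
# Turn wait statistics on for a troubleshooting session, then back off.
# myresourcegroup and mydemoserver are placeholder names.
az mysql server configuration set \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name query_store_wait_sampling_capture_mode \
  --value ALL

# ...investigate waits in Query Performance Insight or mysql.query_store_wait_stats...

az mysql server configuration set \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name query_store_wait_sampling_capture_mode \
  --value NONE
```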
mysql | Concepts Query Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-query-store.md | - Title: Query Store - Azure Database for MySQL -description: Learn about the Query Store feature in Azure Database for MySQL to help you track performance over time. ----- Previously updated : 06/20/2022---# Monitor Azure Database for MySQL performance with Query Store ----**Applies to:** Azure Database for MySQL 5.7, 8.0 --The Query Store feature in Azure Database for MySQL provides a way to track query performance over time. Query Store simplifies performance troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Query Store automatically captures a history of queries and runtime statistics, and it retains them for your review. It separates data by time windows so that you can see database usage patterns. Data for all users, databases, and queries is stored in the **mysql** schema database in the Azure Database for MySQL instance. --## Common scenarios for using Query Store --Query store can be used in a number of scenarios, including the following: --- Detecting regressed queries-- Determining the number of times a query was executed in a given time window-- Comparing the average execution time of a query across time windows to see large deltas--## Enabling Query Store --Query Store is an opt-in feature, so it isn't active by default on a server. The query store is enabled or disabled globally for all the databases on a given server and cannot be turned on or off per database. --### Enable Query Store using the Azure portal --1. Sign in to the Azure portal and select your Azure Database for MySQL server. -1. Select **Server Parameters** in the **Settings** section of the menu. -1. Search for the query_store_capture_mode parameter. -1. Set the value to ALL and **Save**. --To enable wait statistics in your Query Store: --1. Search for the query_store_wait_sampling_capture_mode parameter. -1. Set the value to ALL and **Save**. --Allow up to 20 minutes for the first batch of data to persist in the mysql database. --## Information in Query Store --Query Store has two stores: --- A runtime statistics store for persisting the query execution statistics information.-- A wait statistics store for persisting wait statistics information.--To minimize space usage, the runtime execution statistics in the runtime statistics store are aggregated over a fixed, configurable time window. The information in these stores is visible by querying the query store views. --The following query returns information about queries in Query Store: --```sql -SELECT * FROM mysql.query_store; -``` --Or this query for wait statistics: --```sql -SELECT * FROM mysql.query_store_wait_stats; -``` --## Finding wait queries --> [!NOTE] -> Wait statistics should not be enabled during peak workload hours or be turned on indefinitely for sensitive workloads. <br>For workloads running with high CPU utilization or on servers configured with lower vCores, use caution when enabling wait statistics. It should not be turned on indefinitely. --Wait event types combine different wait events into buckets by similarity. Query Store provides the wait event type, specific wait event name, and the query in question. Being able to correlate this wait information with the query runtime statistics means you can gain a deeper understanding of what contributes to query performance characteristics. 
--Here are some examples of how you can gain more insights into your workload using the wait statistics in Query Store: --| **Observation** | **Action** | -||| -|High Lock waits | Check the query texts for the affected queries and identify the target entities. Look in Query Store for other queries modifying the same entity, which is executed frequently and/or have high duration. After identifying these queries, consider changing the application logic to improve concurrency, or use a less restrictive isolation level. | -|High Buffer IO waits | Find the queries with a high number of physical reads in Query Store. If they match the queries with high IO waits, consider introducing an index on the underlying entity, to do seeks instead of scans. This would minimize the IO overhead of the queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations for this server that would optimize the queries. | -|High Memory waits | Find the top memory consuming queries in Query Store. These queries are probably delaying further progress of the affected queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations that would optimize these queries. | --## Configuration options --When Query Store is enabled it saves data in 15-minute aggregation windows, up to 500 distinct queries per window. --The following options are available for configuring Query Store parameters. --| **Parameter** | **Description** | **Default** | **Range** | -||||| -| query_store_capture_mode | Turn the query store feature ON/OFF based on the value. Note: If performance_schema is OFF, turning on query_store_capture_mode will turn on performance_schema and a subset of performance schema instruments required for this feature. | ALL | NONE, ALL | -| query_store_capture_interval | The query store capture interval in minutes. Allows specifying the interval in which the query metrics are aggregated | 15 | 5 - 60 | -| query_store_capture_utility_queries | Turning ON or OFF to capture all the utility queries that is executing in the system. | NO | YES, NO | -| query_store_retention_period_in_days | Time window in days to retain the data in the query store. | 7 | 1 - 30 | --The following options apply specifically to wait statistics. --| **Parameter** | **Description** | **Default** | **Range** | -||||| -| query_store_wait_sampling_capture_mode | Allows turning ON / OFF the wait statistics. | NONE | NONE, ALL | -| query_store_wait_sampling_frequency | Alters frequency of wait-sampling in seconds. 5 to 300 seconds. | 30 | 5-300 | --> [!NOTE] -> Currently **query_store_capture_mode** supersedes this configuration, meaning both **query_store_capture_mode** and **query_store_wait_sampling_capture_mode** have to be enabled to ALL for wait statistics to work. If **query_store_capture_mode** is turned off, then wait statistics is turned off as well since wait statistics utilizes the performance_schema enabled, and the query_text captured by query store. --Use the [Azure portal](how-to-server-parameters.md) or [Azure CLI](how-to-configure-server-parameters-using-cli.md) to get or set a different value for a parameter. --## Views and functions --View and manage Query Store using the following views and functions. Anyone in the [select privilege public role](how-to-create-users.md) can use these views to see the data in Query Store. These views are only available in the **mysql** database. 
--Queries are normalized by looking at their structure after removing literals and constants. If two queries are identical except for literal values, they will have the same hash. --### mysql.query_store --This view returns all the data in Query Store. There is one row for each distinct database ID, user ID, and query ID. --| **Name** | **Data Type** | **IS_NULLABLE** | **Description** | -||||| -| `schema_name`| varchar(64) | NO | Name of the schema | -| `query_id`| bigint(20) | NO| Unique ID generated for the specific query, if the same query executes in different schema, a new ID will be generated | -| `timestamp_id` | timestamp| NO| Timestamp in which the query is executed. This is based on the query_store_interval configuration| -| `query_digest_text`| longtext| NO| The normalized query text after removing all the literals| -| `query_sample_text` | longtext| NO| First appearance of the actual query with literals| -| `query_digest_truncated` | bit| YES| Whether the query text has been truncated. Value will be Yes if the query is longer than 1 KB| -| `execution_count` | bigint(20)| NO| The number of times the query got executed for this timestamp ID / during the configured interval period| -| `warning_count` | bigint(20)| NO| Number of warnings this query generated during the internal| -| `error_count` | bigint(20)| NO| Number of errors this query generated during the interval| -| `sum_timer_wait` | double| YES| Total execution time of this query during the interval in milliseconds| -| `avg_timer_wait` | double| YES| Average execution time for this query during the interval in milliseconds| -| `min_timer_wait` | double| YES| Minimum execution time for this query in milliseconds| -| `max_timer_wait` | double| YES| Maximum execution time in milliseconds| -| `sum_lock_time` | bigint(20)| NO| Total amount of time spent for all the locks for this query execution during this time window| -| `sum_rows_affected` | bigint(20)| NO| Number of rows affected| -| `sum_rows_sent` | bigint(20)| NO| Number of rows sent to client| -| `sum_rows_examined` | bigint(20)| NO| Number of rows examined| -| `sum_select_full_join` | bigint(20)| NO| Number of full joins| -| `sum_select_scan` | bigint(20)| NO| Number of select scans | -| `sum_sort_rows` | bigint(20)| NO| Number of rows sorted| -| `sum_no_index_used` | bigint(20)| NO| Number of times when the query did not use any indexes| -| `sum_no_good_index_used` | bigint(20)| NO| Number of times when the query execution engine did not use any good indexes| -| `sum_created_tmp_tables` | bigint(20)| NO| Total number of temp tables created| -| `sum_created_tmp_disk_tables` | bigint(20)| NO| Total number of temp tables created in disk (generates I/O)| -| `first_seen` | timestamp| NO| The first occurrence (UTC) of the query during the aggregation window| -| `last_seen` | timestamp| NO| The last occurrence (UTC) of the query during this aggregation window| --### mysql.query_store_wait_stats --This view returns wait events data in Query Store. There is one row for each distinct database ID, user ID, query ID, and event. 
--| **Name**| **Data Type** | **IS_NULLABLE** | **Description** | -||||| -| `interval_start` | timestamp | NO| Start of the interval (15-minute increment)| -| `interval_end` | timestamp | NO| End of the interval (15-minute increment)| -| `query_id` | bigint(20) | NO| Generated unique ID on the normalized query (from query store)| -| `query_digest_id` | varchar(32) | NO| The normalized query text after removing all the literals (from query store) | -| `query_digest_text` | longtext | NO| First appearance of the actual query with literals (from query store) | -| `event_type` | varchar(32) | NO| Category of the wait event | -| `event_name` | varchar(128) | NO| Name of the wait event | -| `count_star` | bigint(20) | NO| Number of wait events sampled during the interval for the query | -| `sum_timer_wait_ms` | double | NO| Total wait time (in milliseconds) of this query during the interval | --### Functions --| **Name**| **Description** | -||| -| `mysql.az_purge_querystore_data(TIMESTAMP)` | Purges all query store data before the given time stamp | -| `mysql.az_procedure_purge_querystore_event(TIMESTAMP)` | Purges all wait event data before the given time stamp | -| `mysql.az_procedure_purge_recommendation(TIMESTAMP)` | Purges recommendations whose expiration is before the given time stamp | --## Limitations and known issues --- If a MySQL server has the parameter `read_only` on, Query Store cannot capture data.-- Query Store functionality can be interrupted if it encounters long Unicode queries (\>= 6000 bytes).-- The retention period for wait statistics is 24 hours.-- Wait statistics uses sample to capture a fraction of events. The frequency can be modified using the parameter `query_store_wait_sampling_frequency`.--## Next steps --- Learn more about [Query Performance Insights](concepts-query-performance-insight.md) |
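Once Query Store has persisted data, the mysql.query_store view documented above can be queried directly for the heaviest queries. The following is a minimal sketch using the mysql client; the server and admin names are placeholders.

```bash
# Minimal sketch: list the five queries with the highest average execution time
# captured by Query Store. Column names come from the mysql.query_store view above.
mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
  -e "SELECT query_id, query_digest_text, execution_count, avg_timer_wait
      FROM mysql.query_store
      ORDER BY avg_timer_wait DESC
      LIMIT 5;"
```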
mysql | Concepts Read Replicas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-read-replicas.md | - Title: Read replicas - Azure Database for MySQL -description: 'Learn about read replicas in Azure Database for MySQL: choosing regions, creating replicas, connecting to replicas, monitoring replication, and stopping replication.' ------ Previously updated : 06/20/2022---# Read replicas in Azure Database for MySQL ----The read replica feature allows you to replicate data from an Azure Database for MySQL server to a read-only server. You can replicate from the source server to up to five replicas. Replicas are updated asynchronously using the MySQL engine's native binary log (binlog) file position-based replication technology. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html). --Replicas are new servers that you manage similar to regular Azure Database for MySQL servers. For each read replica, you're billed for the provisioned compute in vCores and storage in GB/ month. --To learn more about MySQL replication features and issues, see the [MySQL replication documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html). --> [!NOTE] -> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. -> --## When to use a read replica --The read replica feature helps to improve the performance and scale of read-intensive workloads. Read workloads can be isolated to the replicas, while write workloads can be directed to the source. --A common scenario is to have BI and analytical workloads use the read replica as the data source for reporting. --Because replicas are read-only, they don't directly reduce write-capacity burdens on the source. This feature isn't targeted at write-intensive workloads. --The read replica feature uses MySQL asynchronous replication. The feature isn't meant for synchronous replication scenarios. There will be a measurable delay between the source and the replica. The data on the replica eventually becomes consistent with the data on the source. Use this feature for workloads that can accommodate this delay. --## Cross-region replication --You can create a read replica in a different region from your source server. Cross-region replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users. --You can have a source server in any [Azure Database for MySQL region](https://azure.microsoft.com/global-infrastructure/services/?products=mysql). A source server can have a replica in its [paired region](../../availability-zones/cross-region-replication-azure.md#azure-paired-regions) or the universal replica regions. The following picture shows which replica regions are available depending on your source region. --### Universal replica regions --You can create a read replica in any of the following regions, regardless of where your source server is located. 
The supported universal replica regions include: --| Region | Replica availability | -| | | -| Australia East | :heavy_check_mark: | -| Australia South East | :heavy_check_mark: | -| Brazil South | :heavy_check_mark: | -| Canada Central | :heavy_check_mark: | -| Canada East | :heavy_check_mark: | -| Central US | :heavy_check_mark: | -| East US | :heavy_check_mark: | -| East US 2 | :heavy_check_mark: | -| East Asia | :heavy_check_mark: | -| Japan East | :heavy_check_mark: | -| Japan West | :heavy_check_mark: | -| Korea Central | :heavy_check_mark: | -| Korea South | :heavy_check_mark: | -| North Europe | :heavy_check_mark: | -| North Central US | :heavy_check_mark: | -| South Central US | :heavy_check_mark: | -| Southeast Asia | :heavy_check_mark: | -| Switzerland North | :heavy_check_mark: | -| UK South | :heavy_check_mark: | -| UK West | :heavy_check_mark: | -| West Central US | :heavy_check_mark: | -| West US | :heavy_check_mark: | -| West US 2 | :heavy_check_mark: | -| West Europe | :heavy_check_mark: | -| Central India* | :heavy_check_mark: | -| France Central* | :heavy_check_mark: | -| UAE North* | :heavy_check_mark: | -| South Africa North* | :heavy_check_mark: | --> [!NOTE] -> *Regions where Azure Database for MySQL has General purpose storage v2 in Public Preview <br /> -> *For these Azure regions, you will have an option to create server in both General purpose storage v1 and v2. For the servers created with General purpose storage v2 in public preview, you are limited to create replica server only in the Azure regions which support General purpose storage v2. --### Paired regions --In addition to the universal replica regions, you can create a read replica in the Azure paired region of your source server. If you don't know your region's pair, you can learn more from the [Azure Paired Regions article](../../availability-zones/cross-region-replication-azure.md). --If you're using cross-region replicas for disaster recovery planning, we recommend you create the replica in the paired region instead of one of the other regions. Paired regions avoid simultaneous updates and prioritize physical isolation and data residency. --However, there are limitations to consider: --* Regional availability: Azure Database for MySQL is available in France Central, UAE North, and Germany Central. However, their paired regions aren't available. --* Uni-directional pairs: Some Azure regions are paired in one direction only. These regions include West India, Brazil South, and US Gov Virginia. - This means that a source server in West India can create a replica in South India. However, a source server in South India can't create a replica in West India. This is because West India's secondary region is South India, but South India's secondary region isn't West India. --## Create a replica --> [!IMPORTANT] -> * The read replica feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers. -> * If your source server has no existing replica servers, source server might need a restart to prepare itself for replication depending upon the storage used (v1/v2). Please consider server restart and perform this operation during off-peak hours. See [Source Server restart](./concepts-read-replicas.md#source-server-restart) for more details. ---When you start the create replica workflow, a blank Azure Database for MySQL server is created. 
The new server is filled with the data that was on the source server. The creation time depends on the amount of data on the source and the time since the last weekly full backup. The time can range from a few minutes to several hours. The replica server is always created in the same resource group and same subscription as the source server. If you want to create a replica server to a different resource group or different subscription, you can [move the replica server](../../azure-resource-manager/management/move-resource-group-and-subscription.md) after creation. --Every replica is enabled for storage [auto-grow](concepts-pricing-tiers.md#storage-auto-grow). The auto-grow feature allows the replica to keep up with the data replicated to it, and prevent an interruption in replication caused by out-of-storage errors. --Learn how to [create a read replica in the Azure portal](how-to-read-replicas-portal.md). --## Connect to a replica --At creation, a replica inherits the firewall rules of the source server. Afterwards, these rules are independent from the source server. --The replica inherits the admin account from the source server. All user accounts on the source server are replicated to the read replicas. You can only connect to a read replica by using the user accounts that are available on the source server. --You can connect to the replica by using its hostname and a valid user account, as you would on a regular Azure Database for MySQL server. For a server named **myreplica** with the admin username **myadmin**, you can connect to the replica by using the mysql CLI: --```bash -mysql -h myreplica.mysql.database.azure.com -u myadmin@myreplica -p -``` --At the prompt, enter the password for the user account. --## Monitor replication --Azure Database for MySQL provides the **Replication lag in seconds** metric in Azure Monitor. This metric is available for replicas only. This metric is calculated using the `seconds_behind_master` metric available in MySQL's `SHOW SLAVE STATUS` command. Set an alert to inform you when the replication lag reaches a value that isn't acceptable for your workload. --If you see increased replication lag, refer to [troubleshooting replication latency](how-to-troubleshoot-replication-latency.md) to troubleshoot and understand possible causes. --## Stop replication --You can stop replication between a source and a replica. After replication is stopped between a source server and a read replica, the replica becomes a standalone server. The data in the standalone server is the data that was available on the replica at the time the stop replication command was started. The standalone server doesn't catch up with the source server. --When you choose to stop replication to a replica, it loses all links to its previous source and other replicas. There's no automated failover between a source and its replica. --> [!IMPORTANT] -> The standalone server can't be made into a replica again. -> Before you stop replication on a read replica, ensure the replica has all the data that you require. --Learn how to [stop replication to a replica](how-to-read-replicas-portal.md). --## Failover --There's no automated failover between source and replica servers. --Since replication is asynchronous, there's lag between the source and the replica. The amount of lag can be influenced by many factors like how heavy the workload running on the source server is and the latency between data centers. In most cases, replica lag ranges between a few seconds to a couple minutes. 
You can track your actual replication lag using the metric *Replica Lag*, which is available for each replica. This metric shows the time since the last replayed transaction. We recommend that you identify what your average lag is by observing your replica lag over a period of time. You can set an alert on replica lag, so that if it goes outside your expected range, you can take action. --> [!Tip] -> If you failover to the replica, the lag at the time you delink the replica from the source will indicate how much data is lost. --After you've decided you want to failover to a replica: --1. Stop replication to the replica<br/> - This step is necessary to make the replica server able to accept writes. As part of this process, the replica server will be delinked from the source. After you initiate stop replication, the backend process typically takes about 2 minutes to complete. See the [stop replication](#stop-replication) section of this article to understand the implications of this action. --2. Point your application to the (former) replica<br/> - Each server has a unique connection string. Update your application to point to the (former) replica instead of the source. --After your application is successfully processing reads and writes, you've completed the failover. The amount of downtime your application experiences will depend on when you detect an issue and complete steps 1 and 2 listed previously. --## Global transaction identifier (GTID) --Global transaction identifier (GTID) is a unique identifier created with each committed transaction on a source server and is OFF by default in Azure Database for MySQL. GTID is supported on versions 5.7 and 8.0 and only on servers that support storage up to 16 TB(General purpose storage v2). To learn more about GTID and how it's used in replication, refer to MySQL's [replication with GTID](https://dev.mysql.com/doc/refman/5.7/en/replication-gtids.html) documentation. --MySQL supports two types of transactions: GTID transactions (identified with GTID) and anonymous transactions (don't have a GTID allocated) --The following server parameters are available for configuring GTID: --|**Server parameter**|**Description**|**Default Value**|**Values**| -|--|--|--|--| -|`gtid_mode`|Indicates if GTIDs are used to identify transactions. Changes between modes can only be done one step at a time in ascending order (ex. `OFF` -> `OFF_PERMISSIVE` -> `ON_PERMISSIVE` -> `ON`)|`OFF`|`OFF`: Both new and replication transactions must be anonymous <br> `OFF_PERMISSIVE`: New transactions are anonymous. Replicated transactions can either be anonymous or GTID transactions. <br> `ON_PERMISSIVE`: New transactions are GTID transactions. Replicated transactions can either be anonymous or GTID transactions. <br> `ON`: Both new and replicated transactions must be GTID transactions.| -|`enforce_gtid_consistency`|Enforces GTID consistency by allowing execution of only those statements that can be logged in a transactionally safe manner. This value must be set to `ON` before enabling GTID replication. |`OFF`|`OFF`: All transactions are allowed to violate GTID consistency. <br> `ON`: No transaction is allowed to violate GTID consistency. <br> `WARN`: All transactions are allowed to violate GTID consistency, but a warning is generated. | --> [!NOTE] -> * After GTID is enabled, you cannot turn it back off. If you need to turn GTID OFF, please contact support. -> -> * To change GTID's from one value to another can only be one step at a time in ascending order of modes. 
For example, if gtid_mode is currently set to OFF_PERMISSIVE, it is possible to change to ON_PERMISSIVE but not to ON. -> -> * To keep replication consistent, you cannot update `gtid_mode` on the master or replica server. -> -> * We recommend setting enforce_gtid_consistency to ON before you set gtid_mode=ON ---To enable GTID and configure the consistency behavior, update the `gtid_mode` and `enforce_gtid_consistency` server parameters using the [Azure portal](how-to-server-parameters.md), [Azure CLI](how-to-configure-server-parameters-using-cli.md), or [PowerShell](how-to-configure-server-parameters-using-powershell.md). --If GTID is enabled on a source server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID replication. To make sure that replication stays consistent, `gtid_mode` can't be changed once the master or replica server(s) is created with GTID enabled. --## Considerations and limitations --### Pricing tiers --Read replicas are currently only available in the General Purpose and Memory Optimized pricing tiers. --> [!NOTE] -> The cost of running the replica server is based on the region where the replica server is running. --### Source server restart --On a server that has general purpose storage v1, the `log_bin` parameter is OFF by default. The value is turned ON when you create the first read replica. If a source server has no existing read replicas, the source server will first restart to prepare itself for replication. Consider this restart and perform the operation during off-peak hours. --On a source server that has general purpose storage v2, the `log_bin` parameter is ON by default, and a restart isn't required when you add a read replica. --### New replicas --A read replica is created as a new Azure Database for MySQL server. An existing server can't be made into a replica. You can't create a replica of another read replica. --### Replica configuration --A replica is created by using the same server configuration as the source. After a replica is created, several settings can be changed independently from the source server: compute generation, vCores, storage, and backup retention period. The pricing tier can also be changed independently, except to or from the Basic tier. --> [!IMPORTANT] -> Before a source server configuration is updated to new values, update the replica configuration to equal or greater values. This action ensures the replica can keep up with any changes made to the source. --Firewall rules and parameter settings are inherited from the source server to the replica when the replica is created. Afterwards, the replica's rules are independent. --### Stopped replicas --If you stop replication between a source server and a read replica, the stopped replica becomes a standalone server that accepts both reads and writes. The standalone server can't be made into a replica again. --### Deleted source and standalone servers --When a source server is deleted, replication is stopped to all read replicas. These replicas automatically become standalone servers and can accept both reads and writes. The source server itself is deleted. --### User accounts --Users on the source server are replicated to the read replicas. You can only connect to a read replica using the user accounts available on the source server. --### Server parameters --To prevent data from becoming out of sync and to avoid potential data loss or corruption, some server parameters are locked from being updated when using read replicas. 
--The following server parameters are locked on both the source and replica servers: --* [`innodb_file_per_table`](https://dev.mysql.com/doc/refman/8.0/en/innodb-file-per-table-tablespaces.html) -* [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) --The [`event_scheduler`](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_event_scheduler) parameter is locked on the replica servers. --To update one of the above parameters on the source server, delete replica servers, update the parameter value on the source, and recreate replicas. --### GTID --GTID is supported on: --* MySQL versions 5.7 and 8.0. -* Servers that support storage up to 16 TB. Refer to the [pricing tier](concepts-pricing-tiers.md#storage) article for the full list of regions that support 16 TB storage. --GTID is OFF by default. After GTID is enabled, you can't turn it back off. If you need to turn GTID OFF, contact support. --If GTID is enabled on a source server, newly created replicas will also have GTID enabled and use GTID replication. To keep replication consistent, you can't update `gtid_mode` on the source or replica server(s). --### Other --* Creating a replica of a replica isn't supported. -* In-memory tables may cause replicas to become out of sync. This is a limitation of the MySQL replication technology. Read more in the [MySQL reference documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features-memory.html) for more information. -* Ensure the source server tables have primary keys. Lack of primary keys may result in replication latency between the source and replicas. -* Review the full list of MySQL replication limitations in the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html) --## Next steps --* Learn how to [create and manage read replicas using the Azure portal](how-to-read-replicas-portal.md) -* Learn how to [create and manage read replicas using the Azure CLI and REST API](how-to-read-replicas-cli.md) |
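To complement the considerations above, the following SQL is a minimal sketch you can run on the source server before creating a replica. It checks the current GTID settings and lists user tables that lack a primary key, which is a common cause of replication latency. The list of excluded schemas is only an example; adjust it for your own databases.

```sql
-- Check the current GTID configuration on the source server.
SHOW GLOBAL VARIABLES LIKE 'gtid_mode';
SHOW GLOBAL VARIABLES LIKE 'enforce_gtid_consistency';

-- List user tables that have no primary key; these can slow down replication.
SELECT t.table_schema, t.table_name
FROM information_schema.tables AS t
LEFT JOIN information_schema.table_constraints AS c
    ON  c.table_schema    = t.table_schema
    AND c.table_name      = t.table_name
    AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL
  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');
```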
mysql | Concepts Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-security.md | - Title: Security - Azure Database for MySQL -description: An overview of the security features in Azure Database for MySQL. ----- Previously updated : 06/20/2022---# Security in Azure Database for MySQL ----There are multiple layers of security that are available to protect the data on your Azure Database for MySQL server. This article outlines those security options. --## Information protection and encryption --### In-transit -Azure Database for MySQL secures your data by encrypting data in-transit with Transport Layer Security. Encryption (SSL/TLS) is enforced by default. --### At-rest -The Azure Database for MySQL service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, are encrypted on disk, including the temporary files created while running queries. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. Storage encryption is always on and can't be disabled. ---## Network security -Connections to an Azure Database for MySQL server are first routed through a regional gateway. The gateway has a publicly accessible IP, while the server IP addresses are protected. For more information about the gateway, visit the [connectivity architecture article](concepts-connectivity-architecture.md). --A newly created Azure Database for MySQL server has a firewall that blocks all external connections. Though they reach the gateway, they are not allowed to connect to the server. --### IP firewall rules -IP firewall rules grant access to servers based on the originating IP address of each request. See the [firewall rules overview](concepts-firewall-rules.md) for more information. --### Virtual network firewall rules -Virtual network service endpoints extend your virtual network connectivity over the Azure backbone. Using virtual network rules you can enable your Azure Database for MySQL server to allow connections from selected subnets in a virtual network. For more information, see the [virtual network service endpoint overview](concepts-data-access-and-security-vnet.md). --### Private IP -Private Link allows you to connect to your Azure Database for MySQL in Azure via a private endpoint. Azure Private Link essentially brings Azure services inside your private Virtual Network (VNet). The PaaS resources can be accessed using the private IP address just like any other resource in the VNet. For more information,see the [private link overview](concepts-data-access-security-private-link.md) --## Access management --While creating the Azure Database for MySQL server, you provide credentials for an administrator user. This administrator can be used to create additional MySQL users. ---## Threat protection --You can opt in to [Microsoft Defender for open-source relational databases](/azure/security-center/defender-for-databases-introduction) which detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit servers. --[Audit logging](concepts-audit-logs.md) is available to track activity in your databases. ---## Next steps -- Enable firewall rules for [IPs](concepts-firewall-rules.md) or [virtual networks](concepts-data-access-and-security-vnet.md) |
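The access management section above notes that the administrator account is used to create additional MySQL users. As a minimal sketch, the statements below create a read-only application login; the user name, password placeholder, and database name are hypothetical examples.

```sql
-- Run as the server admin login. The user and database names are examples only.
CREATE USER 'app_reader'@'%' IDENTIFIED BY '<strong-password>';
GRANT SELECT ON exampledb.* TO 'app_reader'@'%';

-- Confirm the privileges that were granted.
SHOW GRANTS FOR 'app_reader'@'%';
```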
mysql | Concepts Server Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-server-logs.md | - Title: Slow query logs - Azure Database for MySQL -description: Describes the slow query logs available in Azure Database for MySQL, and the available parameters for enabling different logging levels. ----- Previously updated : 06/20/2022---# Slow query logs in Azure Database for MySQL ----In Azure Database for MySQL, the slow query log is available to users. Access to the transaction log is not supported. The slow query log can be used to identify performance bottlenecks for troubleshooting. --For more information about the MySQL slow query log, see the MySQL reference manual's [slow query log section](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html). --When [Query Store](concepts-query-store.md) is enabled on your server, you may see the queries like "`CALL mysql.az_procedure_collect_wait_stats (900, 30);`" logged in your slow query logs. This behavior is expected as the Query Store feature collects statistics about your queries. --## Configure slow query logging -By default the slow query log is disabled. To enable it, set `slow_query_log` to ON. This can be enabled using the Azure portal or Azure CLI. --Other parameters you can adjust include: --- **long_query_time**: if a query takes longer than long_query_time (in seconds) that query is logged. The default is 10 seconds.-- **log_slow_admin_statements**: if ON includes administrative statements like ALTER_TABLE and ANALYZE_TABLE in the statements written to the slow_query_log.-- **log_queries_not_using_indexes**: determines whether queries that do not use indexes are logged to the slow_query_log-- **log_throttle_queries_not_using_indexes**: This parameter limits the number of non-index queries that can be written to the slow query log. This parameter takes effect when log_queries_not_using_indexes is set to ON.-- **log_output**: if "File", allows the slow query log to be written to both the local server storage and to Azure Monitor Diagnostic Logs. If "None", the slow query log will only be written to Azure Monitor Diagnostics Logs. --> [!IMPORTANT] -> If your tables are not indexed, setting the `log_queries_not_using_indexes` and `log_throttle_queries_not_using_indexes` parameters to ON may affect MySQL performance since all queries running against these non-indexed tables will be written to the slow query log.<br><br> -> If you plan on logging slow queries for an extended period of time, it is recommended to set `log_output` to "None". If set to "File", these logs are written to the local server storage and can affect MySQL performance. --See the MySQL [slow query log documentation](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html) for full descriptions of the slow query log parameters. --## Access slow query logs -There are two options for accessing slow query logs in Azure Database for MySQL: local server storage or Azure Monitor Diagnostic Logs. This is set using the `log_output` parameter. --For local server storage, you can list and download slow query logs using the Azure portal or the Azure CLI. In the Azure portal, navigate to your server in the Azure portal. Under the **Monitoring** heading, select the **Server Logs** page. For more information on Azure CLI, see [Configure and access slow query logs using Azure CLI](how-to-configure-server-logs-in-cli.md). 
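Before listing or downloading log files, it can help to confirm which of the logging settings described earlier are currently in effect. The following read-only sketch can be run from any MySQL client session:

```sql
-- Inspect the slow query log settings currently in effect on the server.
SHOW GLOBAL VARIABLES
WHERE Variable_name IN (
    'slow_query_log',
    'long_query_time',
    'log_slow_admin_statements',
    'log_queries_not_using_indexes',
    'log_throttle_queries_not_using_indexes',
    'log_output'
);
```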
Azure Monitor Diagnostic Logs allows you to pipe slow query logs to Azure Monitor Logs (Log Analytics), Azure Storage, or Event Hubs. See [below](concepts-server-logs.md#diagnostic-logs) for more information.

## Local server storage log retention

When logging to the server's local storage, logs are available for up to seven days from their creation. If the total size of the available logs exceeds 7 GB, then the oldest files are deleted until space is available. The 7 GB storage limit for the server logs is available free of cost and can't be extended.

Logs are rotated every 24 hours or 7 GB, whichever comes first.

> [!NOTE]
> The above log retention doesn't apply to logs that are piped using Azure Monitor Diagnostic Logs. You can change the retention period for the data sinks being emitted to (for example, Azure Storage).

## Diagnostic logs

Azure Database for MySQL is integrated with Azure Monitor Diagnostic Logs. Once you've enabled slow query logs on your MySQL server, you can choose to have them emitted to Azure Monitor Logs, Event Hubs, or Azure Storage. To learn more about how to enable diagnostic logs, see the how-to section of the [diagnostic logs documentation](../../azure-monitor/essentials/platform-logs-overview.md).

> [!NOTE]
> Premium Storage accounts aren't supported if you send the logs to Azure Storage via diagnostic settings.

The following table describes what's in each log. Depending on the output method, the fields included and the order in which they appear may vary.

| **Property** | **Description** |
|---|---|
| `TenantId` | Your tenant ID |
| `SourceSystem` | `Azure` |
| `TimeGenerated` [UTC] | Time stamp when the log was recorded in UTC |
| `Type` | Type of the log. Always `AzureDiagnostics` |
| `SubscriptionId` | GUID for the subscription that the server belongs to |
| `ResourceGroup` | Name of the resource group the server belongs to |
| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` |
| `ResourceType` | `Servers` |
| `ResourceId` | Resource URI |
| `Resource` | Name of the server |
| `Category` | `MySqlSlowLogs` |
| `OperationName` | `LogEvent` |
| `Logical_server_name_s` | Name of the server |
| `start_time_t` [UTC] | Time the query began |
| `query_time_s` | Total time in seconds the query took to execute |
| `lock_time_s` | Total time in seconds the query was locked |
| `user_host_s` | Username |
| `rows_sent_d` | Number of rows sent |
| `rows_examined_s` | Number of rows examined |
| `last_insert_id_s` | [last_insert_id](https://dev.mysql.com/doc/refman/8.0/en/information-functions.html#function_last-insert-id) |
| `insert_id_s` | Insert ID |
| `sql_text_s` | Full query |
| `server_id_s` | The server's ID |
| `thread_id_s` | Thread ID |
| `\_ResourceId` | Resource URI |

> [!NOTE]
> For `sql_text`, the log is truncated if it exceeds 2048 characters.

## Analyze logs in Azure Monitor Logs

Once your slow query logs are piped to Azure Monitor Logs through Diagnostic Logs, you can perform further analysis of your slow queries. Below are some sample queries to help you get started. Make sure to update the queries below with your server name.
--- Queries longer than 10 seconds on a particular server-- ```Kusto - AzureDiagnostics - | where LogicalServerName_s == '<your server name>' - | where Category == 'MySqlSlowLogs' - | project TimeGenerated, LogicalServerName_s, start_time_t , query_time_d, sql_text_s - | where query_time_d > 10 - ``` --- List top 5 longest queries on a particular server-- ```Kusto - AzureDiagnostics - | where LogicalServerName_s == '<your server name>' - | where Category == 'MySqlSlowLogs' - | project TimeGenerated, LogicalServerName_s, start_time_t , query_time_d, sql_text_s - | order by query_time_d desc - | take 5 - ``` --- Summarize slow queries by minimum, maximum, average, and standard deviation query time on a particular server-- ```Kusto - AzureDiagnostics - | where LogicalServerName_s == '<your server name>' - | where Category == 'MySqlSlowLogs' - | project TimeGenerated, LogicalServerName_s, start_time_t , query_time_d, sql_text_s - | summarize count(), min(query_time_d), max(query_time_d), avg(query_time_d), stdev(query_time_d), percentile(query_time_d, 95) by LogicalServerName_s - ``` --- Graph the slow query distribution on a particular server-- ```Kusto - AzureDiagnostics - | where LogicalServerName_s == '<your server name>' - | where Category == 'MySqlSlowLogs' - | project TimeGenerated, LogicalServerName_s, start_time_t , query_time_d, sql_text_s - | summarize count() by LogicalServerName_s, bin(TimeGenerated, 5m) - | render timechart - ``` --- Display queries longer than 10 seconds across all MySQL servers with Diagnostic Logs enabled-- ```Kusto - AzureDiagnostics - | where Category == 'MySqlSlowLogs' - | project TimeGenerated, LogicalServerName_s, start_time_t , query_time_d, sql_text_s - | where query_time_d > 10 - ``` - -## Next Steps -- [How to configure slow query logs from the Azure portal](how-to-configure-server-logs-in-portal.md)-- [How to configure slow query logs from the Azure CLI](how-to-configure-server-logs-in-cli.md) |
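If you want to confirm the logging pipeline end to end after enabling the slow query log, one low-impact sketch (assuming `slow_query_log` is ON and `long_query_time` is at its default of 10 seconds) is to run a query that deliberately exceeds the threshold and then look for it in the log output:

```sql
-- This statement simply sleeps for 12 seconds, which exceeds the default
-- long_query_time of 10 seconds, so it should appear in the slow query log.
SELECT SLEEP(12);
```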
mysql | Concepts Server Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-server-parameters.md | - Title: Server parameters - Azure Database for MySQL -description: This topic provides guidelines for configuring server parameters in Azure Database for MySQL. ----- Previously updated : 04/26/2023---# Server parameters in Azure Database for MySQL ----This article provides considerations and guidelines for configuring server parameters in Azure Database for MySQL. --## What are server parameters? --The MySQL engine provides many different server variables and parameters that you use to configure and tune engine behavior. Some parameters can be set dynamically during runtime, while others are static, and require a server restart in order to apply. --Azure Database for MySQL exposes the ability to change the value of various MySQL server parameters by using the [Azure portal](./how-to-server-parameters.md), the [Azure CLI](./how-to-configure-server-parameters-using-cli.md), and [PowerShell](./how-to-configure-server-parameters-using-powershell.md) to match your workload's needs. --## Configurable server parameters --The list of supported server parameters is constantly growing. In the Azure portal, use the server parameters tab to view the full list and configure server parameters values. --Refer to the following sections to learn more about the limits of several commonly updated server parameters. The limits are determined by the pricing tier and vCores of the server. --### Thread pools --MySQL traditionally assigns a thread for every client connection. As the number of concurrent users grows, there's a corresponding drop in performance. Many active threads can affect the performance significantly, due to increased context switching, thread contention, and bad locality for CPU caches. --*Thread pools*, a server-side feature and distinct from connection pooling, maximize performance by introducing a dynamic pool of worker threads. You use this feature to limit the number of active threads running on the server and minimize thread churn. This helps ensure that a burst of connections doesn't cause the server to run out of resources or memory. Thread pools are most efficient for short queries and CPU intensive workloads, such as OLTP workloads. --For more information, see [Introducing thread pools in Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/introducing-thread-pools-in-azure-database-for-mysql-service/ba-p/1504173). --> [!NOTE] -> Thread pools aren't supported for MySQL 5.6. --### Configure the thread pool --To enable a thread pool, update the `thread_handling` server parameter to `pool-of-threads`. By default, this parameter is set to `one-thread-per-connection`, which means MySQL creates a new thread for each new connection. This is a static parameter, and requires a server restart to apply. --You can also configure the maximum and minimum number of threads in the pool by setting the following server parameters: --- `thread_pool_max_threads`: This value limits the number of threads in the pool.-- `thread_pool_min_threads`: This value sets the number of threads that are reserved, even after connections are closed.--To improve performance issues of short queries on the thread pool, you can enable *batch execution*. Instead of returning back to the thread pool immediately after running a query, threads will keep active for a short time to wait for the next query through this connection. 
The thread then runs the query rapidly and, when this is complete, the thread waits for the next one. This process continues until the overall time spent exceeds a threshold. --You determine the behavior of batch execution by using the following server parameters: --- `thread_pool_batch_wait_timeout`: This value specifies the time a thread waits for another query to process.-- `thread_pool_batch_max_time`: This value determines the maximum time a thread will repeat the cycle of query execution and waiting for the next query.--> [!IMPORTANT] -> Don't turn on the thread pool in production until you've tested it. --### log_bin_trust_function_creators --In Azure Database for MySQL, binary logs are always enabled (the `log_bin` parameter is set to `ON`). If you want to use triggers, you get an error similar to the following: *You do not have the SUPER privilege and binary logging is enabled (you might want to use the less safe `log_bin_trust_function_creators` variable)*. --The binary logging format is always **ROW**, and all connections to the server *always* use row-based binary logging. Row-based binary logging helps maintain security, and binary logging can't break, so you can safely set [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) to `TRUE`. --### innodb_buffer_pool_size --Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size) to learn more about this parameter. --#### Servers on [general purpose storage v1 (supporting up to 4 TB)](concepts-pricing-tiers.md#general-purpose-storage-v1-supports-up-to-4-tb) --|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| -|||||| -|Basic|1|872415232|134217728|872415232| -|Basic|2|2684354560|134217728|2684354560| -|General Purpose|2|3758096384|134217728|3758096384| -|General Purpose|4|8053063680|134217728|8053063680| -|General Purpose|8|16106127360|134217728|16106127360| -|General Purpose|16|32749125632|134217728|32749125632| -|General Purpose|32|66035122176|134217728|66035122176| -|General Purpose|64|132070244352|134217728|132070244352| -|Memory Optimized|2|7516192768|134217728|7516192768| -|Memory Optimized|4|16106127360|134217728|16106127360| -|Memory Optimized|8|32212254720|134217728|32212254720| -|Memory Optimized|16|65498251264|134217728|65498251264| -|Memory Optimized|32|132070244352|134217728|132070244352| --#### Servers on [general purpose storage v2 (supporting up to 16 TB)](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage) --|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| -|||||| -|Basic|1|872415232|134217728|872415232| -|Basic|2|2684354560|134217728|2684354560| -|General Purpose|2|7516192768|134217728|7516192768| -|General Purpose|4|16106127360|134217728|16106127360| -|General Purpose|8|32212254720|134217728|32212254720| -|General Purpose|16|65498251264|134217728|65498251264| -|General Purpose|32|132070244352|134217728|132070244352| -|General Purpose|64|264140488704|134217728|264140488704| -|Memory Optimized|2|15032385536|134217728|15032385536| -|Memory Optimized|4|32212254720|134217728|32212254720| -|Memory Optimized|8|64424509440|134217728|64424509440| -|Memory Optimized|16|130996502528|134217728|130996502528| -|Memory Optimized|32|264140488704|134217728|264140488704| --### innodb_file_per_table --MySQL stores the `InnoDB` table 
in different tablespaces, based on the configuration you provide during the table creation. The [system tablespace](https://dev.mysql.com/doc/refman/5.7/en/innodb-system-tablespace.html) is the storage area for the `InnoDB` data dictionary. A [file-per-table tablespace](https://dev.mysql.com/doc/refman/5.7/en/innodb-file-per-table-tablespaces.html) contains data and indexes for a single `InnoDB` table, and is stored in the file system in its own data file. --You control this behavior by using the `innodb_file_per_table` server parameter. Setting `innodb_file_per_table` to `OFF` causes `InnoDB` to create tables in the system tablespace. Otherwise, `InnoDB` creates tables in file-per-table tablespaces. --> [!NOTE] -> You can only update `innodb_file_per_table` in the general purpose and memory optimized pricing tiers on [general purpose storage v2](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage) and [general purpose storage v1](concepts-pricing-tiers.md#general-purpose-storage-v1-supports-up-to-4-tb). --Azure Database for MySQL supports 4 TB (at the largest) in a single data file on [general purpose storage v2](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage). If your database size is larger than 4 TB, you should create the table in the [innodb_file_per_table](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_file_per_table) tablespace. If you have a single table size that is larger than 4 TB, you should use the partition table. --### join_buffer_size --Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_join_buffer_size) to learn more about this parameter. --|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| -|||||| -|Basic|1|Not configurable in Basic tier|N/A|N/A| -|Basic|2|Not configurable in Basic tier|N/A|N/A| -|General Purpose|2|262144|128|268435455| -|General Purpose|4|262144|128|536870912| -|General Purpose|8|262144|128|1073741824| -|General Purpose|16|262144|128|2147483648| -|General Purpose|32|262144|128|4294967295| -|General Purpose|64|262144|128|4294967295| -|Memory Optimized|2|262144|128|536870912| -|Memory Optimized|4|262144|128|1073741824| -|Memory Optimized|8|262144|128|2147483648| -|Memory Optimized|16|262144|128|4294967295| -|Memory Optimized|32|262144|128|4294967295| --### max_connections --|**Pricing tier**|**vCore(s)**|**Default value**|**Min value**|**Max value**| -|||||| -|Basic|1|50|10|50| -|Basic|2|100|10|100| -|General Purpose|2|300|10|600| -|General Purpose|4|625|10|1250| -|General Purpose|8|1250|10|2500| -|General Purpose|16|2500|10|5000| -|General Purpose|32|5000|10|10000| -|General Purpose|64|10000|10|20000| -|Memory Optimized|2|625|10|1250| -|Memory Optimized|4|1250|10|2500| -|Memory Optimized|8|2500|10|5000| -|Memory Optimized|16|5000|10|10000| -|Memory Optimized|32|10000|10|20000| --When the number of connections exceeds the limit, you might receive an error. --> [!TIP] -> To manage connections efficiently, it's a good idea to use a connection pooler, like ProxySQL. To learn about setting up ProxySQL, see the blog post [Load balance read replicas using ProxySQL in Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/load-balance-read-replicas-using-proxysql-in-azure-database-for/ba-p/880042). Note that ProxySQL is an open source community tool. It's supported by Microsoft on a best-effort basis. 
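To see how close a server is to the `max_connections` limit described above, you can compare the configured limit with the current and peak connection counts. This is a read-only sketch you can run from any client session:

```sql
-- Configured connection limit for the current pricing tier and vCores.
SHOW GLOBAL VARIABLES LIKE 'max_connections';

-- Connections currently open, and the peak since the last server restart.
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
```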
--### max_heap_table_size --Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_max_heap_table_size) to learn more about this parameter. --|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| -|||||| -|Basic|1|Not configurable in Basic tier|N/A|N/A| -|Basic|2|Not configurable in Basic tier|N/A|N/A| -|General Purpose|2|16777216|16384|268435455| -|General Purpose|4|16777216|16384|536870912| -|General Purpose|8|16777216|16384|1073741824| -|General Purpose|16|16777216|16384|2147483648| -|General Purpose|32|16777216|16384|4294967295| -|General Purpose|64|16777216|16384|4294967295| -|Memory Optimized|2|16777216|16384|536870912| -|Memory Optimized|4|16777216|16384|1073741824| -|Memory Optimized|8|16777216|16384|2147483648| -|Memory Optimized|16|16777216|16384|4294967295| -|Memory Optimized|32|16777216|16384|4294967295| --### query_cache_size --The query cache is turned off by default. To enable the query cache, configure the `query_cache_type` parameter. --Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_query_cache_size) to learn more about this parameter. --> [!NOTE] -> The query cache is deprecated as of MySQL 5.7.20 and has been removed in MySQL 8.0. --|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value**| -|||||| -|Basic|1|Not configurable in Basic tier|N/A|N/A| -|Basic|2|Not configurable in Basic tier|N/A|N/A| -|General Purpose|2|0|0|16777216| -|General Purpose|4|0|0|33554432| -|General Purpose|8|0|0|67108864| -|General Purpose|16|0|0|134217728| -|General Purpose|32|0|0|134217728| -|General Purpose|64|0|0|134217728| -|Memory Optimized|2|0|0|33554432| -|Memory Optimized|4|0|0|67108864| -|Memory Optimized|8|0|0|134217728| -|Memory Optimized|16|0|0|134217728| -|Memory Optimized|32|0|0|134217728| --### lower_case_table_names --The `lower_case_table_name` parameter is set to 1 by default, and you can update this parameter in MySQL 5.6 and MySQL 5.7. --Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_lower_case_table_names) to learn more about this parameter. --> [!NOTE] -> In MySQL 8.0, `lower_case_table_name` is set to 1 by default, and you can't change it. --### innodb_strict_mode --If you receive an error similar to `Row size too large (> 8126)`, consider turning off the `innodb_strict_mode` parameter. You can't modify `innodb_strict_mode` globally at the server level. If row data size is larger than 8K, the data is truncated, without an error notification, leading to potential data loss. It's a good idea to modify the schema to fit the page size limit. --You can set this parameter at a session level, by using `init_connect`. To set `innodb_strict_mode` at a session level, refer to [setting parameter not listed](./how-to-server-parameters.md#setting-parameters-not-listed). --> [!NOTE] -> If you have a read replica server, setting `innodb_strict_mode` to `OFF` at the session-level on a source server will break the replication. We suggest keeping the parameter set to `ON` if you have read replicas. --### sort_buffer_size --Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_sort_buffer_size) to learn more about this parameter. 
--|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| -|||||| -|Basic|1|Not configurable in Basic tier|N/A|N/A| -|Basic|2|Not configurable in Basic tier|N/A|N/A| -|General Purpose|2|524288|32768|4194304| -|General Purpose|4|524288|32768|8388608| -|General Purpose|8|524288|32768|16777216| -|General Purpose|16|524288|32768|33554432| -|General Purpose|32|524288|32768|33554432| -|General Purpose|64|524288|32768|33554432| -|Memory Optimized|2|524288|32768|8388608| -|Memory Optimized|4|524288|32768|16777216| -|Memory Optimized|8|524288|32768|33554432| -|Memory Optimized|16|524288|32768|33554432| -|Memory Optimized|32|524288|32768|33554432| --### tmp_table_size --Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_tmp_table_size) to learn more about this parameter. --|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| -|||||| -|Basic|1|Not configurable in Basic tier|N/A|N/A| -|Basic|2|Not configurable in Basic tier|N/A|N/A| -|General Purpose|2|16777216|1024|67108864| -|General Purpose|4|16777216|1024|134217728| -|General Purpose|8|16777216|1024|268435456| -|General Purpose|16|16777216|1024|536870912| -|General Purpose|32|16777216|1024|1073741824| -|General Purpose|64|16777216|1024|1073741824| -|Memory Optimized|2|16777216|1024|134217728| -|Memory Optimized|4|16777216|1024|268435456| -|Memory Optimized|8|16777216|1024|536870912| -|Memory Optimized|16|16777216|1024|1073741824| -|Memory Optimized|32|16777216|1024|1073741824| --### InnoDB buffer pool warmup --After you restart Azure Database for MySQL, the data pages that reside in the disk are loaded, as the tables are queried. This leads to increased latency and slower performance for the first run of the queries. For workloads that are sensitive to latency, you might find this slower performance unacceptable. --You can use `InnoDB` buffer pool warmup to shorten the warmup period. This process reloads disk pages that were in the buffer pool *before* the restart, rather than waiting for DML or SELECT operations to access corresponding rows. For more information, see [InnoDB buffer pool server parameters](https://dev.mysql.com/doc/refman/8.0/en/innodb-preload-buffer-pool.html). --However, improved performance comes at the expense of longer start-up time for the server. When you enable this parameter, the server startup and restart times are expected to increase, depending on the IOPS provisioned on the server. It's a good idea to test and monitor the restart time, to ensure that the start-up or restart performance is acceptable, because the server is unavailable during that time. Don't use this parameter when the number of IOPS provisioned is less than 1000 IOPS (in other words, when the storage provisioned is less than 335 GB). --To save the state of the buffer pool at server shutdown, set the server parameter `innodb_buffer_pool_dump_at_shutdown` to `ON`. Similarly, set the server parameter `innodb_buffer_pool_load_at_startup` to `ON` to restore the buffer pool state at server startup. You can control the impact on start-up or restart by lowering and fine-tuning the value of the server parameter `innodb_buffer_pool_dump_pct`. By default, this parameter is set to `25`. --> [!NOTE] -> `InnoDB` buffer pool warmup parameters are only supported in general purpose storage servers with up to 16 TB storage. 
For more information, see [Azure Database for MySQL storage options](./concepts-pricing-tiers.md#storage). --### time_zone --Upon initial deployment, a server running Azure Database for MySQL includes systems tables for time zone information, but these tables aren't populated. You can populate the tables by calling the `mysql.az_load_timezone` stored procedure from tools like the MySQL command line or MySQL Workbench. For information about how to call the stored procedures and set the global or session-level time zones, see [Working with the time zone parameter (Azure portal)](how-to-server-parameters.md#working-with-the-time-zone-parameter) or [Working with the time zone parameter (Azure CLI)](how-to-configure-server-parameters-using-cli.md#working-with-the-time-zone-parameter). --### binlog_expire_logs_seconds --In Azure Database for MySQL, this parameter specifies the number of seconds the service waits before purging the binary log file. --The *binary log* contains events that describe database changes, such as table creation operations or changes to table data. It also contains events for statements that can potentially make changes. The binary log is used mainly for two purposes, replication and data recovery operations. --Usually, the binary logs are purged as soon as the handle is free from service, backup, or the replica set. If there are multiple replicas, the binary logs wait for the slowest replica to read the changes before being purged. If you want binary logs to persist longer, you can configure the parameter `binlog_expire_logs_seconds`. If you set `binlog_expire_logs_seconds` to `0`, which is the default value, it purges as soon as the handle to the binary log is freed. If you set `binlog_expire_logs_seconds` to greater than 0, then the binary log only purges after that period of time. --For Azure Database for MySQL, managed features like backup and read replica purging of binary files are handled internally. When you replicate the data out from the Azure Database for MySQL service, you must set this parameter in the primary to avoid purging binary logs before the replica reads from the changes from the primary. If you set the `binlog_expire_logs_seconds` to a higher value, then the binary logs won't get purged soon enough. This can lead to an increase in the storage billing. --### event_scheduler --In Azure Database for MySQL, the `event_schedule` server parameter manages creating, scheduling, and running events, i.e., tasks that run according to a schedule, and they're run by a special event scheduler thread. When the `event_scheduler` parameter is set to ON, the event scheduler thread is listed as a daemon process in the output of SHOW PROCESSLIST. 
You can create and schedule events using the following SQL syntax:

```sql
CREATE EVENT <event name>
ON SCHEDULE EVERY _ MINUTE / HOUR / DAY
STARTS TIMESTAMP / CURRENT_TIMESTAMP
ENDS TIMESTAMP / CURRENT_TIMESTAMP + INTERVAL 1 MINUTE / HOUR / DAY
COMMENT '<comment>'
DO
<your statement>;
```

> [!NOTE]
> For more information about creating an event, see the MySQL Event Scheduler documentation here:
>
> - [MySQL :: MySQL 5.7 Reference Manual :: 23.4 Using the Event Scheduler](https://dev.mysql.com/doc/refman/5.7/en/event-scheduler.html)
> - [MySQL :: MySQL 8.0 Reference Manual :: 25.4 Using the Event Scheduler](https://dev.mysql.com/doc/refman/8.0/en/event-scheduler.html)
>

#### Configuring the event_scheduler server parameter

The following scenario illustrates one way to use the `event_scheduler` parameter in Azure Database for MySQL. To demonstrate the scenario, consider the following example, a simple table:

```sql
mysql> describe tab1;
+-----------+-------------+------+-----+---------+----------------+
| Field     | Type        | Null | Key | Default | Extra          |
+-----------+-------------+------+-----+---------+----------------+
| id        | int(11)     | NO   | PRI | NULL    | auto_increment |
| CreatedAt | timestamp   | YES  |     | NULL    |                |
| CreatedBy | varchar(16) | YES  |     | NULL    |                |
+-----------+-------------+------+-----+---------+----------------+
3 rows in set (0.23 sec)
```

To configure the `event_scheduler` server parameter in Azure Database for MySQL, perform the following steps:

1. In the Azure portal, navigate to your server, and then, under **Settings**, select **Server parameters**.
2. On the **Server parameters** blade, search for `event_scheduler`. In the **VALUE** drop-down list, select **ON**, and then select **Save**.

    > [!NOTE]
    > The dynamic server parameter configuration change will be deployed without a restart.

3. Then, to create an event, connect to the MySQL server and run the following SQL command:

    ```sql
    CREATE EVENT test_event_01
    ON SCHEDULE EVERY 1 MINUTE
    STARTS CURRENT_TIMESTAMP
    ENDS CURRENT_TIMESTAMP + INTERVAL 1 HOUR
    COMMENT 'Inserting record into the table tab1 with current timestamp'
    DO
        INSERT INTO tab1(id,createdAt,createdBy)
        VALUES(NULL, NOW(), CURRENT_USER());
    ```

4. To view the event scheduler details, run the following SQL statement:

    ```sql
    SHOW EVENTS;
    ```

    The following output appears:

    ```sql
    mysql> show events;
    +-----+---------------+-------------+-----------+-----------+------------+----------------+----------------+---------------------+---------------------+---------+------------+----------------------+----------------------+--------------------+
    | Db  | Name          | Definer     | Time zone | Type      | Execute at | Interval value | Interval field | Starts              | Ends                | Status  | Originator | character_set_client | collation_connection | Database Collation |
    +-----+---------------+-------------+-----------+-----------+------------+----------------+----------------+---------------------+---------------------+---------+------------+----------------------+----------------------+--------------------+
    | db1 | test_event_01 | azureuser@% | SYSTEM    | RECURRING | NULL       | 1              | MINUTE         | 2023-04-05 14:47:04 | 2023-04-05 15:47:04 | ENABLED | 3221153808 | latin1               | latin1_swedish_ci    | latin1_swedish_ci  |
    +-----+---------------+-------------+-----------+-----------+------------+----------------+----------------+---------------------+---------------------+---------+------------+----------------------+----------------------+--------------------+
    1 row in set (0.23 sec)
    ```

5. After a few minutes, query the rows from the table to begin viewing the rows inserted every minute as per the `event_scheduler` parameter you configured:

    ```sql
    mysql> select * from tab1;
    +----+---------------------+-------------+
    | id | CreatedAt           | CreatedBy   |
    +----+---------------------+-------------+
    |  1 | 2023-04-05 14:47:04 | azureuser@% |
    |  2 | 2023-04-05 14:48:04 | azureuser@% |
    |  3 | 2023-04-05 14:49:04 | azureuser@% |
    |  4 | 2023-04-05 14:50:04 | azureuser@% |
    +----+---------------------+-------------+
    4 rows in set (0.23 sec)
    ```

6. After an hour, run a SELECT statement on the table to view the complete result of the values inserted into the table every minute for an hour, as the `event_scheduler` is configured in our case:
    ```sql
    mysql> select * from tab1;
    +----+---------------------+-------------+
    | id | CreatedAt           | CreatedBy   |
    +----+---------------------+-------------+
    |  1 | 2023-04-05 14:47:04 | azureuser@% |
    |  2 | 2023-04-05 14:48:04 | azureuser@% |
    |  3 | 2023-04-05 14:49:04 | azureuser@% |
    |  4 | 2023-04-05 14:50:04 | azureuser@% |
    |  5 | 2023-04-05 14:51:04 | azureuser@% |
    |  6 | 2023-04-05 14:52:04 | azureuser@% |
    ..< 50 lines trimmed to compact output >..
    | 56 | 2023-04-05 15:42:04 | azureuser@% |
    | 57 | 2023-04-05 15:43:04 | azureuser@% |
    | 58 | 2023-04-05 15:44:04 | azureuser@% |
    | 59 | 2023-04-05 15:45:04 | azureuser@% |
    | 60 | 2023-04-05 15:46:04 | azureuser@% |
    | 61 | 2023-04-05 15:47:04 | azureuser@% |
    +----+---------------------+-------------+
    61 rows in set (0.23 sec)
    ```

#### Other scenarios

You can set up an event based on the requirements of your specific scenario. A few similar examples of scheduling SQL statements to run at different time intervals follow.

**Run a SQL statement now and repeat one time per day with no end**

```sql
CREATE EVENT <event name>
ON SCHEDULE
EVERY 1 DAY
STARTS (TIMESTAMP(CURRENT_DATE) + INTERVAL 1 DAY + INTERVAL 1 HOUR)
COMMENT 'Comment'
DO
<your statement>;
```

**Run a SQL statement every hour with no end**

```sql
CREATE EVENT <event name>
ON SCHEDULE
EVERY 1 HOUR
COMMENT 'Comment'
DO
<your statement>;
```

**Run a SQL statement every day with no end**

```sql
CREATE EVENT <event name>
ON SCHEDULE
EVERY 1 DAY
STARTS str_to_date( date_format(now(), '%Y%m%d 0200'), '%Y%m%d %H%i' ) + INTERVAL 1 DAY
COMMENT 'Comment'
DO
<your statement>;
```

## Nonconfigurable server parameters

The following server parameters aren't configurable in the service:

|**Parameter**|**Fixed value**|
| :-- | :-- |
|`innodb_file_per_table` in the basic tier|OFF|
|`innodb_flush_log_at_trx_commit`|1|
|`sync_binlog`|1|
|`innodb_log_file_size`|256 MB|
|`innodb_log_files_in_group`|2|

Other variables not listed here are set to the default MySQL values. Refer to the MySQL docs for versions [8.0](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html), [5.7](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html), and [5.6](https://dev.mysql.com/doc/refman/5.6/en/server-system-variables.html).

## Next steps

- Learn how to [configure server parameters by using the Azure portal](./how-to-server-parameters.md)
- Learn how to [configure server parameters by using the Azure CLI](./how-to-configure-server-parameters-using-cli.md)
- Learn how to [configure server parameters by using PowerShell](./how-to-configure-server-parameters-using-powershell.md) |
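Once you're done experimenting with the event scheduler walkthrough above, you may want to pause or remove the recurring event so it stops inserting rows. A small sketch, using the `test_event_01` name from that example:

```sql
-- Pause the recurring event without deleting its definition.
ALTER EVENT test_event_01 DISABLE;

-- Or remove it entirely once it's no longer needed.
DROP EVENT IF EXISTS test_event_01;

-- Verify that no events remain scheduled in the current database.
SHOW EVENTS;
```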
mysql | Concepts Servers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-servers.md | - Title: Server concepts - Azure Database for MySQL -description: This topic provides considerations and guidelines for working with Azure Database for MySQL servers. ----- Previously updated : 06/20/2022---# Server concepts in Azure Database for MySQL ----This article provides considerations and guidelines for working with Azure Database for MySQL servers. --## What is an Azure Database for MySQL server? --An Azure Database for MySQL server is a central administrative point for multiple databases. It is the same MySQL server construct that you may be familiar with in the on-premises world. Specifically, the Azure Database for MySQL service is managed, provides performance guarantees, and exposes access and features at server-level. --An Azure Database for MySQL server: --- Is created within an Azure subscription.-- Is the parent resource for databases.-- Provides a namespace for databases.-- Is a container with strong lifetime semantics - delete a server and it deletes the contained databases.-- Collocates resources in a region.-- Provides a connection endpoint for server and database access.-- Provides the scope for management policies that apply to its databases: login, firewall, users, roles, configurations, etc.-- Is available in multiple versions. For more information, see [Supported Azure Database for MySQL database versions](./concepts-supported-versions.md).--Within an Azure Database for MySQL server, you can create one or multiple databases. You can opt to create a single database per server to use all the resources or to create multiple databases to share the resources. The pricing is structured per-server, based on the configuration of pricing tier, vCores, and storage (GB). For more information, see [Pricing tiers](./concepts-pricing-tiers.md). --## How do I connect and authenticate to an Azure Database for MySQL server? --The following elements help ensure safe access to your database. --| Security concept | Description | -| :-- | :-- | -| **Authentication and authorization** | Azure Database for MySQL server supports native MySQL authentication. You can connect and authenticate to a server with the server's admin login. | -| **Protocol** | The service supports a message-based protocol used by MySQL. | -| **TCP/IP** | The protocol is supported over TCP/IP and over Unix-domain sockets. | -| **Firewall** | To help protect your data, a firewall rule prevents all access to your database server, until you specify which computers have permission. See [Azure Database for MySQL Server firewall rules](./concepts-firewall-rules.md). | -| **SSL** | The service supports enforcing SSL connections between your applications and your database server. See [Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](./how-to-configure-ssl.md). | --## Stop/Start an Azure Database for MySQL --Azure Database for MySQL gives you the ability to **Stop** the server when not in use and **Start** the server when you resume activity. This is essentially done to save costs on the database servers and only pay for the resource when in use. This becomes even more important for dev-test workloads and when you are only using the server for part of the day. When you stop the server, all active connections will be dropped. 
Later, when you want to bring the server back online, you can either use the [Azure portal](how-to-stop-start-server.md) or [CLI](how-to-stop-start-server.md). --When the server is in the **Stopped** state, the server's compute is not billed. However, storage continues to be billed as the server's storage remains to ensure that data files are available when the server is started again. --> [!IMPORTANT] -> When you **Stop** the server it remains in that state for the next 7 days in a stretch. If you do not manually **Start** it during this time, the server will automatically be started at the end of 7 days. You can chose to **Stop** it again if you are not using the server. --During the time server is stopped, no management operations can be performed on the server. In order to change any configuration settings on the server, you will need to [start the server](how-to-stop-start-server.md). --### Limitations of Stop/start operation -- Not supported with read replica configurations (both source and replicas).--## How do I manage a server? --You can manage the creation, deletion, server parameter configuration (my.cnf), scaling, networking, security, high availability, backup & restore, monitoring of your Azure Database for MySQL servers by using the Azure portal or the Azure CLI. In addition, following stored procedures are available in Azure Database for MySQL to perform certain database administration tasks required as SUPER user privilege is not supported on the server. --|**Stored Procedure Name**|**Input Parameters**|**Output Parameters**|**Usage Note**| -|--|--|--|--| -|*mysql.az_kill*|processlist_id|N/A|Equivalent to [`KILL CONNECTION`](https://dev.mysql.com/doc/refman/8.0/en/kill.html) command. Will terminate the connection associated with the provided processlist_id after terminating any statement the connection is executing.| -|*mysql.az_kill_query*|processlist_id|N/A|Equivalent to [`KILL QUERY`](https://dev.mysql.com/doc/refman/8.0/en/kill.html) command. Will terminate the statement the connection is currently executing. Leaves the connection itself alive.| -|*mysql.az_load_timezone*|N/A|N/A|Loads [time zone tables](how-to-server-parameters.md#working-with-the-time-zone-parameter) to allow the `time_zone` parameter to be set to named values (ex. "US/Pacific").| --## Next steps --- For an overview of the service, see [Azure Database for MySQL Overview](./overview.md)-- For information about specific resource quotas and limitations based on your **pricing tier**, see [Pricing tiers](./concepts-pricing-tiers.md)-- For information about connecting to the service, see [Connection libraries for Azure Database for MySQL](../flexible-server/concepts-connection-libraries.md). |
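As a quick illustration of the stored procedures listed above, the following sketch shows how they're typically called from a MySQL client; the process ID `123` is a hypothetical value taken from the `Id` column of `SHOW PROCESSLIST` output.

```sql
-- Identify the connection or statement you want to act on.
SHOW PROCESSLIST;

-- Cancel only the statement that connection 123 is currently running.
CALL mysql.az_kill_query(123);

-- Terminate connection 123 entirely (equivalent to KILL CONNECTION).
CALL mysql.az_kill(123);

-- Populate the time zone tables, then use a named time zone for this session.
CALL mysql.az_load_timezone();
SET time_zone = 'US/Pacific';
```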
mysql | Concepts Ssl Connection Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-ssl-connection-security.md | - Title: SSL/TLS connectivity - Azure Database for MySQL -description: Information for configuring Azure Database for MySQL and associated applications to properly use SSL connections ----- Previously updated : 06/20/2022---# SSL/TLS connectivity in Azure Database for MySQL ----Azure Database for MySQL supports connecting your database server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against "man in the middle" attacks by encrypting the data stream between the server and your application. --> [!NOTE] -> Updating the `require_secure_transport` server parameter value does not affect the MySQL service's behavior. Use the SSL and TLS enforcement features outlined in this article to secure connections to your database. -->[!NOTE] -> Based on the feedback from customers we have extended the root certificate deprecation for our existing Baltimore Root CA till February 15, 2021 (02/15/2021). --> [!IMPORTANT] -> SSL root certificate is set to expire starting February 15, 2021 (02/15/2021). Please update your application to use the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem). To learn more , see [planned certificate updates](concepts-certificate-rotation.md) --## SSL Default settings --By default, the database service should be configured to require SSL connections when connecting to MySQL. We recommend to avoid disabling the SSL option whenever possible. --When provisioning a new Azure Database for MySQL server through the Azure portal and CLI, enforcement of SSL connections is enabled by default. --Connection strings for various programming languages are shown in the Azure portal. Those connection strings include the required SSL parameters to connect to your database. In the Azure portal, select your server. Under the **Settings** heading, select the **Connection strings**. The SSL parameter varies based on the connector, for example "ssl=true" or "sslmode=require" or "sslmode=required" and other variations. --In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file to connect securely. Currently customers can **only use** the predefined certificate to connect to an Azure Database for MySQL server, which is located at https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem. --Similarly, the following links point to the certificates for servers in sovereign clouds: [Azure Government](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem), [Microsoft Azure operated by 21Vianet](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt). --To learn how to enable or disable SSL connection when developing application, refer to [How to configure SSL](how-to-configure-ssl.md). --## TLS enforcement in Azure Database for MySQL --Azure Database for MySQL supports encryption for clients connecting to your database server using Transport Layer Security (TLS). TLS is an industry standard protocol that ensures secure network connections between your database server and client applications, allowing you to adhere to compliance requirements. 
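One way to confirm which protocol version and cipher a given client actually negotiates is to inspect the session status after connecting. A minimal, read-only sketch:

```sql
-- TLS protocol version negotiated by the current session (for example, TLSv1.2).
SHOW SESSION STATUS LIKE 'Ssl_version';

-- Cipher negotiated by the current session; empty if the connection isn't encrypted.
SHOW SESSION STATUS LIKE 'Ssl_cipher';
```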
### TLS settings

Azure Database for MySQL provides the ability to enforce the TLS version for client connections. To enforce the TLS version, use the **Minimum TLS version** option setting. The following values are allowed for this option setting:

| Minimum TLS setting | Client TLS version supported |
|:---|---:|
| TLSEnforcementDisabled (default) | No TLS required |
| TLS1_0 | TLS 1.0, TLS 1.1, TLS 1.2 and higher |
| TLS1_1 | TLS 1.1, TLS 1.2 and higher |
| TLS1_2 | TLS version 1.2 and higher |

For example, setting the minimum TLS version to TLS 1.0 means your server allows connections from clients using TLS 1.0, 1.1, and 1.2+. Alternatively, setting it to 1.2 means that you only allow connections from clients using TLS 1.2+, and all connections with TLS 1.0 and TLS 1.1 are rejected.

> [!NOTE]
> By default, Azure Database for MySQL does not enforce a minimum TLS version (the setting `TLSEnforcementDisabled`).
>
> Once you enforce a minimum TLS version, you cannot later disable minimum version enforcement.

The minimum TLS version setting doesn't require a restart of the server and can be set while the server is online. To learn how to set the TLS setting for your Azure Database for MySQL, refer to [How to configure TLS setting](how-to-tls-configurations.md).

## Cipher support by Azure Database for MySQL single server

As part of the SSL/TLS communication, the cipher suites are validated and only supported cipher suites are allowed to communicate to the database server. The cipher suite validation is controlled in the [gateway layer](concepts-connectivity-architecture.md#connectivity-architecture) and not explicitly on the node itself. If the cipher suites don't match one of the suites listed below, incoming client connections are rejected.

### Supported cipher suites

* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256

## Next steps

- [Connection libraries for Azure Database for MySQL](../flexible-server/concepts-connection-libraries.md)
- Learn how to [configure SSL](how-to-configure-ssl.md)
- Learn how to [configure TLS](how-to-tls-configurations.md) |
mysql | Connect Cpp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-cpp.md | - Title: 'Quickstart: Connect using C++ - Azure Database for MySQL' -description: This quickstart provides a C++ code sample you can use to connect and query data from Azure Database for MySQL. ------ Previously updated : 06/20/2022---# Quickstart: Use Connector/C++ to connect and query data in Azure Database for MySQL ----This quickstart demonstrates how to connect to an Azure Database for MySQL by using a C++ application. It shows how to use SQL statements to query, insert, update, and delete data in the database. This topic assumes you're familiar with developing using C++ and you're new to working with Azure Database for MySQL. --## Prerequisites --This quickstart uses the resources created in either of the following guides as a starting point: -- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)-- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)--You also need to: -- Install [.NET Framework](https://dotnet.microsoft.com/download/dotnet-framework)-- Install [Visual Studio](https://www.visualstudio.com/downloads/)-- Install [MySQL Connector/C++](https://dev.mysql.com/downloads/connector/cpp/) -- Install [Boost](https://www.boost.org/)--> [!IMPORTANT] -> Ensure the IP address you're connecting from has been added the server's firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md) --## Install Visual Studio and .NET -The steps in this section assume that you're familiar with developing using .NET. --### **Windows** -- Install Visual Studio 2019 Community. Visual Studio 2019 Community is a full featured, extensible, free IDE. With this IDE, you can create modern applications for Android, iOS, Windows, web and database applications, and cloud services. You can install either the full .NET Framework or just .NET Core: the code snippets in the Quickstart work with either. If you already have Visual Studio installed on your computer, skip the next two steps.- 1. Download the [Visual Studio 2019 installer](https://www.visualstudio.com/thank-you-downloading-visual-studio/?sku=Community&rel=15). - 2. Run the installer and follow the installation prompts to complete the installation. --### **Configure Visual Studio** -1. From Visual Studio, Project -> Properties -> Linker -> General > Additional Library Directories, add the "\lib\opt" directory (for example: C:\Program Files (x86)\MySQL\MySQL Connector C++ 1.1.9\lib\opt) of the C++ connector. -2. From Visual Studio, Project -> Properties -> C/C++ -> General -> Additional Include Directories: - - Add the "\include" directory of c++ connector (for example: C:\Program Files (x86)\MySQL\MySQL Connector C++ 1.1.9\include\). - - Add the Boost library's root directory (for example: C:\boost_1_64_0\). -3. From Visual Studio, Project -> Properties -> Linker -> Input > Additional Dependencies, add **mysqlcppconn.lib** into the text field. -4. Either copy **mysqlcppconn.dll** from the C++ connector library folder in step 3 to the same directory as the application executable or add it to the environment variable so your application can find it. --## Get connection information -Get the connection information needed to connect to the Azure Database for MySQL. 
You need the fully qualified server name and login credentials. --1. Sign in to the [Azure portal](https://portal.azure.com/). -2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**). -3. Click the server name. -4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel. - :::image type="content" source="./media/connect-cpp/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name"::: --## Connect, create table, and insert data -Use the following code to connect and load the data by using **CREATE TABLE** and **INSERT INTO** SQL statements. The code uses sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses method createStatement() and execute() to run the database commands. --Replace the Host, DBName, User, and Password parameters. You can replace the parameters with the values that you specified when you created the server and database. --```c++ -#include <stdlib.h> -#include <iostream> -#include "stdafx.h" --#include "mysql_connection.h" -#include <cppconn/driver.h> -#include <cppconn/exception.h> -#include <cppconn/prepared_statement.h> -using namespace std; --//for demonstration only. never save your password in the code! -const string server = "tcp://yourservername.mysql.database.azure.com:3306"; -const string username = "username@servername"; -const string password = "yourpassword"; --int main() -{ - sql::Driver *driver; - sql::Connection *con; - sql::Statement *stmt; - sql::PreparedStatement *pstmt; -- try - { - driver = get_driver_instance(); - con = driver->connect(server, username, password); - } - catch (sql::SQLException e) - { - cout << "Could not connect to server. Error message: " << e.what() << endl; - system("pause"); - exit(1); - } -- //please create database "quickstartdb" ahead of time - con->setSchema("quickstartdb"); -- stmt = con->createStatement(); - stmt->execute("DROP TABLE IF EXISTS inventory"); - cout << "Finished dropping table (if existed)" << endl; - stmt->execute("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);"); - cout << "Finished creating table" << endl; - delete stmt; -- pstmt = con->prepareStatement("INSERT INTO inventory(name, quantity) VALUES(?,?)"); - pstmt->setString(1, "banana"); - pstmt->setInt(2, 150); - pstmt->execute(); - cout << "One row inserted." << endl; -- pstmt->setString(1, "orange"); - pstmt->setInt(2, 154); - pstmt->execute(); - cout << "One row inserted." << endl; -- pstmt->setString(1, "apple"); - pstmt->setInt(2, 100); - pstmt->execute(); - cout << "One row inserted." << endl; -- delete pstmt; - delete con; - system("pause"); - return 0; -} -``` --## Read data --Use the following code to connect and read the data by using a **SELECT** SQL statement. The code uses sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses method prepareStatement() and executeQuery() to run the select commands. Next, the code uses next() to advance to the records in the results. Finally, the code uses getInt() and getString() to parse the values in the record. --Replace the Host, DBName, User, and Password parameters. You can replace the parameters with the values that you specified when you created the server and database. 
--```c++ -#include <stdlib.h> -#include <iostream> -#include "stdafx.h" --#include "mysql_connection.h" -#include <cppconn/driver.h> -#include <cppconn/exception.h> -#include <cppconn/resultset.h> -#include <cppconn/prepared_statement.h> -using namespace std; --//for demonstration only. never save your password in the code! -const string server = "tcp://yourservername.mysql.database.azure.com:3306"; -const string username = "username@servername"; -const string password = "yourpassword"; --int main() -{ - sql::Driver *driver; - sql::Connection *con; - sql::PreparedStatement *pstmt; - sql::ResultSet *result; -- try - { - driver = get_driver_instance(); - //for demonstration only. never save password in the code! - con = driver->connect(server, username, password); - } - catch (sql::SQLException e) - { - cout << "Could not connect to server. Error message: " << e.what() << endl; - system("pause"); - exit(1); - } -- con->setSchema("quickstartdb"); -- //select - pstmt = con->prepareStatement("SELECT * FROM inventory;"); - result = pstmt->executeQuery(); -- while (result->next()) - printf("Reading from table=(%d, %s, %d)\n", result->getInt(1), result->getString(2).c_str(), result->getInt(3)); -- delete result; - delete pstmt; - delete con; - system("pause"); - return 0; -} -``` --## Update data -Use the following code to connect and update the data by using an **UPDATE** SQL statement. The code uses sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses method prepareStatement() and executeUpdate() to run the update command. --Replace the Host, DBName, User, and Password parameters. You can replace the parameters with the values that you specified when you created the server and database. --```c++ -#include <stdlib.h> -#include <iostream> -#include "stdafx.h" --#include "mysql_connection.h" -#include <cppconn/driver.h> -#include <cppconn/exception.h> -#include <cppconn/resultset.h> -#include <cppconn/prepared_statement.h> -using namespace std; --//for demonstration only. never save your password in the code! -const string server = "tcp://yourservername.mysql.database.azure.com:3306"; -const string username = "username@servername"; -const string password = "yourpassword"; --int main() -{ - sql::Driver *driver; - sql::Connection *con; - sql::PreparedStatement *pstmt; -- try - { - driver = get_driver_instance(); - //for demonstration only. never save password in the code! - con = driver->connect(server, username, password); - } - catch (sql::SQLException e) - { - cout << "Could not connect to server. Error message: " << e.what() << endl; - system("pause"); - exit(1); - } - - con->setSchema("quickstartdb"); -- //update - pstmt = con->prepareStatement("UPDATE inventory SET quantity = ? WHERE name = ?"); - pstmt->setInt(1, 200); - pstmt->setString(2, "banana"); - pstmt->executeUpdate(); - printf("Row updated\n"); -- delete pstmt; - delete con; - system("pause"); - return 0; -} -``` ---## Delete data -Use the following code to connect and delete the data by using a **DELETE** SQL statement. The code uses sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses method prepareStatement() and executeUpdate() to run the delete command. --Replace the Host, DBName, User, and Password parameters. You can replace the parameters with the values that you specified when you created the server and database.
--```c++ -#include <stdlib.h> -#include <iostream> -#include "stdafx.h" --#include "mysql_connection.h" -#include <cppconn/driver.h> -#include <cppconn/exception.h> -#include <cppconn/resultset.h> -#include <cppconn/prepared_statement.h> -using namespace std; --//for demonstration only. never save your password in the code! -const string server = "tcp://yourservername.mysql.database.azure.com:3306"; -const string username = "username@servername"; -const string password = "yourpassword"; --int main() -{ - sql::Driver *driver; - sql::Connection *con; - sql::PreparedStatement *pstmt; -- try - { - driver = get_driver_instance(); - //for demonstration only. never save password in the code! - con = driver->connect(server, username, password); - } - catch (sql::SQLException e) - { - cout << "Could not connect to server. Error message: " << e.what() << endl; - system("pause"); - exit(1); - } - - con->setSchema("quickstartdb"); - - //delete - pstmt = con->prepareStatement("DELETE FROM inventory WHERE name = ?"); - pstmt->setString(1, "orange"); - pstmt->executeUpdate(); - printf("Row deleted\n"); - - delete pstmt; - delete con; - system("pause"); - return 0; -} -``` --## Clean up resources --To clean up all resources used during this quickstart, delete the resource group using the following command: --```azurecli -az group delete \ - --name $AZ_RESOURCE_GROUP \ - --yes -``` --## Next steps -> [!div class="nextstepaction"] -> [Migrate your MySQL database to Azure Database for MySQL using dump and restore](concepts-migrate-dump-restore.md) |
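A possible refinement to the Connector/C++ samples above (not part of the original quickstart): the raw pointers returned by connect(), prepareStatement(), and executeQuery() are easy to leak if an exception is thrown before the matching delete. The following is a minimal sketch of the read sample rewritten with std::unique_ptr, assuming the same MySQL Connector/C++ 1.1 API and the same placeholder server, username, and password values:

```c++
// Sketch only: RAII ownership of Connector/C++ objects, so the statement, result
// set, and connection are released even if an exception is thrown mid-way.
#include <iostream>
#include <memory>

#include "mysql_connection.h"
#include <cppconn/driver.h>
#include <cppconn/exception.h>
#include <cppconn/prepared_statement.h>
#include <cppconn/resultset.h>
using namespace std;

//for demonstration only. never save your password in the code!
const string server = "tcp://yourservername.mysql.database.azure.com:3306";
const string username = "username@servername";
const string password = "yourpassword";

int main()
{
    try
    {
        // The driver object is managed by the connector library; don't delete it.
        sql::Driver *driver = get_driver_instance();
        unique_ptr<sql::Connection> con(driver->connect(server, username, password));
        con->setSchema("quickstartdb");

        unique_ptr<sql::PreparedStatement> pstmt(con->prepareStatement("SELECT * FROM inventory;"));
        unique_ptr<sql::ResultSet> result(pstmt->executeQuery());

        while (result->next())
            printf("Reading from table=(%d, %s, %d)\n", result->getInt(1), result->getString(2).c_str(), result->getInt(3));
    }
    catch (sql::SQLException &e)
    {
        cout << "SQL error: " << e.what() << endl;
        return 1;
    }
    return 0; // unique_ptr destructors clean up the statement, result set, and connection here
}
```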
mysql | Connect Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-csharp.md | - Title: 'Quickstart: Connect using C# - Azure Database for MySQL' -description: "This quickstart provides a C# (.NET) code sample you can use to connect and query data from Azure Database for MySQL." ------ Previously updated : 06/20/2022---# Quickstart: Use .NET (C#) to connect and query data in Azure Database for MySQL ----This quickstart demonstrates how to connect to an Azure Database for MySQL by using a C# application. It shows how to use SQL statements to query, insert, update, and delete data in the database. --## Prerequisites -For this quickstart you need: --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).-- Create an Azure Database for MySQL single server using [Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md) <br/> or [Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md) if you do not have one.-- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.-- Install the [.NET SDK for your platform](https://dotnet.microsoft.com/download) (Windows, Ubuntu Linux, or macOS) for your platform.--|Action| Connectivity method|How-to guide| -|: |: |: | -| **Configure firewall rules** | Public | [Portal](./how-to-manage-firewall-using-portal.md) <br/> [CLI](./how-to-manage-firewall-using-cli.md)| -| **Configure Service Endpoint** | Public | [Portal](./how-to-manage-vnet-using-portal.md) <br/> [CLI](./how-to-manage-vnet-using-cli.md)| -| **Configure private link** | Private | [Portal](./how-to-configure-private-link-portal.md) <br/> [CLI](./how-to-configure-private-link-cli.md) | --- [Create a database and non-admin user](./how-to-create-users.md)---## Create a C# project -At a command prompt, run: --``` -mkdir AzureMySqlExample -cd AzureMySqlExample -dotnet new console -dotnet add package MySqlConnector -``` --## Get connection information -Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials. --1. Log in to the [Azure portal](https://portal.azure.com/). -2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**). -3. Click the server name. -4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel. - :::image type="content" source="./media/connect-csharp/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name"::: --## Step 1: Connect and insert data -Use the following code to connect and load the data by using `CREATE TABLE` and `INSERT INTO` SQL statements. The code uses the methods of the `MySqlConnection` class: -- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.-- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand), sets the CommandText property-- [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands. --Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database. 
--```csharp -using System; -using System.Threading.Tasks; -using MySqlConnector; --namespace AzureMySqlExample -{ - class MySqlCreate - { - static async Task Main(string[] args) - { - var builder = new MySqlConnectionStringBuilder - { - Server = "YOUR-SERVER.mysql.database.azure.com", - Database = "YOUR-DATABASE", - UserID = "USER@YOUR-SERVER", - Password = "PASSWORD", - SslMode = MySqlSslMode.Required, - }; -- using (var conn = new MySqlConnection(builder.ConnectionString)) - { - Console.WriteLine("Opening connection"); - await conn.OpenAsync(); -- using (var command = conn.CreateCommand()) - { - command.CommandText = "DROP TABLE IF EXISTS inventory;"; - await command.ExecuteNonQueryAsync(); - Console.WriteLine("Finished dropping table (if existed)"); -- command.CommandText = "CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);"; - await command.ExecuteNonQueryAsync(); - Console.WriteLine("Finished creating table"); -- command.CommandText = @"INSERT INTO inventory (name, quantity) VALUES (@name1, @quantity1), - (@name2, @quantity2), (@name3, @quantity3);"; - command.Parameters.AddWithValue("@name1", "banana"); - command.Parameters.AddWithValue("@quantity1", 150); - command.Parameters.AddWithValue("@name2", "orange"); - command.Parameters.AddWithValue("@quantity2", 154); - command.Parameters.AddWithValue("@name3", "apple"); - command.Parameters.AddWithValue("@quantity3", 100); -- int rowCount = await command.ExecuteNonQueryAsync(); - Console.WriteLine(String.Format("Number of rows inserted={0}", rowCount)); - } -- // connection will be closed by the 'using' block - Console.WriteLine("Closing connection"); - } -- Console.WriteLine("Press RETURN to exit"); - Console.ReadLine(); - } - } -} -``` --## Step 2: Read data --Use the following code to connect and read the data by using a `SELECT` SQL statement. The code uses the `MySqlConnection` class with methods: -- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.-- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.-- [ExecuteReaderAsync()](/dotnet/api/system.data.common.dbcommand.executereaderasync) to run the database commands. -- [ReadAsync()](/dotnet/api/system.data.common.dbdatareader.readasync#System_Data_Common_DbDataReader_ReadAsync) to advance to the records in the results. Then the code uses GetInt32 and GetString to parse the values in the record.---Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database. 
--```csharp -using System; -using System.Threading.Tasks; -using MySqlConnector; --namespace AzureMySqlExample -{ - class MySqlRead - { - static async Task Main(string[] args) - { - var builder = new MySqlConnectionStringBuilder - { - Server = "YOUR-SERVER.mysql.database.azure.com", - Database = "YOUR-DATABASE", - UserID = "USER@YOUR-SERVER", - Password = "PASSWORD", - SslMode = MySqlSslMode.Required, - }; -- using (var conn = new MySqlConnection(builder.ConnectionString)) - { - Console.WriteLine("Opening connection"); - await conn.OpenAsync(); -- using (var command = conn.CreateCommand()) - { - command.CommandText = "SELECT * FROM inventory;"; -- using (var reader = await command.ExecuteReaderAsync()) - { - while (await reader.ReadAsync()) - { - Console.WriteLine(string.Format( - "Reading from table=({0}, {1}, {2})", - reader.GetInt32(0), - reader.GetString(1), - reader.GetInt32(2))); - } - } - } -- Console.WriteLine("Closing connection"); - } -- Console.WriteLine("Press RETURN to exit"); - Console.ReadLine(); - } - } -} -``` --## Step 3: Update data -Use the following code to connect and read the data by using an `UPDATE` SQL statement. The code uses the `MySqlConnection` class with method: -- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL. -- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property-- [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands. ---Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database. --```csharp -using System; -using System.Threading.Tasks; -using MySqlConnector; --namespace AzureMySqlExample -{ - class MySqlUpdate - { - static async Task Main(string[] args) - { - var builder = new MySqlConnectionStringBuilder - { - Server = "YOUR-SERVER.mysql.database.azure.com", - Database = "YOUR-DATABASE", - UserID = "USER@YOUR-SERVER", - Password = "PASSWORD", - SslMode = MySqlSslMode.Required, - }; -- using (var conn = new MySqlConnection(builder.ConnectionString)) - { - Console.WriteLine("Opening connection"); - await conn.OpenAsync(); -- using (var command = conn.CreateCommand()) - { - command.CommandText = "UPDATE inventory SET quantity = @quantity WHERE name = @name;"; - command.Parameters.AddWithValue("@quantity", 200); - command.Parameters.AddWithValue("@name", "banana"); -- int rowCount = await command.ExecuteNonQueryAsync(); - Console.WriteLine(String.Format("Number of rows updated={0}", rowCount)); - } -- Console.WriteLine("Closing connection"); - } -- Console.WriteLine("Press RETURN to exit"); - Console.ReadLine(); - } - } -} -``` --## Step 4: Delete data -Use the following code to connect and delete the data by using a `DELETE` SQL statement. --The code uses the `MySqlConnection` class with method -- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.-- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.-- [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands. ---Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database. 
--```csharp -using System; -using System.Threading.Tasks; -using MySqlConnector; --namespace AzureMySqlExample -{ - class MySqlDelete - { - static async Task Main(string[] args) - { - var builder = new MySqlConnectionStringBuilder - { - Server = "YOUR-SERVER.mysql.database.azure.com", - Database = "YOUR-DATABASE", - UserID = "USER@YOUR-SERVER", - Password = "PASSWORD", - SslMode = MySqlSslMode.Required, - }; -- using (var conn = new MySqlConnection(builder.ConnectionString)) - { - Console.WriteLine("Opening connection"); - await conn.OpenAsync(); -- using (var command = conn.CreateCommand()) - { - command.CommandText = "DELETE FROM inventory WHERE name = @name;"; - command.Parameters.AddWithValue("@name", "orange"); -- int rowCount = await command.ExecuteNonQueryAsync(); - Console.WriteLine(String.Format("Number of rows deleted={0}", rowCount)); - } -- Console.WriteLine("Closing connection"); - } -- Console.WriteLine("Press RETURN to exit"); - Console.ReadLine(); - } - } -} -``` --## Clean up resources --To clean up all resources used during this quickstart, delete the resource group using the following command: --```azurecli -az group delete \ - --name $AZ_RESOURCE_GROUP \ - --yes -``` --## Next steps -> [!div class="nextstepaction"] -> [Manage Azure Database for MySQL server using Portal](./how-to-create-manage-server-portal.md)<br/> --> [!div class="nextstepaction"] -> [Manage Azure Database for MySQL server using CLI](./how-to-manage-single-server-cli.md) --[Cannot find what you are looking for?Let us know.](https://aka.ms/mysql-doc-feedback) |
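One optional hardening of the C# samples above, not part of the original quickstart: instead of hard-coding the password in MySqlConnectionStringBuilder, you could read the connection values from environment variables. The variable names below (MYSQL_SERVER, MYSQL_DATABASE, MYSQL_USER, MYSQL_PASSWORD) are placeholders you would define yourself; everything else uses the same MySqlConnector types shown earlier:

```csharp
using System;
using MySqlConnector;

static class ConnectionStringHelper
{
    // Sketch only: build the connection string from environment variables rather
    // than hard-coded credentials. The variable names are placeholders.
    public static string FromEnvironment()
    {
        var builder = new MySqlConnectionStringBuilder
        {
            Server   = Environment.GetEnvironmentVariable("MYSQL_SERVER"),
            Database = Environment.GetEnvironmentVariable("MYSQL_DATABASE"),
            UserID   = Environment.GetEnvironmentVariable("MYSQL_USER"),
            Password = Environment.GetEnvironmentVariable("MYSQL_PASSWORD"),
            SslMode  = MySqlSslMode.Required,
        };
        return builder.ConnectionString;
    }
}
```

You could then pass `ConnectionStringHelper.FromEnvironment()` to `new MySqlConnection(...)` in any of the samples above.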
mysql | Connect Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-go.md | - Title: 'Quickstart: Connect using Go - Azure Database for MySQL' -description: This quickstart provides several Go code samples you can use to connect and query data from Azure Database for MySQL. ------ Previously updated : 05/03/2023---# Quickstart: Use Go language to connect and query data in Azure Database for MySQL ----This quickstart demonstrates how to connect to an Azure Database for MySQL from Windows, Ubuntu Linux, and Apple macOS platforms by using code written in the [Go](https://go.dev/) language. It shows how to use SQL statements to query, insert, update, and delete data in the database. This topic assumes that you are familiar with development using Go and that you are new to working with Azure Database for MySQL. --## Prerequisites --This quickstart uses the resources created in either of these guides as a starting point: --- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)-- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)--> [!IMPORTANT] -> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md) --## Install Go and MySQL connector --Install [Go](https://go.dev/doc/install) and the [go-sql-driver for MySQL](https://github.com/go-sql-driver/mysql#installation) on your own computer. Depending on your platform, follow the steps in the appropriate section: --### [Windows](#tab/windows) --1. [Download](https://go.dev/dl/) and install Go for Microsoft Windows according to the [installation instructions](https://go.dev/doc/install). -2. Launch the command prompt from the start menu. -3. Make a folder for your project, such as `mkdir %USERPROFILE%\go\src\mysqlgo`. -4. Change directory into the project folder, such as `cd %USERPROFILE%\go\src\mysqlgo`. -5. Set the environment variable for GOPATH to point to the source code directory. `set GOPATH=%USERPROFILE%\go`. -6. Install the [go-sql-driver for mysql](https://github.com/go-sql-driver/mysql#installation) by running the `go get github.com/go-sql-driver/mysql` command. -- In summary, install Go, then run these commands in the command prompt: -- ```cmd - mkdir %USERPROFILE%\go\src\mysqlgo - cd %USERPROFILE%\go\src\mysqlgo - set GOPATH=%USERPROFILE%\go - go get github.com/go-sql-driver/mysql - ``` --### [Linux (Ubuntu)](#tab/ubuntu) --1. Launch the Bash shell. -2. Install Go by running `sudo apt-get install golang-go`. -3. Make a folder for your project in your home directory, such as `mkdir -p ~/go/src/mysqlgo/`. -4. Change directory into the folder, such as `cd ~/go/src/mysqlgo/`. -5. Set the GOPATH environment variable to point to a valid source directory, such as your current home directory's go folder. At the Bash shell, run `export GOPATH=~/go` to add the go directory as the GOPATH for the current shell session. -6. Install the [go-sql-driver for mysql](https://github.com/go-sql-driver/mysql#installation) by running the `go get github.com/go-sql-driver/mysql` command.
-- In summary, run these bash commands: -- ```bash - sudo apt-get install golang-go git -y - mkdir -p ~/go/src/mysqlgo/ - cd ~/go/src/mysqlgo/ - export GOPATH=~/go/ - go get github.com/go-sql-driver/mysql - ``` --### [Apple macOS](#tab/macos) --1. Download and install Go according to the [installation instructions](https://go.dev/doc/install) matching your platform. -2. Launch the Bash shell. -3. Make a folder for your project in your home directory, such as `mkdir -p ~/go/src/mysqlgo/`. -4. Change directory into the folder, such as `cd ~/go/src/mysqlgo/`. -5. Set the GOPATH environment variable to point to a valid source directory, such as your current home directory's go folder. At the Bash shell, run `export GOPATH=~/go` to add the go directory as the GOPATH for the current shell session. -6. Install the [go-sql-driver for mysql](https://github.com/go-sql-driver/mysql#installation) by running the `go get github.com/go-sql-driver/mysql` command. -- In summary, install Go, then run these bash commands: -- ```bash - mkdir -p ~/go/src/mysqlgo/ - cd ~/go/src/mysqlgo/ - export GOPATH=~/go/ - go get github.com/go-sql-driver/mysql - ``` ----## Get connection information --Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials. --1. Log in to the [Azure portal](https://portal.azure.com/). -2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**). -3. Click the server name. -4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel. - :::image type="content" source="./media/connect-go/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name"::: --## Build and run Go code --1. To write Golang code, you can use a simple text editor, such as Notepad in Microsoft Windows, [vi](https://manpages.ubuntu.com/manpages/xenial/man1/nvi.1.html#contenttoc5) or [Nano](https://www.nano-editor.org/) in Ubuntu, or TextEdit in macOS. If you prefer a richer Interactive Development Environment (IDE), try [Gogland](https://www.jetbrains.com/go/) by Jetbrains, [Visual Studio Code](https://code.visualstudio.com/) by Microsoft, or [Atom](https://atom.io/). -2. Paste the Go code from the sections below into text files, and then save them into your project folder with file extension \*.go (such as Windows path `%USERPROFILE%\go\src\mysqlgo\createtable.go` or Linux path `~/go/src/mysqlgo/createtable.go`). -3. Locate the `HOST`, `DATABASE`, `USER`, and `PASSWORD` constants in the code, and then replace the example values with your own values. -4. Launch the command prompt or Bash shell. Change directory into your project folder. For example, on Windows `cd %USERPROFILE%\go\src\mysqlgo\`. On Linux `cd ~/go/src/mysqlgo/`. Some of the IDE editors mentioned offer debug and runtime capabilities without requiring shell commands. -5. Run the code by typing the command `go run createtable.go` to compile the application and run it. -6. Alternatively, to build the code into a native application, `go build createtable.go`, then launch `createtable.exe` to run the application. --## Connect, create table, and insert data --Use the following code to connect to the server, create a table, and load the data by using an **INSERT** SQL statement. 
--The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for printed input and output on the command line. --The code calls method [sql.Open()](http://go-database-sql.org/accessing.html) to connect to Azure Database for MySQL, and it checks the connection by using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method several times to run several DDL commands. The code also uses [Prepare()](http://go-database-sql.org/prepared.html) and Exec() to run prepared statements with different parameters to insert three rows. Each time, a custom checkError() method is used to check if an error occurred and panic to exit. --Replace the `host`, `database`, `user`, and `password` constants with your own values. --```Go -package main --import ( - "database/sql" - "fmt" -- _ "github.com/go-sql-driver/mysql" -) --const ( - host = "mydemoserver.mysql.database.azure.com" - database = "quickstartdb" - user = "myadmin@mydemoserver" - password = "yourpassword" -) --func checkError(err error) { - if err != nil { - panic(err) - } -} --func main() { -- // Initialize connection string. - var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database) -- // Initialize connection object. - db, err := sql.Open("mysql", connectionString) - checkError(err) - defer db.Close() -- err = db.Ping() - checkError(err) - fmt.Println("Successfully created connection to database.") -- // Drop previous table of same name if one exists. - _, err = db.Exec("DROP TABLE IF EXISTS inventory;") - checkError(err) - fmt.Println("Finished dropping table (if existed).") -- // Create table. - _, err = db.Exec("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);") - checkError(err) - fmt.Println("Finished creating table.") -- // Insert some data into table. - sqlStatement, err := db.Prepare("INSERT INTO inventory (name, quantity) VALUES (?, ?);") - res, err := sqlStatement.Exec("banana", 150) - checkError(err) - rowCount, err := res.RowsAffected() - fmt.Printf("Inserted %d row(s) of data.\n", rowCount) -- res, err = sqlStatement.Exec("orange", 154) - checkError(err) - rowCount, err = res.RowsAffected() - fmt.Printf("Inserted %d row(s) of data.\n", rowCount) -- res, err = sqlStatement.Exec("apple", 100) - checkError(err) - rowCount, err = res.RowsAffected() - fmt.Printf("Inserted %d row(s) of data.\n", rowCount) - fmt.Println("Done.") -} --``` --## Read data --Use the following code to connect and read the data by using a **SELECT** SQL statement. --The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for printed input and output on the command line. --The code calls method [sql.Open()](http://go-database-sql.org/accessing.html) to connect to Azure Database for MySQL, and checks the connection using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). 
A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Query()](https://go.dev/pkg/database/sql/#DB.Query) method to run the select command. Then it runs [Next()](https://go.dev/pkg/database/sql/#Rows.Next) to iterate through the result set and [Scan()](https://go.dev/pkg/database/sql/#Rows.Scan) to parse the column values, saving the value into variables. Each time a custom checkError() method is used to check if an error occurred and panic to exit. --Replace the `host`, `database`, `user`, and `password` constants with your own values. --```Go -package main --import ( - "database/sql" - "fmt" -- _ "github.com/go-sql-driver/mysql" -) --const ( - host = "mydemoserver.mysql.database.azure.com" - database = "quickstartdb" - user = "myadmin@mydemoserver" - password = "yourpassword" -) --func checkError(err error) { - if err != nil { - panic(err) - } -} --func main() { -- // Initialize connection string. - var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database) -- // Initialize connection object. - db, err := sql.Open("mysql", connectionString) - checkError(err) - defer db.Close() -- err = db.Ping() - checkError(err) - fmt.Println("Successfully created connection to database.") -- // Variables for printing column data when scanned. - var ( - id int - name string - quantity int - ) -- // Read some data from the table. - rows, err := db.Query("SELECT id, name, quantity from inventory;") - checkError(err) - defer rows.Close() - fmt.Println("Reading data:") - for rows.Next() { - err := rows.Scan(&id, &name, &quantity) - checkError(err) - fmt.Printf("Data row = (%d, %s, %d)\n", id, name, quantity) - } - err = rows.Err() - checkError(err) - fmt.Println("Done.") -} -``` --## Update data --Use the following code to connect and update the data using a **UPDATE** SQL statement. --The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for printed input and output on the command line. --The code calls method [sql.Open()](http://go-database-sql.org/accessing.html) to connect to Azure Database for MySQL, and checks the connection using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method to run the update command. Each time a custom checkError() method is used to check if an error occurred and panic to exit. --Replace the `host`, `database`, `user`, and `password` constants with your own values. --```Go -package main --import ( - "database/sql" - "fmt" -- _ "github.com/go-sql-driver/mysql" -) --const ( - host = "mydemoserver.mysql.database.azure.com" - database = "quickstartdb" - user = "myadmin@mydemoserver" - password = "yourpassword" -) --func checkError(err error) { - if err != nil { - panic(err) - } -} --func main() { -- // Initialize connection string. - var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database) -- // Initialize connection object. 
- db, err := sql.Open("mysql", connectionString) - checkError(err) - defer db.Close() -- err = db.Ping() - checkError(err) - fmt.Println("Successfully created connection to database.") -- // Modify some data in table. - rows, err := db.Exec("UPDATE inventory SET quantity = ? WHERE name = ?", 200, "banana") - checkError(err) - rowCount, err := rows.RowsAffected() - fmt.Printf("Updated %d row(s) of data.\n", rowCount) - fmt.Println("Done.") -} -``` --## Delete data --Use the following code to connect and remove data using a **DELETE** SQL statement. --The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for printed input and output on the command line. --The code calls method [sql.Open()](http://go-database-sql.org/accessing.html) to connect to Azure Database for MySQL, and checks the connection using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method to run the delete command. Each time a custom checkError() method is used to check if an error occurred and panic to exit. --Replace the `host`, `database`, `user`, and `password` constants with your own values. --```Go -package main --import ( - "database/sql" - "fmt" - _ "github.com/go-sql-driver/mysql" -) --const ( - host = "mydemoserver.mysql.database.azure.com" - database = "quickstartdb" - user = "myadmin@mydemoserver" - password = "yourpassword" -) --func checkError(err error) { - if err != nil { - panic(err) - } -} --func main() { -- // Initialize connection string. - var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database) -- // Initialize connection object. - db, err := sql.Open("mysql", connectionString) - checkError(err) - defer db.Close() -- err = db.Ping() - checkError(err) - fmt.Println("Successfully created connection to database.") -- // Modify some data in table. - rows, err := db.Exec("DELETE FROM inventory WHERE name = ?", "orange") - checkError(err) - rowCount, err := rows.RowsAffected() - fmt.Printf("Deleted %d row(s) of data.\n", rowCount) - fmt.Println("Done.") -} -``` --## Clean up resources --To clean up all resources used during this quickstart, delete the resource group using the following command: --```azurecli-interactive -az group delete \ - --name $AZ_RESOURCE_GROUP \ - --yes -``` --## Next steps --> [!div class="nextstepaction"] -> [Migrate your database using Export and Import](../flexible-server/concepts-migrate-import-export.md) |
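One optional refinement to the Go samples above, not part of the original quickstart: the insert sample discards the error returned by db.Prepare and never closes the prepared statement. The following is a minimal sketch of the same insert step with those checks added, using the same placeholder connection values:

```Go
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/go-sql-driver/mysql"
)

// insertRow is a variation on the insert sample above: it checks the error
// returned by db.Prepare before using the statement and closes the statement
// when the function returns.
func insertRow(db *sql.DB, name string, quantity int) error {
	stmt, err := db.Prepare("INSERT INTO inventory (name, quantity) VALUES (?, ?);")
	if err != nil {
		return err
	}
	defer stmt.Close()

	res, err := stmt.Exec(name, quantity)
	if err != nil {
		return err
	}

	rowCount, err := res.RowsAffected()
	if err != nil {
		return err
	}
	fmt.Printf("Inserted %d row(s) of data.\n", rowCount)
	return nil
}

func main() {
	// Same placeholder values as the samples above.
	connectionString := fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true",
		"myadmin@mydemoserver", "yourpassword", "mydemoserver.mysql.database.azure.com", "quickstartdb")

	db, err := sql.Open("mysql", connectionString)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	if err := insertRow(db, "banana", 150); err != nil {
		panic(err)
	}
}
```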
mysql | Connect Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-java.md | - Title: 'Quickstart: Use Java and JDBC with Azure Database for MySQL' -description: Learn how to use Java and JDBC with an Azure Database for MySQL database. ------ Previously updated : 05/03/2023---# Quickstart: Use Java and JDBC with Azure Database for MySQL ----This article demonstrates creating a sample application that uses Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to store and retrieve information in [Azure Database for MySQL](./index.yml). --JDBC is the standard Java API to connect to traditional relational databases. --In this article, we'll include two authentication methods: Microsoft Entra authentication and MySQL authentication. The **Passwordless** tab shows the Microsoft Entra authentication and the **Password** tab shows the MySQL authentication. --Microsoft Entra authentication is a mechanism for connecting to Azure Database for MySQL using identities defined in Microsoft Entra ID. With Microsoft Entra authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management. --MySQL authentication uses accounts stored in MySQL. If you choose to use passwords as credentials for the accounts, these credentials will be stored in the `user` table. Because these passwords are stored in MySQL, you'll need to manage the rotation of the passwords by yourself. --## Prerequisites --- An Azure account. If you don't have one, [get a free trial](https://azure.microsoft.com/free/).-- [Azure Cloud Shell](../../cloud-shell/quickstart.md) or [Azure CLI](/cli/azure/install-azure-cli). We recommend Azure Cloud Shell so you'll be logged in automatically and have access to all the tools you'll need.-- A supported [Java Development Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8 (included in Azure Cloud Shell).-- The [Apache Maven](https://maven.apache.org/) build tool.-- MySQL command line client. You can connect to your server using the [mysql.exe](https://dev.mysql.com/downloads/) command-line tool with Azure Cloud Shell. Alternatively, you can use the `mysql` command line in your local environment.--## Prepare the working environment --First, set up some environment variables. In [Azure Cloud Shell](https://shell.azure.com/), run the following commands: --### [Passwordless (Recommended)](#tab/passwordless) --```bash -export AZ_RESOURCE_GROUP=database-workshop -export AZ_DATABASE_SERVER_NAME=<YOUR_DATABASE_SERVER_NAME> -export AZ_DATABASE_NAME=demo -export AZ_LOCATION=<YOUR_AZURE_REGION> -export AZ_MYSQL_AD_NON_ADMIN_USERNAME=demo-non-admin -export AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS> -export CURRENT_USERNAME=$(az ad signed-in-user show --query userPrincipalName -o tsv) -export CURRENT_USER_OBJECTID=$(az ad signed-in-user show --query id -o tsv) -``` --Replace the placeholders with the following values, which are used throughout this article: --- `<YOUR_DATABASE_SERVER_NAME>`: The name of your MySQL server, which should be unique across Azure.-- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can see the full list of available regions by entering `az account list-locations`.-- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your application. 
One convenient way to find it is to open [whatismyip.akamai.com](http://whatismyip.akamai.com/).--### [Password](#tab/password) --```bash -export AZ_RESOURCE_GROUP=database-workshop -export AZ_DATABASE_SERVER_NAME=<YOUR_DATABASE_SERVER_NAME> -export AZ_DATABASE_NAME=demo -export AZ_LOCATION=<YOUR_AZURE_REGION> -export AZ_MYSQL_ADMIN_USERNAME=demo -export AZ_MYSQL_ADMIN_PASSWORD=<YOUR_MYSQL_ADMIN_PASSWORD> -export AZ_MYSQL_NON_ADMIN_USERNAME=demo-non-admin -export AZ_MYSQL_NON_ADMIN_PASSWORD=<YOUR_MYSQL_NON_ADMIN_PASSWORD> -export AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS> -``` --Replace the placeholders with the following values, which are used throughout this article: --- `<YOUR_DATABASE_SERVER_NAME>`: The name of your MySQL server, which should be unique across Azure.-- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can have the full list of available regions by entering `az account list-locations`.-- `<YOUR_MYSQL_ADMIN_PASSWORD>` and `<YOUR_MYSQL_NON_ADMIN_PASSWORD>`: The password of your MySQL database server. That password should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).-- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Java application. One convenient way to find it is to open [whatismyip.akamai.com](http://whatismyip.akamai.com/).----Next, create a resource group by using the following command: --```azurecli-interactive -az group create \ - --name $AZ_RESOURCE_GROUP \ - --location $AZ_LOCATION \ - --output tsv -``` --## Create an Azure Database for MySQL instance --### Create a MySQL server and set up admin user --The first thing you'll create is a managed MySQL server. --> [!NOTE] -> You can read more detailed information about creating MySQL servers in [Quickstart: Create an Azure Database for MySQL server by using the Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md). --#### [Passwordless (Recommended)](#tab/passwordless) --If you're using Azure CLI, run the following command to make sure it has sufficient permission: --```azurecli-interactive -az login --scope https://graph.microsoft.com/.default -``` --Then, run the following command to create the server: --```azurecli-interactive -az mysql server create \ - --resource-group $AZ_RESOURCE_GROUP \ - --name $AZ_DATABASE_SERVER_NAME \ - --location $AZ_LOCATION \ - --sku-name B_Gen5_1 \ - --storage-size 5120 \ - --output tsv -``` --Next, run the following command to set the Microsoft Entra admin user: --```azurecli-interactive -az mysql server ad-admin create \ - --resource-group $AZ_RESOURCE_GROUP \ - --server-name $AZ_DATABASE_SERVER_NAME \ - --display-name $CURRENT_USERNAME \ - --object-id $CURRENT_USER_OBJECTID -``` --> [!IMPORTANT] -> When setting the administrator, a new user is added to the Azure Database for MySQL server with full administrator permissions. You can only create one Microsoft Entra admin per MySQL server. Selection of another user will overwrite the existing Microsoft Entra admin configured for the server. --This command creates a small MySQL server and sets the Active Directory admin to the signed-in user. 
--#### [Password](#tab/password) --```azurecli-interactive -az mysql server create \ - --resource-group $AZ_RESOURCE_GROUP \ - --name $AZ_DATABASE_SERVER_NAME \ - --location $AZ_LOCATION \ - --sku-name B_Gen5_1 \ - --storage-size 5120 \ - --admin-user $AZ_MYSQL_ADMIN_USERNAME \ - --admin-password $AZ_MYSQL_ADMIN_PASSWORD \ - --output tsv -``` --This command creates a small MySQL server. ----### Configure a firewall rule for your MySQL server --Azure Databases for MySQL instances are secured by default. These instances have a firewall that doesn't allow any incoming connection. To be able to use your database, you need to add a firewall rule that will allow the local IP address to access the database server. --Because you configured your local IP address at the beginning of this article, you can open the server's firewall by running the following command: --```azurecli-interactive -az mysql server firewall-rule create \ - --resource-group $AZ_RESOURCE_GROUP \ - --name $AZ_DATABASE_SERVER_NAME-database-allow-local-ip \ - --server $AZ_DATABASE_SERVER_NAME \ - --start-ip-address $AZ_LOCAL_IP_ADDRESS \ - --end-ip-address $AZ_LOCAL_IP_ADDRESS \ - --output tsv -``` --If you're connecting to your MySQL server from Windows Subsystem for Linux (WSL) on a Windows computer, you'll need to add the WSL host ID to your firewall. --Obtain the IP address of your host machine by running the following command in WSL: --```bash -cat /etc/resolv.conf -``` --Copy the IP address following the term `nameserver`, then use the following command to set an environment variable for the WSL IP Address: --```bash -AZ_WSL_IP_ADDRESS=<the-copied-IP-address> -``` --Then, use the following command to open the server's firewall to your WSL-based app: --```azurecli-interactive -az mysql server firewall-rule create \ - --resource-group $AZ_RESOURCE_GROUP \ - --name $AZ_DATABASE_SERVER_NAME-database-allow-local-ip-wsl \ - --server $AZ_DATABASE_SERVER_NAME \ - --start-ip-address $AZ_WSL_IP_ADDRESS \ - --end-ip-address $AZ_WSL_IP_ADDRESS \ - --output tsv -``` --### Configure a MySQL database --The MySQL server that you created earlier is empty. Use the following command to create a new database. --```azurecli-interactive -az mysql db create \ - --resource-group $AZ_RESOURCE_GROUP \ - --name $AZ_DATABASE_NAME \ - --server-name $AZ_DATABASE_SERVER_NAME \ - --output tsv -``` --### Create a MySQL non-admin user and grant permission --Next, create a non-admin user and grant all permissions to the database. --> [!NOTE] -> You can read more detailed information about creating MySQL users in [Create users in Azure Database for MySQL](./how-to-create-users.md). --#### [Passwordless (Recommended)](#tab/passwordless) --Create a SQL script called *create_ad_user.sql* for creating a non-admin user. 
Add the following contents and save it locally: --```bash -export AZ_MYSQL_AD_NON_ADMIN_USERID=$CURRENT_USER_OBJECTID --cat << EOF > create_ad_user.sql -SET aad_auth_validate_oids_in_tenant = OFF; --CREATE AADUSER '$AZ_MYSQL_AD_NON_ADMIN_USERNAME' IDENTIFIED BY '$AZ_MYSQL_AD_NON_ADMIN_USERID'; --GRANT ALL PRIVILEGES ON $AZ_DATABASE_NAME.* TO '$AZ_MYSQL_AD_NON_ADMIN_USERNAME'@'%'; --FLUSH privileges; --EOF -``` --Then, use the following command to run the SQL script to create the Microsoft Entra non-admin user: --```bash -mysql -h $AZ_DATABASE_SERVER_NAME.mysql.database.azure.com --user $CURRENT_USERNAME@$AZ_DATABASE_SERVER_NAME --enable-cleartext-plugin --password=$(az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken) < create_ad_user.sql -``` --Now use the following command to remove the temporary SQL script file: --```bash -rm create_ad_user.sql -``` --#### [Password](#tab/password) --Create a SQL script called *create_user.sql* for creating a non-admin user. Add the following contents and save it locally: --```bash -cat << EOF > create_user.sql --CREATE USER '$AZ_MYSQL_NON_ADMIN_USERNAME'@'%' IDENTIFIED BY '$AZ_MYSQL_NON_ADMIN_PASSWORD'; --GRANT ALL PRIVILEGES ON $AZ_DATABASE_NAME.* TO '$AZ_MYSQL_NON_ADMIN_USERNAME'@'%'; --FLUSH PRIVILEGES; --EOF -``` --Then, use the following command to run the SQL script to create the MySQL non-admin user: --```bash -mysql -h $AZ_DATABASE_SERVER_NAME.mysql.database.azure.com --user $AZ_MYSQL_ADMIN_USERNAME@$AZ_DATABASE_SERVER_NAME --enable-cleartext-plugin --password=$AZ_MYSQL_ADMIN_PASSWORD < create_user.sql -``` --Now use the following command to remove the temporary SQL script file: --```bash -rm create_user.sql -``` ----### Create a new Java project --Using your favorite IDE, create a new Java project using Java 8 or above.
Create a *pom.xml* file in its root directory and add the following contents: --#### [Passwordless (Recommended)](#tab/passwordless) --```xml -<?xml version="1.0" encoding="UTF-8"?> -<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" - xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> - <modelVersion>4.0.0</modelVersion> - <groupId>com.example</groupId> - <artifactId>demo</artifactId> - <version>0.0.1-SNAPSHOT</version> - <name>demo</name> -- <properties> - <java.version>1.8</java.version> - <maven.compiler.source>1.8</maven.compiler.source> - <maven.compiler.target>1.8</maven.compiler.target> - </properties> -- <dependencies> - <dependency> - <groupId>mysql</groupId> - <artifactId>mysql-connector-java</artifactId> - <version>8.0.30</version> - </dependency> - <dependency> - <groupId>com.azure</groupId> - <artifactId>azure-identity-extensions</artifactId> - <version>1.0.0</version> - </dependency> - </dependencies> -</project> -``` --#### [Password](#tab/password) --```xml -<?xml version="1.0" encoding="UTF-8"?> -<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" - xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> - <modelVersion>4.0.0</modelVersion> - <groupId>com.example</groupId> - <artifactId>demo</artifactId> - <version>0.0.1-SNAPSHOT</version> - <name>demo</name> -- <properties> - <java.version>1.8</java.version> - <maven.compiler.source>1.8</maven.compiler.source> - <maven.compiler.target>1.8</maven.compiler.target> - </properties> -- <dependencies> - <dependency> - <groupId>mysql</groupId> - <artifactId>mysql-connector-java</artifactId> - <version>8.0.30</version> - </dependency> - </dependencies> -</project> -``` ----This file is an [Apache Maven](https://maven.apache.org/) file that configures your project to use Java 8 and a recent MySQL driver for Java. --### Prepare a configuration file to connect to Azure Database for MySQL --Run the following script in the project root directory to create a *src/main/resources/database.properties* file and add configuration details: --#### [Passwordless (Recommended)](#tab/passwordless) --```bash -mkdir -p src/main/resources && touch src/main/resources/database.properties --cat << EOF > src/main/resources/database.properties -url=jdbc:mysql://${AZ_DATABASE_SERVER_NAME}.mysql.database.azure.com:3306/${AZ_DATABASE_NAME}?sslMode=REQUIRED&serverTimezone=UTC&defaultAuthenticationPlugin=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin&authenticationPlugins=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin -user=${AZ_MYSQL_AD_NON_ADMIN_USERNAME}@${AZ_DATABASE_SERVER_NAME} -EOF -``` --> [!NOTE] -> If you are using MysqlConnectionPoolDataSource class as the datasource in your application, please remove "defaultAuthenticationPlugin=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin" in the url. 
--```bash -mkdir -p src/main/resources && touch src/main/resources/database.properties --cat << EOF > src/main/resources/database.properties -url=jdbc:mysql://${AZ_DATABASE_SERVER_NAME}.mysql.database.azure.com:3306/${AZ_DATABASE_NAME}?sslMode=REQUIRED&serverTimezone=UTC&authenticationPlugins=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin -user=${AZ_MYSQL_AD_NON_ADMIN_USERNAME}@${AZ_DATABASE_SERVER_NAME} -EOF -``` --#### [Password](#tab/password) --```bash -mkdir -p src/main/resources && touch src/main/resources/database.properties --cat << EOF > src/main/resources/database.properties -url=jdbc:mysql://${AZ_DATABASE_SERVER_NAME}.mysql.database.azure.com:3306/${AZ_DATABASE_NAME}?useSSL=true&sslMode=REQUIRED&serverTimezone=UTC -user=${AZ_MYSQL_NON_ADMIN_USERNAME}@${AZ_DATABASE_SERVER_NAME} -password=${AZ_MYSQL_NON_ADMIN_PASSWORD} -EOF -``` ----> [!NOTE] -> The configuration property `url` has `?serverTimezone=UTC` appended to tell the JDBC driver to use the UTC date format (or Coordinated Universal Time) when connecting to the database. Otherwise, your Java server would not use the same date format as the database, which would result in an error. --### Create an SQL file to generate the database schema --Next, you'll use a *src/main/resources/schema.sql* file to create a database schema. Create that file, then add the following contents: --```bash -touch src/main/resources/schema.sql --cat << EOF > src/main/resources/schema.sql -DROP TABLE IF EXISTS todo; -CREATE TABLE todo (id SERIAL PRIMARY KEY, description VARCHAR(255), details VARCHAR(4096), done BOOLEAN); -EOF -``` --## Code the application --### Connect to the database --Next, add the Java code that will use JDBC to store and retrieve data from your MySQL server. --Create a *src/main/java/DemoApplication.java* file and add the following contents: --```java -package com.example.demo; --import com.mysql.cj.jdbc.AbandonedConnectionCleanupThread; --import java.sql.*; -import java.util.*; -import java.util.logging.Logger; --public class DemoApplication { -- private static final Logger log; -- static { - System.setProperty("java.util.logging.SimpleFormatter.format", "[%4$-7s] %5$s %n"); - log =Logger.getLogger(DemoApplication.class.getName()); - } -- public static void main(String[] args) throws Exception { - log.info("Loading application properties"); - Properties properties = new Properties(); - properties.load(DemoApplication.class.getClassLoader().getResourceAsStream("database.properties")); -- log.info("Connecting to the database"); - Connection connection = DriverManager.getConnection(properties.getProperty("url"), properties); - log.info("Database connection test: " + connection.getCatalog()); -- log.info("Create database schema"); - Scanner scanner = new Scanner(DemoApplication.class.getClassLoader().getResourceAsStream("schema.sql")); - Statement statement = connection.createStatement(); - while (scanner.hasNextLine()) { - statement.execute(scanner.nextLine()); - } -- /* Prepare to store and retrieve data from the MySQL server. 
- Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true); - insertData(todo, connection); - todo = readData(connection); - todo.setDetails("congratulations, you have updated data!"); - updateData(todo, connection); - deleteData(todo, connection); - */ -- log.info("Closing database connection"); - connection.close(); - AbandonedConnectionCleanupThread.uncheckedShutdown(); - } -} -``` --This Java code will use the *database.properties* and the *schema.sql* files that you created earlier. After connecting to the MySQL server, you can create a schema to store your data. --In this file, you can see that we commented methods to insert, read, update and delete data. You'll implement those methods in the rest of this article, and you'll be able to uncomment them one after each other. --> [!NOTE] -> The database credentials are stored in the *user* and *password* properties of the *database.properties* file. Those credentials are used when executing `DriverManager.getConnection(properties.getProperty("url"), properties);`, as the properties file is passed as an argument. --> [!NOTE] -> The `AbandonedConnectionCleanupThread.uncheckedShutdown();` line at the end is a MySQL driver command to destroy an internal thread when shutting down the application. You can safely ignore this line. --You can now execute this main class with your favorite tool: --- Using your IDE, you should be able to right-click on the *DemoApplication* class and execute it.-- Using Maven, you can run the application with the following command: `mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication"`.--The application should connect to the Azure Database for MySQL, create a database schema, and then close the connection. You should see output similar to the following example in the console logs: --```output -[INFO ] Loading application properties -[INFO ] Connecting to the database -[INFO ] Database connection test: demo -[INFO ] Create database schema -[INFO ] Closing database connection -``` --### Create a domain class --Create a new `Todo` Java class, next to the `DemoApplication` class, and add the following code: --```java -package com.example.demo; --public class Todo { -- private Long id; - private String description; - private String details; - private boolean done; -- public Todo() { - } -- public Todo(Long id, String description, String details, boolean done) { - this.id = id; - this.description = description; - this.details = details; - this.done = done; - } -- public Long getId() { - return id; - } -- public void setId(Long id) { - this.id = id; - } -- public String getDescription() { - return description; - } -- public void setDescription(String description) { - this.description = description; - } -- public String getDetails() { - return details; - } -- public void setDetails(String details) { - this.details = details; - } -- public boolean isDone() { - return done; - } -- public void setDone(boolean done) { - this.done = done; - } -- @Override - public String toString() { - return "Todo{" + - "id=" + id + - ", description='" + description + '\'' + - ", details='" + details + '\'' + - ", done=" + done + - '}'; - } -} -``` --This class is a domain model mapped on the `todo` table that you created when executing the *schema.sql* script. 
--### Insert data into Azure Database for MySQL --In the *src/main/java/DemoApplication.java* file, after the main method, add the following method to insert data into the database: --```java -private static void insertData(Todo todo, Connection connection) throws SQLException { - log.info("Insert data"); - PreparedStatement insertStatement = connection - .prepareStatement("INSERT INTO todo (id, description, details, done) VALUES (?, ?, ?, ?);"); -- insertStatement.setLong(1, todo.getId()); - insertStatement.setString(2, todo.getDescription()); - insertStatement.setString(3, todo.getDetails()); - insertStatement.setBoolean(4, todo.isDone()); - insertStatement.executeUpdate(); -} -``` --You can now uncomment the two following lines in the `main` method: --```java -Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true); -insertData(todo, connection); -``` --Executing the main class should now produce the following output: --```output -[INFO ] Loading application properties -[INFO ] Connecting to the database -[INFO ] Database connection test: demo -[INFO ] Create database schema -[INFO ] Insert data -[INFO ] Closing database connection -``` --### Reading data from Azure Database for MySQL --Next, read the data previously inserted to validate that your code works correctly. --In the *src/main/java/DemoApplication.java* file, after the `insertData` method, add the following method to read data from the database: --```java -private static Todo readData(Connection connection) throws SQLException { - log.info("Read data"); - PreparedStatement readStatement = connection.prepareStatement("SELECT * FROM todo;"); - ResultSet resultSet = readStatement.executeQuery(); - if (!resultSet.next()) { - log.info("There is no data in the database!"); - return null; - } - Todo todo = new Todo(); - todo.setId(resultSet.getLong("id")); - todo.setDescription(resultSet.getString("description")); - todo.setDetails(resultSet.getString("details")); - todo.setDone(resultSet.getBoolean("done")); - log.info("Data read from the database: " + todo.toString()); - return todo; -} -``` --You can now uncomment the following line in the `main` method: --```java -todo = readData(connection); -``` --Executing the main class should now produce the following output: --```output -[INFO ] Loading application properties -[INFO ] Connecting to the database -[INFO ] Database connection test: demo -[INFO ] Create database schema -[INFO ] Insert data -[INFO ] Read data -[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true} -[INFO ] Closing database connection -``` --### Updating data in Azure Database for MySQL --Next, update the data you previously inserted. --Still in the *src/main/java/DemoApplication.java* file, after the `readData` method, add the following method to update data inside the database: --```java -private static void updateData(Todo todo, Connection connection) throws SQLException { - log.info("Update data"); - PreparedStatement updateStatement = connection - .prepareStatement("UPDATE todo SET description = ?, details = ?, done = ? 
WHERE id = ?;"); -- updateStatement.setString(1, todo.getDescription()); - updateStatement.setString(2, todo.getDetails()); - updateStatement.setBoolean(3, todo.isDone()); - updateStatement.setLong(4, todo.getId()); - updateStatement.executeUpdate(); - readData(connection); -} -``` --You can now uncomment the two following lines in the `main` method: --```java -todo.setDetails("congratulations, you have updated data!"); -updateData(todo, connection); -``` --Executing the main class should now produce the following output: --```output -[INFO ] Loading application properties -[INFO ] Connecting to the database -[INFO ] Database connection test: demo -[INFO ] Create database schema -[INFO ] Insert data -[INFO ] Read data -[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true} -[INFO ] Update data -[INFO ] Read data -[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true} -[INFO ] Closing database connection -``` --### Deleting data in Azure Database for MySQL --Finally, delete the data you previously inserted. --Still in the *src/main/java/DemoApplication.java* file, after the `updateData` method, add the following method to delete data inside the database: --```java -private static void deleteData(Todo todo, Connection connection) throws SQLException { - log.info("Delete data"); - PreparedStatement deleteStatement = connection.prepareStatement("DELETE FROM todo WHERE id = ?;"); - deleteStatement.setLong(1, todo.getId()); - deleteStatement.executeUpdate(); - readData(connection); -} -``` --You can now uncomment the following line in the `main` method: --```java -deleteData(todo, connection); -``` --Executing the main class should now produce the following output: --```output -[INFO ] Loading application properties -[INFO ] Connecting to the database -[INFO ] Database connection test: demo -[INFO ] Create database schema -[INFO ] Insert data -[INFO ] Read data -[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true} -[INFO ] Update data -[INFO ] Read data -[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true} -[INFO ] Delete data -[INFO ] Read data -[INFO ] There is no data in the database! -[INFO ] Closing database connection -``` --## Clean up resources --Congratulations! You've created a Java application that uses JDBC to store and retrieve data from Azure Database for MySQL. --To clean up all resources used during this quickstart, delete the resource group using the following command: --```azurecli-interactive -az group delete \ - --name $AZ_RESOURCE_GROUP \ - --yes -``` --## Next steps --> [!div class="nextstepaction"] -> [Migrate your MySQL database to Azure Database for MySQL using dump and restore](concepts-migrate-dump-restore.md) |
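As a closing note on resource handling in the JDBC samples above (this refinement isn't part of the tutorial code): the `PreparedStatement` objects created in each method can be wrapped in try-with-resources so they're closed even when `executeUpdate` throws. A minimal sketch of the insert method written that way, reusing the tutorial's `Todo`, `Connection`, and `log`:

```java
// Sketch only: same behavior as the tutorial's insertData, with automatic statement cleanup.
private static void insertData(Todo todo, Connection connection) throws SQLException {
    log.info("Insert data");
    try (PreparedStatement insertStatement = connection
            .prepareStatement("INSERT INTO todo (id, description, details, done) VALUES (?, ?, ?, ?);")) {
        insertStatement.setLong(1, todo.getId());
        insertStatement.setString(2, todo.getDescription());
        insertStatement.setString(3, todo.getDetails());
        insertStatement.setBoolean(4, todo.isDone());
        insertStatement.executeUpdate();
    } // insertStatement is closed here, even if executeUpdate throws
}
```

The same pattern applies to the read, update, and delete methods, and to the `Connection` itself in `main`.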
mysql | Connect Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-nodejs.md | - Title: 'Quickstart: Connect using Node.js - Azure Database for MySQL' -description: This quickstart provides several Node.js code samples you can use to connect and query data from Azure Database for MySQL. ------ Previously updated : 05/03/2023---# Quickstart: Use Node.js to connect and query data in Azure Database for MySQL --> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md). ----In this quickstart, you connect to an Azure Database for MySQL by using Node.js. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Linux, and Windows platforms. --This article assumes that you're familiar with developing using Node.js, but you're new to working with Azure Database for MySQL. --## Prerequisites --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).-- An Azure Database for MySQL server. [Create an Azure Database for MySQL server using Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md) or [Create an Azure Database for MySQL server using Azure CLI](quickstart-create-mysql-server-database-using-azure-cli.md).--> [!IMPORTANT] -> Ensure the IP address you're connecting from has been added the server's firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md) --## Install Node.js and the MySQL connector --Depending on your platform, follow the instructions in the appropriate section to install [Node.js](https://nodejs.org). Use npm to install the [mysql2](https://www.npmjs.com/package/mysql2) package and its dependencies into your project folder. --### [Windows](#tab/windows) --1. Visit the [Node.js downloads page](https://nodejs.org/en/download/), and then select your desired Windows installer option. -2. Make a local project folder such as `nodejsmysql`. -3. Open the command prompt, and then change directory into the project folder, such as `cd c:\nodejsmysql\` -4. Run the NPM tool to install the mysql2 library into the project folder. -- ```cmd - cd c:\nodejsmysql\ - "C:\Program Files\nodejs\npm" install mysql2 - "C:\Program Files\nodejs\npm" list - ``` --5. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released. --### [Linux (Ubuntu/Debian)](#tab/ubuntu) --1. Run the following commands to install **Node.js** and **npm** the package manager for Node.js. -- ```bash - # Using Ubuntu - sudo curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash - - sudo apt-get install -y nodejs -- # Using Debian - sudo curl -sL https://deb.nodesource.com/setup_14.x | bash - - sudo apt-get install -y nodejs - ``` --2. Run the following commands to create a project folder `mysqlnodejs` and install the mysql2 package into that folder. -- ```bash - mkdir nodejsmysql - cd nodejsmysql - npm install --save mysql2 - npm list - ``` --3. Verify the installation by checking npm list output text. The version number may vary as new patches are released. --### [Linux (RHEL/CentOS)](#tab/rhel) --1. 
Run the following commands to install **Node.js** and **npm** the package manager for Node.js. -- **RHEL/CentOS 7.x** -- ```bash - sudo yum install -y rh-nodejs8 - scl enable rh-nodejs8 bash - ``` -- **RHEL/CentOS 8.x** -- ```bash - sudo yum install -y nodejs - ``` --1. Run the following commands to create a project folder `mysqlnodejs` and install the mysql2 package into that folder. -- ```bash - mkdir nodejsmysql - cd nodejsmysql - npm install --save mysql2 - npm list - ``` --1. Verify the installation by checking npm list output text. The version number may vary as new patches are released. --### [Linux (SUSE)](#tab/sles) --1. Run the following commands to install **Node.js** and **npm** the package manager for Node.js. -- ```bash - sudo zypper install nodejs - ``` --1. Run the following commands to create a project folder `mysqlnodejs` and install the mysql2 package into that folder. -- ```bash - mkdir nodejsmysql - cd nodejsmysql - npm install --save mysql2 - npm list - ``` --1. Verify the installation by checking npm list output text. The version number may vary as new patches are released. --### [macOS](#tab/macos) --1. Visit the [Node.js downloads page](https://nodejs.org/en/download/), and then select your macOS installer. --2. Run the following commands to create a project folder `mysqlnodejs` and install the mysql2 package into that folder. -- ```bash - mkdir nodejsmysql - cd nodejsmysql - npm install --save mysql2 - npm list - ``` --3. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released. ----## Get connection information --Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials. --1. Log in to the [Azure portal](https://portal.azure.com/). -2. From the left-hand menu in Azure portal, select **All resources**, and then search for the server you have created (such as **mydemoserver**). -3. Select the server name. -4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel. - :::image type="content" source="./media/connect-nodejs/server-name-azure-database-mysql.png" alt-text="Azure Database for MySQL server name"::: --## Running the code samples --1. Paste the JavaScript code into new text files, and then save it into a project folder with file extension .js (such as C:\nodejsmysql\createtable.js or /home/username/nodejsmysql/createtable.js). -1. Replace `host`, `user`, `password` and `database` config options in the code with the values that you specified when you created the server and database. -1. **Obtain SSL certificate**: Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem) and save the certificate file to your local drive. -- **For Microsoft Internet Explorer and Microsoft Edge:** After the download has completed, rename the certificate to DigiCertGlobalRootCA.crt.pem. -- See the following links for certificates for servers in sovereign clouds: [Azure Government](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem), [Microsoft Azure operated by 21Vianet](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt). -1. 
In the `ssl` config option, replace the `ca-cert` filename with the path to this local file. -1. Open the command prompt or bash shell, and then change directory into your project folder `cd nodejsmysql`. -1. To run the application, enter the node command followed by the file name, such as `node createtable.js`. -1. On Windows, if the node application isn't in your environment variable path, you may need to use the full path to launch the node application, such as `"C:\Program Files\nodejs\node.exe" createtable.js` --## Connect, create table, and insert data --Use the following code to connect and load the data by using **CREATE TABLE** and **INSERT INTO** SQL statements. --The [mysql.createConnection()](https://github.com/sidorares/node-mysql2#first-query) method is used to interface with the MySQL server. The [connect()](https://github.com/sidorares/node-mysql2#first-query) function is used to establish the connection to the server. The [query()](https://github.com/sidorares/node-mysql2#first-query) function is used to execute the SQL query against MySQL database. --```javascript -const mysql = require('mysql2'); -const fs = require('fs'); --var config = -{ - host: 'mydemoserver.mysql.database.azure.com', - user: 'myadmin@mydemoserver', - password: 'your_password', - database: 'quickstartdb', - port: 3306, - ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_DigiCertGlobalRootCA.crt.pem")} -}; --const conn = new mysql.createConnection(config); --conn.connect( - function (err) { - if (err) { - console.log("!!! Cannot connect !!! Error:"); - throw err; - } - else - { - console.log("Connection established."); - queryDatabase(); - } -}); --function queryDatabase(){ - conn.query('DROP TABLE IF EXISTS inventory;', function (err, results, fields) { - if (err) throw err; - console.log('Dropped inventory table if existed.'); - }) - conn.query('CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);', - function (err, results, fields) { - if (err) throw err; - console.log('Created inventory table.'); - }) - conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['banana', 150], - function (err, results, fields) { - if (err) throw err; - else console.log('Inserted ' + results.affectedRows + ' row(s).'); - }) - conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['orange', 154], - function (err, results, fields) { - if (err) throw err; - console.log('Inserted ' + results.affectedRows + ' row(s).'); - }) - conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['apple', 100], - function (err, results, fields) { - if (err) throw err; - console.log('Inserted ' + results.affectedRows + ' row(s).'); - }) - conn.end(function (err) { - if (err) throw err; - else console.log('Done.') - }); -}; -``` --## Read data --Use the following code to connect and read the data by using a **SELECT** SQL statement. --The [mysql.createConnection()](https://github.com/sidorares/node-mysql2#first-query) method is used to interface with the MySQL server. The [connect()](https://github.com/sidorares/node-mysql2#first-query) method is used to establish the connection to the server. The [query()](https://github.com/sidorares/node-mysql2#first-query) method is used to execute the SQL query against MySQL database. The results array is used to hold the results of the query. 
--```javascript -const mysql = require('mysql2'); -const fs = require('fs'); --var config = -{ - host: 'mydemoserver.mysql.database.azure.com', - user: 'myadmin@mydemoserver', - password: 'your_password', - database: 'quickstartdb', - port: 3306, - ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_DigiCertGlobalRootCA.crt.pem")} -}; --const conn = new mysql.createConnection(config); --conn.connect( - function (err) { - if (err) { - console.log("!!! Cannot connect !!! Error:"); - throw err; - } - else { - console.log("Connection established."); - readData(); - } - }); --function readData(){ - conn.query('SELECT * FROM inventory', - function (err, results, fields) { - if (err) throw err; - else console.log('Selected ' + results.length + ' row(s).'); - for (i = 0; i < results.length; i++) { - console.log('Row: ' + JSON.stringify(results[i])); - } - console.log('Done.'); - }) - conn.end( - function (err) { - if (err) throw err; - else console.log('Closing connection.') - }); -}; -``` --## Update data --Use the following code to connect and update data by using an **UPDATE** SQL statement. --The [mysql.createConnection()](https://github.com/sidorares/node-mysql2#first-query) method is used to interface with the MySQL server. The [connect()](https://github.com/sidorares/node-mysql2#first-query) method is used to establish the connection to the server. The [query()](https://github.com/sidorares/node-mysql2#first-query) method is used to execute the SQL query against MySQL database. --```javascript -const mysql = require('mysql2'); -const fs = require('fs'); --var config = -{ - host: 'mydemoserver.mysql.database.azure.com', - user: 'myadmin@mydemoserver', - password: 'your_password', - database: 'quickstartdb', - port: 3306, - ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_DigiCertGlobalRootCA.crt.pem")} -}; --const conn = new mysql.createConnection(config); --conn.connect( - function (err) { - if (err) { - console.log("!!! Cannot connect !!! Error:"); - throw err; - } - else { - console.log("Connection established."); - updateData(); - } - }); --function updateData(){ - conn.query('UPDATE inventory SET quantity = ? WHERE name = ?', [200, 'banana'], - function (err, results, fields) { - if (err) throw err; - else console.log('Updated ' + results.affectedRows + ' row(s).'); - }) - conn.end( - function (err) { - if (err) throw err; - else console.log('Done.') - }); -}; -``` --## Delete data --Use the following code to connect and delete data by using a **DELETE** SQL statement. --The [mysql.createConnection()](https://github.com/sidorares/node-mysql2#first-query) method is used to interface with the MySQL server. The [connect()](https://github.com/sidorares/node-mysql2#first-query) method is used to establish the connection to the server. The [query()](https://github.com/sidorares/node-mysql2#first-query) method is used to execute the SQL query against MySQL database. --```javascript -const mysql = require('mysql2'); -const fs = require('fs'); --var config = -{ - host: 'mydemoserver.mysql.database.azure.com', - user: 'myadmin@mydemoserver', - password: 'your_password', - database: 'quickstartdb', - port: 3306, - ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_DigiCertGlobalRootCA.crt.pem")} -}; --const conn = new mysql.createConnection(config); --conn.connect( - function (err) { - if (err) { - console.log("!!! Cannot connect !!! 
Error:"); - throw err; - } - else { - console.log("Connection established."); - deleteData(); - } - }); --function deleteData(){ - conn.query('DELETE FROM inventory WHERE name = ?', ['orange'], - function (err, results, fields) { - if (err) throw err; - else console.log('Deleted ' + results.affectedRows + ' row(s).'); - }) - conn.end( - function (err) { - if (err) throw err; - else console.log('Done.') - }); -}; -``` --## Clean up resources --To clean up all resources used during this quickstart, delete the resource group using the following command: --```azurecli-interactive -az group delete \ - --name $AZ_RESOURCE_GROUP \ - --yes -``` --## Next steps --> [!div class="nextstepaction"] -> [Migrate your database using Export and Import](../flexible-server/concepts-migrate-import-export.md) |
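The Node.js samples above use the callback API of the `mysql2` package. The same package also ships a promise wrapper, `mysql2/promise`, which some developers find easier to follow with `async`/`await`. A minimal sketch of the read example in that style (the connection values are the same placeholders used above and should be replaced with your own):

```javascript
const mysql = require('mysql2/promise');
const fs = require('fs');

// Same placeholder connection values as the callback examples above.
const config = {
    host: 'mydemoserver.mysql.database.azure.com',
    user: 'myadmin@mydemoserver',
    password: 'your_password',
    database: 'quickstartdb',
    port: 3306,
    ssl: { ca: fs.readFileSync('your_path_to_ca_cert_file_DigiCertGlobalRootCA.crt.pem') }
};

async function readData() {
    const conn = await mysql.createConnection(config);
    try {
        // query() resolves to [rows, fields]; rows is an array of plain objects.
        const [rows] = await conn.query('SELECT * FROM inventory;');
        console.log(`Selected ${rows.length} row(s).`);
        for (const row of rows) {
            console.log('Row: ' + JSON.stringify(row));
        }
    } finally {
        await conn.end();
    }
}

readData().catch((err) => console.error(err));
```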
mysql | Connect Php | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-php.md | - Title: 'Quickstart: Connect using PHP - Azure Database for MySQL' -description: This quickstart provides several PHP code samples you can use to connect and query data from Azure Database for MySQL. ------ Previously updated : 06/20/2022---# Quickstart: Use PHP to connect and query data in Azure Database for MySQL ----This quickstart demonstrates how to connect to an Azure Database for MySQL using a [PHP](https://secure.php.net/manual/intro-whatis.php) application. It shows how to use SQL statements to query, insert, update, and delete data in the database. --## Prerequisites -For this quickstart you need: --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).-- Create an Azure Database for MySQL single server using [Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md) <br/> or [Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md) if you do not have one.-- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.-- |Action| Connectivity method|How-to guide| - |: |: |: | - | **Configure firewall rules** | Public | [Portal](./how-to-manage-firewall-using-portal.md) <br/> [CLI](./how-to-manage-firewall-using-cli.md)| - | **Configure Service Endpoint** | Public | [Portal](./how-to-manage-vnet-using-portal.md) <br/> [CLI](./how-to-manage-vnet-using-cli.md)| - | **Configure private link** | Private | [Portal](./how-to-configure-private-link-portal.md) <br/> [CLI](./how-to-configure-private-link-cli.md) | --- [Create a database and non-admin user](./how-to-create-users.md?tabs=single-server)-- Install latest PHP version for your operating system- - [PHP on macOS](https://secure.php.net/manual/install.macosx.php) - - [PHP on Linux](https://secure.php.net/manual/install.unix.php) - - [PHP on Windows](https://secure.php.net/manual/install.windows.php) --> [!NOTE] -> We are using [MySQLi](https://www.php.net/manual/en/book.mysqli.php) library to manage connect and query the server in this quickstart. --## Get connection information -You can get the database server connection information from the Azure portal by following these steps: --1. Log in to the [Azure portal](https://portal.azure.com/). -2. Navigate to the Azure Databases for MySQL page. You can search for and select **Azure Database for MySQL**. --2. Select your MySQL server (such as **mydemoserver**). -3. In the **Overview** page, copy the fully qualified server name next to **Server name** and the admin user name next to **Server admin login name**. To copy the server name or host name, hover over it and select the **Copy** icon. --> [!IMPORTANT] -> - If you forgot your password, you can [reset the password](./how-to-create-manage-server-portal.md#update-admin-password). -> - Replace the **host, username, password,** and **db_name** parameters with your own values** --## Step 1: Connect to the server -SSL is enabled by default. You may need to download the [DigiCertGlobalRootG2 SSL certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) to connect from your local environment. This code calls: -- [mysqli_init](https://secure.php.net/manual/mysqli.init.php) to initialize MySQLi.-- [mysqli_ssl_set](https://www.php.net/manual/en/mysqli.ssl-set.php) to point to the SSL certificate path. 
This is required for your local environment but not required for App Service Web App or Azure Virtual machines.-- [mysqli_real_connect](https://secure.php.net/manual/mysqli.real-connect.php) to connect to MySQL.-- [mysqli_close](https://secure.php.net/manual/mysqli.close.php) to close the connection.---```php -$host = 'mydemoserver.mysql.database.azure.com'; -$username = 'myadmin@mydemoserver'; -$password = 'your_password'; -$db_name = 'your_database'; --//Initializes MySQLi -$conn = mysqli_init(); --mysqli_ssl_set($conn,NULL,NULL, "/var/www/html/DigiCertGlobalRootG2.crt.pem", NULL, NULL); --// Establish the connection -mysqli_real_connect($conn, $host, $username, $password, $db_name, 3306, NULL, MYSQLI_CLIENT_SSL); --//If connection failed, show the error -if (mysqli_connect_errno()) -{ - die('Failed to connect to MySQL: '.mysqli_connect_error()); -} -``` --## Step 2: Create a Table -Use the following code to connect. This code calls: -- [mysqli_query](https://secure.php.net/manual/mysqli.query.php) to run the query.-```php -// Run the create table query -if (mysqli_query($conn, ' -CREATE TABLE Products ( -`Id` INT NOT NULL AUTO_INCREMENT , -`ProductName` VARCHAR(200) NOT NULL , -`Color` VARCHAR(50) NOT NULL , -`Price` DOUBLE NOT NULL , -PRIMARY KEY (`Id`) -); -')) { -printf("Table created\n"); -} -``` --## Step 3: Insert data -Use the following code to insert data by using an **INSERT** SQL statement. This code uses the methods: -- [mysqli_prepare](https://secure.php.net/manual/mysqli.prepare.php) to create a prepared insert statement-- [mysqli_stmt_bind_param](https://secure.php.net/manual/mysqli-stmt.bind-param.php) to bind the parameters for each inserted column value.-- [mysqli_stmt_execute](https://secure.php.net/manual/mysqli-stmt.execute.php)-- [mysqli_stmt_close](https://secure.php.net/manual/mysqli-stmt.close.php) to close the statement by using method---```php -//Create an Insert prepared statement and run it -$product_name = 'BrandNewProduct'; -$product_color = 'Blue'; -$product_price = 15.5; -if ($stmt = mysqli_prepare($conn, "INSERT INTO Products (ProductName, Color, Price) VALUES (?, ?, ?)")) -{ - mysqli_stmt_bind_param($stmt, 'ssd', $product_name, $product_color, $product_price); - mysqli_stmt_execute($stmt); - printf("Insert: Affected %d rows\n", mysqli_stmt_affected_rows($stmt)); - mysqli_stmt_close($stmt); -} --``` --## Step 4: Read data -Use the following code to read the data by using a **SELECT** SQL statement. The code uses the method: -- [mysqli_query](https://secure.php.net/manual/mysqli.query.php) execute the **SELECT** query-- [mysqli_fetch_assoc](https://secure.php.net/manual/mysqli-result.fetch-assoc.php) to fetch the resulting rows.--```php -//Run the Select query -printf("Reading data from table: \n"); -$res = mysqli_query($conn, 'SELECT * FROM Products'); -while ($row = mysqli_fetch_assoc($res)) - { - var_dump($row); - } --``` ---## Step 5: Delete data -Use the following code delete rows by using a **DELETE** SQL statement. 
The code uses the methods: -- [mysqli_prepare](https://secure.php.net/manual/mysqli.prepare.php) to create a prepared delete statement-- [mysqli_stmt_bind_param](https://secure.php.net/manual/mysqli-stmt.bind-param.php) binds the parameters-- [mysqli_stmt_execute](https://secure.php.net/manual/mysqli-stmt.execute.php) executes the prepared delete statement-- [mysqli_stmt_close](https://secure.php.net/manual/mysqli-stmt.close.php) closes the statement--```php -//Run the Delete statement -$product_name = 'BrandNewProduct'; -if ($stmt = mysqli_prepare($conn, "DELETE FROM Products WHERE ProductName = ?")) { -mysqli_stmt_bind_param($stmt, 's', $product_name); -mysqli_stmt_execute($stmt); -printf("Delete: Affected %d rows\n", mysqli_stmt_affected_rows($stmt)); -mysqli_stmt_close($stmt); -} -``` --## Clean up resources --To clean up all resources used during this quickstart, delete the resource group using the following command: --```azurecli -az group delete \ - --name $AZ_RESOURCE_GROUP \ - --yes -``` --## Next steps -> [!div class="nextstepaction"] -> [Manage Azure Database for MySQL server using Portal](./how-to-create-manage-server-portal.md)<br/> --> [!div class="nextstepaction"] -> [Manage Azure Database for MySQL server using CLI](./how-to-manage-single-server-cli.md) - |
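The quickstart's introduction mentions updating data, but the numbered steps above stop at insert, read, and delete. An **UPDATE** statement follows the same prepared-statement pattern; here's a minimal sketch, assuming the `Products` table and the open `$conn` from Step 1 (the values are illustrative only):

```php
//Run an Update prepared statement (sketch; not one of the original numbered steps)
$new_price = 17.5;
$product_name = 'BrandNewProduct';
if ($stmt = mysqli_prepare($conn, "UPDATE Products SET Price = ? WHERE ProductName = ?")) {
    // 'd' binds a double for Price, 's' binds a string for ProductName
    mysqli_stmt_bind_param($stmt, 'ds', $new_price, $product_name);
    mysqli_stmt_execute($stmt);
    printf("Update: Affected %d rows\n", mysqli_stmt_affected_rows($stmt));
    mysqli_stmt_close($stmt);
}
```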
mysql | Connect Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-python.md | - Title: 'Quickstart: Connect using Python - Azure Database for MySQL' -description: This quickstart provides several Python code samples you can use to connect and query data from Azure Database for MySQL. ------ Previously updated : 06/20/2022---# Quickstart: Use Python to connect and query data in Azure Database for MySQL ----In this quickstart, you connect to an Azure Database for MySQL by using Python. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Ubuntu Linux, and Windows platforms. --## Prerequisites -For this quickstart you need: --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).-- Create an Azure Database for MySQL single server using [Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md) <br/> or [Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md) if you do not have one.-- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.-- |Action| Connectivity method|How-to guide| - |: |: |: | - | **Configure firewall rules** | Public | [Portal](./how-to-manage-firewall-using-portal.md) <br/> [CLI](./how-to-manage-firewall-using-cli.md)| - | **Configure Service Endpoint** | Public | [Portal](./how-to-manage-vnet-using-portal.md) <br/> [CLI](./how-to-manage-vnet-using-cli.md)| - | **Configure private link** | Private | [Portal](./how-to-configure-private-link-portal.md) <br/> [CLI](./how-to-configure-private-link-cli.md) | --- [Create a database and non-admin user](./how-to-create-users.md)--## Install Python and the MySQL connector --Install Python and the MySQL connector for Python on your computer by using the following steps: --> [!NOTE] -> This quickstart is using [MySQL Connector/Python Developer Guide](https://dev.mysql.com/doc/connector-python/en/). --1. Download and install [Python 3.7 or above](https://www.python.org/downloads/) for your OS. Make sure to add Python to your `PATH`, because the MySQL connector requires that. - -2. Open a command prompt or `bash` shell, and check your Python version by running `python -V` with the uppercase V switch. - -3. The `pip` package installer is included in the latest versions of Python. Update `pip` to the latest version by running `pip install -U pip`. - - If `pip` isn't installed, you can download and install it with `get-pip.py`. For more information, see [Installation](https://pip.pypa.io/en/stable/installing/). - -4. Use `pip` to install the MySQL connector for Python and its dependencies: - - ```bash - pip install mysql-connector-python - ``` - -## Get connection information --Get the connection information you need to connect to Azure Database for MySQL from the Azure portal. You need the server name, database name, and login credentials. --1. Sign in to the [Azure portal](https://portal.azure.com/). - -1. In the portal search bar, search for and select the Azure Database for MySQL server you created, such as **mydemoserver**. - - :::image type="content" source="./media/connect-python/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name"::: - -1. From the server's **Overview** page, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this page. 
- - :::image type="content" source="./media/connect-python/azure-database-for-mysql-server-overview-name-login.png" alt-text="Azure Database for MySQL server name 2"::: --## Running the Python code samples --For each code example in this article: --1. Create a new file in a text editor. -2. Add the code example to the file. In the code, replace the `<mydemoserver>`, `<myadmin>`, `<mypassword>`, and `<mydatabase>` placeholders with the values for your MySQL server and database. -1. SSL is enabled by default on Azure Database for MySQL servers. You may need to download the [DigiCertGlobalRootG2 SSL certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) to connect from your local environment. Replace the `ssl_ca` value in the code with path to this file on your computer. -1. Save the file in a project folder with a *.py* extension, such as *C:\pythonmysql\createtable.py* or */home/username/pythonmysql/createtable.py*. -1. To run the code, open a command prompt or `bash` shell and change directory into your project folder, for example `cd pythonmysql`. Type the `python` command followed by the file name, for example `python createtable.py`, and press Enter. - - > [!NOTE] - > On Windows, if *python.exe* is not found, you may need to add the Python path into your PATH environment variable, or provide the full path to *python.exe*, for example `C:\python27\python.exe createtable.py`. --## Step 1: Create a table and insert data --Use the following code to connect to the server and database, create a table, and load data by using an **INSERT** SQL statement.The code imports the mysql.connector library, and uses the method: -- [connect()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysql-connector-connect.html) function to connect to Azure Database for MySQL using the [arguments](https://dev.mysql.com/doc/connector-python/en/connector-python-connectargs.html) in the config collection. -- [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method executes the SQL query against the MySQL database. 
-- [cursor.close()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-close.html) when you are done using a cursor.-- [conn.close()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlconnection-close.html) to close the connection the connection.--```python -import mysql.connector -from mysql.connector import errorcode --# Obtain connection string information from the portal --config = { - 'host':'<mydemoserver>.mysql.database.azure.com', - 'user':'<myadmin>@<mydemoserver>', - 'password':'<mypassword>', - 'database':'<mydatabase>', - 'client_flags': [mysql.connector.ClientFlag.SSL], - 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem' -} --# Construct connection string --try: - conn = mysql.connector.connect(**config) - print("Connection established") -except mysql.connector.Error as err: - if err.errno == errorcode.ER_ACCESS_DENIED_ERROR: - print("Something is wrong with the user name or password") - elif err.errno == errorcode.ER_BAD_DB_ERROR: - print("Database does not exist") - else: - print(err) -else: - cursor = conn.cursor() -- # Drop previous table of same name if one exists - cursor.execute("DROP TABLE IF EXISTS inventory;") - print("Finished dropping table (if existed).") -- # Create table - cursor.execute("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);") - print("Finished creating table.") -- # Insert some data into table - cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("banana", 150)) - print("Inserted",cursor.rowcount,"row(s) of data.") - cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("orange", 154)) - print("Inserted",cursor.rowcount,"row(s) of data.") - cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("apple", 100)) - print("Inserted",cursor.rowcount,"row(s) of data.") -- # Cleanup - conn.commit() - cursor.close() - conn.close() - print("Done.") -``` --## Step 2: Read data --Use the following code to connect and read the data by using a **SELECT** SQL statement. The code imports the mysql.connector library, and uses [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method executes the SQL query against the MySQL database. --The code reads the data rows using the [fetchall()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-fetchall.html) method, keeps the result set in a collection row, and uses a `for` iterator to loop over the rows. 
--```python -import mysql.connector -from mysql.connector import errorcode --# Obtain connection string information from the portal --config = { - 'host':'<mydemoserver>.mysql.database.azure.com', - 'user':'<myadmin>@<mydemoserver>', - 'password':'<mypassword>', - 'database':'<mydatabase>', - 'client_flags': [mysql.connector.ClientFlag.SSL], - 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem' -} --# Construct connection string --try: - conn = mysql.connector.connect(**config) - print("Connection established") -except mysql.connector.Error as err: - if err.errno == errorcode.ER_ACCESS_DENIED_ERROR: - print("Something is wrong with the user name or password") - elif err.errno == errorcode.ER_BAD_DB_ERROR: - print("Database does not exist") - else: - print(err) -else: - cursor = conn.cursor() -- # Read data - cursor.execute("SELECT * FROM inventory;") - rows = cursor.fetchall() - print("Read",cursor.rowcount,"row(s) of data.") -- # Print all rows - for row in rows: - print("Data row = (%s, %s, %s)" %(str(row[0]), str(row[1]), str(row[2]))) -- # Cleanup - conn.commit() - cursor.close() - conn.close() - print("Done.") -``` --## Step 3: Update data --Use the following code to connect and update the data by using an **UPDATE** SQL statement. The code imports the mysql.connector library, and uses [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method executes the SQL query against the MySQL database. --```python -import mysql.connector -from mysql.connector import errorcode --# Obtain connection string information from the portal --config = { - 'host':'<mydemoserver>.mysql.database.azure.com', - 'user':'<myadmin>@<mydemoserver>', - 'password':'<mypassword>', - 'database':'<mydatabase>', - 'client_flags': [mysql.connector.ClientFlag.SSL], - 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem' -} --# Construct connection string --try: - conn = mysql.connector.connect(**config) - print("Connection established") -except mysql.connector.Error as err: - if err.errno == errorcode.ER_ACCESS_DENIED_ERROR: - print("Something is wrong with the user name or password") - elif err.errno == errorcode.ER_BAD_DB_ERROR: - print("Database does not exist") - else: - print(err) -else: - cursor = conn.cursor() -- # Update a data row in the table - cursor.execute("UPDATE inventory SET quantity = %s WHERE name = %s;", (300, "apple")) - print("Updated",cursor.rowcount,"row(s) of data.") -- # Cleanup - conn.commit() - cursor.close() - conn.close() - print("Done.") -``` --## Step 4: Delete data --Use the following code to connect and remove data by using a **DELETE** SQL statement. The code imports the mysql.connector library, and uses [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method executes the SQL query against the MySQL database. 
--```python -import mysql.connector -from mysql.connector import errorcode --# Obtain connection string information from the portal --config = { - 'host':'<mydemoserver>.mysql.database.azure.com', - 'user':'<myadmin>@<mydemoserver>', - 'password':'<mypassword>', - 'database':'<mydatabase>', - 'client_flags': [mysql.connector.ClientFlag.SSL], - 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem' -} --# Construct connection string --try: - conn = mysql.connector.connect(**config) - print("Connection established") -except mysql.connector.Error as err: - if err.errno == errorcode.ER_ACCESS_DENIED_ERROR: - print("Something is wrong with the user name or password") - elif err.errno == errorcode.ER_BAD_DB_ERROR: - print("Database does not exist") - else: - print(err) -else: - cursor = conn.cursor() -- # Delete a data row in the table - cursor.execute("DELETE FROM inventory WHERE name=%(param1)s;", {'param1':"orange"}) - print("Deleted",cursor.rowcount,"row(s) of data.") - - # Cleanup - conn.commit() - cursor.close() - conn.close() - print("Done.") -``` --## Clean up resources --To clean up all resources used during this quickstart, delete the resource group using the following command: --```azurecli -az group delete \ - --name $AZ_RESOURCE_GROUP \ - --yes -``` --## Next steps -> [!div class="nextstepaction"] -> [Manage Azure Database for MySQL server using Portal](./how-to-create-manage-server-portal.md)<br/> --> [!div class="nextstepaction"] -> [Manage Azure Database for MySQL server using CLI](./how-to-manage-single-server-cli.md) |
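Each of the steps above repeats the same connection boilerplate. If you prefer, the connect/commit/close handling can be factored into a small helper; the following is a minimal sketch (not part of the quickstart) that reuses the same `config` placeholders shown above:

```python
import mysql.connector

# Same placeholder values as the steps above; replace them with your own.
config = {
    'host': '<mydemoserver>.mysql.database.azure.com',
    'user': '<myadmin>@<mydemoserver>',
    'password': '<mypassword>',
    'database': '<mydatabase>',
    'client_flags': [mysql.connector.ClientFlag.SSL],
    'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem'
}

def run_statement(sql, params=None):
    """Open a connection, run one parameterized statement, commit, and always clean up."""
    conn = mysql.connector.connect(**config)
    try:
        cursor = conn.cursor()
        cursor.execute(sql, params or ())
        rows = cursor.fetchall() if cursor.with_rows else []
        conn.commit()
        cursor.close()
        return rows
    finally:
        conn.close()

# Example usage against the same inventory table as above:
for row in run_statement("SELECT * FROM inventory WHERE quantity > %s;", (120,)):
    print(row)
```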
mysql | Connect Ruby | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-ruby.md | - Title: 'Quickstart: Connect using Ruby - Azure Database for MySQL' -description: This quickstart provides several Ruby code samples you can use to connect and query data from Azure Database for MySQL. ------ Previously updated : 05/03/2023---# Quickstart: Use Ruby to connect and query data in Azure Database for MySQL ----This quickstart demonstrates how to connect to an Azure Database for MySQL using a [Ruby](https://www.ruby-lang.org) application and the [mysql2](https://rubygems.org/gems/mysql2) gem from Windows, Linux, and Mac platforms. It shows how to use SQL statements to query, insert, update, and delete data in the database. This topic assumes that you are familiar with development using Ruby and that you are new to working with Azure Database for MySQL. --## Prerequisites --This quickstart uses the resources created in either of these guides as a starting point: --- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)-- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)--> [!IMPORTANT] -> Ensure the IP address you're connecting from has been added the server's firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md) --## Install Ruby --Install Ruby, Gem, and the MySQL2 library on your own computer. --### [Windows](#tab/windows) --1. Download and Install the 2.3 version of [Ruby](https://rubyinstaller.org/downloads/). -2. Launch a new command prompt (cmd) from the Start menu. -3. Change directory into the Ruby directory for version 2.3. `cd c:\Ruby23-x64\bin` -4. Test the Ruby installation by running the command `ruby -v` to see the version installed. -5. Test the Gem installation by running the command `gem -v` to see the version installed. -6. Build the Mysql2 module for Ruby using Gem by running the command `gem install mysql2`. --### [macOS](#tab/macos) --1. Install Ruby using Homebrew by running the command `brew install ruby`. For more installation options, see the Ruby [installation documentation](https://www.ruby-lang.org/en/documentation/installation/#homebrew). -2. Test the Ruby installation by running the command `ruby -v` to see the version installed. -3. Test the Gem installation by running the command `gem -v` to see the version installed. -4. Build the Mysql2 module for Ruby using Gem by running the command `gem install mysql2`. --### [Linux (Ubuntu/Debian)](#tab/ubuntu) --1. Install Ruby by running the command `sudo apt-get install ruby-full`. For more installation options, see the Ruby [installation documentation](https://www.ruby-lang.org/en/documentation/installation/). -2. Test the Ruby installation by running the command `ruby -v` to see the version installed. -3. Install the latest updates for Gem by running the command `sudo gem update --system`. -4. Test the Gem installation by running the command `gem -v` to see the version installed. -5. Install the gcc, make, and other build tools by running the command `sudo apt-get install build-essential`. -6. Install the MySQL client developer libraries by running the command `sudo apt-get install libmysqlclient-dev`. -7. Build the mysql2 module for Ruby using Gem by running the command `sudo gem install mysql2`. 
----## Get connection information --Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials. --1. Log in to the [Azure portal](https://portal.azure.com/). -2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**). -3. Click the server name. -4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel. - :::image type="content" source="./media/connect-ruby/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name"::: --## Run Ruby code --1. Paste the Ruby code from the sections below into text files, and then save the files into a project folder with file extension .rb (such as `C:\rubymysql\createtable.rb` or `/home/username/rubymysql/createtable.rb`). -2. To run the code, launch the command prompt or Bash shell. Change directory into your project folder `cd rubymysql` -3. Then type the Ruby command followed by the file name, such as `ruby createtable.rb` to run the application. -4. On the Windows OS, if the Ruby application is not in your path environment variable, you may need to use the full path to launch the node application, such as `"c:\Ruby23-x64\bin\ruby.exe" createtable.rb` --## Connect and create a table --Use the following code to connect and create a table by using **CREATE TABLE** SQL statement, followed by **INSERT INTO** SQL statements to add rows into the table. --The code uses a mysql2::client class to connect to MySQL server. Then it calls method ```query()``` to run the DROP, CREATE TABLE, and INSERT INTO commands. Finally, call the ```close()``` to close the connection before terminating. --Replace the `host`, `database`, `username`, and `password` strings with your own values. --```ruby -require 'mysql2' --begin - # Initialize connection variables. - host = String('mydemoserver.mysql.database.azure.com') - database = String('quickstartdb') - username = String('myadmin@mydemoserver') - password = String('yourpassword') -- # Initialize connection object. - client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password) - puts 'Successfully created connection to database.' -- # Drop previous table of same name if one exists - client.query('DROP TABLE IF EXISTS inventory;') - puts 'Finished dropping table (if existed).' -- # Drop previous table of same name if one exists. - client.query('CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);') - puts 'Finished creating table.' -- # Insert some data into table. - client.query("INSERT INTO inventory VALUES(1, 'banana', 150)") - client.query("INSERT INTO inventory VALUES(2, 'orange', 154)") - client.query("INSERT INTO inventory VALUES(3, 'apple', 100)") - puts 'Inserted 3 rows of data.' --# Error handling --rescue Exception => e - puts e.message --# Cleanup --ensure - client.close if client - puts 'Done.' -end -``` --## Read data --Use the following code to connect and read the data by using a **SELECT** SQL statement. --The code uses a mysql2::client class to connect to Azure Database for MySQL with ```new()```method. Then it calls method ```query()``` to run the SELECT commands. Then it calls method ```close()``` to close the connection before terminating. 
--Replace the `host`, `database`, `username`, and `password` strings with your own values. --```ruby -require 'mysql2' --begin - # Initialize connection variables. - host = String('mydemoserver.mysql.database.azure.com') - database = String('quickstartdb') - username = String('myadmin@mydemoserver') - password = String('yourpassword') -- # Initialize connection object. - client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password) - puts 'Successfully created connection to database.' -- # Read data - resultSet = client.query('SELECT * from inventory;') - resultSet.each do |row| - puts 'Data row = (%s, %s, %s)' % [row['id'], row['name'], row['quantity']] - end - puts 'Read ' + resultSet.count.to_s + ' row(s).' --# Error handling --rescue Exception => e - puts e.message --# Cleanup --ensure - client.close if client - puts 'Done.' -end -``` --## Update data --Use the following code to connect and update the data by using an **UPDATE** SQL statement. --The code uses a [mysql2::client](https://rubygems.org/gems/mysql2-client-general_log) class .new() method to connect to Azure Database for MySQL. Then it calls method ```query()``` to run the UPDATE commands. Then it calls method ```close()``` to close the connection before terminating. --Replace the `host`, `database`, `username`, and `password` strings with your own values. --```ruby -require 'mysql2' --begin - # Initialize connection variables. - host = String('mydemoserver.mysql.database.azure.com') - database = String('quickstartdb') - username = String('myadmin@mydemoserver') - password = String('yourpassword') -- # Initialize connection object. - client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password) - puts 'Successfully created connection to database.' -- # Update data - client.query('UPDATE inventory SET quantity = %d WHERE name = %s;' % [200, '\'banana\'']) - puts 'Updated 1 row of data.' --# Error handling --rescue Exception => e - puts e.message --# Cleanup --ensure - client.close if client - puts 'Done.' -end -``` --## Delete data --Use the following code to connect and read the data by using a **DELETE** SQL statement. --The code uses a [mysql2::client](https://rubygems.org/gems/mysql2/) class to connect to MySQL server, run the DELETE command and then close the connection to the server. --Replace the `host`, `database`, `username`, and `password` strings with your own values. --```ruby -require 'mysql2' --begin - # Initialize connection variables. - host = String('mydemoserver.mysql.database.azure.com') - database = String('quickstartdb') - username = String('myadmin@mydemoserver') - password = String('yourpassword') -- # Initialize connection object. - client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password) - puts 'Successfully created connection to database.' -- # Delete data - resultSet = client.query('DELETE FROM inventory WHERE name = %s;' % ['\'orange\'']) - puts 'Deleted 1 row.' --# Error handling ---rescue Exception => e - puts e.message --# Cleanup ---ensure - client.close if client - puts 'Done.' 
-end -``` --## Clean up resources --To clean up all resources used during this quickstart, delete the resource group using the following command: --```azurecli-interactive -az group delete \ - --name $AZ_RESOURCE_GROUP \ - --yes -``` --## Next steps --> [!div class="nextstepaction"] -> [Migrate your database using Export and Import](../flexible-server/concepts-migrate-import-export.md) --> [!div class="nextstepaction"] -> [Learn more about MySQL2 client](https://rubygems.org/gems/mysql2-client-general_log) |
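The update and delete samples above splice values into the SQL text with Ruby string formatting. The `mysql2` gem also supports server-side prepared statements, which keep values separate from the SQL; a minimal sketch of the update written that way (same placeholder connection values as above):

```ruby
require 'mysql2'

begin
  # Same placeholder connection values as the samples above.
  client = Mysql2::Client.new(
    :host => 'mydemoserver.mysql.database.azure.com',
    :username => 'myadmin@mydemoserver',
    :database => 'quickstartdb',
    :password => 'yourpassword')
  puts 'Successfully created connection to database.'

  # Prepare once, then bind the values instead of interpolating them into the SQL string.
  stmt = client.prepare('UPDATE inventory SET quantity = ? WHERE name = ?')
  stmt.execute(200, 'banana')
  puts "Updated #{client.affected_rows} row(s)."

# Error handling
rescue Exception => e
  puts e.message

# Cleanup
ensure
  client.close if client
  puts 'Done.'
end
```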
mysql | Connect Workbench | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-workbench.md | - Title: "Quickstart: Connect - MySQL Workbench - Azure Database for MySQL" -description: This Quickstart provides the steps to use MySQL Workbench to connect and query data from Azure Database for MySQL. --- Previously updated : 04/18/2023----- - mvc - - mode-other ---# Quickstart: Use MySQL Workbench to connect and query data in Azure Database for MySQL ----This quickstart demonstrates how to connect to an Azure Database for MySQL using the MySQL Workbench application. --## Prerequisites --This quickstart uses the resources created in either of these guides as a starting point: --- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)-- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)--> [!IMPORTANT] -> Ensure the IP address you're connecting from has been added the server's firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md) --## Install MySQL Workbench --Download and install MySQL Workbench on your computer from [the MySQL website](https://dev.mysql.com/downloads/workbench/). --## Get connection information --Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials. --1. Log in to the [Azure portal](https://portal.azure.com/). --1. From the left-hand menu in Azure portal, select **All resources**, and then search for the server you have created (such as **mydemoserver**). --1. Select the server name. --1. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel. - :::image type="content" source="./media/connect-php/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name" lightbox="./media/connect-php/1-server-overview-name-login.png"::: --## Connect to the server by using MySQL Workbench --To connect to Azure MySQL Server by using the GUI tool MySQL Workbench: --1. Launch the MySQL Workbench application on your computer. --1. In **Setup New Connection** dialog box, enter the following information on the **Parameters** tab: -- :::image type="content" source="./media/connect-workbench/2-setup-new-connection.png" alt-text="setup new connection" lightbox="./media/connect-workbench/2-setup-new-connection.png"::: -- | **Setting** | **Suggested value** | **Field description** | - | | | | - | Connection Name | Demo Connection | Specify a label for this connection. | - | Connection Method | Standard (TCP/IP) | Standard (TCP/IP) is sufficient. | - | Hostname | *server name* | Specify the server name value that was used when you created the Azure Database for MySQL earlier. Our example server shown is mydemoserver.mysql.database.azure.com. Use the fully qualified domain name (\*.mysql.database.azure.com) as shown in the example. Follow the steps in the previous section to get the connection information if you don't remember your server name. | - | Port | 3306 | Always use port 3306 when connecting to Azure Database for MySQL. | - | Username | *server admin login name* | Type in the server admin login username supplied when you created the Azure Database for MySQL earlier. 
Our example username is myadmin@mydemoserver. Follow the steps in the previous section to get the connection information if you don't remember the username. The format is *username\@servername*. - | Password | your password | Select **Store in Vault...** button to save the password. | - -1. Select **Test Connection** to test if all parameters are correctly configured. --1. Select **OK** to save the connection. --1. In the listing of **MySQL Connections**, select the tile corresponding to your server, and then wait for the connection to be established. -- A new SQL tab opens with a blank editor where you can type your queries. -- > [!NOTE] - > By default, SSL connection security is required and enforced on your Azure Database for MySQL server. Although typically no additional configuration with SSL certificates is required for MySQL Workbench to connect to your server, we recommend binding the SSL CA certification with MySQL Workbench. For more information on how to download and bind the certification, see [Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](./how-to-configure-ssl.md). If you need to disable SSL, visit the Azure portal and select the Connection security page to disable the Enforce SSL connection toggle button. --## Create a table, insert data, read data, update data, delete data --1. Copy and paste the sample SQL code into a blank SQL tab to illustrate some sample data. -- This code creates an empty database named quickstartdb, and then creates a sample table named inventory. It inserts some rows, then reads the rows. It changes the data with an update statement, and reads the rows again. Finally it deletes a row, and then reads the rows again. -- ```sql - -- Create a database - -- DROP DATABASE IF EXISTS quickstartdb; - CREATE DATABASE quickstartdb; - USE quickstartdb; - - -- Create a table and insert rows - DROP TABLE IF EXISTS inventory; - CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER); - INSERT INTO inventory (name, quantity) VALUES ('banana', 150); - INSERT INTO inventory (name, quantity) VALUES ('orange', 154); - INSERT INTO inventory (name, quantity) VALUES ('apple', 100); - - -- Read - SELECT * FROM inventory; - - -- Update - UPDATE inventory SET quantity = 200 WHERE id = 1; - SELECT * FROM inventory; - - -- Delete - DELETE FROM inventory WHERE id = 2; - SELECT * FROM inventory; - ``` -- The screenshot shows an example of the SQL code in SQL Workbench and the output after it has been run. -- :::image type="content" source="media/connect-workbench/3-workbench-sql-tab.png" alt-text="MySQL Workbench SQL Tab to run sample SQL code" lightbox="media/connect-workbench/3-workbench-sql-tab.png"::: --1. To run the sample SQL Code, select the lightening bolt icon in the toolbar of the **SQL File** tab. --1. Notice the three tabbed results in the **Result Grid** section in the middle of the page. --1. Notice the **Output** list at the bottom of the page. The status of each command is shown. --Now, you have connected to Azure Database for MySQL by using MySQL Workbench, and you have queried data using the SQL language. --## Clean up resources --To clean up all resources used during this quickstart, delete the resource group using the following command: --```azurecli -az group delete \ - --name $AZ_RESOURCE_GROUP \ - --yes -``` --## Next steps --> [!div class="nextstepaction"] -> [Migrate your database using Export and Import](../flexible-server/concepts-migrate-import-export.md) |
mysql | How To Alert On Metric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-alert-on-metric.md | - Title: Configure metric alerts - Azure portal - Azure Database for MySQL -description: This article describes how to configure and access metric alerts for Azure Database for MySQL from the Azure portal. ----- Previously updated : 06/20/2022---# Use the Azure portal to set up alerts on metrics for Azure Database for MySQL ----This article shows you how to set up Azure Database for MySQL alerts using the Azure portal. You can receive an alert based on monitoring metrics for your Azure services. --The alert triggers when the value of a specified metric crosses a threshold you assign. The alert triggers both when the condition is first met, and then afterwards when that condition is no longer being met. --You can configure an alert to do the following actions when it triggers: -* Send email notifications to the service administrator and co-administrators -* Send email to additional emails that you specify. -* Call a webhook --You can configure and get information about alert rules using: -* [Azure portal](../../azure-monitor/alerts/alerts-metric.md#create-with-azure-portal) -* [Azure CLI](../../azure-monitor/alerts/alerts-metric.md#with-azure-cli) -* [Azure Monitor REST API](/rest/api/monitor/metricalerts) --## Create an alert rule on a metric from the Azure portal -1. In the [Azure portal](https://portal.azure.com/), select the Azure Database for MySQL server you want to monitor. --2. Under the **Monitoring** section of the sidebar, select **Alerts** as shown: -- :::image type="content" source="./media/how-to-alert-on-metric/2-alert-rules.png" alt-text="Select Alert Rules"::: --3. Select **Add metric alert** (+ icon). --4. The **Create rule** page opens as shown below. Fill in the required information: -- :::image type="content" source="./media/how-to-alert-on-metric/4-add-rule-form.png" alt-text="Add metric alert form"::: --5. Within the **Condition** section, select **Add condition**. --6. Select a metric from the list of signals to be alerted on. In this example, select "Storage percent". - - :::image type="content" source="./media/how-to-alert-on-metric/6-configure-signal-logic.png" alt-text="Select metric"::: --7. Configure the alert logic including the **Condition** (ex. "Greater than"), **Threshold** (ex. 85 percent), **Time Aggregation**, **Period** of time the metric rule must be satisfied before the alert triggers (ex. "Over the last 30 minutes"), and **Frequency**. - - Select **Done** when complete. -- :::image type="content" source="./media/how-to-alert-on-metric/7-set-threshold-time.png" alt-text="Select metric 2"::: --8. Within the **Action Groups** section, select **Create New** to create a new group to receive notifications on the alert. --9. Fill out the "Add action group" form with a name, short name, subscription, and resource group. --10. Configure an **Email/SMS/Push/Voice** action type. - - Choose "Email Azure Resource Manager Role" to select subscription Owners, Contributors, and Readers to receive notifications. - - Optionally, provide a valid URI in the **Webhook** field if you want it called when the alert fires. -- Select **OK** when completed. -- :::image type="content" source="./media/how-to-alert-on-metric/10-action-group-type.png" alt-text="Action group"::: --11. Specify an Alert rule name, Description, and Severity. 
-- :::image type="content" source="./media/how-to-alert-on-metric/11-name-description-severity.png" alt-text="Action group 2"::: --12. Select **Create alert rule** to create the alert. -- Within a few minutes, the alert is active and triggers as previously described. --## Manage your alerts -Once you have created an alert, you can select it and do the following actions: --* View a graph showing the metric threshold and the actual values from the previous day relevant to this alert. -* **Edit** or **Delete** the alert rule. -* **Disable** or **Enable** the alert, if you want to temporarily stop or resume receiving notifications. ---## Next steps -* Learn more about [configuring webhooks in alerts](../../azure-monitor/alerts/alerts-webhooks.md). -* Get an [overview of metrics collection](../../azure-monitor/data-platform.md) to make sure your service is available and responsive. |
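The storage-percent rule walked through in the portal steps above can also be created from the Azure CLI. The following is a minimal sketch rather than the article's procedure: the alert name is arbitrary, and it assumes an action group named `myActionGroup` already exists in the same resource group.

```azurecli-interactive
# Look up the server's resource ID (names follow the examples used elsewhere in these articles)
serverId=$(az mysql server show --resource-group myresourcegroup --name mydemoserver --query id -o tsv)

# Create a metric alert that fires when average storage_percent exceeds 85 over a 30-minute window
az monitor metrics alert create \
  --name storage-percent-alert \
  --resource-group myresourcegroup \
  --scopes $serverId \
  --condition "avg storage_percent > 85" \
  --window-size 30m \
  --evaluation-frequency 5m \
  --action myActionGroup \
  --description "Storage above 85 percent"
```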
mysql | How To Auto Grow Storage Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-cli.md | - Title: Auto grow storage - Azure CLI - Azure Database for MySQL -description: This article describes how you can enable auto grow storage using the Azure CLI in Azure Database for MySQL. ------ Previously updated : 06/20/2022---# Auto-grow Azure Database for MySQL storage using the Azure CLI ----This article describes how you can configure Azure Database for MySQL server storage to grow without impacting the workload. --A server that [reaches the storage limit](./concepts-pricing-tiers.md#reaching-the-storage-limit) is set to read-only. If storage auto grow is enabled, then for servers with less than 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage is below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. Maximum storage limits as specified [here](./concepts-pricing-tiers.md#storage) apply. --## Prerequisites --To complete this how-to guide: --- You need an [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-cli.md).---- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--## Enable MySQL server storage auto-grow --Enable server auto-grow storage on an existing server with the following command: --```azurecli-interactive -az mysql server update --name mydemoserver --resource-group myresourcegroup --auto-grow Enabled -``` --Enable server auto-grow storage while creating a new server with the following command: --```azurecli-interactive -az mysql server create --resource-group myresourcegroup --name mydemoserver --auto-grow Enabled --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2 --version 5.7 -``` --## Next steps --Learn about [how to create alerts on metrics](how-to-alert-on-metric.md). |
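To confirm that the setting took effect after running the commands above, you can query the server's storage profile. This is a minimal sketch; the property path (`storageProfile.storageAutogrow`) is an assumption that may vary with your Azure CLI version, so verify it against the full `az mysql server show` output if the query returns nothing.

```azurecli-interactive
# Check whether storage auto grow is reported as Enabled on the server
az mysql server show \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --query "storageProfile.storageAutogrow" \
  --output tsv
```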
mysql | How To Auto Grow Storage Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-portal.md | - Title: Auto grow storage - Azure portal - Azure Database for MySQL -description: This article describes how you can enable auto grow storage for Azure Database for MySQL using Azure portal ----- Previously updated : 06/20/2022---# Auto grow storage in Azure Database for MySQL using the Azure portal ----This article describes how you can configure an Azure Database for MySQL server storage to grow without impacting the workload. --When a server reaches the allocated storage limit, the server is marked as read-only. However, if you enable storage auto grow, the server storage increases to accommodate the growing data. For servers with less than 100 GB provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage is below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10GB of the provisioned storage size. Maximum storage limits as specified [here](./concepts-pricing-tiers.md#storage) apply. --## Prerequisites -To complete this how-to guide, you need: -- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md)--## Enable storage auto grow --Follow these steps to set MySQL server storage auto grow: --1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL server. --2. On the MySQL server page, under **Settings** heading, click **Pricing tier** to open the Pricing tier page. --3. In the Auto-growth section, select **Yes** to enable storage auto grow. -- :::image type="content" source="./media/how-to-auto-grow-storage-portal/3-auto-grow.png" alt-text="Azure Database for MySQL - Settings_Pricing_tier - Auto-growth"::: --4. Click **OK** to save the changes. --5. A notification will confirm that auto grow was successfully enabled. -- :::image type="content" source="./media/how-to-auto-grow-storage-portal/5-auto-grow-success.png" alt-text="Azure Database for MySQL - auto-growth success"::: --## Next steps --Learn about [how to create alerts on metrics](how-to-alert-on-metric.md). |
mysql | How To Auto Grow Storage Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-powershell.md | - Title: Auto grow storage - Azure PowerShell - Azure Database for MySQL -description: This article describes how you can enable auto grow storage using PowerShell in Azure Database for MySQL. ------ Previously updated : 06/20/2022---# Auto grow storage in Azure Database for MySQL server using PowerShell ----This article describes how you can configure an Azure Database for MySQL server storage to grow -without impacting the workload. --Storage auto grow prevents your server from -[reaching the storage limit](./concepts-pricing-tiers.md#reaching-the-storage-limit) and -becoming read-only. For servers with 100 GB or less of provisioned storage, the size is increased by -5 GB when the free space is below 10%. For servers with more than 100 GB of provisioned storage, the -size is increased by 5% when the free space is below 10 GB. Maximum storage limits apply as -specified in the storage section of the -[Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md#storage). --> [!IMPORTANT] -> Remember that storage can only be scaled up, not down. --## Prerequisites --To complete this how-to guide, you need: --- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed locally or- [Azure Cloud Shell](https://shell.azure.com/) in the browser -- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)--> [!IMPORTANT] -> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az -> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`. -> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az -> PowerShell module releases and available natively from within Azure Cloud Shell. --If you choose to use PowerShell locally, connect to your Azure account using the -[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. --## Enable MySQL server storage auto grow --Enable server auto grow storage on an existing server with the following command: --```azurepowershell-interactive -Update-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -StorageAutogrow Enabled -``` --Enable server auto grow storage while creating a new server with the following command: --```azurepowershell-interactive -$Password = Read-Host -Prompt 'Please enter your password' -AsSecureString -New-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -Sku GP_Gen5_2 -StorageAutogrow Enabled -Location westus -AdministratorUsername myadmin -AdministratorLoginPassword $Password -``` --## Next steps --> [!div class="nextstepaction"] -> [How to create and manage read replicas in Azure Database for MySQL using PowerShell](how-to-read-replicas-powershell.md). |
mysql | How To Configure Audit Logs Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-audit-logs-cli.md | - Title: Access audit logs - Azure CLI - Azure Database for MySQL -description: This article describes how to configure and access the audit logs in Azure Database for MySQL from the Azure CLI. ------ Previously updated : 06/20/2022---# Configure and access audit logs in the Azure CLI ----You can configure the [Azure Database for MySQL audit logs](concepts-audit-logs.md) from the Azure CLI. --## Prerequisites --To step through this how-to guide: --- You need an [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md).---- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--## Configure audit logging --> [!IMPORTANT] -> It is recommended to only log the event types and users required for your auditing purposes to ensure your server's performance is not heavily impacted. --Enable and configure audit logging using the following steps: --1. Turn on audit logs by setting the **audit_logs_enabled** parameter to "ON". - ```azurecli-interactive - az mysql server configuration set --name audit_log_enabled --resource-group myresourcegroup --server mydemoserver --value ON - ``` --2. Select the [event types](concepts-audit-logs.md#configure-audit-logging) to be logged by updating the **audit_log_events** parameter. - ```azurecli-interactive - az mysql server configuration set --name audit_log_events --resource-group myresourcegroup --server mydemoserver --value "ADMIN,CONNECTION" - ``` --3. Add any MySQL users to be excluded from logging by updating the **audit_log_exclude_users** parameter. Specify users by providing their MySQL user name. - ```azurecli-interactive - az mysql server configuration set --name audit_log_exclude_users --resource-group myresourcegroup --server mydemoserver --value "azure_superuser" - ``` --4. Add any specific MySQL users to be included for logging by updating the **audit_log_include_users** parameter. Specify users by providing their MySQL user name. -- ```azurecli-interactive - az mysql server configuration set --name audit_log_include_users --resource-group myresourcegroup --server mydemoserver --value "sampleuser" - ``` --## Next steps -- Learn more about [audit logs](concepts-audit-logs.md) in Azure Database for MySQL-- Learn how to configure audit logs in the [Azure portal](how-to-configure-audit-logs-portal.md) |
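After applying the settings above, it can be handy to review every audit-related parameter in a single call. The following sketch uses a JMESPath filter on the configuration list; the server and resource group names follow the article's examples.

```azurecli-interactive
# List only the audit_log_* parameters and their current values
az mysql server configuration list \
  --resource-group myresourcegroup \
  --server mydemoserver \
  --query "[?starts_with(name, 'audit_log')].{name:name, value:value}" \
  --output table
```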
mysql | How To Configure Audit Logs Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-audit-logs-portal.md | - Title: Access audit logs - Azure portal - Azure Database for MySQL -description: This article describes how to configure and access the audit logs in Azure Database for MySQL from the Azure portal. ----- Previously updated : 06/20/2022---# Configure and access audit logs for Azure Database for MySQL in the Azure portal ----You can configure the [Azure Database for MySQL audit logs](concepts-audit-logs.md) and diagnostic settings from the Azure portal. --## Prerequisites --To step through this how-to guide, you need: --- [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md)--## Configure audit logging -->[!IMPORTANT] -> It is recommended to only log the event types and users required for your auditing purposes to ensure your server's performance is not heavily impacted. --Enable and configure audit logging. --1. Sign in to the [Azure portal](https://portal.azure.com/). --1. Select your Azure Database for MySQL server. --1. Under the **Settings** section in the sidebar, select **Server parameters**. - :::image type="content" source="./media/how-to-configure-audit-logs-portal/server-parameters.png" alt-text="Server parameters"::: --1. Update the **audit_log_enabled** parameter to ON. - :::image type="content" source="./media/how-to-configure-audit-logs-portal/audit-log-enabled.png" alt-text="Enable audit logs"::: --1. Select the [event types](concepts-audit-logs.md#configure-audit-logging) to be logged by updating the **audit_log_events** parameter. - :::image type="content" source="./media/how-to-configure-audit-logs-portal/audit-log-events.png" alt-text="Audit log events"::: --1. Add any MySQL users to be included or excluded from logging by updating the **audit_log_exclude_users** and **audit_log_include_users** parameters. Specify users by providing their MySQL user name. - :::image type="content" source="./media/how-to-configure-audit-logs-portal/audit-log-exclude-users.png" alt-text="Audit log exclude users"::: --1. Once you have changed the parameters, you can click **Save**. Or you can **Discard** your changes. - :::image type="content" source="./media/how-to-configure-audit-logs-portal/save-parameters.png" alt-text="Save"::: --## Set up diagnostic logs --1. Under the **Monitoring** section in the sidebar, select **Diagnostic settings**. --1. Click on "+ Add diagnostic setting" --1. Provide a diagnostic setting name. --1. Specify which data sinks to send the audit logs (storage account, event hub, and/or Log Analytics workspace). --1. Select "MySqlAuditLogs" as the log type. --1. Once you've configured the data sinks to pipe the audit logs to, you can click **Save**. --1. Access the audit logs by exploring them in the data sinks you configured. It may take up to 10 minutes for the logs to appear. --## Next steps --- Learn more about [audit logs](concepts-audit-logs.md) in Azure Database for MySQL-- Learn how to configure audit logs in the [Azure CLI](how-to-configure-audit-logs-cli.md) |
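The diagnostic-settings steps above can also be scripted instead of clicked through in the portal. The following is a hedged sketch, not part of the original article: the setting name is arbitrary, and the Log Analytics workspace resource ID is a placeholder you must replace with your own.

```azurecli-interactive
# Placeholder workspace ID for illustration; replace with your own resource IDs
serverId=$(az mysql server show --resource-group myresourcegroup --name mydemoserver --query id -o tsv)

az monitor diagnostic-settings create \
  --name mysql-audit-logs \
  --resource $serverId \
  --workspace "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --logs '[{"category":"MySqlAuditLogs","enabled":true}]'
```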
mysql | How To Configure Private Link Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-private-link-cli.md | - Title: Private Link - Azure CLI - Azure Database for MySQL -description: Learn how to configure private link for Azure Database for MySQL from Azure CLI ------ Previously updated : 06/20/2022---# Create and manage Private Link for Azure Database for MySQL using CLI ----A Private Endpoint is the fundamental building block for private link in Azure. It enables Azure resources, like Virtual Machines (VMs), to communicate privately with private link resources. In this article, you will learn how to use the Azure CLI to create a VM in an Azure Virtual Network and an Azure Database for MySQL server with an Azure private endpoint. --> [!NOTE] -> The private link feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers. ---- This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--## Create a resource group --Before you can create any resource, you have to create a resource group to host the Virtual Network. Create a resource group with [az group create](/cli/azure/group). This example creates a resource group named *myResourceGroup* in the *westeurope* location: --```azurecli-interactive -az group create --name myResourceGroup --location westeurope -``` --## Create a Virtual Network --Create a Virtual Network with [az network vnet create](/cli/azure/network/vnet). This example creates a default Virtual Network named *myVirtualNetwork* with one subnet named *mySubnet*: --```azurecli-interactive -az network vnet create \ - --name myVirtualNetwork \ - --resource-group myResourceGroup \ - --subnet-name mySubnet -``` --## Disable subnet private endpoint policies --Azure deploys resources to a subnet within a virtual network, so you need to create or update the subnet to disable private endpoint [network policies](../../private-link/disable-private-endpoint-network-policy.md). Update a subnet configuration named *mySubnet* with [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update): --```azurecli-interactive -az network vnet subnet update \ - --name mySubnet \ - --resource-group myResourceGroup \ - --vnet-name myVirtualNetwork \ - --disable-private-endpoint-network-policies true -``` --## Create the VM --Create a VM with az vm create. When prompted, provide a password to be used as the sign-in credentials for the VM. This example creates a VM named *myVm*: --```azurecli-interactive -az vm create \ - --resource-group myResourceGroup \ - --name myVm \ - --image Win2019Datacenter -``` --> [!NOTE] -> The public IP address of the VM. You use this address to connect to the VM from the internet in the next step. --## Create an Azure Database for MySQL server --Create an Azure Database for MySQL with the az mysql server create command. 
Remember that the name of your MySQL Server must be unique across Azure, so replace the placeholder value in brackets with your own unique value: --```azurecli-interactive -# Create a server in the resource group --az mysql server create --name mydemoserver --resource-group myResourceGroup --location westeurope --admin-user mylogin --admin-password <server_admin_password> --sku-name GP_Gen5_2 -``` --> [!NOTE] -> In some cases, the Azure Database for MySQL server and the VNet subnet are in different subscriptions. In these cases, you must ensure the following configurations: -> -> - Make sure that both subscriptions have the **Microsoft.DBforMySQL** resource provider registered. For more information, see [resource-manager-registration][resource-manager-portal] --## Create the Private Endpoint --Create a private endpoint for the MySQL server in your Virtual Network: --```azurecli-interactive -az network private-endpoint create \ - --name myPrivateEndpoint \ - --resource-group myResourceGroup \ - --vnet-name myVirtualNetwork \ - --subnet mySubnet \ - --private-connection-resource-id $(az resource show -g myResourceGroup -n mydemoserver --resource-type "Microsoft.DBforMySQL/servers" --query "id" -o tsv) \ - --group-id mysqlServer \ - --connection-name myConnection - ``` --## Configure the Private DNS Zone --Create a Private DNS Zone for the MySQL server domain and create an association link with the Virtual Network. --```azurecli-interactive -az network private-dns zone create --resource-group myResourceGroup \ - --name "privatelink.mysql.database.azure.com" -az network private-dns link vnet create --resource-group myResourceGroup \ - --zone-name "privatelink.mysql.database.azure.com" \ - --name MyDNSLink \ - --virtual-network myVirtualNetwork \ - --registration-enabled false --# Query for the network interface ID -networkInterfaceId=$(az network private-endpoint show --name myPrivateEndpoint --resource-group myResourceGroup --query 'networkInterfaces[0].id' -o tsv) --az resource show --ids $networkInterfaceId --api-version 2019-04-01 -o json -# Copy the values for privateIPAddress and FQDN that match the Azure Database for MySQL server name --# Create DNS records -az network private-dns record-set a create --name myserver --zone-name privatelink.mysql.database.azure.com --resource-group myResourceGroup -az network private-dns record-set a add-record --record-set-name myserver --zone-name privatelink.mysql.database.azure.com --resource-group myResourceGroup -a <Private IP Address> -``` --> [!NOTE] -> The FQDN in the customer DNS setting does not resolve to the private IP configured. You will have to set up a DNS zone for the configured FQDN as shown [here](../../dns/dns-operations-recordsets-portal.md). --## Connect to a VM from the internet --Connect to the VM *myVm* from the internet as follows: --1. In the portal's search bar, enter *myVm*. --1. Select the **Connect** button. **Connect to virtual machine** opens. --1. Select **Download RDP File**. Azure creates a Remote Desktop Protocol (*.rdp*) file and downloads it to your computer. --1. Open the downloaded *.rdp* file. -- 1. If prompted, select **Connect**. -- 1. Enter the username and password you specified when creating the VM. -- > [!NOTE] - > You may need to select **More choices** > **Use a different account** to specify the credentials you entered when you created the VM. --1. Select **OK**. --1. You may receive a certificate warning during the sign-in process. 
If you receive a certificate warning, select **Yes** or **Continue**. --1. Once the VM desktop appears, minimize it to go back to your local desktop. --## Access the MySQL server privately from the VM --1. In the Remote Desktop of *myVM*, open PowerShell. --2. Enter  `nslookup mydemomysqlserver.privatelink.mysql.database.azure.com`. -- You'll receive a message similar to this: -- ```azurepowershell - Server: UnKnown - Address: 168.63.129.16 - Non-authoritative answer: - Name: mydemomysqlserver.privatelink.mysql.database.azure.com - Address: 10.1.3.4 - ``` --3. Test the private link connection for the MySQL server using any available client. In the example below I have used [MySQL Workbench](https://dev.mysql.com/doc/workbench/en/wb-installing-windows.html) to do the operation. --4. In **New connection**, enter or select this information: -- | Setting | Value | - | - | -- | - | Connection Name| Select the connection name of your choice.| - | Hostname | Select *mydemoserver.privatelink.mysql.database.azure.com* | - | Username | Enter username as *username@servername* which is provided during the MySQL server creation. | - | Password | Enter a password provided during the MySQL server creation. | - || --5. Select Connect. --6. Browse databases from left menu. --7. (Optionally) Create or query information from the MySQL database. --8. Close the remote desktop connection to myVm. --## Clean up resources --When no longer needed, you can use az group delete to remove the resource group and all the resources it has: --```azurecli-interactive -az group delete --name myResourceGroup --yes -``` --## Next steps --- Learn more about [What is Azure private endpoint](../../private-link/private-endpoint-overview.md)--<!-- Link references, to text, Within this same GitHub repo. --> -[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md |
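As a follow-up to the private link CLI article above, you can confirm that the private endpoint connection is approved and note the NIC that carries the private IP before testing from the VM. This is a minimal sketch; the property paths reflect the current `az network private-endpoint show` output and are worth verifying against your CLI version.

```azurecli-interactive
# Show the connection approval state and the network interface backing the private endpoint
az network private-endpoint show \
  --name myPrivateEndpoint \
  --resource-group myResourceGroup \
  --query "{state:privateLinkServiceConnections[0].privateLinkServiceConnectionState.status, nic:networkInterfaces[0].id}" \
  --output json
```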
mysql | How To Configure Private Link Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-private-link-portal.md | - Title: Private Link - Azure portal - Azure Database for MySQL -description: Learn how to configure private link for Azure Database for MySQL from Azure portal ----- Previously updated : 06/20/2022---# Create and manage Private Link for Azure Database for MySQL using Portal ----A Private Endpoint is the fundamental building block for private link in Azure. It enables Azure resources, like Virtual Machines (VMs), to communicate privately with private link resources. In this article, you will learn how to use the Azure portal to create a VM in an Azure Virtual Network and an Azure Database for MySQL server with an Azure private endpoint. --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. --> [!NOTE] -> The private link feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers. --## Sign in to Azure -Sign in to the [Azure portal](https://portal.azure.com). --## Create an Azure VM --In this section, you will create virtual network and the subnet to host the VM that is used to access your Private Link resource (a MySQL server in Azure). --### Create the virtual network -In this section, you will create a Virtual Network and the subnet to host the VM that is used to access your Private Link resource. --1. On the upper-left side of the screen, select **Create a resource** > **Networking** > **Virtual network**. -2. In **Create virtual network**, enter or select this information: -- | Setting | Value | - | - | -- | - | Name | Enter *MyVirtualNetwork*. | - | Address space | Enter *10.1.0.0/16*. | - | Subscription | Select your subscription.| - | Resource group | Select **Create new**, enter *myResourceGroup*, then select **OK**. | - | Location | Select **West Europe**.| - | Subnet - Name | Enter *mySubnet*. | - | Subnet - Address range | Enter *10.1.0.0/24*. | - ||| -3. Leave the rest as default and select **Create**. --### Create Virtual Machine --1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Compute** > **Virtual Machine**. --2. In **Create a virtual machine - Basics**, enter or select this information: -- | Setting | Value | - | - | -- | - | **PROJECT DETAILS** | | - | Subscription | Select your subscription. | - | Resource group | Select **myResourceGroup**. You created this in the previous section. | - | **INSTANCE DETAILS** | | - | Virtual machine name | Enter *myVm*. | - | Region | Select **West Europe**. | - | Availability options | Leave the default **No infrastructure redundancy required**. | - | Image | Select **Windows Server 2019 Datacenter**. | - | Size | Leave the default **Standard DS1 v2**. | - | **ADMINISTRATOR ACCOUNT** | | - | Username | Enter a username of your choosing. | - | Password | Enter a password of your choosing. The password must be at least 12 characters long and meet the [defined complexity requirements](../../virtual-machines/windows/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).| - | Confirm Password | Reenter password. | - | **INBOUND PORT RULES** | | - | Public inbound ports | Leave the default **None**. | - | **SAVE MONEY** | | - | Already have a Windows license? 
| Leave the default **No**. | - ||| --1. Select **Next: Disks**. --1. In **Create a virtual machine - Disks**, leave the defaults and select **Next: Networking**. --1. In **Create a virtual machine - Networking**, select this information: -- | Setting | Value | - | - | -- | - | Virtual network | Leave the default **MyVirtualNetwork**. | - | Address space | Leave the default **10.1.0.0/24**.| - | Subnet | Leave the default **mySubnet (10.1.0.0/24)**.| - | Public IP | Leave the default **(new) myVm-ip**. | - | Public inbound ports | Select **Allow selected ports**. | - | Select inbound ports | Select **HTTP** and **RDP**.| - ||| ---1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration. --1. When you see the **Validation passed** message, select **Create**. --## Create an Azure Database for MySQL --In this section, you will create an Azure Database for MySQL server in Azure. --1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Databases** > **Azure Database for MySQL**. --1. In **Azure Database for MySQL** provide these information: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **myResourceGroup**. You created this in the previous section.| - | **Server details** | | - |Server name | Enter *myServer*. If this name is taken, create a unique name.| - | Admin username| Enter an administrator name of your choosing. | - | Password | Enter a password of your choosing. The password must be at least 8 characters long and meet the defined requirements. | - | Location | Select an Azure region where you want to want your MySQL Server to reside. | - |Version | Select the database version of the MySQL server that is required.| - | Compute + Storage| Select the pricing tier that is needed for the server based on the workload. | - ||| - -7. Select **OK**. -8. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration. -9. When you see the Validation passed message, select **Create**. -10. When you see the Validation passed message, select Create. --> [!NOTE] -> In some cases the Azure Database for MySQL and the VNet-subnet are in different subscriptions. In these cases you must ensure the following configurations: -> - Make sure that both the subscription has the **Microsoft.DBforMySQL** resource provider registered. For more information refer [resource-manager-registration][resource-manager-portal] --## Create a private endpoint --In this section, you will create a MySQL server and add a private endpoint to it. --1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Networking** > **Private Link**. --2. In **Private Link Center - Overview**, on the option to **Build a private connection to a service**, select **Start**. -- :::image type="content" source="media/concepts-data-access-and-security-private-link/private-link-overview.png" alt-text="Private Link overview"::: --1. In **Create a private endpoint - Basics**, enter or select this information: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **myResourceGroup**. You created this in the previous section.| - | **Instance Details** | | - | Name | Enter *myPrivateEndpoint*. If this name is taken, create a unique name. | - |Region|Select **West Europe**.| - ||| --5. 
Select **Next: Resource**. -6. In **Create a private endpoint - Resource**, enter or select this information: -- | Setting | Value | - | - | -- | - |Connection method | Select connect to an Azure resource in my directory.| - | Subscription| Select your subscription. | - | Resource type | Select **Microsoft.DBforMySQL/servers**. | - | Resource |Select *myServer*| - |Target sub-resource |Select *mysqlServer*| - ||| -7. Select **Next: Configuration**. -8. In **Create a private endpoint - Configuration**, enter or select this information: -- | Setting | Value | - | - | -- | - |**NETWORKING**| | - | Virtual network| Select *MyVirtualNetwork*. | - | Subnet | Select *mySubnet*. | - |**PRIVATE DNS INTEGRATION**|| - |Integrate with private DNS zone |Select **Yes**. | - |Private DNS Zone |Select *(New)privatelink.mysql.database.azure.com* | - ||| -- > [!NOTE] - > Use the predefined private DNS zone for your service or provide your preferred DNS zone name. Refer to the [Azure services DNS zone configuration](../../private-link/private-endpoint-dns.md) for details. --1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration. -2. When you see the **Validation passed** message, select **Create**. -- :::image type="content" source="media/concepts-data-access-and-security-private-link/show-mysql-private-link.png" alt-text="Private Link created"::: -- > [!NOTE] - > The FQDN in the customer DNS setting does not resolve to the private IP configured. You will have to setup a DNS zone for the configured FQDN as shown [here](../../dns/dns-operations-recordsets-portal.md). --## Connect to a VM using Remote Desktop (RDP) ---After you've created **myVm**, connect to it from the internet as follows: --1. In the portal's search bar, enter *myVm*. --1. Select the **Connect** button. After selecting the **Connect** button, **Connect to virtual machine** opens. --1. Select **Download RDP File**. Azure creates a Remote Desktop Protocol (*.rdp*) file and downloads it to your computer. --1. Open the *downloaded.rdp* file. -- 1. If prompted, select **Connect**. -- 1. Enter the username and password you specified when creating the VM. -- > [!NOTE] - > You may need to select **More choices** > **Use a different account**, to specify the credentials you entered when you created the VM. --1. Select **OK**. --1. You may receive a certificate warning during the sign-in process. If you receive a certificate warning, select **Yes** or **Continue**. --1. Once the VM desktop appears, minimize it to go back to your local desktop. --## Access the MySQL server privately from the VM --1. In the Remote Desktop of *myVM*, open PowerShell. --2. Enter `nslookup myServer.privatelink.mysql.database.azure.com`. -- You'll receive a message similar to this: - ```azurepowershell - Server: UnKnown - Address: 168.63.129.16 - Non-authoritative answer: - Name: myServer.privatelink.mysql.database.azure.com - Address: 10.1.3.4 - ``` - > [!NOTE] - > If public access is disabled in the firewall settings in Azure Database for MySQL - Single Server. These ping and telnet tests will succeed regardless of the firewall settings. Those tests will ensure the network connectivity. --3. Test the private link connection for the MySQL server using any available client. In the example below I have used [MySQL Workbench](https://dev.mysql.com/doc/workbench/en/wb-installing-windows.html) to do the operation. --4. 
In **New connection**, enter or select this information: -- | Setting | Value | - | - | -- | - | Server type| Select **MySQL**.| - | Server name| Select *myServer.privatelink.mysql.database.azure.com* | - | User name | Enter username as username@servername which is provided during the MySQL server creation. | - |Password |Enter a password provided during the MySQL server creation. | - |SSL|Select **Required**.| - || --5. Select Connect. --6. Browse databases from left menu. --7. (Optionally) Create or query information from the MySQL server. --8. Close the remote desktop connection to myVm. --## Clean up resources -When you're done using the private endpoint, MySQL server, and the VM, delete the resource group and all of the resources it contains: --1. Enter *myResourceGroup* in the **Search** box at the top of the portal and select *myResourceGroup* from the search results. -2. Select **Delete resource group**. -3. Enter myResourceGroup for **TYPE THE RESOURCE GROUP NAME** and select **Delete**. --## Next steps --In this how-to, you created a VM on a virtual network, an Azure Database for MySQL, and a private endpoint for private access. You connected to one VM from the internet and securely communicated to the MySQL server using Private Link. To learn more about private endpoints, see [What is Azure private endpoint](../../private-link/private-endpoint-overview.md). --<!-- Link references, to text, Within this same GitHub repo. --> -[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md |
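For the portal-based private link article above, you can double-check the A record that the private DNS zone integration created by listing the zone's record sets from the CLI. A sketch assuming the zone name and resource group used in the article.

```azurecli-interactive
# List A records in the private DNS zone created by the portal integration
az network private-dns record-set a list \
  --zone-name privatelink.mysql.database.azure.com \
  --resource-group myResourceGroup \
  --output table
```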
mysql | How To Configure Server Logs In Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-logs-in-cli.md | - Title: Access slow query logs - Azure CLI - Azure Database for MySQL -description: This article describes how to access the slow query logs in Azure Database for MySQL by using the Azure CLI. ------ Previously updated : 06/20/2022---# Configure and access slow query logs by using Azure CLI ----You can download the Azure Database for MySQL slow query logs by using Azure CLI, the Azure command-line utility. --## Prerequisites -To step through this how-to guide, you need: -- [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-cli.md)-- The [Azure CLI](/cli/azure/install-azure-cli) or Azure Cloud Shell in the browser--## Configure logging -You can configure the server to access the MySQL slow query log by taking the following steps: -1. Turn on slow query logging by setting the **slow\_query\_log** parameter to ON. -2. Select where to output the logs to using **log\_output**. To send logs to both local storage and Azure Monitor Diagnostic Logs, select **File**. To send logs only to Azure Monitor Logs, select **None** -3. Adjust other parameters, such as **long\_query\_time** and **log\_slow\_admin\_statements**. --To learn how to set the value of these parameters through Azure CLI, see [How to configure server parameters](how-to-configure-server-parameters-using-cli.md). --For example, the following CLI command turns on the slow query log, sets the long query time to 10 seconds, and then turns off the logging of the slow admin statement. Finally, it lists the configuration options for your review. -```azurecli-interactive -az mysql server configuration set --name slow_query_log --resource-group myresourcegroup --server mydemoserver --value ON -az mysql server configuration set --name log_output --resource-group myresourcegroup --server mydemoserver --value FILE -az mysql server configuration set --name long_query_time --resource-group myresourcegroup --server mydemoserver --value 10 -az mysql server configuration set --name log_slow_admin_statements --resource-group myresourcegroup --server mydemoserver --value OFF -az mysql server configuration list --resource-group myresourcegroup --server mydemoserver -``` --## List logs for Azure Database for MySQL server -If **log_output** is configured to "File", you can access logs directly from the server's local storage. To list the available slow query log files for your server, run the [az mysql server-logs list](/cli/azure/mysql/server-logs#az-mysql-server-logs-list) command. --You can list the log files for server **mydemoserver.mysql.database.azure.com** under the resource group **myresourcegroup**. Then direct the list of log files to a text file called **log\_files\_list.txt**. -```azurecli-interactive -az mysql server-logs list --resource-group myresourcegroup --server mydemoserver > log_files_list.txt -``` -## Download logs from the server -If **log_output** is configured to "File", you can download individual log files from your server with the [az mysql server-logs download](/cli/azure/mysql/server-logs#az-mysql-server-logs-download) command. --Use the following example to download the specific log file for the server **mydemoserver.mysql.database.azure.com** under the resource group **myresourcegroup** to your local environment. 
-```azurecli-interactive -az mysql server-logs download --name 20170414-mydemoserver-mysql.log --resource-group myresourcegroup --server mydemoserver -``` --## Next steps -- Learn about [slow query logs in Azure Database for MySQL](concepts-server-logs.md). |
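Building on the list and download commands above, you can chain them to fetch the newest slow query log in one step. This sketch assumes the list output ends with the most recent file, which is worth verifying for your server before relying on it.

```azurecli-interactive
# Grab the name of the last log file in the list and download it
latestLog=$(az mysql server-logs list --resource-group myresourcegroup --server mydemoserver --query "[-1].name" -o tsv)

az mysql server-logs download --name $latestLog --resource-group myresourcegroup --server mydemoserver
```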
mysql | How To Configure Server Logs In Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-logs-in-portal.md | - Title: Access slow query logs - Azure portal - Azure Database for MySQL -description: This article describes how to configure and access the slow logs in Azure Database for MySQL from the Azure portal. ----- Previously updated : 06/20/2022---# Configure and access slow query logs from the Azure portal ----You can configure, list, and download the [Azure Database for MySQL slow query logs](concepts-server-logs.md) from the Azure portal. --## Prerequisites -The steps in this article require that you have [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md). --## Configure logging -Configure access to the MySQL slow query log. --1. Sign in to the [Azure portal](https://portal.azure.com/). --2. Select your Azure Database for MySQL server. --3. Under the **Monitoring** section in the sidebar, select **Server logs**. - :::image type="content" source="./media/how-to-configure-server-logs-in-portal/1-select-server-logs-configure.png" alt-text="Screenshot of Server logs options"::: --4. To see the server parameters, select **Click here to enable logs and configure log parameters**. --5. Turn **slow_query_log** to **ON**. --6. Select where to output the logs to using **log_output**. To send logs to both local storage and Azure Monitor Diagnostic Logs, select **File**. --7. Consider setting "long_query_time" which represents query time threshold for the queries that will be collected in the slow query log file, The minimum and default values of long_query_time are 0 and 10, respectively. --8. Adjust other parameters, such as log_slow_admin_statements to log administrative statements. By default, administrative statements are not logged, nor are queries that do not use indexes for lookups. --9. Select **Save**. -- :::image type="content" source="./media/how-to-configure-server-logs-in-portal/3-save-discard.png" alt-text="Screenshot of slow query log parameters and save."::: --From the **Server Parameters** page, you can return to the list of logs by closing the page. --## View list and download logs -After logging begins, you can view a list of available slow query logs, and download individual log files. --1. Open the Azure portal. --2. Select your Azure Database for MySQL server. --3. Under the **Monitoring** section in the sidebar, select **Server logs**. The page shows a list of your log files. -- :::image type="content" source="./media/how-to-configure-server-logs-in-portal/4-server-logs-list.png" alt-text="Screenshot of Server logs page, with list of logs highlighted"::: -- > [!TIP] - > The naming convention of the log is **mysql-slow-< your server name>-yyyymmddhh.log**. The date and time used in the file name is the time when the log was issued. Log files are rotated every 24 hours or 7.5 GB, whichever comes first. --4. If needed, use the search box to quickly narrow down to a specific log, based on date and time. The search is on the name of the log. --5. To download individual log files, select the down-arrow icon next to each log file in the table row. -- :::image type="content" source="./media/how-to-configure-server-logs-in-portal/5-download.png" alt-text="Screenshot of Server logs page, with down-arrow icon highlighted"::: --## Set up diagnostic logs --1. Under the **Monitoring** section in the sidebar, select **Diagnostic settings** > **Add diagnostic settings**. 
-- :::image type="content" source="./media/how-to-configure-server-logs-in-portal/add-diagnostic-setting.png" alt-text="Screenshot of Diagnostic settings options"::: --2. Provide a diagnostic setting name. --3. Specify which data sinks to send the slow query logs (storage account, event hub, or Log Analytics workspace). --4. Select **MySqlSlowLogs** as the log type. --5. After you've configured the data sinks to pipe the slow query logs to, select **Save**. --6. Access the slow query logs by exploring them in the data sinks you configured. It can take up to 10 minutes for the logs to appear. --## Next steps -- See [Access slow query Logs in CLI](how-to-configure-server-logs-in-cli.md) to learn how to download slow query logs programmatically.-- Learn more about [slow query logs](concepts-server-logs.md) in Azure Database for MySQL.-- For more information about the parameter definitions and MySQL logging, see the MySQL documentation on [logs](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html). |
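The diagnostic setting described above can also be created from the Azure CLI, for example to route MySqlSlowLogs to a storage account. This is a hedged sketch rather than the article's procedure; the setting name and storage account name are placeholders.

```azurecli-interactive
# Placeholder names for illustration; substitute your own server and storage account
serverId=$(az mysql server show --resource-group myresourcegroup --name mydemoserver --query id -o tsv)
storageId=$(az storage account show --resource-group myresourcegroup --name mystorageaccount --query id -o tsv)

az monitor diagnostic-settings create \
  --name mysql-slow-logs \
  --resource $serverId \
  --storage-account $storageId \
  --logs '[{"category":"MySqlSlowLogs","enabled":true}]'
```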
mysql | How To Configure Server Parameters Using Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-parameters-using-cli.md | - Title: Configure server parameters - Azure CLI - Azure Database for MySQL -description: This article describes how to configure the service parameters in Azure Database for MySQL using the Azure CLI command line utility. ----- Previously updated : 06/20/2022----# Configure server parameters in Azure Database for MySQL using the Azure CLI ----You can list, show, and update configuration parameters for an Azure Database for MySQL server by using Azure CLI, the Azure command-line utility. A subset of engine configurations is exposed at the server-level and can be modified. -->[!NOTE] -> Server parameters can be updated globally at the server-level, use the [Azure CLI](./how-to-configure-server-parameters-using-cli.md), [PowerShell](./how-to-configure-server-parameters-using-powershell.md), or [Azure portal](./how-to-server-parameters.md) --## Prerequisites -To step through this how-to guide, you need: -- [An Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-cli.md)-- [Azure CLI](/cli/azure/install-azure-cli) command-line utility or use the Azure Cloud Shell in the browser.--## List server configuration parameters for Azure Database for MySQL server -To list all modifiable parameters in a server and their values, run the [az mysql server configuration list](/cli/azure/mysql/server/configuration#az-mysql-server-configuration-list) command. --You can list the server configuration parameters for the server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup**. -```azurecli-interactive -az mysql server configuration list --resource-group myresourcegroup --server mydemoserver -``` -For the definition of each of the listed parameters, see the MySQL reference section on [Server System Variables](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html). --## Show server configuration parameter details -To show details about a particular configuration parameter for a server, run the [az mysql server configuration show](/cli/azure/mysql/server/configuration#az-mysql-server-configuration-show) command. --This example shows details of the **slow\_query\_log** server configuration parameter for server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup.** -```azurecli-interactive -az mysql server configuration show --name slow_query_log --resource-group myresourcegroup --server mydemoserver -``` -## Modify a server configuration parameter value -You can also modify the value of a certain server configuration parameter, which updates the underlying configuration value for the MySQL server engine. To update the configuration, use the [az mysql server configuration set](/cli/azure/mysql/server/configuration#az-mysql-server-configuration-set) command. --To update the **slow\_query\_log** server configuration parameter of server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup.** -```azurecli-interactive -az mysql server configuration set --name slow_query_log --resource-group myresourcegroup --server mydemoserver --value ON -``` -If you want to reset the value of a configuration parameter, omit the optional `--value` parameter, and the service applies the default value. 
For the example above, it would look like: -```azurecli-interactive -az mysql server configuration set --name slow_query_log --resource-group myresourcegroup --server mydemoserver -``` -This code resets the **slow\_query\_log** configuration to the default value **OFF**. --## Setting parameters not listed -If the server parameter you want to update is not listed in the Azure portal, you can optionally set the parameter at the connection level using `init_connect`. This sets the server parameters for each client connecting to the server. --Update the **init\_connect** server configuration parameter of server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup** to set values such as character set. |
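The excerpt above ends before showing the `init_connect` command itself. Purely as an illustration (not the article's original example), setting it from the CLI might look like the following; the SET statement is a placeholder you would adapt to the session settings you actually need.

```azurecli-interactive
# Illustrative only: adjust the SET statements to the character set or other session settings you require
az mysql server configuration set \
  --name init_connect \
  --resource-group myresourcegroup \
  --server mydemoserver \
  --value "SET character_set_connection=utf8mb4;"
```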