Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
ai-services | Liveness | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/Tutorials/liveness.md | Last updated 11/06/2023 # Tutorial: Detect liveness in faces -Face Liveness detection can be used to determine if a face in an input video stream is real (live) or fake (spoof). It is a crucial building block in a biometric authentication system to prevent spoofing attacks from imposters trying to gain access to the system using a photograph, video, mask, or other means to impersonate another person. +Face Liveness detection can be used to determine if a face in an input video stream is real (live) or fake (spoofed). It's an important building block in a biometric authentication system to prevent imposters from gaining access to the system using a photograph, video, mask, or other means to impersonate another person. -The goal of liveness detection is to ensure that the system is interacting with a physically present live person at the time of authentication. Such systems have become increasingly important with the rise of digital finance, remote access control, and online identity verification processes. +The goal of liveness detection is to ensure that the system is interacting with a physically present live person at the time of authentication. Such systems are increasingly important with the rise of digital finance, remote access control, and online identity verification processes. -The liveness detection solution successfully defends against various spoof types ranging from paper printouts, 2d/3d masks, and spoof presentations on phones and laptops. Liveness detection is an active area of research, with continuous improvements being made to counteract increasingly sophisticated spoofing attacks over time. Continuous improvements will be rolled out to the client and the service components over time as the overall solution gets more robust to new types of attacks. +The Azure AI Face liveness detection solution successfully defends against various spoof types, including paper printouts, 2D/3D masks, and spoof presentations on phones and laptops. Liveness detection is an active area of research, with continuous improvements being made to counteract increasingly sophisticated spoofing attacks. These improvements will be rolled out to the client and the service components over time as the overall solution gets more robust to new types of attacks. [!INCLUDE [liveness-sdk-gate](../includes/liveness-sdk-gate.md)] + ## Introduction The liveness solution integration involves two distinct components: a frontend mobile/web application and an app server/orchestrator. The liveness solution integration involves two distinct components: a frontend m Additionally, we combine face verification with liveness detection to verify whether the person is the specific person you designated. The following table helps describe details of the liveness detection features: | Feature | Description |-| -- | -- | +| -- |--| | Liveness detection | Determine whether an input is real or fake, and only the app server has the authority to start the liveness check and query the result. | | Liveness detection with face verification | Determine whether an input is real or fake and verify the identity of the person based on a reference image you provided. Either the app server or the frontend application can provide a reference image. Only the app server has the authority to initiate the liveness check and query the result. 
| --## Get started - This tutorial demonstrates how to operate a frontend application and an app server to perform [liveness detection](#perform-liveness-detection) and [liveness detection with face verification](#perform-liveness-detection-with-face-verification) across various language SDKs. -### Prerequisites ++## Prerequisites - Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/) - Your Azure account must have a **Cognitive Services Contributor** role assigned in order for you to agree to the responsible AI terms and create a resource. To get this role assigned to your account, follow the steps in the [Assign roles](/azure/role-based-access-control/role-assignments-steps) documentation, or contact your administrator. This tutorial demonstrates how to operate a frontend application and an app serv - You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production. - Access to the Azure AI Vision Face Client SDK for mobile (iOS and Android) and web. To get started, you need to apply for the [Face Recognition Limited Access features](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to get access to the SDK. For more information, see the [Face Limited Access](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext) page. -### Setup frontend applications and app servers to perform liveness detection +## Set up frontend applications and app servers to perform liveness detection -We provide SDKs in different languages for frontend applications and app servers. See the following instructions to setup your frontend applications and app servers. +We provide SDKs in different languages for frontend applications and app servers. See the following instructions to set up your frontend applications and app servers. -#### Integrate liveness into frontend application +### Download SDK for frontend application -Once you have access to the SDK, follow instruction in the [azure-ai-vision-sdk](https://github.com/Azure-Samples/azure-ai-vision-sdk) GitHub repository to integrate the UI and the code into your native mobile application. The liveness SDK supports Java/Kotlin for Android mobile applications, Swift for iOS mobile applications and JavaScript for web applications: +Once you have access to the SDK, follow instructions in the [azure-ai-vision-sdk](https://github.com/Azure-Samples/azure-ai-vision-sdk) GitHub repository to integrate the UI and the code into your native mobile application. The liveness SDK supports Java/Kotlin for Android mobile applications, Swift for iOS mobile applications, and JavaScript for web applications: - For Swift iOS, follow the instructions in the [iOS sample](https://aka.ms/azure-ai-vision-face-liveness-client-sdk-ios-readme) - For Kotlin/Java Android, follow the instructions in the [Android sample](https://aka.ms/liveness-sample-java) - For JavaScript Web, follow the instructions in the [Web sample](https://aka.ms/liveness-sample-web) -Once you've added the code into your application, the SDK handles starting the camera, guiding the end-user to adjust their position, composing the liveness payload, and calling the Azure AI Face cloud service to process the liveness payload. 
+Once you've added the code into your application, the SDK handles starting the camera, guiding the end-user in adjusting their position, composing the liveness payload, and calling the Azure AI Face cloud service to process the liveness payload. -#### Download Azure AI Face client library for an app server +### Download Azure AI Face client library for app server The app server/orchestrator is responsible for controlling the lifecycle of a liveness session. The app server has to create a session before performing liveness detection, and then it can query the result and delete the session when the liveness check is finished. We offer a library in various languages for easily implementing your app server. Follow these steps to install the package you want: - For C#, follow the instructions in the [dotnet readme](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/face/Azure.AI.Vision.Face/README.md) The app server/orchestrator is responsible for controlling the lifecycle of a li - For Python, follow the instructions in the [Python readme](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/face/azure-ai-vision-face/README.md) - For JavaScript, follow the instructions in the [JavaScript readme](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/face/ai-vision-face-rest/README.md) -##### Create environment variables +#### Create environment variables [!INCLUDE [create environment variables](../includes/face-environment-variables.md)] -### Perform liveness detection +## Perform liveness detection The high-level steps involved in liveness orchestration are illustrated below: The high-level steps involved in liveness orchestration are illustrated below: -1. The SDK then starts the camera, guides the user to position correctly and then prepares the payload to call the liveness detection service endpoint. +1. The SDK then starts the camera, guides the user to position correctly, and then prepares the payload to call the liveness detection service endpoint. 1. The SDK calls the Azure AI Vision Face service to perform the liveness detection. Once the service responds, the SDK notifies the frontend application that the liveness check has been completed. The high-level steps involved in liveness orchestration are illustrated below: -### Perform liveness detection with face verification +## Perform liveness detection with face verification Combining face verification with liveness detection enables biometric verification of a particular person of interest with an added guarantee that the person is physically present in the system. There are two parts to integrating liveness with verification: There are two parts to integrating liveness with verification: :::image type="content" source="../media/liveness/liveness-verify-diagram.jpg" alt-text="Diagram of the liveness-with-face-verification workflow of Azure AI Face." lightbox="../media/liveness/liveness-verify-diagram.jpg"::: -#### Select a good reference image +### Select a good reference image Use the following tips to ensure that your input images give the most accurate recognition results. -##### Technical requirements: +#### Technical requirements [!INCLUDE [identity-input-technical](../includes/identity-input-technical.md)] * You can utilize the `qualityForRecognition` attribute in the [face detection](../how-to/identity-detect-faces.md) operation when using applicable detection models as a general guideline of whether the image is likely of sufficient quality to attempt face recognition on. 
Only `"high"` quality images are recommended for person enrollment, and quality at or above `"medium"` is recommended for identification scenarios. -##### Composition requirements: +#### Composition requirements - Photo is clear and sharp, not blurry, pixelated, distorted, or damaged. - Photo is not altered to remove face blemishes or alter face appearance. - Photo must be in a supported RGB color format (JPEG, PNG, WEBP, BMP). The recommended face size is 200 x 200 pixels; face sizes larger than 200 x 200 pixels won't result in better AI quality. The image must be no larger than 6 MB. Use the following tips to ensure that your input images give the most accurate r - Background should be uniform and plain, free of any shadows. - Face should be centered within the image and fill at least 50% of the image. -#### Set up the orchestration of liveness with verification. +### Set up the orchestration of liveness with verification The high-level steps involved in liveness with verification orchestration are illustrated below: 1. Providing the verification reference image by either of the following two methods: The high-level steps involved in liveness with verification orchestration are il -### Clean up resources +## Clean up resources If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. |
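The session lifecycle that the app server controls (create a session, hand the frontend a token to run the check, then query the result and delete the session) can be sketched in TypeScript. This is a minimal sketch under stated assumptions, not the client library itself: the REST route, API version, and payload fields shown are illustrative, and the authoritative ones are in the language-specific READMEs linked above; the environment variables are assumed here to be named `FACE_ENDPOINT` and `FACE_APIKEY`.

```typescript
import { randomUUID } from "node:crypto";

// Minimal app-server sketch (Node 18+, global fetch). The route below is an
// assumption for illustration; confirm it against the client library README.
const endpoint = process.env["FACE_ENDPOINT"]!; // e.g. https://<resource>.cognitiveservices.azure.com
const apiKey = process.env["FACE_APIKEY"]!;
const base = `${endpoint}/face/v1.1-preview.1/detectLiveness/singleModal/sessions`;

async function createLivenessSession() {
  const res = await fetch(base, {
    method: "POST",
    headers: { "Ocp-Apim-Subscription-Key": apiKey, "Content-Type": "application/json" },
    body: JSON.stringify({
      livenessOperationMode: "Passive",
      deviceCorrelationId: randomUUID(),
    }),
  });
  // The response carries a session ID and a short-lived auth token; hand only
  // the token to the frontend SDK so the app server keeps authority over the session.
  return res.json() as Promise<{ sessionId: string; authToken: string }>;
}

async function getLivenessResult(sessionId: string) {
  const res = await fetch(`${base}/${sessionId}`, {
    headers: { "Ocp-Apim-Subscription-Key": apiKey },
  });
  return res.json(); // liveness decision, available after the frontend finishes the check
}

async function deleteLivenessSession(sessionId: string) {
  await fetch(`${base}/${sessionId}`, {
    method: "DELETE",
    headers: { "Ocp-Apim-Subscription-Key": apiKey },
  });
}
```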
ai-services | Model Customization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/model-customization.md | Begin by going to [Vision Studio](https://portal.vision.cognitive.azure.com/) an Then, sign in with your Azure account and select your Vision resource. If you don't have one, you can create one from this screen. -> [!IMPORTANT] -> To train a custom model in Vision Studio, your Azure subscription needs to be approved for access. Please request access using [this form](https://aka.ms/visionaipublicpreview). :::image type="content" source="../media/customization/select-resource.png" alt-text="Screenshot of the select resource screen."::: |
ai-services | Overview Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview-identity.md | This documentation contains the following types of articles: * The [quickstarts](./quickstarts-sdk/identity-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time. * The [how-to guides](./how-to/identity-detect-faces.md) contain instructions for using the service in more specific or customized ways. * The [conceptual articles](./concept-face-detection.md) provide in-depth explanations of the service's functionality and features.-* The [tutorials](./enrollment-overview.md) are longer guides that show you how to use this service as a component in broader business solutions. +* The [tutorials](./Tutorials/liveness.md) are longer guides that show you how to use this service as a component in broader business solutions. For a more structured approach, follow a Training module for Face. * [Detect and analyze faces with the Face service](/training/modules/detect-analyze-faces/) |
ai-studio | Deploy Models Mistral | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-mistral.md | -Mistral AI offers two categories of models in [Azure AI Studio](https://ai.azure.com): +Mistral AI offers two categories of models in [Azure AI Studio](https://ai.azure.com). These models are available in the [model catalog](model-catalog-overview.md): -* __Premium models__: Mistral Large and Mistral Small. These models are available as serverless APIs with pay-as-you-go token-based billing in the AI Studio model catalog. -* __Open models__: Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01. These models are also available in the AI Studio model catalog and can be deployed to managed compute in your own Azure subscription. +* __Premium models__: Mistral Large and Mistral Small. These models can be deployed as serverless APIs with pay-as-you-go token-based billing. +* __Open models__: Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01. These models can be deployed to managed compute in your own Azure subscription. -You can browse the Mistral family of models in the [Model Catalog](model-catalog-overview.md) by filtering on the Mistral collection. +You can browse the Mistral family of models in the model catalog by filtering on the Mistral collection. ## Mistral family of models Certain models in the model catalog can be deployed as a serverless API with pay ### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An [Azure AI Studio hub](../how-to/create-azure-ai-resource.md).+- An [AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for eligible models in the Mistral family is only available with hubs created in these regions: ++ - East US + - East US 2 + - North Central US + - South Central US + - West US + - West US 3 + - Sweden Central ++ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md). - > [!IMPORTANT] - > The serverless API model deployment offering for eligible models in the Mistral family is only available in hubs created in the **East US 2** and **Sweden Central** regions. - An [Azure AI Studio project](../how-to/create-projects.md). - Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md). To create a deployment: :::image type="content" source="../media/deploy-monitor/mistral/mistral-large-deploy-pay-as-you-go.png" alt-text="A screenshot showing how to deploy a model as a serverless API." lightbox="../media/deploy-monitor/mistral/mistral-large-deploy-pay-as-you-go.png"::: -1. Select the project in which you want to deploy your model. To deploy the Mistral model, your project must be in the *EastUS2* or *Sweden Central* region. +1. Select the project in which you want to deploy your model. 
To use the serverless API model deployment offering, your project must belong to one of the regions listed in the [prerequisites](#prerequisites). 1. In the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use. 1. Select the **Pricing and terms** tab to learn about pricing for the selected model. 1. Select the **Subscribe and Deploy** button. If this is your first time deploying the model in the project, you have to subscribe your project for the particular offering. This step requires that your account has the **Azure AI Developer role** permissions on the resource group, as listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering of the model, which allows you to control and monitor spending. Currently, you can have only one deployment for each model within a project. You can consume Mistral family models by using the chat API. For more information on using the APIs, see the [reference](#reference-for-mistral-family-of-models-deployed-as-a-service) section. -### Reference for Mistral family of models deployed as a service +## Reference for Mistral family of models deployed as a service Mistral models accept both the [Azure AI Model Inference API](../reference/reference-model-inference-api.md) on the route `/chat/completions` and the native [Mistral Chat API](#mistral-chat-api) on `/v1/chat/completions`. Mistral models accept both the [Azure AI Model Inference API](../reference/refer The [Azure AI Model Inference API](../reference/reference-model-inference-api.md) schema can be found in the [reference for Chat Completions](../reference/reference-model-inference-chat-completions.md) article and an [OpenAPI specification can be obtained from the endpoint itself](../reference/reference-model-inference-api.md?tabs=rest#getting-started). -#### Mistral Chat API +### Mistral Chat API Use the method `POST` to send the request to the `/v1/chat/completions` route: The `messages` object has the following fields: | `role` | `string` | The role of the message's author. One of `system`, `user`, or `assistant`. | -#### Example +#### Request example __Body__ The `logprobs` object is a dictionary with the following fields: | `tokens` | `array` of `string` | Selected tokens. | | `top_logprobs` | `array` of `dictionary` | Array of dictionary. In each dictionary, the key is the token and the value is the probability. 
| -#### Example +#### Response example The following JSON is an example response: The following JSON is an example response: } } ```+ #### More inference examples -| **Sample Type** | **Sample Notebook** | -|-|-| -| CLI using CURL and Python web requests | [webrequests.ipynb](https://aka.ms/mistral-large/webrequests-sample)| -| OpenAI SDK (experimental) | [openaisdk.ipynb](https://aka.ms/mistral-large/openaisdk) | -| LangChain | [langchain.ipynb](https://aka.ms/mistral-large/langchain-sample) | -| Mistral AI | [mistralai.ipynb](https://aka.ms/mistral-large/mistralai-sample) | -| LiteLLM | [litellm.ipynb](https://aka.ms/mistral-large/litellm-sample) +| **Sample Type** | **Sample Notebook** | +|-|-| +| CLI using CURL and Python web requests | [webrequests.ipynb](https://aka.ms/mistral-large/webrequests-sample) | +| OpenAI SDK (experimental) | [openaisdk.ipynb](https://aka.ms/mistral-large/openaisdk) | +| LangChain | [langchain.ipynb](https://aka.ms/mistral-large/langchain-sample) | +| Mistral AI | [mistralai.ipynb](https://aka.ms/mistral-large/mistralai-sample) | +| LiteLLM | [litellm.ipynb](https://aka.ms/mistral-large/litellm-sample) | ## Cost and quotas Models deployed as a serverless API with pay-as-you-go billing are protected by - [What is Azure AI Studio?](../what-is-ai-studio.md) - [Azure AI FAQ article](../faq.yml)+- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md) |
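To make the native route above concrete, here's a minimal TypeScript sketch of a chat-completions call. The `/v1/chat/completions` route and the request/response fields follow the examples in the article; the endpoint host, key variable, and prompts are placeholders, so copy the real values from your deployment's details page.

```typescript
// Minimal sketch: call a Mistral serverless API deployment on its native
// chat route. ENDPOINT and the key variable name are placeholders.
const ENDPOINT = "https://<your-deployment>.<region>.models.ai.azure.com";
const KEY = process.env["AZUREAI_ENDPOINT_KEY"]!;

async function chat(): Promise<void> {
  const res = await fetch(`${ENDPOINT}/v1/chat/completions`, {
    method: "POST",
    headers: { Authorization: `Bearer ${KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "Explain serverless API deployments in one sentence." },
      ],
      max_tokens: 256,
      temperature: 0.7,
    }),
  });
  const data = await res.json();
  // Shape follows the response example above: choices[0].message.content.
  console.log(data.choices?.[0]?.message?.content);
}

chat().catch(console.error);
```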
app-service | Overview Name Resolution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-name-resolution.md | Your app uses DNS when making calls to dependent resources. Resources could be A If you aren't integrating your app with a virtual network and custom DNS servers aren't configured, your app uses [Azure DNS](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#azure-provided-name-resolution). If you integrate your app with a virtual network, your app uses the DNS configuration of the virtual network. The default for a virtual network is also to use Azure DNS. Through the virtual network, it's also possible to link to [Azure DNS private zones](../dns/private-dns-overview.md) and use that for private endpoint resolution or private domain name resolution. -If you configured your virtual network with a list of custom DNS servers, name resolution uses these servers. If your virtual network is using custom DNS servers and you're using private endpoints, you should read [this article](../private-link/private-endpoint-dns.md) carefully. You also need to consider that your custom DNS servers are able to resolve any public DNS records used by your app. Your DNS configuration needs to either forward requests to a public DNS server, include a public DNS server like Azure DNS in the list of custom DNS servers or specify an alternative server at the app level. +If you configured your virtual network with a list of custom DNS servers, name resolution in App Service will use up to five custom DNS servers. If your virtual network is using custom DNS servers and you're using private endpoints, you should read [this article](../private-link/private-endpoint-dns.md) carefully. You also need to ensure that your custom DNS servers can resolve any public DNS records used by your app. Your DNS configuration needs to either forward requests to a public DNS server, include a public DNS server like Azure DNS in the list of custom DNS servers, or specify an alternative server at the app level. When your app needs to resolve a domain name using DNS, the app sends a name resolution request to all configured DNS servers. If the first server in the list returns a response within the timeout limit, you get the result immediately. If not, the app waits for the other servers to respond within the timeout period and evaluates the DNS server responses in the order you configured the servers. If none of the servers respond within the timeout and retry is configured, the app repeats the process. |
automation | Automation Tutorial Runbook Textual | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/automation-tutorial-runbook-textual.md | Title: Tutorial - Create a PowerShell Workflow runbook in Azure Automation description: This tutorial teaches you to create, test, and publish a PowerShell Workflow runbook. Previously updated : 11/21/2022 Last updated : 07/04/2024 #Customer intent: As a developer, I want to use workflow runbooks so that I can automate the parallel starting of VMs.-> This article is applicable for PowerShell 5.1; PowerShell 7.1 (preview) and PowerShell 7.2 don't support workflows. +> This article is applicable only to PowerShell 5.1. PowerShell 7+ versions do not support Workflows, and outdated runbooks cannot be updated. We recommend that you use PowerShell 7.2 textual runbooks for advanced features such as parallel job execution. [Learn more](../automation-runbook-types.md#limitations) about limitations of PowerShell Workflow runbooks. In this tutorial, you learn how to: |
azure-app-configuration | Use Key Vault References Dotnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-dotnet-core.md | Your application uses the App Configuration client provider to retrieve Key Vaul Your application is responsible for authenticating properly to both App Configuration and Key Vault. The two services don't communicate directly. -This tutorial shows you how to implement Key Vault references in your code. It builds on the web app introduced in the quickstarts. Before you continue, finish [Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md) first. +This tutorial shows you how to implement Key Vault references in your code. It builds on the web app introduced in the ASP.NET Core quickstart listed in the prerequisites below. Before you continue, complete this [quickstart](./quickstart-aspnet-core-app.md). You can use any code editor to do the steps in this tutorial. For example, [Visual Studio Code](https://code.visualstudio.com/) is a cross-platform code editor that's available for the Windows, macOS, and Linux operating systems. In this tutorial, you learn how to: ## Prerequisites -Before you start this tutorial, install the [.NET SDK 6.0 or later](https://dotnet.microsoft.com/download). -+Finish the quickstart: [Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md). ## Create a vault |
azure-arc | Enable Virtual Hardware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/enable-virtual-hardware.md | Title: Enable additional capabilities on Arc-enabled Server machines by linking to vCenter description: Enable additional capabilities on Arc-enabled Server machines by linking to vCenter. Previously updated : 03/13/2024 Last updated : 07/04/2024 Follow these steps [here](./quick-start-connect-vcenter-to-arc-using-script.md) Use the following az commands to link Arc-enabled Server machines to vCenter at scale. -**Create VMware resources from the specified Arc for Server machines in the vCenter** +**Create VMware resource from the specified Arc for Server machine in the vCenter** ```azurecli-interactive-az connectedvmware vm create-from-machines --resource-group contoso-rg --name contoso-vm --vcenter-id /subscriptions/fedcba98-7654-3210-0123-456789abcdef/resourceGroups/contoso-rg-2/providers/Microsoft.HybridCompute/vcenters/contoso-vcenter +az connectedvmware vm create-from-machines --resource-group contoso-rg --name contoso-vm --vcenter-id /subscriptions/999998ee-cd13-9999-b9d4-55ca5c25496d/resourceGroups/allhands-demo/providers/microsoft.connectedvmwarevsphere/vcenters/contoso-vcenter ``` **Create VMware resources from all Arc for Server machines in the specified resource group belonging to that vCenter** ```azurecli-interactive-az connectedvmware vm create-from-machines --resource-group contoso-rg --vcenter-id /subscriptions/fedcba98-7654-3210-0123-456789abcdef/resourceGroups/contoso-rg-2/providers/Microsoft.HybridCompute/vcenters/contoso-vcenter +az connectedvmware vm create-from-machines --resource-group contoso-rg --vcenter-id /subscriptions/999998ee-cd13-9999-b9d4-55ca5c25496d/resourceGroups/allhands-demo/providers/microsoft.connectedvmwarevsphere/vcenters/contoso-vcenter ``` **Create VMware resources from all Arc for Server machines in the specified subscription belonging to that vCenter** ```azurecli-interactive-az connectedvmware vm create-from-machines --subscription contoso-sub --vcenter-id /subscriptions/999998ee-cd13-9999-b9d4-55ca5c25496d/resourceGroups/allhands-demo/providers/microsoft.connectedvmwarevsphere/vcenters/contoso-vcenter ``` ### Required Parameters |
azure-functions | Functions Container Apps Hosting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-container-apps-hosting.md | Title: Azure Container Apps hosting of Azure Functions description: Learn about how you can use Azure Functions on Azure Container Apps to host and manage containerized function apps in Azure. Previously updated : 05/07/2024 Last updated : 07/04/2024 # Customer intent: As a cloud developer, I want to learn more about hosting my function apps in Linux containers managed by Azure Container Apps. -Azure Functions provides integrated support for developing, deploying, and managing containerized function apps on [Azure Container Apps](../container-apps/overview.md). Use Azure Container Apps to host your function app containers when you need to run your event-driven functions in Azure in the same environment as other microservices, APIs, websites, workflows, or any container hosted programs. Container Apps hosting lets you run your functions in a fully managed, Kubernetes-based environment with built-in support for open-source monitoring, mTLS, Dapr, and KEDA. +Azure Functions provides integrated support for developing, deploying, and managing containerized function apps on [Azure Container Apps](../container-apps/overview.md). Use Azure Container Apps to host your function app containers when you need to run your event-driven functions in Azure in the same environment as other microservices, APIs, websites, workflows, or any container-hosted programs. Container Apps hosting lets you run your functions in a fully managed, Kubernetes-based environment with built-in support for open-source monitoring, mTLS, Dapr, and Kubernetes Event-driven Autoscaling (KEDA). You can write your function code in any [language stack supported by Functions](supported-languages.md). You can use the same Functions triggers and bindings with event-driven scaling. You can also use existing Functions client tools and the Azure portal to create containers, deploy function app containers to Container Apps, and configure continuous deployment. 
When you make changes to your functions code, you must rebuild and republish you Azure Functions currently supports the following methods of deploying a containerized function app to Azure Container Apps: ++ [Apache Maven](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Functions:-Configuration-Details#properties-for-azure-container-apps-hosting-of-azure-functions)++ [ARM templates](/azure/templates/microsoft.web/sites?pivots=deployment-language-arm-template) + [Azure CLI](./functions-deploy-container-apps.md)-+ Azure portal -+ GitHub Actions -+ Azure Pipeline tasks -+ ARM templates -+ [Bicep templates](https://github.com/Azure/azure-functions-on-container-apps/tree/main/samples/Biceptemplates) ++ [Azure Developer CLI (azd)](https://github.com/Azure/azure-functions-on-container-apps/tree/main/samples/azdtemplates) + [Azure Functions Core Tools](functions-run-local.md#deploy-containers)-++ [Azure Pipeline tasks](https://github.com/Azure/azure-functions-on-container-apps/tree/main/samples/AzurePipelineTasks)++ [Azure portal](https://aka.ms/funconacablade)++ [Bicep files](https://github.com/Azure/azure-functions-on-container-apps/tree/main/samples/Biceptemplates)++ [GitHub Actions](https://github.com/Azure/azure-functions-on-container-apps/tree/main/samples/GitHubActions)++ [Visual Studio Code](https://github.com/Azure/azure-functions-on-container-apps/tree/main/samples/VSCode%20Sample) ## Virtual network integration |
azure-maps | Power Bi Visual Add Reference Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-reference-layer.md | To use a hosted spatial dataset as a reference layer: 1. Navigate to the **Format** pane. 1. Expand the **Reference Layer** section. 1. Select **URL** from the **Type** drop-down list.-1. Select **Enter a URL** and enter a valid URL pointing to your hosted file. Hosted files must be a valid spatial dataset with a `.json`, `.geojson`, `.wkt`, `.kml` or `.shp` extension. +1. Select **Enter a URL** and enter a valid URL pointing to your hosted file. Hosted files must be a valid spatial dataset with a `.json`, `.geojson`, `.wkt`, `.kml`, or `.shp` extension. After the link to the hosted file is added to the reference layer, the URL appears in the **Enter a URL** field. To remove the data from the visual, simply delete the URL. - :::image type="content" source="./media/power-bi-visual/reference-layer-hosted.png" alt-text="Screenshot showing the reference layers section when hosting a file control."::: + :::image type="content" source="./media/power-bi-visual/reference-layer-hosted.png" alt-text="Screenshot showing the reference layers section when using the 'Enter a URL' input control."::: -Once the link to the hosted file is added to the reference layer, the URL appears in the **Enter a URL** field. To remove the data from the visual simply delete the URL. +1. Alternatively, you can create a dynamic URL using Data Analysis Expressions ([DAX]) based on fields, variables, or other programmatic elements. By using DAX, the URL changes dynamically based on filters, selections, or other user interactions and configurations. For more information, see [Expression-based titles in Power BI Desktop]. ++ :::image type="content" source="./media/power-bi-visual/reference-layer-hosted-dax.png" alt-text="Screenshot showing the reference layers section when using DAX for the URL input."::: Add more context to the map: [supported style properties]: spatial-io-add-simple-data-layer.md#default-supported-style-properties [Add a tile layer]: power-bi-visual-add-tile-layer.md [Show real-time traffic]: power-bi-visual-show-real-time-traffic.md+[DAX]: /dax/ +[Expression-based titles in Power BI Desktop]: /power-bi/create-reports/desktop-conditional-format-visual-titles |
azure-monitor | Ip Addresses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/ip-addresses.md | For more information on availability tests, see [Private availability testing](. | Purpose | URI | IP | Ports | | | | | |-| API |`api.applicationinsights.io`<br/>`api1.applicationinsights.io`<br/>`api2.applicationinsights.io`<br/>`api3.applicationinsights.io`<br/>`api4.applicationinsights.io`<br/>`api5.applicationinsights.io`<br/>`dev.applicationinsights.io`<br/>`dev.applicationinsights.microsoft.com`<br/>`dev.aisvc.visualstudio.com`<br/>`www.applicationinsights.io`<br/>`www.applicationinsights.microsoft.com`<br/>`www.aisvc.visualstudio.com`<br/>`api.loganalytics.io`<br/>`*.api.loganalytics.io`<br/>`dev.loganalytics.io`<br>`docs.loganalytics.io`<br/>`www.loganalytics.io` |20.37.52.188 <br/> 20.37.53.231 <br/> 20.36.47.130 <br/> 20.40.124.0 <br/> 20.43.99.158 <br/> 20.43.98.234 <br/> 13.70.127.61 <br/> 40.81.58.225 <br/> 20.40.160.120 <br/> 23.101.225.155 <br/> 52.139.8.32 <br/> 13.88.230.43 <br/> 52.230.224.237 <br/> 52.242.230.209 <br/> 52.173.249.138 <br/> 52.229.218.221 <br/> 52.229.225.6 <br/> 23.100.94.221 <br/> 52.188.179.229 <br/> 52.226.151.250 <br/> 52.150.36.187 <br/> 40.121.135.131 <br/> 20.44.73.196 <br/> 20.41.49.208 <br/> 40.70.23.205 <br/> 20.40.137.91 <br/> 20.40.140.212 <br/> 40.89.189.61 <br/> 52.155.118.97 <br/> 52.156.40.142 <br/> 23.102.66.132 <br/> 52.231.111.52 <br/> 52.231.108.46 <br/> 52.231.64.72 <br/> 52.162.87.50 <br/> 23.100.228.32 <br/> 40.127.144.141 <br/> 52.155.162.238 <br/> 137.116.226.81 <br/> 52.185.215.171 <br/> 40.119.4.128 <br/> 52.171.56.178 <br/> 20.43.152.45 <br/> 20.44.192.217 <br/> 13.67.77.233 <br/> 51.104.255.249 <br/> 51.104.252.13 <br/> 51.143.165.22 <br/> 13.78.151.158 <br/> 51.105.248.23 <br/> 40.74.36.208 <br/> 40.74.59.40 <br/> 13.93.233.49 <br/> 52.247.202.90 |80,443 | +| API |`api.applicationinsights.io`<br/>`api1.applicationinsights.io`<br/>`api2.applicationinsights.io`<br/>`api3.applicationinsights.io`<br/>`api4.applicationinsights.io`<br/>`api5.applicationinsights.io`<br/>`dev.applicationinsights.io`<br/>`dev.applicationinsights.microsoft.com`<br/>`dev.aisvc.visualstudio.com`<br/>`www.applicationinsights.io`<br/>`www.applicationinsights.microsoft.com`<br/>`www.aisvc.visualstudio.com`<br/>`api.loganalytics.io`<br/>`*.api.loganalytics.io`<br/>`dev.loganalytics.io`<br>`docs.loganalytics.io`<br/>`www.loganalytics.io`<br/>`api.loganalytics.azure.com` |20.37.52.188 <br/> 20.37.53.231 <br/> 20.36.47.130 <br/> 20.40.124.0 <br/> 20.43.99.158 <br/> 20.43.98.234 <br/> 13.70.127.61 <br/> 40.81.58.225 <br/> 20.40.160.120 <br/> 23.101.225.155 <br/> 52.139.8.32 <br/> 13.88.230.43 <br/> 52.230.224.237 <br/> 52.242.230.209 <br/> 52.173.249.138 <br/> 52.229.218.221 <br/> 52.229.225.6 <br/> 23.100.94.221 <br/> 52.188.179.229 <br/> 52.226.151.250 <br/> 52.150.36.187 <br/> 40.121.135.131 <br/> 20.44.73.196 <br/> 20.41.49.208 <br/> 40.70.23.205 <br/> 20.40.137.91 <br/> 20.40.140.212 <br/> 40.89.189.61 <br/> 52.155.118.97 <br/> 52.156.40.142 <br/> 23.102.66.132 <br/> 52.231.111.52 <br/> 52.231.108.46 <br/> 52.231.64.72 <br/> 52.162.87.50 <br/> 23.100.228.32 <br/> 40.127.144.141 <br/> 52.155.162.238 <br/> 137.116.226.81 <br/> 52.185.215.171 <br/> 40.119.4.128 <br/> 52.171.56.178 <br/> 20.43.152.45 <br/> 20.44.192.217 <br/> 13.67.77.233 <br/> 51.104.255.249 <br/> 51.104.252.13 <br/> 51.143.165.22 <br/> 13.78.151.158 <br/> 51.105.248.23 <br/> 40.74.36.208 <br/> 40.74.59.40 <br/> 13.93.233.49 <br/> 52.247.202.90 |80,443 | | Azure 
Pipeline annotations extension | `aigs1.aisvc.visualstudio.com` |dynamic|443 | ## Application Insights analytics |
communication-services | Incoming Audio Low Volume | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/audio-issues/incoming-audio-low-volume.md | This value is derived from `audioLevel` in WebRTC Stats. [https://www.w3.org/TR/ A low `audioOutputLevel` value indicates that the volume sent by the sender is also low. ## How to mitigate or resolve-If the `audioOutputLevel` value is low, this is likely that the volume sent by the sender is also low. +If the `audioOutputLevel` value is low, it's likely that the volume sent by the sender is also low. To troubleshoot this issue, users should investigate why the audio input volume is low on the sender's side. This problem could be due to various factors, such as microphone settings or hardware issues. -If the `audioOutputLevel` value appears normal, the issue may be related to system volume settings or speaker issues on the receiver's side. +The value of `audioOutputLevel` ranges from 0 to 65536. In practice, values lower than 60 can be considered quiet, and values lower than 150 are often considered low volume. + Users can check their device's volume settings and speaker output to ensure that they're set to an appropriate level.+If the `audioOutputLevel` value appears normal, the issue may be related to system volume settings or speaker issues on the receiver's side. ++For example, if the user uses Windows, they should check the volume mixer settings and per-app volume settings. + ### Using Web Audio GainNode to increase the volume It may be possible to address this issue at the application layer using Web Audio GainNode (see the sketch at the end of this section). By using this feature with the raw audio stream, it's possible to increase the o You can also display a [volume level indicator](../../../../quickstarts/voice-video-calling/get-started-volume-indicator.md?pivots=platform-web) in your client user interface to let your users know what the current volume level is. +## References +### Troubleshooting process +Below is a flow diagram of the troubleshooting process for this issue. +++1. When a user reports experiencing low audio volume, the first thing to check is whether the volume of the incoming audio is low. The application can obtain this information by checking `audioOutputLevel` in the media stats. +2. If the `audioOutputLevel` value is constantly low, it indicates that the volume of audio sent by the speaking participant is low. In this case, ask the user to verify if the speaking participant has issues with their microphone device or input volume settings. +3. If the `audioOutputLevel` value isn't always low, the user may still experience low audio volume due to system volume settings. Ask the user to check their system volume settings. +4. If the user's system volume is set to a low value, the user should increase the volume in the settings. +5. In some systems that support app-specific volume settings, the audio volume output from the app may be low even if system volume isn't low. In this case, the user should check the app's volume setting within the system. +6. If the volume setting of the app in the system is low, the user should increase it. +7. If you still can't determine why the audio output volume is low, ask the user to check their speaker device or select another audio output device. The issue may be due to a device problem and not related to the software or operating system. Not all platforms support speaker enumeration in the browser. 
For example, you can't select an audio output device through the JavaScript API in the Safari browser or in Chrome on Android. In these cases, you should configure the audio output device in the system settings. |
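Expanding on the GainNode approach referenced above, here's a minimal sketch. It assumes you already have the incoming raw audio as a `MediaStream` (for example, via the SDK's raw media APIs), and that the original audio element is muted elsewhere so the track doesn't play twice.

```typescript
// Minimal sketch: boost quiet incoming audio at the application layer.
function boostIncomingAudio(remoteStream: MediaStream, gainValue = 2.0): () => void {
  const audioContext = new AudioContext();
  const source = audioContext.createMediaStreamSource(remoteStream);
  const gainNode = audioContext.createGain();
  gainNode.gain.value = gainValue; // 1.0 = unchanged, 2.0 = double amplitude

  // Route: remote stream -> gain -> speakers. Mute the original <audio>
  // element so the same track isn't rendered twice.
  source.connect(gainNode);
  gainNode.connect(audioContext.destination);

  // Cleanup: closing the AudioContext releases audio resources. Browsers cap
  // the number of simultaneously open AudioContext instances.
  return () => { void audioContext.close(); };
}
```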
communication-services | Microphone Issue | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/audio-issues/microphone-issue.md | For example, a hardware mute button of some headset models can trigger this even The application should listen to [User Facing Diagnostics API](../../../../concepts/voice-video-calling/user-facing-diagnostics.md) events and display a warning message when it receives them (see the listener sketch at the end of this section). By doing so, the user is aware of the issue and can troubleshoot by switching to a different microphone device or by unplugging and plugging in their current microphone device.++## References +### Troubleshooting process +If a user can't hear sound during a call, one possibility is that the speaking participant has an issue with their microphone. +If the speaking participant is using your application, you can follow this flow diagram to troubleshoot the issue. +++1. First, check if a microphone is available. The application can obtain this information by invoking the `DeviceManager.getMicrophones` API or by detecting a `noMicrophoneDevicesEnumerated` UFD Bad event. +2. If no microphone device is available, prompt the user to plug in a microphone. +3. If a microphone is available but there's no outgoing audio, consider other possibilities such as permission issues, device issues, or network problems. +4. If permission is denied, refer to [The speaking participant doesn't grant the microphone permission](./microphone-permission.md) for more information. +5. If permission is granted, consider whether the issue is due to an external problem, such as `microphoneMuteUnexpectedly` UFD. +6. The `microphoneMuteUnexpectedly` UFD Bad event is triggered when the browser mutes the audio input track. The application can monitor this UFD but isn't able to detect the reason at the JavaScript layer. You can still provide instructions in the app and ask if the user is using a hardware mute button on their headset. +7. If the user releases the hardware mute and the `microphoneMuteUnexpectedly` UFD recovers, the issue is resolved. +8. If the user isn't using the hardware mute, ask the user to unplug and replug the microphone, or to select another microphone. Ensure the user hasn't muted the microphone at the system level. +9. A no-outgoing-audio issue can also happen when there's a `microphoneNotFunctioning` UFD Bad event. +10. If there's no `microphoneNotFunctioning` UFD Bad event, consider other possibilities, such as network issues. +11. If there's a `networkReconnect` Bad UFD, outgoing audio may be temporarily lost due to a network disconnection. Refer to [There's a network issue in the call](./network-issue.md) for detailed information. +12. If there are no microphone-related events and no network-related events, create a support ticket for the ACS team to investigate the issue. Refer to [Reporting an issue](../general-troubleshooting-strategies/report-issue.md). +13. If a `microphoneNotFunctioning` UFD Bad event occurs and the user has no outgoing audio, they can try to recover the stream by using ACS [mute](/javascript/api/azure-communication-services/@azure/communication-calling/call?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-call-mute) and [unmute](/javascript/api/azure-communication-services/@azure/communication-calling/call?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-call-unmute). +14. 
If the `microphoneNotFunctioning` UFD doesn't recover after the user performs ACS mute and unmute, there might be an issue with the microphone device. Ask the user to unplug and replug the microphone or select another microphone. |
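As referenced above, here's a minimal listener sketch for the microphone-related diagnostics, assuming `call` is an established `Call` object from the ACS calling SDK; `showWarning` is a placeholder for your app's own notification UI.

```typescript
import { Call, Features } from "@azure/communication-calling";

// Minimal sketch: surface microphone-related User Facing Diagnostics to the UI.
function watchMicrophoneDiagnostics(call: Call): void {
  const diagnostics = call.feature(Features.UserFacingDiagnostics);
  diagnostics.media.on("diagnosticChanged", (args) => {
    if (args.diagnostic === "microphoneNotFunctioning" && args.value === true) {
      // Ask the user to unplug/replug the device or switch microphones.
      showWarning("Your microphone stopped working. Try another microphone.");
    }
    if (args.diagnostic === "microphoneMuteUnexpectedly" && args.value === true) {
      // Often caused by a hardware mute button on a headset.
      showWarning("Your microphone was muted outside the app (check a headset mute button).");
    }
  });
}

// Placeholder for your application's notification mechanism.
declare function showWarning(message: string): void;
```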
communication-services | Microphone Permission | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/audio-issues/microphone-permission.md | The listener should check for events with the value of `microphonePermissionDeni It's important to note that if the user revokes access permission during the call, this `microphonePermissionDenied` event also fires. ## How to mitigate or resolve-Your application should always call the `askDevicePermission` API after the `CallClient` is initialized. +Your application should always call the [askDevicePermission](/javascript/api/azure-communication-services/@azure/communication-calling/devicemanager?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-devicemanager-askdevicepermission) API after the `CallClient` is initialized (see the sketch at the end of this section). This gives the user a chance to grant the device permission if they didn't do so before or if the permission state is `prompt`.+The application can also show a warning message if the user denies the permission, so the user can fix it before joining a call. -It's also important to listen for the `microphonePermissionDenied` event. Display a warning message if the user revokes the permission during the call. By doing so, the user is aware of the issue and can adjust their browser or system settings accordingly. +It's also important to listen for the [microphonePermissionDenied](../references/ufd/microphone-permission-denied.md) UFD event. Display a warning message if the user revokes the permission during the call. By doing so, the user is aware of the issue and can adjust their browser or system settings accordingly. +## References +### Troubleshooting process +If a user can't hear sound during a call, one possibility is that the speaking participant hasn't granted microphone permission. +If the speaking participant is using your application, you can follow this flow diagram to troubleshoot the issue. ++1. Check if there's a `microphonePermissionDenied` Bad UFD event for the speaking participant. This usually indicates that the user has denied the permission or that the permission wasn't requested. +2. If a `microphonePermissionDenied` Bad UFD event occurs, verify whether the app has called the `askDevicePermission` API. +3. The app must call `askDevicePermission` if this API hasn't been invoked before the user joins the call. The app can offer a smoother user experience by determining the current state of permissions. For instance, it can display a message instructing the user to adjust their permissions if necessary. +4. If the app has called the `askDevicePermission` API but the user still gets a `microphonePermissionDenied` Bad UFD event, the user has to reset or grant the microphone permission in the browser. If they've confirmed that the permission is granted in the browser, they should check if the OS is blocking microphone access to the browser. +5. If there's no `microphonePermissionDenied` Bad UFD, we need to consider other possibilities. For the speaking participant, there might be other potential reasons for issues with outgoing audio, such as network reconnection or device issues. +6. If there's a `networkReconnect` Bad UFD, the outgoing audio may be temporarily lost due to a network disconnection. See [There's a network issue in the call](./network-issue.md) for detailed information. +7. If no `networkReconnect` Bad UFD occurs, there might be a problem on the speaking participant's microphone. 
See [The speaking participant's microphone has a problem](./microphone-issue.md) for detailed information. |
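A minimal sketch of the permission flow described above, assuming `callClient` is an initialized `CallClient`:

```typescript
import { CallClient } from "@azure/communication-calling";

// Minimal sketch: request microphone permission right after CallClient init,
// and warn the user before they join a call if permission is denied.
async function ensureMicrophonePermission(callClient: CallClient): Promise<boolean> {
  const deviceManager = await callClient.getDeviceManager();
  const access = await deviceManager.askDevicePermission({ audio: true, video: false });
  if (!access.audio) {
    // Permission denied or dismissed: tell the user how to fix it before joining.
    console.warn("Microphone permission denied. Check browser and OS privacy settings.");
  }
  return access.audio;
}
```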
communication-services | Network Issue | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/audio-issues/network-issue.md | so that the user is aware of the issue and understands that the audio loss is du However, if the network reconnection occurs at the sender's side, users on the receiving end have no way to know about it, because the SDK currently doesn't support notifying receivers that the sender has network issues. +## References +### Troubleshooting process +If a user can't hear sound during a call, one possibility is that the speaking participant or the receiving end has network issues. ++Below is a flow diagram of the troubleshooting process for this issue. +++1. First, check if there's a `networkReconnect` UFD. The user may experience audio loss during the network reconnection (a listener sketch follows these steps). +2. The UFD can happen on either the sender's end or the receiver's end. In both cases, packets don't flow, so the user can't hear the audio. +3. If there's no `networkReconnect` UFD, consider other potential causes, such as permission issues or device problems. +4. If the permission is denied, refer to [The speaking participant doesn't grant the microphone permission](./microphone-permission.md) for more information. +5. The issue could also be due to device problems; refer to [The speaking participant's microphone has a problem](./microphone-issue.md). |
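As referenced in step 1 above, the application can watch for `networkReconnect` through the network branch of User Facing Diagnostics. A minimal sketch, assuming `call` is an established `Call` and that the `DiagnosticQuality` enum is exported by the calling SDK; `showBanner`/`hideBanner` are placeholders for your UI:

```typescript
import { Call, DiagnosticQuality, Features } from "@azure/communication-calling";

// Minimal sketch: tell the user when audio loss is due to a network reconnect.
function watchNetworkReconnect(call: Call): void {
  const diagnostics = call.feature(Features.UserFacingDiagnostics);
  diagnostics.network.on("diagnosticChanged", (args) => {
    if (args.diagnostic !== "networkReconnect") { return; }
    if (args.value === DiagnosticQuality.Bad) {
      // Audio may drop while the media connection is re-established.
      showBanner("Network connection lost. Reconnecting...");
    } else {
      hideBanner();
    }
  });
}

// Placeholders for your application's UI.
declare function showBanner(message: string): void;
declare function hideBanner(): void;
```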
communication-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/audio-issues/overview.md | To establish a voice call with good quality, several factors must be considered. - The user granted the microphone permission - The user's microphone is working properly - The network conditions are good enough on sending and receiving ends-- The audio output level is functioning properly+- The audio output device is functioning properly All of these factors are important from an end-to-end perspective. |
communication-services | Poor Quality | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/audio-issues/poor-quality.md | and the user starts speaking, the user's audio input in the first few seconds ma leading to distortion in the sound. This can be observed by comparing the ref\_out.wav and input.wav files in aecdump files. In this case, reducing the volume of the audio being played may help. +## References +### Troubleshooting process +Below is a flow diagram of the troubleshooting process for this issue. ++1. When a user reports experiencing poor audio quality during a call, the first thing to check is the source of the issue. It could be coming from the sender's side or the receiver's side. If other participants on different networks also have similar issues, it's very possible that the issue comes from the sender's side. +2. Check if there's a `networkSendQuality` UFD Bad event on the sender's side. +3. If there's no `networkSendQuality` UFD Bad event on the sender's side, the poor audio could be due to device issues or audio distortion caused by the browser's audio processing module. Ask the user to collect diagnostic audio recordings from the browser. Refer to [How to collect diagnostic audio recordings](../references/how-to-collect-diagnostic-audio-recordings.md). +4. If there's a `networkSendQuality` UFD Bad event, the poor audio quality might be due to the sender's network issues. Check the sender's network. +5. If the user experiences poor audio quality but no other participants have the same issue, and there are only two participants in the call, still check the sender's network. +6. If the user experiences poor audio quality but no other participants have the same issue in a group call, the issue might be due to the receiver's network. Check for a `networkReceiveQuality` UFD Bad event on the receiver's end. +7. If there's a `networkReceiveQuality` UFD Bad event, check the receiver's network. +8. If you can't find a `networkReceiveQuality` UFD Bad event, check if other media stats metrics on the receiver's end are poor, such as `packetsLost` and `jitter`. +9. If you can't determine why the audio quality on the receiver's end is poor, create a support ticket for the ACS team to investigate. |
communication-services | Speaker Issue | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/audio-issues/speaker-issue.md | If the `audioOutputLevel` value isn't always low but the user can't hear audio, Speaker issues are considered external problems from the perspective of the ACS Calling SDK. Your application user interface should display a [volume level indicator](../../../../quickstarts/voice-video-calling/get-started-volume-indicator.md?pivots=platform-web) to let your users know what the current volume level of incoming audio is.+ If the incoming audio isn't silent, the user can know that the issue occurs in their speaker or output volume settings and can troubleshoot accordingly.++If the user uses Windows, they should also check the volume mixer settings and per-app volume settings. +++If you're using the Web Audio API in your application, you might also check if there's `AudioRenderer error with rendering audio code: 3` in the log. +This error occurs when there are too many AudioContext instances open at the same time, particularly if the application doesn't properly close the AudioContext or +if AudioContext creation is tied to the UI component refresh logic. ++## References +### Troubleshooting process +If a user can't hear sound during a call, one possibility is that the participant has an issue with their speaker. +The speaker issue isn't easily detected and usually requires users to check their system settings or their audio output devices. ++Below is a flow diagram of the troubleshooting process for this issue. +++1. When a user reports that they can't hear audio, the first thing we need to check is whether the incoming audio is silent. The application can obtain this information by checking `audioOutputLevel` in the media stats (a sketch follows these steps). +2. If the `audioOutputLevel` value is constantly 0, it indicates that the incoming audio is silent. In this case, ask the user to verify if the speaking participant is muted or experiencing other issues, such as permission issues, device problems, or network issues. +3. If the `audioOutputLevel` value isn't always 0, the user may still be unable to hear audio due to system volume settings. Ask the user to check their system volume settings. +4. If the user's system volume is set to 0 or very low, the user should increase the volume in the settings. +5. In some systems that support app-specific volume settings, the audio volume output from the app may be low even if system volume isn't low. In this case, the user should check the app's volume setting within the system. +6. If the volume setting of the app in the system is 0 or very low, the user should increase it. +7. In certain cases, the audio element in the browser may fail to play or decode the audio; in this case, you can find an error message `AudioRenderer error with rendering audio code: 3` in the log. +8. A common case for the AudioRenderer error is that the app uses the Web Audio API but doesn't release AudioContext objects properly. Browsers have a limit on the number of AudioContext instances that can be open simultaneously. +9. If you still can't determine why the user can't hear sound during the call, ask the user to check their speaker device or select another audio output device. Note that not all platforms support speaker enumeration in the browser. For example, you can't select an audio output device through the JavaScript API in the Safari browser or in Chrome on Android. 
In these cases, you should configure the audio output device in the system settings. |
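To avoid the `AudioRenderer error with rendering audio code: 3` pattern described in steps 7 and 8, one option is to share a single `AudioContext` across the application instead of creating a new one on every UI refresh. A minimal sketch using the standard Web Audio API; the function names are illustrative:

```typescript
// Reuse one AudioContext for the whole app instead of creating a new one
// every time a UI component mounts or refreshes.
let sharedContext: AudioContext | undefined;

export function getSharedAudioContext(): AudioContext {
  if (!sharedContext || sharedContext.state === "closed") {
    sharedContext = new AudioContext();
  }
  return sharedContext;
}

export async function closeSharedAudioContext(): Promise<void> {
  // Closing releases the limited per-page audio resources that browsers enforce.
  if (sharedContext && sharedContext.state !== "closed") {
    await sharedContext.close();
  }
  sharedContext = undefined;
}
```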
communication-services | Call Setup Takes Too Long | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/call-setup-issues/call-setup-takes-too-long.md | The application can calculate the delay between when the call is initiated and when the call is connected. If a user consistently experiences long call setup times, they should check their network for issues such as slow network speed, long round trip time, or high packet loss. These issues can affect call setup time because the signaling layer uses a `TCP` connection, and factors such as retransmissions can cause delays. Additionally, if the user suspects the delay comes from stream acquisition, they should check their devices. For example, they can choose a different audio input device.+If a user consistently experiences this issue and you're unable to determine the cause, consider filing a support ticket for further assistance. ++### Check the duration of stream acquisition +Stream acquisition is part of the call setup flow. You can get this information from the webrtc-internals page. +To access the page, open a new tab and enter `edge://webrtc-internals` (Edge) or `chrome://webrtc-internals` (Chrome). +++Once you're on the webrtc-internals page, you can calculate the duration of the stream acquisition by comparing the timestamp of the `getUserMedia` call and its result. If the duration is abnormally long, you may need to check the devices. ++### Check the duration of HTTP requests +You can also check the Network tab of the Developer tools to see the size of requests and how long they take to finish. +If the issue is due to the long duration of the signaling request, you should be able to see some requests taking a very long time in the network trace. ++If you need to file a support ticket, we may request the browser HAR file. +To learn how to collect a HAR file, see [Capture a browser trace for troubleshooting](../../../../../azure-portal/capture-browser-trace.md). |
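As a rough sketch of measuring the setup delay described above with the ACS Web Calling SDK (an initialized `callAgent` is assumed, and the callee ID is a placeholder):

```typescript
import { CallAgent } from "@azure/communication-calling";

declare const callAgent: CallAgent; // assumed: initialized elsewhere

const startedAt = Date.now();
const call = callAgent.startCall([{ communicationUserId: "<CALLEE_USER_ID>" }]);

// The call transitions through states; measure until it reaches Connected.
call.on("stateChanged", () => {
  if (call.state === "Connected") {
    console.log(`Call setup took ${Date.now() - startedAt} ms`);
  }
});
```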
communication-services | How To Collect Call Info | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/how-to-collect-call-info.md | + + Title: References - How to collect call info ++description: Learn how to collect call info. ++++ Last updated : 05/22/2024++++++# How to collect call info +When you report an issue, providing important call information can help us quickly locate the problematic area and gain a deeper understanding of the issue. ++* ACS resource ID +* call ID +* participant ID ++## How to get ACS resource ID ++You can get this information from [https://portal.azure.com](https://portal.azure.com). +++## How to get call ID and participant ID +The participant ID is important when there are multiple users in the same call. +```typescript +// call ID +const callId = call.id; +// local participant ID +const participantId = call.info.participantId; +``` +++ |
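A hedged sketch of packaging this call info for a support ticket follows. The helper name is illustrative, and the `idChanged` subscription assumes the event exposed by the ACS Web Calling SDK, since the call ID can change during a call:

```typescript
import { Call } from "@azure/communication-calling";

declare const call: Call; // assumed: an active call

// Snapshot the identifiers a support ticket needs.
function collectCallInfo(): { callId: string; participantId: string } {
  return {
    callId: call.id,
    participantId: call.info.participantId,
  };
}

// Keep the captured call ID current if it changes mid-call.
call.on("idChanged", () => {
  console.log(`Call ID is now ${call.id}`);
});
```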
communication-services | Application Disposes Video Renderer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/video-issues/application-disposes-video-renderer.md | -The [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API doesn't resolve immediately, as there are multiple underlying asynchronous operations involved in the video subscription process and thus this API response is an asynchronous response. +The [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API doesn't resolve immediately; because multiple underlying asynchronous operations are involved in the video subscription process, the API resolves asynchronously. -If your application disposes of the render object while the video subscription is in progress, the [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API throws an error. +If your application disposes of the renderer object while the video subscription is in progress, the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API throws an error. ## How to detect using the SDK |
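One defensive pattern is to let a pending `createView` call settle before disposing the renderer. A hedged sketch; the stream and container variables are assumptions:

```typescript
import { VideoStreamRenderer, RemoteVideoStream } from "@azure/communication-calling";

declare const remoteVideoStream: RemoteVideoStream; // assumed: from a remote participant
declare const container: HTMLElement;               // assumed: a DOM element in your UI

const renderer = new VideoStreamRenderer(remoteVideoStream);
const pendingView = renderer.createView({ scalingMode: "Crop" });

pendingView
  .then((view) => container.appendChild(view.target))
  .catch(() => {
    // createView can reject if the renderer was disposed mid-subscription.
  });

async function teardown(): Promise<void> {
  // Wait for the in-flight subscription to settle (resolve or reject)
  // before disposing, so createView isn't interrupted mid-subscription.
  await pendingView.catch(() => undefined);
  renderer.dispose();
}
```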
communication-services | Create View Timeout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/video-issues/create-view-timeout.md | the SDK detects this issue and throws a createView timeout error. This error is unexpected from the SDK's perspective. This error indicates a discrepancy between signaling and media transport. ## How to detect using SDK-When there's a `create view timeout` issue, the [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API throws an error. +When there's a `create view timeout` issue, the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API throws an error. | Error | Details | ||-| When there's a `create view timeout` issue, the [`createView`](/javascript/api/% ## Reasons behind createView timeout failures and how to mitigate the issue ### The video sender's browser is in the background Some mobile devices don't send any video frames when the browser is in the background or a user locks the screen.-The [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API detects no incoming video frames and considers this situation a subscription failure, therefore, it throws a createView timeout error. +The [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API detects no incoming video frames and considers this situation a subscription failure; therefore, it throws a createView timeout error. No further detailed information is available because currently the SDK doesn't support notifying receivers that the sender's browser is in the background. Your application can implement its own detection mechanism and notify the participants in a call when the sender's browser goes back to the foreground. The participants can then subscribe to the video again.-A feasible but less elegant approach for handling this createView timeout error is to continuously retry invoking the [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API until it succeeds. +A feasible but less elegant approach for handling this createView timeout error is to continuously retry invoking the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API until it succeeds. ### The video sender dropped from the call unexpectedly Some users might end the call by terminating the browser process instead of by hanging up. If the video sender has network issues during the time other participants are subscribing. This error is unexpected on the video receiver's side. For example, if the sender experiences a temporary network disconnection, other participants are unable to receive video frames from the sender. 
-A workaround approach for handling this createView timeout error is to continuously retry invoking [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API until it succeeds when this network event is happening. +A workaround approach for handling this createView timeout error is to continuously retry invoking the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API until it succeeds while this network event is happening. ### The video receiver has network issues Similar to the sender's network issues, if a video receiver has network issues, the video subscription may fail. |
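The retry approach described above could look roughly like the following sketch. This is only an illustration of the workaround the article mentions; the attempt count and delays are arbitrary placeholders:

```typescript
import { VideoStreamRenderer, VideoStreamRendererView } from "@azure/communication-calling";

async function createViewWithRetry(
  renderer: VideoStreamRenderer,
  maxAttempts = 5
): Promise<VideoStreamRendererView> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await renderer.createView();
    } catch (error) {
      lastError = error; // likely the createView timeout described above
      // Simple linear backoff before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, 1000 * attempt));
    }
  }
  throw lastError;
}
```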
communication-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/video-issues/overview.md | After the SDK completes the handshake at the signaling layer with the server, it The browser performs video encoding and packetization at the RTP (Real-time Transport Protocol) layer for transmission. The other participants in the call receive notifications from the server, indicating the availability of a video stream from the sender. Your application can decide whether to subscribe to the video stream or not. -If your application subscribes to the video stream from the server (for example, using [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API), the server forwards the sender's video packets to the receiver. +If your application subscribes to the video stream from the server (for example, using the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API), the server forwards the sender's video packets to the receiver. The receiver's browser decodes and renders the incoming video. When you use the ACS Web Calling SDK for video calls, the SDK and browser may adjust the video quality of the sender based on the available bandwidth. |
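On the receiver's side, the subscription flow described above typically looks like the following hedged sketch; the `gallery` element is an assumption:

```typescript
import { Call, VideoStreamRenderer } from "@azure/communication-calling";

declare const call: Call;            // assumed: an established call
declare const gallery: HTMLElement;  // assumed: a DOM container for video tiles

call.on("remoteParticipantsUpdated", ({ added }) => {
  added.forEach((participant) => {
    participant.on("videoStreamsUpdated", ({ added: streams }) => {
      streams.forEach(async (stream) => {
        if (stream.mediaStreamType === "Video" && stream.isAvailable) {
          // Subscribing triggers the server to forward the sender's video packets.
          const renderer = new VideoStreamRenderer(stream);
          const view = await renderer.createView();
          gallery.appendChild(view.target);
        }
      });
    });
  });
});
```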
communication-services | Reaching Max Number Of Active Video Subscriptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/video-issues/reaching-max-number-of-active-video-subscriptions.md | -If the number of active video subscriptions exceeds the limit, the [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API throws an error. +If the number of active video subscriptions exceeds the limit, the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API throws an error. | Error details | Details | |
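To stay under the active video subscription limit described above, an application can track its renderers and dispose one before creating another. A rough sketch; the cap value is a placeholder, not the service's actual limit:

```typescript
import { RemoteVideoStream, VideoStreamRenderer } from "@azure/communication-calling";

const activeRenderers = new Map<RemoteVideoStream, VideoStreamRenderer>();
const MAX_ACTIVE_VIEWS = 4; // placeholder; use the limit that applies to your platform

async function subscribeWithCap(stream: RemoteVideoStream, container: HTMLElement) {
  if (activeRenderers.size >= MAX_ACTIVE_VIEWS) {
    // Dispose the oldest renderer to free a subscription slot.
    const oldest = activeRenderers.entries().next().value;
    if (oldest) {
      const [oldStream, oldRenderer] = oldest;
      oldRenderer.dispose();
      activeRenderers.delete(oldStream);
    }
  }
  const renderer = new VideoStreamRenderer(stream);
  activeRenderers.set(stream, renderer);
  const view = await renderer.createView();
  container.appendChild(view.target);
}
```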
communication-services | Remote Video Becomes Unavailable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/video-issues/remote-video-becomes-unavailable.md | The SDK detects this change and throws an error. This error is expected from the SDK's perspective as the remote endpoint stops sending the video. ## How to detect using the SDK-If the video becomes unavailable before the [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API finishes, the [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API throws an error. +If the video becomes unavailable before the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API finishes, the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API throws an error. | Error | Details | ||-| |
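Because the stream can become unavailable while `createView` is still in flight, it's worth wrapping the call in a try/catch and re-checking availability. A hedged sketch; the helper name is illustrative:

```typescript
import { RemoteVideoStream, VideoStreamRenderer } from "@azure/communication-calling";

async function tryRenderStream(stream: RemoteVideoStream, container: HTMLElement) {
  const renderer = new VideoStreamRenderer(stream);
  try {
    const view = await renderer.createView();
    container.appendChild(view.target);
  } catch (error) {
    // Expected when the remote endpoint stopped sending video mid-subscription.
    renderer.dispose();
    if (!stream.isAvailable) {
      console.info("Remote video became unavailable; wait for isAvailableChanged.");
    }
  }
}
```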
communication-services | Subscribing Video Not Available | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/video-issues/subscribing-video-not-available.md | Subscribing to a video in this case results in failure. This error is expected from the SDK's perspective as applications shouldn't subscribe to a video that is currently not available. ## How to detect using the SDK-If you subscribe to a video that is unavailable, the [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API throws an error. +If you subscribe to a video that is unavailable, the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API throws an error. | Error | Details | While the SDK throws an error in this scenario, applications should refrain from subscribing to a video when the remote video isn't available, as it doesn't satisfy the precondition. The recommended practice is to monitor `isAvailable` changes within the `isAvailableChanged` event callback function and to subscribe to the video when `isAvailable` changes to `true`.-However, if there's asynchronous processing in the application layer, that might cause some delay before invoking [`createView`](/javascript/api/%40azure/communication-react/statefulcallclient?view=azure-node-latest&preserve-view=true#@azure-communication-react-statefulcallclient-createview) API. +However, if there's asynchronous processing in the application layer, that might cause some delay before invoking the [`createView`](/javascript/api/azure-communication-services/@azure/communication-calling/videostreamrenderer?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-videostreamrenderer-createview) API. In such cases, applications can check `isAvailable` again before invoking the createView API. |
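The recommended pattern above might be implemented roughly as follows, re-checking `isAvailable` immediately before subscribing. A minimal sketch; the stream and container variables are assumptions:

```typescript
import { RemoteVideoStream, VideoStreamRenderer } from "@azure/communication-calling";

declare const remoteVideoStream: RemoteVideoStream; // assumed: from a remote participant
declare const container: HTMLElement;               // assumed: a DOM element in your UI

remoteVideoStream.on("isAvailableChanged", async () => {
  // Re-check right before subscribing: application-level async work may have
  // delayed us, and availability can change in the meantime.
  if (remoteVideoStream.isAvailable) {
    const renderer = new VideoStreamRenderer(remoteVideoStream);
    const view = await renderer.createView();
    container.appendChild(view.target);
  }
});
```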
connectors | Enable Stateful Affinity Built In Connectors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/enable-stateful-affinity-built-in-connectors.md | To run these connector operations in stateful mode, you must enable this capabil 1. In the [Azure portal](https://portal.azure.com), open the Standard logic app resource where you want to enable stateful mode for these connector operations. -1. Enable virtual network integration for your logic app and add your logic app to the previously created subnet: +1. To enable virtual network integration for your logic app, and add your logic app to the previously created subnet, follow these steps: - 1. On your logic app menu resource, under **Settings**, select **Networking**. + 1. On the logic app menu resource, under **Settings**, select **Networking**. - 1. In the **Outbound Traffic** section, select **VNET integration** > **Add VNet**. + 1. In the **Outbound traffic configuration** section, next to **Virtual network integration**, select **Not configured** > **Add virtual network integration**. - 1. On the **Add VNet Integration** pane that opens, select your Azure subscription and your virtual network. + 1. On the **Add virtual network integration** pane that opens, select your Azure subscription and your virtual network. - 1. Under **Subnet**, select **Select existing**. From the **Subnet** list, select the subnet where you want to add your logic app. + 1. From the **Subnet** list, select the subnet where you want to add your logic app. - 1. When you're done, select **OK**. + 1. When you're done, select **Connect**, and return to the **Networking** page. - On the **Networking** page, the **VNet integration** option now appears set to **On**, for example: + The **Virtual network integration** property is now set to the selected virtual network and subnet, for example: - :::image type="content" source="media/enable-stateful-affinity-built-in-connectors/enable-virtual-network-integration.png" alt-text="Screenshot shows Azure portal, Standard logic app resource, Networking page, VNet integration set to On."::: + :::image type="content" source="media/enable-stateful-affinity-built-in-connectors/enable-virtual-network-integration.png" alt-text="Screenshot shows Azure portal, Standard logic app resource, Networking page with selected virtual network and subnet."::: For general information about enabling virtual network integration with your app, see [Enable virtual network integration in Azure App Service](../app-service/configure-vnet-integration-enable.md). Updates a resource by using the specified resource ID: #### Parameter values -| Element | Value | Description | -||--|-| +| Element | Value | +||--| | HTTP request method | **PATCH** | | <*resourceId*> | **subscriptions/{yourSubscriptionID}/resourcegroups/{yourResourceGroup}/providers/Microsoft.Web/sites/{websiteName}/config/web** | | <*yourSubscriptionId*> | The ID for your Azure subscription | Resource scale-in events might cause the loss of context for built-in connectors 1. On your logic app resource menu, under **Settings**, select **Scale out**. -1. Under **App Scale Out**, set **Enforce Scale Out Limit** to **Yes**, which shows the **Maximum Scale Out Limit**. +1. On the **Scale out** page, in the **App Scale out** section, follow these steps: ++ 1. Set **Enforce Scale Out Limit** to **Yes**, which shows the **Maximum Scale Out Limit**. -1. 
On the **Scale out** page, under **App Scale out**, set the number for **Always Ready Instances** to the same number as **Maximum Scale Out Limit** and **Maximum Burst**, which appears under **Plan Scale Out**, for example: + 1. Set **Always Ready Instances** to the same number as **Maximum Scale Out Limit** and **Maximum Burst**, which appears in the **Plan Scale out** section, for example: - :::image type="content" source="media/enable-stateful-affinity-built-in-connectors/scale-in-settings.png" alt-text="Screenshot shows Azure portal, Standard logic app resource, Scale out page, and Always Ready Instances number set to match Maximum Scale Out Limit and Maximum Burst."::: + :::image type="content" source="media/enable-stateful-affinity-built-in-connectors/scale-in-settings.png" alt-text="Screenshot shows Azure portal, Standard logic app resource, Scale out page, and Always Ready Instances number set to match Maximum Burst and Maximum Scale Out Limit."::: 1. When you're done, on the **Scale out** toolbar, select **Save**. |
container-apps | Dotnet Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dotnet-overview.md | For a chance to implement custom logic to determine the health of your application By default, Azure Container Apps automatically scales your ASP.NET Core apps based on the number of incoming HTTP requests. You can also configure custom autoscaling rules based on other metrics, such as CPU or memory usage. To learn more about scaling, see [Set scaling rules in Azure Container Apps](scale-app.md). +In .NET 8.0.4 and later, ASP.NET Core apps that use [data protection](/aspnet/core/security/data-protection/introduction) are automatically configured to keep protected data accessible to all replicas as the application scales. When your app begins to scale, a key manager handles writing and sharing keys across multiple revisions. As the app is deployed, the environment variable `autoConfigureDataProtection` is automatically set to `true` to enable this feature. For more information on this auto configuration, see [this GitHub pull request](https://github.com/Azure/azure-rest-api-specs/pull/28001). + Autoscaling changes the number of replicas of your app based on the rules you define. By default, Container Apps randomly routes incoming traffic to the replicas of your ASP.NET Core app. Since traffic can split among different replicas, your app should be stateless so your application doesn't experience state-related issues. Features such as anti-forgery, authentication, SignalR, Blazor Server, and Razor Pages that depend on data protection require extra configuration to work correctly when scaling to multiple replicas. |
container-registry | Container Registry Artifact Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-artifact-cache.md | Artifact cache offers faster and more *reliable pull operations* through Azure Container Registry. Artifact cache allows cached registries to be accessible over *private networks* for users to align with firewall configurations and compliance standards seamlessly. -Artifact cache addresses the challenge of anonymous pull limits imposed by public registries like Docker Hub. By allowing users to pull images from the local ACR, it circumvents these limits, ensuring *uninterrupted content delivery* from upstream sources and eliminating the concern of hitting pull limits. +Artifact cache addresses the challenge of pull limits imposed by public registries. We recommend that users authenticate their cache rules with their upstream source credentials and then pull images from the local ACR to help mitigate rate limits. ## Terminology Artifact cache addresses the challenge of anonymous pull limits imposed by publi Artifact cache currently supports the following upstream registries: +>[!WARNING] +> We recommend that customers [create a credential set](container-registry-artifact-cache.md#create-new-credentials) when sourcing content from Docker Hub. + | Upstream Registries | Support | Availability | |-|-|--| | Docker Hub | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal | |
container-registry | Troubleshoot Artifact Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/troubleshoot-artifact-cache.md | To resolve this issue, you need to follow these steps: Artifact cache currently supports the following upstream registries: +>[!WARNING] +> We recommend that customers [create a credential set](container-registry-artifact-cache.md#create-new-credentials) when sourcing content from Docker Hub. + | Upstream Registries | Support | Availability | |-|-|--| | Docker Hub | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal | |
cosmos-db | Analytical Store Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md | + > [!IMPORTANT] + > Mirroring in Microsoft Fabric is now available in preview for the NoSQL API. This feature provides all the capabilities of Azure Synapse Link with better analytical performance, the ability to unify your data estate with Fabric OneLake, and open access to your data in OneLake in Delta Parquet format. If you are considering Azure Synapse Link, we recommend that you try mirroring to assess overall fit for your organization. To get started with mirroring, see the [Mirroring Azure Cosmos DB overview](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context). ++To get started with Azure Synapse Link, visit ["Getting started with Azure Synapse Link"](synapse-link.md). + Azure Cosmos DB analytical store is a fully isolated column store for enabling large-scale analytics against operational data in your Azure Cosmos DB, without any impact on your transactional workloads. Azure Cosmos DB transactional store is schema-agnostic, and it allows you to iterate on your transactional applications without having to deal with schema or index management. In contrast to this, Azure Cosmos DB analytical store is schematized to optimize for analytical query performance. This article describes analytical storage in detail. When you enable analytical store on an Azure Cosmos DB container, a new column-store is internally created based on the operational data in your container. ## Column store for analytical workloads on operational data -Analytical workloads typically involve aggregations and sequential scans of selected fields. By storing the data in a column-major order, the analytical store allows a group of values for each field to be serialized together. This format reduces the IOPS required to scan or compute statistics over specific fields. It dramatically improves the query response times for scans over large data sets. +Analytical workloads typically involve aggregations and sequential scans of selected fields. Data in the analytical store is stored in column-major order, allowing values of each field to be serialized together, where applicable. This format reduces the IOPS required to scan or compute statistics over specific fields. It dramatically improves the query response times for scans over large data sets. For example, if your operational tables are in the following format: :::image type="content" source="./media/analytical-store-introduction/sample-operational-data-table.png" alt-text="Example operational table" border="false"::: -The row store persists the above data in a serialized format, per row, on the disk. This format allows for faster transactional reads, writes, and operational queries, such as, "Return information about Product1". However, as the dataset grows large and if you want to run complex analytical queries on the data it can be expensive. For example, if you want to get "the sales trends for a product under the category named 'Equipment' across different business units and months", you need to run a complex query. Large scans on this dataset can get expensive in terms of provisioned throughput and can also impact the performance of the transactional workloads powering your real-time applications and services. +The row store persists the above data in a serialized format, per row, on the disk. This format allows for faster transactional reads, writes, and operational queries, such as "Return information about Product 1". 
However, as the dataset grows large, running complex analytical queries on the data can be expensive. For example, if you want to get "the sales trends for a product under the category named 'Equipment' across different business units and months", you need to run a complex query. Large scans on this dataset can get expensive in terms of provisioned throughput and can also impact the performance of the transactional workloads powering your real-time applications and services. Analytical store, which is a column store, is better suited for such queries because it serializes similar fields of data together and reduces the disk IOPS. At the end of each execution of the automatic sync process, your transactional d ## Scalability & elasticity -By using horizontal partitioning, Azure Cosmos DB transactional store can elastically scale the storage and throughput without any downtime. Horizontal partitioning in the transactional store provides scalability & elasticity in auto-sync to ensure data is synced to the analytical store in near real time. The data sync happens regardless of the transactional traffic throughput, whether it's 1000 operations/sec or 1 million operations/sec, and it doesn't impact the provisioned throughput in the transactional store. +Azure Cosmos DB transactional store uses horizontal partitioning to elastically scale the storage and throughput without any downtime. Horizontal partitioning in the transactional store provides scalability & elasticity in auto-sync to ensure data is synced to the analytical store in near real time. The data sync happens regardless of the transactional traffic throughput, whether it's 1000 operations/sec or 1 million operations/sec, and it doesn't impact the provisioned throughput in the transactional store. ## <a id="analytical-schema"></a>Automatically handle schema updates sql_results.show() ##### Using full fidelity schema with SQL -Considering the same documents of the Spark example above, customers can use the following syntax example: +You can use the following syntax example with the same documents as in the Spark example above: ```SQL SELECT rating,timestamp_string,timestamp_utc timestamp_utc float '$.timestamp.float64' WHERE timestamp is not null or timestamp_utc is not null ``` -Starting from the query above, customers can implement transformations using `cast`, `convert` or any other T-SQL function to manipulate your data. Customers can also hide complex datatype structures by using views. +You can implement transformations using `cast`, `convert`, or any other T-SQL function to manipulate your data. You can also hide complex datatype structures by using views. ```SQL create view MyView as WHERE timestamp_string is not null ``` -##### Working with the MongoDB `_id` field +##### Working with MongoDB `_id` field -the MongoDB `_id` field is fundamental to every collection in MongoDB and originally has a hexadecimal representation. As you can see in the table above, full fidelity schema will preserve its characteristics, creating a challenge for its visualization in Azure Synapse Analytics. +The MongoDB `_id` field is fundamental to every collection in MongoDB and originally has a hexadecimal representation. As you can see in the table above, full fidelity schema will preserve its characteristics, creating a challenge for its visualization in Azure Synapse Analytics. 
For correct visualization, you must convert the `_id` datatype as below: -###### Working with the MongoDB `_id` field in Spark +###### Working with MongoDB `_id` field in Spark The example below works on Spark 2.x and 3.x versions: val dfConverted = df.withColumn("objectId", col("_id.objectId")).withColumn("con display(dfConverted) ``` -###### Working with the MongoDB `_id` field in SQL +###### Working with MongoDB `_id` field in SQL ```SQL SELECT TOP 100 id=CAST(_id as VARBINARY(1000)) It's possible to use full fidelity schema for API for NoSQL accounts, instead of * Currently, if you enable Synapse Link in your NoSQL API account using the Azure portal, it will be enabled as well-defined schema. * Currently, if you want to use full fidelity schema with NoSQL or Gremlin API accounts, you have to set it at account level in the same CLI or PowerShell command that will enable Synapse Link at account level. * Currently, Azure Cosmos DB for MongoDB isn't compatible with this option of changing the schema representation. All MongoDB accounts have full fidelity schema representation type.-* Full Fidelity schema data types map mentioned above isn't valid for NoSQL API accounts, that use JSON datatypes. As an example, `float` and `integer` values are represented as `num` in analytical store. +* The full fidelity schema data type map mentioned above isn't valid for NoSQL API accounts that use JSON datatypes. As an example, `float` and `integer` values are represented as `num` in analytical store. * It's not possible to reset the schema representation type from well-defined to full fidelity, or vice-versa. * Currently, container schemas in analytical store are defined when the container is created, even if Synapse Link has not been enabled in the database account. * Containers or graphs created before Synapse Link was enabled with full fidelity schema at account level will have well-defined schema. Data tiering refers to the separation of data between storage infrastructures op After the analytical store is enabled, based on the data retention needs of the transactional workloads, you can configure the `transactional TTL` property to have records automatically deleted from the transactional store after a certain time period. Similarly, the `analytical TTL` allows you to manage the lifecycle of data retained in the analytical store, independent from the transactional store. By enabling analytical store and configuring transactional and analytical `TTL` properties, you can seamlessly tier and define the data retention period for the two stores. > [!NOTE]-> When `analytical TTL` is bigger than `transactional TTL`, your container will have data that only exists in analytical store. This data is read only and currently we don't support document level `TTL` in analytical store. If your container data may need an update or a delete at some point in time in the future, don't use `analytical TTL` bigger than `transactional TTL`. This capability is recommended for data that won't need updates or deletes in the future. +> When `analytical TTL` is set to a value larger than the `transactional TTL` value, your container will have data that only exists in analytical store. This data is read only and currently we don't support document level `TTL` in analytical store. If your container data may need an update or a delete at some point in time in the future, don't use `analytical TTL` bigger than `transactional TTL`. This capability is recommended for data that won't need updates or deletes in the future. 
> [!NOTE] > If your scenario doesn't demand physical deletes, you can adopt a logical delete/update approach. Insert into the transactional store another version of the same document that only exists in analytical store but needs a logical delete/update, with a flag indicating that it's a delete or an update of an expired document. Both versions of the same document will co-exist in analytical store, and your application should only consider the last one. After the analytical store is enabled, based on the data retention needs of the Analytical store relies on Azure Storage and offers the following protection against physical failure: * By default, Azure Cosmos DB database accounts allocate analytical store in Locally Redundant Storage (LRS) accounts. LRS provides at least 99.999999999% (11 nines) durability of objects over a given year.- * If any geo-region of the database account is configured for zone-redundancy, it is allocated in Zone-redundant Storage (ZRS) accounts. Customers need to enable Availability Zones on a region of their Azure Cosmos DB database account to have analytical data of that region stored in Zone-redundant Storage. ZRS offers durability for storage resources of at least 99.9999999999% (12 9's) over a given year. + * If any geo-region of the database account is configured for zone-redundancy, it is allocated in Zone-redundant Storage (ZRS) accounts. You need to enable Availability Zones on a region of your Azure Cosmos DB database account to have analytical data of that region stored in Zone-redundant Storage. ZRS offers durability for storage resources of at least 99.9999999999% (12 9's) over a given year. -For more information about Azure Storage durability, click [here](/azure/storage/common/storage-redundancy). +For more information about Azure Storage durability, see [Azure Storage redundancy](/azure/storage/common/storage-redundancy). ## Backup Synapse Link, and analytical store by consequence, has different compatibility l * Periodic backup mode is fully compatible with Synapse Link and these two features can be used in the same database account. * Synapse Link for database accounts using continuous backup mode is GA.-* Continuous backup mode for Synapse Link enabled accounts is in public preview. Currently, customers that disabled Synapse Link from containers can't migrate to continuous backup. +* Continuous backup mode for Synapse Link-enabled accounts is in public preview. Currently, you can't migrate to continuous backup if you disabled Synapse Link on any of your collections in a Cosmos DB account. ### Backup policies Analytical store partitioning is completely independent of partitioning in The analytical store is optimized to provide scalability, elasticity, and performance for analytical workloads without any dependency on the compute run-times. The storage technology is self-managed to optimize your analytics workloads without manual efforts. -By decoupling the analytical storage system from the analytical compute system, data in Azure Cosmos DB analytical store can be queried simultaneously from the different analytics runtimes supported by Azure Synapse Analytics. As of today, Azure Synapse Analytics supports Apache Spark and serverless SQL pool with Azure Cosmos DB analytical store. +Data in Azure Cosmos DB analytical store can be queried simultaneously from the different analytics runtimes supported by Azure Synapse Analytics. Azure Synapse Analytics supports Apache Spark and serverless SQL pool with Azure Cosmos DB analytical store. 
> [!NOTE] > You can only read from analytical store using Azure Synapse Analytics runtimes. The opposite is also true: Azure Synapse Analytics runtimes can only read from analytical store. Only the auto-sync process can change data in analytical store. You can write data back to Azure Cosmos DB transactional store using the Azure Synapse Analytics Spark pool with the built-in Azure Cosmos DB OLTP SDK. |
cosmos-db | Analytics And Business Intelligence Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytics-and-business-intelligence-overview.md | + + Title: Analytics and BI ++description: Review Azure Cosmos DB options to enable large-scale analytics and BI reporting on your operational data. ++++ Last updated : 07/01/2024+++# Analytics and Business Intelligence (BI) on your Azure Cosmos DB data ++Azure Cosmos DB offers various options to enable large-scale analytics and BI reporting on your operational data. ++To get meaningful insights on your Azure Cosmos DB data, you may need to query across multiple partitions, collections, or databases. In some cases, you might combine this data with other data sources in your organization, such as Azure SQL Database and Azure Data Lake Storage Gen2. You might also query with aggregate functions such as sum and count. Such queries need heavy computational power, which likely consumes more request units (RUs); as a result, these queries might affect your mission-critical workload performance. ++To isolate transactional workloads from the performance impact of complex analytical queries, database data is ingested nightly to a central location using complex Extract-Transform-Load (ETL) pipelines. Such ETL-based analytics are complex and costly, and they delay insights on business data. ++Azure Cosmos DB addresses these challenges by providing no-ETL, cost-effective analytics offerings. ++## No-ETL, near real-time analytics on Azure Cosmos DB +Azure Cosmos DB offers no-ETL, near real-time analytics on your data without affecting the performance of your transactional workloads or request units (RUs). These offerings remove the need for complex ETL pipelines, making your Azure Cosmos DB data seamlessly available to analytics engines. With reduced latency to insights, you can provide an enhanced customer experience and react more quickly to changes in market conditions or the business environment. Here are some sample [scenarios](synapse-link-use-cases.md) you can achieve with quick insights into your data. + + You can enable no-ETL analytics and BI reporting on Azure Cosmos DB using the following options: ++* Mirroring your data into Microsoft Fabric +* Enabling Azure Synapse Link to access data from Azure Synapse Analytics + ++### Option 1: Mirroring your Azure Cosmos DB data into Microsoft Fabric ++Mirroring enables you to seamlessly bring your Azure Cosmos DB database data into Microsoft Fabric. With no-ETL, you can get rich business insights on your Azure Cosmos DB data using Fabric's built-in analytics, BI, and AI capabilities. ++Your Cosmos DB operational data is incrementally replicated into Fabric OneLake in near real-time. Data in OneLake is stored in open-source Delta Parquet format and made available to all analytical engines in Fabric. With open access, you can use it with various Azure services such as Azure Databricks, Azure HDInsight, and more. OneLake also helps unify your data estate for your analytical needs. Mirrored data can be joined with any other data in OneLake, such as Lakehouses, Warehouses, or shortcuts. You can also join Azure Cosmos DB data with other mirrored database sources such as Azure SQL Database and Snowflake. +You can query across Azure Cosmos DB collections or databases mirrored into OneLake. ++With Mirroring in Fabric, you don't need to piece together different services from multiple vendors. 
Instead, you can enjoy a highly integrated, end-to-end, and easy-to-use product that is designed to simplify your analytics needs. +You can use T-SQL to run complex aggregate queries and Spark for data exploration. You can seamlessly access the data in notebooks, use data science to build machine learning models, and build Power BI reports using Direct Lake powered by rich Copilot integration. +++If you're looking for analytics on your operational data in Azure Cosmos DB, mirroring provides: +* No-ETL, cost-effective near real-time analytics on Azure Cosmos DB data without affecting your request unit (RU) consumption +* Ease of bringing data across various sources into Fabric OneLake +* Improved query performance of the SQL engine handling delta tables, with V-order optimizations +* Improved cold start time for the Spark engine with deep integration with ML/notebooks +* One-click integration with Power BI with Direct Lake and Copilot +* Richer app integration to access queries and views with GraphQL +* Open access to and from other services such as Azure Databricks ++To get started with mirroring, visit ["Get started with mirroring tutorial"](/fabric/database/mirrored-database/azure-cosmos-db-tutorial?context=/azure/cosmos-db/context/context). +++### Option 2: Azure Synapse Link to access data from Azure Synapse Analytics +Azure Synapse Link for Azure Cosmos DB creates a tight, seamless integration between Azure Cosmos DB and Azure Synapse Analytics, enabling no-ETL, near real-time analytics on your operational data. +Transactional data is seamlessly synced to the analytical store, which stores the data in columnar format optimized for analytics. ++Azure Synapse Analytics can access this data in the analytical store, without further movement, using Azure Synapse Link. Business analysts, data engineers, and data scientists can now use Synapse Spark or Synapse SQL interchangeably to run near real-time business intelligence, analytics, and machine learning pipelines. ++The following image shows the Azure Synapse Link integration with Azure Cosmos DB and Azure Synapse Analytics: +++ > [!IMPORTANT] + > Mirroring in Microsoft Fabric is now available in preview for the NoSQL API. This feature provides all the capabilities of Azure Synapse Link with better analytical performance, the ability to unify your data estate with Fabric OneLake, and open access to your data in OneLake in Delta Parquet format. If you are considering Azure Synapse Link, we recommend that you try mirroring to assess overall fit for your organization. To get started with mirroring, see the [Mirroring Azure Cosmos DB overview](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context). ++To get started with Azure Synapse Link, visit ["Getting started with Azure Synapse Link"](synapse-link.md). +++## Real-time analytics and BI on Azure Cosmos DB: Other options +There are a few other options to enable real-time analytics on Azure Cosmos DB data: +* Using [change feed](nosql/changefeed-ecommerce-solution.md) +* Using [Spark connector directly on Azure Cosmos DB](nosql/tutorial-spark-connector.md) +* Using the Power BI connector directly on Azure Cosmos DB ++While these options are included for completeness and work well with single-partition queries in real time, these methods have the following challenges for analytical queries: +* Performance impact on your workload: ++ Analytical queries tend to be complex and consume significant compute capacity. 
When these queries are run against your Azure Cosmos DB data directly, you might experience performance degradation on your transactional queries. +* Cost impact: + + When analytical queries are run directly against your database or collections, they increase the request units (RUs) you need to allocate, as analytical queries tend to be complex and need more computation power. Increased RU usage will likely lead to significant cost impact over time if you run aggregate queries. ++Instead of these options, we recommend that you use Mirroring in Microsoft Fabric or Azure Synapse Link, which provide no-ETL analytics without affecting transactional workload performance or request units. ++## Related content ++* [Mirroring Azure Cosmos DB overview](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context) ++* [Getting started with mirroring](/fabric/database/mirrored-database/azure-cosmos-db-tutorial?context=/azure/cosmos-db/context/context) ++* [Azure Synapse Link for Azure Cosmos DB](synapse-link.md) ++* [Working with Azure Synapse Link for Azure Cosmos DB](configure-synapse-link.md) ++ |
cosmos-db | Configure Synapse Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md | + > [!IMPORTANT] + > Mirroring in Microsoft Fabric is now available in preview for the NoSQL API. This feature provides all the capabilities of Azure Synapse Link with better analytical performance, the ability to unify your data estate with Fabric OneLake, and open access to your data in OneLake in Delta Parquet format. If you are considering Azure Synapse Link, we recommend that you try mirroring to assess overall fit for your organization. To get started with mirroring, see the [Mirroring Azure Cosmos DB overview](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context). + Azure Synapse Link is available for Azure Cosmos DB SQL API and Azure Cosmos DB API for MongoDB accounts. It's in preview for the Gremlin API, with activation via CLI commands. Use the following steps to run analytical queries with Azure Synapse Link for Azure Cosmos DB: * [Enable Azure Synapse Link for your Azure Cosmos DB accounts](#enable-synapse-link) |
cosmos-db | Synapse Link Use Cases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link-use-cases.md | Title: Near real-time analytics use cases with Azure Synapse Link for Azure Cosmos DB -description: Learn how Azure Synapse Link for Azure Cosmos DB is used in Supply chain analytics, forecasting, reporting, real-time personalization, and IOT predictive maintenance. + Title: Near real-time analytics use cases for Azure Cosmos DB +description: Learn how real-time analytics is used in supply chain analytics, forecasting, reporting, real-time personalization, and IoT predictive maintenance. - Previously updated : 09/29/2022-+ Last updated : 06/25/2024+ -# Azure Synapse Link for Azure Cosmos DB: Near real-time analytics use cases +# Azure Cosmos DB: No-ETL analytics use cases [!INCLUDE[NoSQL, MongoDB, Gremlin](includes/appliesto-nosql-mongodb-gremlin.md)] -[Azure Synapse Link](synapse-link.md) for Azure Cosmos DB is a cloud native hybrid transactional and analytical processing (HTAP) capability that enables you to run near real-time analytics over operational data. Synapse Link creates a tight seamless integration between Azure Cosmos DB and Azure Synapse Analytics. +Azure Cosmos DB provides various analytics options for no-ETL, near real-time analytics over operational data. You can enable analytics on your Azure Cosmos DB data using the following options: +* Mirroring Azure Cosmos DB in Microsoft Fabric +* Azure Synapse Link for Azure Cosmos DB -You might be curious to understand what industry use cases can leverage this cloud native HTAP capability for near real-time analytics over operational data. Here are three common use cases for Azure Synapse Link for Azure Cosmos DB: +To learn more about these options, see ["Analytics and BI on your Azure Cosmos DB data."](analytics-and-business-intelligence-overview.md) ++> [!IMPORTANT] +> Mirroring Azure Cosmos DB in Microsoft Fabric is now available in preview for the NoSQL API. This feature provides all the capabilities of Azure Synapse Link with better analytical performance, the ability to unify your data estate with Fabric OneLake, and open access to your data in OneLake in Delta Parquet format. If you are considering Azure Synapse Link, we recommend that you try mirroring to assess overall fit for your organization. To get started with mirroring, see the [Mirroring Azure Cosmos DB overview](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context). ++No-ETL, near real-time analytics can open up various possibilities for your business. Here are three sample scenarios: * Supply chain analytics, forecasting & reporting * Real-time personalization * Predictive maintenance, anomaly detection in IoT scenarios -> [!NOTE] -> Azure Synapse Link for Azure Cosmos DB targets the scenario where enterprise teams are looking to run near real-time analytics. These analytics are run without ETL over operational data generated across transactional applications built on Azure Cosmos DB. This does not replace the need for a separate data warehouse when there are traditional data warehouse requirements such as workload management, high concurrency, persistence aggregates across multiple data sources. --> [!NOTE] -> Synapse Link for Gremlin API is now in preview. You can enable Synapse Link in your new or existing graphs using Azure CLI. For more information on how to configure it, click [here](configure-synapse-link.md). 
- ## Supply chain analytics, forecasting & reporting Research studies show that embedding big data analytics in supply chain operations leads to improvements in order-to-cycle delivery times and supply chain efficiency. Manufacturers are onboarding to cloud-native technologies to break out of constraints of legacy Enterprise Resource Planning (ERP) and Supply Chain Management (SCM) systems. With supply chains generating increasing volumes of operational data every minute (order, shipment, transaction data), manufacturers need an operational database that scales to handle the data volumes, as well as an analytical platform that provides real-time contextual intelligence to stay ahead of the curve. -The following architecture shows the power of leveraging Azure Cosmos DB as the cloud-native operational database and Synapse Link in supply chain analytics: +The following architecture shows the power of using Azure Cosmos DB as the cloud-native operational database in supply chain analytics: -Based on previous architecture, you can achieve the following use cases with Synapse Link for Azure Cosmos DB: +Based on the previous architecture, you can achieve the following use cases: * **Prepare & train predictive pipeline:** Generate insights over the operational data across the supply chain using machine learning. This way you can lower inventory and operations costs, and reduce order-to-delivery times for customers. - Synapse Link allows you to analyze the changing operational data in Azure Cosmos DB without any manual ETL processes. It saves you from additional cost, latency, and operational complexity. Synapse Link enables data engineers and data scientists to build robust predictive pipelines: + Mirroring and Synapse Link allow you to analyze the changing operational data in Azure Cosmos DB without any manual ETL processes. These offerings save you from additional cost, latency, and operational complexity. They enable data engineers and data scientists to build robust predictive pipelines: - * Query operational data from Azure Cosmos DB analytical store by leveraging native integration with Apache Spark pools in Azure Synapse Analytics. You can query the data in an interactive notebook or scheduled remote jobs without complex data engineering. + * Query operational data from Azure Cosmos DB by using native integration with Apache Spark pools in Microsoft Fabric or Azure Synapse Analytics. You can query the data in an interactive notebook or scheduled remote jobs without complex data engineering. - * Build Machine Learning (ML) models with Spark ML algorithms and Azure ML integration in Azure Synapse Analytics. + * Build Machine Learning (ML) models with Spark ML algorithms and Azure Machine Learning (AML) integration in Microsoft Fabric or Azure Synapse Analytics. * Write back the results after model inference into Azure Cosmos DB for operational near-real-time scoring. * **Operational reporting:** Supply chain teams need flexible and custom reports over real-time, accurate operational data. These reports are required to obtain a snapshot view of supply chain effectiveness, profitability, and productivity. They allow data analysts and other key stakeholders to constantly reevaluate the business and identify areas to tweak to reduce operational costs. 
- Synapse Link for Azure Cosmos DB enables rich business intelligence (BI)/reporting scenarios: + Mirroring and Synapse Link for Azure Cosmos DB enable rich business intelligence (BI) and reporting scenarios: - * Query operational data from Azure Cosmos DB analytical store by using native integration with serverless SQL pool and full expressiveness of T-SQL language. + * Query operational data from Azure Cosmos DB by using native integration with the full expressiveness of the T-SQL language. - * Model and publish auto refreshing BI dashboards over Azure Cosmos DB through serverless SQL pool support for familiar BI tools. For example, Azure Analysis Services, Power BI Premium, etc. + * Model and publish auto-refreshing BI dashboards over Azure Cosmos DB through Power BI integrated in Microsoft Fabric or Azure Synapse Analytics. The following is some guidance for integrating batch and streaming data into Azure Cosmos DB: -* **Batch data integration & orchestration:** With supply chains getting more complex, supply chain data platforms need to integrate with variety of data sources and formats. Azure Synapse comes built-in with the same data integration engine and experiences as Azure Data Factory. This integration allows data engineers to create rich data pipelines without a separate orchestration engine: +* **Batch data integration & orchestration:** With supply chains getting more complex, supply chain data platforms need to integrate with a variety of data sources and formats. Microsoft Fabric and Azure Synapse come built-in with the same data integration engine and experiences as Azure Data Factory. This integration allows data engineers to create rich data pipelines without a separate orchestration engine: - * Move data from 85+ supported data sources to [Azure Cosmos DB with Azure Data Factory](../data-factory/connector-azure-cosmos-db.md). + * Move data from 85+ supported data sources to [Azure Cosmos DB with Azure Data Factory](../data-factory/connector-azure-cosmos-db.md). * Write code-free ETL pipelines to Azure Cosmos DB including [relational-to-hierarchical and hierarchical-to-hierarchical mappings with mapping data flows](../data-factory/how-to-sqldb-to-cosmosdb.md). The following is some guidance for data integration for batch & streaming data i Retailers today must build secure and scalable e-commerce solutions that meet the demands of both customers and business. These e-commerce solutions need to engage customers through customized products and offers, process transactions quickly and securely, and focus on fulfillment and customer service. Azure Cosmos DB, along with Synapse Link for Azure Cosmos DB, allows retailers to generate personalized recommendations for customers in real time. They use low-latency and tunable consistency settings for immediate insights as shown in the following architecture: --Synapse Link for Azure Cosmos DB use case: --* **Prepare & train predictive pipeline:** You can generate insights over the operational data across your business units or customer segments using Synapse Spark and machine learning models. This translates to personalized delivery to target customer segments, predictive end-user experiences and targeted marketing to fit your end-user requirements. +* **Prepare & train predictive pipeline:** You can generate insights over the operational data across your business units or customer segments using Fabric or Synapse Spark and machine learning models. 
This translates to personalized delivery to target customer segments, predictive end-user experiences, and targeted marketing to fit your end-user requirements. ## IoT predictive maintenance Industrial IoT innovations have drastically reduced downtimes of machinery and increased overall efficiency across all fields of industry. One such innovation is predictive maintenance analytics for machinery at the edge of the cloud. -The following is an architecture leveraging the cloud native HTAP capabilities of Azure Synapse Link for Azure Cosmos DB in IoT predictive maintenance: -+The following is an architecture using the cloud-native HTAP capabilities in IoT predictive maintenance: -Synapse Link for Azure Cosmos DB use cases: * **Prepare & train predictive pipeline:** The historical operational data from IoT device sensors could be used to train predictive models such as anomaly detectors. These anomaly detectors are then deployed back to the edge for real-time monitoring. Such a virtuous loop allows for continuous retraining of the predictive models. -* **Operational reporting:** With the growth of digital twin initiatives, companies are collecting vast amounts of operational data from large number of sensors to build a digital copy of each machine. This data powers BI needs to understand trends over historical data in addition to real-time applications over recent hot data. --## Sample scenario: HTAP for Azure Cosmos DB --For nearly a decade, Azure Cosmos DB has been used by thousands of customers for mission critical applications that require elastic scale, turnkey global distribution, multi-region write replication for low latency and high availability of both reads & writes in their transactional workloads. - The following list shows an overview of the various workload patterns that are supported with operational data using Azure Cosmos DB: --* Real-time apps & services -* Event stream processing -* BI dashboards -* Big data analytics -* Machine learning --Azure Synapse Link enables Azure Cosmos DB to not just power transactional workloads but also perform near real-time analytical workloads over historical operational data. It happens with no ETL requirements and guaranteed performance isolation from the transactional workloads. --The following image shows workload patterns using Azure Cosmos DB: --Let us take the example of an e-commerce company CompanyXYZ with global operations across 20 countries/regions to illustrate the benefits of choosing Azure Cosmos DB as the single real-time database powering both transactional and analytical requirements of an inventory management platform. +* **Operational reporting:** With the growth of digital twin initiatives, companies are collecting vast amounts of operational data from a large number of sensors to build a digital copy of each machine. This data powers BI scenarios for understanding trends over historical data in addition to recent hot data. -* CompanyXYZ's core business depends on the inventory management system – hence availability & reliability are core pillar requirements. Benefits of using Azure Cosmos DB: +## Related content - * By virtue of deep integration with Azure infrastructure, and transparent multi-region writes, global replication, Azure Cosmos DB provides industry-leading [99.999% high availability](high-availability.md) against regional outages. 
-* CompanyXYZ's supply chain partners may be in separate geographic locations but they may have to see a single view of the product inventory across the globe to support their local operations. This includes the need to be able to read updates made by other supply chain partners in real time. As well as being able to make updates without worrying about conflicts with other partners at high throughput. Benefits of using Azure Cosmos DB: -- * With its unique multi-region writes replication protocol and latch-free, write-optimized transactional store, Azure Cosmos DB guarantees less than 10-ms latencies for both indexed reads and writes at the 99th percentile globally. -- * High throughput ingestion of both batch & streaming data feeds with [real-time indexing](index-policy.md) in transactional store. -- * Azure Cosmos DB transactional store provides three more options than the two extremes of strong and eventual consistency levels to achieve the [availability vs performance tradeoffs](./consistency-levels.md) closest to the business need. --* CompanyXYZ's supply chain partners have highly fluctuating traffic patterns ranging from hundreds to millions of requests/s and thus the inventory management platform needs to deal with unexpected burstiness in traffic. Benefits of using Azure Cosmos DB: -- * Azure Cosmos DB's transactional store supports elastic scalability of storage and throughput using horizontal partitioning. Containers and databases configured in Autopilot mode can automatically and instantly scale the provisioned throughput based on the application needs without impacting the availability, latency, throughput, or performance of the workload globally. --* CompanyXYZ needs to establish a secure analytics platform to house system-wide historical inventory data to enable analytics and insights across supply chain partner, business units and functions. The analytics platform needs to enable collaboration across the system, traditional BI/reporting use cases, advanced analytics use cases and predictive intelligent solutions over the operational inventory data. Benefits of using Synapse Link for Azure Cosmos DB: -- * By using [Azure Cosmos DB analytical store](analytical-store-introduction.md), a fully isolated column store, Synapse Link enables no Extract-Transform-Load (ETL) analytics in [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md) against globally distributed operational data at scale. Business analysts, data engineers and data scientists can now use Synapse Spark or Synapse SQL in an interoperable manner to run near real-time business intelligence, analytics, and machine learning pipelines without impacting the performance of their transactional workloads on Azure Cosmos DB. See the [benefits of Synapse Link in Azure Cosmos DB](synapse-link.md) for more details. 
--## Next steps --To learn more, see the following docs: +* [Mirroring Azure Cosmos DB overview](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context) +* [Getting started with mirroring](/fabric/database/mirrored-database/azure-cosmos-db-tutorial?context=/azure/cosmos-db/context/context) + * [Azure Synapse Link for Azure Cosmos DB](synapse-link.md) -* [Azure Cosmos DB Analytical Store](analytical-store-introduction.md) - * [Working with Azure Synapse Link for Azure Cosmos DB](configure-synapse-link.md) -* [Frequently asked questions about Azure Synapse Link for Azure Cosmos DB](synapse-link-frequently-asked-questions.yml) --* [Apache Spark in Azure Synapse Analytics](../synapse-analytics/spark/apache-spark-concepts.md) --* [Serverless SQL pool runtime support in Azure Synapse Analytics](../synapse-analytics/sql/on-demand-workspace-overview.md) |
cosmos-db | Synapse Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link.md | Azure Synapse Link for Azure Cosmos DB is a cloud-native hybrid transactional an [Azure Cosmos DB analytical store](analytical-store-introduction.md), a fully isolated column store, can be used with Azure Synapse Link to enable no Extract-Transform-Load (ETL) analytics in [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md) against your operational data at scale. Business analysts, data engineers, and data scientists can now use Synapse Spark or Synapse SQL interchangeably to run near real-time business intelligence, analytics, and machine learning pipelines. You can analyze real-time data without affecting the performance of your transactional workloads on Azure Cosmos DB. +> [!IMPORTANT] +> Mirroring Azure Cosmos DB in Microsoft Fabric is now available in preview for the NoSQL API. This feature provides all the capabilities of Azure Synapse Link with better analytical performance, the ability to unify your data estate with Fabric OneLake, and open access to your data in OneLake in the Delta Parquet format. If you're considering Azure Synapse Link, we recommend that you try mirroring to assess overall fit for your organization. To get started, see [Mirroring Azure Cosmos DB](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context). + The following image shows the Azure Synapse Link integration with Azure Cosmos DB and Azure Synapse Analytics: :::image type="content" source="./media/synapse-link/synapse-analytics-cosmos-db-architecture.png" alt-text="Architecture diagram for Azure Synapse Analytics integration with Azure Cosmos DB" border="false"::: |
data-factory | How To Create Custom Event Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-custom-event-trigger.md | Title: Create custom event triggers in Azure Data Factory -description: Learn how to create a trigger in Azure Data Factory that runs a pipeline in response to a custom event published to Event Grid. +description: Learn how to create a trigger in Azure Data Factory that runs a pipeline in response to a custom event published to Azure Event Grid. Last updated 01/05/2024 [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] -Event-driven architecture (EDA) is a common data integration pattern that involves production, detection, consumption, and reaction to events. Data integration scenarios often require Azure Data Factory customers to trigger pipelines when certain events occur. Data Factory native integration with [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) now covers [custom topics](../event-grid/custom-topics.md). You send events to an event grid topic. Data Factory subscribes to the topic, listens, and then triggers pipelines accordingly. --> [!NOTE] -> The integration described in this article depends on [Azure Event Grid](https://azure.microsoft.com/services/event-grid/). Make sure that your subscription is registered with the Event Grid resource provider. For more information, see [Resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). You must be able to do the `Microsoft.EventGrid/eventSubscriptions/` action. This action is part of the [EventGrid EventSubscription Contributor](../role-based-access-control/built-in-roles.md#eventgrid-eventsubscription-contributor) built-in role. +Event-driven architecture is a common data integration pattern that involves production, detection, consumption, and reaction to events. Data integration scenarios often require Azure Data Factory customers to trigger pipelines when certain events occur. Data Factory native integration with [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) now covers [custom topics](../event-grid/custom-topics.md). You send events to an Event Grid topic. Data Factory subscribes to the topic, listens, and then triggers pipelines accordingly. +The integration described in this article depends on [Azure Event Grid](https://azure.microsoft.com/services/event-grid/). Make sure that your subscription is registered with the Event Grid resource provider. For more information, see [Resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). You must be able to do the `Microsoft.EventGrid/eventSubscriptions/` action. This action is part of the [EventGrid EventSubscription Contributor](../role-based-access-control/built-in-roles.md#eventgrid-eventsubscription-contributor) built-in role. > [!IMPORTANT]-> If you are using this feature in Azure Synapse Analytics, please ensure that your subscription is also registered with Data Factory resource provider, or otherwise you will get an error stating that _the creation of an "Event Subscription" failed_. -+> If you're using this feature in Azure Synapse Analytics, ensure that your subscription is also registered with the Data Factory resource provider. Otherwise, you get a message stating that "the creation of an Event Subscription failed." 
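Before the parameterization discussion that follows, it may help to see the shape of a custom event. The following is a minimal sketch of a hypothetical event published to a custom topic; the `id`, `subject`, `eventType`, and `data` values are illustrative placeholders, not values the service generates:

```json
{
  "id": "b3d9e2f0-0000-0000-0000-000000000000",
  "topic": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/topics/<topic-name>",
  "subject": "factories/orders",
  "eventType": "copycompleted",
  "eventTime": "2024-01-05T10:30:00Z",
  "data": {
    "Department": "Finance",
    "sourceFolder": "orders/2024"
  },
  "dataVersion": "1.0"
}
```

Keys under `data`, such as the hypothetical `Department` key here, are the values that the parameterization format described next (`@triggerBody().event.data._keyName_`) can pass into pipeline parameters.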
-If you combine pipeline parameters and a custom event trigger, you can parse and reference custom `data` payloads in pipeline runs. Because the `data` field in a custom event payload is a free-form, JSON key-value structure, you can control event-driven pipeline runs. +If you combine pipeline parameters and a custom event trigger, you can parse and reference custom `data` payloads in pipeline runs. Because the `data` field in a custom event payload is a freeform, JSON key-value structure, you can control event-driven pipeline runs. > [!IMPORTANT]-> If a key referenced in parameterization is missing in the custom event payload, `trigger run` will fail. You'll get an error that states the expression cannot be evaluated because property `keyName` doesn't exist. In this case, **no** `pipeline run` will be triggered by the event. +> If a key referenced in parameterization is missing in the custom event payload, `trigger run` fails. You get a message that states the expression can't be evaluated because the `keyName` property doesn't exist. In this case, **no** `pipeline run` is triggered by the event. ## Set up a custom topic in Event Grid To use the custom event trigger in Data Factory, you need to *first* set up a [custom topic in Event Grid](../event-grid/custom-topics.md). -Go to Azure Event Grid and create the topic yourself. For more information on how to create the custom topic, see Azure Event Grid [portal tutorials](../event-grid/custom-topics.md#azure-portal-tutorials) and [CLI tutorials](../event-grid/custom-topics.md#azure-cli-tutorials). +Go to Event Grid and create the topic yourself. For more information on how to create the custom topic, see Event Grid [portal tutorials](../event-grid/custom-topics.md#azure-portal-tutorials) and [Azure CLI tutorials](../event-grid/custom-topics.md#azure-cli-tutorials). > [!NOTE]-> The workflow is different from Storage Event Trigger. Here, Data Factory doesn't set up the topic for you. +> The workflow is different from a storage event trigger. Here, Data Factory doesn't set up the topic for you. -Data Factory expects events to follow the [Event Grid event schema](../event-grid/event-schema.md). Make sure event payloads have the following fields: +Data Factory expects events to follow the [Event Grid event schema](../event-grid/event-schema.md). Make sure that event payloads have the following fields: ```json [ Data Factory expects events to follow the [Event Grid event schema](../event-gri ## Use Data Factory to create a custom event trigger -1. Go to Azure Data Factory and sign in. +1. Go to Data Factory and sign in. 1. Switch to the **Edit** tab. Look for the pencil icon. 1. Select **Trigger** on the menu and then select **New/Edit**. -1. On the **Add Triggers** page, select **Choose trigger**, and then select **+New**. +1. On the **Add Triggers** page, select **Choose trigger**, and then select **+ New**. -1. Select **Custom events** for **Type**. +1. Under **Type**, select **Custom events**. - :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-1-creation.png" alt-text="Screenshot of Author page to create a new custom event trigger in Data Factory UI." lightbox="media/how-to-create-custom-event-trigger/custom-event-1-creation-expanded.png"::: + :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-1-creation.png" alt-text="Screenshot that shows creating a new custom event trigger in the Data Factory UI." 
lightbox="media/how-to-create-custom-event-trigger/custom-event-1-creation-expanded.png"::: -1. Select your custom topic from the Azure subscription dropdown or manually enter the event topic scope. +1. Select your custom topic from the Azure subscription dropdown list or manually enter the event topic scope. > [!NOTE]- > To create or modify a custom event trigger in Data Factory, you need to use an Azure account with appropriate role-based access control (Azure RBAC). No additional permission is required. The Data Factory service principal does *not* require special permission to your Event Grid. For more information about access control, see the [Role-based access control](#role-based-access-control) section. + > To create or modify a custom event trigger in Data Factory, you need to use an Azure account with appropriate Azure role-based access control (Azure RBAC). No other permission is required. The Data Factory service principal does *not* require special permission to your Event Grid. For more information about access control, see the [Role-based access control](#role-based-access-control) section. -1. The **Subject begins with** and **Subject ends with** properties allow you to filter for trigger events. Both properties are optional. +1. The `Subject begins with` and `Subject ends with` properties allow you to filter for trigger events. Both properties are optional. -1. Use **+ New** to add **Event Types** to filter on. The list of custom event triggers uses an OR relationship. When a custom event with an `eventType` property that matches one on the list, a pipeline run is triggered. The event type is case insensitive. For example, in the following screenshot, the trigger matches all `copycompleted` or `copysucceeded` events that have a subject that begins with *factories*. +1. Use **+ New** to add **Event types** to filter on. The list of custom event triggers uses an OR relationship. When a custom event with an `eventType` property matches one on the list, a pipeline run is triggered. The event type is case insensitive. For example, in the following screenshot, the trigger matches all `copycompleted` or `copysucceeded` events that have a subject that begins with *factories*. - :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-2-properties.png" alt-text="Screenshot of Edit Trigger page to explain Event Types and Subject filtering in Data Factory UI."::: + :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-2-properties.png" alt-text="Screenshot that shows the Edit trigger page to explain Event types and Subject filtering in the Data Factory UI."::: ++1. A custom event trigger can parse and send a custom `data` payload to your pipeline. You create the pipeline parameters and then fill in the values on the **Parameters** page. Use the format `@triggerBody().event.data._keyName_` to parse the data payload and pass values to the pipeline parameters. ++ For a detailed explanation, see: -1. A custom event trigger can parse and send a custom `data` payload to your pipeline. You create the pipeline parameters, and then fill in the values on the **Parameters** page. Use the format `@triggerBody().event.data._keyName_` to parse the data payload and pass values to the pipeline parameters. 
- - For a detailed explanation, see the following articles: - [Reference trigger metadata in pipelines](how-to-use-trigger-parameterization.md) - [System variables in custom event trigger](control-flow-system-variables.md#custom-event-trigger-scope) - :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-4-trigger-values.png" alt-text="Screenshot of pipeline parameters settings."::: + :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-4-trigger-values.png" alt-text="Screenshot that shows pipeline parameters settings."::: - :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-3-parameters.png" alt-text="Screenshot of the parameters page to reference data payload in custom event."::: + :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-3-parameters.png" alt-text="Screenshot that shows the parameters page to reference data payload in a custom event."::: -1. After you've entered the parameters, select **OK**. +1. After you enter the parameters, select **OK**. ## Advanced filtering -Custom event trigger supports advanced filtering capabilities, similar to [Event Grid Advanced Filtering](../event-grid/event-filtering.md#advanced-filtering). These conditional filters allow pipelines to trigger based upon the _values_ of event payload. For instance, you may have a field in the event payload, named _Department_, and pipeline should only trigger if _Department_ equals to _Finance_. You may also specify complex logic, such as _date_ field in list [1, 2, 3, 4, 5], _month_ field __not__ in list [11, 12], _tag_ field contains any of ['Fiscal Year 2021', 'FiscalYear2021', 'FY2021']. +Custom event triggers support advanced filtering capabilities, similar to [Event Grid advanced filtering](../event-grid/event-filtering.md#advanced-filtering). These conditional filters allow pipelines to trigger based on the _values_ of the event payload. For instance, you might have a field in the event payload named _Department_, and the pipeline should only trigger if _Department_ equals _Finance_. You might also specify complex logic, such as the _date_ field in the list [1, 2, 3, 4, 5], the _month_ field *not* in the list [11, 12], and the _tag_ field containing any of [Fiscal Year 2021, FiscalYear2021, or FY2021]. - :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-5-advanced-filters.png" alt-text="Screenshot of setting advanced filters for customer event trigger"::: + :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-5-advanced-filters.png" alt-text="Screenshot that shows setting advanced filters for a custom event trigger."::: -As of today custom event trigger supports a __subset__ of [advanced filtering operators](../event-grid/event-filtering.md#advanced-filtering) in Event Grid. +As of today, custom event triggers support a *subset* of [advanced filtering operators](../event-grid/event-filtering.md#advanced-filtering) in Event Grid. 
The following filter conditions are supported: -* NumberIn -* NumberNotIn -* NumberLessThan -* NumberGreaterThan -* NumberLessThanOrEquals -* NumberGreaterThanOrEquals -* BoolEquals -* StringContains -* StringBeginsWith -* StringEndsWith -* StringIn -* StringNotIn +* `NumberIn` +* `NumberNotIn` +* `NumberLessThan` +* `NumberGreaterThan` +* `NumberLessThanOrEquals` +* `NumberGreaterThanOrEquals` +* `BoolEquals` +* `StringContains` +* `StringBeginsWith` +* `StringEndsWith` +* `StringIn` +* `StringNotIn` -Select **+New** to add new filter conditions. +Select **+ New** to add new filter conditions. -Additionally, custom event triggers obey the [same limitations as Event Grid](../event-grid/event-filtering.md#limitations), including: +Custom event triggers also obey the [same limitations as Event Grid](../event-grid/event-filtering.md#limitations), such as: -* 5 advanced filters and 25 filter values across all the filters per custom event trigger -* 512 characters per string value -* 5 values for in and not in operators -* keys cannot have `.` (dot) character in them, for example, `john.doe@contoso.com`. Currently, there's no support for escape characters in keys. +* 5 advanced filters and 25 filter values across all the filters per custom event trigger. +* 512 characters per string value. +* 5 values for `in` and `not in` operators. +* Keys can't have the `.` (dot) character in them, for example, `john.doe@contoso.com`. Currently, there's no support for escape characters in keys. * The same key can be used in more than one filter. -Data Factory relies upon the latest _GA_ version of [Event Grid API](../event-grid/whats-new.md). As new API versions get to GA stage, Data Factory will expand its support for more advanced filtering operators. +Data Factory relies on the latest general availability (GA) version of the [Event Grid API](../event-grid/whats-new.md). As new API versions get to the GA stage, Data Factory expands its support for more advanced filtering operators. ## JSON schema -The following table provides an overview of the schema elements that are related to custom event triggers: +The following table provides an overview of the schema elements that are related to custom event triggers. | JSON element | Description | Type | Allowed values | Required | ||-||||-| `scope` | The Azure Resource Manager resource ID of the Event Grid topic. | String | Azure Resource Manager ID | Yes | +| `scope` | The Azure Resource Manager resource ID of the Event Grid topic. | String | Azure Resource Manager ID | Yes. | | `events` | The type of events that cause this trigger to fire. | Array of strings | | Yes, at least one value is expected. |-| `subjectBeginsWith` | The `subject` field must begin with the provided pattern for the trigger to fire. For example, _factories_ only fire the trigger for event subjects that start with *factories*. | String | | No | -| `subjectEndsWith` | The `subject` field must end with the provided pattern for the trigger to fire. | String | | No | -| `advancedFilters` | List of JSON blobs, each specifying a filter condition. Each blob specifies `key`, `operatorType`, and `values`. | List of JSON blob | | No | +| `subjectBeginsWith` | The `subject` field must begin with the provided pattern for the trigger to fire. For example, *factories* only fires the trigger for event subjects that start with *factories*. | String | | No. | +| `subjectEndsWith` | The `subject` field must end with the provided pattern for the trigger to fire. | String | | No. 
| +| `advancedFilters` | List of JSON blobs, each specifying a filter condition. Each blob specifies `key`, `operatorType`, and `values`. | List of JSON blobs | | No. | ## Role-based access control -Azure Data Factory uses Azure role-based access control (RBAC) to prohibit unauthorized access. To function properly, Data Factory requires access to: +Data Factory uses Azure RBAC to prohibit unauthorized access. To function properly, Data Factory requires access to: + - Listen to events. - Subscribe to updates from events. - Trigger pipelines linked to custom events. -To successfully create or update a custom event trigger, you need to sign in to Data Factory with an Azure account that has appropriate access. Otherwise, the operation will fail with an _Access Denied_ error. +To successfully create or update a custom event trigger, you need to sign in to Data Factory with an Azure account that has appropriate access. Otherwise, the operation fails with the message "Access Denied." -Data Factory doesn't require special permission to your Event Grid. You also do *not* need to assign special Azure RBAC role permission to the Data Factory service principal for the operation. +Data Factory doesn't require special permission to your instance of Event Grid. You also do *not* need to assign special Azure RBAC role permission to the Data Factory service principal for the operation. Specifically, you need `Microsoft.EventGrid/EventSubscriptions/Write` permission on `/subscriptions/####/resourceGroups/####/providers/Microsoft.EventGrid/topics/someTopics`. -- When authoring in the data factory (in the development environment for instance), the Azure account signed in needs to have the above permission-- When publishing through [CI/CD](continuous-integration-delivery.md), the account used to publish the ARM template into the testing or production factory needs to have the above permission.+- When you author in the data factory (in the development environment, for instance), the Azure account signed in needs to have the preceding permission. +- When you publish through [continuous integration and continuous delivery](continuous-integration-delivery.md), the account used to publish the Azure Resource Manager template into the testing or production factory needs to have the preceding permission. ## Related content |
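To tie the preceding schema elements together, the following is a minimal sketch of how a custom event trigger definition might look in JSON. The trigger name, scope placeholders, pipeline name, and filter values are hypothetical, and the exact layout the service generates may differ:

```json
{
  "name": "FinanceCopyCompletedTrigger",
  "properties": {
    "type": "CustomEventsTrigger",
    "typeProperties": {
      "scope": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/topics/<topic-name>",
      "events": [ "copycompleted", "copysucceeded" ],
      "subjectBeginsWith": "factories",
      "advancedFilters": [
        {
          "key": "data.Department",
          "operatorType": "StringIn",
          "values": [ "Finance" ]
        }
      ]
    },
    "pipelines": [
      {
        "pipelineReference": {
          "referenceName": "ProcessOrdersPipeline",
          "type": "PipelineReference"
        },
        "parameters": {
          "department": "@triggerBody().event.data.Department"
        }
      }
    ]
  }
}
```

The `advancedFilters` blob follows the `key`, `operatorType`, and `values` shape from the schema table, and `StringIn` is one of the supported operators listed earlier.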
data-factory | How To Create Event Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-event-trigger.md | Title: Create event-based triggers- -description: Learn how to create a trigger in an Azure Data Factory or Azure Synapse Analytics that runs a pipeline in response to an event. ++description: Learn how to create a trigger in Azure Data Factory or Azure Synapse Analytics that runs a pipeline in response to an event. Last updated 05/24/2024 [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] -This article describes the Storage Event Triggers that you can create in your Data Factory or Synapse pipelines. +This article describes the storage event triggers that you can create in your Azure Data Factory or Azure Synapse Analytics pipelines. -Event-driven architecture (EDA) is a common data integration pattern that involves production, detection, consumption, and reaction to events. Data integration scenarios often require customers to trigger pipelines that are triggered from events on a storage account, such as the arrival or deletion of a file in Azure Blob Storage account. Data Factory and Synapse pipelines natively integrate with [Azure Event Grid](https://azure.microsoft.com/services/event-grid/), which lets you trigger pipelines on such events. +Event-driven architecture is a common data integration pattern that involves production, detection, consumption, and reaction to events. Data integration scenarios often require customers to trigger pipelines based on events on an Azure Storage account, such as the arrival or deletion of a file in an Azure Blob Storage account. Data Factory and Azure Synapse Analytics pipelines natively integrate with [Azure Event Grid](https://azure.microsoft.com/services/event-grid/), which lets you trigger pipelines on such events. ## Storage event trigger considerations -There are several things to consider when using storage event triggers: +Consider the following points when you use storage event triggers: -- The integration described in this article depends on [Azure Event Grid](https://azure.microsoft.com/services/event-grid/). Make sure that your subscription is registered with the Event Grid resource provider. For more info, see [Resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). You must be able to do the *Microsoft.EventGrid/eventSubscriptions/** action. This action is part of the EventGrid EventSubscription Contributor built-in role.-- If you're using this feature in Azure Synapse Analytics, ensure that you also register your subscription with the Data Factory resource provider. Otherwise you get an error stating that _the creation of an "Event Subscription" failed_.-- If the blob storage account resides behind a [private endpoint](../storage/common/storage-private-endpoints.md) and blocks public network access, you need to configure network rules to allow communications from blob storage to Azure Event Grid. You can either grant storage access to trusted Azure services, such as Event Grid, following [Storage documentation](../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services), or configure private endpoints for Event Grid that map to VNet address space, following [Event Grid documentation](../event-grid/configure-private-endpoints.md)-- The Storage Event Trigger currently supports only Azure Data Lake Storage Gen2 and General-purpose version 2 storage accounts. 
If you're working with SFTP Storage Events you need to specify the SFTP Data API under the filtering section too. Due to an Azure Event Grid limitation, Azure Data Factory only supports a maximum of 500 storage event triggers per storage account.-- To create a new or modify an existing Storage Event Trigger, the Azure account used to log into the service and publish the storage event trigger must have appropriate role based access control (Azure RBAC) permission on the storage account. No other permissions are required: Service Principal for the Azure Data Factory and Azure Synapse does _not_ need special permission to either the Storage account or Event Grid. For more information about access control, see [Role based access control](#role-based-access-control) section.-- If you applied an ARM lock to your Storage Account, it might impact the blob trigger's ability to create or delete blobs. A **ReadOnly** lock prevents both creation and deletion, while a **DoNotDelete** lock prevents deletion. Ensure you account for these restrictions to avoid any issues with your triggers.-- File arrival triggers are not recommended as a triggering mechanism from data flow sinks. Data flows perform a number of file renaming and partition file shuffling tasks in the target folder that can inadvertently trigger a file arrival event before the complete processing of your data.+- The integration described in this article depends on [Azure Event Grid](https://azure.microsoft.com/services/event-grid/). Make sure that your subscription is registered with the Event Grid resource provider. For more information, see [Resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). You must be able to do the `Microsoft.EventGrid/eventSubscriptions/` action. This action is part of the `EventGrid EventSubscription Contributor` built-in role. +- If you're using this feature in Azure Synapse Analytics, ensure that you also register your subscription with the Data Factory resource provider. Otherwise, you get a message stating that "the creation of an Event Subscription failed." +- If the Blob Storage account resides behind a [private endpoint](../storage/common/storage-private-endpoints.md) and blocks public network access, you need to configure network rules to allow communications from Blob Storage to Event Grid. You can either grant storage access to trusted Azure services, such as Event Grid, following [Storage documentation](../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services), or configure private endpoints for Event Grid that map to a virtual network address space, following [Event Grid documentation](../event-grid/configure-private-endpoints.md). +- The storage event trigger currently supports only Azure Data Lake Storage Gen2 and General-purpose version 2 storage accounts. If you're working with Secure File Transfer Protocol (SFTP) storage events, you need to specify the SFTP Data API under the filtering section too. Because of an Event Grid limitation, Data Factory only supports a maximum of 500 storage event triggers per storage account. +- To create a new storage event trigger or modify an existing one, the Azure account you use to sign in to the service and publish the storage event trigger must have appropriate role-based access control (Azure RBAC) permission on the storage account. No other permissions are required. 
The service principal for Azure Data Factory and Azure Synapse Analytics does _not_ need special permission to either the storage account or Event Grid. For more information about access control, see the [Role-based access control](#role-based-access-control) section. +- If you applied an Azure Resource Manager lock to your storage account, it might affect the blob trigger's ability to create or delete blobs. A `ReadOnly` lock prevents both creation and deletion, while a `DoNotDelete` lock prevents deletion. Ensure that you account for these restrictions to avoid any issues with your triggers. +- We don't recommend file arrival triggers as a triggering mechanism from data flow sinks. Data flows perform a number of file renaming and partition file shuffling tasks in the target folder that can inadvertently trigger a file arrival event before the complete processing of your data. -## Create a trigger with UI +## Create a trigger with the UI -This section shows you how to create a storage event trigger within the Azure Data Factory and Synapse pipeline User Interface. +This section shows you how to create a storage event trigger within the Azure Data Factory and Azure Synapse Analytics pipeline user interface (UI). -1. Switch to the **Edit** tab in Data Factory, or the **Integrate** tab in Azure Synapse. +1. Switch to the **Edit** tab in Data Factory or the **Integrate** tab in Azure Synapse Analytics. -1. Select **Trigger** on the menu, then select **New/Edit**. +1. On the menu, select **Trigger**, and then select **New/Edit**. -1. On the **Add Triggers** page, select **Choose trigger...**, then select **+New**. +1. On the **Add Triggers** page, select **Choose trigger**, and then select **+ New**. -1. Select trigger type **Storage Event** +1. Select the trigger type **Storage events**. # [Azure Data Factory](#tab/data-factory)- :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-1.png" lightbox="media/how-to-create-event-trigger/event-based-trigger-image-1.png" alt-text="Screenshot of Author page to create a new storage event trigger in Data Factory UI." ::: - # [Azure Synapse](#tab/synapse-analytics) - :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-1-synapse.png" lightbox="media/how-to-create-event-trigger/event-based-trigger-image-1-synapse.png" alt-text="Screenshot of Author page to create a new storage event trigger in the Azure Synapse UI."::: + :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-1.png" lightbox="media/how-to-create-event-trigger/event-based-trigger-image-1.png" alt-text="Screenshot that shows creating a new storage event trigger in the Data Factory UI." ::: + # [Azure Synapse Analytics](#tab/synapse-analytics) + :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-1-synapse.png" lightbox="media/how-to-create-event-trigger/event-based-trigger-image-1-synapse.png" alt-text="Screenshot that shows creating a new storage event trigger in the Azure Synapse Analytics UI."::: -1. Select your storage account from the Azure subscription dropdown or manually using its Storage account resource ID. Choose which container you wish the events to occur on. Container selection is required, but be mindful that selecting all containers can lead to a large number of events. +1. Select your storage account from the Azure subscription dropdown list or manually by using its storage account resource ID. 
Choose the container on which you want the events to occur. Container selection is required, but selecting all containers can lead to a large number of events. -1. The **Blob path begins with** and **Blob path ends with** properties allow you to specify the containers, folders, and blob names for which you want to receive events. Your storage event trigger requires at least one of these properties to be defined. You can use variety of patterns for both **Blob path begins with** and **Blob path ends with** properties, as shown in the examples later in this article. +1. The `Blob path begins with` and `Blob path ends with` properties allow you to specify the containers, folders, and blob names for which you want to receive events. Your storage event trigger requires at least one of these properties to be defined. You can use various patterns for both `Blob path begins with` and `Blob path ends with` properties, as shown in the examples later in this article. - * **Blob path begins with:** The blob path must start with a folder path. Valid values include `2018/` and `2018/april/shoes.csv`. This field can't be selected if a container isn't selected. - * **Blob path ends with:** The blob path must end with a file name or extension. Valid values include `shoes.csv` and `.csv`. Container and folder names, when specified, they must be separated by a `/blobs/` segment. For example, a container named 'orders' can have a value of `/orders/blobs/2018/april/shoes.csv`. To specify a folder in any container, omit the leading '/' character. For example, `april/shoes.csv` triggers an event on any file named `shoes.csv` in folder a called 'april' in any container. - * Note that Blob path **begins with** and **ends with** are the only pattern matching allowed in Storage Event Trigger. Other types of wildcard matching aren't supported for the trigger type. + * `Blob path begins with`: The blob path must start with a folder path. Valid values include `2018/` and `2018/april/shoes.csv`. This field can't be selected if a container isn't selected. + * `Blob path ends with`: The blob path must end with a file name or extension. Valid values include `shoes.csv` and `.csv`. Container and folder names, when specified, must be separated by a `/blobs/` segment. For example, a container named `orders` can have a value of `/orders/blobs/2018/april/shoes.csv`. To specify a folder in any container, omit the leading `/` character. For example, `april/shoes.csv` triggers an event on any file named `shoes.csv` in a folder called `april` in any container. + + Note that `Blob path begins with` and `Blob path ends with` are the only pattern matching allowed in a storage event trigger. Other types of wildcard matching aren't supported for the trigger type. -1. Select whether your trigger responds to a **Blob created** event, **Blob deleted** event, or both. In your specified storage location, each event triggers the Data Factory and Synapse pipelines associated with the trigger. +1. Select whether your trigger responds to a **Blob created** event, a **Blob deleted** event, or both. In your specified storage location, each event triggers the Data Factory and Azure Synapse Analytics pipelines associated with the trigger. 
- :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-2.png" alt-text="Screenshot of storage event trigger creation page."::: + :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-2.png" alt-text="Screenshot that shows a storage event trigger creation page."::: 1. Select whether or not your trigger ignores blobs with zero bytes. -1. After you configure your trigger, click on **Next: Data preview**. This screen shows the existing blobs matched by your storage event trigger configuration. Make sure you have specific filters. Configuring filters that are too broad can match a large number of files created/deleted and may significantly impact your cost. Once your filter conditions are verified, click **Finish**. +1. After you configure your trigger, select **Next: Data preview**. This screen shows the existing blobs matched by your storage event trigger configuration. Make sure you have specific filters. Configuring filters that are too broad can match a large number of files that were created or deleted and might significantly affect your cost. After your filter conditions are verified, select **Finish**. - :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-3.png" alt-text="Screenshot of storage event trigger preview page."::: + :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-3.png" alt-text="Screenshot that shows the storage event trigger preview page."::: -1. To attach a pipeline to this trigger, go to the pipeline canvas and click **Trigger** and select **New/Edit**. When the side nav appears, click on the **Choose trigger...** dropdown and select the trigger you created. Click **Next: Data preview** to confirm the configuration is correct and then **Next** to validate the Data preview is correct. +1. To attach a pipeline to this trigger, go to the pipeline canvas and select **Trigger** > **New/Edit**. When the side pane appears, select the **Choose trigger** dropdown list and select the trigger you created. Select **Next: Data preview** to confirm that the configuration is correct. Then select **Next** to validate that the data preview is correct. -1. If your pipeline has parameters, you can specify them on the trigger runs parameter side nav. The storage event trigger captures the folder path and file name of the blob into the properties `@triggerBody().folderPath` and `@triggerBody().fileName`. To use the values of these properties in a pipeline, you must map the properties to pipeline parameters. After mapping the properties to parameters, you can access the values captured by the trigger through the `@pipeline().parameters.parameterName` expression throughout the pipeline. For detailed explanation, see [Reference Trigger Metadata in Pipelines](how-to-use-trigger-parameterization.md) +1. If your pipeline has parameters, you can specify them on the **Trigger Run Parameters** side pane. The storage event trigger captures the folder path and file name of the blob into the properties `@triggerBody().folderPath` and `@triggerBody().fileName`. To use the values of these properties in a pipeline, you must map the properties to pipeline parameters. After you map the properties to parameters, you can access the values captured by the trigger through the `@pipeline().parameters.parameterName` expression throughout the pipeline. 
For a detailed explanation, see [Reference trigger metadata in pipelines](how-to-use-trigger-parameterization.md). - :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-4.png" alt-text="Screenshot of storage event trigger mapping properties to pipeline parameters."::: + :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-4.png" alt-text="Screenshot that shows storage event trigger mapping properties to pipeline parameters."::: - In the preceding example, the trigger is configured to fire when a blob path ending in .csv is created in the folder _event-testing_ in the container _sample-data_. The **folderPath** and **fileName** properties capture the location of the new blob. For example, when MoviesDB.csv is added to the path sample-data/event-testing, `@triggerBody().folderPath` has a value of `sample-data/event-testing` and `@triggerBody().fileName` has a value of `moviesDB.csv`. These values are mapped, in the example, to the pipeline parameters `sourceFolder` and `sourceFile`, which can be used throughout the pipeline as `@pipeline().parameters.sourceFolder` and `@pipeline().parameters.sourceFile` respectively. + In the preceding example, the trigger is configured to fire when a blob path ending in .csv is created in the folder _event-testing_ in the container _sample-data_. The `folderPath` and `fileName` properties capture the location of the new blob. For example, when MoviesDB.csv is added to the path _sample-data/event-testing_, `@triggerBody().folderPath` has a value of `sample-data/event-testing` and `@triggerBody().fileName` has a value of `MoviesDB.csv`. These values are mapped, in the example, to the pipeline parameters `sourceFolder` and `sourceFile`, which can be used throughout the pipeline as `@pipeline().parameters.sourceFolder` and `@pipeline().parameters.sourceFile`, respectively. -1. Click **Finish** once you're done. +1. After you're finished, select **Finish**. ## JSON schema -The following table provides an overview of the schema elements that are related to storage event triggers: +The following table provides an overview of the schema elements that are related to storage event triggers. -| **JSON Element** | **Description** | **Type** | **Allowed Values** | **Required** | +| JSON element | Description | Type | Allowed values | Required | | - | | -- | | |-| **scope** | The Azure Resource Manager resource ID of the Storage Account. | String | Azure Resource Manager ID | Yes | -| **events** | The type of events that cause this trigger to fire. | Array | Microsoft.Storage.BlobCreated, Microsoft.Storage.BlobDeleted | Yes, any combination of these values. | -| **blobPathBeginsWith** | The blob path must begin with the pattern provided for the trigger to fire. For example, `/records/blobs/december/` only fires the trigger for blobs in the `december` folder under the `records` container. | String | | Provide a value for at least one of these properties: `blobPathBeginsWith` or `blobPathEndsWith`. | -| **blobPathEndsWith** | The blob path must end with the pattern provided for the trigger to fire. For example, `december/boxes.csv` only fires the trigger for blobs named `boxes` in a `december` folder. | String | | Provide a value for at least one of these properties: `blobPathBeginsWith` or `blobPathEndsWith`. | -| **ignoreEmptyBlobs** | Whether or not zero-byte blobs triggers a pipeline run. By default, this is set to true. 
| Boolean | true or false | No | +| `scope` | The Azure Resource Manager resource ID of the storage account. | String | Azure Resource Manager ID | Yes. | +| `events` | The type of events that cause this trigger to fire. | Array | `Microsoft.Storage.BlobCreated`, `Microsoft.Storage.BlobDeleted` | Yes, any combination of these values. | +| `blobPathBeginsWith` | The blob path must begin with the pattern provided for the trigger to fire. For example, `/records/blobs/december/` only fires the trigger for blobs in the `december` folder under the `records` container. | String | | Provide a value for at least one of these properties: `blobPathBeginsWith` or `blobPathEndsWith`. | +| `blobPathEndsWith` | The blob path must end with the pattern provided for the trigger to fire. For example, `december/boxes.csv` only fires the trigger for blobs named `boxes` in a `december` folder. | String | | Provide a value for at least one of these properties: `blobPathBeginsWith` or `blobPathEndsWith`. | +| `ignoreEmptyBlobs` | Whether or not zero-byte blobs trigger a pipeline run. By default, this is set to `true`. | Boolean | true or false | No. | ## Examples of storage event triggers This section provides examples of storage event trigger settings. > [!IMPORTANT]-> You have to include the `/blobs/` segment of the path, as shown in the following examples, whenever you specify container and folder, container and file, or container, folder, and file. For **blobPathBeginsWith**, the UI automatically adds `/blobs/` between the folder and container name in the trigger JSON. +> You have to include the `/blobs/` segment of the path, as shown in the following examples, whenever you specify container and folder, container and file, or container, folder, and file. For `blobPathBeginsWith`, the UI automatically adds `/blobs/` between the folder and container name in the trigger JSON. | Property | Example | Description | ||||-| **Blob path begins with** | `/containername/` | Receives events for any blob in the container. | -| **Blob path begins with** | `/containername/blobs/foldername/` | Receives events for any blobs in the `containername` container and `foldername` folder. | -| **Blob path begins with** | `/containername/blobs/foldername/subfoldername/` | You can also reference a subfolder. | -| **Blob path begins with** | `/containername/blobs/foldername/file.txt` | Receives events for a blob named `file.txt` in the `foldername` folder under the `containername` container. | -| **Blob path ends with** | `file.txt` | Receives events for a blob named `file.txt` in any path. | -| **Blob path ends with** | `/containername/blobs/file.txt` | Receives events for a blob named `file.txt` under container `containername`. | -| **Blob path ends with** | `foldername/file.txt` | Receives events for a blob named `file.txt` in `foldername` folder under any container. | +| `Blob path begins with` | `/containername/` | Receives events for any blob in the container. | +| `Blob path begins with` | `/containername/blobs/foldername/` | Receives events for any blobs in the `containername` container and `foldername` folder. | +| `Blob path begins with` | `/containername/blobs/foldername/subfoldername/` | You can also reference a subfolder. | +| `Blob path begins with` | `/containername/blobs/foldername/file.txt` | Receives events for a blob named `file.txt` in the `foldername` folder under the `containername` container. | +| `Blob path ends with` | `file.txt` | Receives events for a blob named `file.txt` in any path. 
| +| `Blob path ends with` | `/containername/blobs/file.txt` | Receives events for a blob named `file.txt` under the container `containername`. | +| `Blob path ends with` | `foldername/file.txt` | Receives events for a blob named `file.txt` in the `foldername` folder under any container. | ## Role-based access control -Azure Data Factory and Synapse pipelines use Azure role-based access control (Azure RBAC) to ensure that unauthorized access to listen to, subscribe to updates from, and trigger pipelines linked to blob events, are strictly prohibited. +Data Factory and Azure Synapse Analytics pipelines use Azure role-based access control (Azure RBAC) to ensure that unauthorized access to listen to, subscribe to updates from, and trigger pipelines linked to blob events are strictly prohibited. -* To successfully create a new or update an existing Storage Event Trigger, the Azure account signed into the service needs to have appropriate access to the relevant storage account. Otherwise, the operation fails with _Access Denied_. -* Azure Data Factory and Azure Synapse need no special permission to your Event Grid, and you do _not_ need to assign special RBAC permission to the Data Factory or Azure Synapse service principal for the operation. +* To successfully create a new storage event trigger or update an existing one, the Azure account signed in to the service needs to have appropriate access to the relevant storage account. Otherwise, the operation fails with the message "Access Denied." +* Data Factory and Azure Synapse Analytics need no special permission to your Event Grid instance, and you do *not* need to assign special RBAC permission to the Data Factory or Azure Synapse Analytics service principal for the operation. -Any of following RBAC settings works for storage event trigger: +Any of the following RBAC settings work for storage event triggers: * Owner role to the storage account * Contributor role to the storage account-* _Microsoft.EventGrid/EventSubscriptions/Write_ permission to storage account _/subscriptions/####/resourceGroups/####/providers/Microsoft.Storage/storageAccounts/storageAccountName_ +* `Microsoft.EventGrid/EventSubscriptions/Write` permission to the storage account `/subscriptions/####/resourceGroups/####/providers/Microsoft.Storage/storageAccounts/storageAccountName` +Specifically: -Specifically, +- When you author in the data factory (in the development environment for instance), the Azure account signed in needs to have the preceding permission. +- When you publish through [continuous integration and continuous delivery](continuous-integration-delivery.md), the account used to publish the Azure Resource Manager template into the testing or production factory needs to have the preceding permission. -- When you author in the data factory (in the development environment for instance), the Azure account signed in needs to have the above permission-- When you publish through [CI/CD](continuous-integration-delivery.md), the account used to publish the ARM template into the testing or production factory needs to have the above permission.+To understand how the service delivers the two promises, let's take a step back and peek behind the scenes. Here are the high-level workflows for integration between Data Factory/Azure Synapse Analytics, Storage, and Event Grid. -In order to understand how the service delivers the two promises, let's take back a step and take a peek behind the scenes. 
Here are the high-level work flows for integration between Azure Data Factory/Azure Synapse, Storage, and Event Grid. +### Create a new storage event trigger -### Create a new Storage Event Trigger +This high-level workflow describes how Data Factory interacts with Event Grid to create a storage event trigger. The data flow is the same in Azure Synapse Analytics, with Azure Synapse Analytics pipelines taking the role of the data factory in the following diagram. -This high-level work flow describes how Azure Data Factory interacts with Event Grid to create a Storage Event Trigger. The data flow is the same in Azure Synapse, with Synapse pipelines taking the role of the Data Factory in the following diagram. +Two noticeable callouts from the workflows: -Two noticeable call outs from the work flows: --* Azure Data Factory and Azure Synapse make _no_ direct contact with Storage account. Request to create a subscription is instead relayed and processed by Event Grid. Hence, the service needs no permission to Storage account for this step. --* Access control and permission checking happen within the service. Before the service sends a request to subscribe to storage event, it checks the permission for the user. More specifically, it checks whether the Azure account signed in and attempting to create the Storage Event trigger has appropriate access to the relevant storage account. If the permission check fails, trigger creation also fails. +* Data Factory and Azure Synapse Analytics make _no_ direct contact with the storage account. The request to create a subscription is instead relayed and processed by Event Grid. The service needs no permission to access the storage account for this step. +* Access control and permission checking happen within the service. Before the service sends a request to subscribe to a storage event, it checks the permission for the user. More specifically, it checks whether the Azure account that's signed in and attempting to create the storage event trigger has appropriate access to the relevant storage account. If the permission check fails, trigger creation also fails. ### Storage event trigger pipeline run -This high-level work flow describes how storage event trigger pipelines run through Event Grid. For Azure Synapse the data flow is the same, with Synapse pipelines taking the role of the Data Factory in the diagram below. +This high-level workflow describes how storage event trigger pipelines run through Event Grid. For Azure Synapse Analytics, the data flow is the same, with Azure Synapse Analytics pipelines taking the role of Data Factory in the following diagram. -There are three noticeable call outs in the workflow related to Event triggering pipelines within the service: +Three noticeable callouts in the workflow are related to event triggering pipelines within the service: -* Event Grid uses a Push model that relays the message as soon as possible when storage drops the message into the system. This is different from messaging system, such as Kafka where a Pull system is used. -* Event Trigger serves as an active listener to the incoming message and it properly triggers the associated pipeline. -* Storage Event Trigger itself makes no direct contact with Storage account +* Event Grid uses a Push model that relays the message as soon as possible when storage drops the message into the system. This approach is different from a messaging system, such as Kafka, where a Pull system is used. 
+* The event trigger serves as an active listener to the incoming message and it properly triggers the associated pipeline. +* The storage event trigger itself makes no direct contact with the storage account. - * That said, if you have a Copy or other activity inside the pipeline to process the data in Storage account, the service makes direct contact with Storage, using the credentials stored in the Linked Service. Ensure that Linked Service is set up appropriately - * However, if you make no reference to the Storage account in the pipeline, you don't need to grant permission to the service to access Storage account + * If you have a Copy activity or another activity inside the pipeline to process the data in the storage account, the service makes direct contact with the storage account by using the credentials stored in the linked service. Ensure that the linked service is set up appropriately. + * If you make no reference to the storage account in the pipeline, you don't need to grant permission to the service to access the storage account. ## Related content -* For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json). -* Learn how to reference trigger metadata in pipeline, see [Reference Trigger Metadata in Pipeline Runs](how-to-use-trigger-parameterization.md) +* For more information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json). +* To reference trigger metadata in a pipeline, see [Reference trigger metadata in pipeline runs](how-to-use-trigger-parameterization.md). |
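To illustrate the narrowest RBAC option that the entry above describes, the following is a minimal sketch of a custom role definition that grants only the `Microsoft.EventGrid/EventSubscriptions/Write` action on a storage account. The role name, subscription ID, resource group, and storage account name are placeholder assumptions, not values from the commit above; you could register such a role with `az role definition create --role-definition @<file>.json` and then assign it to the authoring account.

```json
{
  "Name": "Storage Event Subscription Writer (example)",
  "IsCustom": true,
  "Description": "Grants only the Event Grid subscription write permission needed to create storage event triggers.",
  "Actions": [
    "Microsoft.EventGrid/EventSubscriptions/Write"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.Storage/storageAccounts/examplestorage"
  ]
}
```

Compared to assigning Owner or Contributor on the storage account, a scoped custom role like this follows least privilege while still allowing trigger creation.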
data-factory | How To Create Tumbling Window Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-tumbling-window-trigger.md | Last updated 01/05/2024 This article provides steps to create, start, and monitor a tumbling window trigger. For general information about triggers and the supported types, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md). -Tumbling window triggers are a type of trigger that fires at a periodic time interval from a specified start time, while retaining state. Tumbling windows are a series of fixed-sized, non-overlapping, and contiguous time intervals. A tumbling window trigger has a one-to-one relationship with a pipeline and can only reference a singular pipeline. Tumbling window trigger is a more heavy weight alternative for schedule trigger offering a suite of features for complex scenarios([dependency on other tumbling window triggers](#tumbling-window-trigger-dependency), [rerunning a failed job](tumbling-window-trigger-dependency.md#monitor-dependencies) and [set user retry for pipelines](#user-assigned-retries-of-pipelines)). To further understand the difference between schedule trigger and tumbling window trigger, please visit [here](concepts-pipeline-execution-triggers.md#trigger-type-comparison). +Tumbling window triggers are a type of trigger that fires at a periodic time interval from a specified start time, while retaining state. Tumbling windows are a series of fixed-sized, nonoverlapping, and contiguous time intervals. A tumbling window trigger has a one-to-one relationship with a pipeline and can only reference a singular pipeline. -## Azure Data Factory and Synapse portal experience +A tumbling window trigger is a more heavyweight alternative to a schedule trigger. It offers a suite of features for complex scenarios like [dependency on other tumbling window triggers](#tumbling-window-trigger-dependency), [rerunning a failed job](tumbling-window-trigger-dependency.md#monitor-dependencies), and [setting user retry for pipelines](#user-assigned-retries-of-pipelines). To further understand the difference between a schedule trigger and a tumbling window trigger, see [Trigger type comparison](concepts-pipeline-execution-triggers.md#trigger-type-comparison). -1. To create a tumbling window trigger in the Azure portal, select the **Triggers** tab, and then select **New**. -1. After the trigger configuration pane opens, select **Tumbling Window**, and then define your tumbling window trigger properties. -1. When you're done, select **Save**. +## Azure Data Factory and Azure Synapse portal experience ++1. To create a tumbling window trigger in the Azure portal, select the **Triggers** tab, and then select **New**. +1. After the trigger configuration pane opens, select **Tumbling window**. Then define your tumbling window trigger properties. +1. When you're finished, select **Save**. # [Azure Data Factory](#tab/data-factory) # [Azure Synapse](#tab/synapse-analytics) A tumbling window has the following trigger type properties: } ``` -The following table provides a high-level overview of the major JSON elements that are related to recurrence and scheduling of a tumbling window trigger: +The following table provides a high-level overview of the major JSON elements that are related to recurrence and scheduling of a tumbling window trigger. | JSON element | Description | Type | Allowed values | Required | |: |: |: |: |: |-| **type** | The type of the trigger.
The type is the fixed value "TumblingWindowTrigger". | String | "TumblingWindowTrigger" | Yes | -| **runtimeState** | The current state of the trigger run time.<br/>**Note**: This element is \<readOnly>. | String | "Started," "Stopped," "Disabled" | Yes | -| **frequency** | A string that represents the frequency unit (minutes, hours, or months) at which the trigger recurs. If the **startTime** date values are more granular than the **frequency** value, the **startTime** dates are considered when the window boundaries are computed. For example, if the **frequency** value is hourly and the **startTime** value is 2017-09-01T10:10:10Z, the first window is (2017-09-01T10:10:10Z, 2017-09-01T11:10:10Z). | String | "Minute," "Hour", "Month" | Yes | -| **interval** | A positive integer that denotes the interval for the **frequency** value, which determines how often the trigger runs. For example, if the **interval** is 3 and the **frequency** is "hour," the trigger recurs every 3 hours. <br/>**Note**: The minimum window interval is 5 minutes. | Integer | A positive integer. | Yes | -| **startTime**| The first occurrence, which can be in the past. The first trigger interval is (**startTime**, **startTime** + **interval**). | DateTime | A DateTime value. | Yes | -| **endTime**| The last occurrence, which can be in the past. | DateTime | A DateTime value. | Yes | -| **delay** | The amount of time to delay the start of data processing for the window. The pipeline run is started after the expected execution time plus the amount of **delay**. The **delay** defines how long the trigger waits past the due time before triggering a new run. The **delay** doesnΓÇÖt alter the window **startTime**. For example, a **delay** value of 00:10:00 implies a delay of 10 minutes. | Timespan<br/>(hh:mm:ss) | A timespan value where the default is 00:00:00. | No | -| **maxConcurrency** | The number of simultaneous trigger runs that are fired for windows that are ready. For example, to back fill hourly runs for yesterday results in 24 windows. If **maxConcurrency** = 10, trigger events are fired only for the first 10 windows (00:00-01:00 - 09:00-10:00). After the first 10 triggered pipeline runs are complete, trigger runs are fired for the next 10 windows (10:00-11:00 - 19:00-20:00). Continuing with this example of **maxConcurrency** = 10, if there are 10 windows ready, there are 10 total pipeline runs. If there's only 1 window ready, there's only 1 pipeline run. | Integer | An integer between 1 and 50. | Yes | -| **retryPolicy: Count** | The number of retries before the pipeline run is marked as "Failed." | Integer | An integer, where the default is 0 (no retries). | No | -| **retryPolicy: intervalInSeconds** | The delay between retry attempts specified in seconds. | Integer | The number of seconds, where the default is 30. The minimum value is 30. | No | -| **dependsOn: type** | The type of TumblingWindowTriggerReference. Required if a dependency is set. | String | "TumblingWindowTriggerDependencyReference", "SelfDependencyTumblingWindowTriggerReference" | No | -| **dependsOn: size** | The size of the dependency tumbling window. | Timespan<br/>(hh:mm:ss) | A positive timespan value where the default is the window size of the child trigger | No | -| **dependsOn: offset** | The offset of the dependency trigger. | Timespan<br/>(hh:mm:ss) | A timespan value that must be negative in a self-dependency. If no value specified, the window is the same as the trigger itself. 
| Self-Dependency: Yes<br/>Other: No |
+| `type` | The type of the trigger. The `type` is the fixed value `TumblingWindowTrigger`. | `String` | `TumblingWindowTrigger` | Yes |
+| `runtimeState` | The current state of the trigger run time.<br/>This element is \<readOnly>. | `String` | `Started`, `Stopped`, `Disabled` | Yes |
+| `frequency` | A string that represents the frequency unit (minutes, hours, or months) at which the trigger recurs. If the `startTime` date values are more granular than the `frequency` value, the `startTime` dates are considered when the window boundaries are computed. For example, if the `frequency` value is `Hour` and the `startTime` value is 2017-09-01T10:10:10Z, the first window is (2017-09-01T10:10:10Z, 2017-09-01T11:10:10Z). | `String` | `Minute`, `Hour`, `Month` | Yes |
+| `interval` | A positive integer that denotes the interval for the `frequency` value, which determines how often the trigger runs. For example, if the `interval` is `3` and the `frequency` is `Hour`, the trigger recurs every 3 hours. <br/>The minimum window interval is 5 minutes. | `Integer` | A positive integer. | Yes |
+| `startTime`| The first occurrence, which can be in the past. The first trigger interval is (`startTime`, `startTime + interval`). | `DateTime` | A `DateTime` value. | Yes |
+| `endTime`| The last occurrence, which can be in the past. | `DateTime` | A `DateTime` value. | Yes |
+| `delay` | The amount of time to delay the start of data processing for the window. The pipeline run is started after the expected execution time plus the amount of delay. The delay defines how long the trigger waits past the due time before triggering a new run. The delay doesn't alter the window `startTime`. For example, a `delay` value of 00:10:00 implies a delay of 10 minutes. | `Timespan`<br/>(hh:mm:ss) | A `timespan` value where the default is `00:00:00`. | No |
+| `maxConcurrency` | The number of simultaneous trigger runs that are fired for windows that are ready. For example, backfilling hourly runs for yesterday results in 24 windows. If `maxConcurrency` = 10, trigger events are fired only for the first 10 windows (00:00-01:00 - 09:00-10:00). After the first 10 triggered pipeline runs are complete, trigger runs are fired for the next 10 windows (10:00-11:00 - 19:00-20:00). Continuing with this example of `maxConcurrency` = 10, if there are 10 windows ready, there are 10 total pipeline runs. If only one window is ready, only one pipeline runs. | `Integer` | An integer between 1 and 50. | Yes |
+| `retryPolicy: Count` | The number of retries before the pipeline run is marked as `Failed`. | `Integer` | An integer, where the default is 0 (no retries). | No |
+| `retryPolicy: intervalInSeconds` | The delay between retry attempts specified in seconds. | `Integer` | The number of seconds, where the default is 30. The minimum value is `30`. | No |
+| `dependsOn: type` | The type of `TumblingWindowTriggerReference`. Required if a dependency is set. | `String` | `TumblingWindowTriggerDependencyReference`, `SelfDependencyTumblingWindowTriggerReference` | No |
+| `dependsOn: size` | The size of the dependency tumbling window. | `Timespan`<br/>(hh:mm:ss) | A positive `timespan` value where the default is the window size of the child trigger. | No |
+| `dependsOn: offset` | The offset of the dependency trigger. | `Timespan`<br/>(hh:mm:ss) | A `timespan` value that must be negative in a self-dependency. If no value is specified, the window is the same as the trigger itself.
| Self-Dependency: Yes<br/>Other: No | > [!NOTE]-> After a tumbling window trigger is published, **interval** and **frequency** can't be edited. +> After a tumbling window trigger is published, the `interval` and `frequency` values can't be edited. ### WindowStart and WindowEnd system variables -You can use the **WindowStart** and **WindowEnd** system variables of the tumbling window trigger in your **pipeline** definition (that is, for part of a query). Pass the system variables as parameters to your pipeline in the **trigger** definition. The following example shows you how to pass these variables as parameters: +You can use the `WindowStart` and `WindowEnd` system variables of the tumbling window trigger in your **pipeline** definition (that is, for part of a query). Pass the system variables as parameters to your pipeline in the **trigger** definition. The following example shows you how to pass these variables as parameters. ```json { You can use the **WindowStart** and **WindowEnd** system variables of the tumbli } ``` -To use the **WindowStart** and **WindowEnd** system variable values in the pipeline definition, use your "MyWindowStart" and "MyWindowEnd" parameters, accordingly. +To use the `WindowStart` and `WindowEnd` system variable values in the pipeline definition, use your `MyWindowStart` and `MyWindowEnd` parameters, accordingly. ### Execution order of windows in a backfill scenario -If the startTime of trigger is in the past, then based on this formula, M=(CurrentTime- TriggerStartTime)/TumblingWindowSize, the trigger will generate {M} backfill(past) runs in parallel, honoring trigger concurrency, before executing the future runs. The order of execution for windows is deterministic, from oldest to newest intervals. Currently, this behavior can't be modified. +If the trigger `startTime` is in the past, then based on the formula M = (CurrentTime - TriggerStartTime) / TumblingWindowSize, the trigger generates M backfill (past) runs in parallel, honoring trigger concurrency, before executing the future runs. The order of execution for windows is deterministic, from oldest to newest intervals. Currently, this behavior can't be modified. > [!NOTE]-> Be aware that in this scenario, all runs from the selected startTime will be run before executing future runs. If you need to backfill a long period of time, doing an intial historical load is recommended. +> In this scenario, all runs from the selected `startTime` are run before executing future runs. If you need to backfill a long period of time, we recommend doing an initial historical load. ### Existing TriggerResource elements -The following points apply to update of existing **TriggerResource** elements: +The following points apply to updating existing `TriggerResource` elements: -* The value for the **frequency** element (or window size) of the trigger along with **interval** element cannot be changed once the trigger is created. This is required for proper functioning of triggerRun reruns and dependency evaluations -* If the value for the **endTime** element of the trigger changes (added or updated), the state of the windows that are already processed is *not* reset. The trigger honors the new **endTime** value. If the new **endTime** value is before the windows that are already executed, the trigger stops. Otherwise, the trigger stops when the new **endTime** value is encountered.
+* The value for the `frequency` element (or window size) of the trigger along with the `interval` element can't be changed after the trigger is created. This restriction is required for proper functioning of `triggerRun` reruns and dependency evaluations. +* If the value for the `endTime` element of the trigger changes (by adding or updating), the state of the windows that are already processed is *not* reset. The trigger honors the new `endTime` value. If the new `endTime` value is before the windows that are already executed, the trigger stops. Otherwise, the trigger stops when the new `endTime` value is encountered. -### User assigned retries of pipelines +### User-assigned retries of pipelines -In case of pipeline failures, tumbling window trigger can retry the execution of the referenced pipeline automatically, using the same input parameters, without the user intervention. This can be specified using the property "retryPolicy" in the trigger definition. +In the case of pipeline failures, a tumbling window trigger can retry the execution of the referenced pipeline automatically by using the same input parameters, without user intervention. Use the `retryPolicy` property in the trigger definition to specify this action. ### Tumbling window trigger dependency If you want to make sure that a tumbling window trigger is executed only after the successful execution of another tumbling window trigger in the data factory, [create a tumbling window trigger dependency](tumbling-window-trigger-dependency.md). -### Cancel tumbling window run +### Cancel a tumbling window run -You can cancel runs for a tumbling window trigger, if the specific window is in _Waiting_, _Waiting on Dependency_, or _Running_ state +You can cancel runs for a tumbling window trigger if the specific window is in a **Waiting**, **Waiting on dependency**, or **Running** state: -* If the window is in **Running** state, cancel the associated _Pipeline Run_, and the trigger run will be marked as _Canceled_ afterwards -* If the window is in **Waiting** or **Waiting on Dependency** state, you can cancel the window from Monitoring: +* If the window is in a **Running** state, cancel the associated **Pipeline Run**, and the trigger run is marked as **Canceled** afterwards. +* If the window is in a **Waiting** or **Waiting on dependency** state, you can cancel the window from **Monitoring**. # [Azure Data Factory](#tab/data-factory) # [Azure Synapse](#tab/synapse-analytics) -You can also rerun a canceled window. The rerun will take the _latest_ published definitions of the trigger, and dependencies for the specified window will be _re-evaluated_ upon rerun +You can also rerun a canceled window. The rerun takes the _latest_ published definitions of the trigger. Dependencies for the specified window are _reevaluated_ upon rerun. # [Azure Data Factory](#tab/data-factory) # [Azure Synapse](#tab/synapse-analytics) -## Sample for Azure PowerShell and Azure CLI +## Sample for Azure PowerShell and the Azure CLI # [Azure PowerShell](#tab/azure-powershell) This section shows you how to use Azure PowerShell to create, start, and monitor ### Prerequisites -- **Azure subscription**. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. --- **Azure PowerShell**. Follow the instructions in [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-azure-powershell). --- **Azure Data Factory**. 
Follow the instructions in [Create an Azure Data Factory using PowerShell](./quickstart-create-data-factory-powershell.md) to create a data factory and a pipeline.+- **Azure subscription**: If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. +- **Azure PowerShell**: Follow the instructions in [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-azure-powershell). +- **Azure Data Factory**: Follow the instructions in [Create an Azure Data Factory by using PowerShell](./quickstart-create-data-factory-powershell.md) to create a data factory and a pipeline. -### Sample Code +### Sample code 1. Create a JSON file named **MyTrigger.json** in the C:\ADFv2QuickStartPSH\ folder with the following content: > [!IMPORTANT]- > Before you save the JSON file, set the value of the **startTime** element to the current UTC time. Set the value of the **endTime** element to one hour past the current UTC time. + > Before you save the JSON file, set the value of the `startTime` element to the current Coordinated Universal Time (UTC) time. Set the value of the `endTime` element to one hour past the current UTC time. ```json { This section shows you how to use Azure PowerShell to create, start, and monitor } ``` -2. Create a trigger by using the [Set-AzDataFactoryV2Trigger](/powershell/module/az.datafactory/set-azdatafactoryv2trigger) cmdlet: +1. Create a trigger by using the [Set-AzDataFactoryV2Trigger](/powershell/module/az.datafactory/set-azdatafactoryv2trigger) cmdlet: ```powershell Set-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name "MyTrigger" -DefinitionFile "C:\ADFv2QuickStartPSH\MyTrigger.json" ``` -3. Confirm that the status of the trigger is **Stopped** by using the [Get-AzDataFactoryV2Trigger](/powershell/module/az.datafactory/get-azdatafactoryv2trigger) cmdlet: +1. Confirm that the status of the trigger is **Stopped** by using the [Get-AzDataFactoryV2Trigger](/powershell/module/az.datafactory/get-azdatafactoryv2trigger) cmdlet: ```powershell Get-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name "MyTrigger" ``` -4. Start the trigger by using the [Start-AzDataFactoryV2Trigger](/powershell/module/az.datafactory/start-azdatafactoryv2trigger) cmdlet: +1. Start the trigger by using the [Start-AzDataFactoryV2Trigger](/powershell/module/az.datafactory/start-azdatafactoryv2trigger) cmdlet: ```powershell Start-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name "MyTrigger" ``` -5. Confirm that the status of the trigger is **Started** by using the [Get-AzDataFactoryV2Trigger](/powershell/module/az.datafactory/get-azdatafactoryv2trigger) cmdlet: +1. Confirm that the status of the trigger is **Started** by using the [Get-AzDataFactoryV2Trigger](/powershell/module/az.datafactory/get-azdatafactoryv2trigger) cmdlet: ```powershell Get-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name "MyTrigger" ``` -6. Get the trigger runs in Azure PowerShell by using the [Get-AzDataFactoryV2TriggerRun](/powershell/module/az.datafactory/get-azdatafactoryv2triggerrun) cmdlet. To get information about the trigger runs, execute the following command periodically. Update the **TriggerRunStartedAfter** and **TriggerRunStartedBefore** values to match the values in your trigger definition: +1. 
Get the trigger runs in Azure PowerShell by using the [Get-AzDataFactoryV2TriggerRun](/powershell/module/az.datafactory/get-azdatafactoryv2triggerrun) cmdlet. To get information about the trigger runs, execute the following command periodically. Update the `TriggerRunStartedAfter` and `TriggerRunStartedBefore` values to match the values in your trigger definition: ```powershell Get-AzDataFactoryV2TriggerRun -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -TriggerName "MyTrigger" -TriggerRunStartedAfter "2017-12-08T00:00:00" -TriggerRunStartedBefore "2017-12-08T01:00:00" This section shows you how to use Azure PowerShell to create, start, and monitor # [Azure CLI](#tab/azure-cli) -This section shows you how to use Azure CLI to create, start, and monitor a trigger. +This section shows you how to use the Azure CLI to create, start, and monitor a trigger. ### Prerequisites [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] -- Follow the instructions in [Create an Azure Data Factory using Azure CLI](./quickstart-create-data-factory-azure-cli.md) to create a data factory and a pipeline.+- Follow the instructions in [Create an Azure Data Factory by using the Azure CLI](./quickstart-create-data-factory-azure-cli.md) to create a data factory and a pipeline. -### Sample Code +### Sample code 1. In your working directory, create a JSON file named **MyTrigger.json** with the trigger's properties. For this sample, use the following content: > [!IMPORTANT]- > Before you save the JSON file, set the value of **referenceName** to your pipeline name. Set the value of the **startTime** element to the current UTC time. Set the value of the **endTime** element to one hour past the current UTC time. + > Before you save the JSON file, set the value of `referenceName` to your pipeline name. Set the value of the `startTime` element to the current UTC time. Set the value of the `endTime` element to one hour past the current UTC time. ```json { This section shows you how to use Azure CLI to create, start, and monitor a trig } ``` -2. Create a trigger by using the [az datafactory trigger create](/cli/azure/datafactory/trigger#az-datafactory-trigger-create) command: +1. Create a trigger by using the [az datafactory trigger create](/cli/azure/datafactory/trigger#az-datafactory-trigger-create) command: > [!IMPORTANT]- > For this step and all subsequent steps replace `ResourceGroupName` with your resource group name. Replace `DataFactoryName` with your data factory's name. + > For this step and all subsequent steps, replace `ResourceGroupName` with your resource group name. Replace `DataFactoryName` with your data factory's name. ```azurecli az datafactory trigger create --resource-group "ResourceGroupName" --factory-name "DataFactoryName" --name "MyTrigger" --properties @MyTrigger.json ``` -3. Confirm that the status of the trigger is **Stopped** by using the [az datafactory trigger show](/cli/azure/datafactory/trigger#az-datafactory-trigger-show) command: +1. Confirm that the status of the trigger is **Stopped** by using the [az datafactory trigger show](/cli/azure/datafactory/trigger#az-datafactory-trigger-show) command: ```azurecli az datafactory trigger show --resource-group "ResourceGroupName" --factory-name "DataFactoryName" --name "MyTrigger" ``` -4. Start the trigger by using the [az datafactory trigger start](/cli/azure/datafactory/trigger#az-datafactory-trigger-start) command: +1. 
Start the trigger by using the [az datafactory trigger start](/cli/azure/datafactory/trigger#az-datafactory-trigger-start) command: ```azurecli az datafactory trigger start --resource-group "ResourceGroupName" --factory-name "DataFactoryName" --name "MyTrigger" ``` -5. Confirm that the status of the trigger is **Started** by using the [az datafactory trigger show](/cli/azure/datafactory/trigger#az-datafactory-trigger-show) command: +1. Confirm that the status of the trigger is **Started** by using the [az datafactory trigger show](/cli/azure/datafactory/trigger#az-datafactory-trigger-show) command: ```azurecli az datafactory trigger show --resource-group "ResourceGroupName" --factory-name "DataFactoryName" --name "MyTrigger" ``` -6. Get the trigger runs in Azure CLI by using the [az datafactory trigger-run query-by-factory](/cli/azure/datafactory/trigger-run#az-datafactory-trigger-run-query-by-factory) command. To get information about the trigger runs, execute the following command periodically. Update the **last-updated-after** and **last-updated-before** values to match the values in your trigger definition: +1. Get the trigger runs in the Azure CLI by using the [az datafactory trigger-run query-by-factory](/cli/azure/datafactory/trigger-run#az-datafactory-trigger-run-query-by-factory) command. To get information about the trigger runs, execute the following command periodically. Update the `last-updated-after` and `last-updated-before` values to match the values in your trigger definition: ```azurecli az datafactory trigger-run query-by-factory --resource-group "ResourceGroupName" --factory-name "DataFactoryName" --filters operand="TriggerName" operator="Equals" values="MyTrigger" --last-updated-after "2017-12-08T00:00:00Z" --last-updated-before "2017-12-08T01:00:00Z" To monitor trigger runs and pipeline runs in the Azure portal, see [Monitor pipe ## Related content -* For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json). -* [Create a tumbling window trigger dependency](tumbling-window-trigger-dependency.md). -* Learn how to reference trigger metadata in pipeline, see [Reference Trigger Metadata in Pipeline Runs](how-to-use-trigger-parameterization.md) +* [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json) +* [Create a tumbling window trigger dependency](tumbling-window-trigger-dependency.md) +* [Reference trigger metadata in pipeline runs](how-to-use-trigger-parameterization.md) |
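Pulling together the property table and the `WindowStart`/`WindowEnd` guidance in the entry above, here's an illustrative sketch of a complete tumbling window trigger definition. The trigger, pipeline, and parameter names are assumptions for the example, not values from the commit:

```json
{
  "name": "ExampleHourlyTrigger",
  "properties": {
    "type": "TumblingWindowTrigger",
    "typeProperties": {
      "frequency": "Hour",
      "interval": 1,
      "startTime": "2024-01-01T00:00:00Z",
      "delay": "00:05:00",
      "maxConcurrency": 10,
      "retryPolicy": {
        "count": 2,
        "intervalInSeconds": 30
      }
    },
    "pipeline": {
      "pipelineReference": {
        "referenceName": "ExamplePipeline",
        "type": "PipelineReference"
      },
      "parameters": {
        "MyWindowStart": {
          "type": "Expression",
          "value": "@{trigger().outputs.windowStartTime}"
        },
        "MyWindowEnd": {
          "type": "Expression",
          "value": "@{trigger().outputs.windowEndTime}"
        }
      }
    }
  }
}
```

Inside the pipeline definition, the window boundaries are then read as `@pipeline().parameters.MyWindowStart` and `@pipeline().parameters.MyWindowEnd`, matching the guidance in the article.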
data-factory | How To Use Trigger Parameterization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-trigger-parameterization.md | Title: Pass trigger information to pipeline -description: Learn how to reference trigger metadata in pipeline +description: Learn how to reference trigger metadata in pipelines. Last updated 05/15/2024 [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] -This article describes how trigger metadata, such as trigger start time, can be used in pipeline run. +This article describes how trigger metadata, such as the trigger start time, can be used in a pipeline run. -Pipeline sometimes needs to understand and reads metadata from trigger that invokes it. For instance, with Tumbling Window Trigger run, based upon window start and end time, pipeline will process different data slices or folders. In Azure Data Factory, we use Parameterization and [System Variable](control-flow-system-variables.md) to pass meta data from trigger to pipeline. +A pipeline sometimes needs to understand and read metadata from the trigger that invokes it. For instance, with a tumbling window trigger run, based on the window start and end time, the pipeline processes different data slices or folders. In Azure Data Factory, we use parameterization and [system variables](control-flow-system-variables.md) to pass metadata from triggers to pipelines. -This pattern is especially useful for [Tumbling Window Trigger](how-to-create-tumbling-window-trigger.md), where trigger provides window start and end time, and [Custom Event Trigger](how-to-create-custom-event-trigger.md), where trigger parse and process values in [custom defined _data_ field](../event-grid/event-schema.md). +This pattern is especially useful for [tumbling window triggers](how-to-create-tumbling-window-trigger.md), where the trigger provides the window start and end time, and [custom event triggers](how-to-create-custom-event-trigger.md), where the trigger parses and processes values in a [custom-defined *data* field](../event-grid/event-schema.md). > [!NOTE]-> Different trigger type provides different meta data information. For more information, see [System Variable](control-flow-system-variables.md) +> Different trigger types provide different metadata information. For more information, see [System variables](control-flow-system-variables.md). ## Data Factory UI -This section shows you how to pass meta data information from trigger to pipeline, within the Azure Data Factory User Interface. +This section shows you how to pass metadata information from triggers to pipelines, within the Data Factory user interface (UI). -1. Go to the **Authoring Canvas** and edit a pipeline +1. Go to the **Authoring Canvas** and edit a pipeline. -1. Select on the blank canvas to bring up pipeline settings. DonΓÇÖt select any activity. You may need to pull up the setting panel from the bottom of the canvas, as it may have been collapsed +1. Select the blank canvas to bring up pipeline settings. Don't select any activity. You might need to pull up the setting pane from the bottom of the canvas because it might be collapsed. -1. Select **Parameters** section and select **+ New** to add parameters +1. Select the **Parameters** tab and select **+ New** to add parameters. 
- :::image type="content" source="media/how-to-use-trigger-parameterization/01-create-parameter.png" alt-text="Screen shot of pipeline setting showing how to define parameters in pipeline."::: + :::image type="content" source="media/how-to-use-trigger-parameterization/01-create-parameter.png" alt-text="Screenshot that shows a pipeline setting showing how to define parameters in a pipeline."::: -1. Add triggers to pipeline, by clicking on **+ Trigger**. +1. Add triggers to the pipeline by selecting **+ Trigger**. -1. Create or attach a trigger to the pipeline, and select **OK** +1. Create or attach a trigger to the pipeline and select **OK**. -1. After selecting **OK**, another **New trigger** page is presented with a list of the parameters specified for the pipeline, as shown in the following screenshot. On that page, fill in trigger meta data for each parameter. Use format defined in [System Variable](control-flow-system-variables.md) to retrieve trigger information. You don't need to fill in the information for all parameters, just the ones that will assume trigger metadata values. For instance, here we assign trigger run start time to *parameter_1*. +1. After you select **OK**, another **New trigger** page appears with a list of the parameters specified for the pipeline, as shown in the following screenshot. On that page, fill in the trigger metadata for each parameter. Use the format defined in [System variables](control-flow-system-variables.md) to retrieve trigger information. You don't need to fill in the information for all parameters. Just fill in the ones that will assume trigger metadata values. For instance, here we assign the trigger run start time to `parameter_1`. - :::image type="content" source="media/how-to-use-trigger-parameterization/02-pass-in-system-variable.png" alt-text="Screenshot of trigger definition page showing how to pass trigger information to pipeline parameters."::: + :::image type="content" source="media/how-to-use-trigger-parameterization/02-pass-in-system-variable.png" alt-text="Screenshot that shows the Trigger Run Parameters page showing how to pass trigger information to pipeline parameters."::: -1. To use the values in pipeline, utilize parameters _@pipeline().parameters.parameterName_, __not__ system variable, in pipeline definitions. For instance, in our case, to read trigger start time, we'll reference @pipeline().parameters.parameter_1. +1. To use the values in the pipeline, utilize parameters, like `@pipeline().parameters.parameterName`, *not* system variables, in pipeline definitions. For instance, in this case, to read the trigger start time, we reference `@pipeline().parameters.parameter_1`. ## JSON schema -To pass in trigger information to pipeline runs, both the trigger and the pipeline json need to be updated with _parameters_ section. +To pass in trigger information to pipeline runs, both the trigger and the pipeline JSON need to be updated with the `parameters` section. ### Pipeline definition -Under **properties** section, add parameter definitions to **parameters** section +Under the `properties` section, add parameter definitions to the `parameters` section. ```json { Under **properties** section, add parameter definitions to **parameters** sectio ### Trigger definition -Under **pipelines** section, assign parameter values in **parameters** section. You don't need to fill in the information for all parameters, just the ones that will assume trigger metadata values. 
+Under the `pipelines` section, assign parameter values in the `parameters` section. You don't need to fill in the information for all parameters. Just fill in the ones that will assume trigger metadata values. ```json { Under **pipelines** section, assign parameter values in **parameters** section. } ``` -### Use trigger information in pipeline +### Use trigger information in a pipeline -To use the values in pipeline, utilize parameters _@pipeline().parameters.parameterName_, __not__ system variable, in pipeline definitions. +To use the values in a pipeline, utilize parameters, like `@pipeline().parameters.parameterName`, *not* system variables, in pipeline definitions. ## Related content -For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json). +For more information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json). |
data-factory | Tumbling Window Trigger Dependency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tumbling-window-trigger-dependency.md | Last updated 10/20/2023 # Create a tumbling window trigger dependency [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] -This article provides steps to create a dependency on a tumbling window trigger. For general information about Tumbling Window triggers, see [How to create tumbling window trigger](how-to-create-tumbling-window-trigger.md). +This article provides steps to create a dependency on a tumbling window trigger. For general information about tumbling window triggers, see [Create a tumbling window trigger](how-to-create-tumbling-window-trigger.md). -In order to build a dependency chain and make sure that a trigger is executed only after the successful execution of another trigger within the service, use this advanced feature to create a tumbling window dependency. +To build a dependency chain and make sure that a trigger is executed only after the successful execution of another trigger within the service, use this advanced feature to create a tumbling window dependency. -For a demonstration on how to create dependent pipelines using tumbling window trigger, watch the following video: +For a demonstration on how to create dependent pipelines by using a tumbling window trigger, watch the following video: > [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Create-dependent-pipelines-in-your-Azure-Data-Factory/player] ## Create a dependency in the UI -To create dependency on a trigger, select **Trigger > Advanced > New**, and then choose the trigger to depend on with the appropriate offset and size. Select **Finish** and publish the changes for the dependencies to take effect. +To create dependency on a trigger, select **Trigger** > **Advanced** > **New**. Then choose the trigger to depend on with the appropriate offset and size. Select **Finish** and publish the changes for the dependencies to take effect. ## Tumbling window dependency properties A tumbling window trigger with a dependency has the following properties: } ``` -The following table provides the list of attributes needed to define a Tumbling Window dependency. +The following table provides the list of attributes needed to define a tumbling window dependency. -| **Property Name** | **Description** | **Type** | **Required** | +| Property name | Description | Type | Required | |||||-| type | All the existing tumbling window triggers are displayed in this drop down. Choose the trigger to take dependency on. | TumblingWindowTriggerDependencyReference or SelfDependencyTumblingWindowTriggerReference | Yes | -| offset | Offset of the dependency trigger. Provide a value in time span format and both negative and positive offsets are allowed. This property is mandatory if the trigger is depending on itself and in all other cases it is optional. Self-dependency should always be a negative offset. If no value specified, the window is the same as the trigger itself. | Timespan<br/>(hh:mm:ss) | Self-Dependency: Yes<br/>Other: No | -| size | Size of the dependency tumbling window. Provide a positive timespan value. This property is optional. | Timespan<br/>(hh:mm:ss) | No | +| `type` | All the existing tumbling window triggers are displayed in this dropdown list. Choose the trigger to take dependency on. 
| `TumblingWindowTriggerDependencyReference` or `SelfDependencyTumblingWindowTriggerReference` | Yes | +| `offset` | Offset of the dependency trigger. Provide a value in the timespan format. Both negative and positive offsets are allowed. This property is mandatory if the trigger is depending on itself. In all other cases, it's optional. Self-dependency should always be a negative offset. If no value is specified, the window is the same as the trigger itself. | Timespan<br/>(hh:mm:ss) | Self-Dependency: Yes<br/>Other: No | +| `size` | Size of the dependency tumbling window. Provide a positive timespan value. This property is optional. | Timespan<br/>(hh:mm:ss) | No | > [!NOTE] > A tumbling window trigger can depend on a maximum of five other triggers. ## Tumbling window self-dependency properties -In scenarios where the trigger shouldn't proceed to the next window until the preceding window is successfully completed, build a self-dependency. A self-dependency trigger that's dependent on the success of earlier runs of itself within the preceding hour will have the properties indicated in the following code. +In scenarios where the trigger shouldn't proceed to the next window until the preceding window is successfully completed, build a self-dependency. A self-dependency trigger that's dependent on the success of earlier runs of itself within the preceding hour has the properties indicated in the following code. > [!NOTE]-> If your triggered pipeline relies on the output of pipelines in previously triggered windows, we recommend using only tumbling window trigger self-dependency. To limit parallel trigger runs, set the maximimum trigger concurrency. +> If your triggered pipeline relies on the output of pipelines in previously triggered windows, we recommend using only tumbling window trigger self-dependency. To limit parallel trigger runs, set the maximum trigger concurrency. ```json { In scenarios where the trigger shouldn't proceed to the next window until the pr } } ```+ ## Usage scenarios and examples -Below are illustrations of scenarios and usage of tumbling window dependency properties. +The following scenarios show the use of tumbling window dependency properties. ### Dependency offset ### Dependency size ### Self-dependency ### Dependency on another tumbling window trigger -A daily telemetry processing job depending on another daily job aggregating the last seven days output and generates seven day rolling window streams: +The following example shows a daily telemetry processing job that depends on another daily job aggregating the last seven days of output and generates seven-day rolling window streams. ### Dependency on itself -A daily job with no gaps in the output streams of the job: +The following example shows a daily job with no gaps in the output streams of the job. ## Monitor dependencies -You can monitor the dependency chain and the corresponding windows from the trigger run monitoring page. Navigate to **Monitoring > Trigger Runs**. If a Tumbling Window trigger has dependencies, Trigger Name will bear a hyperlink to dependency monitoring view. +You can monitor the dependency chain and the corresponding windows from the trigger run monitoring page. Go to **Monitoring** > **Trigger Runs**. If a tumbling window trigger has dependencies, the trigger name bears a hyperlink to a dependency monitoring view. -Click through the trigger name to view trigger dependencies. Right-hand panel shows detailed trigger run information, such as RunID, window time, status, and so on. 
+Click through the trigger name to view trigger dependencies. The pane on the right shows trigger run information such as the run ID, window time, and status. -You can see the status of the dependencies, and windows for each dependent trigger. If one of the dependencies triggers fails, you must successfully rerun it in order for the dependent trigger to run. +You can see the status of the dependencies and windows for each dependent trigger. If one of the dependencies triggers fails, you must successfully rerun it for the dependent trigger to run. -A tumbling window trigger will wait on dependencies for _seven days_ before timing out. After seven days, the trigger run will fail. +A tumbling window trigger waits on dependencies for _seven days_ before timing out. After seven days, the trigger run fails. > [!NOTE]-> A tumbling window trigger cannot be cancelled while it is in the **Waiting on dependency** state. The dependent activity must finish before the tumbling window trigger can be cancelled. This is by design to ensure dependent activities can complete once started, and helps reduce the likelihood of unexpected results. +> A tumbling window trigger can't be canceled while it's in the **Waiting on dependency** state. The dependent activity must finish before the tumbling window trigger can be canceled. This restriction is by design to ensure that dependent activities can complete once they're started. It also helps to reduce the likelihood of unexpected results. -For a more visual to view the trigger dependency schedule, select the Gantt view. +For a more visual way to view the trigger dependency schedule, select the Gantt view. -Transparent boxes show the dependency windows for each down stream-dependent trigger, while solid colored boxes above show individual window runs. Here are some tips for interpreting the Gantt chart view: +Transparent boxes show the dependency windows for each downstream-dependent trigger. Solid-colored boxes shown in the preceding image show individual window runs. Here are some tips for interpreting the Gantt chart view: -* Transparent box renders blue when dependent windows are in pending or running state -* After all windows succeeds for a dependent trigger, the transparent box will turn green -* Transparent box renders red when some dependent window fails. Look for a solid red box to identify the failure window run +* Transparent boxes render blue when dependent windows are in a **Pending** or **Running** state. +* After all windows succeed for a dependent trigger, the transparent box turns green. +* Transparent boxes render red when a dependent window fails. Look for a solid red box to identify the failure window run. -To rerun a window in Gantt chart view, select the solid color box for the window, and an action panel will pop up with details and rerun options +To rerun a window in the Gantt chart view, select the solid color box for the window. An action pane pops up with information and rerun options. ## Related content -* Review [How to create a tumbling window trigger](how-to-create-tumbling-window-trigger.md) +- [Create a tumbling window trigger](how-to-create-tumbling-window-trigger.md) |
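As a concrete sketch of the `dependsOn` shape that the entry above describes, a trigger that depends on another tumbling window trigger might look like the following. The trigger names, offset, and size are illustrative assumptions:

```json
{
  "name": "TelemetryProcessingTrigger",
  "properties": {
    "type": "TumblingWindowTrigger",
    "typeProperties": {
      "frequency": "Hour",
      "interval": 1,
      "startTime": "2024-01-01T00:00:00Z",
      "maxConcurrency": 1,
      "dependsOn": [
        {
          "type": "TumblingWindowTriggerDependencyReference",
          "referenceTrigger": {
            "referenceName": "UpstreamAggregationTrigger",
            "type": "TriggerReference"
          },
          "offset": "-01:00:00",
          "size": "01:00:00"
        }
      ]
    }
  }
}
```

For a self-dependency, swap the dependency `type` to `SelfDependencyTumblingWindowTriggerReference`, drop `referenceTrigger`, and keep a negative `offset`, per the property table in the article.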
defender-for-iot | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md | One common challenge when connecting sensors to Defender for IoT in the Azure po ### Security update -This update resolves six CVEs, which are listed in [software version 23.1.3 feature documentation](release-notes.md#version-2413). +This update resolves six CVEs, which are listed in [software version 24.1.3 feature documentation](release-notes.md#version-2413). ## February 2024 |
logic-apps | Deploy Single Tenant Logic Apps Private Storage Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/deploy-single-tenant-logic-apps-private-storage-account.md | ms.suite: integration Previously updated : 10/09/2023 Last updated : 07/04/2024 # Customer intent: As a developer, I want to deploy Standard logic apps to Azure storage accounts that use private endpoints. For more information, review the following documentation: This deployment method requires temporary public access to your storage account. If you can't enable public access due to your organization's policies, you can still deploy your logic app to a private storage account. However, you have to [deploy with an Azure Resource Manager template (ARM template)](#deploy-arm-template), which is described in a later section. > [!NOTE]+ > An exception to the previous rule is that you can use the Azure portal to deploy your logic app to an App Service Environment, > even if the storage account is protected with a private endpoint. However, you'll need connectivity between the > subnet used by the App Service Environment and the subnet used by the storage account's private endpoint. This deployment method requires that temporary public access to your storage acc 1. Deploy your logic app resource by using either the Azure portal or Visual Studio Code. -1. After deployment finishes, enable virtual network integration between your logic app and the private endpoints on the virtual network that connects to your storage account. +1. After deployment finishes, enable virtual network integration between your logic app and the private endpoints on the virtual network connected to your storage account. 1. In the [Azure portal](https://portal.azure.com), open your logic app resource. 1. On the logic app resource menu, under **Settings**, select **Networking**. - 1. Select **VNet integration** on **Outbound Traffic** card to enable integration with a virtual network connecting to your storage account. + 1. In the **Outbound traffic configuration** section, next to **Virtual network integration**, select **Not configured** > **Add virtual network integration**. - 1. To access your logic app workflow data over the virtual network, in your logic app resource settings, set the `WEBSITE_CONTENTOVERVNET` setting to `1`. + 1. On the **Add virtual network integration** pane that opens, select your Azure subscription and your virtual network. - If you use your own domain name server (DNS) with your virtual network, set your logic app resource's `WEBSITE_DNS_SERVER` app setting to the IP address for your DNS. If you have a secondary DNS, add another app setting named `WEBSITE_DNS_ALT_SERVER`, and set the value also to the IP for your secondary DNS. + 1. From the **Subnet** list, select the subnet where you want to add your logic app. When you're done, select **Connect**. ++1. To access your logic app workflow data over the virtual network, follow these steps: ++ 1. On the logic app resource menu, under **Settings**, select **Environment variables**. ++ 1. On the **App settings** tab, add the **WEBSITE_CONTENTOVERVNET** app setting, if none exist, and set the value to **1**. ++ 1. If you use your own domain name server (DNS) with your virtual network, add the **WEBSITE_DNS_SERVER** app setting, if none exist, and set the value to the IP address for your DNS. If you have a secondary DNS, add another app setting named **WEBSITE_DNS_ALT_SERVER**, and set the value to the IP for your secondary DNS. 1.
After you apply these app settings, you can remove public access from your storage account. This deployment method requires that temporary public access to your storage acc 1. On the **Networking** pane, on the **Firewalls and virtual networks** tab, under **Allow access from**, clear **Selected networks**, and add virtual networks as necessary. > [!NOTE]+ > > Your logic app might experience an interruption because the connectivity switch between public and private endpoints might take time. > This disruption might result in your workflows temporarily disappearing. If this behavior happens, you can try to reload your workflows > by restarting the logic app and waiting several minutes. The following errors commonly happen with a private storage account that's behin ||-| | Access to the `host.json` file is denied | `"System.Private.CoreLib: Access to the path 'C:\\home\\site\\wwwroot\\host.json' is denied."` | | Can't load workflows in the logic app resource | `"Encountered an error (ServiceUnavailable) from host runtime."` |-||| As the logic app isn't running when these errors occur, you can't use the Kudu console debugging service on the Azure platform to troubleshoot these errors. However, you can use the following methods instead: |
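Because this scenario falls back to an ARM template deployment when public access can't be enabled, it can help to see where those app settings land in the template. The following fragment is a minimal sketch: the `kind` value matches Standard logic apps, while the API version, names, and DNS IP addresses are placeholder assumptions, not values from the commit above.

```json
{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2022-09-01",
  "name": "[parameters('logicAppName')]",
  "location": "[parameters('location')]",
  "kind": "functionapp,workflowapp",
  "properties": {
    "siteConfig": {
      "appSettings": [
        { "name": "WEBSITE_CONTENTOVERVNET", "value": "1" },
        { "name": "WEBSITE_DNS_SERVER", "value": "10.0.0.4" },
        { "name": "WEBSITE_DNS_ALT_SERVER", "value": "10.0.0.5" }
      ]
    }
  }
}
```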
logic-apps | Logic Apps Securing A Logic App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md | You can limit access to the inputs and outputs in the run history for your logic For example, to block anyone from accessing inputs and outputs, specify an IP address range such as `0.0.0.0-0.0.0.0`. Only a person with administrator permissions can remove this restriction, which provides the possibility for "just-in-time" access to data in your logic app workflows. A valid IP range uses these formats: *x.x.x.x/x* or *x.x.x.x-x.x.x.x* -To specify the allowed IP ranges, follow these steps for either the Azure portal or your Azure Resource Manager template: +To specify the allowed IP ranges, follow these steps for your Consumption or Standard logic app in the Azure portal or your Azure Resource Manager template: #### [Portal](#tab/azure-portal) ##### Consumption workflows -1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. +1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app workflow in the designer. 1. On your logic app's menu, under **Settings**, select **Workflow settings**. -1. Under **Access control configuration** > **Allowed inbound IP addresses**, select **Specific IP ranges**. +1. In the **Access control configuration** section, under **Allowed inbound IP addresses**, from the **Trigger access option** list, select **Specific IP ranges**. -1. Under **IP ranges for contents**, specify the IP address ranges that can access content from inputs and outputs. +1. In the **IP ranges for contents** box, specify the IP address ranges that can access content from inputs and outputs. ##### Standard workflows -1. In the [Azure portal](https://portal.azure.com), open your logic app resource. +1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource. 1. On the logic app menu, under **Settings**, select **Networking**. -1. In the **Inbound Traffic** section, select **Access restriction**. +1. In the **Inbound traffic configuration** section, next to **Public network access**, select **Enabled with no access restriction**. -1. Create one or more rules to either **Allow** or **Deny** requests from specific IP ranges. You can also use the HTTP header filter settings and forwarding settings. +1. On the **Access restrictions** page, under **App access**, select **Enabled from select virtual networks and IP addresses**. ++1. Under **Site access and rules**, on the **Main site** tab, add one or more rules to either **Allow** or **Deny** requests from specific IP ranges. You can also use the HTTP header filter settings and forwarding settings. A valid IP range uses these formats: *x.x.x.x/x* or *x.x.x.x-x.x.x.x* For more information, see [Blocking inbound IP addresses in Azure Logic Apps (Standard)](https://www.serverlessnotes.com/docs/block-inbound-ip-addresses-in-azure-logic-apps-standard). In the Azure portal, IP address restriction affects both triggers *and* actions, ##### Standard workflows -1. In the [Azure portal](https://portal.azure.com), open your logic app resource. +1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource. 1. On the logic app menu, under **Settings**, select **Networking**. -1. In the **Inbound Traffic** section, select **Access restriction**. +1. In the **Inbound traffic configuration** section, next to **Public network access**, select **Enabled with no access restriction**. 
++1. On the **Access restrictions** page, under **App access**, select **Enabled from select virtual networks and IP addresses**. -1. Create one or more rules to either **Allow** or **Deny** requests from specific IP ranges. You can also use the HTTP header filter settings and forwarding settings. A valid IP range uses these formats: *x.x.x.x/x* or *x.x.x.x-x.x.x.x* +1. Under **Site access and rules**, on the **Main site** tab, add one or more rules to either **Allow** or **Deny** requests from specific IP ranges. A valid IP range uses these formats: *x.x.x.x/x* or *x.x.x.x-x.x.x.x* For more information, see [Blocking inbound IP addresses in Azure Logic Apps (Standard)](https://www.serverlessnotes.com/docs/block-inbound-ip-addresses-in-azure-logic-apps-standard). |
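For the ARM template path, the run-history restriction that the entry above configures in the portal corresponds to the `accessControl` section of the `Microsoft.Logic/workflows` resource for Consumption logic apps. A minimal sketch follows; the IP range and parameter names are placeholders, and a range of `0.0.0.0-0.0.0.0` would block all access to inputs and outputs as the article describes:

```json
{
  "type": "Microsoft.Logic/workflows",
  "apiVersion": "2019-05-01",
  "name": "[parameters('LogicAppName')]",
  "location": "[parameters('LogicAppLocation')]",
  "properties": {
    "definition": {},
    "accessControl": {
      "contents": {
        "allowedCallerIpAddresses": [
          { "addressRange": "192.168.12.0/23" }
        ]
      },
      "triggers": {
        "allowedCallerIpAddresses": [
          { "addressRange": "192.168.12.0/23" }
        ]
      }
    }
  }
}
```

The `contents` object governs access to inputs and outputs in run history, while `triggers` governs who can invoke request-based triggers.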
logic-apps | Secure Single Tenant Workflow Virtual Network Private Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md | For more information, review [Create single-tenant logic app workflows in Azure ### Set up private endpoint connection -1. On your logic app menu, under **Settings**, select **Networking**. +1. On the logic app resource menu, under **Settings**, select **Networking**. -1. On the **Networking** page, on the **Inbound traffic** card, select **Private endpoints**. +1. On the **Networking** page, in the **Inbound traffic configuration** section, select the link next to **Private endpoints**. -1. On the **Private Endpoint connections**, select **Add**. +1. On the **Private Endpoint connections** page, select **Add** > **Express** or **Advanced**. -1. On the **Add Private Endpoint** pane that opens, provide the requested information about the endpoint. + For more information about the **Advanced** option, see [Create a private endpoint](../private-link/create-private-endpoint-portal.md#create-a-private-endpoint). ++1. On the **Add Private Endpoint** pane, provide the requested information about the endpoint. For more information, review [Private Endpoint properties](../private-link/private-endpoint-overview.md#private-endpoint-properties). For more information, review the following documentation: ### Set up virtual network integration -1. In the Azure portal, on the logic app resource menu, under **Settings**, select **Networking**. +1. In the [Azure portal](https://portal.azure.com), on the logic app resource menu, under **Settings**, select **Networking**. ++1. On the **Networking** page, in the **Outbound traffic configuration** section, select the link next to **Virtual network integration**. -1. On the **Networking** pane, on the **Outbound traffic** card, select **VNet integration**. +1. On the **Virtual network integration** page, select **Add virtual network integration**. -1. On the **VNet Integration** pane, select **Add Vnet**. +1. On the **Add virtual network integration** pane, select the subscription, the virtual network that connects to your internal service, and the subnet where to add the logic app. When you finish, select **Connect**. -1. On the **Add VNet Integration** pane, select the subscription and the virtual network that connects to your internal service. + On the **Virtual Network Integration** page, by default, the **Outbound internet traffic** setting is selected, which routes all outbound traffic through the virtual network. In this scenario, the app setting named **WEBSITE_VNET_ROUTE_ALL** is ignored. - After you add virtual network integration, on the **VNet Integration** pane, the **Route All** setting is enabled by default. This setting routes all outbound traffic through the virtual network. When this setting is enabled, the `WEBSITE_VNET_ROUTE_ALL` app setting is ignored. + To find this app setting, on the logic app resource menu, under **Settings**, select **Environment variables**. -1. If you use your own domain name server (DNS) with your virtual network, set your logic app resource's `WEBSITE_DNS_SERVER` app setting to the IP address for your DNS. If you have a secondary DNS, add another app setting named `WEBSITE_DNS_ALT_SERVER`, and set the value also to the IP for your DNS. +1. 
If you use your own Domain Name System (DNS) server with your virtual network, add the **WEBSITE_DNS_SERVER** app setting, if it doesn't already exist, and set the value to the IP address for your DNS server. If you have a secondary DNS server, add another app setting named **WEBSITE_DNS_ALT_SERVER**, and set its value to the IP address for your secondary DNS server. 1. After Azure successfully provisions the virtual network integration, try to run the workflow again. |
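Setting those DNS app settings can also be scripted. A hedged sketch that shells out to the Azure CLI from Python; the resource group, app name, and IP addresses are placeholders, and the `az logicapp config appsettings set` command is an assumption based on recent Azure CLI releases, so confirm it exists in your installed version:

```python
import subprocess

RESOURCE_GROUP = "my-resource-group"      # placeholder
LOGIC_APP_NAME = "my-standard-logic-app"  # placeholder

# Primary and optional secondary custom DNS servers (placeholder IPs).
dns_settings = {
    "WEBSITE_DNS_SERVER": "10.0.0.4",
    "WEBSITE_DNS_ALT_SERVER": "10.0.0.5",
}

# Assumes the Azure CLI is installed and you've already run `az login`.
subprocess.run(
    [
        "az", "logicapp", "config", "appsettings", "set",
        "--resource-group", RESOURCE_GROUP,
        "--name", LOGIC_APP_NAME,
        "--settings", *[f"{key}={value}" for key, value in dns_settings.items()],
    ],
    check=True,
)
```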
machine-learning | Azure Machine Learning Ci Image Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-ci-image-release-notes.md | -Azure Machine Learning checks and validates any machine learning packages that may require an upgrade. Updates incorporate the latest OS-related patches from Canonical as the original Linux OS publisher. In addition to patches applied by the original publisher, Azure Machine Learning updates system packages when updates are available. For details on the patching process, see [Vulnerability Management](./concept-vulnerability-management.md). +Azure Machine Learning checks and validates any machine learning packages that might require an upgrade. Updates incorporate the latest OS-related patches from Canonical as the original Linux OS publisher. In addition to patches applied by the original publisher, Azure Machine Learning updates system packages when updates are available. For details on the patching process, see [Vulnerability Management](./concept-vulnerability-management.md). Main updates provided with each image version are described in the sections below. Major: Image Version: `24.06.10` SDK (azureml-core): `1.56.0` Python: `3.9`+ CUDA: `12.2`-CUDnn==9.1.1 ++CUDnn==`9.1.1` + Nvidia Driver: `535.171.04`+ PyTorch: `1.13.1`+ TensorFlow: `2.15.0` -autokeras==1.0.16 -keras=2.15.0 -ray==2.2.0 -docker version==24.0.9-1 +autokeras==`1.0.16` ++keras=`2.15.0` ++ray==`2.2.0` ++docker version==`24.0.9-1` -## Feb 16, 2024 +## February 16, 2024 Version: `24.01.30` Main changes: - `Azure Machine Learning SDK` to version `1.49.0` - `Certifi` updated to `2022.9.24`-- `.NET` updated from `3.1` (EOL) to `6.0`+- `.NET` updated from `3.1` (end-of-life) to `6.0` - `Pyspark` updated to `3.3.1` (mitigating log4j 1.2.17 and common-text-1.6 vulnerabilities) - Default `intellisense` to Python `3.10` on the CI - Bug fixes and stability improvements |
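Because each image release pins exact package versions, one way to confirm that a compute instance matches these notes is to read the installed versions at runtime. A small stdlib-only sketch; the pins below are copied from the version list above:

```python
from importlib.metadata import PackageNotFoundError, version

# Pins copied from the release notes above.
expected = {
    "azureml-core": "1.56.0",
    "autokeras": "1.0.16",
    "keras": "2.15.0",
    "ray": "2.2.0",
}

for package, pin in expected.items():
    try:
        installed = version(package)
    except PackageNotFoundError:
        installed = "not installed"
    marker = "OK" if installed == pin else "CHECK"
    print(f"[{marker}] {package}: expected {pin}, found {installed}")
```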
machine-learning | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md | Azure portal users can find the latest image available for provisioning the Data Visit the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds. -## June 28, 2024 --Image Version: 24.06.10 --SDK Version: 1.56.0 --Issue fixed: Compute Instance 20.04 image build with SDK 1.56.0 --Major: Image Version: 24.06.10 --- SDK(azureml-core):1.56.0-- Python:3.9-- CUDA: 12.2-- CUDnn==9.1.1-- Nvidia Driver: 535.171.04-- PyTorch: 1.13.1-- TensorFlow: 2.15.0-- autokeras==1.0.16-- keras=2.15.0-- ray==2.2.0-- docker version==24.0.9-1- ## June 17, 2024 [Data Science Virtual Machine - Windows 2022](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2022?tab=Overview) |
machine-learning | How To Deploy Models Mistral | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-mistral.md | -In this article, you learn how to use Azure Machine Learning studio to deploy the Mistral family of models as a service with pay-as-you-go billing. +In this article, you learn how to use Azure Machine Learning studio to deploy the Mistral family of models as serverless APIs with pay-as-you-go token-based billing. -Mistral AI offers two categories of models in Azure Machine Learning studio: +Mistral AI offers two categories of models in Azure Machine Learning studio. These models are available in the [model catalog](concept-model-catalog.md). -- __Premium models__: Mistral Large and Mistral Small. These models are available with pay-as-you-go token based billing with Models as a Service in the studio model catalog. -- __Open models__: Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01. These models are also available in the studio model catalog and can be deployed to dedicated VM instances in your own Azure subscription with managed online endpoints.- -You can browse the Mistral family of models in the [model catalog](concept-model-catalog.md) by filtering on the Mistral collection. +- __Premium models__: Mistral Large and Mistral Small. These models can be deployed as serverless APIs with pay-as-you-go token-based billing. +- __Open models__: Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01. These models can be deployed to managed computes in your own Azure subscription. ++You can browse the Mistral family of models in the model catalog by filtering on the Mistral collection. ## Mistral family of models Additionally, Mistral Large is: - __Strong in coding.__ Code generation, review, and comments. Supports all mainstream coding languages. - __Multi-lingual by design.__ Best-in-class performance in French, German, Spanish, and Italian - in addition to English. Dozens of other languages are supported. - __Responsible AI compliant.__ Efficient guardrails baked into the model, and an extra safety layer with the `safe_mode` option.- + # [Mistral Small](#tab/mistral-small) Mistral Small is Mistral AI's most efficient Large Language Model (LLM). It can be used on any language-based task that requires high efficiency and low latency. Mistral Small is: [!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)] -## Deploy Mistral family of models with pay-as-you-go +## Deploy Mistral family of models as a serverless API ++Certain models in the model catalog can be deployed as a serverless API with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription. -Certain models in the model catalog can be deployed as a service with pay-as-you-go. Pay-as-you-go deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription. +**Mistral Large** and **Mistral Small** can be deployed as a serverless API with pay-as-you-go billing and are offered by Mistral AI through the Microsoft Azure Marketplace. 
Mistral AI can change or update the terms of use and pricing of these models. -**Mistral Large** and **Mistral Small** are eligible to be deployed as a service with pay-as-you-go and are offered by Mistral AI through the Microsoft Azure Marketplace. Mistral AI can change or update the terms of use and pricing of these models. ### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An Azure Machine Learning workspace. If you don't have a workspace, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one.+- An Azure Machine Learning workspace. If you don't have a workspace, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one. The serverless API model deployment offering for eligible models in the Mistral family is only available in workspaces created in these regions: - > [!IMPORTANT] - > The pay-as-you-go model deployment offering for eligible models in the Mistral family is only available in workspaces created in the **East US 2** and **Sweden Central** regions. + - East US + - East US 2 + - North Central US + - South Central US + - West US + - West US 3 + - Sweden Central ++ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md). - Azure role-based access control (Azure RBAC) is used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md). The following steps demonstrate the deployment of Mistral Large, but you can use To create a deployment: 1. Go to [Azure Machine Learning studio](https://ml.azure.com/home).-1. Select the workspace in which you want to deploy your models. To use the pay-as-you-go model deployment offering, your workspace must belong to the **East US 2** or **Sweden Central** region. -1. Choose the model (Mistral-large) that you want to deploy from the [model catalog](https://ml.azure.com/model/catalog). +1. Select the workspace in which you want to deploy your model. To use the serverless API model deployment offering, your workspace must belong to one of the regions listed in the [prerequisites](#prerequisites). +1. Choose the model you want to deploy, for example, Mistral-large, from the [model catalog](https://ml.azure.com/model/catalog). Alternatively, you can initiate deployment by going to your workspace and selecting **Endpoints** > **Serverless endpoints** > **Create**. -1. On the model's overview page in the model catalog, select **Deploy** and then **Pay-as-you-go**. +1. On the model's overview page in the model catalog, select **Deploy** to open a serverless API deployment window for the model. +1. Select the checkbox to acknowledge the Microsoft purchase policy. - :::image type="content" source="media/how-to-deploy-models-mistral/mistral-deploy-pay-as-you-go.png" alt-text="A screenshot showing how to deploy a model with the pay-as-you-go option." 
lightbox="media/how-to-deploy-models-mistral/mistral-deploy-pay-as-you-go.png"::: + :::image type="content" source="media/how-to-deploy-models-mistral/mistral-deploy-serverless-api.png" alt-text="A screenshot showing how to deploy a model as a serverless API." lightbox="media/how-to-deploy-models-mistral/mistral-deploy-serverless-api.png"::: 1. In the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use. -1. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model. +1. You can also select the **Pricing and terms** tab to learn about pricing for the selected model. 1. If this is your first time deploying the model in the workspace, you have to subscribe your workspace for the particular offering (for example, Mistral-large). This step requires that your account has the **Azure AI Developer role** permissions on the Resource Group, as listed in the prerequisites. Each workspace has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**. Currently you can have only one deployment for each model within a workspace. - :::image type="content" source="media/how-to-deploy-models-mistral/mistral-deploy-marketplace-terms.png" alt-text="A screenshot showing the terms and conditions of a given model." lightbox="media/how-to-deploy-models-mistral/mistral-deploy-marketplace-terms.png"::: - 1. Once you subscribe the workspace for the particular Azure Marketplace offering, subsequent deployments of the _same_ offering in the _same_ workspace don't require subscribing again. If this scenario applies to you, you'll see a **Continue to deploy** option to select. - :::image type="content" source="media/how-to-deploy-models-mistral/mistral-deploy-pay-as-you-go-project.png" alt-text="A screenshot showing a workspace that is already subscribed to the offering." lightbox="media/how-to-deploy-models-mistral/mistral-deploy-pay-as-you-go-project.png"::: + :::image type="content" source="media/how-to-deploy-models-mistral/mistral-deploy-serverless-api-project.png" alt-text="A screenshot showing a workspace that is already subscribed to the offering." lightbox="media/how-to-deploy-models-mistral/mistral-deploy-serverless-api-project.png"::: 1. Give the deployment a name. This name becomes part of the deployment API URL. This URL must be unique in each Azure region. To create a deployment: 1. Select the **Test** tab to start interacting with the model. 1. You can always find the endpoint's details, URL, and access keys by navigating to **Workspace** > **Endpoints** > **Serverless endpoints**. -To learn about billing for Mistral models deployed with pay-as-you-go, see [Cost and quota considerations for Mistral family of models deployed as a service](#cost-and-quota-considerations-for-mistral-family-of-models-deployed-as-a-service). +To learn about billing for Mistral models deployed as a serverless API with pay-as-you-go token-based billing, see [Cost and quota considerations for Mistral family of models deployed as a service](#cost-and-quota-considerations-for-mistral-family-of-models-deployed-as-a-service). ### Consume the Mistral family of models as a service You can consume Mistral Large by using the chat API. For more information on using the APIs, see the [reference](#reference-for-mistral-family-of-models-deployed-as-a-service) section. 
-### Reference for Mistral family of models deployed as a service +## Reference for Mistral family of models deployed as a service Mistral models accept both the [Azure AI Model Inference API](reference-model-inference-api.md) on the route `/chat/completions` and the native [Mistral Chat API](#mistral-chat-api) on `/v1/chat/completions`. Mistral models accept both the [Azure AI Model Inference API](reference-model-in The [Azure AI Model Inference API](reference-model-inference-api.md) schema can be found in the [reference for Chat Completions](reference-model-inference-chat-completions.md) article and an [OpenAPI specification can be obtained from the endpoint itself](reference-model-inference-api.md?tabs=rest#getting-started). -#### Mistral Chat API +### Mistral Chat API Use the method `POST` to send the request to the `/v1/chat/completions` route: The `messages` object has the following fields: | `role` | `string` | The role of the message's author. One of `system`, `user`, or `assistant`. | -#### Example +#### Request example __Body__ The `logprobs` object is a dictionary with the following fields: | `tokens` | `array` of `string` | Selected tokens. | | `top_logprobs` | `array` of `dictionary` | Array of dictionaries. In each dictionary, the key is the token and the value is the probability. | -#### Example +#### Response example The following JSON is an example response: Models deployed as a service with pay-as-you-go are protected by Azure AI conten ## Related content - [Model Catalog and Collections](concept-model-catalog.md)+- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md) - [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md)-- [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md)+- [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md) |
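For illustration, a minimal sketch of calling the native Mistral Chat API route on a serverless endpoint from Python with the `requests` package. The endpoint URL and key are placeholders you copy from the endpoint's details page, and the bearer-style `Authorization` header and the response shape are assumptions to verify against your endpoint's consume tab and the response example in the article:

```python
import requests

# Placeholders: copy both values from Workspace > Endpoints > Serverless endpoints.
ENDPOINT = "https://<your-deployment>.<region>.inference.ai.azure.com"
API_KEY = "<your-endpoint-key>"

response = requests.post(
    f"{ENDPOINT}/v1/chat/completions",  # native Mistral Chat API route
    headers={
        "Authorization": f"Bearer {API_KEY}",  # assumption: key-based bearer auth
        "Content-Type": "application/json",
    },
    json={
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Write a haiku about the ocean."},
        ],
        "max_tokens": 256,
    },
    timeout=60,
)
response.raise_for_status()
# Assumption: chat-completions-style response shape; compare with the response example.
print(response.json()["choices"][0]["message"]["content"])
```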
migrate | Concepts Assessment Calculation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-assessment-calculation.md | Assessments also determine readiness of the recommended target for Microsoft Def - SUSE Linux Enterprise Server 12, 15+ - Debian 9, 10, 11 - Oracle Linux 7.2+, 8- - CentOS Linux 7.2+ - Amazon Linux 2 - For other Operating Systems, the server is marked as **Ready with Conditions**. If a server is not ready to be migrated to Azure, it is marked as **Not Ready** for Microsoft Defender for Servers. |
migrate | Concepts Business Case Calculation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-business-case-calculation.md | You can override the above values in the assumptions section of the Business cas ## Next steps-- [Learn more](./migrate-services-overview.md) about Azure Migrate.+- [Review](best-practices-assessment.md) the best practices for creating assessments. +- Learn more on how to [build](how-to-build-a-business-case.md) and [view](how-to-view-a-business-case.md) a business case. |
migrate | Discover And Assess Using Private Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/discover-and-assess-using-private-endpoints.md | In the configuration manager, select **Set up prerequisites**, and then complete After the appliance is successfully registered, to see the registration details, select **View details**. -4. **Install VDDK**: _(Needed only for VMware appliance.)_ The appliance checks that the VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If it isn't installed, download VDDK 6.7 from VMware. Extract the downloaded zipped contents to the specified location on the appliance, as provided in the installation instructions. +4. **Install VDDK**: _(Needed only for VMware appliance.)_ The appliance checks that the VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If it isn't installed, download VDDK 6.7, 7, or 8 (depending on the compatibility of VDDK and ESXi versions) from VMware. Extract the downloaded zipped contents to the specified location on the appliance, as provided in the installation instructions. You can *rerun prerequisites* at any time during appliance configuration to check whether the appliance meets all the prerequisites. |
migrate | How To Scale Out For Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-scale-out-for-migration.md | To complete the registration of the scale-out appliance, click **import** to get 1. In the pop-up window opened in the previous step, select the location of the copied configuration zip file and click **Save**. Once the files have been successfully imported, the registration of the scale-out appliance will complete and it will show you the timestamp of the last successful import. You can also see the registration details by clicking **View details**.-1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If the VDDK isn't installed, download VDDK 6.7 from VMware. Extract the downloaded zip file contents to the specified location on the appliance, as indicated in the *Installation instructions*. +1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If the VDDK isn't installed, download VDDK 6.7, 7, or 8 (depending on the compatibility of VDDK and ESXi versions) from VMware. Extract the downloaded zip file contents to the specified location on the appliance, as indicated in the *Installation instructions*. The Migration and modernization tool uses the VDDK to replicate servers during migration to Azure. |
migrate | How To Set Up Appliance Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/how-to-set-up-appliance-vmware.md | In the configuration manager, select **Set up prerequisites**, and complete thes After the appliance is successfully registered, select **View details** to see the registration details. -1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If the VDDK isn't installed, download VDDK 6.7 or 7.0 from VMware. Extract the downloaded zip file contents to the specified location on the appliance, the default path is *C:\Program Files\VMware\VMware Virtual Disk Development Kit* as indicated in the *Installation instructions*. +1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If the VDDK isn't installed, download VDDK 6.7, 7.0, or 8 (depending on the compatibility of VDDK and ESXi versions) from VMware. Extract the downloaded zip file contents to the specified location on the appliance; the default path is *C:\Program Files\VMware\VMware Virtual Disk Development Kit*, as indicated in the *Installation instructions*. The Migration and modernization tool uses the VDDK to replicate servers during migration to Azure. |
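The extraction step amounts to unpacking the downloaded VDDK archive into the path from the installation instructions. A small sketch, assuming the zip is already on the appliance; the archive file name is a placeholder:

```python
import zipfile
from pathlib import Path

# Placeholder archive name; pick the VDDK version that matches your ESXi hosts.
VDDK_ZIP = Path(r"C:\Downloads\VMware-vix-disklib-distrib.zip")
# Default target path named in the installation instructions.
TARGET = Path(r"C:\Program Files\VMware\VMware Virtual Disk Development Kit")

TARGET.mkdir(parents=True, exist_ok=True)
with zipfile.ZipFile(VDDK_ZIP) as archive:
    archive.extractall(TARGET)
print(f"Extracted {VDDK_ZIP.name} to {TARGET}")
```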
migrate | Migrate Support Matrix Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/migrate-support-matrix-vmware.md | Requirement | Details Before deployment | You should have a project in place with the Azure Migrate: Discovery and assessment tool added to the project.<br /><br />You deploy dependency visualization after setting up an Azure Migrate appliance to discover your on-premises servers.<br /><br />Learn how to [create a project for the first time](../create-manage-projects.md).<br /> Learn how to [add a discovery and assessment tool to an existing project](../how-to-assess.md).<br /> Learn how to set up the Azure Migrate appliance for assessment of [Hyper-V](../how-to-set-up-appliance-hyper-v.md), [VMware](how-to-set-up-appliance-vmware.md), or physical servers. Supported servers | Supported for all servers in your on-premises environment. Log Analytics | Azure Migrate and Modernize uses the [Service Map](/previous-versions/azure/azure-monitor/vm/service-map) solution in [Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md) for dependency visualization.<br /><br /> You associate a new or existing Log Analytics workspace with a project. You can't modify the workspace for a project after you add the workspace. <br /><br /> The workspace must be in the same subscription as the project.<br /><br /> The workspace must be located in the East US, Southeast Asia, or West Europe regions. Workspaces in other regions can't be associated with a project.<br /><br /> The workspace must be in a [region in which Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor®ions=all). You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br /><br /> In Log Analytics, the workspace associated with Azure Migrate is tagged with the project key and project name.-Required agents | On each server that you want to analyze, install the following agents:<br />- [Microsoft Monitoring Agent (MMA)](../../azure-monitor/agents/agent-windows.md)<br />- [Dependency agent](../../azure-monitor/vm/vminsights-dependency-agent-maintenance.md)<br /><br /> If on-premises servers aren't connected to the internet, download and install the Log Analytics gateway on them.<br /><br /> Learn more about installing the [Dependency agent](../how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and the [MMA](../how-to-create-group-machine-dependencies.md#install-the-mma). +Required agents | On each server that you want to analyze, install the following agents:<br />- Azure Monitor agent (AMA)<br />- [Dependency agent](../../azure-monitor/vm/vminsights-dependency-agent-maintenance.md)<br /><br /> If on-premises servers aren't connected to the internet, download and install the Log Analytics gateway on them.<br /><br /> Learn more about installing the [Dependency agent](../how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and the Azure Monitor agent. Log Analytics workspace | The workspace must be in the same subscription as the project.<br /><br /> Azure Migrate supports workspaces that are located in the East US, Southeast Asia, and West Europe regions.<br /><br /> The workspace must be in a region in which [Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor®ions=all). You can monitor Azure VMs in any region. 
The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br /><br /> You can't modify the workspace for a project after you add the workspace. Cost | The Service Map solution doesn't incur any charges for the first 180 days. The count starts from the day you associate the Log Analytics workspace with the project.<br /><br /> After 180 days, standard Log Analytics charges apply.<br /><br /> Using any solution other than Service Map in the associated Log Analytics workspace incurs [standard charges](https://azure.microsoft.com/pricing/details/log-analytics/) for Log Analytics.<br /><br /> When the project is deleted, the workspace isn't automatically deleted. After you delete the project, Service Map usage isn't free. Each node is charged according to the paid tier of the Log Analytics workspace.<br /><br />If you have projects that you created before Azure Migrate general availability (GA on February 28, 2018), you might incur other Service Map charges. To ensure that you're charged only after 180 days, we recommend that you create a new project. Workspaces that were created before GA are still chargeable. Management | When you register agents to the workspace, use the ID and key provided by the project.<br /><br /> You can use the Log Analytics workspace outside Azure Migrate and Modernize.<br /><br /> If you delete the associated project, the workspace isn't deleted automatically. [Delete it manually](../../azure-monitor/logs/manage-access.md).<br /><br /> Don't delete the workspace created by Azure Migrate and Modernize unless you delete the project. If you do, the dependency visualization functionality doesn't work as expected. |
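A quick way to confirm that the required agents are reporting to the associated Log Analytics workspace is to query the `Heartbeat` table. A hedged sketch using the `azure-monitor-query` and `azure-identity` packages; the workspace ID is a placeholder, and `DefaultAzureCredential` assumes you're already signed in (for example, through `az login`):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# When each agent-managed machine last reported a heartbeat in the past day.
query = """
Heartbeat
| summarize LastSeen = max(TimeGenerated) by Computer
| order by LastSeen desc
"""

result = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=1))
for table in result.tables:
    for row in table.rows:
        print(row)
```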
migrate | Prepare For Agentless Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/prepare-for-agentless-migration.md | Azure Migrate automatically handles these configuration changes for the followin - Windows Server 2008 or later - Red Hat Enterprise Linux 9.x, 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x-- CentOS 9.x (Release and Stream), 8.x (Release and Stream), 7.9, 7.7, 7.6, 7.5, 7.4, 6.x+- CentOS 9.x (Release and Stream) - SUSE Linux Enterprise Server 15 SP4, 15 SP3, 15 SP2, 15 SP1, 15 SP0, 12, 11 SP4, 11 SP3 - Ubuntu 22.04, 21.04, 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS - Kali Linux (2016, 2017, 2018, 2019, 2020, 2021, 2022) The preparation script executes the following changes based on the OS type of th Azure Migrate will attempt to install the Microsoft Azure Linux Agent (waagent), a secure, lightweight process that manages Linux & FreeBSD provisioning, and VM interaction with the Azure Fabric Controller. [Learn more](../../virtual-machines/extensions/agent-linux.md) about the functionality enabled for Linux and FreeBSD IaaS deployments via the Linux agent. - Review the list of [required packages](../../virtual-machines/extensions/agent-linux.md#requirements) to install Linux VM agent. Azure Migrate installs the Linux VM agent automatically for RHEL 8.x/7.x/6.x, CentOS 8.x/7.x/6.x, Ubuntu 14.04/16.04/18.04/19.04/19.10/20.04, SUSE 15 SP0/15 SP1/12, Debian 9/8/7, and Oracle 7/6 when using the agentless method of VMware migration. Follow these instructions to [install the Linux Agent manually](../../virtual-machines/extensions/agent-linux.md#installation) for other OS versions. + Review the list of [required packages](../../virtual-machines/extensions/agent-linux.md#requirements) to install Linux VM agent. Azure Migrate installs the Linux VM agent automatically for RHEL 8.x/7.x/6.x, Ubuntu 14.04/16.04/18.04/19.04/19.10/20.04, SUSE 15 SP0/15 SP1/12, Debian 9/8/7, and Oracle 7/6 when using the agentless method of VMware migration. Follow these instructions to [install the Linux Agent manually](../../virtual-machines/extensions/agent-linux.md#installation) for other OS versions. You can use the command to verify the service status of the Azure Linux Agent to make sure it's running. The service name might be **walinuxagent** or **waagent**. Once the hydration changes are done, the script will unmount all the partitions mounted, deactivate volume groups, and then flush the devices. |
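The service-status check mentioned in the row above can be scripted to try both possible service names. A minimal sketch using `systemctl`; it assumes a systemd-based distribution:

```python
import subprocess

# The agent's service name varies by distribution, per the note above.
for service in ("walinuxagent", "waagent"):
    result = subprocess.run(
        ["systemctl", "is-active", service],
        capture_output=True,
        text=True,
    )
    state = result.stdout.strip() or result.stderr.strip()
    print(f"{service}: {state}")
```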
migrate | Tutorial Discover Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/tutorial-discover-vmware.md | Requirement | Details **vCenter Server/ESXi host** | You need a server running vCenter Server version 8.0, 7.0, 6.7, 6.5, 6.0, or 5.5.<br /><br /> Servers must be hosted on an ESXi host running version 5.5 or later.<br /><br /> On the vCenter Server, allow inbound connections on TCP port 443 so that the appliance can collect configuration and performance metadata.<br /><br /> The appliance connects to vCenter Server on port 443 by default. If the server running vCenter Server listens on a different port, you can modify the port when you provide the vCenter Server details in the appliance configuration manager.<br /><br /> On the ESXi hosts, make sure that inbound access is allowed on TCP port 443 for discovery of installed applications and for agentless dependency analysis on servers. **Azure Migrate appliance** | vCenter Server must have these resources to allocate to a server that hosts the Azure Migrate appliance:<br /><br /> - 32 GB of RAM, 8 vCPUs, and approximately 80 GB of disk storage.<br /><br /> - An external virtual switch and internet access on the appliance server, directly or via a proxy. **Servers** | All Windows and Linux OS versions are supported for discovery of configuration and performance metadata. <br /><br /> For application discovery on servers, all Windows and Linux OS versions are supported. Check the [OS versions supported for agentless dependency analysis](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless).<br /><br /> For discovery of installed applications and for agentless dependency analysis, VMware Tools (version 10.2.1 or later) must be installed and running on servers. Windows servers must have PowerShell version 2.0 or later installed.<br /><br /> To discover SQL Server instances and databases, check [supported SQL Server and Windows OS versions and editions](migrate-support-matrix-vmware.md#sql-server-instance-and-database-discovery-requirements) and Windows authentication mechanisms.<br /><br /> To discover ASP.NET web apps running on IIS web server, check [supported Windows OS and IIS versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements).<br /><br /> To discover Java web apps running on Apache Tomcat web server, check [supported Linux OS and Tomcat versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements). -**SQL Server access** | To discover SQL Server instances and databases, the Windows or SQL Server account [requires these permissions](migrate-support-matrix-vmware.md#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance. You can use the [account provisioning utility](../least-privilege-credentials.md) to create custom accounts or use any existing account that is a member of the sysadmin server role for simplicity. +**SQL Server access** | To discover SQL Server instances and databases, the Windows account or SQL Server account [requires these permissions](migrate-support-matrix-vmware.md#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance. You can use the [account provisioning utility](../least-privilege-credentials.md) to create custom accounts or use any existing account that is a member of the sysadmin server role for simplicity. 
## Prepare an Azure user account In the configuration manager, select **Set up prerequisites**, and then complete After the appliance is successfully registered, to see the registration details, select **View details**. -1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. Download VDDK 6.7 or 7 (depending on the compatibility of VDDK and ESXi versions) from VMware. Extract the downloaded zip file contents to the specified location on the appliance, the default path is *C:\Program Files\VMware\VMware Virtual Disk Development Kit* as indicated in the *Installation instructions*. +1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. Download VDDK 6.7, 7, or 8 (depending on the compatibility of VDDK and ESXi versions) from VMware. Extract the downloaded zip file contents to the specified location on the appliance; the default path is *C:\Program Files\VMware\VMware Virtual Disk Development Kit*, as indicated in the *Installation instructions*. The Migration and modernization tool uses the VDDK to replicate servers during migration to Azure. Details such as OS license support status, inventory, database instances, etc. a You can gain deeper insights into the support posture of your environment from the **Discovered servers** and **Discovered database instances** sections. -The **Operating system license support status** column displays the support status of the Operating system, whether it is in mainstream support, extended support, or out of support. Selecting the support status opens a pane on the right which provides clear guidance regarding actionable steps that can be taken to secure servers and databases in extended support or out of support. +The **Operating system license support status** column displays the support status of the operating system, whether it is in mainstream support, extended support, or out of support. Selecting the support status opens a pane on the right, which provides clear guidance regarding actionable steps that can be taken to secure servers and databases in extended support or out of support. To view the remaining duration until end of support, that is, the number of months for which the license is valid, select **Columns** > **Support ends in** > **Submit**. 
The **Support ends in** column displays the duration in months. |
mysql | Concepts Server Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-server-parameters.md | Refer to the following sections to learn more about the limits of the several co ### lower_case_table_names -For MySQL version 5.7, default value is 1 in Azure Database for MySQL flexible server. It's important to note that while it is possible to change the supported value to 2, reverting from 2 back to 1 isn't allowed. Contact our [support team](https://azure.microsoft.com/support/create-ticket/) for assistance in changing the default value. +For MySQL version 5.7, the default value is 1 in Azure Database for MySQL flexible server. It's important to note that while it's possible to change the supported value to 2, reverting from 2 back to 1 isn't allowed. Contact our [support team](https://azure.microsoft.com/support/create-ticket/) for assistance in changing the default value. For [MySQL version 8.0+](https://dev.mysql.com/doc/refman/8.0/en/identifier-case-sensitivity.html), lower_case_table_names can only be configured when initializing the server. [Learn more](https://dev.mysql.com/doc/refman/8.0/en/identifier-case-sensitivity.html). Changing the lower_case_table_names setting after the server is initialized is prohibited. For MySQL version 8.0, the default value is 1 in Azure Database for MySQL flexible server. Supported values for MySQL version 8.0 are 1 and 2 in Azure Database for MySQL flexible server. Contact our [support team](https://azure.microsoft.com/support/create-ticket/) for assistance in changing the default value during server creation. If [`log_bin_trust_function_creators`] is set to OFF, if you try to create trigg ### innodb_buffer_pool_size Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size) to learn more about this parameter.+The [Physical Memory Size](./concepts-service-tiers-storage.md#physical-memory-size-gb) (GB) in the table below represents the available random-access memory (RAM) in gigabytes (GB) on your Azure Database for MySQL flexible server. 
-|**Pricing Tier**|**vCore(s)**|**Memory Size (GiB)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| +|**Pricing Tier**|**vCore(s)**|**Physical Memory Size (GiB)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| |||||||-|Burstable (B1s)|1|1|134217728|33554432|134217728| -|Burstable (B1ms)|1|2|536870912|134217728|536870912| +|Burstable (B1s)|1|1|134217728|33554432|268435456| +|Burstable (B1ms)|1|2|536870912|134217728|1073741824| |Burstable|2|4|2147483648|134217728|2147483648|-|General Purpose|2|8|5368709120|134217728|5368709120| +|General Purpose|2|8|4294967296|134217728|5368709120| |General Purpose|4|16|12884901888|134217728|12884901888| |General Purpose|8|32|25769803776|134217728|25769803776| |General Purpose|16|64|51539607552|134217728|51539607552| Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/innodb- |Business Critical|4|32|25769803776|134217728|25769803776| |Business Critical|8|64|51539607552|134217728|51539607552| |Business Critical|16|128|103079215104|134217728|103079215104|+|Business Critical|20|160|128849018880|134217728|128849018880| |Business Critical|32|256|206158430208|134217728|206158430208| |Business Critical|48|384|309237645312|134217728|309237645312| |Business Critical|64|504|405874409472|134217728|405874409472| Azure Database for MySQL flexible server supports at largest, **4 TB**, in a sin ### innodb_log_file_size -[innodb_log_file_size](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_file_size) is the size in bytes of each [log file](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_log_file) in a [log group](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_log_group). The combined size of log files [(innodb_log_file_size](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_file_size) * [innodb_log_files_in_group](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_files_in_group)) can't exceed a maximum value that is slightly less than 512 GB). A bigger log file size is better for performance, but it has a drawback that the recovery time after a crash is high. You need to balance recovery time in the rare event of a crash recovery versus maximizing throughput during peak operations. These can also result in longer restart times. You can configure innodb_log_size to any of these values - 256 MB, 512 MB, 1 GB or 2 GB for Azure Database for MySQL flexible server. The parameter is static and requires a restart. +[innodb_log_file_size](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_file_size) is the size in bytes of each [log file](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_log_file) in a [log group](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_log_group). The combined size of log files ([innodb_log_file_size](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_file_size) * [innodb_log_files_in_group](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_files_in_group)) can't exceed a maximum value that is slightly less than 512 GB. A bigger log file size is better for performance, but it has a drawback that the recovery time after a crash is high. You need to balance recovery time in the rare event of a crash recovery versus maximizing throughput during peak operations. Larger log files can also result in longer restart times. 
You can configure innodb_log_file_size to any of these values - 256 MB, 512 MB, 1 GB, or 2 GB for Azure Database for MySQL flexible server. The parameter is static and requires a restart. > [!NOTE] > If you have changed the parameter innodb_log_file_size from the default, check if the value of "show global status like 'innodb_buffer_pool_pages_dirty'" stays at 0 for 30 seconds to avoid restart delay. Upon initial deployment, an Azure Database for MySQL flexible server instance in In Azure Database for MySQL flexible server, this parameter specifies the number of seconds the service waits before purging the binary log file. -The binary log contains "events" that describe database changes such as table creation operations or changes to table data. It also contains events for statements that potentially could have made changes. The binary log is used mainly for two purposes, replication and data recovery operations. Usually, the binary logs are purged as soon as the handle is free from service, backup or the replica set. If there are multiple replicas, the binary logs wait for the slowest replica to read the changes before it's been purged. If you want to persist binary logs for a more duration of time, you can configure the parameter binlog_expire_logs_seconds. If the binlog_expire_logs_seconds is set to 0, which is the default value, it purges as soon as the handle to the binary log is freed. If binlog_expire_logs_seconds > 0, then it would wait until the seconds configured before it purges. For Azure Database for MySQL flexible server, managed features like backup and read replica purging of binary files are handled internally. When you replicate the data-out from Azure Database for MySQL flexible server, this parameter needs to be set in primary to avoid purging of binary logs before the replica reads from the changes from the primary. If you set the binlog_expire_logs_seconds to a higher value, then the binary logs won't be purged soon enough and can lead to increase in the storage billing. +The binary log contains "events" that describe database changes such as table creation operations or changes to table data. It also contains events for statements that potentially could have made changes. The binary log is used mainly for two purposes: replication and data recovery operations. Usually, the binary logs are purged as soon as the handle is free from service, backup, or the replica set. If there are multiple replicas, the binary logs wait for the slowest replica to read the changes before they're purged. If you want to persist binary logs for a longer duration of time, you can configure the parameter binlog_expire_logs_seconds. If binlog_expire_logs_seconds is set to 0, which is the default value, the binary log is purged as soon as the handle to it is freed. If binlog_expire_logs_seconds > 0, the binary log is purged only after the configured number of seconds has elapsed. For Azure Database for MySQL flexible server, managed features like backup and read replica purging of binary files are handled internally. When you replicate the data-out from Azure Database for MySQL flexible server, this parameter needs to be set on the primary to avoid purging of binary logs before the replica reads the changes from the primary. If you set binlog_expire_logs_seconds to a higher value, the binary logs won't be purged soon enough, which can lead to an increase in storage billing. ### event_scheduler |
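The note on `innodb_log_file_size` suggests confirming that `Innodb_buffer_pool_pages_dirty` stays at 0 for 30 seconds before restarting. A sketch of that polling loop, assuming the `PyMySQL` driver; the connection details and CA certificate path are placeholders:

```python
import time

import pymysql  # assumption: PyMySQL is installed (pip install pymysql)

connection = pymysql.connect(
    host="<server-name>.mysql.database.azure.com",  # placeholder
    user="<admin-user>",                            # placeholder
    password="<password>",                          # placeholder
    ssl={"ca": "<path-to-ca-certificate>"},         # placeholder; TLS is on by default
)

try:
    with connection.cursor() as cursor:
        clean_seconds = 0
        while clean_seconds < 30:
            cursor.execute(
                "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty'"
            )
            _, dirty_pages = cursor.fetchone()
            # Reset the counter whenever dirty pages are observed.
            clean_seconds = clean_seconds + 1 if int(dirty_pages) == 0 else 0
            time.sleep(1)
    print("Dirty pages stayed at 0 for 30 seconds; restart should be quick.")
finally:
    connection.close()
```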
mysql | Azure Pipelines Mysql Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/azure-pipelines-mysql-deploy.md | - Title: Azure Pipelines task for Azure Database for MySQL single server -description: Enable Azure Database for MySQL Single Server CLI task for using with Azure Pipelines ------ Previously updated : 09/14/2022---# Azure Pipelines for Azure Database for MySQL - Single Server ---Get started with Azure Database for MySQL by deploying a database update with Azure Pipelines. Azure Pipelines lets you build, test, and deploy with continuous integration (CI) and continuous delivery (CD) using [Azure DevOps](/azure/devops/). --You'll use the [Azure Database for MySQL Deployment task](/azure/devops/pipelines/tasks/deploy/azure-mysql-deployment). The Azure Database for MySQL Deployment task only works with Azure Database for MySQL single server. --## Prerequisites --Before you begin, you need: -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An active Azure DevOps organization. [Sign up for Azure Pipelines](/azure/devops/pipelines/get-started/pipelines-sign-up).-- A GitHub repository that you can use for your pipeline. If you don't have an existing repository, see [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline). --This quickstart uses the resources created in either of these guides as a starting point: -- [Create an Azure Database for MySQL server using Azure portal](/azure/mysql/quickstart-create-mysql-server-database-using-azure-portal)-- [Create an Azure Database for MySQL server using Azure CLI](/azure/mysql/quickstart-create-mysql-server-database-using-azure-cli)---## Create your pipeline --You'll use the basic starter pipeline as a basis for your pipeline. --1. Sign in to your Azure DevOps organization and go to your project. --2. In your project, navigate to the **Pipelines** page. Then choose the action to create a new pipeline. --3. Walk through the steps of the wizard by first selecting GitHub as the location of your source code. --4. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials. --5. When the list of repositories appears, select your desired repository. --6. Azure Pipelines will analyze your repository and offer configuration options. Select **Starter pipeline**. -- :::image type="content" source="media/azure-pipelines-mysql-task/configure-pipeline-option.png" alt-text="Screenshot of Select Starter pipeline."::: - -## Create a secret --You'll need to know your database server name, SQL username, and SQL password to use with the [Azure Database for MySQL Deployment task](/azure/devops/pipelines/tasks/deploy/azure-mysql-deployment). --For security, you'll want to save your SQL password as a secret variable in the pipeline settings UI for your pipeline. --1. Go to the **Pipelines** page, select the appropriate pipeline, and then select **Edit**. -1. Select **Variables**. -1. Add a new variable named `SQLpass` and select **Keep this value secret** to encrypt and save the variable. -- :::image type="content" source="media/azure-pipelines-mysql-task/save-secret-variable.png" alt-text="Screenshot of adding a secret variable."::: - -1. Select **Ok** and **Save** to add the variable. --## Verify permissions for your database --To access your MySQL database with Azure Pipelines, you need to set your database to accept connections from all Azure resources. --1. 
In the Azure portal, open your database resource. -1. Select **Connection security**. -1. Toggle **Allow access to Azure services** to **Yes**. -- :::image type="content" source="media/azure-pipelines-mysql-task/allow-azure-access-mysql.png" alt-text="Screenshot of setting MySQL to allow Azure connections."::: --## Add the Azure Database for MySQL Deployment task --In this example, we'll create a new database named `quickstartdb` and add an inventory table. The inline SQL script will: --- Delete `quickstartdb` if it exists and create a new `quickstartdb` database.-- Delete the table `inventory` if it exists and creates a new `inventory` table.-- Insert three rows into `inventory`.-- Show all the rows.-- Update the value of the first row in `inventory`.-- Delete the second row in `inventory`.--You'll need to replace the following values in your deployment task. --|Input |Description |Example | -|||| -|`azureSubscription` | Authenticate with your Azure Subscription with a [service connection](/azure/devops/pipelines/library/connect-to-azure). | `My Subscription` | -|`ServerName` | The name of your Azure Database for MySQL server. | `fabrikam.mysql.database.azure.com` | -|`SqlUsername` | The user name of your Azure Database for MySQL. | `mysqladmin@fabrikam` | -|`SqlPassword` | The password for the username. This should be defined as a secret variable. | `$(SQLpass)` | --```yaml --trigger: -- main--pool: - vmImage: ubuntu-latest --steps: -- task: AzureMysqlDeployment@1- inputs: - azureSubscription: '<your-subscription>' - ServerName: '<db>.mysql.database.azure.com' - SqlUsername: '<username>@<db>' - SqlPassword: '$(SQLpass)' - TaskNameSelector: 'InlineSqlTask' - SqlInline: | - DROP DATABASE IF EXISTS quickstartdb; - CREATE DATABASE quickstartdb; - USE quickstartdb; - - -- Create a table and insert rows - DROP TABLE IF EXISTS inventory; - CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER); - INSERT INTO inventory (name, quantity) VALUES ('banana', 150); - INSERT INTO inventory (name, quantity) VALUES ('orange', 154); - INSERT INTO inventory (name, quantity) VALUES ('apple', 100); - - -- Read - SELECT * FROM inventory; - - -- Update - UPDATE inventory SET quantity = 200 WHERE id = 1; - SELECT * FROM inventory; - - -- Delete - DELETE FROM inventory WHERE id = 2; - SELECT * FROM inventory; - IpDetectionMethod: 'AutoDetect' -``` --## Deploy and verify resources --Select **Save and run** to deploy your pipeline. The pipeline job will be launched and after a few minutes, the job status should indicate `Success`. --You can verify that your pipeline ran successfully within the `AzureMysqlDeployment` task in the pipeline run. --Open the task and verify that the last two entries show two rows in `inventory`. There are two rows because the second row has been deleted. ----## Clean up resources --When you're done working with your pipeline, delete `quickstartdb` in your Azure Database for MySQL. You can also delete the deployment pipeline you created. --## Next steps --> [!div class="nextstepaction"] -> [Tutorial: Build an ASP.NET Core and Azure SQL Database app in Azure App Service](../../app-service/tutorial-dotnetcore-sqldb-app.md) |
mysql | Concepts Audit Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-audit-logs.md | - Title: Audit logs - Azure Database for MySQL -description: Describes the audit logs available in Azure Database for MySQL, and the available parameters for enabling logging levels. ----- Previously updated : 06/20/2022---# Audit Logs in Azure Database for MySQL ----In Azure Database for MySQL, the audit log is available to users. The audit log can be used to track database-level activity and is commonly used for compliance. --## Configure audit logging -->[!IMPORTANT] -> It is recommended to only log the event types and users required for your auditing purposes to ensure your server's performance is not heavily impacted and minimum amount of data is collected. --By default the audit log is disabled. To enable it, set `audit_log_enabled` to ON. --Other parameters you can adjust include: --- `audit_log_events`: controls the events to be logged. See below table for specific audit events.-- `audit_log_include_users`: MySQL users to be included for logging. The default value for this parameter is empty, which will include all the users for logging. This has higher priority over `audit_log_exclude_users`. Max length of the parameter is 512 characters.-- `audit_log_exclude_users`: MySQL users to be excluded from logging. Max length of the parameter is 512 characters.--> [!NOTE] -> `audit_log_include_users` has higher priority over `audit_log_exclude_users`. For example, if `audit_log_include_users` = `demouser` and `audit_log_exclude_users` = `demouser`, the user will be included in the audit logs because `audit_log_include_users` has higher priority. --| **Event** | **Description** | -||| -| `CONNECTION` | - Connection initiation (successful or unsuccessful) <br> - User reauthentication with different user/password during session <br> - Connection termination | -| `DML_SELECT`| SELECT queries | -| `DML_NONSELECT` | INSERT/DELETE/UPDATE queries | -| `DML` | DML = DML_SELECT + DML_NONSELECT | -| `DDL` | Queries like "DROP DATABASE" | -| `DCL` | Queries like "GRANT PERMISSION" | -| `ADMIN` | Queries like "SHOW STATUS" | -| `GENERAL` | All in DML_SELECT, DML_NONSELECT, DML, DDL, DCL, and ADMIN | -| `TABLE_ACCESS` | - Available for MySQL 5.7 and MySQL 8.0 <br> - Table read statements, such as SELECT or INSERT INTO ... SELECT <br> - Table delete statements, such as DELETE or TRUNCATE TABLE <br> - Table insert statements, such as INSERT or REPLACE <br> - Table update statements, such as UPDATE | --## Access audit logs --Audit logs are integrated with Azure Monitor Diagnostic Logs. Once you've enabled audit logs on your MySQL server, you can emit them to Azure Monitor logs, Event Hubs, or Azure Storage. To learn more about how to enable diagnostic logs in the Azure portal, see the [audit log portal article](how-to-configure-audit-logs-portal.md#set-up-diagnostic-logs). -->[!NOTE] ->Premium Storage accounts are not supported if you sending the logs to Azure storage via diagnostics and settings --## Diagnostic Logs Schemas --The following sections describe what's output by MySQL audit logs based on the event type. Depending on the output method, the fields included and the order in which they appear may vary. --### Connection --| **Property** | **Description** | -||| -| `TenantId` | Your tenant ID | -| `SourceSystem` | `Azure` | -| `TimeGenerated [UTC]` | Time stamp when the log was recorded in UTC | -| `Type` | Type of the log. 
Always `AzureDiagnostics` | -| `SubscriptionId` | GUID for the subscription that the server belongs to | -| `ResourceGroup` | Name of the resource group the server belongs to | -| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` | -| `ResourceType` | `Servers` | -| `ResourceId` | Resource URI | -| `Resource` | Name of the server | -| `Category` | `MySqlAuditLogs` | -| `OperationName` | `LogEvent` | -| `LogicalServerName_s` | Name of the server | -| `event_class_s` | `connection_log` | -| `event_subclass_s` | `CONNECT`, `DISCONNECT`, `CHANGE USER` (only available for MySQL 5.7) | -| `connection_id_d` | Unique connection ID generated by MySQL | -| `host_s` | Blank | -| `ip_s` | IP address of client connecting to MySQL | -| `user_s` | Name of user executing the query | -| `db_s` | Name of database connected to | -| `\_ResourceId` | Resource URI | --### General --Schema below applies to GENERAL, DML_SELECT, DML_NONSELECT, DML, DDL, DCL, and ADMIN event types. --> [!NOTE] -> For `sql_text`, log will be truncated if it exceeds 2048 characters. --| **Property** | **Description** | -||| -| `TenantId` | Your tenant ID | -| `SourceSystem` | `Azure` | -| `TimeGenerated [UTC]` | Time stamp when the log was recorded in UTC | -| `Type` | Type of the log. Always `AzureDiagnostics` | -| `SubscriptionId` | GUID for the subscription that the server belongs to | -| `ResourceGroup` | Name of the resource group the server belongs to | -| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` | -| `ResourceType` | `Servers` | -| `ResourceId` | Resource URI | -| `Resource` | Name of the server | -| `Category` | `MySqlAuditLogs` | -| `OperationName` | `LogEvent` | -| `LogicalServerName_s` | Name of the server | -| `event_class_s` | `general_log` | -| `event_subclass_s` | `LOG`, `ERROR`, `RESULT` (only available for MySQL 5.6) | -| `event_time` | Query start time in UTC timestamp | -| `error_code_d` | Error code if query failed. `0` means no error | -| `thread_id_d` | ID of thread that executed the query | -| `host_s` | Blank | -| `ip_s` | IP address of client connecting to MySQL | -| `user_s` | Name of user executing the query | -| `sql_text_s` | Full query text | -| `\_ResourceId` | Resource URI | --### Table access --> [!NOTE] -> Table access logs are only output for MySQL 5.7.<br>For `sql_text`, log will be truncated if it exceeds 2048 characters. --| **Property** | **Description** | -||| -| `TenantId` | Your tenant ID | -| `SourceSystem` | `Azure` | -| `TimeGenerated [UTC]` | Time stamp when the log was recorded in UTC | -| `Type` | Type of the log. Always `AzureDiagnostics` | -| `SubscriptionId` | GUID for the subscription that the server belongs to | -| `ResourceGroup` | Name of the resource group the server belongs to | -| `ResourceProvider` | Name of the resource provider. 
Always `MICROSOFT.DBFORMYSQL` | -| `ResourceType` | `Servers` | -| `ResourceId` | Resource URI | -| `Resource` | Name of the server | -| `Category` | `MySqlAuditLogs` | -| `OperationName` | `LogEvent` | -| `LogicalServerName_s` | Name of the server | -| `event_class_s` | `table_access_log` | -| `event_subclass_s` | `READ`, `INSERT`, `UPDATE`, or `DELETE` | -| `connection_id_d` | Unique connection ID generated by MySQL | -| `db_s` | Name of database accessed | -| `table_s` | Name of table accessed | -| `sql_text_s` | Full query text | -| `\_ResourceId` | Resource URI | --## Analyze logs in Azure Monitor Logs --Once your audit logs are piped to Azure Monitor Logs through Diagnostic Logs, you can perform further analysis of your audited events. Below are some sample queries to help you get started. Make sure to update the below with your server name. --- List GENERAL events on a particular server-- ```kusto - AzureDiagnostics - | where LogicalServerName_s == '<your server name>' - | where Category == 'MySqlAuditLogs' and event_class_s == "general_log" - | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s - | order by TimeGenerated asc nulls last - ``` --- List CONNECTION events on a particular server-- ```kusto - AzureDiagnostics - | where LogicalServerName_s == '<your server name>' - | where Category == 'MySqlAuditLogs' and event_class_s == "connection_log" - | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s - | order by TimeGenerated asc nulls last - ``` --- Summarize audited events on a particular server-- ```kusto - AzureDiagnostics - | where LogicalServerName_s == '<your server name>' - | where Category == 'MySqlAuditLogs' - | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s - | summarize count() by event_class_s, event_subclass_s, user_s, ip_s - ``` --- Graph the audit event type distribution on a particular server-- ```kusto - AzureDiagnostics - | where LogicalServerName_s == '<your server name>' - | where Category == 'MySqlAuditLogs' - | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s - | summarize count() by LogicalServerName_s, bin(TimeGenerated, 5m) - | render timechart - ``` --- List audited events across all MySQL servers with Diagnostic Logs enabled for audit logs-- ```kusto - AzureDiagnostics - | where Category == 'MySqlAuditLogs' - | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s - | order by TimeGenerated asc nulls last - ``` --## Next steps --- [How to configure audit logs in the Azure portal](how-to-configure-audit-logs-portal.md) |
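To sanity-check the audit configuration described above, you can read the `audit_log_*` parameters back from the server. The following is a minimal sketch using mysql-connector-python; it assumes the audit parameters are exposed as readable system variables, and the server name, admin user, and password are placeholders.

```python
# Minimal sketch: read back the audit log configuration described above.
# Assumptions: mysql-connector-python is installed, the audit_log_* parameters
# are exposed as system variables, and all names below are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="<servername>.mysql.database.azure.com",
    user="<admin>@<servername>",  # Single Server expects the user@servername format
    password="<password>",
)
cur = conn.cursor()
cur.execute("SHOW GLOBAL VARIABLES LIKE 'audit_log%'")
for name, value in cur.fetchall():
    print(f"{name} = {value}")  # for example: audit_log_enabled = ON
cur.close()
conn.close()
```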
mysql | Concepts Azure Ad Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-azure-ad-authentication.md | - Title: Active Directory authentication - Azure Database for MySQL -description: Learn about the concepts of Microsoft Entra ID for authentication with Azure Database for MySQL ----- Previously updated : 06/20/2022---# Use Microsoft Entra ID for authenticating with MySQL ----Microsoft Entra authentication is a mechanism of connecting to Azure Database for MySQL using identities defined in Microsoft Entra ID. -With Microsoft Entra authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management. --Benefits of using Microsoft Entra ID include: --- Authentication of users across Azure Services in a uniform way-- Management of password policies and password rotation in a single place-- Multiple forms of authentication supported by Microsoft Entra ID, which can eliminate the need to store passwords-- Customers can manage database permissions using external (Microsoft Entra ID) groups.-- Microsoft Entra authentication uses MySQL database users to authenticate identities at the database level-- Support of token-based authentication for applications connecting to Azure Database for MySQL--To configure and use Microsoft Entra authentication, use the following process: --1. Create and populate Microsoft Entra ID with user identities as needed. -2. Optionally associate or change the Active Directory currently associated with your Azure subscription. -3. Create a Microsoft Entra administrator for the Azure Database for MySQL server. -4. Create database users in your database mapped to Microsoft Entra identities. -5. Connect to your database by retrieving a token for a Microsoft Entra identity and logging in. --> [!NOTE] -> To learn how to create and populate Microsoft Entra ID, and then configure Microsoft Entra ID with Azure Database for MySQL, see [Configure and sign in with Microsoft Entra ID for Azure Database for MySQL](how-to-configure-sign-in-azure-ad-authentication.md). --## Architecture --The following high-level diagram summarizes how authentication works using Microsoft Entra authentication with Azure Database for MySQL. The arrows indicate communication pathways. --![authentication flow][1] --## Administrator structure --When using Microsoft Entra authentication, there are two Administrator accounts for the MySQL server; the original MySQL administrator and the Microsoft Entra administrator. Only the administrator based on a Microsoft Entra account can create the first Microsoft Entra ID contained database user in a user database. The Microsoft Entra administrator login can be a Microsoft Entra user or a Microsoft Entra group. When the administrator is a group account, it can be used by any group member, enabling multiple Microsoft Entra administrators for the MySQL server. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Microsoft Entra ID without changing the users or permissions in the MySQL server. Only one Microsoft Entra administrator (a user or group) can be configured at any time. --![admin structure][2] --## Permissions --To create new users that can authenticate with Microsoft Entra ID, you must be the designated Microsoft Entra administrator. 
This user is assigned by configuring the Microsoft Entra Administrator account for a specific Azure Database for MySQL server. --To create a new Microsoft Entra database user, you must connect as the Microsoft Entra administrator. This is demonstrated in [Configure and Login with Microsoft Entra ID for Azure Database for MySQL](how-to-configure-sign-in-azure-ad-authentication.md). --Microsoft Entra authentication is only possible if a Microsoft Entra admin was created for Azure Database for MySQL. If the Microsoft Entra admin was removed from the server, existing Microsoft Entra users created previously can no longer connect to the database using their Microsoft Entra credentials. --<a name='connecting-using-azure-ad-identities'></a> --## Connecting using Microsoft Entra identities --Microsoft Entra authentication supports the following methods of connecting to a database using Microsoft Entra identities: --- Microsoft Entra Password-- Microsoft Entra integrated-- Microsoft Entra Universal with MFA-- Using Active Directory Application certificates or client secrets-- [Managed Identity](how-to-connect-with-managed-identity.md)--Once you've authenticated against Active Directory, you then retrieve a token. This token is your password for logging in. --Note that management operations, such as adding new users, are currently only supported for Microsoft Entra user roles. --> [!NOTE] -> For more details on how to connect with an Active Directory token, see [Configure and sign in with Microsoft Entra ID for Azure Database for MySQL](how-to-configure-sign-in-azure-ad-authentication.md). --## Additional considerations --- Microsoft Entra authentication is only available for MySQL 5.7 and newer.-- Only one Microsoft Entra administrator can be configured for an Azure Database for MySQL server at any time.-- Only a Microsoft Entra administrator for MySQL can initially connect to the Azure Database for MySQL using a Microsoft Entra account. The Active Directory administrator can configure subsequent Microsoft Entra database users.-- If a user is deleted from Microsoft Entra ID, that user will no longer be able to authenticate with Microsoft Entra ID, and therefore it will no longer be possible to acquire an access token for that user. In this case, although the matching user will still be in the database, it won't be possible to connect to the server with that user.-> [!NOTE] -> Sign-in with the deleted Microsoft Entra user is still possible until the token expires (up to 60 minutes from token issuance). If you also remove the user from Azure Database for MySQL, this access is revoked immediately. -- If the Microsoft Entra admin is removed from the server, the server will no longer be associated with a Microsoft Entra tenant, and therefore all Microsoft Entra logins will be disabled for the server. Adding a new Microsoft Entra admin from the same tenant will re-enable Microsoft Entra logins.-- Azure Database for MySQL matches access tokens to the Azure Database for MySQL user using the user's unique Microsoft Entra user ID, as opposed to using the username. This means that if a Microsoft Entra user is deleted in Microsoft Entra ID and a new user created with the same name, Azure Database for MySQL considers that a different user. 
Therefore, if a user is deleted from Microsoft Entra ID and a new user with the same name is added, the new user won't be able to connect as the existing database user.--> [!NOTE] -> A subscription containing an Azure Database for MySQL server with Microsoft Entra authentication enabled can't be transferred to another tenant or directory. --## Next steps --- To learn how to create and populate Microsoft Entra ID, and then configure Microsoft Entra ID with Azure Database for MySQL, see [Configure and sign in with Microsoft Entra ID for Azure Database for MySQL](how-to-configure-sign-in-azure-ad-authentication.md).-- For an overview of logins and database users for Azure Database for MySQL, see [Create users in Azure Database for MySQL](how-to-create-users.md).--<!--Image references--> --[1]: ./media/concepts-azure-ad-authentication/authentication-flow.png -[2]: ./media/concepts-azure-ad-authentication/admin-structure.png |
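As a concrete illustration of the token-based flow above, the sketch below acquires a Microsoft Entra access token and passes it as the MySQL password. This is an assumption-laden sketch: it uses the azure-identity and mysql-connector-python packages, the `https://ossrdbms-aad.database.windows.net/.default` token scope commonly used for Azure Database for MySQL, and placeholder user and server names; the cleartext auth plugin requirement is also an assumption here.

```python
# Sketch of token-based sign-in with a Microsoft Entra identity.
# Assumptions: azure-identity and mysql-connector-python are installed; the token
# scope below is the one commonly used for Azure Database for MySQL; the
# cleartext auth plugin (sent over SSL) is required; all names are placeholders.
from azure.identity import DefaultAzureCredential
import mysql.connector

credential = DefaultAzureCredential()
token = credential.get_token("https://ossrdbms-aad.database.windows.net/.default")

conn = mysql.connector.connect(
    host="<servername>.mysql.database.azure.com",
    user="<entra-user>@<servername>",    # Single Server appends @<servername> to logins
    password=token.token,                # the access token serves as the password
    auth_plugin="mysql_clear_password",  # token must reach the server as cleartext, over SSL
)
print("Connected:", conn.is_connected())
conn.close()
```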
mysql | Concepts Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-backup.md | - Title: Backup and restore - Azure Database for MySQL -description: Learn about automatic backups and restoring your Azure Database for MySQL server. ------ Previously updated : 06/20/2022---# Backup and restore in Azure Database for MySQL ----Azure Database for MySQL automatically creates server backups and stores them in user-configured locally redundant or geo-redundant storage. Backups can be used to restore your server to a point-in-time. Backup and restore are an essential part of any business continuity strategy because they protect your data from accidental corruption or deletion. --## Backups --Azure Database for MySQL takes backups of the data files and the transaction log. These backups allow you to restore a server to any point-in-time within your configured backup retention period. The default backup retention period is seven days. You can [optionally configure it](how-to-restore-server-portal.md#set-backup-configuration) up to 35 days. All backups are encrypted using AES 256-bit encryption. --These backup files aren't user-exposed and can't be exported. These backups can only be used for restore operations in Azure Database for MySQL. You can use [mysqldump](concepts-migrate-dump-restore.md) to copy a database. --The backup type and frequency depend on the backend storage for the servers. --### Backup type and frequency --#### Basic storage servers --Basic storage is the backend storage supporting [Basic tier servers](concepts-pricing-tiers.md). Backups on Basic storage servers are snapshot-based. A full database snapshot is performed daily. There are no differential backups performed for basic storage servers; all snapshot backups are full database backups only. --Transaction log backups occur every five minutes. --#### General purpose storage v1 servers (supports up to 4-TB storage) --General purpose storage is the backend storage supporting [General Purpose](concepts-pricing-tiers.md) and [Memory Optimized tier](concepts-pricing-tiers.md) servers. For servers with general purpose storage up to 4 TB, full backups occur once every week. Differential backups occur twice a day. Transaction log backups occur every five minutes. The backups on general purpose storage up to 4-TB storage aren't snapshot-based and consume IO bandwidth at the time of backup. For large databases (> 1 TB) on 4-TB storage, we recommend that you consider --- Provisioning more IOPS to account for backup IOs, OR-- Alternatively, migrating to general purpose storage that supports up to 16-TB storage if the underlying storage infrastructure is available in your preferred [Azure regions](./concepts-pricing-tiers.md#storage). There's no additional cost for general purpose storage that supports up to 16-TB storage. For assistance with migration to 16-TB storage, please open a support ticket from the Azure portal.--#### General purpose storage v2 servers (supports up to 16-TB storage) --In a subset of [Azure regions](./concepts-pricing-tiers.md#storage), all newly provisioned servers can support general purpose storage up to 16-TB storage. In other words, 16-TB storage is the default general purpose storage for all the [regions](concepts-pricing-tiers.md#storage) where it's supported. Backups on these 16-TB storage servers are snapshot-based. The first snapshot backup is scheduled immediately after a server is created. 
Snapshot backups are taken once daily. Transaction log backups occur every five minutes. --For more information about Basic and General purpose storage, see the [storage documentation](./concepts-pricing-tiers.md#storage). --### Backup retention --Backups are retained based on the backup retention period setting on the server. You can select a retention period of 7 to 35 days. The default retention period is 7 days. You can set the retention period during server creation or later by updating the backup configuration using the [Azure portal](./how-to-restore-server-portal.md#set-backup-configuration) or [Azure CLI](./how-to-restore-server-cli.md#set-backup-configuration). --The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. The backup retention period can also be treated as a recovery window from a restore perspective. All backups required to perform a point-in-time restore within the backup retention period are retained in backup storage. For example, if the backup retention period is set to 7 days, the recovery window is the last 7 days. In this scenario, all the backups required to restore the server in the last 7 days are retained. With a backup retention window of seven days: --- General purpose storage v1 servers (supporting up to 4-TB storage) will retain up to 2 full database backups, all the differential backups, and transaction log backups performed since the earliest full database backup.-- General purpose storage v2 servers (supporting up to 16-TB storage) will retain the full database snapshots and transaction log backups for the last 8 days.--#### Long-term retention --Long-term retention of backups beyond 35 days isn't currently supported natively by the service. You have the option to use mysqldump to take backups and store them for long-term retention. Our support team has published a [step-by-step article](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/automate-backups-of-your-azure-database-for-mysql-server-to/ba-p/1791157) describing how you can achieve it. --### Backup redundancy options --Azure Database for MySQL provides the flexibility to choose between locally redundant or geo-redundant backup storage in the General Purpose and Memory Optimized tiers. When the backups are stored in geo-redundant backup storage, they're not only stored within the region in which your server is hosted, but are also replicated to a [paired data center](../../availability-zones/cross-region-replication-azure.md). This geo-redundancy provides better protection and the ability to restore your server in a different region in the event of a disaster. The Basic tier only offers locally redundant backup storage. --> [!NOTE] ->For the following regions - Central India, France Central, UAE North, and South Africa North - General purpose storage v2 is in public preview. If you create a source server on General purpose storage v2 (supporting up to 16-TB storage) in these regions, enabling geo-redundant backup isn't supported. --#### Moving from locally redundant to geo-redundant backup storage --Configuring locally redundant or geo-redundant storage for backup is only allowed during server creation. Once the server is provisioned, you can't change the backup storage redundancy option. 
In order to move your backup storage from locally redundant storage to geo-redundant storage, creating a new server and migrating the data using [dump and restore](concepts-migrate-dump-restore.md) is the only supported option. --### Backup storage cost --Azure Database for MySQL provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any additional backup storage used is charged in GB per month. For example, if you have provisioned a server with 250 GB of storage, you have 250 GB of additional storage available for server backups at no additional charge. Storage consumed for backups beyond 250 GB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/mysql/). --You can use the [Backup Storage used](concepts-monitoring.md) metric in Azure Monitor, available via the Azure portal, to monitor the backup storage consumed by a server. The Backup Storage used metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service-managed and explained earlier. Heavy transactional activity on the server can cause backup storage usage to increase irrespective of the total database size. For geo-redundant storage, backup storage usage is twice that of locally redundant storage. --The primary means of controlling the backup storage cost is by setting the appropriate backup retention period and choosing the right backup redundancy options to meet your desired recovery goals. You can select a retention period from a range of 7 to 35 days. General Purpose and Memory Optimized servers can choose to have geo-redundant storage for backups. --## Restore --In Azure Database for MySQL, performing a restore creates a new server from the original server's backups and restores all databases contained in the server. Restore isn't currently supported if the original server is in a stopped state. --There are two types of restore available: --- **Point-in-time restore** is available with either backup redundancy option and creates a new server in the same region as your original server, utilizing the combination of full and transaction log backups.-- **Geo-restore** is available only if you configured your server for geo-redundant storage, and it allows you to restore your server to a different region utilizing the most recent backup taken.--The estimated time for the recovery of the server depends on several factors: -* The size of the databases -* The number of transaction logs involved -* The amount of activity that needs to be replayed to recover to the restore point -* The network bandwidth if the restore is to a different region -* The number of concurrent restore requests being processed in the target region -* The presence of a primary key in the tables in the database. For faster recovery, consider adding a primary key to all the tables in your database. 
To check whether your tables have a primary key, you can use the following query: -```sql -select tab.table_schema as database_name, tab.table_name -from information_schema.tables tab -left join information_schema.table_constraints tco -  on tab.table_schema = tco.table_schema -  and tab.table_name = tco.table_name -  and tco.constraint_type = 'PRIMARY KEY' -where tco.constraint_type is null -  and tab.table_schema not in ('mysql', 'information_schema', 'performance_schema', 'sys') -  and tab.table_type = 'BASE TABLE' -order by tab.table_schema, tab.table_name; -``` -For a large or very active database, the restore might take several hours. If there is a prolonged outage in a region, it's possible that a high number of geo-restore requests will be initiated for disaster recovery. When there are many requests, the recovery time for individual databases can increase. Most database restores finish in less than 12 hours. --> [!IMPORTANT] -> Deleted servers can be restored only within **five days** of deletion, after which the backups are deleted. The database backup can be accessed and restored only from the Azure subscription hosting the server. To restore a dropped server, refer to the [documented steps](how-to-restore-dropped-server.md). To protect server resources post-deployment from accidental deletion or unexpected changes, administrators can leverage [management locks](../../azure-resource-manager/management/lock-resources.md). --### Point-in-time restore --Independent of your backup redundancy option, you can perform a restore to any point in time within your backup retention period. A new server is created in the same Azure region as the original server. It's created with the original server's configuration for the pricing tier, compute generation, number of vCores, storage size, backup retention period, and backup redundancy option. --> [!NOTE] -> There are two server parameters that are reset to default values (and are not copied over from the primary server) after the restore operation: -> -> - time_zone - This value is set to the DEFAULT value **SYSTEM** -> - event_scheduler - The event_scheduler is set to **OFF** on the restored server -> -> You'll need to set these server parameters by reconfiguring the [server parameter](how-to-server-parameters.md) --Point-in-time restore is useful in multiple scenarios. For example, when a user accidentally deletes data, drops an important table or database, or if an application accidentally overwrites good data with bad data due to an application defect. --You may need to wait for the next transaction log backup to be taken before you can restore to a point in time within the last five minutes. --### Geo-restore --You can restore a server to another Azure region where the service is available if you have configured your server for geo-redundant backups. -- General purpose storage v1 servers (supporting up to 4-TB storage) can be restored to the geo-paired region, or to any Azure region that supports the Azure Database for MySQL - Single Server service.-- General purpose storage v2 servers (supporting up to 16-TB storage) can only be restored to Azure regions that support General purpose storage v2 server infrastructure. -Review [Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md#storage) for the list of supported regions. --Geo-restore is the default recovery option when your server is unavailable because of an incident in the region where the server is hosted. 
If a large-scale incident in a region results in unavailability of your database application, you can restore a server from the geo-redundant backups to a server in any other region. Geo-restore utilizes the most recent backup of the server. There's a delay between when a backup is taken and when it's replicated to a different region. This delay can be up to an hour, so if a disaster occurs, there can be up to one hour of data loss. --> [!IMPORTANT] ->If a geo-restore is performed for a newly created server, the initial backup synchronization may take more than 24 hours, depending on data size, because the initial full snapshot backup copy takes much longer. Subsequent snapshot backups are incremental copies, so restores are faster 24 hours after server creation. If you're evaluating geo-restores to define your RTO, we recommend that you wait and evaluate geo-restore **only after 24 hours** of server creation for better estimates. --During geo-restore, the server configurations that can be changed include compute generation, vCore, backup retention period, and backup redundancy options. Changing the pricing tier (Basic, General Purpose, or Memory Optimized) or storage size during geo-restore isn't supported. --The estimated time of recovery depends on several factors, including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time. The recovery time is usually less than 12 hours. --### Perform post-restore tasks --After a restore from either recovery mechanism, you should perform the following tasks to get your users and applications back up and running: --- If the new server is meant to replace the original server, redirect clients and client applications to the new server-- Ensure appropriate VNet rules are in place for users to connect. These rules aren't copied over from the original server.-- Ensure appropriate logins and database-level permissions are in place-- Configure alerts, as appropriate--## Next steps --- To learn more about business continuity, see the [business continuity overview](concepts-business-continuity.md).-- To restore to a point-in-time using the Azure portal, see [restore server to a point-in-time using the Azure portal](how-to-restore-server-portal.md).-- To restore to a point-in-time using Azure CLI, see [restore server to a point-in-time using CLI](how-to-restore-server-cli.md). |
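For the long-term retention gap noted above (backups beyond 35 days), a scheduled logical dump is the usual workaround. Below is a hedged sketch that shells out to mysqldump; it assumes the MySQL client tools are installed locally and uses placeholder server, user, and database names throughout.

```python
# Sketch: take a logical backup for retention beyond the 35-day service limit.
# Assumptions: mysqldump is on PATH; server, user, password, and database names
# are placeholders; the dump is written locally before upload elsewhere.
import subprocess
from datetime import date

dump_file = f"mydb-{date.today().isoformat()}.sql"
with open(dump_file, "w") as out:
    subprocess.run(
        [
            "mysqldump",
            "--host=<servername>.mysql.database.azure.com",
            "--user=<admin>@<servername>",
            "--password=<password>",
            "--ssl-mode=REQUIRED",   # keep the connection encrypted
            "--single-transaction",  # consistent snapshot without locking InnoDB tables
            "mydb",
        ],
        stdout=out,
        check=True,
    )
# Next step (not shown): copy dump_file to long-term storage, such as an Azure Storage container.
```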
mysql | Concepts Business Continuity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-business-continuity.md | - Title: Business continuity - Azure Database for MySQL -description: Learn about business continuity (point-in-time restore, data center outage, geo-restore) when using Azure Database for MySQL service. ----- Previously updated : 06/20/2022---# Overview of business continuity with Azure Database for MySQL - Single Server ----This article describes the capabilities that Azure Database for MySQL provides for business continuity and disaster recovery. Learn about options for recovering from disruptive events that could cause data loss or cause your database and application to become unavailable. Learn what to do when a user or application error affects data integrity, an Azure region has an outage, or your application requires maintenance. --## Features that you can use to provide business continuity --As you develop your business continuity plan, you need to understand the maximum acceptable time before the application fully recovers after the disruptive event - this is your Recovery Time Objective (RTO). You also need to understand the maximum amount of recent data updates (time interval) the application can tolerate losing when recovering after the disruptive event - this is your Recovery Point Objective (RPO). --Azure Database for MySQL single server provides business continuity and disaster recovery features that include geo-redundant backups with the ability to initiate geo-restore, and deploying read replicas in a different region. Each has different characteristics for the recovery time and the potential data loss. With the [geo-restore](concepts-backup.md) feature, a new server is created using the backup data that is replicated from another region. The overall time it takes to restore and recover depends on the size of the database and the amount of logs to recover. The overall time to establish the server varies from a few minutes to a few hours. With [read replicas](concepts-read-replicas.md), transaction logs from the primary are asynchronously streamed to the replica. In the event of a primary database outage due to a zone-level or a region-level fault, failing over to the replica provides a shorter RTO and reduced data loss. --> [!NOTE] -> The lag between the primary and the replica depends on the latency between the sites, the amount of data to be transmitted, and most importantly on the write workload of the primary server. Heavy write workloads can generate significant lag. -> -> Because of the asynchronous nature of replication used for read replicas, they **should not** be considered a High Availability (HA) solution, since higher lag can mean higher RTO and RPO. Read replicas can act as an HA alternative only for workloads where the lag remains small through the peak and off-peak times of the workload. Otherwise, read replicas are intended for true read scale for read-heavy workloads and for disaster recovery (DR) scenarios. 
--The following table compares RTO and RPO in a **typical workload** scenario: --| **Capability** | **Basic** | **General Purpose** | **Memory optimized** | -| :: | :-: | :--: | :: | -| Point in Time Restore from backup | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min| Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min | -| Geo-restore from geo-replicated backups | Not supported | RTO - Varies <br/>RPO < 1 h | RTO - Varies <br/>RPO < 1 h | -| Read replicas | RTO - Minutes* <br/>RPO < 5 min* | RTO - Minutes* <br/>RPO < 5 min*| RTO - Minutes* <br/>RPO < 5 min*| -- \* RTO and RPO **can be much higher** in some cases depending on various factors including latency between sites, the amount of data to be transmitted, and importantly primary database write workload. --## Recover a server after a user or application error --You can use the service's backups to recover a server from various disruptive events. A user may accidentally delete some data, inadvertently drop an important table, or even drop an entire database. An application might accidentally overwrite good data with bad data due to an application defect, and so on. --You can perform a point-in-time-restore to create a copy of your server to a known good point in time. This point in time must be within the backup retention period you have configured for your server. After the data is restored to the new server, you can either replace the original server with the newly restored server or copy the needed data from the restored server into the original server. --> [!IMPORTANT] -> Deleted servers can be restored only within **five days** of deletion after which the backups are deleted. The database backup can be accessed and restored only from the Azure subscription hosting the server. To restore a dropped server, refer [documented steps](how-to-restore-dropped-server.md). To protect server resources, post deployment, from accidental deletion or unexpected changes, administrators can leverage [management locks](../../azure-resource-manager/management/lock-resources.md). --## Recover from an Azure regional data center outage --Although rare, an Azure data center can have an outage. When an outage occurs, it causes a business disruption that might only last a few minutes, but could last for hours. --One option is to wait for your server to come back online when the data center outage is over. This works for applications that can afford to have the server offline for some period of time, for example a development environment. When data center has an outage, you do not know how long the outage might last, so this option only works if you don't need your server for a while. --## Geo-restore --The geo-restore feature restores the server using geo-redundant backups. The backups are hosted in your server's [paired region](../../availability-zones/cross-region-replication-azure.md). These backups are accessible even when the region your server is hosted in is offline. You can restore from these backups to any other region and bring your server back online. Learn more about geo-restore from the [backup and restore concepts article](concepts-backup.md). --> [!IMPORTANT] -> Geo-restore is only possible if you provisioned the server with geo-redundant backup storage. 
If you wish to switch from locally redundant to geo-redundant backups for an existing server, you must take a dump of your existing server using mysqldump and restore it to a newly created server configured with geo-redundant backups. --## Cross-region read replicas --You can use cross-region read replicas to enhance your business continuity and disaster recovery planning. Read replicas are updated asynchronously using MySQL's binary log replication technology. Learn more about read replicas, available regions, and how to fail over from the [read replicas concepts article](concepts-read-replicas.md). --## FAQ --### Where does Azure Database for MySQL store customer data? -By default, Azure Database for MySQL doesn't move or store customer data out of the region it's deployed in. However, customers can optionally choose to enable [geo-redundant backups](concepts-backup.md#backup-redundancy-options) or create a [cross-region read replica](concepts-read-replicas.md#cross-region-replication) for storing data in another region. --## Next steps --- Learn more about the [automated backups in Azure Database for MySQL](concepts-backup.md).-- Learn how to restore using [the Azure portal](how-to-restore-server-portal.md) or [the Azure CLI](how-to-restore-server-cli.md).-- Learn about [read replicas in Azure Database for MySQL](concepts-read-replicas.md). |
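Since the note above warns that replica lag drives the effective RTO/RPO when read replicas are used for DR, it can help to monitor lag directly. A minimal sketch follows, assuming mysql-connector-python and placeholder names; it also assumes the replica still reports its state under the legacy SHOW SLAVE STATUS name.

```python
# Sketch: check replication lag on a read replica, since lag determines the
# effective RTO/RPO when replicas are used for DR. Assumptions:
# mysql-connector-python is installed and all names are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="<replica>.mysql.database.azure.com",
    user="<admin>@<replica>",
    password="<password>",
)
cur = conn.cursor(dictionary=True)
cur.execute("SHOW SLAVE STATUS")  # the service reports replica state under this legacy name
row = cur.fetchone()
if row:
    print("Seconds_Behind_Master:", row["Seconds_Behind_Master"])
else:
    print("This server is not acting as a replica.")
cur.close()
conn.close()
```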
mysql | Concepts Certificate Rotation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-certificate-rotation.md | - Title: Certificate rotation for Azure Database for MySQL -description: Learn about the upcoming changes of root certificate changes that will affect Azure Database for MySQL ----- Previously updated : 06/20/2022---# Understanding the changes in the Root CA change for Azure Database for MySQL single server ----As part of standard maintenance and security best practices, Azure Database for MySQL single server will complete the root certificate change starting October 2022. This article gives you more details about the changes, the resources affected, and the steps needed to ensure that your application maintains connectivity to your database server. --> [!NOTE] -> This article applies to [Azure Database for MySQL - Single Server](single-server-overview.md) ONLY. For [Azure Database for MySQL - Flexible Server](../flexible-server/overview.md), the certificate needed to communicate over SSL is [DigiCert Global Root CA](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem) -> -> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. -> --#### Why is a root certificate update required? --Azure Database for MySQL users can only use the predefined certificate to connect to their MySQL server, which is located [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem). However, the [Certificate Authority (CA) Browser forum](https://cabforum.org/) recently published reports of multiple certificates issued by CA vendors being non-compliant. --Per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs, and signed by CA certificates from those compliant CAs. Since Azure Database for MySQL used one of these non-compliant certificates, we needed to rotate the certificate to the compliant version to minimize the potential threat to your MySQL servers. --#### Do I need to make any changes on my client to maintain connectivity? --If you followed the steps mentioned under [Create a combined CA certificate](#create-a-combined-ca-certificate) below, you can continue to connect as long as the **BaltimoreCyberTrustRoot certificate is not removed** from the combined CA certificate. **To maintain connectivity, we recommend that you retain the BaltimoreCyberTrustRoot in your combined CA certificate until further notice.** --###### Create a combined CA certificate --To avoid interruption of your application's availability as a result of certificates being unexpectedly revoked, or to update a certificate that has been revoked, use the following steps. The idea is to create a new *.pem* file that combines the current cert and the new one, so that during SSL cert validation either of the allowed values can be used: --1. Download BaltimoreCyberTrustRoot & DigiCertGlobalRootG2 Root CA from the following links: -- * [https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) - * [https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) --2. 
Generate a combined CA certificate store that includes both the **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** certificates. -- * For Java (MySQL Connector/J) users, execute: -- ```console - keytool -importcert -alias MySQLServerCACert -file D:\BaltimoreCyberTrustRoot.crt.pem -keystore truststore -storepass password -noprompt - ``` -- ```console - keytool -importcert -alias MySQLServerCACert2 -file D:\DigiCertGlobalRootG2.crt.pem -keystore truststore -storepass password -noprompt - ``` -- Then replace the original keystore file with the newly generated one: -- * System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file"); - * System.setProperty("javax.net.ssl.trustStorePassword","password"); -- * For .NET (MySQL Connector/NET, MySQLConnector) users, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in the Windows Certificate Store, under Trusted Root Certification Authorities. If any certificates don't exist, import the missing certificate. -- :::image type="content" source="../flexible-server/media/overview-single/netconnecter-cert.png" alt-text="Azure Database for MySQL .NET cert diagram"::: -- * For .NET users on Linux using SSL_CERT_DIR, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in the directory indicated by SSL_CERT_DIR. If any certificates don't exist, create the missing certificate file. -- * For other (MySQL Client/MySQL Workbench/C/C++/Go/Python/Ruby/PHP/NodeJS/Perl/Swift) users, you can merge the two CA certificate files into the following format: -- ``` - -----BEGIN CERTIFICATE----- - (Root CA1: BaltimoreCyberTrustRoot.crt.pem) - -----END CERTIFICATE----- - -----BEGIN CERTIFICATE----- - (Root CA2: DigiCertGlobalRootG2.crt.pem) - -----END CERTIFICATE----- - ``` --3. Replace the original root CA pem file with the combined root CA file and restart your application/client. -- In the future, after the new certificate is deployed on the server side, you can change your CA pem file to DigiCertGlobalRootG2.crt.pem. --> [!NOTE] -> Don't drop or alter the **Baltimore certificate** until the cert change is made. We'll send a communication after the change is done, and then it will be safe to drop the **Baltimore certificate**. --#### What if we removed the BaltimoreCyberTrustRoot certificate? --You'll start to encounter connectivity errors while connecting to your Azure Database for MySQL server. You'll need to [configure SSL](how-to-configure-ssl.md) with the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate again to maintain connectivity. --## Frequently asked questions --#### If I'm not using SSL/TLS, do I still need to update the root CA? --No actions are required if you aren't using SSL/TLS. --#### When will my single server instance undergo the root certificate change? --The migration from **BaltimoreCyberTrustRoot** to **DigiCertGlobalRootG2** will be carried out across all regions of Azure starting **October 2022** in phases. -To make sure that you don't lose connectivity to your server, follow the steps mentioned under [Create a combined CA certificate](#create-a-combined-ca-certificate). -The combined CA certificate will allow connectivity over SSL to your single server instance with either of these two certificates. ---#### When can I remove the BaltimoreCyberTrustRoot certificate completely? --Once the migration is completed successfully across all Azure regions, we'll send a communication, after which you're safe to switch to the single CA **DigiCertGlobalRootG2** certificate. 
---#### I don't specify any CA cert while connecting to my single server instance over SSL, do I still need to perform [the steps](#create-a-combined-ca-certificate) mentioned above? --If you have both CA root certs in your [trusted root store](/windows-hardware/drivers/install/trusted-root-certification-authorities-certificate-store), then no further actions are required. This also applies to client drivers that use the local store for accessing the root CA certificate. ---#### If I'm using SSL/TLS, do I need to restart my database server to update the root CA? --No, you don't need to restart the database server to start using the new certificate. This root certificate is a client-side change, and the incoming client connections need to use the new certificate to ensure that they can connect to the database server. --#### How do I know if I'm using SSL/TLS with root certificate verification? --You can identify whether your connections verify the root certificate by reviewing your connection string. --* If your connection string includes `sslmode=verify-ca` or `sslmode=verify-identity`, you need to update the certificate. -* If your connection string includes `sslmode=disable`, `sslmode=allow`, `sslmode=prefer`, or `sslmode=require`, you don't need to update certificates. -* If your connection string doesn't specify sslmode, you don't need to update certificates. --If you're using a client that abstracts the connection string away, review the client's documentation to understand whether it verifies certificates. --#### What is the impact of using App Service with Azure Database for MySQL? --For Azure app services connecting to Azure Database for MySQL, there are two possible scenarios depending on how you're using SSL with your application. --* This new certificate has been added to App Service at the platform level. If you're using the SSL certificates included on the App Service platform in your application, then no action is needed. This is the most common scenario. -* If you're explicitly including the path to the SSL cert file in your code, then you need to download the new cert, produce a combined certificate as mentioned above, and use that certificate file. A good example of this scenario is when you use custom containers in App Service, as shared in the [App Service documentation](../../app-service/tutorial-multi-container-app.md#configure-database-variables-in-wordpress). This is an uncommon scenario, but we have seen some users do this. --#### What is the impact of using Azure Kubernetes Services (AKS) with Azure Database for MySQL? --If you're trying to connect to Azure Database for MySQL using Azure Kubernetes Services (AKS), it's similar to accessing from a dedicated customer host environment. Refer to the steps [here](../../aks/ingress-own-tls.md). --#### What is the impact of using Azure Data Factory to connect to Azure Database for MySQL? --For a connector using Azure Integration Runtime, the connector uses certificates in the Windows Certificate Store in the Azure-hosted environment. These certificates are already compatible with the newly applied certificates, and therefore no action is needed. --For a connector using Self-hosted Integration Runtime where you explicitly include the path to the SSL cert file in your connection string, you'll need to download the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and update the connection string to use it. --#### Do I need to plan a database server maintenance downtime for this change? --No. 
Since the change is only on the client side to connect to the database server, there's no maintenance downtime needed for the database server for this change. --#### How often does Microsoft update their certificates, and what is the expiry policy? --The certificates used by Azure Database for MySQL are provided by trusted Certificate Authorities (CAs), so support for these certificates is tied to their support by the CA. The [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate is scheduled to expire in 2025, so Microsoft will need to perform a certificate change before the expiry. If there are unforeseen bugs in these predefined certificates, Microsoft will need to rotate the certificate as soon as possible, similar to the change performed on February 15, 2021, to ensure the service is secure and compliant at all times. --#### If I'm using read replicas, do I need to perform this update only on the source server or also on the read replicas? --Since this update is a client-side change, if clients read data from the replica server, you'll need to apply the changes for those clients as well. --#### If I'm using Data-in replication, do I need to perform any action? --If you're using [Data-in replication](concepts-data-in-replication.md) to connect to Azure Database for MySQL, there are two things to consider: --* If the data replication is from a virtual machine (on-premises or Azure virtual machine) to Azure Database for MySQL, you need to check whether SSL is being used to create the replica. Run **SHOW SLAVE STATUS** and check the following settings. -- ``` - Master_SSL_Allowed : Yes - Master_SSL_CA_File : ~\azure_mysqlservice.pem - Master_SSL_CA_Path : - Master_SSL_Cert : ~\azure_mysqlclient_cert.pem - Master_SSL_Cipher : - Master_SSL_Key : ~\azure_mysqlclient_key.pem - ``` -- If you see that a certificate is provided for the CA_file, SSL_Cert, and SSL_Key, you'll need to update the file by adding the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and creating a combined cert file. --* If the data replication is between two Azure Database for MySQL servers, then you'll need to reset the replica by executing **CALL mysql.az_replication_change_master** and providing the new dual root certificate as the last parameter [master_ssl_ca](how-to-data-in-replication.md#link-source-and-replica-servers-to-start-data-in-replication). --#### Is there a server-side query to determine whether SSL is being used? --To verify whether you're using an SSL connection to connect to the server, refer to [SSL verification](how-to-configure-ssl.md#step-4-verify-the-ssl-connection). --#### Is there an action needed if I already have the DigiCertGlobalRootG2 in my certificate file? --No. There's no action needed if your certificate file already has the **DigiCertGlobalRootG2**. --#### Why do I need to update my root certificate if I'm using the PHP driver with [enableRedirect](./how-to-redirection.md)? -To address compliance requirements, the CA certificates of the host server were changed from BaltimoreCyberTrustRoot to DigiCertGlobalRootG2. With this update, database connections using the PHP client driver with enableRedirect can no longer connect to the server, as the client devices are unaware of the certificate change and the new root CA details. Client devices that use PHP redirection drivers connect directly to the host server, bypassing the gateway. 
Refer to this [link](single-server-overview.md#high-availability) for more on the architecture of Azure Database for MySQL single server. --#### What if I have further questions? --For questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforMySQL@service.microsoft.com). If you have a support plan and you need technical help, [contact us](mailto:AzureDatabaseforMySQL@service.microsoft.com). |
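To validate the combined CA file from the steps above before the rotation reaches your region, you can force certificate verification at connect time. A minimal sketch follows, assuming mysql-connector-python; the file path and credentials are placeholders.

```python
# Sketch: connect while validating the server certificate against the combined
# CA file built above, so the connection works whichever root (Baltimore or
# DigiCert G2) the server presents. Assumptions: mysql-connector-python is
# installed; the CA path and credentials are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="<servername>.mysql.database.azure.com",
    user="<admin>@<servername>",
    password="<password>",
    ssl_ca="/path/to/combined-ca.pem",  # BaltimoreCyberTrustRoot + DigiCertGlobalRootG2
    ssl_verify_cert=True,               # fail fast if the chain doesn't validate
)
print("Connected with verified TLS:", conn.is_connected())
conn.close()
```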
mysql | Concepts Compatibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-compatibility.md | - Title: Driver and tools compatibility - Azure Database for MySQL -description: This article describes the MySQL drivers and management tools that are compatible with Azure Database for MySQL. ----- Previously updated : 06/20/2022---# MySQL drivers and management tools compatible with Azure Database for MySQL ----This article describes the drivers and management tools that are compatible with Azure Database for MySQL single server. --> [!NOTE] -> This article is only applicable to Azure Database for MySQL single server to ensure drivers are compatible with [connectivity architecture](concepts-connectivity-architecture.md) of Single Server service. [Azure Database for MySQL Flexible Server](../flexible-server/overview.md) is compatible with all the drivers and tools supported and compatible with MySQL community edition. --## MySQL Drivers -Azure Database for MySQL uses the world's most popular community edition of MySQL database. As such, it's compatible with a wide variety of programming languages and drivers. The goal is to support the three most recent versions MySQL drivers, and efforts with authors from the open-source community to constantly improve the functionality and usability of MySQL drivers continue. A list of drivers that have been tested and found to be compatible with Azure Database for MySQL 5.6 and 5.7 is provided in the following table: --| **Programming Language** | **Driver** | **Links** | **Compatible Versions** | **Incompatible Versions** | **Notes** | -| :-- | : | :-- | :- | : | :-- | -| PHP | mysqli, pdo_mysql, mysqlnd | https://secure.php.net/downloads.php | 5.5, 5.6, 7.x | 5.3 | For PHP 7.0 connection with SSL MySQLi, add MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT in the connection string. <br> ```mysqli_real_connect($conn, $host, $username, $password, $db_name, 3306, NULL, MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT);```<br> PDO set: ```PDO::MYSQL_ATTR_SSL_VERIFY_SERVER_CERT``` option to false.| -| .NET | Async MySQL Connector for .NET | https://github.com/mysql-net/MySqlConnector <br> [Installation package from NuGet](https://www.nuget.org/packages/MySqlConnector/) | 0.27 and after | 0.26.5 and before | | -| .NET | MySQL Connector/NET | https://github.com/mysql/mysql-connector-net | 6.6.3, 7.0, 8.0 | | An encoding bug may cause connections to fail on some non-UTF8 Windows systems. | -| Node.js | mysqljs | https://github.com/mysqljs/mysql/ <br> Installation package from NPM:<br> Run `npm install mysql` from NPM | 2.15 | 2.14.1 and before | | -| Node.js | node-mysql2 | https://github.com/sidorares/node-mysql2 | 1.3.4+ | | | -| Go | Go MySQL Driver | https://github.com/go-sql-driver/mysql/releases | 1.3, 1.4 | 1.2 and before | Use `allowNativePasswords=true` in the connection string for version 1.3. Version 1.4 contains a fix and `allowNativePasswords=true` is no longer required. 
| -| Python | MySQL Connector/Python | https://pypi.python.org/pypi/mysql-connector-python | 1.2.3, 2.0, 2.1, 2.2, use 8.0.16+ with MySQL 8.0 | 1.2.2 and before | | -| Python | PyMySQL | https://pypi.org/project/PyMySQL/ | 0.7.11, 0.8.0, 0.8.1, 0.9.3+ | 0.9.0 - 0.9.2 (regression in web2py) | | -| Java | MariaDB Connector/J | https://downloads.mariadb.org/connector-java/ | 2.1, 2.0, 1.6 | 1.5.5 and before | | -| Java | MySQL Connector/J | https://github.com/mysql/mysql-connector-j | 5.1.21+, use 8.0.17+ with MySQL 8.0 | 5.1.20 and below | | -| C | MySQL Connector/C (libmysqlclient) | https://dev.mysql.com/doc/c-api/5.7/en/c-api-implementations.html | 6.0.2+ | | | -| C | MySQL Connector/ODBC (myodbc) | https://github.com/mysql/mysql-connector-odbc | 3.51.29+ | | | -| C++ | MySQL Connector/C++ | https://github.com/mysql/mysql-connector-cpp | 1.1.9+ | 1.1.3 and below | | -| C++ | MySQL++| https://github.com/tangentsoft/mysqlpp | 3.2.3+ | | | -| Ruby | mysql2 | https://github.com/brianmario/mysql2 | 0.4.10+ | | | -| R | RMySQL | https://github.com/rstats-db/RMySQL | 0.10.16+ | | | -| Swift | mysql-swift | https://github.com/novi/mysql-swift | 0.7.2+ | | | -| Swift | vapor/mysql | https://github.com/vapor/mysql-kit | 2.0.1+ | | | --## Management Tools -The compatibility advantage extends to database management tools as well. Your existing tools should continue to work with Azure Database for MySQL, as long as the database manipulation operates within the confines of user permissions. Four common database management tools that have been tested and found to be compatible with Azure Database for MySQL 5.6 and 5.7 are listed in the following table: --| | **MySQL Workbench 6.x and up** | **Navicat 12** | **PHPMyAdmin 4.x and up** | **dbForge Studio for MySQL 9.0** | -| :- | :-- | :- | :-| :- | -| **Create, Update, Read, Write, Delete** | X | X | X | X | -| **SSL Connection** | X | X | X | X | -| **SQL Query Auto Completion** | X | X | | X | -| **Import and Export Data** | X | X | X | X | -| **Export to Multiple Formats** | X | X | X | X | -| **Backup and Restore** | | X | | X | -| **Display Server Parameters** | X | X | X | X | -| **Display Client Connections** | X | X | X | X | --## Next steps --- [Troubleshoot connection issues to Azure Database for MySQL](how-to-troubleshoot-common-connection-issues.md) |
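As a quick compatibility check against the driver table above, here's a minimal PyMySQL connection sketch (PyMySQL 0.9.3+ is listed as compatible). The user@servername login format required by the Single Server gateway is worth noting; all names and the CA path are placeholders.

```python
# Sketch: minimal connection with PyMySQL, one of the drivers listed above.
# Assumptions: PyMySQL 0.9.3+ is installed; names and the CA path are placeholders.
import pymysql

conn = pymysql.connect(
    host="<servername>.mysql.database.azure.com",
    user="<admin>@<servername>",  # the Single Server gateway requires user@servername
    password="<password>",
    database="mydb",
    ssl={"ca": "/path/to/BaltimoreCyberTrustRoot.crt.pem"},  # enable TLS with the service root CA
)
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print(cur.fetchone())
conn.close()
```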
mysql | Concepts Connect To A Gateway Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connect-to-a-gateway-node.md | - Title: Azure Database for MySQL managing updates and upgrades -description: Learn which versions of the MySQL server are supported in the Azure Database for MySQL Service. ----- Previously updated : 06/20/2022---# Connect to a gateway node to a specific MySQL version ----In the Single Server deployment option, a gateway is used to redirect the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the `SELECT VERSION();` command at the MySQL prompt. Review [Connectivity architecture](./concepts-connectivity-architecture.md#connectivity-architecture) to learn more about gateways in the Azure Database for MySQL Service architecture. --Because Azure Database for MySQL supports major versions v5.7 and v8.0, the default port 3306 for connecting to Azure Database for MySQL runs MySQL client version 5.6 (the least common denominator) to support connections to servers of both supported major versions. However, if your application must connect to a specific major version, say v5.7 or v8.0, you can do so by changing the port in your server connection string. --In the Azure Database for MySQL Service, gateway nodes listen on port 3308 for v5.7 clients and port 3309 for v8.0 clients. In other words, to connect through the v5.7 gateway, use your fully qualified server name and port 3308 to connect to your server from your client application. Similarly, to connect through the v8.0 gateway, use your fully qualified server name and port 3309. Check the following example for further clarity. ---> [!NOTE] -> Connecting to Azure Database for MySQL via ports 3308 and 3309 is only supported for public connectivity; Private Link and VNet service endpoints can only be used with port 3306. --Read the version support policy for retired versions in the [version support policy documentation](concepts-version-policy.md#retired-mysql-engine-versions-not-supported-in-azure-database-for-mysql). --## Managing updates and upgrades --The service automatically manages patching for bug fix version updates. For example, 5.7.20 to 5.7.21. --Major version upgrade is currently supported by the service for upgrades from MySQL v5.6 to v5.7. For more details, refer to [how to perform major version upgrades](how-to-major-version-upgrade.md). If you'd like to upgrade from 5.7 to 8.0, we recommend you perform [dump and restore](./concepts-migrate-dump-restore.md) to a server that was created with the new engine version. --## Next steps --- To see supported versions, visit [Azure Database for MySQL version support policy](../concepts-version-policy.md)-- For details around Azure Database for MySQL versioning policy, see [this document](concepts-version-policy.md).-- For information about specific resource quotas and limitations based on your **service tier**, see [Service tiers](./concepts-pricing-tiers.md) |
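The example referred to above can be sketched as follows: connect through each version-specific gateway port and compare what the gateway reports with what the engine reports. It assumes mysql-connector-python, public connectivity (ports 3308/3309 aren't available over Private Link or VNet service endpoints), and placeholder names.

```python
# Sketch: compare the version the gateway reports with the engine's own version,
# using the version-specific gateway ports described above. Assumptions:
# mysql-connector-python, public connectivity, placeholder names.
import mysql.connector

for port in (3306, 3308, 3309):  # default gateway, v5.7 gateway, v8.0 gateway
    conn = mysql.connector.connect(
        host="<servername>.mysql.database.azure.com",
        port=port,
        user="<admin>@<servername>",
        password="<password>",
    )
    print(f"port {port}: gateway reports {conn.get_server_info()}")
    cur = conn.cursor()
    cur.execute("SELECT VERSION()")  # the actual engine version on the instance
    print("  engine reports:", cur.fetchone()[0])
    cur.close()
    conn.close()
```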
mysql | Concepts Connectivity Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity-architecture.md | - Title: Connectivity architecture - Azure Database for MySQL -description: Describes the connectivity architecture for your Azure Database for MySQL server. ----- Previously updated : 06/20/2022---# Connectivity architecture in Azure Database for MySQL ----This article explains the Azure Database for MySQL connectivity architecture and how the traffic is directed to your Azure Database for MySQL instance from clients both within and outside Azure. --## Connectivity architecture -Connection to your Azure Database for MySQL is established through a gateway that is responsible for routing incoming connections to the physical location of your server in our clusters. The following diagram illustrates the traffic flow. ---As a client connects to the database, the connection string to the server resolves to the gateway IP address. The gateway listens on the IP address on port 3306. Inside the database cluster, traffic is forwarded to the appropriate Azure Database for MySQL server. Therefore, to connect to your server, such as from corporate networks, you must open the **client-side firewall to allow outbound traffic to reach our gateways**. Below you can find a complete list of the IP addresses used by our gateways per region. --## Azure Database for MySQL gateway IP addresses --The gateway service is hosted on a group of stateless compute nodes sitting behind an IP address, which your client reaches first when trying to connect to an Azure Database for MySQL server. --As part of ongoing service maintenance, we'll periodically refresh the compute hardware hosting the gateways to ensure we provide the most secure and performant experience. When the gateway hardware is refreshed, a new ring of compute nodes is built out first. This new ring serves the traffic for all the newly created Azure Database for MySQL servers, and it has a different IP address from older gateway rings in the same region to differentiate the traffic. Once the new ring is fully functional, the older gateway hardware serving existing servers is planned for decommissioning. Before gateway hardware is decommissioned, customers running their servers and connecting to older gateway rings are notified via email and in the Azure portal. The decommissioning of gateways can impact connectivity to your servers if: --* You hard-code the gateway IP addresses in the connection string of your application. This is **not recommended**. You should use the fully qualified domain name (FQDN) of your server, in the format `<servername>.mysql.database.azure.com`, in the connection string for your application. -* You don't update the newer gateway IP addresses in the client-side firewall to allow outbound traffic to reach our new gateway rings. --> [!IMPORTANT] -> If your connectivity stack needs to connect directly to the gateway instead of using the **recommended DNS name approach**, or to allow-list the gateway in firewall rules for connections to and from your infrastructure, we **strongly encourage** you to use the Gateway IP address **subnets** rather than hardcoding a static IP, so that you aren't impacted when this activity causes an IP to change within the subnet range. ---The following table lists the gateway IP addresses of the Azure Database for MySQL gateway for all data regions. 
The most up-to-date information about the gateway IP addresses for each region is maintained in the table below. In the table below, the columns represent the following: --* **Gateway IP address subnets:** This column lists the IP address subnets of the gateway rings located in the particular region. As we retire older gateway hardware, we recommend that you open the client-side firewall to allow outbound traffic for the IP address subnets in the region you're operating in. -* **Gateway IP addresses**: Periodically, individual **Gateway IP addresses** will be retired and traffic will be migrated to the corresponding **Gateway IP address subnets**. --We strongly encourage customers to move away from relying on any individual Gateway IP address (since these will be retired in the future). Instead, allow network traffic to reach both the individual Gateway IP addresses and Gateway IP address subnets in a region. --| **Region name** |**Current Gateway IP address**| **Gateway IP address subnets** | -|:-|:--|:--| -| Australia Central | 20.36.105.32 | 20.36.105.32/29, 20.53.48.96/27 | -| Australia Central2 | 20.36.113.32 | 20.36.113.32/29, 20.53.56.32/27 | -| Australia East | 13.70.112.32 | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29, 20.53.46.128/27 | -| Australia South East |13.77.49.33 |13.77.49.32/29, 104.46.179.160/27| -| Brazil South | 191.233.201.8, 191.233.200.16 | 191.234.153.32/27, 191.234.152.32/27, 191.234.157.136/29, 191.233.200.32/29, 191.234.144.32/29, 191.234.142.160/27| -|Brazil Southeast|191.233.48.2|191.233.48.32/29, 191.233.15.160/27| -| Canada Central | 13.71.168.32| 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29, 20.48.196.32/27| -| Canada East |40.69.105.32 | 40.69.105.32/29, 52.139.106.192/27 | -| Central US | 52.182.136.37, 52.182.136.38 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29, 20.40.228.128/27| -| China East | 52.130.112.139 | 52.130.112.136/29, 52.130.13.96/27| -| China East 2 | 40.73.82.1, 52.130.120.89 | 52.130.120.88/29, 52.130.7.0/27| -| China North | 52.130.128.89| 52.130.128.88/29, 40.72.77.128/27 | -| China North 2 |40.73.50.0 | 52.130.40.64/29, 52.130.21.160/27| -| East Asia |13.75.33.20, 13.75.33.21 | 20.205.77.176/29, 20.205.83.224/29, 20.205.77.200/29, 13.75.32.192/29, 13.75.33.192/29, 20.195.72.32/27| -| East US | 40.71.8.203, 40.71.83.113|20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29, 20.62.132.160/27| -| East US 2 |52.167.105.38, 40.70.144.38| 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29, 20.62.58.128/27| -| France Central |40.79.129.1 | 40.79.128.32/29, 40.79.136.32/29, 40.79.144.32/29, 20.43.47.192/27 | -| France South |40.79.176.40 | 40.79.176.40/29, 40.79.177.32/29, 52.136.185.0/27| -| Germany North| 51.116.56.0| 51.116.57.32/29, 51.116.54.96/27| -| Germany West Central | 51.116.152.0 | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29, 51.116.149.32/27| -| Central India | 20.192.96.33 | 40.80.48.32/29, 104.211.86.32/29, 20.192.96.32/29, 20.192.43.160/27| -| South India | 40.78.192.32| 40.78.192.32/29, 40.78.193.32/29, 52.172.113.96/27| -| West India | 104.211.144.32| 104.211.144.32/29, 104.211.145.32/29, 52.136.53.160/27| -| Japan East | 40.79.184.8, 40.79.192.23| 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29, 20.191.165.160/27 | -| Japan West |40.74.96.6| 20.18.179.192/29, 40.74.96.32/29, 20.189.225.160/27 | -| Jio India Central| 20.192.233.32|20.192.233.32/29, 20.192.48.32/27| -| Jio India West|20.193.200.32|20.193.200.32/29, 20.192.167.224/27| -| Korea Central | 52.231.17.13 | 20.194.64.32/29, 
20.44.24.32/29, 52.231.16.32/29, 20.194.73.64/27| -| Korea South |52.231.145.3| 52.231.151.96/27, 52.231.151.88/29, 52.231.145.0/29, 52.147.112.160/27 | -| North Central US | 52.162.104.35, 52.162.104.36 | 52.162.105.200/29, 20.125.171.192/29, 52.162.105.192/29, 20.49.119.32/27| -| North Europe |52.138.224.6, 52.138.224.7 |13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29, 52.146.133.128/27 | -|Norway East|51.120.96.0|51.120.208.32/29, 51.120.104.32/29, 51.120.96.32/29, 51.120.232.192/27| -|Norway West|51.120.216.0|51.120.217.32/29, 51.13.136.224/27| -| South Africa North | 102.133.152.0 | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29, 102.133.221.224/27 | -| South Africa West |102.133.24.0 | 102.133.25.32/29, 102.37.80.96/27| -| South Central US | 20.45.120.0 |20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29, 20.65.132.160/27| -| South East Asia | 23.98.80.12, 40.78.233.2|13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29, 20.195.65.32/27 | -| Sweden Central|51.12.96.32|51.12.96.32/29, 51.12.232.32/29, 51.12.224.32/29, 51.12.46.32/27| -| Sweden South|51.12.200.32|51.12.201.32/29, 51.12.200.32/29, 51.12.198.32/27| -| Switzerland North |51.107.56.0 |51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27| -| Switzerland West | 51.107.152.0| 51.107.153.32/29, 51.107.250.64/27| -| UAE Central | 20.37.72.64| 20.37.72.96/29, 20.37.73.96/29, 20.37.71.64/27 | -| UAE North |65.52.248.0 |20.38.152.24/29, 40.120.72.32/29, 65.52.248.32/29, 20.38.143.64/27 | -| UK South | 51.105.64.0|51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29, 51.143.209.224/27| -| UK West |51.140.208.98 |51.140.208.96/29, 51.140.209.32/29, 20.58.66.128/27 | -| West Central US |13.71.193.34 | 13.71.193.32/29, 20.69.0.32/27 | -| West Europe | 13.69.105.208,104.40.169.187|104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29, 20.61.99.192/27| -| West US |13.86.216.212, 13.86.217.212 |20.168.163.192/29, 13.86.217.224/29, 20.66.3.64/27| -| West US 2 | 13.66.136.192 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29, 20.51.9.128/27| -| West US 3 |20.150.184.2 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29, 20.150.241.128/27 | ----## Connection redirection --Azure Database for MySQL supports an additional connection policy, **redirection**, that helps to reduce network latency between client applications and MySQL servers. With redirection, after the initial TCP session is established to the Azure Database for MySQL server, the server returns the backend address of the node hosting the MySQL server to the client. Thereafter, all subsequent packets flow directly to the server, bypassing the gateway. Because packets flow directly to the server, latency improves and throughput increases. --This feature is supported in Azure Database for MySQL servers with engine versions 5.7 and 8.0. --Support for redirection is available in the PHP [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension, developed by Microsoft, and is available on [PECL](https://pecl.php.net/package/mysqlnd_azure). See the [configuring redirection](./how-to-redirection.md) article for more information on how to use redirection in your applications. ---> [!IMPORTANT] -> Support for redirection in the PHP [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension is currently in preview. --## Frequently asked questions --### What do you need to know about this planned maintenance? -This is a DNS change only, which makes it transparent to clients. 
When the IP address for the FQDN is changed in the DNS server, the local DNS cache is refreshed within 5 minutes, and this is done automatically by the operating system. After the local DNS refresh, all new connections will connect to the new IP address, and all existing connections will remain connected to the old IP address with no interruption until the old IP addresses are fully decommissioned. The old IP address takes roughly three to four weeks before getting decommissioned; therefore, it should have no effect on the client applications. --### What are we decommissioning? -Only gateway nodes are decommissioned. When users connect to their servers, the first stop of the connection is the gateway node, before the connection is forwarded to the server. We're decommissioning old gateway rings (not tenant rings where the server is running); refer to the [connectivity architecture](#connectivity-architecture) for more clarification. --### How can you validate if your connections are going to old gateway nodes or new gateway nodes? -Ping your server's FQDN, for example ``ping xxx.mysql.database.azure.com``. If the returned IP address is one of the IPs listed under Gateway IP addresses (decommissioning) in the document above, it means your connection is going through the old gateway. Conversely, if the returned IP address is one of the IPs listed under Gateway IP addresses, it means your connection is going through the new gateway (a small resolution-check code sketch appears later in this article). --You may also test by [PSPing](/sysinternals/downloads/psping) or TCPPing the database server from your client application with port 3306, and ensure that the returned IP address isn't one of the decommissioning IP addresses. --### How do I know when the maintenance is over and will I get another notification when old IP addresses are decommissioned? -You receive an email to inform you when we start the maintenance work. The maintenance can take up to one month depending on the number of servers we need to migrate in all regions. Prepare your client to connect to the database server using the FQDN or using the new IP address from the table above. --### What do I do if my client applications are still connecting to the old gateway server? -This indicates that your applications connect to the server using a static IP address instead of the FQDN. Review connection strings, connection pooling settings, AKS settings, or even the source code. --### Is there any impact for my application connections? -This maintenance is just a DNS change, so it's transparent to the client. Once the DNS cache is refreshed in the client (done automatically by the operating system), all new connections connect to the new IP address, and all existing connections still work fine until the old IP address is fully decommissioned, which happens several weeks later. Retry logic isn't required for this case, but it's good practice for the application to have retry logic configured. Use the FQDN to connect to the database server in your application connection string. This maintenance operation won't drop the existing connections. It only makes the new connection requests go to the new gateway ring. --### Can I request a specific time window for the maintenance? -Because the migration should be transparent, with no impact to customers' connectivity, we expect there will be no issue for most users. Review your application proactively and ensure that you either use the FQDN to connect to the database server or allow-list the new 'Gateway IP addresses' in your application connection string. 
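Complementing the validation question above, the following is a minimal sketch that resolves the server FQDN in Java and prints the address so you can compare it against the gateway IP lists in this article; `mydemoserver` is a placeholder.

```java
import java.net.InetAddress;

public class GatewayResolutionCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder server name; substitute your own FQDN.
        String fqdn = "mydemoserver.mysql.database.azure.com";
        InetAddress address = InetAddress.getByName(fqdn);
        // If this prints an address from the decommissioning list, your
        // connections still go through an old gateway ring.
        System.out.println(fqdn + " resolves to " + address.getHostAddress());
    }
}
```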
--### I'm using Private Link. Will my connections be affected? -No. This is a gateway hardware decommission and has no relation to Private Link or private IP addresses; it only affects the public IP addresses mentioned under the decommissioning IP addresses. ----## Next steps -* [Create and manage Azure Database for MySQL firewall rules using the Azure portal](./how-to-manage-firewall-using-portal.md) -* [Create and manage Azure Database for MySQL firewall rules using Azure CLI](./how-to-manage-firewall-using-cli.md) -* [Configure redirection with Azure Database for MySQL](./how-to-redirection.md) |
mysql | Concepts Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity.md | - Title: Transient connectivity errors - Azure Database for MySQL -description: Learn how to handle transient connectivity errors and connect efficiently to Azure Database for MySQL. -keywords: mysql connection,connection string,connectivity issues,transient error,connection error,connect efficiently ----- Previously updated : 06/20/2022---# Handle transient errors and connect efficiently to Azure Database for MySQL ----This article describes how to handle transient errors and connect efficiently to Azure Database for MySQL. --## Transient errors --A transient error, also known as a transient fault, is an error that resolves itself. Most typically, these errors manifest as a dropped connection to the database server, or as an inability to open new connections to a server. Transient errors can occur, for example, when a hardware or network failure happens. Another reason could be a new version of a PaaS service being rolled out. Most of these events are automatically mitigated by the system in less than 60 seconds. A best practice for designing and developing applications in the cloud is to expect transient errors: assume they can happen in any component at any time, and have the appropriate logic in place to handle these situations. --## Handling transient errors --Transient errors should be handled using retry logic. Situations that must be considered: --* An error occurs when you try to open a connection -* An idle connection is dropped on the server side. When you try to issue a command, it can't be executed -* An active connection that currently is executing a command is dropped. --The first and second cases are fairly straightforward to handle: try to open the connection again. When you succeed, the transient error has been mitigated by the system. You can use your Azure Database for MySQL again. We recommend having waits before retrying the connection, and backing off if the initial retries fail. This way the system can use all resources available to overcome the error situation. A good pattern to follow is (see the code sketch later in this section): --* Wait for 5 seconds before your first retry. -* For each following retry, increase the wait exponentially, up to 60 seconds. -* Set a max number of retries, at which point your application considers the operation failed. --When a connection with an active transaction fails, it is more difficult to handle the recovery correctly. There are two cases: If the transaction was read-only in nature, it is safe to reopen the connection and to retry the transaction. If, however, the transaction was also writing to the database, you must determine if the transaction was rolled back, or if it succeeded before the transient error happened. In the latter case, you might just not have received the commit acknowledgment from the database server. --One way of doing this is to generate a unique ID on the client that is used for all the retries. You pass this unique ID as part of the transaction to the server and store it in a column with a unique constraint. This way you can safely retry the transaction. It will succeed if the previous transaction was rolled back and the client-generated unique ID does not yet exist in the system. It will fail, indicating a duplicate key violation, if the unique ID was previously stored because the previous transaction completed successfully. 
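Here is a minimal sketch of the backoff pattern described above, using plain JDBC; the retry count, wait times, and the idea of surfacing the last error are illustrative choices, not an official client feature.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class RetryOpenConnection {
    public static Connection openWithRetry(String url, String user, String password)
            throws SQLException, InterruptedException {
        final int maxRetries = 5;     // afterwards, consider the operation failed
        long waitMillis = 5_000;      // wait 5 seconds before the first retry
        SQLException lastError;
        try {
            return DriverManager.getConnection(url, user, password);
        } catch (SQLException e) {
            lastError = e; // possibly transient; fall through to the retry loop
        }
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            Thread.sleep(waitMillis);
            waitMillis = Math.min(waitMillis * 2, 60_000); // exponential backoff, capped at 60 seconds
            try {
                return DriverManager.getConnection(url, user, password);
            } catch (SQLException e) {
                lastError = e;
            }
        }
        throw lastError; // retries exhausted
    }
}
```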
--When your program communicates with Azure Database for MySQL through third-party middleware, ask the vendor whether the middleware contains retry logic for transient errors. --Make sure to test your retry logic. For example, try to execute your code while scaling up or down the compute resources of your Azure Database for MySQL server. Your application should handle the brief downtime that is encountered during this operation without any problems. --## Connect efficiently to Azure Database for MySQL --Database connections are a limited resource, so making effective use of connection pooling to access Azure Database for MySQL optimizes performance. The following sections explain how to use connection pooling or persistent connections to more effectively access Azure Database for MySQL. --## Access databases by using connection pooling (recommended) --Managing database connections can have a significant impact on the performance of the application as a whole. To optimize the performance of your application, the goal should be to reduce the number of times connections are established and the time spent establishing connections in key code paths. We strongly recommend using database connection pooling or persistent connections to connect to Azure Database for MySQL. Database connection pooling handles the creation, management, and allocation of database connections. When a program requests a database connection, it prioritizes the allocation of existing idle database connections, rather than the creation of a new connection. After the program has finished using the database connection, the connection is recovered in preparation for further use, rather than simply being closed down. --For better illustration, this article provides [a piece of sample code](./sample-scripts-java-connection-pooling.md) that uses Java as an example (a short pooling sketch also appears later in this article). For more information, see [Apache common DBCP](https://commons.apache.org/proper/commons-dbcp/). --> [!NOTE] -> The server configures a timeout mechanism to close a connection that has been in an idle state for some time to free up resources. Be sure to set up the verification system to ensure the effectiveness of persistent connections when you are using them. For more information, see [Configure verification systems on the client side to ensure the effectiveness of persistent connections](concepts-connectivity.md#configure-verification-mechanisms-in-clients-to-confirm-the-effectiveness-of-persistent-connections). --## Access databases by using persistent connections (recommended) --The concept of persistent connections is similar to that of connection pooling. Replacing short connections with persistent connections requires only minor changes to the code, but it has a major effect in terms of improving performance in many typical application scenarios. --## Access databases by using a wait and retry mechanism with short connections --If you have resource limitations, we strongly recommend that you use database pooling or persistent connections to access databases. If your application uses short connections and experiences connection failures when you approach the upper limit on the number of concurrent connections, you can try a wait and retry mechanism. Set an appropriate wait time, with a shorter wait time after the first attempt; thereafter, you can wait and retry multiple times. 
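The following is a minimal pooling sketch using Apache Commons DBCP2, the library referenced above; the pool sizes, URL, and credentials are illustrative placeholders rather than recommended values.

```java
import java.sql.Connection;
import java.sql.SQLException;
import org.apache.commons.dbcp2.BasicDataSource;

public class PooledAccessExample {
    private static final BasicDataSource pool = new BasicDataSource();

    static {
        pool.setUrl("jdbc:mysql://mydemoserver.mysql.database.azure.com:3306/mysql?useSSL=true");
        pool.setUsername("myadmin@mydemoserver");
        pool.setPassword("<password>");
        pool.setInitialSize(5);  // connections opened at startup
        pool.setMaxTotal(20);    // upper bound on concurrently borrowed connections
    }

    public static void main(String[] args) throws SQLException {
        // close() returns the connection to the pool for reuse instead of
        // tearing it down, avoiding repeated connection establishment.
        try (Connection conn = pool.getConnection()) {
            System.out.println("Connected: " + conn.isValid(2));
        }
    }
}
```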
--## Configure verification mechanisms in clients to confirm the effectiveness of persistent connections --The server configures a timeout mechanism to close a connection that's been in an idle state for some time to free up resources. When the client accesses the database again, it's equivalent to creating a new connection request between the client and the server. To ensure the effectiveness of connections during the process of using them, configure a verification mechanism on the client. As shown in the following example, you can use Tomcat JDBC connection pooling to configure this verification mechanism. --By setting the TestOnBorrow parameter, when there's a new request, the connection pool automatically verifies the effectiveness of any available idle connections. If such a connection is effective, it's returned directly; otherwise, the connection pool withdraws the connection. The connection pool then creates a new effective connection and returns it. This process ensures that the database is accessed efficiently. --For information on the specific settings, see the [JDBC connection pool official introduction document](https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#Common_Attributes). You mainly need to set the following three parameters: TestOnBorrow (set to true), ValidationQuery (set to SELECT 1), and ValidationQueryTimeout (set to 1). The specific sample code is shown below: --```java -import java.sql.Connection; -import org.apache.tomcat.jdbc.pool.DataSource; -import org.apache.tomcat.jdbc.pool.PoolProperties; --public class SimpleTestOnBorrowExample { - public static void main(String[] args) throws Exception { - PoolProperties p = new PoolProperties(); - p.setUrl("jdbc:mysql://localhost:3306/mysql"); - p.setDriverClassName("com.mysql.jdbc.Driver"); - p.setUsername("root"); - p.setPassword("password"); - // The indication of whether objects will be validated by the idle object evictor (if any). - // If an object fails to validate, it will be dropped from the pool. - // NOTE - for a true value to have any effect, the validationQuery or validatorClassName parameter must be set to a non-null string. - p.setTestOnBorrow(true); -- // The SQL query that will be used to validate connections from this pool before returning them to the caller. - // If specified, this query does not have to return any data, it just can't throw a SQLException. - p.setValidationQuery("SELECT 1"); -- // The timeout in seconds before a connection validation query fails. - // This works by calling java.sql.Statement.setQueryTimeout(seconds) on the statement that executes the validationQuery. - // The pool itself doesn't timeout the query, it is still up to the JDBC driver to enforce query timeouts. - // A value less than or equal to zero will disable this feature. - p.setValidationQueryTimeout(1); - // Set other useful pool properties here as needed. - DataSource datasource = new DataSource(); - datasource.setPoolProperties(p); -- Connection con = null; - try { - con = datasource.getConnection(); - // execute your query here - } finally { - if (con!=null) try {con.close();}catch (Exception ignore) {} - } - } - } -``` --## Next steps --* [Troubleshoot connection issues to Azure Database for MySQL](how-to-troubleshoot-common-connection-issues.md) |
mysql | Concepts Data Access And Security Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-access-and-security-vnet.md | - Title: VNet service endpoints - Azure Database for MySQL -description: 'Describes how VNet service endpoints work for your Azure Database for MySQL server.' ----- Previously updated : 06/20/2022---# Use Virtual Network service endpoints and rules for Azure Database for MySQL ----*Virtual network rules* are a firewall security feature that controls whether your Azure Database for MySQL server accepts communications that are sent from particular subnets in virtual networks. This article explains why the virtual network rule feature is sometimes your best option for securely allowing communication to your Azure Database for MySQL server. --To create a virtual network rule, there must first be a [virtual network][vm-virtual-network-overview] (VNet) and a [virtual network service endpoint][vm-virtual-network-service-endpoints-overview-649d] for the rule to reference. The following picture illustrates how a Virtual Network service endpoint works with Azure Database for MySQL: ---> [!NOTE] -> This feature is available in all regions of Azure where Azure Database for MySQL is deployed for General Purpose and Memory Optimized servers. -> In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for MySQL server. --You can also consider using [Private Link](concepts-data-access-security-private-link.md) for connections. Private Link provides a private IP address in your VNet for the Azure Database for MySQL server. --<a name="anch-terminology-and-description-82f"></a> --## Terminology and description --**Virtual network:** You can have virtual networks associated with your Azure subscription. --**Subnet:** A virtual network contains **subnets**. Any Azure virtual machines (VMs) that you have are assigned to subnets. One subnet can contain multiple VMs or other compute nodes. Compute nodes that are outside of your virtual network cannot access your virtual network unless you configure your security to allow access. --**Virtual Network service endpoint:** A [Virtual Network service endpoint][vm-virtual-network-service-endpoints-overview-649d] is a subnet whose property values include one or more formal Azure service type names. In this article, we are interested in the type name of **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure Database for MySQL and PostgreSQL services. It's important to note that applying the **Microsoft.Sql** service tag to a VNet service endpoint configures service endpoint traffic for all Azure SQL Database, Azure Database for MySQL, and Azure Database for PostgreSQL servers on the subnet. --**Virtual network rule:** A virtual network rule for your Azure Database for MySQL server is a subnet that is listed in the access control list (ACL) of your Azure Database for MySQL server. To be in the ACL for your Azure Database for MySQL server, the subnet must contain the **Microsoft.Sql** type name. --A virtual network rule tells your Azure Database for MySQL server to accept communications from every node that is on the subnet. 
--------<a name="anch-benefits-of-a-vnet-rule-68b"></a> --## Benefits of a virtual network rule --Until you take action, the VMs on your subnets cannot communicate with your Azure Database for MySQL server. One action that establishes the communication is the creation of a virtual network rule. The rationale for choosing the VNet rule approach requires a compare-and-contrast discussion involving the competing security options offered by the firewall. --### A. Allow access to Azure services --The Connection security pane has an **ON/OFF** button that is labeled **Allow access to Azure services**. The **ON** setting allows communications from all Azure IP addresses and all Azure subnets. These Azure IPs or subnets might not be owned by you. This **ON** setting is probably more open than you want your Azure Database for MySQL to be. The virtual network rule feature offers much finer granular control. --### B. IP rules --The Azure Database for MySQL firewall allows you to specify IP address ranges from which communications are accepted into the Azure Database for MySQL Database. This approach is fine for stable IP addresses that are outside the Azure private network. But many nodes inside the Azure private network are configured with *dynamic* IP addresses. Dynamic IP addresses might change, such as when your VM is restarted. It would be unwise to specify a dynamic IP address in a firewall rule in a production environment. --You can salvage the IP option by obtaining a *static* IP address for your VM. For details, see [Configure private IP addresses for a virtual machine by using the Azure portal][vm-configure-private-ip-addresses-for-a-virtual-machine-using-the-azure-portal-321w]. --However, the static IP approach can become difficult to manage, and it is costly when done at scale. Virtual network rules are easier to establish and to manage. --<a name="anch-details-about-vnet-rules-38q"></a> --## Details about virtual network rules --This section describes several details about virtual network rules. --### Only one geographic region --Each Virtual Network service endpoint applies to only one Azure region. The endpoint does not enable other regions to accept communication from the subnet. --Any virtual network rule is limited to the region that its underlying endpoint applies to. --### Server-level, not database-level --Each virtual network rule applies to your whole Azure Database for MySQL server, not just to one particular database on the server. In other words, a virtual network rule applies at the server level, not at the database level. --### Security administration roles --There is a separation of security roles in the administration of Virtual Network service endpoints. Action is required from each of the following roles: --- **Network Admin:** Turn on the endpoint.-- **Database Admin:** Update the access control list (ACL) to add the given subnet to the Azure Database for MySQL server.--*Azure RBAC alternative:* --The roles of Network Admin and Database Admin have more capabilities than are needed to manage virtual network rules. Only a subset of their capabilities is needed. --You have the option of using [Azure role-based access control (Azure RBAC)][rbac-what-is-813s] in Azure to create a single custom role that has only the necessary subset of capabilities. The custom role could be used instead of involving either the Network Admin or the Database Admin. 
The surface area of your security exposure is lower if you add a user to a custom role, versus adding the user to the other two major administrator roles. --> [!NOTE] -> In some cases, the Azure Database for MySQL and the VNet-subnet are in different subscriptions. In these cases, you must ensure the following configurations: -> - Both subscriptions must be in the same Microsoft Entra tenant. -> - The user has the required permissions to initiate operations, such as enabling service endpoints and adding a VNet-subnet to the given Server. -> - Make sure that both subscriptions have the **Microsoft.Sql** and **Microsoft.DBforMySQL** resource providers registered. For more information, refer to [resource-manager-registration][resource-manager-portal] --## Limitations --For Azure Database for MySQL, the virtual network rules feature has the following limitations: --- A Web App can be mapped to a private IP in a VNet/subnet. Even if service endpoints are turned ON from the given VNet/subnet, connections from the Web App to the server will have an Azure public IP source, not a VNet/subnet source. To enable connectivity from a Web App to a server that has VNet firewall rules, you must enable **Allow access to Azure services** on the server.--- In the firewall for your Azure Database for MySQL, each virtual network rule references a subnet. All these referenced subnets must be hosted in the same geographic region that hosts the Azure Database for MySQL.--- Each Azure Database for MySQL server can have up to 128 ACL entries for any given virtual network.--- Virtual network rules apply only to Azure Resource Manager virtual networks; and not to [classic deployment model][arm-deployment-model-568f] networks.--- Turning ON virtual network service endpoints to Azure Database for MySQL using the **Microsoft.Sql** service tag also enables the endpoints for all Azure Database services (Azure Database for MySQL and Azure Database for PostgreSQL) on the subnet.--- Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.--- If **Microsoft.Sql** is enabled in a subnet, it indicates that you only want to use VNet rules to connect. [Non-VNet firewall rules](concepts-firewall-rules.md) of resources in that subnet will not work.--- On the firewall, IP address ranges do apply to the following networking items, but virtual network rules do not:- - [Site-to-Site (S2S) virtual private network (VPN)][vpn-gateway-indexmd-608y] - - On-premises via [ExpressRoute][expressroute-indexmd-744v] --## ExpressRoute --If your network is connected to the Azure network through the use of [ExpressRoute][expressroute-indexmd-744v], each circuit is configured with two public IP addresses at the Microsoft Edge. The two IP addresses are used to connect to Microsoft Services, such as to Azure Storage, by using Azure Public Peering. --To allow communication from your circuit to Azure Database for MySQL, you must create IP network rules for the public IP addresses of your circuits. In order to find the public IP addresses of your ExpressRoute circuit, open a support ticket with ExpressRoute by using the Azure portal. --## Adding a VNET Firewall rule to your server without turning on VNET Service Endpoints --Merely setting a VNet firewall rule does not help secure the server to the VNet. You must also turn VNet service endpoints **On** for the security to take effect. When you turn service endpoints **On**, your VNet-subnet experiences downtime until it completes the transition from **Off** to **On**. This is especially true in the context of large VNets. 
You can use the **IgnoreMissingServiceEndpoint** flag to reduce or eliminate the downtime during transition. --You can set the **IgnoreMissingServiceEndpoint** flag by using the Azure CLI or portal. --## Related articles -- [Azure virtual networks][vm-virtual-network-overview]-- [Azure virtual network service endpoints][vm-virtual-network-service-endpoints-overview-649d]--## Next steps -For articles on creating VNet rules, see: -- [Create and manage Azure Database for MySQL VNet rules using the Azure portal](how-to-manage-vnet-using-portal.md)-- [Create and manage Azure Database for MySQL VNet rules using Azure CLI](how-to-manage-vnet-using-cli.md)--<!-- Link references, to text, Within this same GitHub repo. --> -[arm-deployment-model-568f]: ../../azure-resource-manager/management/deployment-models.md --[vm-virtual-network-overview]: ../../virtual-network/virtual-networks-overview.md --[vm-virtual-network-service-endpoints-overview-649d]: ../../virtual-network/virtual-network-service-endpoints-overview.md --[vm-configure-private-ip-addresses-for-a-virtual-machine-using-the-azure-portal-321w]: ../../virtual-network/ip-services/virtual-networks-static-private-ip-arm-pportal.md --[rbac-what-is-813s]: ../../role-based-access-control/overview.md --[vpn-gateway-indexmd-608y]: ../../vpn-gateway/index.yml --[expressroute-indexmd-744v]: ../../expressroute/index.yml --[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md |
mysql | Concepts Data Access Security Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-access-security-private-link.md | - Title: Private Link - Azure Database for MySQL -description: Learn how Private Link works for Azure Database for MySQL. ----- Previously updated : 06/20/2022---# Private Link for Azure Database for MySQL ----Private Link allows you to connect to various PaaS services in Azure via a private endpoint. Azure Private Link essentially brings Azure services inside your private Virtual Network (VNet). The PaaS resources can be accessed using the private IP address just like any other resource in the VNet. --For a list of PaaS services that support Private Link functionality, review the Private Link [documentation](../../private-link/index.yml). A private endpoint is a private IP address within a specific [VNet](../../virtual-network/virtual-networks-overview.md) and Subnet. --> [!NOTE] -> The private link feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers. --## Data exfiltration prevention --Data exfiltration in Azure Database for MySQL is when an authorized user, such as a database admin, is able to extract data from one system and move it to another location or system outside the organization. For example, the user moves the data to a storage account owned by a third party. --Consider a scenario with a user running MySQL Workbench inside an Azure Virtual Machine (VM) that is connecting to an Azure Database for MySQL server provisioned in West US. The example below shows how to limit access with public endpoints on Azure Database for MySQL using network access controls. --* Disable all Azure service traffic to Azure Database for MySQL via the public endpoint by setting *Allow Azure Services* to OFF. Ensure no IP addresses or ranges are allowed to access the server either via [firewall rules](./concepts-firewall-rules.md) or [virtual network service endpoints](./concepts-data-access-and-security-vnet.md). --* Only allow traffic to the Azure Database for MySQL using the Private IP address of the VM. For more information, see the articles on [Service Endpoint](concepts-data-access-and-security-vnet.md) and [VNet firewall rules](how-to-manage-vnet-using-portal.md). --* On the Azure VM, narrow down the scope of outgoing connections by using Network Security Groups (NSGs) and Service Tags as follows: -- * Specify an NSG rule to allow traffic for *Service Tag = SQL.WestUs* - only allowing connections to Azure Database for MySQL in West US - * Specify an NSG rule (with a higher priority) to deny traffic for *Service Tag = SQL* - denying connections to Azure Database for MySQL in all regions --At the end of this setup, the Azure VM can connect only to Azure Database for MySQL in the West US region. However, the connectivity isn't restricted to a single Azure Database for MySQL. The VM can still connect to any Azure Database for MySQL in the West US region, including the databases that aren't part of the subscription. While we've reduced the scope of data exfiltration in the above scenario to a specific region, we haven't eliminated it altogether. --With Private Link, you can now set up network access controls like NSGs to restrict access to the private endpoint. Individual Azure PaaS resources are then mapped to specific private endpoints. 
A malicious insider can only access the mapped PaaS resource (for example, an Azure Database for MySQL) and no other resource. --## On-premises connectivity over private peering --When you connect to the public endpoint from on-premises machines, your IP address needs to be added to the IP-based firewall using a server-level firewall rule. While this model works well for allowing access to individual machines for dev or test workloads, it's difficult to manage in a production environment. --With Private Link, you can enable cross-premises access to the private endpoint using [Express Route](https://azure.microsoft.com/services/expressroute/) (ER), private peering, or a [VPN tunnel](../../vpn-gateway/index.yml). You can subsequently disable all access via the public endpoint and not use the IP-based firewall. --> [!NOTE] -> In some cases, the Azure Database for MySQL and the VNet-subnet are in different subscriptions. In these cases, you must ensure the following configurations: -> - Make sure that both subscriptions have the **Microsoft.DBforMySQL** resource provider registered. For more information, refer to [resource-manager-registration][resource-manager-portal] --## Configure Private Link for Azure Database for MySQL --### Creation Process --Private endpoints are required to enable Private Link. This can be done using the following how-to guides. --* [Azure portal](./how-to-configure-private-link-portal.md) -* [CLI](./how-to-configure-private-link-cli.md) --### Approval Process -Once the network admin creates the private endpoint (PE), the MySQL admin can manage the private endpoint connection (PEC) to Azure Database for MySQL. This separation of duties between the network admin and the DBA is helpful for management of the Azure Database for MySQL connectivity. --* Navigate to the Azure Database for MySQL server resource in the Azure portal. - * Select **Private endpoint connections** in the left pane. This shows a list of all private endpoint connections (PECs) and the corresponding private endpoints (PEs) created. ---* Select an individual PEC from the list. ---* The MySQL server admin can choose to approve or reject a PEC and optionally add a short text response. ---* After approval or rejection, the list reflects the appropriate state along with the response text. ---## Use cases of Private Link for Azure Database for MySQL --Clients can connect to the private endpoint from the same VNet, from a [peered VNet](../../virtual-network/virtual-network-peering-overview.md) in the same region or across regions, or via a [VNet-to-VNet connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) across regions. Additionally, clients can connect from on-premises using ExpressRoute, private peering, or VPN tunneling. Below is a simplified diagram showing the common use cases. ---### Connecting from an Azure VM in a Peered Virtual Network (VNet) -Configure [VNet peering](../../virtual-network/tutorial-connect-virtual-networks-powershell.md) to establish connectivity to the Azure Database for MySQL from an Azure VM in a peered VNet. --### Connecting from an Azure VM in a VNet-to-VNet environment -Configure a [VNet-to-VNet VPN gateway connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) to establish connectivity to an Azure Database for MySQL from an Azure VM in a different region or subscription. 
--### Connecting from an on-premises environment over VPN -To establish connectivity from an on-premises environment to the Azure Database for MySQL, choose and implement one of the options: --* [Point-to-Site connection](../../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md) -* [Site-to-Site VPN connection](../../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md) -* [ExpressRoute circuit](../../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md) --## Private Link combined with firewall rules --The following situations and outcomes are possible when you use Private Link in combination with firewall rules: --* If you don't configure any firewall rules, then by default, no traffic will be able to access the Azure Database for MySQL. --* If you configure public traffic or a service endpoint and you create private endpoints, then different types of incoming traffic are authorized by the corresponding type of firewall rule. --* If you don't configure any public traffic or service endpoint and you create private endpoints, then the Azure Database for MySQL is accessible only through the private endpoints. If you don't configure public traffic or a service endpoint, after all approved private endpoints are rejected or deleted, no traffic will be able to access the Azure Database for MySQL. --## Deny public access for Azure Database for MySQL --If you want to rely only on private endpoints for accessing your Azure Database for MySQL, you can disable all public endpoints (that is, [firewall rules](concepts-firewall-rules.md) and [VNet service endpoints](concepts-data-access-and-security-vnet.md)) by setting the **Deny Public Network Access** configuration on the database server. --When this setting is set to *YES*, only connections via private endpoints are allowed to your Azure Database for MySQL. When this setting is set to *NO*, clients can connect to your Azure Database for MySQL based on your firewall or VNet service endpoint settings. Additionally, once **Deny Public Network Access** is set, customers cannot add or update existing 'Firewall rules' and 'VNet service endpoint rules'. --> [!NOTE] -> This feature is available in all Azure regions where Azure Database for MySQL - Single Server supports General Purpose and Memory Optimized pricing tiers. -> -> This setting does not have any impact on the SSL and TLS configurations for your Azure Database for MySQL. --To learn how to set **Deny Public Network Access** for your Azure Database for MySQL from the Azure portal, refer to [How to configure Deny Public Network Access](how-to-deny-public-network-access.md). --## Next steps --To learn more about Azure Database for MySQL security features, see the following articles: --* To configure a firewall for Azure Database for MySQL, see [Firewall support](./concepts-firewall-rules.md). --* To learn how to configure a virtual network service endpoint for your Azure Database for MySQL, see [Configure access from virtual networks](./concepts-data-access-and-security-vnet.md). --* For an overview of Azure Database for MySQL connectivity, see [Azure Database for MySQL Connectivity Architecture](./concepts-connectivity-architecture.md) --<!-- Link references, to text, Within this same GitHub repo. --> -[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md |
mysql | Concepts Data Encryption Mysql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-encryption-mysql.md | - Title: Data encryption with customer-managed key - Azure Database for MySQL -description: Azure Database for MySQL data encryption with a customer-managed key enables you to Bring Your Own Key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. ----- Previously updated : 06/20/2022---# Azure Database for MySQL data encryption with a customer-managed key ----Data encryption with customer-managed keys for Azure Database for MySQL enables you to bring your own key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you're responsible for, and in full control of, a key's lifecycle, key usage permissions, and auditing of operations on keys. --Data encryption with customer-managed keys for Azure Database for MySQL is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault](../../key-vault/general/security-features.md) instance. The Key Encryption Key (KEK) and Data Encryption Key (DEK) are described in more detail later in this article. --Key Vault is a cloud-based, external key management system. It's highly available and provides scalable, secure storage for RSA cryptographic keys, optionally backed by [FIPS 140 validated](/azure/key-vault/keys/about-keys#compliance) hardware security modules (HSMs). It doesn't allow direct access to a stored key, but does provide services of encryption and decryption to authorized entities. Key Vault can generate the key, import it, or [have it transferred from an on-premises HSM device](../../key-vault/keys/hsm-protected-keys.md). --> [!NOTE] -> This feature is supported only on "General Purpose storage v2 (supports up to 16 TB)" storage available in General Purpose and Memory Optimized pricing tiers. Refer to [Storage concepts](concepts-pricing-tiers.md#storage) for more details. For other limitations, refer to the [limitation](concepts-data-encryption-mysql.md#limitations) section. --## Benefits --Data encryption with customer-managed keys for Azure Database for MySQL provides the following benefits: --* Data access is fully controlled by you, with the ability to remove the key and make the database inaccessible -* Full control over the key lifecycle, including rotation of the key to align with corporate policies -* Central management and organization of keys in Azure Key Vault -* Ability to implement separation of duties between security officers, DBAs, and system administrators ---## Terminology and description --**Data encryption key (DEK)**: A symmetric AES256 key used to encrypt a partition or block of data. Encrypting each block of data with a different key makes cryptanalysis attacks more difficult. Access to DEKs is needed by the resource provider or application instance that is encrypting and decrypting a specific block. When you replace a DEK with a new key, only the data in its associated block must be re-encrypted with the new key. --**Key encryption key (KEK)**: An encryption key used to encrypt the DEKs. 
A KEK that never leaves Key Vault allows the DEKs themselves to be encrypted and controlled. The entity that has access to the KEK might be different than the entity that requires the DEK. Since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which all DEKs can be deleted, by deleting the KEK. --The DEKs, encrypted with the KEKs, are stored separately. Only an entity with access to the KEK can decrypt these DEKs. For more information, see [Security in encryption at rest](../../security/fundamentals/encryption-atrest.md). --## How data encryption with a customer-managed key works ---For a MySQL server to use customer-managed keys stored in Key Vault for encryption of the DEK, a Key Vault administrator gives the following access rights to the server: --* **get**: For retrieving the public part and properties of the key in the key vault. -* **wrapKey**: To be able to encrypt the DEK. The encrypted DEK is stored in the Azure Database for MySQL. -* **unwrapKey**: To be able to decrypt the DEK. Azure Database for MySQL needs the decrypted DEK to encrypt/decrypt the data. --The key vault administrator can also [enable logging of Key Vault audit events](../../key-vault/key-vault-insights-overview.md), so they can be audited later. --When the server is configured to use the customer-managed key stored in the key vault, the server sends the DEK to the key vault for encryption. Key Vault returns the encrypted DEK, which is stored in the user database. Similarly, when needed, the server sends the protected DEK to the key vault for decryption. Auditors can use Azure Monitor to review Key Vault audit event logs, if logging is enabled. --## Requirements for configuring data encryption for Azure Database for MySQL --The following are requirements for configuring Key Vault: --* Key Vault and Azure Database for MySQL must belong to the same Microsoft Entra tenant. Cross-tenant Key Vault and server interactions aren't supported. Moving the Key Vault resource afterwards requires you to reconfigure the data encryption. -* Enable the [soft-delete](../../key-vault/general/soft-delete-overview.md) feature on the key vault with the retention period set to **90 days**, to protect from data loss if an accidental key (or Key Vault) deletion happens. Soft-deleted resources are retained for 90 days by default, unless the retention period is explicitly set to <=90 days. The recover and purge actions have their own permissions associated in a Key Vault access policy. The soft-delete feature is off by default, but you can enable it through PowerShell or the Azure CLI (note that you can't enable it through the Azure portal). -* Enable the [Purge Protection](../../key-vault/general/soft-delete-overview.md#purge-protection) feature on the key vault with the retention period set to **90 days**. Purge protection can only be enabled once soft-delete is enabled. It can be turned on via Azure CLI or PowerShell. When purge protection is on, a vault or an object in the deleted state cannot be purged until the retention period has passed. Soft-deleted vaults and objects can still be recovered, ensuring that the retention policy will be followed. -* Grant the Azure Database for MySQL access to the key vault with the get, wrapKey, and unwrapKey permissions by using its unique managed identity. In the Azure portal, the unique 'Service' identity is automatically created when data encryption is enabled on the MySQL server. 
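To make the wrap/unwrap flow described above more concrete, the following is a conceptual sketch (not the service's internal implementation) of wrapping and unwrapping a DEK with a KEK held in Key Vault, using the Azure Key Vault Keys SDK for Java; the vault URL and key name are placeholders.

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.security.keyvault.keys.cryptography.CryptographyClient;
import com.azure.security.keyvault.keys.cryptography.CryptographyClientBuilder;
import com.azure.security.keyvault.keys.cryptography.models.KeyWrapAlgorithm;
import com.azure.security.keyvault.keys.cryptography.models.UnwrapResult;
import com.azure.security.keyvault.keys.cryptography.models.WrapResult;

public class WrapDekExample {
    public static void main(String[] args) {
        // The KEK never leaves Key Vault; only wrap/unwrap operations are exposed.
        CryptographyClient kek = new CryptographyClientBuilder()
                .keyIdentifier("https://myvault.vault.azure.net/keys/my-kek")
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();

        byte[] dek = new byte[32]; // a 256-bit DEK; in practice, randomly generated

        // Requires the wrapKey permission: encrypt the DEK with the KEK for storage.
        WrapResult wrapped = kek.wrapKey(KeyWrapAlgorithm.RSA_OAEP, dek);

        // Requires the unwrapKey permission: recover the plaintext DEK when needed.
        UnwrapResult unwrapped = kek.unwrapKey(KeyWrapAlgorithm.RSA_OAEP, wrapped.getEncryptedKey());
        System.out.println("DEK recovered: " + (unwrapped.getKey().length == dek.length));
    }
}
```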
See [Configure data encryption for MySQL](how-to-data-encryption-portal.md) for detailed, step-by-step instructions when you're using the Azure portal. --The following are requirements for configuring the customer-managed key: --* The customer-managed key to be used for encrypting the DEK can only be an asymmetric RSA 2048 key. -* The key activation date (if set) must be a date and time in the past. The expiration date, if set, must be in the future. -* The key must be in the *Enabled* state. -* The key must have [soft delete](../../key-vault/general/soft-delete-overview.md) enabled with the retention period set to **90 days**. This implicitly sets the required key attribute recoveryLevel: "Recoverable". If the retention is set to fewer than 90 days, the recoveryLevel is "CustomizedRecoverable", which doesn't meet the requirement, so make sure the retention period is set to **90 days**. -* The key must have [purge protection enabled](../../key-vault/general/soft-delete-overview.md#purge-protection). -* If you're [importing an existing key](/rest/api/keyvault/keys/import-key/import-key) into the key vault, make sure to provide it in the supported file formats (`.pfx`, `.byok`, `.backup`). --## Recommendations --When you're using data encryption by using a customer-managed key, here are recommendations for configuring Key Vault: --* Set a resource lock on Key Vault to control who can delete this critical resource and prevent accidental or unauthorized deletion. -* Enable auditing and reporting on all encryption keys. Key Vault provides logs that are easy to inject into other security information and event management tools. Azure Monitor Log Analytics is one example of a service that's already integrated. -* Ensure that Key Vault and Azure Database for MySQL reside in the same region, to ensure faster access for DEK wrap and unwrap operations. -* Lock down the Azure Key Vault to only **private endpoint and selected networks** and allow only *trusted Microsoft* services to secure the resources. -- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/keyvault-trusted-service.png" alt-text="trusted-service-with-AKV"::: --Here are recommendations for configuring a customer-managed key: --* Keep a copy of the customer-managed key in a secure place, or escrow it to the escrow service. --* If Key Vault generates the key, create a key backup before using the key for the first time. You can only restore the backup to Key Vault. For more information about the backup command, see [Backup-AzKeyVaultKey](/powershell/module/az.keyVault/backup-azkeyVaultkey). --## Inaccessible customer-managed key condition --When you configure data encryption with a customer-managed key in Key Vault, continuous access to this key is required for the server to stay online. If the server loses access to the customer-managed key in Key Vault, the server begins denying all connections within 10 minutes. The server issues a corresponding error message, and changes the server state to *Inaccessible*. Some of the reasons why the server can reach this state are: --* If we create a Point In Time Restore server for your Azure Database for MySQL, which has data encryption enabled, the newly created server will be in the *Inaccessible* state. You can fix this through the [Azure portal](how-to-data-encryption-portal.md#using-data-encryption-for-restore-or-replica-servers) or [CLI](how-to-data-encryption-cli.md#using-data-encryption-for-restore-or-replica-servers). 
-* If you create a read replica for your Azure Database for MySQL that has data encryption enabled, the replica server is in the *Inaccessible* state. You can fix this through the [Azure portal](how-to-data-encryption-portal.md#using-data-encryption-for-restore-or-replica-servers) or the [CLI](how-to-data-encryption-cli.md#using-data-encryption-for-restore-or-replica-servers). -* If you delete the key vault, Azure Database for MySQL can't access the key and moves to the *Inaccessible* state. Recover the [Key Vault](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the server *Available*. -* If you delete the key from the key vault, Azure Database for MySQL can't access the key and moves to the *Inaccessible* state. Recover the [key](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the server *Available*. -* If the key stored in Azure Key Vault expires, the key becomes invalid and Azure Database for MySQL transitions into the *Inaccessible* state. Extend the key expiry date by using the [CLI](/cli/azure/keyvault/key#az-keyvault-key-set-attributes), and then revalidate the data encryption to make the server *Available*. --### Accidental key access revocation from Key Vault --It might happen that someone with sufficient access rights to Key Vault accidentally disables server access to the key by: --* Revoking the key vault's `get`, `wrapKey`, and `unwrapKey` permissions from the server. -* Deleting the key. -* Deleting the key vault. -* Changing the key vault's firewall rules. -* Deleting the managed identity of the server in Microsoft Entra ID. --## Monitor the customer-managed key in Key Vault --To monitor the database state, and to enable alerting for the loss of transparent data encryption protector access, configure the following Azure features: --* [Azure Resource Health](../../service-health/resource-health-overview.md): An inaccessible database that has lost access to the customer key shows as "Inaccessible" after the first connection to the database has been denied. -* [Activity log](../../service-health/alerts-activity-log-service-notifications-portal.md): When access to the customer key in the customer-managed Key Vault fails, entries are added to the activity log. If you create alerts for these events, you can reinstate access as soon as possible. --* [Action groups](../../azure-monitor/alerts/action-groups.md): Define these groups to send you notifications and alerts based on your preferences. --## Restore and replicate with a customer-managed key in Key Vault --After Azure Database for MySQL is encrypted with a customer-managed key stored in Key Vault, any newly created copy of the server is also encrypted. You can make this new copy either through a local or geo-restore operation, or through read replicas. However, the copy can be changed to reflect a new customer-managed key for encryption. When the customer-managed key is changed, old backups of the server start using the latest key. --To avoid issues while setting up customer-managed data encryption during restore or read replica creation, it's important to follow these steps on the source and restored/replica servers: --* Initiate the restore or read replica creation process from the source Azure Database for MySQL. -* Keep the newly created server (restored/replica) in an inaccessible state, because its unique identity hasn't yet been given permissions to Key Vault.
-* On the restored/replica server, revalidate the customer-managed key in the data encryption settings to ensure that the newly created server is given wrap and unwrap permissions to the key stored in Key Vault. --## Limitations --For Azure Database for MySQL, the support for encryption of data at rest using a customer-managed key (CMK) has a few limitations: --* Support for this functionality is limited to the **General Purpose** and **Memory Optimized** pricing tiers. -* This feature is only supported in regions and on servers that support general purpose storage v2 (up to 16 TB). For the list of Azure regions supporting storage up to 16 TB, refer to the storage section in the documentation [here](concepts-pricing-tiers.md#storage). -- > [!NOTE] - > - For all new MySQL servers created in the [Azure regions](concepts-pricing-tiers.md#storage) supporting general purpose storage v2, support for encryption with customer-managed keys is **available**. A Point In Time Restored (PITR) server or read replica doesn't qualify, though in theory they're 'new'. - > - To validate whether your provisioned server is on general purpose storage v2, go to the pricing tier blade in the portal and check the max storage size supported by your provisioned server. If you can move the slider up only to 4 TB, your server is on general purpose storage v1 and doesn't support encryption with customer-managed keys. However, the data is encrypted using service-managed keys at all times. --* Encryption is only supported with RSA 2048 cryptographic keys. --## Next steps --* Learn how to set up data encryption with a customer-managed key for your Azure Database for MySQL by using the [Azure portal](how-to-data-encryption-portal.md) and [Azure CLI](how-to-data-encryption-cli.md). -* Learn about the storage type support for [Azure Database for MySQL - Single Server](concepts-pricing-tiers.md#storage) |
mysql | Concepts Data In Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-in-replication.md | - Title: Data-in Replication - Azure Database for MySQL -description: Learn about using Data-in Replication to synchronize from an external server into the Azure Database for MySQL service. ----- Previously updated : 06/20/2022---# Replicate data into Azure Database for MySQL ----Data-in Replication allows you to synchronize data from an external MySQL server into the Azure Database for MySQL service. The external server can be on-premises, in virtual machines, or a database service hosted by other cloud providers. Data-in Replication is based on the binary log (binlog) file position-based or GTID-based replication native to MySQL. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html). --## When to use Data-in Replication --The main scenarios in which to consider using Data-in Replication are: --- **Hybrid Data Synchronization:** With Data-in Replication, you can keep data synchronized between your on-premises servers and Azure Database for MySQL. This synchronization is useful for creating hybrid applications. This method is appealing when you have an existing local database server but want to move the data to a region closer to end users.-- **Multi-Cloud Synchronization:** For complex cloud solutions, use Data-in Replication to synchronize data between Azure Database for MySQL and different cloud providers, including virtual machines and database services hosted in those clouds.--For migration scenarios, use the [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) (DMS). --## Limitations and considerations --### Data not replicated --The [*mysql system database*](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) on the source server isn't replicated. In addition, changes to accounts and permissions on the source server aren't replicated. If you create an account on the source server and this account needs to access the replica server, manually create the same account on the replica server. To understand what tables are contained in the system database, see the [MySQL manual](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html). --### Filtering --To skip replicating tables from your source server (hosted on-premises, in virtual machines, or a database service hosted by other cloud providers), the `replicate_wild_ignore_table` parameter is supported. Optionally, update this parameter on the replica server hosted in Azure using the [Azure portal](how-to-server-parameters.md) or [Azure CLI](how-to-configure-server-parameters-using-cli.md), as shown in the sketch after this section. --To learn more about this parameter, review the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#option_mysqld_replicate-wild-ignore-table). --## Supported in General Purpose or Memory Optimized tier only --Data-in Replication is only supported in the General Purpose and Memory Optimized pricing tiers. --## Private Link support --Private Link for Azure Database for MySQL supports only inbound connections. Because Data-in Replication requires an outbound connection from the service, Private Link isn't supported for Data-in Replication traffic. -->[!NOTE] ->GTID is supported on versions 5.7 and 8.0 and only on servers that support storage up to 16 TB (General purpose storage v2).
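As an illustration of the Filtering section above, here's a hedged Azure CLI sketch that sets `replicate_wild_ignore_table` on an Azure-hosted replica server; the server, resource group, and table patterns are placeholders:

```bash
# Skip replicating tables that match these wildcard patterns
# (comma-separated db_name.table_name patterns).
az mysql server configuration set \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name replicate_wild_ignore_table \
  --value "salesdb.temp_%,analyticsdb.%"
```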
--### Requirements --- The source server version must be at least MySQL version 5.6.-- The source and replica server versions must be the same. For example, both must be MySQL version 5.6 or both must be MySQL version 5.7.-- Each table must have a primary key.-- The source server should use the MySQL InnoDB engine.-- The user must have permissions to configure binary logging and create new users on the source server.-- If the source server has SSL enabled, ensure the SSL CA certificate provided for the domain has been included in the `mysql.az_replication_change_master` or `mysql.az_replication_change_master_with_gtid` stored procedure. Refer to the following [examples](./how-to-data-in-replication.md#link-source-and-replica-servers-to-start-data-in-replication) and the `master_ssl_ca` parameter.-- Ensure that the source server's IP address has been added to the Azure Database for MySQL replica server's firewall rules. Update the firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md).-- Ensure that the machine hosting the source server allows both inbound and outbound traffic on port 3306.-- Ensure that the source server has a **public IP address**, that DNS is publicly accessible, or that the source server has a fully qualified domain name (FQDN).--## Next steps --- Learn how to [set up data-in replication](how-to-data-in-replication.md)-- Learn about [replicating in Azure with read replicas](concepts-read-replicas.md)-- Learn about how to [migrate data with minimal downtime using DMS](how-to-migrate-online.md) |
mysql | Concepts Database Application Development | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-database-application-development.md | - Title: Application development - Azure Database for MySQL -description: Introduces design considerations that a developer should follow when writing application code to connect to Azure Database for MySQL ----- Previously updated : 06/20/2022---# Application development overview for Azure Database for MySQL ----This article discusses design considerations that a developer should follow when writing application code to connect to Azure Database for MySQL. --> [!TIP] -> For a tutorial showing you how to create a server, create a server-based firewall, view server properties, create a database, and connect and query by using MySQL Workbench and mysql.exe, see [Design your first Azure Database for MySQL database](tutorial-design-database-using-portal.md) --## Language and platform -There are code samples available for various programming languages and platforms. You can find links to the code samples at: -[Connectivity libraries used to connect to Azure Database for MySQL](../flexible-server/concepts-connection-libraries.md) --## Tools -Azure Database for MySQL uses the MySQL community version, compatible with common MySQL management tools such as MySQL Workbench, utilities such as mysql.exe, [phpMyAdmin](https://www.phpmyadmin.net/), [Navicat](https://www.navicat.com/products/navicat-for-mysql), [dbForge Studio for MySQL](https://www.devart.com/dbforge/mysql/studio/), and others. You can also use the Azure portal, Azure CLI, and REST APIs to interact with the database service. --## Resource limitations -Azure Database for MySQL manages the resources available to a server by using two different mechanisms: -- Resource governance.-- Enforcement of limits.--## Security -Azure Database for MySQL provides resources for limiting access, protecting data, configuring users and roles, and monitoring activities on a MySQL database. --## Authentication -Azure Database for MySQL supports server authentication of users and logins. --## Resiliency -When a transient error occurs while connecting to a MySQL database, your code should retry the call. We recommend that the retry logic use back-off logic so that it doesn't overwhelm the database with multiple clients retrying simultaneously. --- Code samples: For code samples that illustrate retry logic, see samples for the language of your choice at: [Connectivity libraries used to connect to Azure Database for MySQL](../flexible-server/concepts-connection-libraries.md)--## Managing connections -Database connections are a limited resource, so we recommend sensible use of connections when accessing your MySQL database to achieve better performance. -- Access the database by using connection pooling or persistent connections.-- Access the database by using connections with a short life span. -- Use retry logic in your application at the point of the connection attempt, to catch failures resulting from concurrent connections having reached the maximum allowed. In the retry logic, set a short delay, and then wait a random amount of time before the additional connection attempts; see the sketch after this article. |
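Here's a minimal sketch of that retry guidance, using the mysql command-line client with exponential back-off and random jitter. The server name, user, and the MYSQL_PASSWORD environment variable are assumptions for illustration:

```bash
#!/usr/bin/env bash
# Retry a connection with exponential back-off plus jitter, so that
# multiple clients don't retry in lockstep and overwhelm the server.
HOST="mydemoserver.mysql.database.azure.com"
USER="myadmin@mydemoserver"

for attempt in 1 2 3 4 5; do
  if mysql --host="$HOST" --user="$USER" --password="$MYSQL_PASSWORD" \
       --ssl-mode=REQUIRED --execute "SELECT 1;" >/dev/null 2>&1; then
    echo "Connected on attempt $attempt."
    exit 0
  fi
  # Double the base delay on each attempt and add up to 5 seconds of jitter.
  sleep $(( (2 ** attempt) + (RANDOM % 5) ))
done

echo "All connection attempts failed." >&2
exit 1
```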
mysql | Concepts Firewall Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-firewall-rules.md | - Title: Firewall rules - Azure Database for MySQL -description: Learn about using firewall rules to enable connections to your Azure Database for MySQL server. ----- Previously updated : 06/20/2022---# Azure Database for MySQL server firewall rules ----Firewalls prevent all access to your database server until you specify which computers have permission. The firewall grants access to the server based on the originating IP address of each request. --To configure a firewall, create firewall rules that specify ranges of acceptable IP addresses. You can create firewall rules at the server level. --**Firewall rules:** These rules enable clients to access your entire Azure Database for MySQL server, that is, all the databases within the same logical server. Server-level firewall rules can be configured by using the Azure portal or Azure CLI commands. To create server-level firewall rules, you must be the subscription owner or a subscription contributor. --## Firewall overview -All database access to your Azure Database for MySQL server is blocked by the firewall by default. To begin using your server from another computer, you need to specify one or more server-level firewall rules to enable access to your server. Use the firewall rules to specify which IP address ranges from the Internet to allow. Access to the Azure portal website itself isn't impacted by the firewall rules. --Connection attempts from the Internet and Azure must first pass through the firewall before they can reach your Azure Database for MySQL database, as shown in the following diagram: ---## Connecting from the Internet -Server-level firewall rules apply to all databases on the Azure Database for MySQL server. --If the IP address of the request is within one of the ranges specified in the server-level firewall rules, then the connection is granted. --If the IP address of the request is outside the ranges specified in any of the server-level firewall rules, then the connection request fails. --## Connecting from Azure -It is recommended that you find the outgoing IP address of any application or service and explicitly allow access to those individual IP addresses or ranges. For example, you can find the outgoing IP address of an Azure App Service, or use a public IP tied to a virtual machine or other resource (see below for info on connecting with a virtual machine's private IP over service endpoints). --If a fixed outgoing IP address isn't available for your Azure service, you can consider enabling connections from all Azure datacenter IP addresses. This setting can be enabled from the Azure portal by setting the **Allow access to Azure services** option to **ON** from the **Connection security** pane and selecting **Save**. From the Azure CLI, a firewall rule setting with starting and ending address equal to 0.0.0.0 does the equivalent. If the connection attempt isn't allowed, the request doesn't reach the Azure Database for MySQL server. --> [!IMPORTANT] -> The **Allow access to Azure services** option configures the firewall to allow all connections from Azure, including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
-> ---### Connecting from a VNet -To connect securely to your Azure Database for MySQL server from a VNet, consider using [VNet service endpoints](./concepts-data-access-and-security-vnet.md). --## Programmatically managing firewall rules -In addition to the Azure portal, firewall rules can be managed programmatically by using the Azure CLI, as shown in the sketch after this article. See also [Create and manage Azure Database for MySQL firewall rules using Azure CLI](./how-to-manage-firewall-using-cli.md) --## Troubleshooting firewall issues -Consider the following points when access to the Azure Database for MySQL server service doesn't behave as expected: --* **Changes to the allow list haven't taken effect yet:** There may be as much as a five-minute delay for changes to the Azure Database for MySQL server firewall configuration to take effect. --* **The login isn't authorized, or an incorrect password was used:** If a login doesn't have permissions on the Azure Database for MySQL server or the password used is incorrect, the connection to the Azure Database for MySQL server is denied. Creating a firewall setting only provides clients with an opportunity to attempt connecting to your server; each client must still provide the necessary security credentials. --* **Dynamic IP address:** If you have an Internet connection with dynamic IP addressing and you're having trouble getting through the firewall, you can try one of the following solutions: -- * Ask your Internet Service Provider (ISP) for the IP address range assigned to your client computers that access the Azure Database for MySQL server, and then add the IP address range as a firewall rule. -- * Get static IP addressing instead for your client computers, and then add the IP addresses as firewall rules. --* **Server's IP appears to be public:** Connections to the Azure Database for MySQL server are routed through a publicly accessible Azure gateway. However, the actual server IP is protected by the firewall. For more information, visit the [connectivity architecture article](concepts-connectivity-architecture.md). --* **Cannot connect from an Azure resource with an allowed IP:** Check whether the **Microsoft.Sql** service endpoint is enabled for the subnet you're connecting from. If **Microsoft.Sql** is enabled, it indicates that you only want to use [VNet service endpoint rules](concepts-data-access-and-security-vnet.md) on that subnet. -- For example, you may see the following error if you're connecting from an Azure VM in a subnet that has **Microsoft.Sql** enabled but has no corresponding VNet rule: - `FATAL: Client from Azure Virtual Networks is not allowed to access the server` --* **Firewall rules aren't available for the IPv6 format:** The firewall rules must be in IPv4 format. If you specify firewall rules in IPv6 format, a validation error is shown. --## Next steps --* [Create and manage Azure Database for MySQL firewall rules using the Azure portal](./how-to-manage-firewall-using-portal.md) -* [Create and manage Azure Database for MySQL firewall rules using Azure CLI](./how-to-manage-firewall-using-cli.md) -* [VNet service endpoints in Azure Database for MySQL](./concepts-data-access-and-security-vnet.md) |
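To complement the *Programmatically managing firewall rules* section above, here's a minimal Azure CLI sketch of the two rule types this article discusses: an explicit client address and the special 0.0.0.0 rule that's equivalent to **Allow access to Azure services**. The server, resource group, and addresses are placeholders:

```bash
# Allow a specific client IP address (vary start/end to allow a range).
az mysql server firewall-rule create \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name AllowMyClientIP \
  --start-ip-address 203.0.113.5 \
  --end-ip-address 203.0.113.5

# Equivalent of the portal's "Allow access to Azure services" toggle.
az mysql server firewall-rule create \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name AllowAllAzureIPs \
  --start-ip-address 0.0.0.0 \
  --end-ip-address 0.0.0.0
```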
mysql | Concepts High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-high-availability.md | - Title: High availability - Azure Database for MySQL -description: This article provides information on high availability in Azure Database for MySQL ----- Previously updated : 06/20/2022---# High availability in Azure Database for MySQL ----The Azure Database for MySQL service provides a guaranteed high level of availability with the financially backed service level agreement (SLA) of [99.99%](https://azure.microsoft.com/support/legal/sla/mysql) uptime. Azure Database for MySQL provides high availability during planned events, such as user-initiated scale compute operations, and also when unplanned events such as underlying hardware, software, or network failures occur. Azure Database for MySQL can quickly recover from most critical circumstances, ensuring virtually no application downtime when using this service. --Azure Database for MySQL is suitable for running mission-critical databases that require high uptime. Built on Azure architecture, the service has inherent high availability, redundancy, and resiliency capabilities to mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components. --## Components in Azure Database for MySQL --| **Component** | **Description**| -| | -- | -| <b>MySQL Database Server | Azure Database for MySQL provides security, isolation, resource safeguards, and fast restart capability for database servers. These capabilities facilitate operations such as scaling and database server recovery after an outage, which complete in 60-120 seconds depending on the transactional activity on the database. <br/> Data modifications in the database server typically occur in the context of a database transaction. All database changes are recorded synchronously in the form of write ahead logs (ib_log) on Azure Storage, which is attached to the database server. During the database [checkpoint](https://dev.mysql.com/doc/refman/5.7/en/innodb-checkpoints.html) process, data pages from the database server memory are also flushed to the storage. | -| <b>Remote Storage | All MySQL physical data files and log files are stored on Azure Storage, which is architected to store three copies of data within a region to ensure data redundancy, availability, and reliability. The storage layer is also independent of the database server. It can be detached from a failed database server and reattached to a new database server within 60 seconds. Also, Azure Storage continuously monitors for any storage faults. If a block corruption is detected, it's automatically fixed by instantiating a new storage copy. | -| <b>Gateway | The Gateway acts as a database proxy and routes all client connections to the database server. | --## Planned downtime mitigation -Azure Database for MySQL is architected to provide high availability during planned downtime operations. ---Here are some planned maintenance scenarios: --| **Scenario** | **Description**| -| | -- | -| <b>Compute scale up/down | When the user performs a compute scale up/down operation, a new database server is provisioned using the scaled compute configuration. In the old database server, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, and then it's shut down. The storage is then detached from the old database server and attached to the new database server.
When the client application retries the connection, or tries to make a new connection, the Gateway directs the connection request to the new database server.| -| <b>Scaling Up Storage | Scaling up the storage is an online operation and doesn't interrupt the database server.| -| <b>New Software Deployment (Azure) | New feature rollouts or bug fixes automatically happen as part of the service's planned maintenance. For more information, refer to the [documentation](concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).| -| <b>Minor version upgrades | Azure Database for MySQL automatically patches database servers to the minor version determined by Azure. It happens as part of the service's planned maintenance. During planned maintenance, there can be database server restarts or failovers, which might lead to brief unavailability of the database servers for end users. Azure Database for MySQL servers run in containers, so database server restarts are typically quick and expected to complete in 60-120 seconds. The entire planned maintenance event, including each server restart, is carefully monitored by the engineering team. The server failover time depends on the database recovery time, which can cause the database to take longer to come online if there's heavy transactional activity on the server at the time of failover. To avoid a longer restart time, it's recommended to avoid any long-running transactions (bulk loads) during planned maintenance events. For more information, refer to the [documentation](concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).| ---## Unplanned downtime mitigation --Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware faults, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in 60-120 seconds. The remote storage is automatically attached to the new database server. The MySQL engine performs the recovery operation using the WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While unplanned downtime can't be avoided, Azure Database for MySQL mitigates the downtime by automatically performing recovery operations at both the database server and storage layers without requiring human intervention. ----### Unplanned downtime: failure scenarios and service recovery -Here are some failure scenarios and how Azure Database for MySQL automatically recovers: --| **Scenario** | **Automatic recovery** | -| - | - | -| <B>Database server failure | If the database server is down because of some underlying hardware fault, active connections are dropped, and any inflight transactions are aborted. A new database server is automatically deployed, and the remote data storage is attached to the new database server. After the database recovery is complete, clients can connect to the new database server through the Gateway. <br /> <br /> Applications using the MySQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. When the application retries, the Gateway transparently redirects the connection to the newly created database server.
| -| <B>Storage failure | Applications don't see any impact from storage-related issues such as a disk failure or a physical block corruption. Because the data is stored in three copies, the surviving storage serves the copy of the data. Block corruptions are automatically corrected. If a copy of data is lost, a new copy of the data is automatically created. | --Here are some failure scenarios that require user action to recover: --| **Scenario** | **Recovery plan** | -| - | - | -| <b> Region failure | Failure of a region is a rare event. However, if you need protection from a region failure, you can configure one or more read replicas in other regions for disaster recovery (DR). (See [this article](how-to-read-replicas-portal.md) about creating and managing read replicas for details.) In the event of a region-level failure, you can manually promote the read replica configured in the other region to be your production database server. | -| <b> Logical/user errors | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](concepts-backup.md) (PITR) by restoring and recovering the data until the time just before the error occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [mysqldump](concepts-migrate-dump-restore.md), and then use [restore](concepts-migrate-dump-restore.md#restore-your-mysql-database-using-command-line) to restore those tables into your database. | ----## Summary --Azure Database for MySQL provides fast restart capability of database servers, redundant storage, and efficient routing from the Gateway. For additional data protection, you can configure backups to be geo-replicated, and also deploy one or more read replicas in other regions. With inherent high availability capabilities, Azure Database for MySQL protects your databases from the most common outages, and offers an industry-leading, financially backed [99.99% uptime SLA](https://azure.microsoft.com/support/legal/sla/mysql). All these availability and reliability capabilities make Azure the ideal platform to run your mission-critical applications. --## Next steps -- Learn about [Azure regions](../../availability-zones/az-overview.md)-- Learn about [handling transient connectivity errors](concepts-connectivity.md)-- Learn how to [replicate your data with read replicas](how-to-read-replicas-portal.md) |
mysql | Concepts Infrastructure Double Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-infrastructure-double-encryption.md | - Title: Infrastructure double encryption - Azure Database for MySQL -description: Learn about using infrastructure double encryption to add a second layer of encryption with service-managed keys. ----- Previously updated : 06/20/2022---# Azure Database for MySQL Infrastructure double encryption ----Azure Database for MySQL uses storage [encryption of data at-rest](concepts-security.md#at-rest) for data using Microsoft's managed keys. Data, including backups, is encrypted on disk, and this encryption is always on and can't be disabled. The encryption uses a FIPS 140-2 validated cryptographic module and an AES 256-bit cipher for the Azure storage encryption. --Infrastructure double encryption adds a second layer of encryption using service-managed keys. It uses a FIPS 140-2 validated cryptographic module, but with a different encryption algorithm. This provides an additional layer of protection for your data at rest. The key used in infrastructure double encryption is also managed by the Azure Database for MySQL service. Infrastructure double encryption is not enabled by default, because the additional layer of encryption can have a performance impact. --> [!NOTE] -> Like data encryption at rest, this feature is supported only on "General purpose storage v2 (support up to 16 TB)" storage available in the General Purpose and Memory Optimized pricing tiers. Refer to [Storage concepts](concepts-pricing-tiers.md#storage) for more details. For other limitations, refer to the [limitations](concepts-infrastructure-double-encryption.md#limitations) section. --Infrastructure layer encryption has the benefit of being implemented at the layer closest to the storage device or network wires. Azure Database for MySQL implements the two layers of encryption using service-managed keys. Although still technically in the service layer, it's very close to the hardware that stores the data at rest. You can still optionally enable data encryption at rest using a [customer-managed key](concepts-data-encryption-mysql.md) for the provisioned MySQL server. --Implementation at the infrastructure layer also supports a diversity of keys. Infrastructure must be aware of different clusters of machines and networks. As such, different keys are used to minimize the blast radius of infrastructure attacks and a variety of hardware and network failures. --> [!NOTE] -> Using infrastructure double encryption has a 5-10% impact on the throughput of your Azure Database for MySQL server due to the additional encryption process. --## Benefits --Infrastructure double encryption for Azure Database for MySQL provides the following benefits: --1. **Additional diversity of crypto implementation** - The planned move to hardware-based encryption will further diversify the implementations by providing a hardware-based implementation in addition to the software-based implementation. -2. **Implementation errors** - Two layers of encryption at the infrastructure layer protect against any errors in caching or memory management in higher layers that expose plaintext data. Additionally, the two layers also protect against errors in the implementation of the encryption in general. --The combination of these provides strong protection against common threats and weaknesses used to attack cryptography.
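Because infrastructure double encryption can only be configured when the server is created (see the Important note below), provisioning might look like the following hedged Azure CLI sketch; all names and the password are placeholders:

```bash
# Create a General Purpose server with infrastructure double encryption enabled.
az mysql server create \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --location westus2 \
  --admin-user myadmin \
  --admin-password '<secure-password>' \
  --sku-name GP_Gen5_2 \
  --infrastructure-encryption Enabled
```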
--## Supported scenarios with infrastructure double encryption --The encryption capabilities that are provided by Azure Database for MySQL can be used together. Below is a summary of the various scenarios that you can use: --| ## | Default encryption | Infrastructure double encryption | Data encryption using customer-managed keys | -|:|::|:--:|:--:| -| 1 | *Yes* | *No* | *No* | -| 2 | *Yes* | *Yes* | *No* | -| 3 | *Yes* | *No* | *Yes* | -| 4 | *Yes* | *Yes* | *Yes* | -| | | | | --> [!Important] -> - Scenarios 2 and 4 can introduce a 5-10 percent drop in throughput, based on the workload type, for the Azure Database for MySQL server due to the additional layer of infrastructure encryption. -> - Configuring infrastructure double encryption for Azure Database for MySQL is only allowed during server create. Once the server is provisioned, you can't change the storage encryption. However, you can still enable data encryption using customer-managed keys for a server created with or without infrastructure double encryption. --## Limitations --For Azure Database for MySQL, the support for infrastructure double encryption has a few limitations: --* Support for this functionality is limited to the **General Purpose** and **Memory Optimized** pricing tiers. -* This feature is only supported in regions and on servers that support general purpose storage v2 (up to 16 TB). For the list of Azure regions supporting storage up to 16 TB, refer to the storage section in the documentation [here](concepts-pricing-tiers.md#storage). -- > [!NOTE] - > - For all new MySQL servers created in the [Azure regions](concepts-pricing-tiers.md#storage) supporting general purpose storage v2, support for encryption with customer-managed keys is **available**. A Point In Time Restored (PITR) server or read replica doesn't qualify, though in theory they're 'new'. - > - To validate whether your provisioned server is on general purpose storage v2, go to the pricing tier blade in the portal and check the max storage size supported by your provisioned server. If you can move the slider up only to 4 TB, your server is on general purpose storage v1 and doesn't support encryption with customer-managed keys. However, the data is encrypted using service-managed keys at all times. ---## Next steps --Learn how to [set up infrastructure double encryption for Azure Database for MySQL](how-to-double-encryption.md). |
mysql | Concepts Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-limits.md | - Title: Limitations - Azure Database for MySQL -description: This article describes limitations in Azure Database for MySQL, such as the number of connections and storage engine options. ----- Previously updated : 06/20/2022---# Limitations in Azure Database for MySQL ----The following sections describe capacity, storage engine support, privilege support, data manipulation statement support, and functional limits in the database service. Also see [general limitations](https://dev.mysql.com/doc/mysql-reslimits-excerpt/5.6/en/limits.html) applicable to the MySQL database engine. --## Server parameters --> [!NOTE] -> If you're looking for min/max values for server parameters like `max_connections` and `innodb_buffer_pool_size`, this information has moved to the **[server parameters](./concepts-server-parameters.md)** article. --Azure Database for MySQL supports tuning the values of server parameters. The min and max values of some parameters (for example, `max_connections`, `join_buffer_size`, `query_cache_size`) are determined by the pricing tier and vCores of the server. Refer to [server parameters](./concepts-server-parameters.md) for more information about these limits. --Upon initial deployment, an Azure Database for MySQL server includes system tables for time zone information, but these tables aren't populated. The time zone tables can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench; see the sketch after this article for an example. Refer to the [Azure portal](how-to-server-parameters.md#working-with-the-time-zone-parameter) or [Azure CLI](how-to-configure-server-parameters-using-cli.md#working-with-the-time-zone-parameter) articles for how to call the stored procedure and set the global or session-level time zones. --Password plugins such as "validate_password" and "caching_sha2_password" aren't supported by the service. --## Storage engines --MySQL supports many storage engines. On Azure Database for MySQL, the following storage engines are supported and unsupported: --### Supported -- [InnoDB](https://dev.mysql.com/doc/refman/5.7/en/innodb-introduction.html)-- [MEMORY](https://dev.mysql.com/doc/refman/5.7/en/memory-storage-engine.html)--### Unsupported -- [MyISAM](https://dev.mysql.com/doc/refman/5.7/en/myisam-storage-engine.html)-- [BLACKHOLE](https://dev.mysql.com/doc/refman/5.7/en/blackhole-storage-engine.html)-- [ARCHIVE](https://dev.mysql.com/doc/refman/5.7/en/archive-storage-engine.html)-- [FEDERATED](https://dev.mysql.com/doc/refman/5.7/en/federated-storage-engine.html)--## Privileges & data manipulation support --Many server parameters and settings can inadvertently degrade server performance or negate the ACID properties of the MySQL server. To maintain the service integrity and SLA at a product level, this service doesn't expose multiple roles. --The MySQL service doesn't allow direct access to the underlying file system. Some data manipulation commands aren't supported. --### Unsupported --The following are unsupported: -- DBA role: Restricted. Alternatively, you can use the administrator user (created during new server creation), which allows you to perform most DDL and DML statements. -- SUPER privilege: Similarly, the [SUPER privilege](https://dev.mysql.com/doc/refman/5.7/en/privileges-provided.html#priv_super) is restricted.-- DEFINER: Requires super privileges to create and is restricted.
If importing data using a backup, remove the `CREATE DEFINER` commands manually or by using the `--skip-definer` command when performing a [mysqlpump](https://dev.mysql.com/doc/refman/5.7/en/mysqlpump.html).-- System databases: The [mysql system database](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) is read-only and used to support various PaaS functionality. You can't make changes to the `mysql` system database.-- `SELECT ... INTO OUTFILE`: Not supported in the service.-- `LOAD_FILE(file_name)`: Not supported in the service.-- [BACKUP_ADMIN](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_backup-admin) privilege: Granting the BACKUP_ADMIN privilege isn't supported for taking backups using any [utility tools](./how-to-decide-on-right-migration-tools.md).--### Supported -- `LOAD DATA INFILE` is supported, but the `[LOCAL]` parameter must be specified and directed to a UNC path (Azure storage mounted through SMB). Additionally, if you're using MySQL client version 8.0 or later, you need to include the `--local-infile=1` parameter in your connection string.---## Functional limitations --### Scale operations -- Dynamic scaling to and from the Basic pricing tier is currently not supported.-- Decreasing the server storage size isn't supported.--### Major version upgrades -- [Major version upgrade is supported for v5.6 to v5.7 upgrades only](how-to-major-version-upgrade.md). Upgrades to v8.0 aren't supported yet.--### Point-in-time-restore -- When using the PITR feature, the new server is created with the same configurations as the server it's based on.-- Restoring a deleted server isn't supported.--### VNet service endpoints -- Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.--### Storage size -- Refer to [pricing tiers](concepts-pricing-tiers.md#storage) for the storage size limits per pricing tier.--## Current known issues -- The MySQL server instance displays the wrong server version after the connection is established. To get the correct server instance engine version, use the `select version();` command.--## Next steps -- [What's available in each service tier](concepts-pricing-tiers.md)-- [Supported MySQL database versions](concepts-supported-versions.md) |
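As referenced in the Server parameters section above, here's a hedged sketch of populating the time zone tables with the `mysql.az_load_timezone` stored procedure and then setting the global time zone; the server name and zone are placeholders:

```bash
# Populate the time zone tables (one-time step, run as the admin user).
mysql --host=mydemoserver.mysql.database.azure.com \
  --user=myadmin@mydemoserver --password \
  --execute "CALL mysql.az_load_timezone();"

# Set the global time_zone server parameter to a named zone.
az mysql server configuration set \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name time_zone \
  --value "US/Pacific"
```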
mysql | Concepts Migrate Dbforge Studio For Mysql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-dbforge-studio-for-mysql.md | - Title: Use dbForge Studio for MySQL to migrate a MySQL database to Azure Database for MySQL -description: This article demonstrates how to migrate to Azure Database for MySQL by using dbForge Studio for MySQL. ----- Previously updated : 06/20/2022---# Migrate data to Azure Database for MySQL with dbForge Studio for MySQL ----Looking to move your MySQL databases to Azure Database for MySQL? Consider using the migration tools in dbForge Studio for MySQL. With it, database transfer can be configured, saved, edited, automated, and scheduled. --To complete the examples in this article, you'll need to download and install [dbForge Studio for MySQL](https://www.devart.com/dbforge/mysql/studio/). --## Connect to Azure Database for MySQL --1. In dbForge Studio for MySQL, select **New Connection** from the **Database** menu. --1. Provide a host name and sign-in credentials. --1. Select **Test Connection** to check the configuration. ---## Migrate with the Backup and Restore functionality --You can choose from many options when using dbForge Studio for MySQL to migrate databases to Azure. If you need to move the entire database, it's best to use the **Backup and Restore** functionality. --In this example, we migrate the *sakila* database from a MySQL server to Azure Database for MySQL. The logic behind using the **Backup and Restore** functionality is to create a backup of the MySQL database and then restore it in Azure Database for MySQL. --### Back up the database --1. In dbForge Studio for MySQL, select **Backup Database** from the **Backup and Restore** menu. The **Database Backup Wizard** appears. --1. On the **Backup content** tab of the **Database Backup Wizard**, select the database objects you want to back up. --1. On the **Options** tab, configure the backup process to fit your requirements. -- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/back-up-wizard-options.png" alt-text="Screenshot showing the options pane of the Backup wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/back-up-wizard-options.png"::: --1. Select **Next**, and then specify error processing behavior and logging options. --1. Select **Backup**. --### Restore the database --1. In dbForge Studio for MySQL, connect to Azure Database for MySQL. [Refer to the instructions](#connect-to-azure-database-for-mysql). --1. Select **Restore Database** from the **Backup and Restore** menu. The **Database Restore Wizard** appears. --1. In the **Database Restore Wizard**, select a file with a database backup. -- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/restore-step-1.png" alt-text="Screenshot showing the Restore step of the Database Restore wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/restore-step-1.png"::: --1. Select **Restore**. --1. Check the result. --## Migrate with the Copy Databases functionality --The **Copy Databases** functionality in dbForge Studio for MySQL is similar to **Backup and Restore**, except that it doesn't require two steps to migrate a database. It also lets you transfer two or more databases at once. -->[!NOTE] -> The **Copy Databases** functionality is only available in the Enterprise edition of dbForge Studio for MySQL. --In this example, we migrate the *world_x* database from a MySQL server to Azure Database for MySQL.
--To migrate a database using the Copy Databases functionality: --1. In dbForge Studio for MySQL, select **Copy Databases** from the **Database** menu. --1. On the **Copy Databases** tab, specify the source and target connection. Also select the databases to be migrated. -- We enter the Azure Database for MySQL connection and select the *world_x* database. Select the green arrow to start the process. --1. Check the result. --You'll see that the *world_x* database has successfully appeared in Azure Database for MySQL. ---## Migrate a database with schema and data comparison --You can choose from many options when using dbForge Studio for MySQL to migrate databases, schemas, and/or data to Azure. If you need to move selective tables from a MySQL database to Azure, it's best to use the **Schema Comparison** and the **Data Comparison** functionality. --In this example, we migrate the *world* database from a MySQL server to Azure Database for MySQL. --The logic behind this approach is to create an empty database in Azure Database for MySQL and synchronize it with the source MySQL database. We first use the **Schema Comparison** tool, and next we use the **Data Comparison** functionality. These steps ensure that the MySQL schemas and data are accurately moved to Azure. --To complete this exercise, you'll first need to [connect to Azure Database for MySQL](#connect-to-azure-database-for-mysql) and create an empty database. --### Schema synchronization --1. On the **Comparison** menu, select **New Schema Comparison**. The **New Schema Comparison Wizard** appears. --1. Choose your source and target, and then specify the schema comparison options. Select **Compare**. --1. In the comparison results grid that appears, select objects for synchronization. Select the green arrow button to open the **Schema Synchronization Wizard**. --1. Walk through the steps of the wizard to configure synchronization. Select **Synchronize** to deploy the changes. -- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/schema-sync-wizard.png" alt-text="Screenshot showing the schema synchronization wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/schema-sync-wizard.png"::: --### Data Comparison --1. On the **Comparison** menu, select **New Data Comparison**. The **New Data Comparison Wizard** appears. --1. Choose your source and target, and then specify the data comparison options. Change mappings if necessary, and then select **Compare**. --1. In the comparison results grid that appears, select objects for synchronization. Select the green arrow button to open the **Data Synchronization Wizard**. -- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/data-comp-result.png" alt-text="Screenshot showing the results of the data comparison." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/data-comp-result.png"::: --1. Walk through the steps of the wizard to configure synchronization. Select **Synchronize** to deploy the changes. --1. Check the result. -- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/data-sync-result.png" alt-text="Screenshot showing the results of the Data Synchronization wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/data-sync-result.png"::: --## Next steps -- [MySQL overview](overview.md) |
mysql | Concepts Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-monitoring.md | - Title: Monitoring - Azure Database for MySQL -description: This article describes the metrics for monitoring and alerting for Azure Database for MySQL, including CPU, storage, and connection statistics. ------ Previously updated : 06/20/2022---# Monitoring in Azure Database for MySQL ----Monitoring data about your servers helps you troubleshoot and optimize for your workload. Azure Database for MySQL provides various metrics that give insight into the behavior of your server. --## Metrics --All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics; see the sketch after this section for a CLI example. For step-by-step guidance, see [How to set up alerts](how-to-alert-on-metric.md). Other tasks include setting up automated actions, performing advanced analytics, and archiving history. For more information, see the [Azure Metrics Overview](../../azure-monitor/data-platform.md). --### List of metrics --These metrics are available for Azure Database for MySQL: --|Metric|Metric Display Name|Unit|Description| -||||| -|cpu_percent|CPU percent|Percent|The percentage of CPU in use.| -|memory_percent|Memory percent|Percent|The percentage of memory in use.| -|io_consumption_percent|IO percent|Percent|The percentage of IO in use. (Not applicable for Basic tier servers.)| -|storage_percent|Storage percentage|Percent|The percentage of storage used out of the server's maximum.| -|storage_used|Storage used|Bytes|The amount of storage in use. The storage used by the service may include the database files, transaction logs, and the server logs.| -|serverlog_storage_percent|Server Log storage percent|Percent|The percentage of server log storage used out of the server's maximum server log storage.| -|serverlog_storage_usage|Server Log storage used|Bytes|The amount of server log storage in use.| -|serverlog_storage_limit|Server Log storage limit|Bytes|The maximum server log storage for this server.| -|storage_limit|Storage limit|Bytes|The maximum storage for this server.| -|active_connections|Active Connections|Count|The number of active connections to the server.| -|connections_failed|Failed Connections|Count|The number of failed connections to the server.| -|seconds_behind_master|Replication lag in seconds|Count|The number of seconds the replica server is lagging against the source server. (Not applicable for Basic tier servers.)| -|network_bytes_egress|Network Out|Bytes|Network Out across active connections.| -|network_bytes_ingress|Network In|Bytes|Network In across active connections.| -|backup_storage_used|Backup Storage Used|Bytes|The amount of backup storage used. This metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained in the [concepts article](concepts-backup.md). For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.| --## Server logs --You can enable slow query and audit logging on your server. These logs are also available through Azure Diagnostic Logs in Azure Monitor logs, Event Hubs, and a storage account. To learn more about logging, visit the [audit logs](concepts-audit-logs.md) and [slow query logs](concepts-server-logs.md) articles.
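For example, here's a hedged Azure CLI sketch of an alert on the `cpu_percent` metric listed above, assuming an existing action group; all names are placeholders:

```bash
# Look up the server's resource ID to use as the alert scope.
SERVER_ID=$(az mysql server show \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --query id --output tsv)

# Alert when average CPU exceeds 80 percent over a 5-minute window.
az monitor metrics alert create \
  --name cpu-percent-high \
  --resource-group myresourcegroup \
  --scopes "$SERVER_ID" \
  --condition "avg cpu_percent > 80" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --action mydemoactiongroup
```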
--## Query Store --[Query Store](concepts-query-store.md) is a feature that keeps track of query performance over time including query runtime statistics and wait events. The feature persists query runtime performance information in the **mysql** schema. You can control the collection and storage of data via various configuration knobs. --## Query Performance Insight --[Query Performance Insight](concepts-query-performance-insight.md) works in conjunction with Query Store to provide visualizations accessible from the Azure portal. These charts enable you to identify key queries that impact performance. Query Performance Insight is accessible in the **Intelligent Performance** section of your Azure Database for MySQL server's portal page. --## Performance Recommendations --The [Performance Recommendations](concepts-performance-recommendations.md) feature identifies opportunities to improve workload performance. Performance Recommendations provides you with recommendations for creating new indexes that have the potential to improve the performance of your workloads. To produce index recommendations, the feature takes into consideration various database characteristics, including its schema and the workload as reported by Query Store. After implementing any performance recommendation, customers should test performance to evaluate the impact of those changes. --## Planned maintenance notification --[Planned maintenance notifications](./concepts-planned-maintenance-notification.md) allow you to receive alerts for upcoming planned maintenance to your Azure Database for MySQL. These notifications are integrated with [Service Health's](../../service-health/overview.md) planned maintenance and allow you to view all scheduled maintenance for your subscriptions in one place. It also helps to scale the notification to the right audiences for different resource groups, as you may have different contacts responsible for different resources. You will receive the notification about the upcoming maintenance 72 hours before the event. --Learn more about how to set up notifications in the [planned maintenance notifications](./concepts-planned-maintenance-notification.md) document. --## Next steps --- See [How to set up alerts](how-to-alert-on-metric.md) for guidance on creating an alert on a metric.-- For more information on how to access and export metrics using the Azure portal, REST API, or CLI, see the [Azure Metrics Overview](../../azure-monitor/data-platform.md).-- Read our blog on [best practices for monitoring your server](https://azure.microsoft.com/blog/best-practices-for-alerting-on-metrics-with-azure-database-for-mysql-monitoring/).-- Learn more about [planned maintenance notifications](./concepts-planned-maintenance-notification.md) in Azure Database for MySQL - Single Server |
mysql | Concepts Performance Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-performance-recommendations.md | - Title: Performance recommendations - Azure Database for MySQL -description: This article describes the Performance Recommendations feature in Azure Database for MySQL ----- Previously updated : 06/20/2022---# Performance Recommendations in Azure Database for MySQL ----**Applies to:** Azure Database for MySQL 5.7, 8.0 --The Performance Recommendations feature analyzes your databases to create customized suggestions for improved performance. To produce the recommendations, the analysis looks at various database characteristics, including the schema. Enable [Query Store](concepts-query-store.md) on your server to fully utilize the Performance Recommendations feature. If the performance schema is OFF, turning on Query Store enables performance_schema and a subset of performance schema instruments required for the feature. After implementing any performance recommendation, you should test performance to evaluate the impact of those changes. --## Permissions --**Owner** or **Contributor** permissions are required to run analysis using the Performance Recommendations feature. --## Performance recommendations --The [Performance Recommendations](concepts-performance-recommendations.md) feature analyzes workloads across your server to identify indexes with the potential to improve performance. --Open **Performance Recommendations** from the **Intelligent Performance** section of the menu bar on the Azure portal page for your MySQL server. ---Select **Analyze** and choose a database, which begins the analysis. Depending on your workload, the analysis may take several minutes to complete. Once the analysis is done, there's a notification in the portal. Analysis performs a deep examination of your database. We recommend you perform analysis during off-peak periods. --The **Recommendations** window shows a list of recommendations, if any were found, along with the related query ID that generated each recommendation. With the query ID, you can use the [mysql.query_store](concepts-query-store.md#mysqlquery_store) view to learn more about the query; see the sketch after this article for an example. ---Recommendations aren't automatically applied. To apply a recommendation, copy the query text and run it from your client of choice. Remember to test and monitor to evaluate the recommendation. --## Recommendation types --### Index recommendations --*Create Index* recommendations suggest new indexes to speed up the most frequently run or time-consuming queries in the workload. This recommendation type requires [Query Store](concepts-query-store.md) to be enabled. Query Store collects query information and provides the detailed query runtime and frequency statistics that the analysis uses to make the recommendation. --### Query recommendations --Query recommendations suggest optimizations and rewrites for queries in the workload. By identifying MySQL query anti-patterns and fixing them syntactically, the performance of time-consuming queries can be improved. This recommendation type requires Query Store to be enabled. Query Store collects query information and provides the detailed query runtime and frequency statistics that the analysis uses to make the recommendation. --## Next steps -- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for MySQL. |
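To follow up on a recommendation's query ID, a sketch of looking it up in the `mysql.query_store` view mentioned above might look like this; the server name and query ID are placeholders:

```bash
# Inspect the runtime statistics behind a recommendation's query ID.
mysql --host=mydemoserver.mysql.database.azure.com \
  --user=myadmin@mydemoserver --password \
  --execute "SELECT * FROM mysql.query_store WHERE query_id = '12345';"
```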
mysql | Concepts Planned Maintenance Notification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-planned-maintenance-notification.md | - Title: Planned maintenance notification - Azure Database for MySQL - Single Server -description: This article describes the Planned maintenance notification feature in Azure Database for MySQL - Single Server ----- Previously updated : 06/20/2022---# Planned maintenance notification in Azure Database for MySQL - Single Server ----Learn how to prepare for planned maintenance events on your Azure Database for MySQL. --## What is planned maintenance? --The Azure Database for MySQL service performs automated patching of the underlying hardware, OS, and database engine. The patch includes new service features, security, and software updates. For the MySQL engine, minor version upgrades are automatic and included as part of the patching cycle. No user action or configuration settings are required for patching. The patch is tested extensively and rolled out using safe deployment practices. --Planned maintenance is a maintenance window during which these service updates are deployed to servers in a given Azure region. During planned maintenance, a notification event is created to inform customers when the service update is deployed in the Azure region hosting their servers. The minimum duration between two planned maintenance windows is 30 days. You receive a notification of the next maintenance window 72 hours in advance. --## Planned maintenance - duration and customer impact --Planned maintenance for a given Azure region typically runs for 15 hours. The window also includes buffer time to execute a rollback plan if necessary. During planned maintenance, there can be database server restarts or failovers, which might lead to brief unavailability of the database servers for end users. Azure Database for MySQL servers run in containers, so database server restarts are typically quick and expected to complete in 60-120 seconds. The entire planned maintenance event, including each server restart, is carefully monitored by the engineering team. Server failover time depends on database recovery time, which can delay the database coming back online if there's heavy transactional activity on the server at the time of failover. To avoid a longer restart time, we recommend avoiding long-running transactions (such as bulk loads) during planned maintenance events. --In summary, while the planned maintenance event runs for 15 hours, the individual server impact generally lasts about 60 seconds, depending on the transactional activity on the server. A notification is sent 72 calendar hours before planned maintenance starts and another one while maintenance is in progress for a given region. --## How can I get notified of planned maintenance? --You can use the planned maintenance notifications feature to receive alerts for an upcoming planned maintenance event. You receive the notification about the upcoming maintenance 72 calendar hours before the event and another one while maintenance is in progress for a given region. --### Planned maintenance notification --> [!IMPORTANT] -> Planned maintenance notifications are currently available in preview in all regions **except** West Central US --**Planned maintenance notifications** allow you to receive alerts for upcoming planned maintenance events on your Azure Database for MySQL. 
These notifications are integrated with [Service Health's](../../service-health/overview.md) planned maintenance and allow you to view all scheduled maintenance for your subscriptions in one place. They also help you scale notifications to the right audiences for different resource groups, because you may have different contacts responsible for different resources. You receive the notification about the upcoming maintenance 72 calendar hours before the event. --We make every attempt to provide 72 hours' notice for all **planned maintenance notification** events. However, in cases of critical or security patches, notifications might be sent closer to the event or be omitted. --You can either check planned maintenance notifications in the Azure portal or configure alerts to be notified. --### Check planned maintenance notification from the Azure portal --1. In the [Azure portal](https://portal.azure.com), select **Service Health**. -2. Select the **Planned Maintenance** tab. -3. Select the **Subscription**, **Region**, and **Service** for which you want to check the planned maintenance notification. - -### To receive planned maintenance notification --1. In the [portal](https://portal.azure.com), select **Service Health**. -2. In the **Alerts** section, select **Health alerts**. -3. Select **+ Add service health alert**. -4. Fill out the required fields. -5. For **Event type**, select **Planned maintenance** or **Select all**. -6. In **Action groups**, define how you want to receive the alert (for example, get an email or trigger a logic app). -7. Ensure **Enable rule upon creation** is set to **Yes**. -8. Select **Create alert rule** to complete your alert. --For detailed steps on how to create **service health alerts**, refer to [Create activity log alerts on service notifications](../../service-health/alerts-activity-log-service-notifications-portal.md). --## Can I cancel or postpone planned maintenance? --Maintenance is needed to keep your server secure, stable, and up-to-date. The planned maintenance event can't be canceled or postponed. After the notification is sent for a given Azure region, the patching schedule can't be changed for any individual server in that region. The patch is rolled out for the entire region at once. The Azure Database for MySQL - Single Server service is designed for cloud-native applications that don't require granular control or customization of the service. If you need the ability to schedule maintenance for your servers, consider [Flexible Server](../flexible-server/overview.md). --## Are all the Azure regions patched at the same time? --No. Azure regions are patched during region-specific deployment windows, which generally stretch from 5 PM to 8 AM local time the next day in a given Azure region. Geo-paired Azure regions are patched on different days. For high availability and business continuity of database servers, we recommend using [cross-region read replicas](./concepts-read-replicas.md#cross-region-replication). --## Retry logic --A transient error, also known as a transient fault, is an error that resolves itself. [Transient errors](./concepts-connectivity.md#transient-errors) can occur during maintenance. Most of these events are automatically mitigated by the system in less than 60 seconds. Transient errors should be handled using [retry logic](./concepts-connectivity.md#handling-transient-errors), as in the sketch below. 
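A minimal retry sketch for transient errors, assuming the mysql client is installed; the server name, user, query, backoff, and attempt count are placeholders to adapt to your application:

```bash
#!/usr/bin/env bash
# Retry a connection with linear backoff; most maintenance-related transient
# errors are mitigated in under 60 seconds.
for attempt in 1 2 3 4 5; do
  if mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver \
      -p"$MYSQL_PASSWORD" -e "SELECT 1;"; then
    echo "Connected on attempt $attempt"
    break
  fi
  echo "Attempt $attempt failed; retrying in $((attempt * 5)) seconds..."
  sleep $((attempt * 5))
done
```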
---## Next steps --- See [How to set up alerts](how-to-alert-on-metric.md) for guidance on creating an alert on a metric.-- [Troubleshoot connection issues to Azure Database for MySQL - Single Server](how-to-troubleshoot-common-connection-issues.md)-- [Handle transient errors and connect efficiently to Azure Database for MySQL - Single Server](concepts-connectivity.md) |
mysql | Concepts Pricing Tiers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-pricing-tiers.md | - Title: Azure Database for MySQL - Single Server service tiers -description: Learn about the various service tiers for Azure Database for MySQL including compute generations, storage types, storage size, vCores, memory, and backup retention periods. ----- Previously updated : 06/20/2022---# Azure Database for MySQL - Single Server service tiers ----You can create an Azure Database for MySQL server in one of three different service tiers: Basic, General Purpose, and Memory Optimized. The service tiers are differentiated by the amount of compute in vCores that can be provisioned, memory per vCore, and the storage technology used to store the data. All resources are provisioned at the MySQL server level. A server can have one or many databases. --| Attribute | **Basic** | **General Purpose** | **Memory Optimized** | -|:|:-|:--|:| -| Compute generation | Gen 4, Gen 5 | Gen 4, Gen 5 | Gen 5 | -| vCores | 1, 2 | 2, 4, 8, 16, 32, 64 |2, 4, 8, 16, 32 | -| Memory per vCore | 2 GB | 5 GB | 10 GB | -| Storage size | 5 GB to 1 TB | 5 GB to 16 TB | 5 GB to 16 TB | -| Database backup retention period | 7 to 35 days | 7 to 35 days | 7 to 35 days | --To choose a pricing tier, use the following table as a starting point. --| Service tier | Target workloads | -|:-|:--| -| Basic | Workloads that require light compute and I/O performance. Examples include servers used for development or testing, or small-scale, infrequently used applications. | -| General Purpose | Most business workloads that require balanced compute and memory with scalable I/O throughput. Examples include servers for hosting web and mobile apps and other enterprise applications.| -| Memory Optimized | High-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency. Examples include servers for processing real-time data and high-performance transactional or analytical apps.| --> [!NOTE] -> Dynamic scaling to and from the Basic service tier is currently not supported. Servers with Basic tier SKUs can't be scaled up to the General Purpose or Memory Optimized tiers. --After you create a General Purpose or Memory Optimized server, the number of vCores, hardware generation, and pricing tier can be changed up or down within seconds. You can also independently increase the amount of storage and adjust the backup retention period up or down, with no application downtime. You can't change the backup storage type after a server is created. For more information, see the [Scale resources](#scale-resources) section. --## Compute generations and vCores --Compute resources are provided as vCores, which represent the logical CPU of the underlying hardware. China East 1, China North 1, US DoD Central, and US DoD East utilize Gen 4 logical CPUs that are based on Intel E5-2673 v3 (Haswell) 2.4-GHz processors. All other regions utilize Gen 5 logical CPUs that are based on Intel E5-2673 v4 (Broadwell) 2.3-GHz processors. --## Storage --The storage you provision is the amount of storage capacity available to your Azure Database for MySQL server. The storage is used for the database files, temporary files, transaction logs, and the MySQL server logs. The total amount of storage you provision also defines the I/O capacity available to your server. --Azure Database for MySQL - Single Server supports the following backend storage options for servers. 
--| Storage type | Basic | General purpose v1 | General purpose v2 | -|:|:-|:--|:| -| Storage size | 5 GB to 1 TB | 5 GB to 4 TB | 5 GB to 16 TB | -| Storage increment size | 1 GB | 1 GB | 1 GB | -| IOPS | Variable |3 IOPS/GB<br/>Min 100 IOPS<br/>Max 6,000 IOPS | 3 IOPS/GB<br/>Min 100 IOPS<br/>Max 20,000 IOPS | -->[!NOTE] -> Basic storage does not provide an IOPS guarantee. In General Purpose storage, the IOPS scale with the provisioned storage size in a 3:1 ratio. --### Basic storage -Basic storage is the backend storage supporting Basic pricing tier servers. Basic storage uses Azure standard storage in the backend, where provisioned IOPS are not guaranteed and latency is variable. The Basic tier is best suited for workloads that require light compute and I/O performance at low cost, such as development or small-scale, infrequently used applications. --### General purpose storage -General purpose storage is the backend storage supporting General Purpose and Memory Optimized tier servers. In General Purpose storage, the IOPS scale with the provisioned storage size in a 3:1 ratio. There are two generations of general purpose storage, as described below: --#### General purpose storage v1 (supports up to 4 TB) -General purpose storage v1 is based on the legacy storage technology, which can support up to 4 TB of storage and 6,000 IOPS per server. General purpose storage v1 is optimized to use memory from the compute nodes running the MySQL engine for local caching and backups. The backup process on general purpose storage v1 reads from the data and log files in the memory of the compute nodes and copies them to the target backup storage for retention of up to 35 days. As a result, memory and IO consumption of storage during backups is relatively higher. --All Azure regions support general purpose storage v1. --For a General Purpose or Memory Optimized server on general purpose storage v1, we recommend that you: --* Plan the compute SKU tier to account for 10-30% excess memory for storage caching and backup buffers. -* Provision 10% higher IOPS than the database workload requires, to account for backup IOs. -* Alternatively, migrate to general purpose storage v2 (described below), which supports up to 16 TB of storage, if the underlying storage infrastructure is available in your preferred Azure region (see the region list below). --#### General purpose storage v2 (supports up to 16 TB of storage) -General purpose storage v2 is based on the latest storage infrastructure, which can support up to 16 TB of storage and 20,000 IOPS. In a subset of Azure regions where the infrastructure is available, all newly provisioned servers land on general purpose storage v2 by default. General purpose storage v2 doesn't consume any memory from the MySQL compute node and provides more predictable IO latencies than general purpose storage v1. Backups on general purpose storage v2 servers are snapshot-based, with no additional IO overhead. On general purpose storage v2, MySQL server performance is expected to be higher than on general purpose storage v1 for the same provisioned storage and IOPS. There is no additional cost for general purpose storage that supports up to 16 TB of storage. For assistance with migration to 16-TB storage, open a support ticket from the Azure portal. 
--General purpose storage v2 is supported in the following Azure regions: --| Region | General purpose storage v2 availability | -| | | -| Australia East | :heavy_check_mark: | -| Australia South East | :heavy_check_mark: | -| Brazil South | :heavy_check_mark: | -| Canada Central | :heavy_check_mark: | -| Canada East | :heavy_check_mark: | -| Central US | :heavy_check_mark: | -| East US | :heavy_check_mark: | -| East US 2 | :heavy_check_mark: | -| East Asia | :heavy_check_mark: | -| Japan East | :heavy_check_mark: | -| Japan West | :heavy_check_mark: | -| Korea Central | :heavy_check_mark: | -| Korea South | :heavy_check_mark: | -| North Europe | :heavy_check_mark: | -| North Central US | :heavy_check_mark: | -| South Central US | :heavy_check_mark: | -| Southeast Asia | :heavy_check_mark: | -| UK South | :heavy_check_mark: | -| UK West | :heavy_check_mark: | -| West Central US | :heavy_check_mark: | -| West US | :heavy_check_mark: | -| West US 2 | :heavy_check_mark: | -| West Europe | :heavy_check_mark: | -| Central India | :heavy_check_mark: | -| France Central* | :heavy_check_mark: | -| UAE North* | :heavy_check_mark: | -| South Africa North* | :heavy_check_mark: | --> [!NOTE] -> *Regions where Azure Database for MySQL has General purpose storage v2 in Public Preview <br /> -> *For these Azure regions, you will have an option to create server in both General purpose storage v1 and v2. For the servers created with General purpose storage v2 in public preview, following are the limitations, <br /> -> * Geo-Redundant Backup will not be supported<br /> -> * The replica server should be in the regions which support General purpose storage v2. <br /> --### How can I determine which storage type my server is running on? --You can find the storage type of your server by going to **Settings** > **Compute + storage** page -* If the server is provisioned using Basic SKU, the storage type is Basic storage. -* If the server is provisioned using General Purpose or Memory Optimized SKU, the storage type is General Purpose storage - * If the maximum storage that can be provisioned on your server is up to 4-TB, the storage type is General Purpose storage v1. - * If the maximum storage that can be provisioned on your server is up to 16-TB, the storage type is General Purpose storage v2. --### Can I move from general purpose storage v1 to general purpose storage v2? if yes, how and is there any additional cost? -Yes, migration to general purpose storage v2 from v1 is supported if the underlying storage infrastructure is available in the Azure region of the source server. The migration and v2 storage is available at no additional cost. --### Can I grow storage size after server is provisioned? -You can add additional storage capacity during and after the creation of the server, and allow the system to grow storage automatically based on the storage consumption of your workload. --> [!IMPORTANT] -> Storage can only be scaled up, not down. --### Monitoring IO consumption -You can monitor your I/O consumption in the Azure portal or by using Azure CLI commands. The relevant metrics to monitor are [storage limit, storage percentage, storage used, and IO percent](concepts-monitoring.md).The monitoring metrics for the MySQL server with general purpose storage v1 reports the memory and IO consumed by the MySQL engine but may not capture the memory and IO consumption of the storage layer which is a limitation. 
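A hedged Azure CLI sketch for checking which SKU and storage profile a server uses, and for pulling the storage consumption metric mentioned above; resource names are placeholders, and output property names can vary by CLI version:

```bash
# Show the server's SKU and storage profile (placeholders throughout).
az mysql server show \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --query "{sku:sku.name, storageMb:storageProfile.storageMb, autoGrow:storageProfile.storageAutogrow}"

# Pull the storage_percent metric from Azure Monitor for the same server.
SERVER_ID=$(az mysql server show -g myresourcegroup -n mydemoserver --query id -o tsv)
az monitor metrics list --resource "$SERVER_ID" --metric storage_percent --interval PT1H
```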
--### Reaching the storage limit --Servers with less than or equal to 100 GB provisioned storage are marked read-only if the free storage is less than 5% of the provisioned storage size. Servers with more than 100 GB provisioned storage are marked read-only when the free storage is less than 5 GB. --For example, if you have provisioned 110 GB of storage, and the actual utilization goes over 105 GB, the server is marked read-only. Alternatively, if you have provisioned 5 GB of storage, the server is marked read-only when the free storage reaches less than 256 MB. --While the service attempts to make the server read-only, all new write transaction requests are blocked and existing active transactions continue to execute. When the server is set to read-only, all subsequent write operations and transaction commits fail. Read queries continue to work uninterrupted. After you increase the provisioned storage, the server is ready to accept write transactions again. --We recommend that you turn on storage auto-grow or set up an alert to notify you when your server storage is approaching the threshold, so you can avoid getting into the read-only state. For more information, see the documentation on [how to set up an alert](how-to-alert-on-metric.md). --### Storage auto-grow --Storage auto-grow prevents your server from running out of storage and becoming read-only. If storage auto-grow is enabled, the storage automatically grows without impacting the workload. For servers with less than or equal to 100 GB provisioned storage, the provisioned storage size is increased by 5 GB when the free storage is below 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. Maximum storage limits as specified above apply. --For example, if you have provisioned 1000 GB of storage, and the actual utilization goes over 990 GB, the server storage size is increased to 1050 GB. Alternatively, if you have provisioned 10 GB of storage, the storage size is increased to 15 GB when less than 1 GB of storage is free. --Remember that storage can only be scaled up, not down. --## Backup storage --Azure Database for MySQL provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any backup storage you use in excess of this amount is charged in GB per month. For example, if you provision a server with 250 GB of storage, you'll have 250 GB of additional storage available for server backups at no charge. Storage for backups in excess of the 250 GB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/mysql/). To understand the factors that influence backup storage usage and how to monitor and control backup storage cost, refer to the [backup documentation](concepts-backup.md). --## Scale resources --After you create your server, you can independently change the vCores, the hardware generation, the pricing tier (except to and from Basic), the amount of storage, and the backup retention period. You can't change the backup storage type after a server is created. The number of vCores can be scaled up or down. The backup retention period can be scaled up or down from 7 to 35 days. The storage size can only be increased. You can scale resources through either the portal or the Azure CLI; a sketch follows. 
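A minimal Azure CLI sketch of the scale operations described above, assuming hypothetical resource group and server names; storage is specified in MB and can only be increased:

```bash
# Scale compute, storage, auto-grow, and backup retention (placeholders).
az mysql server update -g myresourcegroup -n mydemoserver --sku-name GP_Gen5_8   # vCores/tier
az mysql server update -g myresourcegroup -n mydemoserver --storage-size 524288  # 512 GB, increase only
az mysql server update -g myresourcegroup -n mydemoserver --auto-grow Enabled    # avoid read-only state
az mysql server update -g myresourcegroup -n mydemoserver --backup-retention 14  # days (7-35)
```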
For an example of scaling by using Azure CLI, see [Monitor and scale an Azure Database for MySQL server by using Azure CLI](../scripts/sample-scale-server.md). --When you change the number of vCores, the hardware generation, or the pricing tier, a copy of the original server is created with the new compute allocation. After the new server is up and running, connections are switched over to the new server. While the system switches over to the new server, no new connections can be established, and all uncommitted transactions are rolled back. This downtime during scaling is usually around 60-120 seconds. It depends on database recovery time, which can delay the database coming back online if there's heavy transactional activity on the server at the time of the scaling operation. To avoid a longer restart time, we recommend performing scaling operations during periods of low transactional activity on the server. --Scaling storage and changing the backup retention period are true online operations. There is no downtime, and your application isn't affected. As IOPS scale with the size of the provisioned storage, you can increase the IOPS available to your server by scaling up storage. --## Pricing --For the most up-to-date pricing information, see the service [pricing page](https://azure.microsoft.com/pricing/details/mysql/). To see the cost for the configuration you want, the [Azure portal](https://portal.azure.com/#create/Microsoft.MySQLServer) shows the monthly cost on the **Pricing tier** tab based on the options you select. If you don't have an Azure subscription, you can use the Azure pricing calculator to get an estimated price. On the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) website, select **Add items**, expand the **Databases** category, and choose **Azure Database for MySQL** to customize the options. --## Next steps --- Learn how to [create a MySQL server in the portal](how-to-create-manage-server-portal.md).-- Learn about [service limits](concepts-limits.md).-- Learn how to [scale out with read replicas](how-to-read-replicas-portal.md). |
mysql | Concepts Query Performance Insight | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-query-performance-insight.md | - Title: Query Performance Insight - Azure Database for MySQL -description: This article describes the Query Performance Insight feature in Azure Database for MySQL ----- Previously updated : 06/20/2022---# Query Performance Insight in Azure Database for MySQL ----**Applies to:** Azure Database for MySQL 5.7, 8.0 --Query Performance Insight helps you quickly identify your longest running queries, how they change over time, and which waits are affecting them. --## Common scenarios --### Long running queries --- Identifying the longest running queries in the past X hours-- Identifying the top N queries that are waiting on resources- -### Wait statistics --- Understanding the nature of waits for a query-- Understanding trends for resource waits and where resource contention exists--## Prerequisites --For Query Performance Insight to function, data must exist in the [Query Store](concepts-query-store.md). --## Viewing performance insights --The [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal surfaces visualizations of key information from Query Store. --In the portal page of your Azure Database for MySQL server, select **Query Performance Insight** under the **Intelligent Performance** section of the menu bar. --### Long running queries --The **Long running queries** tab shows the top 5 Query IDs by average duration per execution, aggregated in 15-minute intervals. You can view more Query IDs by selecting from the **Number of Queries** drop-down. The chart colors may change for a specific Query ID when you do this. --> [!NOTE] -> Displaying the Query Text is no longer supported and will show as empty. The query text is removed to avoid unauthorized access to the query text or underlying schema, which can pose a security risk. --To view the query text, follow these steps: -1. Identify the query_id of the top queries from the Query Performance Insight blade in the Azure portal. -1. Sign in to your Azure Database for MySQL server from MySQL Workbench, the mysql.exe client, or your preferred query tool, and execute the following queries. - -```sql - SELECT * FROM mysql.query_store WHERE query_id = '<insert query id from Query Performance Insight blade in Azure portal>'; -- for queries in Query Store - SELECT * FROM mysql.query_store_wait_stats WHERE query_id = '<insert query id from Query Performance Insight blade in Azure portal>'; -- for wait statistics -``` --You can click and drag in the chart to narrow down to a specific time window. Alternatively, use the zoom in and out icons to view a smaller or larger time period, respectively. ---### Wait statistics --> [!NOTE] -> Wait statistics are meant for troubleshooting query performance issues. We recommend turning them on only for troubleshooting purposes. <br>If you receive the error message in the Azure portal "*The issue encountered for 'Microsoft.DBforMySQL'; cannot fulfill the request. If this issue continues or is unexpected, please contact support with this information.*" while viewing wait statistics, use a smaller time period. --Wait statistics provide a view of the wait events that occur during the execution of a specific query. Learn more about the wait event types in the [MySQL engine documentation](https://go.microsoft.com/fwlink/?linkid=2098206). 
--Select the **Wait Statistics** tab to view the corresponding visualizations on waits in the server. --Queries displayed in the wait statistics view are grouped by the queries that exhibit the largest waits during the specified time interval. --> [!NOTE] -> Displaying the Query Text is no longer supported and will show as empty. The query text is removed to avoid unauthorized access to the query text or underlying schema, which can pose a security risk. --To view the query text, follow these steps: -1. Identify the query_id of the top queries from the Query Performance Insight blade in the Azure portal. -1. Sign in to your Azure Database for MySQL server from MySQL Workbench, the mysql.exe client, or your preferred query tool, and execute the following queries. - -```sql - SELECT * FROM mysql.query_store WHERE query_id = '<insert query id from Query Performance Insight blade in Azure portal>'; -- for queries in Query Store - SELECT * FROM mysql.query_store_wait_stats WHERE query_id = '<insert query id from Query Performance Insight blade in Azure portal>'; -- for wait statistics
``` ---## Next steps --- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for MySQL. |
mysql | Concepts Query Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-query-store.md | - Title: Query Store - Azure Database for MySQL -description: Learn about the Query Store feature in Azure Database for MySQL to help you track performance over time. ----- Previously updated : 06/20/2022---# Monitor Azure Database for MySQL performance with Query Store ----**Applies to:** Azure Database for MySQL 5.7, 8.0 --The Query Store feature in Azure Database for MySQL provides a way to track query performance over time. Query Store simplifies performance troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Query Store automatically captures a history of queries and runtime statistics, and it retains them for your review. It separates data by time windows so that you can see database usage patterns. Data for all users, databases, and queries is stored in the **mysql** schema database in the Azure Database for MySQL instance. --## Common scenarios for using Query Store --Query store can be used in a number of scenarios, including the following: --- Detecting regressed queries-- Determining the number of times a query was executed in a given time window-- Comparing the average execution time of a query across time windows to see large deltas--## Enabling Query Store --Query Store is an opt-in feature, so it isn't active by default on a server. The query store is enabled or disabled globally for all the databases on a given server and cannot be turned on or off per database. --### Enable Query Store using the Azure portal --1. Sign in to the Azure portal and select your Azure Database for MySQL server. -1. Select **Server Parameters** in the **Settings** section of the menu. -1. Search for the query_store_capture_mode parameter. -1. Set the value to ALL and **Save**. --To enable wait statistics in your Query Store: --1. Search for the query_store_wait_sampling_capture_mode parameter. -1. Set the value to ALL and **Save**. --Allow up to 20 minutes for the first batch of data to persist in the mysql database. --## Information in Query Store --Query Store has two stores: --- A runtime statistics store for persisting the query execution statistics information.-- A wait statistics store for persisting wait statistics information.--To minimize space usage, the runtime execution statistics in the runtime statistics store are aggregated over a fixed, configurable time window. The information in these stores is visible by querying the query store views. --The following query returns information about queries in Query Store: --```sql -SELECT * FROM mysql.query_store; -``` --Or this query for wait statistics: --```sql -SELECT * FROM mysql.query_store_wait_stats; -``` --## Finding wait queries --> [!NOTE] -> Wait statistics should not be enabled during peak workload hours or be turned on indefinitely for sensitive workloads. <br>For workloads running with high CPU utilization or on servers configured with lower vCores, use caution when enabling wait statistics. It should not be turned on indefinitely. --Wait event types combine different wait events into buckets by similarity. Query Store provides the wait event type, specific wait event name, and the query in question. Being able to correlate this wait information with the query runtime statistics means you can gain a deeper understanding of what contributes to query performance characteristics. 
--Here are some examples of how you can gain more insights into your workload using the wait statistics in Query Store: --| **Observation** | **Action** | -||| -|High Lock waits | Check the query texts for the affected queries and identify the target entities. Look in Query Store for other queries that modify the same entity, execute frequently, and/or have a high duration. After identifying these queries, consider changing the application logic to improve concurrency, or use a less restrictive isolation level. | -|High Buffer IO waits | Find the queries with a high number of physical reads in Query Store. If they match the queries with high IO waits, consider introducing an index on the underlying entity to do seeks instead of scans, which minimizes the IO overhead of the queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations for this server that would optimize the queries. | -|High Memory waits | Find the top memory consuming queries in Query Store. These queries are probably delaying further progress of the affected queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations that would optimize these queries. | --## Configuration options --When Query Store is enabled, it saves data in 15-minute aggregation windows, up to 500 distinct queries per window. --The following options are available for configuring Query Store parameters. --| **Parameter** | **Description** | **Default** | **Range** | -||||| -| query_store_capture_mode | Turn the query store feature ON/OFF based on the value. Note: If performance_schema is OFF, turning on query_store_capture_mode will turn on performance_schema and a subset of performance schema instruments required for this feature. | ALL | NONE, ALL | -| query_store_capture_interval | The query store capture interval in minutes. Allows specifying the interval in which the query metrics are aggregated. | 15 | 5 - 60 | -| query_store_capture_utility_queries | Controls whether to capture all the utility queries that execute in the system. | NO | YES, NO | -| query_store_retention_period_in_days | Time window in days to retain the data in the query store. | 7 | 1 - 30 | --The following options apply specifically to wait statistics. --| **Parameter** | **Description** | **Default** | **Range** | -||||| -| query_store_wait_sampling_capture_mode | Turns wait statistics ON or OFF. | NONE | NONE, ALL | -| query_store_wait_sampling_frequency | Alters the frequency of wait sampling, in seconds. | 30 | 5-300 | --> [!NOTE] -> Currently **query_store_capture_mode** supersedes this configuration, meaning both **query_store_capture_mode** and **query_store_wait_sampling_capture_mode** must be set to ALL for wait statistics to work. If **query_store_capture_mode** is turned off, wait statistics are turned off as well, because wait statistics rely on performance_schema being enabled and on the query_text captured by Query Store. --Use the [Azure portal](how-to-server-parameters.md) or [Azure CLI](how-to-configure-server-parameters-using-cli.md) to get or set a different value for a parameter. --## Views and functions --View and manage Query Store using the following views and functions. Anyone in the [select privilege public role](how-to-create-users.md) can use these views to see the data in Query Store. These views are only available in the **mysql** database. 
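Before the view reference that follows, here is a hedged Azure CLI sketch of setting the Query Store parameters described above; resource names are placeholders:

```bash
# Enable Query Store and wait statistics sampling (placeholders throughout).
az mysql server configuration set -g myresourcegroup -s mydemoserver \
  --name query_store_capture_mode --value ALL
az mysql server configuration set -g myresourcegroup -s mydemoserver \
  --name query_store_wait_sampling_capture_mode --value ALL
```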
--Queries are normalized by looking at their structure after removing literals and constants. If two queries are identical except for literal values, they will have the same hash. --### mysql.query_store --This view returns all the data in Query Store. There is one row for each distinct database ID, user ID, and query ID. --| **Name** | **Data Type** | **IS_NULLABLE** | **Description** | -||||| -| `schema_name`| varchar(64) | NO | Name of the schema | -| `query_id`| bigint(20) | NO| Unique ID generated for the specific query; if the same query executes in a different schema, a new ID is generated | -| `timestamp_id` | timestamp| NO| Timestamp at which the query is executed. This is based on the query_store_capture_interval configuration| -| `query_digest_text`| longtext| NO| The normalized query text after removing all the literals| -| `query_sample_text` | longtext| NO| First appearance of the actual query with literals| -| `query_digest_truncated` | bit| YES| Whether the query text has been truncated. The value is Yes if the query is longer than 1 KB| -| `execution_count` | bigint(20)| NO| The number of times the query was executed for this timestamp ID (that is, during the configured interval period)| -| `warning_count` | bigint(20)| NO| Number of warnings this query generated during the interval| -| `error_count` | bigint(20)| NO| Number of errors this query generated during the interval| -| `sum_timer_wait` | double| YES| Total execution time of this query during the interval in milliseconds| -| `avg_timer_wait` | double| YES| Average execution time for this query during the interval in milliseconds| -| `min_timer_wait` | double| YES| Minimum execution time for this query in milliseconds| -| `max_timer_wait` | double| YES| Maximum execution time in milliseconds| -| `sum_lock_time` | bigint(20)| NO| Total amount of time spent on all the locks for this query execution during this time window| -| `sum_rows_affected` | bigint(20)| NO| Number of rows affected| -| `sum_rows_sent` | bigint(20)| NO| Number of rows sent to the client| -| `sum_rows_examined` | bigint(20)| NO| Number of rows examined| -| `sum_select_full_join` | bigint(20)| NO| Number of full joins| -| `sum_select_scan` | bigint(20)| NO| Number of select scans | -| `sum_sort_rows` | bigint(20)| NO| Number of rows sorted| -| `sum_no_index_used` | bigint(20)| NO| Number of times when the query did not use any indexes| -| `sum_no_good_index_used` | bigint(20)| NO| Number of times when the query execution engine did not use any good indexes| -| `sum_created_tmp_tables` | bigint(20)| NO| Total number of temp tables created| -| `sum_created_tmp_disk_tables` | bigint(20)| NO| Total number of temp tables created on disk (generates I/O)| -| `first_seen` | timestamp| NO| The first occurrence (UTC) of the query during the aggregation window| -| `last_seen` | timestamp| NO| The last occurrence (UTC) of the query during this aggregation window| --### mysql.query_store_wait_stats --This view returns wait event data in Query Store. There is one row for each distinct database ID, user ID, query ID, and event. 
--| **Name**| **Data Type** | **IS_NULLABLE** | **Description** | -||||| -| `interval_start` | timestamp | NO| Start of the interval (15-minute increment)| -| `interval_end` | timestamp | NO| End of the interval (15-minute increment)| -| `query_id` | bigint(20) | NO| Generated unique ID on the normalized query (from query store)| -| `query_digest_id` | varchar(32) | NO| The normalized query text after removing all the literals (from query store) | -| `query_digest_text` | longtext | NO| First appearance of the actual query with literals (from query store) | -| `event_type` | varchar(32) | NO| Category of the wait event | -| `event_name` | varchar(128) | NO| Name of the wait event | -| `count_star` | bigint(20) | NO| Number of wait events sampled during the interval for the query | -| `sum_timer_wait_ms` | double | NO| Total wait time (in milliseconds) of this query during the interval | --### Functions --| **Name**| **Description** | -||| -| `mysql.az_purge_querystore_data(TIMESTAMP)` | Purges all query store data before the given time stamp | -| `mysql.az_procedure_purge_querystore_event(TIMESTAMP)` | Purges all wait event data before the given time stamp | -| `mysql.az_procedure_purge_recommendation(TIMESTAMP)` | Purges recommendations whose expiration is before the given time stamp | --## Limitations and known issues --- If a MySQL server has the parameter `read_only` on, Query Store cannot capture data.-- Query Store functionality can be interrupted if it encounters long Unicode queries (\>= 6000 bytes).-- The retention period for wait statistics is 24 hours.-- Wait statistics uses sampling to capture a fraction of events. The frequency can be modified using the parameter `query_store_wait_sampling_frequency`.--## Next steps --- Learn more about [Query Performance Insight](concepts-query-performance-insight.md) |
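A hedged sketch of purging older Query Store data using the routines in the functions table above; the invocation style (CALL) and the retention cutoff are assumptions, and the server and user names are placeholders:

```bash
# Purge Query Store data and wait event data older than a given timestamp.
mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
  -e "CALL mysql.az_purge_querystore_data('2022-06-01 00:00:00');
      CALL mysql.az_procedure_purge_querystore_event('2022-06-01 00:00:00');"
```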
mysql | Concepts Read Replicas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-read-replicas.md | - Title: Read replicas - Azure Database for MySQL -description: 'Learn about read replicas in Azure Database for MySQL: choosing regions, creating replicas, connecting to replicas, monitoring replication, and stopping replication.' ------ Previously updated : 06/20/2022---# Read replicas in Azure Database for MySQL ----The read replica feature allows you to replicate data from an Azure Database for MySQL server to a read-only server. You can replicate from the source server to up to five replicas. Replicas are updated asynchronously using the MySQL engine's native binary log (binlog) file position-based replication technology. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html). --Replicas are new servers that you manage similar to regular Azure Database for MySQL servers. For each read replica, you're billed for the provisioned compute in vCores and storage in GB/ month. --To learn more about MySQL replication features and issues, see the [MySQL replication documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html). --> [!NOTE] -> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. -> --## When to use a read replica --The read replica feature helps to improve the performance and scale of read-intensive workloads. Read workloads can be isolated to the replicas, while write workloads can be directed to the source. --A common scenario is to have BI and analytical workloads use the read replica as the data source for reporting. --Because replicas are read-only, they don't directly reduce write-capacity burdens on the source. This feature isn't targeted at write-intensive workloads. --The read replica feature uses MySQL asynchronous replication. The feature isn't meant for synchronous replication scenarios. There will be a measurable delay between the source and the replica. The data on the replica eventually becomes consistent with the data on the source. Use this feature for workloads that can accommodate this delay. --## Cross-region replication --You can create a read replica in a different region from your source server. Cross-region replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users. --You can have a source server in any [Azure Database for MySQL region](https://azure.microsoft.com/global-infrastructure/services/?products=mysql). A source server can have a replica in its [paired region](../../availability-zones/cross-region-replication-azure.md#azure-paired-regions) or the universal replica regions. The following picture shows which replica regions are available depending on your source region. --### Universal replica regions --You can create a read replica in any of the following regions, regardless of where your source server is located. 
The supported universal replica regions include: --| Region | Replica availability | -| | | -| Australia East | :heavy_check_mark: | -| Australia South East | :heavy_check_mark: | -| Brazil South | :heavy_check_mark: | -| Canada Central | :heavy_check_mark: | -| Canada East | :heavy_check_mark: | -| Central US | :heavy_check_mark: | -| East US | :heavy_check_mark: | -| East US 2 | :heavy_check_mark: | -| East Asia | :heavy_check_mark: | -| Japan East | :heavy_check_mark: | -| Japan West | :heavy_check_mark: | -| Korea Central | :heavy_check_mark: | -| Korea South | :heavy_check_mark: | -| North Europe | :heavy_check_mark: | -| North Central US | :heavy_check_mark: | -| South Central US | :heavy_check_mark: | -| Southeast Asia | :heavy_check_mark: | -| Switzerland North | :heavy_check_mark: | -| UK South | :heavy_check_mark: | -| UK West | :heavy_check_mark: | -| West Central US | :heavy_check_mark: | -| West US | :heavy_check_mark: | -| West US 2 | :heavy_check_mark: | -| West Europe | :heavy_check_mark: | -| Central India* | :heavy_check_mark: | -| France Central* | :heavy_check_mark: | -| UAE North* | :heavy_check_mark: | -| South Africa North* | :heavy_check_mark: | --> [!NOTE] -> *Regions where Azure Database for MySQL has General purpose storage v2 in Public Preview <br /> -> *For these Azure regions, you will have an option to create server in both General purpose storage v1 and v2. For the servers created with General purpose storage v2 in public preview, you are limited to create replica server only in the Azure regions which support General purpose storage v2. --### Paired regions --In addition to the universal replica regions, you can create a read replica in the Azure paired region of your source server. If you don't know your region's pair, you can learn more from the [Azure Paired Regions article](../../availability-zones/cross-region-replication-azure.md). --If you're using cross-region replicas for disaster recovery planning, we recommend you create the replica in the paired region instead of one of the other regions. Paired regions avoid simultaneous updates and prioritize physical isolation and data residency. --However, there are limitations to consider: --* Regional availability: Azure Database for MySQL is available in France Central, UAE North, and Germany Central. However, their paired regions aren't available. --* Uni-directional pairs: Some Azure regions are paired in one direction only. These regions include West India, Brazil South, and US Gov Virginia. - This means that a source server in West India can create a replica in South India. However, a source server in South India can't create a replica in West India. This is because West India's secondary region is South India, but South India's secondary region isn't West India. --## Create a replica --> [!IMPORTANT] -> * The read replica feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers. -> * If your source server has no existing replica servers, source server might need a restart to prepare itself for replication depending upon the storage used (v1/v2). Please consider server restart and perform this operation during off-peak hours. See [Source Server restart](./concepts-read-replicas.md#source-server-restart) for more details. ---When you start the create replica workflow, a blank Azure Database for MySQL server is created. 
The new server is filled with the data that was on the source server. The creation time depends on the amount of data on the source and the time since the last weekly full backup. The time can range from a few minutes to several hours. The replica server is always created in the same resource group and same subscription as the source server. If you want to create a replica server in a different resource group or a different subscription, you can [move the replica server](../../azure-resource-manager/management/move-resource-group-and-subscription.md) after creation. --Every replica is enabled for storage [auto-grow](concepts-pricing-tiers.md#storage-auto-grow). The auto-grow feature allows the replica to keep up with the data replicated to it, and prevents an interruption in replication caused by out-of-storage errors. --Learn how to [create a read replica in the Azure portal](how-to-read-replicas-portal.md). --## Connect to a replica --At creation, a replica inherits the firewall rules of the source server. Afterwards, these rules are independent of the source server. --The replica inherits the admin account from the source server. All user accounts on the source server are replicated to the read replicas. You can only connect to a read replica by using the user accounts that are available on the source server. --You can connect to the replica by using its hostname and a valid user account, as you would on a regular Azure Database for MySQL server. For a server named **myreplica** with the admin username **myadmin**, you can connect to the replica by using the mysql CLI: --```bash -mysql -h myreplica.mysql.database.azure.com -u myadmin@myreplica -p -``` --At the prompt, enter the password for the user account. --## Monitor replication --Azure Database for MySQL provides the **Replication lag in seconds** metric in Azure Monitor. This metric is available for replicas only. This metric is calculated using the `seconds_behind_master` metric available in MySQL's `SHOW SLAVE STATUS` command. Set an alert to inform you when the replication lag reaches a value that isn't acceptable for your workload. --If you see increased replication lag, refer to [troubleshooting replication latency](how-to-troubleshoot-replication-latency.md) to troubleshoot and understand possible causes. --## Stop replication --You can stop replication between a source and a replica. After replication is stopped between a source server and a read replica, the replica becomes a standalone server. The data in the standalone server is the data that was available on the replica at the time the stop replication command was started. The standalone server doesn't catch up with the source server. --When you choose to stop replication to a replica, it loses all links to its previous source and other replicas. There's no automated failover between a source and its replica. --> [!IMPORTANT] -> The standalone server can't be made into a replica again. -> Before you stop replication on a read replica, ensure the replica has all the data that you require. --Learn how to [stop replication to a replica](how-to-read-replicas-portal.md). --## Failover --There's no automated failover between source and replica servers. --Since replication is asynchronous, there's lag between the source and the replica. The amount of lag can be influenced by many factors, such as how heavy the workload running on the source server is and the latency between datacenters. In most cases, replica lag ranges from a few seconds to a couple of minutes. 
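A minimal Azure CLI sketch of the replica lifecycle discussed above: create a replica, and later stop replication as failover step 1 to make it a standalone, writable server. Names are placeholders, and stopping replication is irreversible:

```bash
# Create a read replica of an existing source server (placeholders).
az mysql server replica create \
  --name mydemoserver-replica \
  --resource-group myresourcegroup \
  --source-server mydemoserver

# Failover step 1: stop replication; the replica becomes a standalone server
# that accepts writes and can't be made into a replica again.
az mysql server replica stop \
  --name mydemoserver-replica \
  --resource-group myresourcegroup
```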
You can track your actual replication lag using the metric *Replica Lag*, which is available for each replica. This metric shows the time since the last replayed transaction. We recommend that you identify what your average lag is by observing your replica lag over a period of time. You can set an alert on replica lag, so that if it goes outside your expected range, you can take action. --> [!Tip] -> If you fail over to the replica, the lag at the time you delink the replica from the source indicates how much data is lost. --After you've decided you want to fail over to a replica: --1. Stop replication to the replica<br/> - This step is necessary so the replica server can accept writes. As part of this process, the replica server will be delinked from the source. After you initiate stop replication, the backend process typically takes about 2 minutes to complete. See the [stop replication](#stop-replication) section of this article to understand the implications of this action. --2. Point your application to the (former) replica<br/> - Each server has a unique connection string. Update your application to point to the (former) replica instead of the source. --After your application is successfully processing reads and writes, you've completed the failover. The amount of downtime your application experiences depends on when you detect an issue and complete steps 1 and 2 listed previously. --## Global transaction identifier (GTID) --Global transaction identifier (GTID) is a unique identifier created with each committed transaction on a source server and is OFF by default in Azure Database for MySQL. GTID is supported on versions 5.7 and 8.0, and only on servers that support storage up to 16 TB (general purpose storage v2). To learn more about GTID and how it's used in replication, refer to MySQL's [replication with GTID](https://dev.mysql.com/doc/refman/5.7/en/replication-gtids.html) documentation. --MySQL supports two types of transactions: GTID transactions (identified with a GTID) and anonymous transactions (which don't have a GTID allocated). --The following server parameters are available for configuring GTID: --|**Server parameter**|**Description**|**Default Value**|**Values**| -|--|--|--|--| -|`gtid_mode`|Indicates if GTIDs are used to identify transactions. Changes between modes can only be done one step at a time in ascending order (for example, `OFF` -> `OFF_PERMISSIVE` -> `ON_PERMISSIVE` -> `ON`)|`OFF`|`OFF`: Both new and replicated transactions must be anonymous <br> `OFF_PERMISSIVE`: New transactions are anonymous. Replicated transactions can either be anonymous or GTID transactions. <br> `ON_PERMISSIVE`: New transactions are GTID transactions. Replicated transactions can either be anonymous or GTID transactions. <br> `ON`: Both new and replicated transactions must be GTID transactions.| -|`enforce_gtid_consistency`|Enforces GTID consistency by allowing execution of only those statements that can be logged in a transactionally safe manner. This value must be set to `ON` before enabling GTID replication. |`OFF`|`OFF`: All transactions are allowed to violate GTID consistency. <br> `ON`: No transaction is allowed to violate GTID consistency. <br> `WARN`: All transactions are allowed to violate GTID consistency, but a warning is generated. | --> [!NOTE] -> * After GTID is enabled, you cannot turn it back off. If you need to turn GTID OFF, please contact support. -> -> * GTID mode can be changed only one step at a time, in ascending order of modes. 
For example, if gtid_mode is currently set to OFF_PERMISSIVE, it is possible to change to ON_PERMISSIVE but not to ON. -> -> * To keep replication consistent, you can't update `gtid_mode` on a master/replica server. -> -> * We recommend setting `enforce_gtid_consistency` to ON before you set gtid_mode=ON ---To enable GTID and configure the consistency behavior, update the `gtid_mode` and `enforce_gtid_consistency` server parameters using the [Azure portal](how-to-server-parameters.md), [Azure CLI](how-to-configure-server-parameters-using-cli.md), or [PowerShell](how-to-configure-server-parameters-using-powershell.md). --If GTID is enabled on a source server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID replication. To make sure that replication is consistent, `gtid_mode` can't be changed once the master or replica server(s) are created with GTID enabled. --## Considerations and limitations --### Pricing tiers --Read replicas are currently only available in the General Purpose and Memory Optimized pricing tiers. --> [!NOTE] -> The cost of running the replica server is based on the region where the replica server is running. --### Source server restart --On servers that have general purpose storage v1, the `log_bin` parameter is OFF by default. The value is turned ON when you create the first read replica. If a source server has no existing read replicas, the source server first restarts to prepare itself for replication. Plan for the server restart and perform this operation during off-peak hours. --On source servers that have general purpose storage v2, the `log_bin` parameter is ON by default, and no restart is required when you add a read replica. --### New replicas --A read replica is created as a new Azure Database for MySQL server. An existing server can't be made into a replica. You can't create a replica of another read replica. --### Replica configuration --A replica is created by using the same server configuration as the source. After a replica is created, several settings can be changed independently from the source server: compute generation, vCores, storage, and backup retention period. The pricing tier can also be changed independently, except to or from the Basic tier. --> [!IMPORTANT] -> Before a source server configuration is updated to new values, update the replica configuration to equal or greater values. This action ensures the replica can keep up with any changes made to the source. --Firewall rules and parameter settings are inherited from the source server to the replica when the replica is created. Afterwards, the replica's rules are independent. --### Stopped replicas --If you stop replication between a source server and a read replica, the stopped replica becomes a standalone server that accepts both reads and writes. The standalone server can't be made into a replica again. --### Deleted source and standalone servers --When a source server is deleted, replication is stopped to all read replicas. These replicas automatically become standalone servers and can accept both reads and writes. The source server itself is deleted. --### User accounts --Users on the source server are replicated to the read replicas. You can only connect to a read replica using the user accounts available on the source server. --### Server parameters --To prevent data from becoming out of sync and to avoid potential data loss or corruption, some server parameters are locked from being updated when using read replicas. 
--The following server parameters are locked on both the source and replica servers: --* [`innodb_file_per_table`](https://dev.mysql.com/doc/refman/8.0/en/innodb-file-per-table-tablespaces.html) -* [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) --The [`event_scheduler`](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_event_scheduler) parameter is locked on the replica servers. --To update one of these parameters on the source server, delete the replica servers, update the parameter value on the source, and recreate the replicas. --### GTID --GTID is supported on: --* MySQL versions 5.7 and 8.0. -* Servers that support storage up to 16 TB. Refer to the [pricing tier](concepts-pricing-tiers.md#storage) article for the full list of regions that support 16 TB storage. --GTID is OFF by default. After GTID is enabled, you can't turn it back off. If you need to turn GTID OFF, contact support. --If GTID is enabled on a source server, newly created replicas also have GTID enabled and use GTID replication. To keep replication consistent, you can't update `gtid_mode` on the source or replica server(s). --### Other --* Creating a replica of a replica isn't supported. -* In-memory tables may cause replicas to become out of sync. This is a limitation of the MySQL replication technology. For more information, see the [MySQL reference documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features-memory.html). -* Ensure the source server tables have primary keys. Lack of primary keys may result in replication latency between the source and replicas. -* Review the full list of MySQL replication limitations in the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html). --## Next steps --* Learn how to [create and manage read replicas using the Azure portal](how-to-read-replicas-portal.md) -* Learn how to [create and manage read replicas using the Azure CLI and REST API](how-to-read-replicas-cli.md) |
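To make the one-step-at-a-time GTID rule above concrete, here's a minimal Azure CLI sketch of walking `gtid_mode` up to `ON`. The server and resource group names (`mydemoserver`, `myresourcegroup`) are placeholders for illustration:

```azurecli
# enforce_gtid_consistency must be ON before gtid_mode can reach ON.
az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver \
    --name enforce_gtid_consistency --value ON

# Step gtid_mode up one mode at a time; jumping straight from OFF to ON is rejected.
az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver \
    --name gtid_mode --value OFF_PERMISSIVE
az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver \
    --name gtid_mode --value ON_PERMISSIVE
az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver \
    --name gtid_mode --value ON
```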
mysql | Concepts Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-security.md | - Title: Security - Azure Database for MySQL -description: An overview of the security features in Azure Database for MySQL. ----- Previously updated : 06/20/2022---# Security in Azure Database for MySQL ----There are multiple layers of security that are available to protect the data on your Azure Database for MySQL server. This article outlines those security options. --## Information protection and encryption --### In-transit -Azure Database for MySQL secures your data by encrypting data in-transit with Transport Layer Security. Encryption (SSL/TLS) is enforced by default. --### At-rest -The Azure Database for MySQL service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, is encrypted on disk, including the temporary files created while running queries. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. Storage encryption is always on and can't be disabled. ---## Network security -Connections to an Azure Database for MySQL server are first routed through a regional gateway. The gateway has a publicly accessible IP, while the server IP addresses are protected. For more information about the gateway, visit the [connectivity architecture article](concepts-connectivity-architecture.md). --A newly created Azure Database for MySQL server has a firewall that blocks all external connections. Blocked connections reach the gateway, but they aren't allowed to connect to the server. --### IP firewall rules -IP firewall rules grant access to servers based on the originating IP address of each request. See the [firewall rules overview](concepts-firewall-rules.md) for more information. --### Virtual network firewall rules -Virtual network service endpoints extend your virtual network connectivity over the Azure backbone. Using virtual network rules, you can enable your Azure Database for MySQL server to allow connections from selected subnets in a virtual network. For more information, see the [virtual network service endpoint overview](concepts-data-access-and-security-vnet.md). --### Private IP -Private Link allows you to connect to your Azure Database for MySQL in Azure via a private endpoint. Azure Private Link essentially brings Azure services inside your private Virtual Network (VNet). The PaaS resources can be accessed using the private IP address just like any other resource in the VNet. For more information, see the [private link overview](concepts-data-access-security-private-link.md). --## Access management --While creating the Azure Database for MySQL server, you provide credentials for an administrator user. You can use this administrator account to create additional MySQL users. ---## Threat protection --You can opt in to [Microsoft Defender for open-source relational databases](/azure/security-center/defender-for-databases-introduction), which detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit servers. --[Audit logging](concepts-audit-logs.md) is available to track activity in your databases. ---## Next steps -- Enable firewall rules for [IPs](concepts-firewall-rules.md) or [virtual networks](concepts-data-access-and-security-vnet.md) |
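As a companion to the firewall discussion above, here's a minimal Azure CLI sketch that opens access for a single client IP address. The server, resource group, and rule names are placeholders:

```azurecli
# Allow one client IP address through the server-level firewall.
az mysql server firewall-rule create \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name AllowMyClientIP \
    --start-ip-address 203.0.113.10 \
    --end-ip-address 203.0.113.10
```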
mysql | Concepts Server Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-server-logs.md | - Title: Slow query logs - Azure Database for MySQL -description: Describes the slow query logs available in Azure Database for MySQL, and the available parameters for enabling different logging levels. ----- Previously updated : 06/20/2022---# Slow query logs in Azure Database for MySQL ----In Azure Database for MySQL, the slow query log is available to users. Access to the transaction log is not supported. The slow query log can be used to identify performance bottlenecks for troubleshooting. --For more information about the MySQL slow query log, see the MySQL reference manual's [slow query log section](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html). --When [Query Store](concepts-query-store.md) is enabled on your server, you may see queries like "`CALL mysql.az_procedure_collect_wait_stats (900, 30);`" logged in your slow query logs. This behavior is expected, as the Query Store feature collects statistics about your queries. --## Configure slow query logging -By default, the slow query log is disabled. To enable it, set `slow_query_log` to ON. This can be done using the Azure portal or Azure CLI (a CLI sketch appears later in this article). --Other parameters you can adjust include: --- **long_query_time**: if a query takes longer than `long_query_time` (in seconds), that query is logged. The default is 10 seconds.-- **log_slow_admin_statements**: if ON, includes administrative statements such as ALTER TABLE and ANALYZE TABLE in the statements written to the slow_query_log.-- **log_queries_not_using_indexes**: determines whether queries that don't use indexes are logged to the slow_query_log.-- **log_throttle_queries_not_using_indexes**: This parameter limits the number of non-index queries that can be written to the slow query log. This parameter takes effect when log_queries_not_using_indexes is set to ON.-- **log_output**: if "File", allows the slow query log to be written to both the local server storage and to Azure Monitor Diagnostic Logs. If "None", the slow query log is only written to Azure Monitor Diagnostic Logs. --> [!IMPORTANT] -> If your tables are not indexed, setting the `log_queries_not_using_indexes` and `log_throttle_queries_not_using_indexes` parameters to ON may affect MySQL performance, since all queries running against these non-indexed tables will be written to the slow query log.<br><br> -> If you plan on logging slow queries for an extended period of time, it is recommended to set `log_output` to "None". If set to "File", these logs are written to the local server storage and can affect MySQL performance. --See the MySQL [slow query log documentation](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html) for full descriptions of the slow query log parameters. --## Access slow query logs -There are two options for accessing slow query logs in Azure Database for MySQL: local server storage or Azure Monitor Diagnostic Logs. This is set using the `log_output` parameter. --For local server storage, you can list and download slow query logs using the Azure portal or the Azure CLI. In the Azure portal, navigate to your server. Under the **Monitoring** heading, select the **Server Logs** page. For more information on Azure CLI, see [Configure and access slow query logs using Azure CLI](how-to-configure-server-logs-in-cli.md). 
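As referenced in the configuration section earlier, here's a minimal Azure CLI sketch (placeholder server and resource group names) that enables the slow query log and lowers the logging threshold:

```azurecli
# Turn on the slow query log.
az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver \
    --name slow_query_log --value ON

# Log any query that runs longer than 5 seconds (the default is 10).
az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver \
    --name long_query_time --value 5
```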
--Azure Monitor Diagnostic Logs allows you to pipe slow query logs to Azure Monitor Logs (Log Analytics), Azure Storage, or Event Hubs. See [below](concepts-server-logs.md#diagnostic-logs) for more information. --## Local server storage log retention -When logging to the server's local storage, logs are available for up to seven days from their creation. If the total size of the available logs exceeds 7 GB, then the oldest files are deleted until space is available. The 7 GB storage limit for the server logs is available free of cost and cannot be extended. --Logs are rotated every 24 hours or 7 GB, whichever comes first. --> [!NOTE] -> The above log retention does not apply to logs that are piped using Azure Monitor Diagnostic Logs. You can change the retention period for the data sinks being emitted to (ex. Azure Storage). --## Diagnostic logs -Azure Database for MySQL is integrated with Azure Monitor Diagnostic Logs. Once you have enabled slow query logs on your MySQL server, you can choose to have them emitted to Azure Monitor logs, Event Hubs, or Azure Storage. To learn more about how to enable diagnostic logs, see the how to section of the [diagnostic logs documentation](../../azure-monitor/essentials/platform-logs-overview.md). -->[!NOTE] ->Premium Storage accounts aren't supported if you're sending the logs to Azure Storage via diagnostic settings. --The following table describes what's in each log. Depending on the output method, the fields included and the order in which they appear may vary. --| **Property** | **Description** | -||| -| `TenantId` | Your tenant ID | -| `SourceSystem` | `Azure` | -| `TimeGenerated` [UTC] | Time stamp when the log was recorded in UTC | -| `Type` | Type of the log. Always `AzureDiagnostics` | -| `SubscriptionId` | GUID for the subscription that the server belongs to | -| `ResourceGroup` | Name of the resource group the server belongs to | -| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` | -| `ResourceType` | `Servers` | -| `ResourceId` | Resource URI | -| `Resource` | Name of the server | -| `Category` | `MySqlSlowLogs` | -| `OperationName` | `LogEvent` | -| `Logical_server_name_s` | Name of the server | -| `start_time_t` [UTC] | Time the query began | -| `query_time_s` | Total time in seconds the query took to execute | -| `lock_time_s` | Total time in seconds the query was locked | -| `user_host_s` | Username | -| `rows_sent_d` | Number of rows sent | -| `rows_examined_s` | Number of rows examined | -| `last_insert_id_s` | [last_insert_id](https://dev.mysql.com/doc/refman/8.0/en/information-functions.html#function_last-insert-id) | -| `insert_id_s` | Insert ID | -| `sql_text_s` | Full query | -| `server_id_s` | The server's ID | -| `thread_id_s` | Thread ID | -| `\_ResourceId` | Resource URI | --> [!NOTE] -> For `sql_text`, the log entry is truncated if it exceeds 2048 characters. --## Analyze logs in Azure Monitor Logs --Once your slow query logs are piped to Azure Monitor Logs through Diagnostic Logs, you can perform further analysis of your slow queries. Below are some sample queries to help you get started. Make sure to update the queries below with your server name. 
--- Queries longer than 10 seconds on a particular server-- ```Kusto - AzureDiagnostics - | where LogicalServerName_s == '<your server name>' - | where Category == 'MySqlSlowLogs' - | project TimeGenerated, LogicalServerName_s, start_time_t , query_time_d, sql_text_s - | where query_time_d > 10 - ``` --- List top 5 longest queries on a particular server-- ```Kusto - AzureDiagnostics - | where LogicalServerName_s == '<your server name>' - | where Category == 'MySqlSlowLogs' - | project TimeGenerated, LogicalServerName_s, start_time_t , query_time_d, sql_text_s - | order by query_time_d desc - | take 5 - ``` --- Summarize slow queries by minimum, maximum, average, and standard deviation query time on a particular server-- ```Kusto - AzureDiagnostics - | where LogicalServerName_s == '<your server name>' - | where Category == 'MySqlSlowLogs' - | project TimeGenerated, LogicalServerName_s, start_time_t , query_time_d, sql_text_s - | summarize count(), min(query_time_d), max(query_time_d), avg(query_time_d), stdev(query_time_d), percentile(query_time_d, 95) by LogicalServerName_s - ``` --- Graph the slow query distribution on a particular server-- ```Kusto - AzureDiagnostics - | where LogicalServerName_s == '<your server name>' - | where Category == 'MySqlSlowLogs' - | project TimeGenerated, LogicalServerName_s, start_time_t , query_time_d, sql_text_s - | summarize count() by LogicalServerName_s, bin(TimeGenerated, 5m) - | render timechart - ``` --- Display queries longer than 10 seconds across all MySQL servers with Diagnostic Logs enabled-- ```Kusto - AzureDiagnostics - | where Category == 'MySqlSlowLogs' - | project TimeGenerated, LogicalServerName_s, start_time_t , query_time_d, sql_text_s - | where query_time_d > 10 - ``` - -## Next Steps -- [How to configure slow query logs from the Azure portal](how-to-configure-server-logs-in-portal.md)-- [How to configure slow query logs from the Azure CLI](how-to-configure-server-logs-in-cli.md) |
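To complement the diagnostic logs discussion above, here's a hypothetical Azure CLI sketch that routes the `MySqlSlowLogs` category to a Log Analytics workspace; all names and resource IDs are placeholders:

```azurecli
# Send the MySqlSlowLogs category to a Log Analytics workspace.
az monitor diagnostic-settings create \
    --name slowlogs-to-workspace \
    --resource "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.DBforMySQL/servers/mydemoserver" \
    --workspace "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/myworkspace" \
    --logs '[{"category":"MySqlSlowLogs","enabled":true}]'
```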
mysql | Concepts Server Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-server-parameters.md | - Title: Server parameters - Azure Database for MySQL -description: This topic provides guidelines for configuring server parameters in Azure Database for MySQL. ----- Previously updated : 04/26/2023---# Server parameters in Azure Database for MySQL ----This article provides considerations and guidelines for configuring server parameters in Azure Database for MySQL. --## What are server parameters? --The MySQL engine provides many different server variables and parameters that you use to configure and tune engine behavior. Some parameters can be set dynamically during runtime, while others are static, and require a server restart in order to apply. --Azure Database for MySQL exposes the ability to change the value of various MySQL server parameters by using the [Azure portal](./how-to-server-parameters.md), the [Azure CLI](./how-to-configure-server-parameters-using-cli.md), and [PowerShell](./how-to-configure-server-parameters-using-powershell.md) to match your workload's needs. --## Configurable server parameters --The list of supported server parameters is constantly growing. In the Azure portal, use the server parameters tab to view the full list and configure server parameter values. --Refer to the following sections to learn more about the limits of several commonly updated server parameters. The limits are determined by the pricing tier and vCores of the server. --### Thread pools --MySQL traditionally assigns a thread for every client connection. As the number of concurrent users grows, there's a corresponding drop in performance. Having many active threads can significantly affect performance, due to increased context switching, thread contention, and bad locality for CPU caches. --*Thread pools*, a server-side feature and distinct from connection pooling, maximize performance by introducing a dynamic pool of worker threads. You use this feature to limit the number of active threads running on the server and minimize thread churn. This helps ensure that a burst of connections doesn't cause the server to run out of resources or memory. Thread pools are most efficient for short queries and CPU intensive workloads, such as OLTP workloads. --For more information, see [Introducing thread pools in Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/introducing-thread-pools-in-azure-database-for-mysql-service/ba-p/1504173). --> [!NOTE] -> Thread pools aren't supported for MySQL 5.6. --### Configure the thread pool --To enable a thread pool, update the `thread_handling` server parameter to `pool-of-threads`. By default, this parameter is set to `one-thread-per-connection`, which means MySQL creates a new thread for each new connection. This is a static parameter, and requires a server restart to apply. A CLI sketch appears after the max_connections section later in this article. --You can also configure the maximum and minimum number of threads in the pool by setting the following server parameters: --- `thread_pool_max_threads`: This value limits the number of threads in the pool.-- `thread_pool_min_threads`: This value sets the number of threads that are reserved, even after connections are closed.--To improve the performance of short queries on the thread pool, you can enable *batch execution*. Instead of returning to the thread pool immediately after running a query, threads stay active for a short time, waiting for the next query on the same connection. 
The thread then runs the query rapidly and, when this is complete, the thread waits for the next one. This process continues until the overall time spent exceeds a threshold. --You determine the behavior of batch execution by using the following server parameters: --- `thread_pool_batch_wait_timeout`: This value specifies the time a thread waits for another query to process.-- `thread_pool_batch_max_time`: This value determines the maximum time a thread will repeat the cycle of query execution and waiting for the next query.--> [!IMPORTANT] -> Don't turn on the thread pool in production until you've tested it. --### log_bin_trust_function_creators --In Azure Database for MySQL, binary logs are always enabled (the `log_bin` parameter is set to `ON`). If you want to use triggers, you get an error similar to the following: *You do not have the SUPER privilege and binary logging is enabled (you might want to use the less safe `log_bin_trust_function_creators` variable)*. --The binary logging format is always **ROW**, and all connections to the server *always* use row-based binary logging. Row-based binary logging helps maintain security, and binary logging can't break, so you can safely set [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) to `TRUE`. --### innodb_buffer_pool_size --Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size) to learn more about this parameter. --#### Servers on [general purpose storage v1 (supporting up to 4 TB)](concepts-pricing-tiers.md#general-purpose-storage-v1-supports-up-to-4-tb) --|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| -|||||| -|Basic|1|872415232|134217728|872415232| -|Basic|2|2684354560|134217728|2684354560| -|General Purpose|2|3758096384|134217728|3758096384| -|General Purpose|4|8053063680|134217728|8053063680| -|General Purpose|8|16106127360|134217728|16106127360| -|General Purpose|16|32749125632|134217728|32749125632| -|General Purpose|32|66035122176|134217728|66035122176| -|General Purpose|64|132070244352|134217728|132070244352| -|Memory Optimized|2|7516192768|134217728|7516192768| -|Memory Optimized|4|16106127360|134217728|16106127360| -|Memory Optimized|8|32212254720|134217728|32212254720| -|Memory Optimized|16|65498251264|134217728|65498251264| -|Memory Optimized|32|132070244352|134217728|132070244352| --#### Servers on [general purpose storage v2 (supporting up to 16 TB)](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage) --|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| -|||||| -|Basic|1|872415232|134217728|872415232| -|Basic|2|2684354560|134217728|2684354560| -|General Purpose|2|7516192768|134217728|7516192768| -|General Purpose|4|16106127360|134217728|16106127360| -|General Purpose|8|32212254720|134217728|32212254720| -|General Purpose|16|65498251264|134217728|65498251264| -|General Purpose|32|132070244352|134217728|132070244352| -|General Purpose|64|264140488704|134217728|264140488704| -|Memory Optimized|2|15032385536|134217728|15032385536| -|Memory Optimized|4|32212254720|134217728|32212254720| -|Memory Optimized|8|64424509440|134217728|64424509440| -|Memory Optimized|16|130996502528|134217728|130996502528| -|Memory Optimized|32|264140488704|134217728|264140488704| --### innodb_file_per_table --MySQL stores the `InnoDB` table 
in different tablespaces, based on the configuration you provide during the table creation. The [system tablespace](https://dev.mysql.com/doc/refman/5.7/en/innodb-system-tablespace.html) is the storage area for the `InnoDB` data dictionary. A [file-per-table tablespace](https://dev.mysql.com/doc/refman/5.7/en/innodb-file-per-table-tablespaces.html) contains data and indexes for a single `InnoDB` table, and is stored in the file system in its own data file. --You control this behavior by using the `innodb_file_per_table` server parameter. Setting `innodb_file_per_table` to `OFF` causes `InnoDB` to create tables in the system tablespace. Otherwise, `InnoDB` creates tables in file-per-table tablespaces. --> [!NOTE] -> You can only update `innodb_file_per_table` in the general purpose and memory optimized pricing tiers on [general purpose storage v2](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage) and [general purpose storage v1](concepts-pricing-tiers.md#general-purpose-storage-v1-supports-up-to-4-tb). --Azure Database for MySQL supports at most 4 TB in a single data file on [general purpose storage v2](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage). If your database size is larger than 4 TB, you should create the table in the [innodb_file_per_table](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_file_per_table) tablespace. If a single table is larger than 4 TB, you should partition the table. --### join_buffer_size --Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_join_buffer_size) to learn more about this parameter. --|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| -|||||| -|Basic|1|Not configurable in Basic tier|N/A|N/A| -|Basic|2|Not configurable in Basic tier|N/A|N/A| -|General Purpose|2|262144|128|268435455| -|General Purpose|4|262144|128|536870912| -|General Purpose|8|262144|128|1073741824| -|General Purpose|16|262144|128|2147483648| -|General Purpose|32|262144|128|4294967295| -|General Purpose|64|262144|128|4294967295| -|Memory Optimized|2|262144|128|536870912| -|Memory Optimized|4|262144|128|1073741824| -|Memory Optimized|8|262144|128|2147483648| -|Memory Optimized|16|262144|128|4294967295| -|Memory Optimized|32|262144|128|4294967295| --### max_connections --|**Pricing tier**|**vCore(s)**|**Default value**|**Min value**|**Max value**| -|||||| -|Basic|1|50|10|50| -|Basic|2|100|10|100| -|General Purpose|2|300|10|600| -|General Purpose|4|625|10|1250| -|General Purpose|8|1250|10|2500| -|General Purpose|16|2500|10|5000| -|General Purpose|32|5000|10|10000| -|General Purpose|64|10000|10|20000| -|Memory Optimized|2|625|10|1250| -|Memory Optimized|4|1250|10|2500| -|Memory Optimized|8|2500|10|5000| -|Memory Optimized|16|5000|10|10000| -|Memory Optimized|32|10000|10|20000| --When the number of connections exceeds the limit, you might receive an error. --> [!TIP] -> To manage connections efficiently, it's a good idea to use a connection pooler, like ProxySQL. To learn about setting up ProxySQL, see the blog post [Load balance read replicas using ProxySQL in Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/load-balance-read-replicas-using-proxysql-in-azure-database-for/ba-p/880042). Note that ProxySQL is an open source community tool. It's supported by Microsoft on a best-effort basis. 
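As referenced in the thread pool section earlier, here's a minimal Azure CLI sketch of adjusting these parameters. The server and resource group names are placeholders, and the values shown are illustrative, not recommendations:

```azurecli
# Enable the thread pool (static parameter; restart the server afterward to apply it).
az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver \
    --name thread_handling --value pool-of-threads
az mysql server restart --resource-group myresourcegroup --name mydemoserver

# Raise max_connections within the limits for the server's tier and vCores.
az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver \
    --name max_connections --value 600
```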
--### max_heap_table_size --Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_max_heap_table_size) to learn more about this parameter. --|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| -|||||| -|Basic|1|Not configurable in Basic tier|N/A|N/A| -|Basic|2|Not configurable in Basic tier|N/A|N/A| -|General Purpose|2|16777216|16384|268435455| -|General Purpose|4|16777216|16384|536870912| -|General Purpose|8|16777216|16384|1073741824| -|General Purpose|16|16777216|16384|2147483648| -|General Purpose|32|16777216|16384|4294967295| -|General Purpose|64|16777216|16384|4294967295| -|Memory Optimized|2|16777216|16384|536870912| -|Memory Optimized|4|16777216|16384|1073741824| -|Memory Optimized|8|16777216|16384|2147483648| -|Memory Optimized|16|16777216|16384|4294967295| -|Memory Optimized|32|16777216|16384|4294967295| --### query_cache_size --The query cache is turned off by default. To enable the query cache, configure the `query_cache_type` parameter. --Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_query_cache_size) to learn more about this parameter. --> [!NOTE] -> The query cache is deprecated as of MySQL 5.7.20 and has been removed in MySQL 8.0. --|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| -|||||| -|Basic|1|Not configurable in Basic tier|N/A|N/A| -|Basic|2|Not configurable in Basic tier|N/A|N/A| -|General Purpose|2|0|0|16777216| -|General Purpose|4|0|0|33554432| -|General Purpose|8|0|0|67108864| -|General Purpose|16|0|0|134217728| -|General Purpose|32|0|0|134217728| -|General Purpose|64|0|0|134217728| -|Memory Optimized|2|0|0|33554432| -|Memory Optimized|4|0|0|67108864| -|Memory Optimized|8|0|0|134217728| -|Memory Optimized|16|0|0|134217728| -|Memory Optimized|32|0|0|134217728| --### lower_case_table_names --The `lower_case_table_names` parameter is set to 1 by default, and you can update this parameter in MySQL 5.6 and MySQL 5.7. --Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_lower_case_table_names) to learn more about this parameter. --> [!NOTE] -> In MySQL 8.0, `lower_case_table_names` is set to 1 by default, and you can't change it. --### innodb_strict_mode --If you receive an error similar to `Row size too large (> 8126)`, consider turning off the `innodb_strict_mode` parameter. You can't modify `innodb_strict_mode` globally at the server level. If row data size is larger than 8K, the data is truncated, without an error notification, leading to potential data loss. It's a good idea to modify the schema to fit the page size limit. --You can set this parameter at a session level by using `init_connect`. To set `innodb_strict_mode` at a session level, refer to [setting parameter not listed](./how-to-server-parameters.md#setting-parameters-not-listed). --> [!NOTE] -> If you have a read replica server, setting `innodb_strict_mode` to `OFF` at the session level on a source server will break the replication. We suggest keeping the parameter set to `ON` if you have read replicas. --### sort_buffer_size --Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_sort_buffer_size) to learn more about this parameter. 
--|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| -|||||| -|Basic|1|Not configurable in Basic tier|N/A|N/A| -|Basic|2|Not configurable in Basic tier|N/A|N/A| -|General Purpose|2|524288|32768|4194304| -|General Purpose|4|524288|32768|8388608| -|General Purpose|8|524288|32768|16777216| -|General Purpose|16|524288|32768|33554432| -|General Purpose|32|524288|32768|33554432| -|General Purpose|64|524288|32768|33554432| -|Memory Optimized|2|524288|32768|8388608| -|Memory Optimized|4|524288|32768|16777216| -|Memory Optimized|8|524288|32768|33554432| -|Memory Optimized|16|524288|32768|33554432| -|Memory Optimized|32|524288|32768|33554432| --### tmp_table_size --Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_tmp_table_size) to learn more about this parameter. --|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| -|||||| -|Basic|1|Not configurable in Basic tier|N/A|N/A| -|Basic|2|Not configurable in Basic tier|N/A|N/A| -|General Purpose|2|16777216|1024|67108864| -|General Purpose|4|16777216|1024|134217728| -|General Purpose|8|16777216|1024|268435456| -|General Purpose|16|16777216|1024|536870912| -|General Purpose|32|16777216|1024|1073741824| -|General Purpose|64|16777216|1024|1073741824| -|Memory Optimized|2|16777216|1024|134217728| -|Memory Optimized|4|16777216|1024|268435456| -|Memory Optimized|8|16777216|1024|536870912| -|Memory Optimized|16|16777216|1024|1073741824| -|Memory Optimized|32|16777216|1024|1073741824| --### InnoDB buffer pool warmup --After you restart Azure Database for MySQL, the data pages that reside in the disk are loaded, as the tables are queried. This leads to increased latency and slower performance for the first run of the queries. For workloads that are sensitive to latency, you might find this slower performance unacceptable. --You can use `InnoDB` buffer pool warmup to shorten the warmup period. This process reloads disk pages that were in the buffer pool *before* the restart, rather than waiting for DML or SELECT operations to access corresponding rows. For more information, see [InnoDB buffer pool server parameters](https://dev.mysql.com/doc/refman/8.0/en/innodb-preload-buffer-pool.html). --However, improved performance comes at the expense of longer start-up time for the server. When you enable this parameter, the server startup and restart times are expected to increase, depending on the IOPS provisioned on the server. It's a good idea to test and monitor the restart time, to ensure that the start-up or restart performance is acceptable, because the server is unavailable during that time. Don't use this parameter when the number of IOPS provisioned is less than 1000 IOPS (in other words, when the storage provisioned is less than 335 GB). --To save the state of the buffer pool at server shutdown, set the server parameter `innodb_buffer_pool_dump_at_shutdown` to `ON`. Similarly, set the server parameter `innodb_buffer_pool_load_at_startup` to `ON` to restore the buffer pool state at server startup. You can control the impact on start-up or restart by lowering and fine-tuning the value of the server parameter `innodb_buffer_pool_dump_pct`. By default, this parameter is set to `25`. --> [!NOTE] -> `InnoDB` buffer pool warmup parameters are only supported in general purpose storage servers with up to 16 TB storage. 
For more information, see [Azure Database for MySQL storage options](./concepts-pricing-tiers.md#storage). --### time_zone --Upon initial deployment, a server running Azure Database for MySQL includes system tables for time zone information, but these tables aren't populated. You can populate the tables by calling the `mysql.az_load_timezone` stored procedure from tools like the MySQL command line or MySQL Workbench. For information about how to call the stored procedures and set the global or session-level time zones, see [Working with the time zone parameter (Azure portal)](how-to-server-parameters.md#working-with-the-time-zone-parameter) or [Working with the time zone parameter (Azure CLI)](how-to-configure-server-parameters-using-cli.md#working-with-the-time-zone-parameter). --### binlog_expire_logs_seconds --In Azure Database for MySQL, this parameter specifies the number of seconds the service waits before purging the binary log file. --The *binary log* contains events that describe database changes, such as table creation operations or changes to table data. It also contains events for statements that can potentially make changes. The binary log is used mainly for two purposes: replication and data recovery operations. --Usually, the binary logs are purged as soon as the service, backups, and replicas no longer hold a handle to them. If there are multiple replicas, the binary logs wait for the slowest replica to read the changes before being purged. If you want binary logs to persist longer, you can configure the parameter `binlog_expire_logs_seconds`. If you set `binlog_expire_logs_seconds` to `0`, which is the default value, the binary log is purged as soon as its handle is freed. If you set `binlog_expire_logs_seconds` to a value greater than 0, then the binary log is only purged after that period of time. --For Azure Database for MySQL, managed features like backup and read replica purging of binary files are handled internally. When you replicate data out from the Azure Database for MySQL service, you must set this parameter on the primary to avoid purging binary logs before the replica reads the changes from the primary. If you set `binlog_expire_logs_seconds` to a higher value, the binary logs won't get purged soon enough, which can increase storage billing. --### event_scheduler --In Azure Database for MySQL, the `event_scheduler` server parameter manages creating, scheduling, and running events, that is, tasks that run according to a schedule by way of a special event scheduler thread. When the `event_scheduler` parameter is set to ON, the event scheduler thread is listed as a daemon process in the output of SHOW PROCESSLIST. 
You can create and schedule events using the following SQL syntax: --```sql -CREATE EVENT <event name> -ON SCHEDULE EVERY _ MINUTE / HOUR / DAY -STARTS TIMESTAMP / CURRENT_TIMESTAMP -ENDS TIMESTAMP / CURRENT_TIMESTAMP + INTERVAL 1 MINUTE / HOUR / DAY -COMMENT '<comment>' -DO -<your statement>; -``` --> [!NOTE] -> For more information about creating an event, see the MySQL Event Scheduler documentation here: -> -> - [MySQL :: MySQL 5.7 Reference Manual :: 23.4 Using the Event Scheduler](https://dev.mysql.com/doc/refman/5.7/en/event-scheduler.html) -> - [MySQL :: MySQL 8.0 Reference Manual :: 25.4 Using the Event Scheduler](https://dev.mysql.com/doc/refman/8.0/en/event-scheduler.html) -> --#### Configuring the event_scheduler server parameter --The following scenario illustrates one way to use the `event_scheduler` parameter in Azure Database for MySQL. To demonstrate the scenario, consider the following example, a simple table: --```sql -mysql> describe tab1; -+-----------+-------------+------+-----+---------+----------------+ -| Field | Type | Null | Key | Default | Extra | -+-----------+-------------+------+-----+---------+----------------+ -| id | int(11) | NO | PRI | NULL | auto_increment | -| CreatedAt | timestamp | YES | | NULL | | -| CreatedBy | varchar(16) | YES | | NULL | | -+-----------+-------------+------+-----+---------+----------------+ -3 rows in set (0.23 sec) -``` --To configure the `event_scheduler` server parameter in Azure Database for MySQL, perform the following steps: --1. In the Azure portal, navigate to your server, and then, under **Settings**, select **Server parameters**. -2. On the **Server parameters** blade, search for `event_scheduler`; in the **VALUE** drop-down list, select **ON**, and then select **Save**. -- > [!NOTE] - > The dynamic server parameter configuration change will be deployed without a restart. --3. Then to create an event, connect to the MySQL server, and run the following SQL command: -- ```sql - CREATE EVENT test_event_01 - ON SCHEDULE EVERY 1 MINUTE - STARTS CURRENT_TIMESTAMP - ENDS CURRENT_TIMESTAMP + INTERVAL 1 HOUR - COMMENT 'Inserting record into the table tab1 with current timestamp' - DO - INSERT INTO tab1(id,createdAt,createdBy) - VALUES('',NOW(),CURRENT_USER()); - ``` --4. To view the Event Scheduler Details, run the following SQL statement: -- ```sql - SHOW EVENTS; - ``` -- The following output appears: -- ```sql - mysql> show events; - +-----+---------------+-------------+-----------+-----------+------------+----------------+----------------+---------------------+---------------------+---------+------------+----------------------+----------------------+--------------------+ - | Db | Name | Definer | Time zone | Type | Execute at | Interval value | Interval field | Starts | Ends | Status | Originator | character_set_client | collation_connection | Database Collation | - +-----+---------------+-------------+-----------+-----------+------------+----------------+----------------+---------------------+---------------------+---------+------------+----------------------+----------------------+--------------------+ - | db1 | test_event_01 | azureuser@% | SYSTEM | RECURRING | NULL | 1 | MINUTE | 2023-04-05 14:47:04 | 2023-04-05 15:47:04 | ENABLED | 3221153808 | latin1 | latin1_swedish_ci | latin1_swedish_ci | - +-----+---------------+-------------+-----------+-----------+------------+----------------+----------------+---------------------+---------------------+---------+------------+----------------------+----------------------+--------------------+ - 1 row in set (0.23 sec) - ``` --5. After a few minutes, query the rows from the table to see the rows inserted every minute per the `event_scheduler` parameter you configured: -- ```sql - mysql> select * from tab1; - +----+---------------------+-------------+ - | id | CreatedAt | CreatedBy | - +----+---------------------+-------------+ - | 1 | 2023-04-05 14:47:04 | azureuser@% | - | 2 | 2023-04-05 14:48:04 | azureuser@% | - | 3 | 2023-04-05 14:49:04 | azureuser@% | - | 4 | 2023-04-05 14:50:04 | azureuser@% | - +----+---------------------+-------------+ - 4 rows in set (0.23 sec) - ``` --6. After an hour, run a SELECT statement on the table to view the complete set of values inserted every minute over the hour for which the `event_scheduler` was configured. 
-- ```sql - mysql> select * from tab1; - +----+---------------------+-------------+ - | id | CreatedAt | CreatedBy | - +----+---------------------+-------------+ - | 1 | 2023-04-05 14:47:04 | azureuser@% | - | 2 | 2023-04-05 14:48:04 | azureuser@% | - | 3 | 2023-04-05 14:49:04 | azureuser@% | - | 4 | 2023-04-05 14:50:04 | azureuser@% | - | 5 | 2023-04-05 14:51:04 | azureuser@% | - | 6 | 2023-04-05 14:52:04 | azureuser@% | - ..< 50 lines trimmed to compact output >.. - | 56 | 2023-04-05 15:42:04 | azureuser@% | - | 57 | 2023-04-05 15:43:04 | azureuser@% | - | 58 | 2023-04-05 15:44:04 | azureuser@% | - | 59 | 2023-04-05 15:45:04 | azureuser@% | - | 60 | 2023-04-05 15:46:04 | azureuser@% | - | 61 | 2023-04-05 15:47:04 | azureuser@% | - +----+---------------------+-------------+ - 61 rows in set (0.23 sec) - ``` --#### Other scenarios --You can set up an event based on the requirements of your specific scenario. A few similar examples of scheduling SQL statements to run at different time intervals follow. --**Run a SQL statement now and repeat one time per day with no end** --```sql -CREATE EVENT <event name> -ON SCHEDULE -EVERY 1 DAY -STARTS (TIMESTAMP(CURRENT_DATE) + INTERVAL 1 DAY + INTERVAL 1 HOUR) -COMMENT 'Comment' -DO -<your statement>; -``` --**Run a SQL statement every hour with no end** --```sql -CREATE EVENT <event name> -ON SCHEDULE -EVERY 1 HOUR -COMMENT 'Comment' -DO -<your statement>; -``` --**Run a SQL statement every day with no end** --```sql -CREATE EVENT <event name> -ON SCHEDULE -EVERY 1 DAY -STARTS str_to_date( date_format(now(), '%Y%m%d 0200'), '%Y%m%d %H%i' ) + INTERVAL 1 DAY -COMMENT 'Comment' -DO -<your statement>; -``` --## Nonconfigurable server parameters --The following server parameters aren't configurable in the service: --|**Parameter**|**Fixed value**| -| :-- | :-- | -|`innodb_file_per_table` in the basic tier|OFF| -|`innodb_flush_log_at_trx_commit`|1| -|`sync_binlog`|1| -|`innodb_log_file_size`|256 MB| -|`innodb_log_files_in_group`|2| --Other variables not listed here are set to the default MySQL values. Refer to the MySQL docs for versions [8.0](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html), [5.7](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html), and [5.6](https://dev.mysql.com/doc/refman/5.6/en/server-system-variables.html). --## Next steps --- Learn how to [configure server parameters by using the Azure portal](./how-to-server-parameters.md)-- Learn how to [configure server parameters by using the Azure CLI](./how-to-configure-server-parameters-using-cli.md)-- Learn how to [configure server parameters by using PowerShell](./how-to-configure-server-parameters-using-powershell.md) |
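The portal steps above set `event_scheduler` interactively; for scripting, the same change (and the related `binlog_expire_logs_seconds` setting discussed earlier) can be made with a minimal Azure CLI sketch. Names and values are placeholders:

```azurecli
# Turn on the MySQL event scheduler (dynamic parameter; no restart needed).
az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver \
    --name event_scheduler --value ON

# Optionally keep binary logs for 3 days (259200 seconds) when replicating data out of the service.
az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver \
    --name binlog_expire_logs_seconds --value 259200
```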
mysql | Concepts Servers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-servers.md | - Title: Server concepts - Azure Database for MySQL -description: This topic provides considerations and guidelines for working with Azure Database for MySQL servers. ----- Previously updated : 06/20/2022---# Server concepts in Azure Database for MySQL ----This article provides considerations and guidelines for working with Azure Database for MySQL servers. --## What is an Azure Database for MySQL server? --An Azure Database for MySQL server is a central administrative point for multiple databases. It is the same MySQL server construct that you may be familiar with in the on-premises world. Specifically, the Azure Database for MySQL service is managed, provides performance guarantees, and exposes access and features at server-level. --An Azure Database for MySQL server: --- Is created within an Azure subscription.-- Is the parent resource for databases.-- Provides a namespace for databases.-- Is a container with strong lifetime semantics - delete a server and it deletes the contained databases.-- Collocates resources in a region.-- Provides a connection endpoint for server and database access.-- Provides the scope for management policies that apply to its databases: login, firewall, users, roles, configurations, etc.-- Is available in multiple versions. For more information, see [Supported Azure Database for MySQL database versions](./concepts-supported-versions.md).--Within an Azure Database for MySQL server, you can create one or multiple databases. You can opt to create a single database per server to use all the resources or to create multiple databases to share the resources. The pricing is structured per-server, based on the configuration of pricing tier, vCores, and storage (GB). For more information, see [Pricing tiers](./concepts-pricing-tiers.md). --## How do I connect and authenticate to an Azure Database for MySQL server? --The following elements help ensure safe access to your database. --| Security concept | Description | -| :-- | :-- | -| **Authentication and authorization** | Azure Database for MySQL server supports native MySQL authentication. You can connect and authenticate to a server with the server's admin login. | -| **Protocol** | The service supports a message-based protocol used by MySQL. | -| **TCP/IP** | The protocol is supported over TCP/IP and over Unix-domain sockets. | -| **Firewall** | To help protect your data, a firewall rule prevents all access to your database server, until you specify which computers have permission. See [Azure Database for MySQL Server firewall rules](./concepts-firewall-rules.md). | -| **SSL** | The service supports enforcing SSL connections between your applications and your database server. See [Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](./how-to-configure-ssl.md). | --## Stop/Start an Azure Database for MySQL --Azure Database for MySQL gives you the ability to **Stop** the server when not in use and **Start** the server when you resume activity. This is essentially done to save costs on the database servers and only pay for the resource when in use. This becomes even more important for dev-test workloads and when you are only using the server for part of the day. When you stop the server, all active connections will be dropped. 
Later, when you want to bring the server back online, you can use either the [Azure portal](how-to-stop-start-server.md) or the [CLI](how-to-stop-start-server.md). --When the server is in the **Stopped** state, the server's compute is not billed. However, storage continues to be billed, as the server's storage remains to ensure that data files are available when the server is started again. --> [!IMPORTANT] -> When you **Stop** the server, it remains stopped for up to 7 days. If you do not manually **Start** it during this time, the server is automatically started at the end of the 7 days. You can choose to **Stop** it again if you are not using the server. --While the server is stopped, no management operations can be performed on the server. To change any configuration settings on the server, you need to [start the server](how-to-stop-start-server.md). --### Limitations of Stop/start operation -- Not supported with read replica configurations (both source and replicas).--## How do I manage a server? --You can manage the creation, deletion, server parameter configuration (my.cnf), scaling, networking, security, high availability, backup & restore, and monitoring of your Azure Database for MySQL servers by using the Azure portal or the Azure CLI. In addition, the following stored procedures are available in Azure Database for MySQL to perform certain database administration tasks that require the SUPER privilege, which is not supported on the server. --|**Stored Procedure Name**|**Input Parameters**|**Output Parameters**|**Usage Note**| -|--|--|--|--| -|*mysql.az_kill*|processlist_id|N/A|Equivalent to the [`KILL CONNECTION`](https://dev.mysql.com/doc/refman/8.0/en/kill.html) command. Terminates the connection associated with the provided processlist_id, after terminating any statement the connection is executing.| -|*mysql.az_kill_query*|processlist_id|N/A|Equivalent to the [`KILL QUERY`](https://dev.mysql.com/doc/refman/8.0/en/kill.html) command. Terminates the statement the connection is currently executing. Leaves the connection itself alive.| -|*mysql.az_load_timezone*|N/A|N/A|Loads [time zone tables](how-to-server-parameters.md#working-with-the-time-zone-parameter) to allow the `time_zone` parameter to be set to named values (ex. "US/Pacific").| --## Next steps --- For an overview of the service, see [Azure Database for MySQL Overview](./overview.md)-- For information about specific resource quotas and limitations based on your **pricing tier**, see [Pricing tiers](./concepts-pricing-tiers.md)-- For information about connecting to the service, see [Connection libraries for Azure Database for MySQL](../flexible-server/concepts-connection-libraries.md). |
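To make the Stop/Start workflow above concrete, here's a minimal Azure CLI sketch with placeholder names:

```azurecli
# Stop the server to pause compute billing (storage continues to be billed).
az mysql server stop --resource-group myresourcegroup --name mydemoserver

# Start it again when you resume activity; stopped servers are auto-started after 7 days.
az mysql server start --resource-group myresourcegroup --name mydemoserver
```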
mysql | Concepts Ssl Connection Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-ssl-connection-security.md | - Title: SSL/TLS connectivity - Azure Database for MySQL -description: Information for configuring Azure Database for MySQL and associated applications to properly use SSL connections ----- Previously updated : 06/20/2022---# SSL/TLS connectivity in Azure Database for MySQL ----Azure Database for MySQL supports connecting your database server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against "man in the middle" attacks by encrypting the data stream between the server and your application. --> [!NOTE] -> Updating the `require_secure_transport` server parameter value does not affect the MySQL service's behavior. Use the SSL and TLS enforcement features outlined in this article to secure connections to your database. -->[!NOTE] -> Based on customer feedback, we have extended the root certificate deprecation for our existing Baltimore Root CA until February 15, 2021 (02/15/2021). --> [!IMPORTANT] -> The SSL root certificate is set to expire starting February 15, 2021 (02/15/2021). Please update your application to use the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem). To learn more, see [planned certificate updates](concepts-certificate-rotation.md). --## SSL default settings --By default, the database service is configured to require SSL connections when connecting to MySQL. We recommend that you avoid disabling the SSL option whenever possible. --When provisioning a new Azure Database for MySQL server through the Azure portal and CLI, enforcement of SSL connections is enabled by default. --Connection strings for various programming languages are shown in the Azure portal. Those connection strings include the required SSL parameters to connect to your database. In the Azure portal, select your server. Under the **Settings** heading, select **Connection strings**. The SSL parameter varies based on the connector, for example "ssl=true" or "sslmode=require" or "sslmode=required" and other variations. --In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file to connect securely. Currently, customers can **only use** the predefined certificate to connect to an Azure Database for MySQL server, which is located at https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem. --Similarly, the following links point to the certificates for servers in sovereign clouds: [Azure Government](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem), [Microsoft Azure operated by 21Vianet](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt). --To learn how to enable or disable SSL connections when developing an application, refer to [How to configure SSL](how-to-configure-ssl.md). --## TLS enforcement in Azure Database for MySQL --Azure Database for MySQL supports encryption for clients connecting to your database server using Transport Layer Security (TLS). TLS is an industry standard protocol that ensures secure network connections between your database server and client applications, allowing you to adhere to compliance requirements. 
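As a companion to the SSL defaults described above, here's a minimal Azure CLI sketch (placeholder names) for toggling SSL enforcement on an existing server:

```azurecli
# Enforce SSL connections (the default for new servers).
az mysql server update --resource-group myresourcegroup --name mydemoserver \
    --ssl-enforcement Enabled
```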
--### TLS settings --Azure Database for MySQL provides the ability to enforce the TLS version for client connections. To enforce the TLS version, use the **Minimum TLS version** option setting. The following values are allowed for this option setting: --| Minimum TLS setting | Client TLS version supported | -|:|-:| -| TLSEnforcementDisabled (default) | No TLS required | -| TLS1_0 | TLS 1.0, TLS 1.1, TLS 1.2 and higher | -| TLS1_1 | TLS 1.1, TLS 1.2 and higher | -| TLS1_2 | TLS version 1.2 and higher | ---For example, setting the minimum TLS version to TLS 1.0 means your server allows connections from clients using TLS 1.0, 1.1, and 1.2+. Alternatively, setting this to 1.2 means that you only allow connections from clients using TLS 1.2+, and all connections with TLS 1.0 and TLS 1.1 will be rejected. --> [!NOTE] -> By default, Azure Database for MySQL does not enforce a minimum TLS version (the setting `TLSEnforcementDisabled`). -> -> Once you enforce a minimum TLS version, you cannot later disable minimum version enforcement. --The minimum TLS version setting doesn't require a server restart and can be set while the server is online. To learn how to set the TLS setting for your Azure Database for MySQL, refer to [How to configure TLS setting](how-to-tls-configurations.md). --## Cipher support by Azure Database for MySQL single server --As part of the SSL/TLS communication, the cipher suites are validated, and only supported cipher suites are allowed to communicate with the database server. The cipher suite validation is controlled in the [gateway layer](concepts-connectivity-architecture.md#connectivity-architecture) and not explicitly on the node itself. If the cipher suites don't match one of the suites listed below, incoming client connections will be rejected. --### Supported cipher suites --* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 -* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 -* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 -* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 --## Next steps --- [Connection libraries for Azure Database for MySQL](../flexible-server/concepts-connection-libraries.md)-- Learn how to [configure SSL](how-to-configure-ssl.md)-- Learn how to [configure TLS](how-to-tls-configurations.md) |
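Complementing the TLS settings table above, here's a minimal Azure CLI sketch (placeholder names) that makes TLS 1.2 the floor for client connections:

```azurecli
# Require TLS 1.2 or higher for all client connections.
az mysql server update --resource-group myresourcegroup --name mydemoserver \
    --minimal-tls-version TLS1_2
```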
mysql | Connect Cpp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-cpp.md | - Title: 'Quickstart: Connect using C++ - Azure Database for MySQL' -description: This quickstart provides a C++ code sample you can use to connect and query data from Azure Database for MySQL. ------ Previously updated : 06/20/2022---# Quickstart: Use Connector/C++ to connect and query data in Azure Database for MySQL ----This quickstart demonstrates how to connect to an Azure Database for MySQL by using a C++ application. It shows how to use SQL statements to query, insert, update, and delete data in the database. This topic assumes you're familiar with developing using C++ and you're new to working with Azure Database for MySQL. --## Prerequisites --This quickstart uses the resources created in either of the following guides as a starting point: -- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)-- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)--You also need to: -- Install [.NET Framework](https://dotnet.microsoft.com/download/dotnet-framework)-- Install [Visual Studio](https://www.visualstudio.com/downloads/)-- Install [MySQL Connector/C++](https://dev.mysql.com/downloads/connector/cpp/) -- Install [Boost](https://www.boost.org/)--> [!IMPORTANT] -> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md). --## Install Visual Studio and .NET -The steps in this section assume that you're familiar with developing using .NET. --### **Windows** -- Install Visual Studio 2019 Community. Visual Studio 2019 Community is a full-featured, extensible, free IDE. With this IDE, you can create modern applications for Android, iOS, Windows, web and database applications, and cloud services. You can install either the full .NET Framework or just .NET Core: the code snippets in the Quickstart work with either. If you already have Visual Studio installed on your computer, skip the next two steps.- 1. Download the [Visual Studio 2019 installer](https://www.visualstudio.com/thank-you-downloading-visual-studio/?sku=Community&rel=15). - 2. Run the installer and follow the installation prompts to complete the installation. --### **Configure Visual Studio** -1. From Visual Studio, Project -> Properties -> Linker -> General -> Additional Library Directories, add the "\lib\opt" directory (for example: C:\Program Files (x86)\MySQL\MySQL Connector C++ 1.1.9\lib\opt) of the C++ connector. -2. From Visual Studio, Project -> Properties -> C/C++ -> General -> Additional Include Directories: - - Add the "\include" directory of the C++ connector (for example: C:\Program Files (x86)\MySQL\MySQL Connector C++ 1.1.9\include\). - - Add the Boost library's root directory (for example: C:\boost_1_64_0\). -3. From Visual Studio, Project -> Properties -> Linker -> Input -> Additional Dependencies, add **mysqlcppconn.lib** into the text field. -4. Either copy **mysqlcppconn.dll** from the C++ connector library folder in step 3 to the same directory as the application executable, or add its folder to the PATH environment variable so your application can find it. --## Get connection information -Get the connection information needed to connect to the Azure Database for MySQL. 
You need the fully qualified server name and login credentials. --1. Sign in to the [Azure portal](https://portal.azure.com/). -2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**). -3. Click the server name. -4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel. - :::image type="content" source="./media/connect-cpp/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name"::: --## Connect, create table, and insert data -Use the following code to connect and load the data by using **CREATE TABLE** and **INSERT INTO** SQL statements. The code uses sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses method createStatement() and execute() to run the database commands. --Replace the Host, DBName, User, and Password parameters. You can replace the parameters with the values that you specified when you created the server and database. --```c++ -#include <stdlib.h> -#include <iostream> -#include "stdafx.h" --#include "mysql_connection.h" -#include <cppconn/driver.h> -#include <cppconn/exception.h> -#include <cppconn/prepared_statement.h> -using namespace std; --//for demonstration only. never save your password in the code! -const string server = "tcp://yourservername.mysql.database.azure.com:3306"; -const string username = "username@servername"; -const string password = "yourpassword"; --int main() -{ - sql::Driver *driver; - sql::Connection *con; - sql::Statement *stmt; - sql::PreparedStatement *pstmt; -- try - { - driver = get_driver_instance(); - con = driver->connect(server, username, password); - } - catch (sql::SQLException e) - { - cout << "Could not connect to server. Error message: " << e.what() << endl; - system("pause"); - exit(1); - } -- //please create database "quickstartdb" ahead of time - con->setSchema("quickstartdb"); -- stmt = con->createStatement(); - stmt->execute("DROP TABLE IF EXISTS inventory"); - cout << "Finished dropping table (if existed)" << endl; - stmt->execute("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);"); - cout << "Finished creating table" << endl; - delete stmt; -- pstmt = con->prepareStatement("INSERT INTO inventory(name, quantity) VALUES(?,?)"); - pstmt->setString(1, "banana"); - pstmt->setInt(2, 150); - pstmt->execute(); - cout << "One row inserted." << endl; -- pstmt->setString(1, "orange"); - pstmt->setInt(2, 154); - pstmt->execute(); - cout << "One row inserted." << endl; -- pstmt->setString(1, "apple"); - pstmt->setInt(2, 100); - pstmt->execute(); - cout << "One row inserted." << endl; -- delete pstmt; - delete con; - system("pause"); - return 0; -} -``` --## Read data --Use the following code to connect and read the data by using a **SELECT** SQL statement. The code uses sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses method prepareStatement() and executeQuery() to run the select commands. Next, the code uses next() to advance to the records in the results. Finally, the code uses getInt() and getString() to parse the values in the record. --Replace the Host, DBName, User, and Password parameters. You can replace the parameters with the values that you specified when you created the server and database. 
--```c++ -#include <stdlib.h> -#include <iostream> -#include "stdafx.h" --#include "mysql_connection.h" -#include <cppconn/driver.h> -#include <cppconn/exception.h> -#include <cppconn/resultset.h> -#include <cppconn/prepared_statement.h> -using namespace std; --//for demonstration only. never save your password in the code! -const string server = "tcp://yourservername.mysql.database.azure.com:3306"; -const string username = "username@servername"; -const string password = "yourpassword"; --int main() -{ - sql::Driver *driver; - sql::Connection *con; - sql::PreparedStatement *pstmt; - sql::ResultSet *result; -- try - { - driver = get_driver_instance(); - //for demonstration only. never save password in the code! - con = driver->connect(server, username, password); - } - catch (sql::SQLException &e) //catch by reference to avoid slicing the exception object - { - cout << "Could not connect to server. Error message: " << e.what() << endl; - system("pause"); - exit(1); - } -- con->setSchema("quickstartdb"); -- //select - pstmt = con->prepareStatement("SELECT * FROM inventory;"); - result = pstmt->executeQuery(); -- while (result->next()) - printf("Reading from table=(%d, %s, %d)\n", result->getInt(1), result->getString(2).c_str(), result->getInt(3)); -- delete result; - delete pstmt; - delete con; - system("pause"); - return 0; -} -``` --## Update data -Use the following code to connect and update the data by using an **UPDATE** SQL statement. The code uses the sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses the prepareStatement() and executeUpdate() methods to run the update command. --Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database. --```c++ -#include <stdlib.h> -#include <iostream> -#include "stdafx.h" --#include "mysql_connection.h" -#include <cppconn/driver.h> -#include <cppconn/exception.h> -#include <cppconn/resultset.h> -#include <cppconn/prepared_statement.h> -using namespace std; --//for demonstration only. never save your password in the code! -const string server = "tcp://yourservername.mysql.database.azure.com:3306"; -const string username = "username@servername"; -const string password = "yourpassword"; --int main() -{ - sql::Driver *driver; - sql::Connection *con; - sql::PreparedStatement *pstmt; -- try - { - driver = get_driver_instance(); - //for demonstration only. never save password in the code! - con = driver->connect(server, username, password); - } - catch (sql::SQLException &e) //catch by reference to avoid slicing the exception object - { - cout << "Could not connect to server. Error message: " << e.what() << endl; - system("pause"); - exit(1); - } - - con->setSchema("quickstartdb"); -- //update; executeUpdate() is used because UPDATE returns no result set - pstmt = con->prepareStatement("UPDATE inventory SET quantity = ? WHERE name = ?"); - pstmt->setInt(1, 200); - pstmt->setString(2, "banana"); - pstmt->executeUpdate(); - printf("Row updated\n"); -- delete pstmt; - delete con; - system("pause"); - return 0; -} -``` ---## Delete data -Use the following code to connect and delete the data by using a **DELETE** SQL statement. The code uses the sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses the prepareStatement() and executeUpdate() methods to run the delete command. --Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
--```c++ -#include <stdlib.h> -#include <iostream> -#include "stdafx.h" --#include "mysql_connection.h" -#include <cppconn/driver.h> -#include <cppconn/exception.h> -#include <cppconn/resultset.h> -#include <cppconn/prepared_statement.h> -using namespace std; --//for demonstration only. never save your password in the code! -const string server = "tcp://yourservername.mysql.database.azure.com:3306"; -const string username = "username@servername"; -const string password = "yourpassword"; --int main() -{ - sql::Driver *driver; - sql::Connection *con; - sql::PreparedStatement *pstmt; -- try - { - driver = get_driver_instance(); - //for demonstration only. never save password in the code! - con = driver->connect(server, username, password); - } - catch (sql::SQLException &e) //catch by reference to avoid slicing the exception object - { - cout << "Could not connect to server. Error message: " << e.what() << endl; - system("pause"); - exit(1); - } - - con->setSchema("quickstartdb"); - - //delete; executeUpdate() is used because DELETE returns no result set - pstmt = con->prepareStatement("DELETE FROM inventory WHERE name = ?"); - pstmt->setString(1, "orange"); - pstmt->executeUpdate(); - printf("Row deleted\n"); - - delete pstmt; - delete con; - system("pause"); - return 0; -} -``` --## Clean up resources --To clean up all resources used during this quickstart, delete the resource group using the following command: --```azurecli -az group delete \ - --name $AZ_RESOURCE_GROUP \ - --yes -``` --## Next steps -> [!div class="nextstepaction"] -> [Migrate your MySQL database to Azure Database for MySQL using dump and restore](concepts-migrate-dump-restore.md) |
mysql | Connect Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-csharp.md | - Title: 'Quickstart: Connect using C# - Azure Database for MySQL' -description: "This quickstart provides a C# (.NET) code sample you can use to connect and query data from Azure Database for MySQL." ------ Previously updated : 06/20/2022---# Quickstart: Use .NET (C#) to connect and query data in Azure Database for MySQL ----This quickstart demonstrates how to connect to an Azure Database for MySQL by using a C# application. It shows how to use SQL statements to query, insert, update, and delete data in the database. --## Prerequisites -For this quickstart you need: --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).-- Create an Azure Database for MySQL single server using [Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md) <br/> or [Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md) if you do not have one.-- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.-- Install the [.NET SDK](https://dotnet.microsoft.com/download) for your platform (Windows, Ubuntu Linux, or macOS).--|Action| Connectivity method|How-to guide| -|: |: |: | -| **Configure firewall rules** | Public | [Portal](./how-to-manage-firewall-using-portal.md) <br/> [CLI](./how-to-manage-firewall-using-cli.md)| -| **Configure Service Endpoint** | Public | [Portal](./how-to-manage-vnet-using-portal.md) <br/> [CLI](./how-to-manage-vnet-using-cli.md)| -| **Configure private link** | Private | [Portal](./how-to-configure-private-link-portal.md) <br/> [CLI](./how-to-configure-private-link-cli.md) | --- [Create a database and non-admin user](./how-to-create-users.md)---## Create a C# project -At a command prompt, run: --``` -mkdir AzureMySqlExample -cd AzureMySqlExample -dotnet new console -dotnet add package MySqlConnector -``` --## Get connection information -Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials. --1. Sign in to the [Azure portal](https://portal.azure.com/). -2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**). -3. Click the server name. -4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel. - :::image type="content" source="./media/connect-csharp/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name"::: --## Step 1: Connect and insert data -Use the following code to connect and load the data by using `CREATE TABLE` and `INSERT INTO` SQL statements. The code uses the methods of the `MySqlConnection` class: -- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.-- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.-- [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands. --Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database.
--```csharp -using System; -using System.Threading.Tasks; -using MySqlConnector; --namespace AzureMySqlExample -{ - class MySqlCreate - { - static async Task Main(string[] args) - { - var builder = new MySqlConnectionStringBuilder - { - Server = "YOUR-SERVER.mysql.database.azure.com", - Database = "YOUR-DATABASE", - UserID = "USER@YOUR-SERVER", - Password = "PASSWORD", - SslMode = MySqlSslMode.Required, - }; -- using (var conn = new MySqlConnection(builder.ConnectionString)) - { - Console.WriteLine("Opening connection"); - await conn.OpenAsync(); -- using (var command = conn.CreateCommand()) - { - command.CommandText = "DROP TABLE IF EXISTS inventory;"; - await command.ExecuteNonQueryAsync(); - Console.WriteLine("Finished dropping table (if existed)"); -- command.CommandText = "CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);"; - await command.ExecuteNonQueryAsync(); - Console.WriteLine("Finished creating table"); -- command.CommandText = @"INSERT INTO inventory (name, quantity) VALUES (@name1, @quantity1), - (@name2, @quantity2), (@name3, @quantity3);"; - command.Parameters.AddWithValue("@name1", "banana"); - command.Parameters.AddWithValue("@quantity1", 150); - command.Parameters.AddWithValue("@name2", "orange"); - command.Parameters.AddWithValue("@quantity2", 154); - command.Parameters.AddWithValue("@name3", "apple"); - command.Parameters.AddWithValue("@quantity3", 100); -- int rowCount = await command.ExecuteNonQueryAsync(); - Console.WriteLine(String.Format("Number of rows inserted={0}", rowCount)); - } -- // connection will be closed by the 'using' block - Console.WriteLine("Closing connection"); - } -- Console.WriteLine("Press RETURN to exit"); - Console.ReadLine(); - } - } -} -``` --## Step 2: Read data --Use the following code to connect and read the data by using a `SELECT` SQL statement. The code uses the `MySqlConnection` class with methods: -- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.-- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.-- [ExecuteReaderAsync()](/dotnet/api/system.data.common.dbcommand.executereaderasync) to run the database commands. -- [ReadAsync()](/dotnet/api/system.data.common.dbdatareader.readasync#System_Data_Common_DbDataReader_ReadAsync) to advance to the records in the results. Then the code uses GetInt32 and GetString to parse the values in the record.---Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database. 
--```csharp -using System; -using System.Threading.Tasks; -using MySqlConnector; --namespace AzureMySqlExample -{ - class MySqlRead - { - static async Task Main(string[] args) - { - var builder = new MySqlConnectionStringBuilder - { - Server = "YOUR-SERVER.mysql.database.azure.com", - Database = "YOUR-DATABASE", - UserID = "USER@YOUR-SERVER", - Password = "PASSWORD", - SslMode = MySqlSslMode.Required, - }; -- using (var conn = new MySqlConnection(builder.ConnectionString)) - { - Console.WriteLine("Opening connection"); - await conn.OpenAsync(); -- using (var command = conn.CreateCommand()) - { - command.CommandText = "SELECT * FROM inventory;"; -- using (var reader = await command.ExecuteReaderAsync()) - { - while (await reader.ReadAsync()) - { - Console.WriteLine(string.Format( - "Reading from table=({0}, {1}, {2})", - reader.GetInt32(0), - reader.GetString(1), - reader.GetInt32(2))); - } - } - } -- Console.WriteLine("Closing connection"); - } -- Console.WriteLine("Press RETURN to exit"); - Console.ReadLine(); - } - } -} -``` --## Step 3: Update data -Use the following code to connect and update the data by using an `UPDATE` SQL statement. The code uses the `MySqlConnection` class with these methods: -- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL. -- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.-- [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands. ---Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database. --```csharp -using System; -using System.Threading.Tasks; -using MySqlConnector; --namespace AzureMySqlExample -{ - class MySqlUpdate - { - static async Task Main(string[] args) - { - var builder = new MySqlConnectionStringBuilder - { - Server = "YOUR-SERVER.mysql.database.azure.com", - Database = "YOUR-DATABASE", - UserID = "USER@YOUR-SERVER", - Password = "PASSWORD", - SslMode = MySqlSslMode.Required, - }; -- using (var conn = new MySqlConnection(builder.ConnectionString)) - { - Console.WriteLine("Opening connection"); - await conn.OpenAsync(); -- using (var command = conn.CreateCommand()) - { - command.CommandText = "UPDATE inventory SET quantity = @quantity WHERE name = @name;"; - command.Parameters.AddWithValue("@quantity", 200); - command.Parameters.AddWithValue("@name", "banana"); -- int rowCount = await command.ExecuteNonQueryAsync(); - Console.WriteLine(String.Format("Number of rows updated={0}", rowCount)); - } -- Console.WriteLine("Closing connection"); - } -- Console.WriteLine("Press RETURN to exit"); - Console.ReadLine(); - } - } -} -``` --## Step 4: Delete data -Use the following code to connect and delete the data by using a `DELETE` SQL statement. --The code uses the `MySqlConnection` class with these methods: -- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.-- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.-- [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands. ---Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database.
--```csharp -using System; -using System.Threading.Tasks; -using MySqlConnector; --namespace AzureMySqlExample -{ - class MySqlDelete - { - static async Task Main(string[] args) - { - var builder = new MySqlConnectionStringBuilder - { - Server = "YOUR-SERVER.mysql.database.azure.com", - Database = "YOUR-DATABASE", - UserID = "USER@YOUR-SERVER", - Password = "PASSWORD", - SslMode = MySqlSslMode.Required, - }; -- using (var conn = new MySqlConnection(builder.ConnectionString)) - { - Console.WriteLine("Opening connection"); - await conn.OpenAsync(); -- using (var command = conn.CreateCommand()) - { - command.CommandText = "DELETE FROM inventory WHERE name = @name;"; - command.Parameters.AddWithValue("@name", "orange"); -- int rowCount = await command.ExecuteNonQueryAsync(); - Console.WriteLine(String.Format("Number of rows deleted={0}", rowCount)); - } -- Console.WriteLine("Closing connection"); - } -- Console.WriteLine("Press RETURN to exit"); - Console.ReadLine(); - } - } -} -``` --## Clean up resources --To clean up all resources used during this quickstart, delete the resource group using the following command: --```azurecli -az group delete \ - --name $AZ_RESOURCE_GROUP \ - --yes -``` --## Next steps -> [!div class="nextstepaction"] -> [Manage Azure Database for MySQL server using Portal](./how-to-create-manage-server-portal.md)<br/> --> [!div class="nextstepaction"] -> [Manage Azure Database for MySQL server using CLI](./how-to-manage-single-server-cli.md) --[Cannot find what you are looking for? Let us know.](https://aka.ms/mysql-doc-feedback) |
mysql | Connect Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-go.md | - Title: 'Quickstart: Connect using Go - Azure Database for MySQL' -description: This quickstart provides several Go code samples you can use to connect and query data from Azure Database for MySQL. ------ Previously updated : 05/03/2023---# Quickstart: Use Go language to connect and query data in Azure Database for MySQL ----This quickstart demonstrates how to connect to an Azure Database for MySQL from Windows, Ubuntu Linux, and Apple macOS platforms by using code written in the [Go](https://go.dev/) language. It shows how to use SQL statements to query, insert, update, and delete data in the database. This topic assumes that you are familiar with development using Go and that you are new to working with Azure Database for MySQL. --## Prerequisites --This quickstart uses the resources created in either of these guides as a starting point: --- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)-- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)--> [!IMPORTANT] -> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md) --## Install Go and MySQL connector --Install [Go](https://go.dev/doc/install) and the [go-sql-driver for MySQL](https://github.com/go-sql-driver/mysql#installation) on your own computer. Depending on your platform, follow the steps in the appropriate section: --### [Windows](#tab/windows) --1. [Download](https://go.dev/dl/) and install Go for Microsoft Windows according to the [installation instructions](https://go.dev/doc/install). -2. Launch the command prompt from the start menu. -3. Make a folder for your project, such as `mkdir %USERPROFILE%\go\src\mysqlgo`. -4. Change directory into the project folder, such as `cd %USERPROFILE%\go\src\mysqlgo`. -5. Set the GOPATH environment variable to point to the source code directory: `set GOPATH=%USERPROFILE%\go`. -6. Install the [go-sql-driver for mysql](https://github.com/go-sql-driver/mysql#installation) by running the `go get github.com/go-sql-driver/mysql` command. -- In summary, install Go, then run these commands in the command prompt: -- ```cmd - mkdir %USERPROFILE%\go\src\mysqlgo - cd %USERPROFILE%\go\src\mysqlgo - set GOPATH=%USERPROFILE%\go - go get github.com/go-sql-driver/mysql - ``` --### [Linux (Ubuntu)](#tab/ubuntu) --1. Launch the Bash shell. -2. Install Go by running `sudo apt-get install golang-go`. -3. Make a folder for your project in your home directory, such as `mkdir -p ~/go/src/mysqlgo/`. -4. Change directory into the folder, such as `cd ~/go/src/mysqlgo/`. -5. Set the GOPATH environment variable to point to a valid source directory, such as your current home directory's go folder. At the Bash shell, run `export GOPATH=~/go` to add the go directory as the GOPATH for the current shell session. -6. Install the [go-sql-driver for mysql](https://github.com/go-sql-driver/mysql#installation) by running the `go get github.com/go-sql-driver/mysql` command.
-- In summary, run these bash commands: -- ```bash - sudo apt-get install golang-go git -y - mkdir -p ~/go/src/mysqlgo/ - cd ~/go/src/mysqlgo/ - export GOPATH=~/go/ - go get github.com/go-sql-driver/mysql - ``` --### [Apple macOS](#tab/macos) --1. Download and install Go according to the [installation instructions](https://go.dev/doc/install) matching your platform. -2. Launch the Bash shell. -3. Make a folder for your project in your home directory, such as `mkdir -p ~/go/src/mysqlgo/`. -4. Change directory into the folder, such as `cd ~/go/src/mysqlgo/`. -5. Set the GOPATH environment variable to point to a valid source directory, such as your current home directory's go folder. At the Bash shell, run `export GOPATH=~/go` to add the go directory as the GOPATH for the current shell session. -6. Install the [go-sql-driver for mysql](https://github.com/go-sql-driver/mysql#installation) by running the `go get github.com/go-sql-driver/mysql` command. -- In summary, install Go, then run these bash commands: -- ```bash - mkdir -p ~/go/src/mysqlgo/ - cd ~/go/src/mysqlgo/ - export GOPATH=~/go/ - go get github.com/go-sql-driver/mysql - ``` ----## Get connection information --Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials. --1. Log in to the [Azure portal](https://portal.azure.com/). -2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**). -3. Click the server name. -4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel. - :::image type="content" source="./media/connect-go/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name"::: --## Build and run Go code --1. To write Golang code, you can use a simple text editor, such as Notepad in Microsoft Windows, [vi](https://manpages.ubuntu.com/manpages/xenial/man1/nvi.1.html#contenttoc5) or [Nano](https://www.nano-editor.org/) in Ubuntu, or TextEdit in macOS. If you prefer a richer Interactive Development Environment (IDE), try [Gogland](https://www.jetbrains.com/go/) by Jetbrains, [Visual Studio Code](https://code.visualstudio.com/) by Microsoft, or [Atom](https://atom.io/). -2. Paste the Go code from the sections below into text files, and then save them into your project folder with file extension \*.go (such as Windows path `%USERPROFILE%\go\src\mysqlgo\createtable.go` or Linux path `~/go/src/mysqlgo/createtable.go`). -3. Locate the `HOST`, `DATABASE`, `USER`, and `PASSWORD` constants in the code, and then replace the example values with your own values. -4. Launch the command prompt or Bash shell. Change directory into your project folder. For example, on Windows `cd %USERPROFILE%\go\src\mysqlgo\`. On Linux `cd ~/go/src/mysqlgo/`. Some of the IDE editors mentioned offer debug and runtime capabilities without requiring shell commands. -5. Run the code by typing the command `go run createtable.go` to compile the application and run it. -6. Alternatively, to build the code into a native application, `go build createtable.go`, then launch `createtable.exe` to run the application. --## Connect, create table, and insert data --Use the following code to connect to the server, create a table, and load the data by using an **INSERT** SQL statement. 
--The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line. --The code calls the [sql.Open()](http://go-database-sql.org/accessing.html) method to connect to Azure Database for MySQL, and it checks the connection by using the [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping) method. A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method several times to run several DDL commands. The code also uses [Prepare()](http://go-database-sql.org/prepared.html) and Exec() to run prepared statements with different parameters to insert three rows. Each time, a custom checkError() function is used to check if an error occurred and panic to exit. --Replace the `host`, `database`, `user`, and `password` constants with your own values. --```Go -package main --import ( - "database/sql" - "fmt" -- _ "github.com/go-sql-driver/mysql" -) --const ( - host = "mydemoserver.mysql.database.azure.com" - database = "quickstartdb" - user = "myadmin@mydemoserver" - password = "yourpassword" -) --func checkError(err error) { - if err != nil { - panic(err) - } -} --func main() { -- // Initialize connection string. - var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database) -- // Initialize connection object. - db, err := sql.Open("mysql", connectionString) - checkError(err) - defer db.Close() -- err = db.Ping() - checkError(err) - fmt.Println("Successfully created connection to database.") -- // Drop previous table of same name if one exists. - _, err = db.Exec("DROP TABLE IF EXISTS inventory;") - checkError(err) - fmt.Println("Finished dropping table (if existed).") -- // Create table. - _, err = db.Exec("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);") - checkError(err) - fmt.Println("Finished creating table.") -- // Insert some data into table. - sqlStatement, err := db.Prepare("INSERT INTO inventory (name, quantity) VALUES (?, ?);") - checkError(err) - res, err := sqlStatement.Exec("banana", 150) - checkError(err) - rowCount, err := res.RowsAffected() - fmt.Printf("Inserted %d row(s) of data.\n", rowCount) -- res, err = sqlStatement.Exec("orange", 154) - checkError(err) - rowCount, err = res.RowsAffected() - fmt.Printf("Inserted %d row(s) of data.\n", rowCount) -- res, err = sqlStatement.Exec("apple", 100) - checkError(err) - rowCount, err = res.RowsAffected() - fmt.Printf("Inserted %d row(s) of data.\n", rowCount) - fmt.Println("Done.") -} --``` --## Read data --Use the following code to connect and read the data by using a **SELECT** SQL statement. --The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line. --The code calls the [sql.Open()](http://go-database-sql.org/accessing.html) method to connect to Azure Database for MySQL, and checks the connection by using the [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping) method.
A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Query()](https://go.dev/pkg/database/sql/#DB.Query) method to run the select command. Then it runs [Next()](https://go.dev/pkg/database/sql/#Rows.Next) to iterate through the result set and [Scan()](https://go.dev/pkg/database/sql/#Rows.Scan) to parse the column values, saving the values into variables. Each time, a custom checkError() function is used to check if an error occurred and panic to exit. --Replace the `host`, `database`, `user`, and `password` constants with your own values. --```Go -package main --import ( - "database/sql" - "fmt" -- _ "github.com/go-sql-driver/mysql" -) --const ( - host = "mydemoserver.mysql.database.azure.com" - database = "quickstartdb" - user = "myadmin@mydemoserver" - password = "yourpassword" -) --func checkError(err error) { - if err != nil { - panic(err) - } -} --func main() { -- // Initialize connection string. - var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database) -- // Initialize connection object. - db, err := sql.Open("mysql", connectionString) - checkError(err) - defer db.Close() -- err = db.Ping() - checkError(err) - fmt.Println("Successfully created connection to database.") -- // Variables for printing column data when scanned. - var ( - id int - name string - quantity int - ) -- // Read some data from the table. - rows, err := db.Query("SELECT id, name, quantity from inventory;") - checkError(err) - defer rows.Close() - fmt.Println("Reading data:") - for rows.Next() { - err := rows.Scan(&id, &name, &quantity) - checkError(err) - fmt.Printf("Data row = (%d, %s, %d)\n", id, name, quantity) - } - err = rows.Err() - checkError(err) - fmt.Println("Done.") -} -``` --## Update data --Use the following code to connect and update the data by using an **UPDATE** SQL statement. --The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line. --The code calls the [sql.Open()](http://go-database-sql.org/accessing.html) method to connect to Azure Database for MySQL, and checks the connection by using the [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping) method. A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method to run the update command. Each time, a custom checkError() function is used to check if an error occurred and panic to exit. --Replace the `host`, `database`, `user`, and `password` constants with your own values. --```Go -package main --import ( - "database/sql" - "fmt" -- _ "github.com/go-sql-driver/mysql" -) --const ( - host = "mydemoserver.mysql.database.azure.com" - database = "quickstartdb" - user = "myadmin@mydemoserver" - password = "yourpassword" -) --func checkError(err error) { - if err != nil { - panic(err) - } -} --func main() { -- // Initialize connection string. - var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database) -- // Initialize connection object.
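- // Note: sql.Open only validates its arguments; it doesn't open a network connection. - // The db.Ping() call below is what actually reaches the server and verifies the credentials.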
- db, err := sql.Open("mysql", connectionString) - checkError(err) - defer db.Close() -- err = db.Ping() - checkError(err) - fmt.Println("Successfully created connection to database.") -- // Modify some data in table. - res, err := db.Exec("UPDATE inventory SET quantity = ? WHERE name = ?", 200, "banana") - checkError(err) - rowCount, err := res.RowsAffected() - checkError(err) - fmt.Printf("Updated %d row(s) of data.\n", rowCount) - fmt.Println("Done.") -} -``` --## Delete data --Use the following code to connect and remove data by using a **DELETE** SQL statement. --The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line. --The code calls the [sql.Open()](http://go-database-sql.org/accessing.html) method to connect to Azure Database for MySQL, and checks the connection by using the [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping) method. A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method to run the delete command. Each time, a custom checkError() function is used to check if an error occurred and panic to exit. --Replace the `host`, `database`, `user`, and `password` constants with your own values. --```Go -package main --import ( - "database/sql" - "fmt" - _ "github.com/go-sql-driver/mysql" -) --const ( - host = "mydemoserver.mysql.database.azure.com" - database = "quickstartdb" - user = "myadmin@mydemoserver" - password = "yourpassword" -) --func checkError(err error) { - if err != nil { - panic(err) - } -} --func main() { -- // Initialize connection string. - var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database) -- // Initialize connection object. - db, err := sql.Open("mysql", connectionString) - checkError(err) - defer db.Close() -- err = db.Ping() - checkError(err) - fmt.Println("Successfully created connection to database.") -- // Delete some data from table. - res, err := db.Exec("DELETE FROM inventory WHERE name = ?", "orange") - checkError(err) - rowCount, err := res.RowsAffected() - checkError(err) - fmt.Printf("Deleted %d row(s) of data.\n", rowCount) - fmt.Println("Done.") -} -``` --## Clean up resources --To clean up all resources used during this quickstart, delete the resource group using the following command: --```azurecli-interactive -az group delete \ - --name $AZ_RESOURCE_GROUP \ - --yes -``` --## Next steps --> [!div class="nextstepaction"] -> [Migrate your database using Export and Import](../flexible-server/concepts-migrate-import-export.md) |
mysql | Connect Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-java.md | - Title: 'Quickstart: Use Java and JDBC with Azure Database for MySQL' -description: Learn how to use Java and JDBC with an Azure Database for MySQL database. ------ Previously updated : 05/03/2023---# Quickstart: Use Java and JDBC with Azure Database for MySQL ----This article demonstrates creating a sample application that uses Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to store and retrieve information in [Azure Database for MySQL](./index.yml). --JDBC is the standard Java API to connect to traditional relational databases. --In this article, we'll include two authentication methods: Microsoft Entra authentication and MySQL authentication. The **Passwordless** tab shows the Microsoft Entra authentication and the **Password** tab shows the MySQL authentication. --Microsoft Entra authentication is a mechanism for connecting to Azure Database for MySQL using identities defined in Microsoft Entra ID. With Microsoft Entra authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management. --MySQL authentication uses accounts stored in MySQL. If you choose to use passwords as credentials for the accounts, these credentials will be stored in the `user` table. Because these passwords are stored in MySQL, you'll need to manage the rotation of the passwords by yourself. --## Prerequisites --- An Azure account. If you don't have one, [get a free trial](https://azure.microsoft.com/free/).-- [Azure Cloud Shell](../../cloud-shell/quickstart.md) or [Azure CLI](/cli/azure/install-azure-cli). We recommend Azure Cloud Shell so you'll be logged in automatically and have access to all the tools you'll need.-- A supported [Java Development Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8 (included in Azure Cloud Shell).-- The [Apache Maven](https://maven.apache.org/) build tool.-- MySQL command line client. You can connect to your server using the [mysql.exe](https://dev.mysql.com/downloads/) command-line tool with Azure Cloud Shell. Alternatively, you can use the `mysql` command line in your local environment.--## Prepare the working environment --First, set up some environment variables. In [Azure Cloud Shell](https://shell.azure.com/), run the following commands: --### [Passwordless (Recommended)](#tab/passwordless) --```bash -export AZ_RESOURCE_GROUP=database-workshop -export AZ_DATABASE_SERVER_NAME=<YOUR_DATABASE_SERVER_NAME> -export AZ_DATABASE_NAME=demo -export AZ_LOCATION=<YOUR_AZURE_REGION> -export AZ_MYSQL_AD_NON_ADMIN_USERNAME=demo-non-admin -export AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS> -export CURRENT_USERNAME=$(az ad signed-in-user show --query userPrincipalName -o tsv) -export CURRENT_USER_OBJECTID=$(az ad signed-in-user show --query id -o tsv) -``` --Replace the placeholders with the following values, which are used throughout this article: --- `<YOUR_DATABASE_SERVER_NAME>`: The name of your MySQL server, which should be unique across Azure.-- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can see the full list of available regions by entering `az account list-locations`.-- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your application. 
One convenient way to find it is to open [whatismyip.akamai.com](http://whatismyip.akamai.com/).--### [Password](#tab/password) --```bash -export AZ_RESOURCE_GROUP=database-workshop -export AZ_DATABASE_SERVER_NAME=<YOUR_DATABASE_SERVER_NAME> -export AZ_DATABASE_NAME=demo -export AZ_LOCATION=<YOUR_AZURE_REGION> -export AZ_MYSQL_ADMIN_USERNAME=demo -export AZ_MYSQL_ADMIN_PASSWORD=<YOUR_MYSQL_ADMIN_PASSWORD> -export AZ_MYSQL_NON_ADMIN_USERNAME=demo-non-admin -export AZ_MYSQL_NON_ADMIN_PASSWORD=<YOUR_MYSQL_NON_ADMIN_PASSWORD> -export AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS> -``` --Replace the placeholders with the following values, which are used throughout this article: --- `<YOUR_DATABASE_SERVER_NAME>`: The name of your MySQL server, which should be unique across Azure.-- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can see the full list of available regions by entering `az account list-locations`.-- `<YOUR_MYSQL_ADMIN_PASSWORD>` and `<YOUR_MYSQL_NON_ADMIN_PASSWORD>`: The password of your MySQL database server. That password should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).-- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Java application. One convenient way to find it is to open [whatismyip.akamai.com](http://whatismyip.akamai.com/).----Next, create a resource group by using the following command: --```azurecli-interactive -az group create \ - --name $AZ_RESOURCE_GROUP \ - --location $AZ_LOCATION \ - --output tsv -``` --## Create an Azure Database for MySQL instance --### Create a MySQL server and set up admin user --The first thing you'll create is a managed MySQL server. --> [!NOTE] -> You can read more detailed information about creating MySQL servers in [Quickstart: Create an Azure Database for MySQL server by using the Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md). --#### [Passwordless (Recommended)](#tab/passwordless) --If you're using Azure CLI, run the following command to make sure it has sufficient permission: --```azurecli-interactive -az login --scope https://graph.microsoft.com/.default -``` --Then, run the following command to create the server: --```azurecli-interactive -az mysql server create \ - --resource-group $AZ_RESOURCE_GROUP \ - --name $AZ_DATABASE_SERVER_NAME \ - --location $AZ_LOCATION \ - --sku-name B_Gen5_1 \ - --storage-size 5120 \ - --output tsv -``` --Next, run the following command to set the Microsoft Entra admin user: --```azurecli-interactive -az mysql server ad-admin create \ - --resource-group $AZ_RESOURCE_GROUP \ - --server-name $AZ_DATABASE_SERVER_NAME \ - --display-name $CURRENT_USERNAME \ - --object-id $CURRENT_USER_OBJECTID -``` --> [!IMPORTANT] -> When setting the administrator, a new user is added to the Azure Database for MySQL server with full administrator permissions. You can only create one Microsoft Entra admin per MySQL server. Selection of another user will overwrite the existing Microsoft Entra admin configured for the server. --This command creates a small MySQL server and sets the Microsoft Entra admin to the signed-in user.
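--If you want to double-check the assignment, one option is to list the server's Microsoft Entra administrators. The following is a quick verification sketch that assumes the environment variables set earlier in this article: --```azurecli-interactive -az mysql server ad-admin list \ - --resource-group $AZ_RESOURCE_GROUP \ - --server-name $AZ_DATABASE_SERVER_NAME \ - --output table -```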
--#### [Password](#tab/password) --```azurecli-interactive -az mysql server create \ - --resource-group $AZ_RESOURCE_GROUP \ - --name $AZ_DATABASE_SERVER_NAME \ - --location $AZ_LOCATION \ - --sku-name B_Gen5_1 \ - --storage-size 5120 \ - --admin-user $AZ_MYSQL_ADMIN_USERNAME \ - --admin-password $AZ_MYSQL_ADMIN_PASSWORD \ - --output tsv -``` --This command creates a small MySQL server. ----### Configure a firewall rule for your MySQL server --Azure Database for MySQL instances are secured by default. These instances have a firewall that doesn't allow any incoming connection. To be able to use your database, you need to add a firewall rule that allows your local IP address to access the database server. --Because you configured your local IP address at the beginning of this article, you can open the server's firewall by running the following command: --```azurecli-interactive -az mysql server firewall-rule create \ - --resource-group $AZ_RESOURCE_GROUP \ - --name $AZ_DATABASE_SERVER_NAME-database-allow-local-ip \ - --server $AZ_DATABASE_SERVER_NAME \ - --start-ip-address $AZ_LOCAL_IP_ADDRESS \ - --end-ip-address $AZ_LOCAL_IP_ADDRESS \ - --output tsv -``` --If you're connecting to your MySQL server from Windows Subsystem for Linux (WSL) on a Windows computer, you'll need to add the WSL host IP address to your firewall. --Obtain the IP address of your host machine by running the following command in WSL: --```bash -cat /etc/resolv.conf -``` --Copy the IP address following the term `nameserver`, then use the following command to set an environment variable for the WSL IP Address: --```bash -AZ_WSL_IP_ADDRESS=<the-copied-IP-address> -``` --Then, use the following command to open the server's firewall to your WSL-based app: --```azurecli-interactive -az mysql server firewall-rule create \ - --resource-group $AZ_RESOURCE_GROUP \ - --name $AZ_DATABASE_SERVER_NAME-database-allow-local-ip-wsl \ - --server $AZ_DATABASE_SERVER_NAME \ - --start-ip-address $AZ_WSL_IP_ADDRESS \ - --end-ip-address $AZ_WSL_IP_ADDRESS \ - --output tsv -``` --### Configure a MySQL database --The MySQL server that you created earlier is empty. Use the following command to create a new database. --```azurecli-interactive -az mysql db create \ - --resource-group $AZ_RESOURCE_GROUP \ - --name $AZ_DATABASE_NAME \ - --server-name $AZ_DATABASE_SERVER_NAME \ - --output tsv -``` --### Create a MySQL non-admin user and grant permission --Next, create a non-admin user and grant all permissions to the database. --> [!NOTE] -> You can read more detailed information about creating MySQL users in [Create users in Azure Database for MySQL](./how-to-create-users.md). --#### [Passwordless (Recommended)](#tab/passwordless) --Create a SQL script called *create_ad_user.sql* for creating a non-admin user.
Add the following contents and save it locally: --```bash -export AZ_MYSQL_AD_NON_ADMIN_USERID=$CURRENT_USER_OBJECTID --cat << EOF > create_ad_user.sql -SET aad_auth_validate_oids_in_tenant = OFF; --CREATE AADUSER '$AZ_MYSQL_AD_NON_ADMIN_USERNAME' IDENTIFIED BY '$AZ_MYSQL_AD_NON_ADMIN_USERID'; --GRANT ALL PRIVILEGES ON $AZ_DATABASE_NAME.* TO '$AZ_MYSQL_AD_NON_ADMIN_USERNAME'@'%'; --FLUSH PRIVILEGES; --EOF -``` --Then, use the following command to run the SQL script to create the Microsoft Entra non-admin user: --```bash -mysql -h $AZ_DATABASE_SERVER_NAME.mysql.database.azure.com --user $CURRENT_USERNAME@$AZ_DATABASE_SERVER_NAME --enable-cleartext-plugin --password=$(az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken) < create_ad_user.sql -``` --Now use the following command to remove the temporary SQL script file: --```bash -rm create_ad_user.sql -``` --#### [Password](#tab/password) --Create a SQL script called *create_user.sql* for creating a non-admin user. Add the following contents and save it locally: --```bash -cat << EOF > create_user.sql --CREATE USER '$AZ_MYSQL_NON_ADMIN_USERNAME'@'%' IDENTIFIED BY '$AZ_MYSQL_NON_ADMIN_PASSWORD'; --GRANT ALL PRIVILEGES ON $AZ_DATABASE_NAME.* TO '$AZ_MYSQL_NON_ADMIN_USERNAME'@'%'; --FLUSH PRIVILEGES; --EOF -``` --Then, use the following command to run the SQL script to create the non-admin user: --```bash -mysql -h $AZ_DATABASE_SERVER_NAME.mysql.database.azure.com --user $AZ_MYSQL_ADMIN_USERNAME@$AZ_DATABASE_SERVER_NAME --enable-cleartext-plugin --password=$AZ_MYSQL_ADMIN_PASSWORD < create_user.sql -``` --Now use the following command to remove the temporary SQL script file: --```bash -rm create_user.sql -``` ----### Create a new Java project --Using your favorite IDE, create a new Java project using Java 8 or above.
Create a *pom.xml* file in its root directory and add the following contents: --#### [Passwordless (Recommended)](#tab/passwordless) --```xml -<?xml version="1.0" encoding="UTF-8"?> -<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" - xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> - <modelVersion>4.0.0</modelVersion> - <groupId>com.example</groupId> - <artifactId>demo</artifactId> - <version>0.0.1-SNAPSHOT</version> - <name>demo</name> -- <properties> - <java.version>1.8</java.version> - <maven.compiler.source>1.8</maven.compiler.source> - <maven.compiler.target>1.8</maven.compiler.target> - </properties> -- <dependencies> - <dependency> - <groupId>mysql</groupId> - <artifactId>mysql-connector-java</artifactId> - <version>8.0.30</version> - </dependency> - <dependency> - <groupId>com.azure</groupId> - <artifactId>azure-identity-extensions</artifactId> - <version>1.0.0</version> - </dependency> - </dependencies> -</project> -``` --#### [Password](#tab/password) --```xml -<?xml version="1.0" encoding="UTF-8"?> -<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" - xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> - <modelVersion>4.0.0</modelVersion> - <groupId>com.example</groupId> - <artifactId>demo</artifactId> - <version>0.0.1-SNAPSHOT</version> - <name>demo</name> -- <properties> - <java.version>1.8</java.version> - <maven.compiler.source>1.8</maven.compiler.source> - <maven.compiler.target>1.8</maven.compiler.target> - </properties> -- <dependencies> - <dependency> - <groupId>mysql</groupId> - <artifactId>mysql-connector-java</artifactId> - <version>8.0.30</version> - </dependency> - </dependencies> -</project> -``` ----This file is an [Apache Maven](https://maven.apache.org/) file that configures your project to use Java 8 and a recent MySQL driver for Java. --### Prepare a configuration file to connect to Azure Database for MySQL --Run the following script in the project root directory to create a *src/main/resources/database.properties* file and add configuration details: --#### [Passwordless (Recommended)](#tab/passwordless) --```bash -mkdir -p src/main/resources && touch src/main/resources/database.properties --cat << EOF > src/main/resources/database.properties -url=jdbc:mysql://${AZ_DATABASE_SERVER_NAME}.mysql.database.azure.com:3306/${AZ_DATABASE_NAME}?sslMode=REQUIRED&serverTimezone=UTC&defaultAuthenticationPlugin=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin&authenticationPlugins=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin -user=${AZ_MYSQL_AD_NON_ADMIN_USERNAME}@${AZ_DATABASE_SERVER_NAME} -EOF -``` --> [!NOTE] -> If you are using MysqlConnectionPoolDataSource class as the datasource in your application, please remove "defaultAuthenticationPlugin=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin" in the url. 
--In that case, use the following script instead, which creates a *database.properties* file whose `url` omits that plugin setting: --```bash -mkdir -p src/main/resources && touch src/main/resources/database.properties --cat << EOF > src/main/resources/database.properties -url=jdbc:mysql://${AZ_DATABASE_SERVER_NAME}.mysql.database.azure.com:3306/${AZ_DATABASE_NAME}?sslMode=REQUIRED&serverTimezone=UTC&authenticationPlugins=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin -user=${AZ_MYSQL_AD_NON_ADMIN_USERNAME}@${AZ_DATABASE_SERVER_NAME} -EOF -``` --#### [Password](#tab/password) --```bash -mkdir -p src/main/resources && touch src/main/resources/database.properties --cat << EOF > src/main/resources/database.properties -url=jdbc:mysql://${AZ_DATABASE_SERVER_NAME}.mysql.database.azure.com:3306/${AZ_DATABASE_NAME}?useSSL=true&sslMode=REQUIRED&serverTimezone=UTC -user=${AZ_MYSQL_NON_ADMIN_USERNAME}@${AZ_DATABASE_SERVER_NAME} -password=${AZ_MYSQL_NON_ADMIN_PASSWORD} -EOF -``` ----> [!NOTE] -> The configuration property `url` has `?serverTimezone=UTC` appended to tell the JDBC driver to use UTC (Coordinated Universal Time) as the time zone when connecting to the database. Otherwise, your Java server would not use the same time zone as the database, which would result in an error. --### Create an SQL file to generate the database schema --Next, you'll use a *src/main/resources/schema.sql* file to create a database schema. Create that file, then add the following contents: --```bash -touch src/main/resources/schema.sql --cat << EOF > src/main/resources/schema.sql -DROP TABLE IF EXISTS todo; -CREATE TABLE todo (id SERIAL PRIMARY KEY, description VARCHAR(255), details VARCHAR(4096), done BOOLEAN); -EOF -``` --## Code the application --### Connect to the database --Next, add the Java code that will use JDBC to store and retrieve data from your MySQL server. --Create a *src/main/java/DemoApplication.java* file and add the following contents: --```java -package com.example.demo; --import com.mysql.cj.jdbc.AbandonedConnectionCleanupThread; --import java.sql.*; -import java.util.*; -import java.util.logging.Logger; --public class DemoApplication { -- private static final Logger log; -- static { - System.setProperty("java.util.logging.SimpleFormatter.format", "[%4$-7s] %5$s %n"); - log = Logger.getLogger(DemoApplication.class.getName()); - } -- public static void main(String[] args) throws Exception { - log.info("Loading application properties"); - Properties properties = new Properties(); - properties.load(DemoApplication.class.getClassLoader().getResourceAsStream("database.properties")); -- log.info("Connecting to the database"); - Connection connection = DriverManager.getConnection(properties.getProperty("url"), properties); - log.info("Database connection test: " + connection.getCatalog()); -- log.info("Create database schema"); - Scanner scanner = new Scanner(DemoApplication.class.getClassLoader().getResourceAsStream("schema.sql")); - Statement statement = connection.createStatement(); - while (scanner.hasNextLine()) { - statement.execute(scanner.nextLine()); - } -- /* Prepare to store and retrieve data from the MySQL server.
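- (The calls below stay commented out for now; each of the following sections implements one of these methods, and you uncomment the corresponding call as you go.)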
- Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true); - insertData(todo, connection); - todo = readData(connection); - todo.setDetails("congratulations, you have updated data!"); - updateData(todo, connection); - deleteData(todo, connection); - */ -- log.info("Closing database connection"); - connection.close(); - AbandonedConnectionCleanupThread.uncheckedShutdown(); - } -} -``` --This Java code will use the *database.properties* and the *schema.sql* files that you created earlier. After connecting to the MySQL server, you can create a schema to store your data. --In this file, you can see that we commented out the calls to the methods that insert, read, update, and delete data. You'll implement those methods in the rest of this article, and you'll be able to uncomment the calls one after the other. --> [!NOTE] -> The database credentials are stored in the *user* and *password* properties of the *database.properties* file. Those credentials are used when executing `DriverManager.getConnection(properties.getProperty("url"), properties);`, as the properties file is passed as an argument. --> [!NOTE] -> The `AbandonedConnectionCleanupThread.uncheckedShutdown();` line at the end is a MySQL driver command to destroy an internal thread when shutting down the application. You can safely ignore this line. --You can now execute this main class with your favorite tool: --- Using your IDE, you should be able to right-click on the *DemoApplication* class and execute it.-- Using Maven, you can run the application with the following command: `mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication"`.--The application should connect to the Azure Database for MySQL, create a database schema, and then close the connection. You should see output similar to the following example in the console logs: --```output -[INFO ] Loading application properties -[INFO ] Connecting to the database -[INFO ] Database connection test: demo -[INFO ] Create database schema -[INFO ] Closing database connection -``` --### Create a domain class --Create a new `Todo` Java class, next to the `DemoApplication` class, and add the following code: --```java -package com.example.demo; --public class Todo { -- private Long id; - private String description; - private String details; - private boolean done; -- public Todo() { - } -- public Todo(Long id, String description, String details, boolean done) { - this.id = id; - this.description = description; - this.details = details; - this.done = done; - } -- public Long getId() { - return id; - } -- public void setId(Long id) { - this.id = id; - } -- public String getDescription() { - return description; - } -- public void setDescription(String description) { - this.description = description; - } -- public String getDetails() { - return details; - } -- public void setDetails(String details) { - this.details = details; - } -- public boolean isDone() { - return done; - } -- public void setDone(boolean done) { - this.done = done; - } -- @Override - public String toString() { - return "Todo{" + - "id=" + id + - ", description='" + description + '\'' + - ", details='" + details + '\'' + - ", done=" + done + - '}'; - } -} -``` --This class is a domain model mapped to the `todo` table that you created when executing the *schema.sql* script.
--### Insert data into Azure Database for MySQL --In the *src/main/java/DemoApplication.java* file, after the main method, add the following method to insert data into the database: --```java -private static void insertData(Todo todo, Connection connection) throws SQLException {
 - log.info("Insert data"); - PreparedStatement insertStatement = connection - .prepareStatement("INSERT INTO todo (id, description, details, done) VALUES (?, ?, ?, ?);"); -- insertStatement.setLong(1, todo.getId()); - insertStatement.setString(2, todo.getDescription()); - insertStatement.setString(3, todo.getDetails()); - insertStatement.setBoolean(4, todo.isDone()); - insertStatement.executeUpdate(); -} -``` --You can now uncomment the two following lines in the `main` method: --```java -Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true); -insertData(todo, connection); -``` --Executing the main class should now produce the following output: --```output -[INFO ] Loading application properties -[INFO ] Connecting to the database -[INFO ] Database connection test: demo -[INFO ] Create database schema -[INFO ] Insert data -[INFO ] Closing database connection -``` --### Read data from Azure Database for MySQL --Next, read the data previously inserted to validate that your code works correctly. --In the *src/main/java/DemoApplication.java* file, after the `insertData` method, add the following method to read data from the database: --```java -private static Todo readData(Connection connection) throws SQLException { - log.info("Read data"); - PreparedStatement readStatement = connection.prepareStatement("SELECT * FROM todo;"); - ResultSet resultSet = readStatement.executeQuery(); - if (!resultSet.next()) { - log.info("There is no data in the database!"); - return null; - } - Todo todo = new Todo(); - todo.setId(resultSet.getLong("id")); - todo.setDescription(resultSet.getString("description")); - todo.setDetails(resultSet.getString("details")); - todo.setDone(resultSet.getBoolean("done")); - log.info("Data read from the database: " + todo.toString()); - return todo; -} -``` --You can now uncomment the following line in the `main` method: --```java -todo = readData(connection); -``` --Executing the main class should now produce the following output: --```output -[INFO ] Loading application properties -[INFO ] Connecting to the database -[INFO ] Database connection test: demo -[INFO ] Create database schema -[INFO ] Insert data -[INFO ] Read data -[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true} -[INFO ] Closing database connection -``` --### Update data in Azure Database for MySQL --Next, update the data you previously inserted. --Still in the *src/main/java/DemoApplication.java* file, after the `readData` method, add the following method to update data inside the database: --```java -private static void updateData(Todo todo, Connection connection) throws SQLException { - log.info("Update data"); - PreparedStatement updateStatement = connection - .prepareStatement("UPDATE todo SET description = ?, details = ?, done = ? WHERE id = ?;");
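- // The 1-based parameter indexes in the set calls below correspond, in order, to the ? placeholders in the UPDATE statement above.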
WHERE id = ?;"); -- updateStatement.setString(1, todo.getDescription()); - updateStatement.setString(2, todo.getDetails()); - updateStatement.setBoolean(3, todo.isDone()); - updateStatement.setLong(4, todo.getId()); - updateStatement.executeUpdate(); - readData(connection); -} -``` --You can now uncomment the two following lines in the `main` method: --```java -todo.setDetails("congratulations, you have updated data!"); -updateData(todo, connection); -``` --Executing the main class should now produce the following output: --```output -[INFO ] Loading application properties -[INFO ] Connecting to the database -[INFO ] Database connection test: demo -[INFO ] Create database schema -[INFO ] Insert data -[INFO ] Read data -[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true} -[INFO ] Update data -[INFO ] Read data -[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true} -[INFO ] Closing database connection -``` --### Deleting data in Azure Database for MySQL --Finally, delete the data you previously inserted. --Still in the *src/main/java/DemoApplication.java* file, after the `updateData` method, add the following method to delete data inside the database: --```java -private static void deleteData(Todo todo, Connection connection) throws SQLException { - log.info("Delete data"); - PreparedStatement deleteStatement = connection.prepareStatement("DELETE FROM todo WHERE id = ?;"); - deleteStatement.setLong(1, todo.getId()); - deleteStatement.executeUpdate(); - readData(connection); -} -``` --You can now uncomment the following line in the `main` method: --```java -deleteData(todo, connection); -``` --Executing the main class should now produce the following output: --```output -[INFO ] Loading application properties -[INFO ] Connecting to the database -[INFO ] Database connection test: demo -[INFO ] Create database schema -[INFO ] Insert data -[INFO ] Read data -[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true} -[INFO ] Update data -[INFO ] Read data -[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true} -[INFO ] Delete data -[INFO ] Read data -[INFO ] There is no data in the database! -[INFO ] Closing database connection -``` --## Clean up resources --Congratulations! You've created a Java application that uses JDBC to store and retrieve data from Azure Database for MySQL. --To clean up all resources used during this quickstart, delete the resource group using the following command: --```azurecli-interactive -az group delete \ - --name $AZ_RESOURCE_GROUP \ - --yes -``` --## Next steps --> [!div class="nextstepaction"] -> [Migrate your MySQL database to Azure Database for MySQL using dump and restore](concepts-migrate-dump-restore.md) |
mysql | Connect Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-nodejs.md | - Title: 'Quickstart: Connect using Node.js - Azure Database for MySQL' -description: This quickstart provides several Node.js code samples you can use to connect and query data from Azure Database for MySQL. ------ Previously updated : 05/03/2023---# Quickstart: Use Node.js to connect and query data in Azure Database for MySQL --> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md). ----In this quickstart, you connect to an Azure Database for MySQL by using Node.js. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Linux, and Windows platforms. --This article assumes that you're familiar with developing using Node.js, but you're new to working with Azure Database for MySQL. --## Prerequisites --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).-- An Azure Database for MySQL server. [Create an Azure Database for MySQL server using Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md) or [Create an Azure Database for MySQL server using Azure CLI](quickstart-create-mysql-server-database-using-azure-cli.md).--> [!IMPORTANT] -> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md) --## Install Node.js and the MySQL connector --Depending on your platform, follow the instructions in the appropriate section to install [Node.js](https://nodejs.org). Use npm to install the [mysql2](https://www.npmjs.com/package/mysql2) package and its dependencies into your project folder. --### [Windows](#tab/windows) --1. Visit the [Node.js downloads page](https://nodejs.org/en/download/), and then select your desired Windows installer option. -2. Make a local project folder such as `nodejsmysql`. -3. Open the command prompt, and then change directory into the project folder, such as `cd c:\nodejsmysql\` -4. Run the NPM tool to install the mysql2 library into the project folder. -- ```cmd - cd c:\nodejsmysql\ - "C:\Program Files\nodejs\npm" install mysql2 - "C:\Program Files\nodejs\npm" list - ``` --5. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released. --### [Linux (Ubuntu/Debian)](#tab/ubuntu) --1. Run the following commands to install **Node.js** and **npm**, the package manager for Node.js. -- ```bash - # Using Ubuntu - sudo curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash - - sudo apt-get install -y nodejs -- # Using Debian - sudo curl -sL https://deb.nodesource.com/setup_14.x | bash - - sudo apt-get install -y nodejs - ``` --2. Run the following commands to create a project folder `nodejsmysql` and install the mysql2 package into that folder. -- ```bash - mkdir nodejsmysql - cd nodejsmysql - npm install --save mysql2 - npm list - ``` --3. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released. --### [Linux (RHEL/CentOS)](#tab/rhel) --1. 
Run the following commands to install **Node.js** and **npm**, the package manager for Node.js. -- **RHEL/CentOS 7.x** -- ```bash - sudo yum install -y rh-nodejs8 - scl enable rh-nodejs8 bash - ``` -- **RHEL/CentOS 8.x** -- ```bash - sudo yum install -y nodejs - ``` --1. Run the following commands to create a project folder `nodejsmysql` and install the mysql2 package into that folder. -- ```bash - mkdir nodejsmysql - cd nodejsmysql - npm install --save mysql2 - npm list - ``` --1. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released. --### [Linux (SUSE)](#tab/sles) --1. Run the following commands to install **Node.js** and **npm**, the package manager for Node.js. -- ```bash - sudo zypper install nodejs - ``` --1. Run the following commands to create a project folder `nodejsmysql` and install the mysql2 package into that folder. -- ```bash - mkdir nodejsmysql - cd nodejsmysql - npm install --save mysql2 - npm list - ``` --1. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released. --### [macOS](#tab/macos) --1. Visit the [Node.js downloads page](https://nodejs.org/en/download/), and then select your macOS installer. --2. Run the following commands to create a project folder `nodejsmysql` and install the mysql2 package into that folder. -- ```bash - mkdir nodejsmysql - cd nodejsmysql - npm install --save mysql2 - npm list - ``` --3. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released. ----## Get connection information --Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials. --1. Log in to the [Azure portal](https://portal.azure.com/). -2. From the left-hand menu in Azure portal, select **All resources**, and then search for the server you have created (such as **mydemoserver**). -3. Select the server name. -4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel. - :::image type="content" source="./media/connect-nodejs/server-name-azure-database-mysql.png" alt-text="Azure Database for MySQL server name"::: --## Running the code samples --1. Paste the JavaScript code into new text files, and then save the files into your project folder with a .js file extension (such as C:\nodejsmysql\createtable.js or /home/username/nodejsmysql/createtable.js). -1. Replace the `host`, `user`, `password`, and `database` config options in the code with the values that you specified when you created the server and database. -1. **Obtain SSL certificate**: Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem) and save the certificate file to your local drive. -- **For Microsoft Internet Explorer and Microsoft Edge:** After the download has completed, rename the certificate to DigiCertGlobalRootCA.crt.pem. -- See the following links for certificates for servers in sovereign clouds: [Azure Government](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem), [Microsoft Azure operated by 21Vianet](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt). -1. 
In the `ssl` config option, replace the `ca-cert` filename with the path to this local file. -1. Open the command prompt or bash shell, and then change directory into your project folder `cd nodejsmysql`. -1. To run the application, enter the node command followed by the file name, such as `node createtable.js`. -1. On Windows, if the node application isn't in your environment variable path, you may need to use the full path to launch the node application, such as `"C:\Program Files\nodejs\node.exe" createtable.js` --## Connect, create table, and insert data --Use the following code to connect and load the data by using **CREATE TABLE** and **INSERT INTO** SQL statements. --The [mysql.createConnection()](https://github.com/sidorares/node-mysql2#first-query) method is used to interface with the MySQL server. The [connect()](https://github.com/sidorares/node-mysql2#first-query) function is used to establish the connection to the server. The [query()](https://github.com/sidorares/node-mysql2#first-query) function is used to execute the SQL query against MySQL database. --```javascript -const mysql = require('mysql2'); -const fs = require('fs'); --var config = -{ - host: 'mydemoserver.mysql.database.azure.com', - user: 'myadmin@mydemoserver', - password: 'your_password', - database: 'quickstartdb', - port: 3306, - ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_DigiCertGlobalRootCA.crt.pem")} -}; --const conn = new mysql.createConnection(config); --conn.connect( - function (err) { - if (err) { - console.log("!!! Cannot connect !!! Error:"); - throw err; - } - else - { - console.log("Connection established."); - queryDatabase(); - } -}); --function queryDatabase(){ - conn.query('DROP TABLE IF EXISTS inventory;', function (err, results, fields) { - if (err) throw err; - console.log('Dropped inventory table if existed.'); - }) - conn.query('CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);', - function (err, results, fields) { - if (err) throw err; - console.log('Created inventory table.'); - }) - conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['banana', 150], - function (err, results, fields) { - if (err) throw err; - else console.log('Inserted ' + results.affectedRows + ' row(s).'); - }) - conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['orange', 154], - function (err, results, fields) { - if (err) throw err; - console.log('Inserted ' + results.affectedRows + ' row(s).'); - }) - conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['apple', 100], - function (err, results, fields) { - if (err) throw err; - console.log('Inserted ' + results.affectedRows + ' row(s).'); - }) - conn.end(function (err) { - if (err) throw err; - else console.log('Done.') - }); -}; -``` --## Read data --Use the following code to connect and read the data by using a **SELECT** SQL statement. --The [mysql.createConnection()](https://github.com/sidorares/node-mysql2#first-query) method is used to interface with the MySQL server. The [connect()](https://github.com/sidorares/node-mysql2#first-query) method is used to establish the connection to the server. The [query()](https://github.com/sidorares/node-mysql2#first-query) method is used to execute the SQL query against MySQL database. The results array is used to hold the results of the query. 
--```javascript -const mysql = require('mysql2'); -const fs = require('fs'); --var config = -{ - host: 'mydemoserver.mysql.database.azure.com', - user: 'myadmin@mydemoserver', - password: 'your_password', - database: 'quickstartdb', - port: 3306, - ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_DigiCertGlobalRootCA.crt.pem")} -}; --const conn = new mysql.createConnection(config); --conn.connect( - function (err) { - if (err) { - console.log("!!! Cannot connect !!! Error:"); - throw err; - } - else { - console.log("Connection established."); - readData(); - } - }); --function readData(){ - conn.query('SELECT * FROM inventory', - function (err, results, fields) { - if (err) throw err; - else console.log('Selected ' + results.length + ' row(s).'); - for (i = 0; i < results.length; i++) { - console.log('Row: ' + JSON.stringify(results[i])); - } - console.log('Done.'); - }) - conn.end( - function (err) { - if (err) throw err; - else console.log('Closing connection.') - }); -}; -``` --## Update data --Use the following code to connect and update data by using an **UPDATE** SQL statement. --The [mysql.createConnection()](https://github.com/sidorares/node-mysql2#first-query) method is used to interface with the MySQL server. The [connect()](https://github.com/sidorares/node-mysql2#first-query) method is used to establish the connection to the server. The [query()](https://github.com/sidorares/node-mysql2#first-query) method is used to execute the SQL query against MySQL database. --```javascript -const mysql = require('mysql2'); -const fs = require('fs'); --var config = -{ - host: 'mydemoserver.mysql.database.azure.com', - user: 'myadmin@mydemoserver', - password: 'your_password', - database: 'quickstartdb', - port: 3306, - ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_DigiCertGlobalRootCA.crt.pem")} -}; --const conn = new mysql.createConnection(config); --conn.connect( - function (err) { - if (err) { - console.log("!!! Cannot connect !!! Error:"); - throw err; - } - else { - console.log("Connection established."); - updateData(); - } - }); --function updateData(){ - conn.query('UPDATE inventory SET quantity = ? WHERE name = ?', [200, 'banana'], - function (err, results, fields) { - if (err) throw err; - else console.log('Updated ' + results.affectedRows + ' row(s).'); - }) - conn.end( - function (err) { - if (err) throw err; - else console.log('Done.') - }); -}; -``` --## Delete data --Use the following code to connect and delete data by using a **DELETE** SQL statement. --The [mysql.createConnection()](https://github.com/sidorares/node-mysql2#first-query) method is used to interface with the MySQL server. The [connect()](https://github.com/sidorares/node-mysql2#first-query) method is used to establish the connection to the server. The [query()](https://github.com/sidorares/node-mysql2#first-query) method is used to execute the SQL query against MySQL database. --```javascript -const mysql = require('mysql2'); -const fs = require('fs'); --var config = -{ - host: 'mydemoserver.mysql.database.azure.com', - user: 'myadmin@mydemoserver', - password: 'your_password', - database: 'quickstartdb', - port: 3306, - ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_DigiCertGlobalRootCA.crt.pem")} -}; --const conn = new mysql.createConnection(config); --conn.connect( - function (err) { - if (err) { - console.log("!!! Cannot connect !!! 
Error:"); - throw err; - } - else { - console.log("Connection established."); - deleteData(); - } - }); --function deleteData(){ - conn.query('DELETE FROM inventory WHERE name = ?', ['orange'], - function (err, results, fields) { - if (err) throw err; - else console.log('Deleted ' + results.affectedRows + ' row(s).'); - }) - conn.end( - function (err) { - if (err) throw err; - else console.log('Done.') - }); -}; -``` --## Clean up resources --To clean up all resources used during this quickstart, delete the resource group using the following command: --```azurecli-interactive -az group delete \ - --name $AZ_RESOURCE_GROUP \ - --yes -``` --## Next steps --> [!div class="nextstepaction"] -> [Migrate your database using Export and Import](../flexible-server/concepts-migrate-import-export.md) |
mysql | Connect Php | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-php.md | - Title: 'Quickstart: Connect using PHP - Azure Database for MySQL' -description: This quickstart provides several PHP code samples you can use to connect and query data from Azure Database for MySQL. ------ Previously updated : 06/20/2022---# Quickstart: Use PHP to connect and query data in Azure Database for MySQL ----This quickstart demonstrates how to connect to an Azure Database for MySQL using a [PHP](https://secure.php.net/manual/intro-whatis.php) application. It shows how to use SQL statements to query, insert, update, and delete data in the database. --## Prerequisites -For this quickstart you need: --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).-- Create an Azure Database for MySQL single server using [Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md) <br/> or [Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md) if you do not have one.-- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.-- |Action| Connectivity method|How-to guide| - |: |: |: | - | **Configure firewall rules** | Public | [Portal](./how-to-manage-firewall-using-portal.md) <br/> [CLI](./how-to-manage-firewall-using-cli.md)| - | **Configure Service Endpoint** | Public | [Portal](./how-to-manage-vnet-using-portal.md) <br/> [CLI](./how-to-manage-vnet-using-cli.md)| - | **Configure private link** | Private | [Portal](./how-to-configure-private-link-portal.md) <br/> [CLI](./how-to-configure-private-link-cli.md) | --- [Create a database and non-admin user](./how-to-create-users.md?tabs=single-server)-- Install the latest PHP version for your operating system- - [PHP on macOS](https://secure.php.net/manual/install.macosx.php) - - [PHP on Linux](https://secure.php.net/manual/install.unix.php) - - [PHP on Windows](https://secure.php.net/manual/install.windows.php) --> [!NOTE] -> This quickstart uses the [MySQLi](https://www.php.net/manual/en/book.mysqli.php) library to connect to and query the server. --## Get connection information -You can get the database server connection information from the Azure portal by following these steps: --1. Log in to the [Azure portal](https://portal.azure.com/). -2. Navigate to the Azure Database for MySQL page. You can search for and select **Azure Database for MySQL**. --3. Select your MySQL server (such as **mydemoserver**). -4. In the **Overview** page, copy the fully qualified server name next to **Server name** and the admin user name next to **Server admin login name**. To copy the server name or host name, hover over it and select the **Copy** icon. --> [!IMPORTANT] -> - If you forgot your password, you can [reset the password](./how-to-create-manage-server-portal.md#update-admin-password). -> - Replace the **host, username, password,** and **db_name** parameters with your own values. --## Step 1: Connect to the server -SSL is enabled by default. You may need to download the [DigiCertGlobalRootG2 SSL certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) to connect from your local environment. This code calls: -- [mysqli_init](https://secure.php.net/manual/mysqli.init.php) to initialize MySQLi.-- [mysqli_ssl_set](https://www.php.net/manual/en/mysqli.ssl-set.php) to point to the SSL certificate path. 
This is required for your local environment but not required for App Service Web App or Azure Virtual machines.-- [mysqli_real_connect](https://secure.php.net/manual/mysqli.real-connect.php) to connect to MySQL.-- [mysqli_close](https://secure.php.net/manual/mysqli.close.php) to close the connection.---```php -$host = 'mydemoserver.mysql.database.azure.com'; -$username = 'myadmin@mydemoserver'; -$password = 'your_password'; -$db_name = 'your_database'; --//Initializes MySQLi -$conn = mysqli_init(); --mysqli_ssl_set($conn,NULL,NULL, "/var/www/html/DigiCertGlobalRootG2.crt.pem", NULL, NULL); --// Establish the connection -mysqli_real_connect($conn, $host, $username, $password, $db_name, 3306, NULL, MYSQLI_CLIENT_SSL); --//If connection failed, show the error -if (mysqli_connect_errno()) -{ - die('Failed to connect to MySQL: '.mysqli_connect_error()); -} -``` --## Step 2: Create a table -Use the following code to create a table. This code calls: -- [mysqli_query](https://secure.php.net/manual/mysqli.query.php) to run the query.-```php -// Run the create table query -if (mysqli_query($conn, ' -CREATE TABLE Products ( -`Id` INT NOT NULL AUTO_INCREMENT , -`ProductName` VARCHAR(200) NOT NULL , -`Color` VARCHAR(50) NOT NULL , -`Price` DOUBLE NOT NULL , -PRIMARY KEY (`Id`) -); -')) { -printf("Table created\n"); -} -``` --## Step 3: Insert data -Use the following code to insert data by using an **INSERT** SQL statement. This code uses the methods: -- [mysqli_prepare](https://secure.php.net/manual/mysqli.prepare.php) to create a prepared insert statement-- [mysqli_stmt_bind_param](https://secure.php.net/manual/mysqli-stmt.bind-param.php) to bind the parameters for each inserted column value.-- [mysqli_stmt_execute](https://secure.php.net/manual/mysqli-stmt.execute.php) to execute the prepared insert statement-- [mysqli_stmt_close](https://secure.php.net/manual/mysqli-stmt.close.php) to close the statement---```php -//Create an Insert prepared statement and run it -$product_name = 'BrandNewProduct'; -$product_color = 'Blue'; -$product_price = 15.5; -if ($stmt = mysqli_prepare($conn, "INSERT INTO Products (ProductName, Color, Price) VALUES (?, ?, ?)")) -{ - mysqli_stmt_bind_param($stmt, 'ssd', $product_name, $product_color, $product_price); - mysqli_stmt_execute($stmt); - printf("Insert: Affected %d rows\n", mysqli_stmt_affected_rows($stmt)); - mysqli_stmt_close($stmt); -} --``` --## Step 4: Read data -Use the following code to read the data by using a **SELECT** SQL statement. The code uses these methods: -- [mysqli_query](https://secure.php.net/manual/mysqli.query.php) to execute the **SELECT** query-- [mysqli_fetch_assoc](https://secure.php.net/manual/mysqli-result.fetch-assoc.php) to fetch the resulting rows.--```php -//Run the Select query -printf("Reading data from table: \n"); -$res = mysqli_query($conn, 'SELECT * FROM Products'); -while ($row = mysqli_fetch_assoc($res)) - { - var_dump($row); - } --``` 
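## Step 5: Update data
The introduction above also mentions updating data. The following is a minimal sketch of an **UPDATE** that follows the same prepared-statement pattern as Step 3; it assumes the `Products` table from Step 2, and the new price value is illustrative. The code uses the methods:
- [mysqli_prepare](https://secure.php.net/manual/mysqli.prepare.php) to create a prepared update statement
- [mysqli_stmt_bind_param](https://secure.php.net/manual/mysqli-stmt.bind-param.php) to bind the new price and the product name
- [mysqli_stmt_execute](https://secure.php.net/manual/mysqli-stmt.execute.php) to execute the prepared update statement
- [mysqli_stmt_close](https://secure.php.net/manual/mysqli-stmt.close.php) to close the statement

```php
//Run an Update prepared statement (sketch; the price value is illustrative)
$product_name = 'BrandNewProduct';
$new_price = 17.25;
if ($stmt = mysqli_prepare($conn, "UPDATE Products SET Price = ? WHERE ProductName = ?"))
{
    mysqli_stmt_bind_param($stmt, 'ds', $new_price, $product_name);
    mysqli_stmt_execute($stmt);
    printf("Update: Affected %d rows\n", mysqli_stmt_affected_rows($stmt));
    mysqli_stmt_close($stmt);
}
```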
## Step 6: Delete data -Use the following code to delete rows by using a **DELETE** SQL statement. The code uses the methods: -- [mysqli_prepare](https://secure.php.net/manual/mysqli.prepare.php) to create a prepared delete statement-- [mysqli_stmt_bind_param](https://secure.php.net/manual/mysqli-stmt.bind-param.php) to bind the parameter-- [mysqli_stmt_execute](https://secure.php.net/manual/mysqli-stmt.execute.php) to execute the prepared delete statement-- [mysqli_stmt_close](https://secure.php.net/manual/mysqli-stmt.close.php) to close the statement--```php -//Run the Delete statement -$product_name = 'BrandNewProduct'; -if ($stmt = mysqli_prepare($conn, "DELETE FROM Products WHERE ProductName = ?")) { -mysqli_stmt_bind_param($stmt, 's', $product_name); -mysqli_stmt_execute($stmt); -printf("Delete: Affected %d rows\n", mysqli_stmt_affected_rows($stmt)); -mysqli_stmt_close($stmt); -} -``` --## Clean up resources --To clean up all resources used during this quickstart, delete the resource group using the following command: --```azurecli -az group delete \ - --name $AZ_RESOURCE_GROUP \ - --yes -``` --## Next steps -> [!div class="nextstepaction"] -> [Manage Azure Database for MySQL server using Portal](./how-to-create-manage-server-portal.md)<br/> --> [!div class="nextstepaction"] -> [Manage Azure Database for MySQL server using CLI](./how-to-manage-single-server-cli.md) - |
mysql | Connect Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-python.md | - Title: 'Quickstart: Connect using Python - Azure Database for MySQL' -description: This quickstart provides several Python code samples you can use to connect and query data from Azure Database for MySQL. ------ Previously updated : 06/20/2022---# Quickstart: Use Python to connect and query data in Azure Database for MySQL ----In this quickstart, you connect to an Azure Database for MySQL by using Python. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Ubuntu Linux, and Windows platforms. --## Prerequisites -For this quickstart you need: --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).-- Create an Azure Database for MySQL single server using [Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md) <br/> or [Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md) if you do not have one.-- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.-- |Action| Connectivity method|How-to guide| - |: |: |: | - | **Configure firewall rules** | Public | [Portal](./how-to-manage-firewall-using-portal.md) <br/> [CLI](./how-to-manage-firewall-using-cli.md)| - | **Configure Service Endpoint** | Public | [Portal](./how-to-manage-vnet-using-portal.md) <br/> [CLI](./how-to-manage-vnet-using-cli.md)| - | **Configure private link** | Private | [Portal](./how-to-configure-private-link-portal.md) <br/> [CLI](./how-to-configure-private-link-cli.md) | --- [Create a database and non-admin user](./how-to-create-users.md)--## Install Python and the MySQL connector --Install Python and the MySQL connector for Python on your computer by using the following steps: --> [!NOTE] -> This quickstart is using [MySQL Connector/Python Developer Guide](https://dev.mysql.com/doc/connector-python/en/). --1. Download and install [Python 3.7 or above](https://www.python.org/downloads/) for your OS. Make sure to add Python to your `PATH`, because the MySQL connector requires that. - -2. Open a command prompt or `bash` shell, and check your Python version by running `python -V` with the uppercase V switch. - -3. The `pip` package installer is included in the latest versions of Python. Update `pip` to the latest version by running `pip install -U pip`. - - If `pip` isn't installed, you can download and install it with `get-pip.py`. For more information, see [Installation](https://pip.pypa.io/en/stable/installing/). - -4. Use `pip` to install the MySQL connector for Python and its dependencies: - - ```bash - pip install mysql-connector-python - ``` - -## Get connection information --Get the connection information you need to connect to Azure Database for MySQL from the Azure portal. You need the server name, database name, and login credentials. --1. Sign in to the [Azure portal](https://portal.azure.com/). - -1. In the portal search bar, search for and select the Azure Database for MySQL server you created, such as **mydemoserver**. - - :::image type="content" source="./media/connect-python/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name"::: - -1. From the server's **Overview** page, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this page. 
- - :::image type="content" source="./media/connect-python/azure-database-for-mysql-server-overview-name-login.png" alt-text="Azure Database for MySQL server name 2"::: --## Running the Python code samples --For each code example in this article: --1. Create a new file in a text editor. -2. Add the code example to the file. In the code, replace the `<mydemoserver>`, `<myadmin>`, `<mypassword>`, and `<mydatabase>` placeholders with the values for your MySQL server and database. -3. SSL is enabled by default on Azure Database for MySQL servers. You may need to download the [DigiCertGlobalRootG2 SSL certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) to connect from your local environment. Replace the `ssl_ca` value in the code with the path to this file on your computer. -4. Save the file in a project folder with a *.py* extension, such as *C:\pythonmysql\createtable.py* or */home/username/pythonmysql/createtable.py*. -5. To run the code, open a command prompt or `bash` shell and change directory into your project folder, for example `cd pythonmysql`. Type the `python` command followed by the file name, for example `python createtable.py`, and press Enter. - - > [!NOTE] - > On Windows, if *python.exe* is not found, you may need to add the Python path into your PATH environment variable, or provide the full path to *python.exe*, for example `C:\Python37\python.exe createtable.py`. --## Step 1: Create a table and insert data --Use the following code to connect to the server and database, create a table, and load data by using an **INSERT** SQL statement. The code imports the mysql.connector library, and uses these methods: -- The [connect()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysql-connector-connect.html) function connects to Azure Database for MySQL using the [arguments](https://dev.mysql.com/doc/connector-python/en/connector-python-connectargs.html) in the config collection. -- The [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method executes the SQL query against the MySQL database. 
-- The [cursor.close()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-close.html) method closes the cursor when you are done using it.-- The [conn.close()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlconnection-close.html) method closes the connection.--```python -import mysql.connector -from mysql.connector import errorcode --# Obtain connection string information from the portal --config = { - 'host':'<mydemoserver>.mysql.database.azure.com', - 'user':'<myadmin>@<mydemoserver>', - 'password':'<mypassword>', - 'database':'<mydatabase>', - 'client_flags': [mysql.connector.ClientFlag.SSL], - 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem' -} --# Construct connection string --try: - conn = mysql.connector.connect(**config) - print("Connection established") -except mysql.connector.Error as err: - if err.errno == errorcode.ER_ACCESS_DENIED_ERROR: - print("Something is wrong with the user name or password") - elif err.errno == errorcode.ER_BAD_DB_ERROR: - print("Database does not exist") - else: - print(err) -else: - cursor = conn.cursor() -- # Drop previous table of same name if one exists - cursor.execute("DROP TABLE IF EXISTS inventory;") - print("Finished dropping table (if existed).") -- # Create table - cursor.execute("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);") - print("Finished creating table.") -- # Insert some data into table - cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("banana", 150)) - print("Inserted",cursor.rowcount,"row(s) of data.") - cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("orange", 154)) - print("Inserted",cursor.rowcount,"row(s) of data.") - cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("apple", 100)) - print("Inserted",cursor.rowcount,"row(s) of data.") -- # Cleanup - conn.commit() - cursor.close() - conn.close() - print("Done.") -```
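If you need to load more than a few rows, Connector/Python also provides the [cursor.executemany()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-executemany.html) method, which repeats one parameterized statement over a sequence of value tuples. Here's a minimal sketch that reuses the same config dictionary as above; the extra inventory rows are illustrative:

```python
import mysql.connector

# Reuse the same config dictionary as in Step 1.
config = {
    'host': '<mydemoserver>.mysql.database.azure.com',
    'user': '<myadmin>@<mydemoserver>',
    'password': '<mypassword>',
    'database': '<mydatabase>',
    'client_flags': [mysql.connector.ClientFlag.SSL],
    'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem'
}

conn = mysql.connector.connect(**config)
cursor = conn.cursor()

# executemany() runs the parameterized INSERT once per tuple in the list.
rows = [("pear", 75), ("mango", 20), ("grape", 310)]
cursor.executemany("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", rows)
print("Inserted", cursor.rowcount, "row(s) of data.")

conn.commit()
cursor.close()
conn.close()
```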
## Step 2: Read data --Use the following code to connect and read the data by using a **SELECT** SQL statement. The code imports the mysql.connector library, and uses the [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method to execute the SQL query against the MySQL database. --The code reads the data rows using the [fetchall()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-fetchall.html) method, keeps the result set in the `rows` collection, and uses a `for` iterator to loop over the rows. --```python -import mysql.connector -from mysql.connector import errorcode --# Obtain connection string information from the portal --config = { - 'host':'<mydemoserver>.mysql.database.azure.com', - 'user':'<myadmin>@<mydemoserver>', - 'password':'<mypassword>', - 'database':'<mydatabase>', - 'client_flags': [mysql.connector.ClientFlag.SSL], - 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem' -} --# Construct connection string --try: - conn = mysql.connector.connect(**config) - print("Connection established") -except mysql.connector.Error as err: - if err.errno == errorcode.ER_ACCESS_DENIED_ERROR: - print("Something is wrong with the user name or password") - elif err.errno == errorcode.ER_BAD_DB_ERROR: - print("Database does not exist") - else: - print(err) -else: - cursor = conn.cursor() -- # Read data - cursor.execute("SELECT * FROM inventory;") - rows = cursor.fetchall() - print("Read",cursor.rowcount,"row(s) of data.") -- # Print all rows - for row in rows: - print("Data row = (%s, %s, %s)" %(str(row[0]), str(row[1]), str(row[2]))) -- # Cleanup - conn.commit() - cursor.close() - conn.close() - print("Done.") -``` --## Step 3: Update data --Use the following code to connect and update the data by using an **UPDATE** SQL statement. The code imports the mysql.connector library, and uses the [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method to execute the SQL query against the MySQL database. --```python -import mysql.connector -from mysql.connector import errorcode --# Obtain connection string information from the portal --config = { - 'host':'<mydemoserver>.mysql.database.azure.com', - 'user':'<myadmin>@<mydemoserver>', - 'password':'<mypassword>', - 'database':'<mydatabase>', - 'client_flags': [mysql.connector.ClientFlag.SSL], - 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem' -} --# Construct connection string --try: - conn = mysql.connector.connect(**config) - print("Connection established") -except mysql.connector.Error as err: - if err.errno == errorcode.ER_ACCESS_DENIED_ERROR: - print("Something is wrong with the user name or password") - elif err.errno == errorcode.ER_BAD_DB_ERROR: - print("Database does not exist") - else: - print(err) -else: - cursor = conn.cursor() -- # Update a data row in the table - cursor.execute("UPDATE inventory SET quantity = %s WHERE name = %s;", (300, "apple")) - print("Updated",cursor.rowcount,"row(s) of data.") -- # Cleanup - conn.commit() - cursor.close() - conn.close() - print("Done.") -``` --## Step 4: Delete data --Use the following code to connect and remove data by using a **DELETE** SQL statement. The code imports the mysql.connector library, and uses the [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method to execute the SQL query against the MySQL database. 
--```python -import mysql.connector -from mysql.connector import errorcode --# Obtain connection string information from the portal --config = { - 'host':'<mydemoserver>.mysql.database.azure.com', - 'user':'<myadmin>@<mydemoserver>', - 'password':'<mypassword>', - 'database':'<mydatabase>', - 'client_flags': [mysql.connector.ClientFlag.SSL], - 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem' -} --# Construct connection string --try: - conn = mysql.connector.connect(**config) - print("Connection established") -except mysql.connector.Error as err: - if err.errno == errorcode.ER_ACCESS_DENIED_ERROR: - print("Something is wrong with the user name or password") - elif err.errno == errorcode.ER_BAD_DB_ERROR: - print("Database does not exist") - else: - print(err) -else: - cursor = conn.cursor() -- # Delete a data row in the table - cursor.execute("DELETE FROM inventory WHERE name=%(param1)s;", {'param1':"orange"}) - print("Deleted",cursor.rowcount,"row(s) of data.") - - # Cleanup - conn.commit() - cursor.close() - conn.close() - print("Done.") -``` --## Clean up resources --To clean up all resources used during this quickstart, delete the resource group using the following command: --```azurecli -az group delete \ - --name $AZ_RESOURCE_GROUP \ - --yes -``` --## Next steps -> [!div class="nextstepaction"] -> [Manage Azure Database for MySQL server using Portal](./how-to-create-manage-server-portal.md)<br/> --> [!div class="nextstepaction"] -> [Manage Azure Database for MySQL server using CLI](./how-to-manage-single-server-cli.md) |
mysql | Connect Ruby | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-ruby.md | - Title: 'Quickstart: Connect using Ruby - Azure Database for MySQL' -description: This quickstart provides several Ruby code samples you can use to connect and query data from Azure Database for MySQL. ------ Previously updated : 05/03/2023---# Quickstart: Use Ruby to connect and query data in Azure Database for MySQL ----This quickstart demonstrates how to connect to an Azure Database for MySQL using a [Ruby](https://www.ruby-lang.org) application and the [mysql2](https://rubygems.org/gems/mysql2) gem from Windows, Linux, and Mac platforms. It shows how to use SQL statements to query, insert, update, and delete data in the database. This topic assumes that you are familiar with development using Ruby and that you are new to working with Azure Database for MySQL. --## Prerequisites --This quickstart uses the resources created in either of these guides as a starting point: --- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)-- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)--> [!IMPORTANT] -> Ensure the IP address you're connecting from has been added the server's firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md) --## Install Ruby --Install Ruby, Gem, and the MySQL2 library on your own computer. --### [Windows](#tab/windows) --1. Download and Install the 2.3 version of [Ruby](https://rubyinstaller.org/downloads/). -2. Launch a new command prompt (cmd) from the Start menu. -3. Change directory into the Ruby directory for version 2.3. `cd c:\Ruby23-x64\bin` -4. Test the Ruby installation by running the command `ruby -v` to see the version installed. -5. Test the Gem installation by running the command `gem -v` to see the version installed. -6. Build the Mysql2 module for Ruby using Gem by running the command `gem install mysql2`. --### [macOS](#tab/macos) --1. Install Ruby using Homebrew by running the command `brew install ruby`. For more installation options, see the Ruby [installation documentation](https://www.ruby-lang.org/en/documentation/installation/#homebrew). -2. Test the Ruby installation by running the command `ruby -v` to see the version installed. -3. Test the Gem installation by running the command `gem -v` to see the version installed. -4. Build the Mysql2 module for Ruby using Gem by running the command `gem install mysql2`. --### [Linux (Ubuntu/Debian)](#tab/ubuntu) --1. Install Ruby by running the command `sudo apt-get install ruby-full`. For more installation options, see the Ruby [installation documentation](https://www.ruby-lang.org/en/documentation/installation/). -2. Test the Ruby installation by running the command `ruby -v` to see the version installed. -3. Install the latest updates for Gem by running the command `sudo gem update --system`. -4. Test the Gem installation by running the command `gem -v` to see the version installed. -5. Install the gcc, make, and other build tools by running the command `sudo apt-get install build-essential`. -6. Install the MySQL client developer libraries by running the command `sudo apt-get install libmysqlclient-dev`. -7. Build the mysql2 module for Ruby using Gem by running the command `sudo gem install mysql2`. 
----## Get connection information --Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials. --1. Log in to the [Azure portal](https://portal.azure.com/). -2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**). -3. Click the server name. -4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel. - :::image type="content" source="./media/connect-ruby/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name"::: --## Run Ruby code --1. Paste the Ruby code from the sections below into text files, and then save the files into a project folder with file extension .rb (such as `C:\rubymysql\createtable.rb` or `/home/username/rubymysql/createtable.rb`). -2. To run the code, launch the command prompt or Bash shell. Change directory into your project folder `cd rubymysql` -3. Then type the Ruby command followed by the file name, such as `ruby createtable.rb` to run the application. -4. On the Windows OS, if the Ruby application is not in your path environment variable, you may need to use the full path to launch the Ruby application, such as `"c:\Ruby23-x64\bin\ruby.exe" createtable.rb` --## Connect and create a table --Use the following code to connect and create a table by using a **CREATE TABLE** SQL statement, followed by **INSERT INTO** SQL statements to add rows into the table. --The code uses the `Mysql2::Client` class to connect to the MySQL server. Then it calls the ```query()``` method to run the DROP, CREATE TABLE, and INSERT INTO commands. Finally, it calls ```close()``` to close the connection before terminating. --Replace the `host`, `database`, `username`, and `password` strings with your own values. --```ruby -require 'mysql2' --begin - # Initialize connection variables. - host = String('mydemoserver.mysql.database.azure.com') - database = String('quickstartdb') - username = String('myadmin@mydemoserver') - password = String('yourpassword') -- # Initialize connection object. - client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password) - puts 'Successfully created connection to database.' -- # Drop previous table of same name if one exists - client.query('DROP TABLE IF EXISTS inventory;') - puts 'Finished dropping table (if existed).' -- # Create table. - client.query('CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);') - puts 'Finished creating table.' -- # Insert some data into table. - client.query("INSERT INTO inventory VALUES(1, 'banana', 150)") - client.query("INSERT INTO inventory VALUES(2, 'orange', 154)") - client.query("INSERT INTO inventory VALUES(3, 'apple', 100)") - puts 'Inserted 3 rows of data.' --# Error handling --rescue Exception => e - puts e.message --# Cleanup --ensure - client.close if client - puts 'Done.' -end -```
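The INSERT statements above interpolate values directly into the SQL text. When values come from user input, the mysql2 gem also supports server-side prepared statements through its `prepare` and `execute` methods (available in mysql2 0.4 and later); the following is a minimal sketch that reuses the same connection values, with an illustrative `pear` row:

```ruby
require 'mysql2'

begin
  # Initialize connection object (same values as above).
  client = Mysql2::Client.new(:host => 'mydemoserver.mysql.database.azure.com', :username => 'myadmin@mydemoserver', :database => 'quickstartdb', :password => 'yourpassword')

  # Prepared statements send the values separately, so the server handles escaping.
  statement = client.prepare('INSERT INTO inventory (name, quantity) VALUES (?, ?)')
  statement.execute('pear', 75)
  puts 'Inserted 1 row of data.'

# Error handling
rescue Exception => e
  puts e.message

# Cleanup
ensure
  client.close if client
  puts 'Done.'
end
```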
## Read data --Use the following code to connect and read the data by using a **SELECT** SQL statement. --The code uses the `Mysql2::Client` class to connect to Azure Database for MySQL with the ```new()``` method. Then it calls the ```query()``` method to run the SELECT commands, and the ```close()``` method to close the connection before terminating. --Replace the `host`, `database`, `username`, and `password` strings with your own values. --```ruby -require 'mysql2' --begin - # Initialize connection variables. - host = String('mydemoserver.mysql.database.azure.com') - database = String('quickstartdb') - username = String('myadmin@mydemoserver') - password = String('yourpassword') -- # Initialize connection object. - client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password) - puts 'Successfully created connection to database.' -- # Read data - resultSet = client.query('SELECT * from inventory;') - resultSet.each do |row| - puts 'Data row = (%s, %s, %s)' % [row['id'], row['name'], row['quantity']] - end - puts 'Read ' + resultSet.count.to_s + ' row(s).' --# Error handling --rescue Exception => e - puts e.message --# Cleanup --ensure - client.close if client - puts 'Done.' -end -``` --## Update data --Use the following code to connect and update the data by using an **UPDATE** SQL statement. --The code uses the [mysql2::client](https://rubygems.org/gems/mysql2-client-general_log) class and its ```new()``` method to connect to Azure Database for MySQL. Then it calls the ```query()``` method to run the UPDATE commands, and the ```close()``` method to close the connection before terminating. --Replace the `host`, `database`, `username`, and `password` strings with your own values. --```ruby -require 'mysql2' --begin - # Initialize connection variables. - host = String('mydemoserver.mysql.database.azure.com') - database = String('quickstartdb') - username = String('myadmin@mydemoserver') - password = String('yourpassword') -- # Initialize connection object. - client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password) - puts 'Successfully created connection to database.' -- # Update data - client.query('UPDATE inventory SET quantity = %d WHERE name = %s;' % [200, '\'banana\'']) - puts 'Updated 1 row of data.' --# Error handling --rescue Exception => e - puts e.message --# Cleanup --ensure - client.close if client - puts 'Done.' -end -``` --## Delete data --Use the following code to connect and delete the data by using a **DELETE** SQL statement. --The code uses the [mysql2::client](https://rubygems.org/gems/mysql2/) class to connect to the MySQL server, run the DELETE command, and then close the connection to the server. --Replace the `host`, `database`, `username`, and `password` strings with your own values. --```ruby -require 'mysql2' --begin - # Initialize connection variables. - host = String('mydemoserver.mysql.database.azure.com') - database = String('quickstartdb') - username = String('myadmin@mydemoserver') - password = String('yourpassword') -- # Initialize connection object. - client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password) - puts 'Successfully created connection to database.' -- # Delete data - resultSet = client.query('DELETE FROM inventory WHERE name = %s;' % ['\'orange\'']) - puts 'Deleted 1 row.' --# Error handling ---rescue Exception => e - puts e.message --# Cleanup ---ensure - client.close if client - puts 'Done.' 
-end -``` --## Clean up resources --To clean up all resources used during this quickstart, delete the resource group using the following command: --```azurecli-interactive -az group delete \ - --name $AZ_RESOURCE_GROUP \ - --yes -``` --## Next steps --> [!div class="nextstepaction"] -> [Migrate your database using Export and Import](../flexible-server/concepts-migrate-import-export.md) --> [!div class="nextstepaction"] -> [Learn more about MySQL2 client](https://rubygems.org/gems/mysql2-client-general_log) |
mysql | Connect Workbench | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-workbench.md | - Title: "Quickstart: Connect - MySQL Workbench - Azure Database for MySQL" -description: This Quickstart provides the steps to use MySQL Workbench to connect and query data from Azure Database for MySQL. --- Previously updated : 04/18/2023----- - mvc - - mode-other ---# Quickstart: Use MySQL Workbench to connect and query data in Azure Database for MySQL ----This quickstart demonstrates how to connect to an Azure Database for MySQL using the MySQL Workbench application. --## Prerequisites --This quickstart uses the resources created in either of these guides as a starting point: --- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)-- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)--> [!IMPORTANT] -> Ensure the IP address you're connecting from has been added the server's firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md) --## Install MySQL Workbench --Download and install MySQL Workbench on your computer from [the MySQL website](https://dev.mysql.com/downloads/workbench/). --## Get connection information --Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials. --1. Log in to the [Azure portal](https://portal.azure.com/). --1. From the left-hand menu in Azure portal, select **All resources**, and then search for the server you have created (such as **mydemoserver**). --1. Select the server name. --1. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel. - :::image type="content" source="./media/connect-php/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name" lightbox="./media/connect-php/1-server-overview-name-login.png"::: --## Connect to the server by using MySQL Workbench --To connect to Azure MySQL Server by using the GUI tool MySQL Workbench: --1. Launch the MySQL Workbench application on your computer. --1. In **Setup New Connection** dialog box, enter the following information on the **Parameters** tab: -- :::image type="content" source="./media/connect-workbench/2-setup-new-connection.png" alt-text="setup new connection" lightbox="./media/connect-workbench/2-setup-new-connection.png"::: -- | **Setting** | **Suggested value** | **Field description** | - | | | | - | Connection Name | Demo Connection | Specify a label for this connection. | - | Connection Method | Standard (TCP/IP) | Standard (TCP/IP) is sufficient. | - | Hostname | *server name* | Specify the server name value that was used when you created the Azure Database for MySQL earlier. Our example server shown is mydemoserver.mysql.database.azure.com. Use the fully qualified domain name (\*.mysql.database.azure.com) as shown in the example. Follow the steps in the previous section to get the connection information if you don't remember your server name. | - | Port | 3306 | Always use port 3306 when connecting to Azure Database for MySQL. | - | Username | *server admin login name* | Type in the server admin login username supplied when you created the Azure Database for MySQL earlier. 
Our example username is myadmin@mydemoserver. Follow the steps in the previous section to get the connection information if you don't remember the username. The format is *username\@servername*. - | Password | your password | Select the **Store in Vault...** button to save the password. | - -1. Select **Test Connection** to test if all parameters are correctly configured. --1. Select **OK** to save the connection. --1. In the listing of **MySQL Connections**, select the tile corresponding to your server, and then wait for the connection to be established. -- A new SQL tab opens with a blank editor where you can type your queries. -- > [!NOTE] - > By default, SSL connection security is required and enforced on your Azure Database for MySQL server. Although typically no additional configuration with SSL certificates is required for MySQL Workbench to connect to your server, we recommend binding the SSL CA certificate with MySQL Workbench. For more information on how to download and bind the certificate, see [Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](./how-to-configure-ssl.md). If you need to disable SSL, visit the Azure portal and select the Connection security page to disable the Enforce SSL connection toggle button. --## Create a table, insert data, read data, update data, delete data --1. Copy and paste the sample SQL code into a blank SQL tab. -- This code creates an empty database named quickstartdb, and then creates a sample table named inventory. It inserts some rows, then reads the rows. It changes the data with an update statement, and reads the rows again. Finally, it deletes a row, and then reads the rows again. -- ```sql - -- Create a database - -- DROP DATABASE IF EXISTS quickstartdb; - CREATE DATABASE quickstartdb; - USE quickstartdb; - - -- Create a table and insert rows - DROP TABLE IF EXISTS inventory; - CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER); - INSERT INTO inventory (name, quantity) VALUES ('banana', 150); - INSERT INTO inventory (name, quantity) VALUES ('orange', 154); - INSERT INTO inventory (name, quantity) VALUES ('apple', 100); - - -- Read - SELECT * FROM inventory; - - -- Update - UPDATE inventory SET quantity = 200 WHERE id = 1; - SELECT * FROM inventory; - - -- Delete - DELETE FROM inventory WHERE id = 2; - SELECT * FROM inventory; - ``` -- The screenshot shows an example of the SQL code in MySQL Workbench and the output after it has been run. -- :::image type="content" source="media/connect-workbench/3-workbench-sql-tab.png" alt-text="MySQL Workbench SQL Tab to run sample SQL code" lightbox="media/connect-workbench/3-workbench-sql-tab.png"::: --1. To run the sample SQL code, select the lightning bolt icon in the toolbar of the **SQL File** tab. --1. Notice the three tabbed results in the **Result Grid** section in the middle of the page. --1. Notice the **Output** list at the bottom of the page. The status of each command is shown. --Now, you have connected to Azure Database for MySQL by using MySQL Workbench, and you have queried data using the SQL language. --## Clean up resources --To clean up all resources used during this quickstart, delete the resource group using the following command: --```azurecli -az group delete \ - --name $AZ_RESOURCE_GROUP \ - --yes -``` --## Next steps --> [!div class="nextstepaction"] -> [Migrate your database using Export and Import](../flexible-server/concepts-migrate-import-export.md) |
mysql | How To Alert On Metric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-alert-on-metric.md | - Title: Configure metric alerts - Azure portal - Azure Database for MySQL -description: This article describes how to configure and access metric alerts for Azure Database for MySQL from the Azure portal. ----- Previously updated : 06/20/2022---# Use the Azure portal to set up alerts on metrics for Azure Database for MySQL ----This article shows you how to set up Azure Database for MySQL alerts using the Azure portal. You can receive an alert based on monitoring metrics for your Azure services. --The alert triggers when the value of a specified metric crosses a threshold you assign. The alert triggers both when the condition is first met, and later when that condition is no longer being met. --You can configure an alert to do the following actions when it triggers: -* Send email notifications to the service administrator and co-administrators. -* Send email to additional email addresses that you specify. -* Call a webhook. --You can configure and get information about alert rules using: -* [Azure portal](../../azure-monitor/alerts/alerts-metric.md#create-with-azure-portal) -* [Azure CLI](../../azure-monitor/alerts/alerts-metric.md#with-azure-cli) -* [Azure Monitor REST API](/rest/api/monitor/metricalerts)
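For example, the following Azure CLI sketch creates a rule that fires when the server's average `storage_percent` metric stays above 85 percent over a 30-minute window. The resource group and server names are placeholders for your own values, and in practice you'd also attach an action group with the `--action` parameter to receive notifications:

```azurecli-interactive
# Sketch: alert when average storage_percent exceeds 85 over a 30-minute window.
az monitor metrics alert create \
    --name "storage-percent-alert" \
    --resource-group myresourcegroup \
    --scopes $(az mysql server show --resource-group myresourcegroup --name mydemoserver --query id --output tsv) \
    --condition "avg storage_percent > 85" \
    --window-size 30m \
    --evaluation-frequency 5m \
    --description "Storage percent greater than 85"
```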
-- :::image type="content" source="./media/how-to-alert-on-metric/11-name-description-severity.png" alt-text="Action group 2"::: --12. Select **Create alert rule** to create the alert. -- Within a few minutes, the alert is active and triggers as previously described. --## Manage your alerts -Once you have created an alert, you can select it and do the following actions: --* View a graph showing the metric threshold and the actual values from the previous day relevant to this alert. -* **Edit** or **Delete** the alert rule. -* **Disable** or **Enable** the alert, if you want to temporarily stop or resume receiving notifications. ---## Next steps -* Learn more about [configuring webhooks in alerts](../../azure-monitor/alerts/alerts-webhooks.md). -* Get an [overview of metrics collection](../../azure-monitor/data-platform.md) to make sure your service is available and responsive. |
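The portal steps above also have a scriptable counterpart. The following is a sketch, not part of the original article, assuming the server *mydemoserver* in resource group *myresourcegroup* and an existing action group named *myactiongroup*; it creates a comparable "Storage percent greater than 85" rule with `az monitor metrics alert create`:

```azurecli
# Create an equivalent "storage_percent > 85" alert rule from the CLI
# (server, resource group, and action group names are assumptions)
serverId=$(az mysql server show --resource-group myresourcegroup --name mydemoserver --query id --output tsv)

az monitor metrics alert create \
  --name storage-percent-alert \
  --resource-group myresourcegroup \
  --scopes $serverId \
  --condition "avg storage_percent > 85" \
  --window-size 30m \
  --evaluation-frequency 5m \
  --action myactiongroup \
  --description "Storage usage exceeded 85 percent"
```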
mysql | How To Auto Grow Storage Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-cli.md | - Title: Auto grow storage - Azure CLI - Azure Database for MySQL -description: This article describes how you can enable auto grow storage using the Azure CLI in Azure Database for MySQL. ------ Previously updated : 06/20/2022---# Auto-grow Azure Database for MySQL storage using the Azure CLI ----This article describes how you can configure Azure Database for MySQL server storage to grow without impacting the workload. --A server that is [reaching the storage limit](./concepts-pricing-tiers.md#reaching-the-storage-limit) is set to read-only. If storage auto grow is enabled, then for servers with less than 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage is below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. Maximum storage limits as specified [here](./concepts-pricing-tiers.md#storage) apply. --## Prerequisites --To complete this how-to guide: --- You need an [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-cli.md).---- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--## Enable MySQL server storage auto-grow --Enable auto-grow storage on an existing server with the following command: --```azurecli-interactive -az mysql server update --name mydemoserver --resource-group myresourcegroup --auto-grow Enabled -``` --Enable auto-grow storage while creating a new server with the following command: --```azurecli-interactive -az mysql server create --resource-group myresourcegroup --name mydemoserver --auto-grow Enabled --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2 --version 5.7 -``` --## Next steps --Learn about [how to create alerts on metrics](how-to-alert-on-metric.md). |
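To confirm the change took effect, you can read the setting back. This is a small verification sketch, not part of the original article, using the same server and resource group names:

```azurecli-interactive
# Verify that storage auto grow is now enabled; the command should print "Enabled"
az mysql server show \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --query "storageProfile.storageAutogrow" \
  --output tsv
```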
mysql | How To Auto Grow Storage Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-portal.md | - Title: Auto grow storage - Azure portal - Azure Database for MySQL -description: This article describes how you can enable auto grow storage for Azure Database for MySQL using Azure portal ----- Previously updated : 06/20/2022---# Auto grow storage in Azure Database for MySQL using the Azure portal ----This article describes how you can configure an Azure Database for MySQL server storage to grow without impacting the workload. --When a server reaches the allocated storage limit, the server is marked as read-only. However, if you enable storage auto grow, the server storage increases to accommodate the growing data. For servers with less than 100 GB provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage is below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10GB of the provisioned storage size. Maximum storage limits as specified [here](./concepts-pricing-tiers.md#storage) apply. --## Prerequisites -To complete this how-to guide, you need: -- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md)--## Enable storage auto grow --Follow these steps to set MySQL server storage auto grow: --1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL server. --2. On the MySQL server page, under **Settings** heading, click **Pricing tier** to open the Pricing tier page. --3. In the Auto-growth section, select **Yes** to enable storage auto grow. -- :::image type="content" source="./media/how-to-auto-grow-storage-portal/3-auto-grow.png" alt-text="Azure Database for MySQL - Settings_Pricing_tier - Auto-growth"::: --4. Click **OK** to save the changes. --5. A notification will confirm that auto grow was successfully enabled. -- :::image type="content" source="./media/how-to-auto-grow-storage-portal/5-auto-grow-success.png" alt-text="Azure Database for MySQL - auto-growth success"::: --## Next steps --Learn about [how to create alerts on metrics](how-to-alert-on-metric.md). |
mysql | How To Auto Grow Storage Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-powershell.md | - Title: Auto grow storage - Azure PowerShell - Azure Database for MySQL -description: This article describes how you can enable auto grow storage using PowerShell in Azure Database for MySQL. ------ Previously updated : 06/20/2022---# Auto grow storage in Azure Database for MySQL server using PowerShell ----This article describes how you can configure an Azure Database for MySQL server storage to grow -without impacting the workload. --Storage auto grow prevents your server from -[reaching the storage limit](./concepts-pricing-tiers.md#reaching-the-storage-limit) and -becoming read-only. For servers with 100 GB or less of provisioned storage, the size is increased by -5 GB when the free space is below 10%. For servers with more than 100 GB of provisioned storage, the -size is increased by 5% when the free space is below 10 GB. Maximum storage limits apply as -specified in the storage section of the -[Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md#storage). --> [!IMPORTANT] -> Remember that storage can only be scaled up, not down. --## Prerequisites --To complete this how-to guide, you need: --- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed locally or- [Azure Cloud Shell](https://shell.azure.com/) in the browser -- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)--> [!IMPORTANT] -> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az -> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`. -> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az -> PowerShell module releases and available natively from within Azure Cloud Shell. --If you choose to use PowerShell locally, connect to your Azure account using the -[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. --## Enable MySQL server storage auto grow --Enable server auto grow storage on an existing server with the following command: --```azurepowershell-interactive -Update-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -StorageAutogrow Enabled -``` --Enable server auto grow storage while creating a new server with the following command: --```azurepowershell-interactive -$Password = Read-Host -Prompt 'Please enter your password' -AsSecureString -New-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -Sku GP_Gen5_2 -StorageAutogrow Enabled -Location westus -AdministratorUsername myadmin -AdministratorLoginPassword $Password -``` --## Next steps --> [!div class="nextstepaction"] -> [How to create and manage read replicas in Azure Database for MySQL using PowerShell](how-to-read-replicas-powershell.md). |
mysql | How To Configure Audit Logs Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-audit-logs-cli.md | - Title: Access audit logs - Azure CLI - Azure Database for MySQL -description: This article describes how to configure and access the audit logs in Azure Database for MySQL from the Azure CLI. ------ Previously updated : 06/20/2022---# Configure and access audit logs in the Azure CLI ----You can configure the [Azure Database for MySQL audit logs](concepts-audit-logs.md) from the Azure CLI. --## Prerequisites --To step through this how-to guide: --- You need an [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md).---- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--## Configure audit logging --> [!IMPORTANT] -> We recommend logging only the event types and users required for your auditing purposes, to ensure that your server's performance isn't heavily impacted. --Enable and configure audit logging using the following steps: --1. Turn on audit logs by setting the **audit_log_enabled** parameter to "ON". - ```azurecli-interactive - az mysql server configuration set --name audit_log_enabled --resource-group myresourcegroup --server mydemoserver --value ON - ``` --2. Select the [event types](concepts-audit-logs.md#configure-audit-logging) to be logged by updating the **audit_log_events** parameter. - ```azurecli-interactive - az mysql server configuration set --name audit_log_events --resource-group myresourcegroup --server mydemoserver --value "ADMIN,CONNECTION" - ``` --3. Add any MySQL users to be excluded from logging by updating the **audit_log_exclude_users** parameter. Specify users by providing their MySQL user name. - ```azurecli-interactive - az mysql server configuration set --name audit_log_exclude_users --resource-group myresourcegroup --server mydemoserver --value "azure_superuser" - ``` --4. Add any specific MySQL users to be included for logging by updating the **audit_log_include_users** parameter. Specify users by providing their MySQL user name. -- ```azurecli-interactive - az mysql server configuration set --name audit_log_include_users --resource-group myresourcegroup --server mydemoserver --value "sampleuser" - ``` --## Next steps -- Learn more about [audit logs](concepts-audit-logs.md) in Azure Database for MySQL-- Learn how to configure audit logs in the [Azure portal](how-to-configure-audit-logs-portal.md) |
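After changing the parameters, it can help to confirm what the server is actually using. A small verification sketch, assuming the same server and resource group names used in the steps above:

```azurecli-interactive
# Confirm audit logging is on
az mysql server configuration show --name audit_log_enabled --resource-group myresourcegroup --server mydemoserver --query value --output tsv

# Review every audit-related parameter in one pass
az mysql server configuration list --resource-group myresourcegroup --server mydemoserver --query "[?contains(name, 'audit_log')].{name:name, value:value}" --output table
```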
mysql | How To Configure Audit Logs Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-audit-logs-portal.md | - Title: Access audit logs - Azure portal - Azure Database for MySQL -description: This article describes how to configure and access the audit logs in Azure Database for MySQL from the Azure portal. ----- Previously updated : 06/20/2022---# Configure and access audit logs for Azure Database for MySQL in the Azure portal ----You can configure the [Azure Database for MySQL audit logs](concepts-audit-logs.md) and diagnostic settings from the Azure portal. --## Prerequisites --To step through this how-to guide, you need: --- [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md)--## Configure audit logging -->[!IMPORTANT] -> It is recommended to only log the event types and users required for your auditing purposes to ensure your server's performance is not heavily impacted. --Enable and configure audit logging. --1. Sign in to the [Azure portal](https://portal.azure.com/). --1. Select your Azure Database for MySQL server. --1. Under the **Settings** section in the sidebar, select **Server parameters**. - :::image type="content" source="./media/how-to-configure-audit-logs-portal/server-parameters.png" alt-text="Server parameters"::: --1. Update the **audit_log_enabled** parameter to ON. - :::image type="content" source="./media/how-to-configure-audit-logs-portal/audit-log-enabled.png" alt-text="Enable audit logs"::: --1. Select the [event types](concepts-audit-logs.md#configure-audit-logging) to be logged by updating the **audit_log_events** parameter. - :::image type="content" source="./media/how-to-configure-audit-logs-portal/audit-log-events.png" alt-text="Audit log events"::: --1. Add any MySQL users to be included or excluded from logging by updating the **audit_log_exclude_users** and **audit_log_include_users** parameters. Specify users by providing their MySQL user name. - :::image type="content" source="./media/how-to-configure-audit-logs-portal/audit-log-exclude-users.png" alt-text="Audit log exclude users"::: --1. Once you have changed the parameters, you can click **Save**. Or you can **Discard** your changes. - :::image type="content" source="./media/how-to-configure-audit-logs-portal/save-parameters.png" alt-text="Save"::: --## Set up diagnostic logs --1. Under the **Monitoring** section in the sidebar, select **Diagnostic settings**. --1. Click on "+ Add diagnostic setting" --1. Provide a diagnostic setting name. --1. Specify which data sinks to send the audit logs (storage account, event hub, and/or Log Analytics workspace). --1. Select "MySqlAuditLogs" as the log type. --1. Once you've configured the data sinks to pipe the audit logs to, you can click **Save**. --1. Access the audit logs by exploring them in the data sinks you configured. It may take up to 10 minutes for the logs to appear. --## Next steps --- Learn more about [audit logs](concepts-audit-logs.md) in Azure Database for MySQL-- Learn how to configure audit logs in the [Azure CLI](how-to-configure-audit-logs-cli.md) |
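The diagnostic setting created in the portal steps can also be scripted. The following is a sketch, not part of the original article; the Log Analytics workspace resource ID is a placeholder you must supply:

```azurecli
# Route MySqlAuditLogs to a Log Analytics workspace (workspace ID is a placeholder)
serverId=$(az mysql server show --resource-group myresourcegroup --name mydemoserver --query id --output tsv)

az monitor diagnostic-settings create \
  --name mysql-audit-logs \
  --resource $serverId \
  --workspace <log-analytics-workspace-resource-id> \
  --logs '[{"category":"MySqlAuditLogs","enabled":true}]'
```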
mysql | How To Configure Private Link Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-private-link-cli.md | - Title: Private Link - Azure CLI - Azure Database for MySQL -description: Learn how to configure private link for Azure Database for MySQL from Azure CLI ------ Previously updated : 06/20/2022---# Create and manage Private Link for Azure Database for MySQL using CLI ----A Private Endpoint is the fundamental building block for private link in Azure. It enables Azure resources, like Virtual Machines (VMs), to communicate privately with private link resources. In this article, you will learn how to use the Azure CLI to create a VM in an Azure Virtual Network and an Azure Database for MySQL server with an Azure private endpoint. --> [!NOTE] -> The private link feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers. ---- This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--## Create a resource group --Before you can create any resource, you have to create a resource group to host the Virtual Network. Create a resource group with [az group create](/cli/azure/group). This example creates a resource group named *myResourceGroup* in the *westeurope* location: --```azurecli-interactive -az group create --name myResourceGroup --location westeurope -``` --## Create a Virtual Network --Create a Virtual Network with [az network vnet create](/cli/azure/network/vnet). This example creates a default Virtual Network named *myVirtualNetwork* with one subnet named *mySubnet*: --```azurecli-interactive -az network vnet create \ - --name myVirtualNetwork \ - --resource-group myResourceGroup \ - --subnet-name mySubnet -``` --## Disable subnet private endpoint policies --Azure deploys resources to a subnet within a virtual network, so you need to create or update the subnet to disable private endpoint [network policies](../../private-link/disable-private-endpoint-network-policy.md). Update a subnet configuration named *mySubnet* with [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update): --```azurecli-interactive -az network vnet subnet update \ - --name mySubnet \ - --resource-group myResourceGroup \ - --vnet-name myVirtualNetwork \ - --disable-private-endpoint-network-policies true -``` --## Create the VM --Create a VM with az vm create. When prompted, provide a password to be used as the sign-in credentials for the VM. This example creates a VM named *myVm*: --```azurecli-interactive -az vm create \ - --resource-group myResourceGroup \ - --name myVm \ - --image Win2019Datacenter -``` --> [!NOTE] -> Note the public IP address of the VM in the command output. You use this address to connect to the VM from the internet in the next step. --## Create an Azure Database for MySQL server --Create an Azure Database for MySQL with the az mysql server create command. 
Remember that the name of your MySQL Server must be unique across Azure, so replace the placeholder value in brackets with your own unique value: --```azurecli-interactive -# Create a server in the resource group --az mysql server create \ --name mydemoserver \ --resource-group myResourceGroup \ --location westeurope \ --admin-user mylogin \ --admin-password <server_admin_password> \ --sku-name GP_Gen5_2 -``` --> [!NOTE] -> In some cases the Azure Database for MySQL and the VNet-subnet are in different subscriptions. In these cases you must ensure the following configurations: -> -> - Make sure that both subscriptions have the **Microsoft.DBforMySQL** resource provider registered. For more information, see [resource-manager-registration][resource-manager-portal] --## Create the Private Endpoint --Create a private endpoint for the MySQL server in your Virtual Network: --```azurecli-interactive -az network private-endpoint create \ - --name myPrivateEndpoint \ - --resource-group myResourceGroup \ - --vnet-name myVirtualNetwork \ - --subnet mySubnet \ - --private-connection-resource-id $(az resource show -g myResourceGroup -n mydemoserver --resource-type "Microsoft.DBforMySQL/servers" --query "id" -o tsv) \ - --group-id mysqlServer \ - --connection-name myConnection - ``` --## Configure the Private DNS Zone --Create a Private DNS Zone for the MySQL server domain and create an association link with the Virtual Network. --```azurecli-interactive -az network private-dns zone create --resource-group myResourceGroup \ - --name "privatelink.mysql.database.azure.com" -az network private-dns link vnet create --resource-group myResourceGroup \ - --zone-name "privatelink.mysql.database.azure.com" \ - --name MyDNSLink \ - --virtual-network myVirtualNetwork \ - --registration-enabled false --# Query for the network interface ID -networkInterfaceId=$(az network private-endpoint show --name myPrivateEndpoint --resource-group myResourceGroup --query 'networkInterfaces[0].id' -o tsv) --az resource show --ids $networkInterfaceId --api-version 2019-04-01 -o json -# Copy the content for privateIPAddress and FQDN matching the Azure Database for MySQL server name --# Create DNS records -az network private-dns record-set a create --name myserver --zone-name privatelink.mysql.database.azure.com --resource-group myResourceGroup -az network private-dns record-set a add-record --record-set-name myserver --zone-name privatelink.mysql.database.azure.com --resource-group myResourceGroup -a <Private IP Address> -``` --> [!NOTE] -> The FQDN in the customer DNS setting doesn't automatically resolve to the configured private IP. You must set up a DNS zone for the configured FQDN as shown [here](../../dns/dns-operations-recordsets-portal.md). --## Connect to a VM from the internet --Connect to the VM *myVm* from the internet as follows: --1. In the portal's search bar, enter *myVm*. --1. Select the **Connect** button. After selecting the **Connect** button, **Connect to virtual machine** opens. --1. Select **Download RDP File**. Azure creates a Remote Desktop Protocol (*.rdp*) file and downloads it to your computer. --1. Open the downloaded *.rdp* file. -- 1. If prompted, select **Connect**. -- 1. Enter the username and password you specified when creating the VM. -- > [!NOTE] - > You may need to select **More choices** > **Use a different account**, to specify the credentials you entered when you created the VM. --1. Select **OK**. --1. You may receive a certificate warning during the sign-in process. 
If you receive a certificate warning, select **Yes** or **Continue**. --1. Once the VM desktop appears, minimize it to go back to your local desktop. --## Access the MySQL server privately from the VM --1. In the Remote Desktop of *myVM*, open PowerShell. --2. Enter  `nslookup mydemomysqlserver.privatelink.mysql.database.azure.com`. -- You'll receive a message similar to this: -- ```azurepowershell - Server: UnKnown - Address: 168.63.129.16 - Non-authoritative answer: - Name: mydemomysqlserver.privatelink.mysql.database.azure.com - Address: 10.1.3.4 - ``` --3. Test the private link connection for the MySQL server using any available client. In the example below I have used [MySQL Workbench](https://dev.mysql.com/doc/workbench/en/wb-installing-windows.html) to do the operation. --4. In **New connection**, enter or select this information: -- | Setting | Value | - | - | -- | - | Connection Name| Select the connection name of your choice.| - | Hostname | Select *mydemoserver.privatelink.mysql.database.azure.com* | - | Username | Enter username as *username@servername* which is provided during the MySQL server creation. | - | Password | Enter a password provided during the MySQL server creation. | - || --5. Select Connect. --6. Browse databases from left menu. --7. (Optionally) Create or query information from the MySQL database. --8. Close the remote desktop connection to myVm. --## Clean up resources --When no longer needed, you can use az group delete to remove the resource group and all the resources it has: --```azurecli-interactive -az group delete --name myResourceGroup --yes -``` --## Next steps --- Learn more about [What is Azure private endpoint](../../private-link/private-endpoint-overview.md)--<!-- Link references, to text, Within this same GitHub repo. --> -[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md |
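Before testing from the VM, it can be useful to confirm that the private endpoint connection was established. A small check, sketched here under the same resource names used above:

```azurecli-interactive
# The expected output is "Approved"
az network private-endpoint show \
  --name myPrivateEndpoint \
  --resource-group myResourceGroup \
  --query "privateLinkServiceConnections[0].privateLinkServiceConnectionState.status" \
  --output tsv
```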
mysql | How To Configure Private Link Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-private-link-portal.md | - Title: Private Link - Azure portal - Azure Database for MySQL -description: Learn how to configure private link for Azure Database for MySQL from Azure portal ----- Previously updated : 06/20/2022---# Create and manage Private Link for Azure Database for MySQL using Portal ----A Private Endpoint is the fundamental building block for private link in Azure. It enables Azure resources, like Virtual Machines (VMs), to communicate privately with private link resources. In this article, you will learn how to use the Azure portal to create a VM in an Azure Virtual Network and an Azure Database for MySQL server with an Azure private endpoint. --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. --> [!NOTE] -> The private link feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers. --## Sign in to Azure -Sign in to the [Azure portal](https://portal.azure.com). --## Create an Azure VM --In this section, you will create virtual network and the subnet to host the VM that is used to access your Private Link resource (a MySQL server in Azure). --### Create the virtual network -In this section, you will create a Virtual Network and the subnet to host the VM that is used to access your Private Link resource. --1. On the upper-left side of the screen, select **Create a resource** > **Networking** > **Virtual network**. -2. In **Create virtual network**, enter or select this information: -- | Setting | Value | - | - | -- | - | Name | Enter *MyVirtualNetwork*. | - | Address space | Enter *10.1.0.0/16*. | - | Subscription | Select your subscription.| - | Resource group | Select **Create new**, enter *myResourceGroup*, then select **OK**. | - | Location | Select **West Europe**.| - | Subnet - Name | Enter *mySubnet*. | - | Subnet - Address range | Enter *10.1.0.0/24*. | - ||| -3. Leave the rest as default and select **Create**. --### Create Virtual Machine --1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Compute** > **Virtual Machine**. --2. In **Create a virtual machine - Basics**, enter or select this information: -- | Setting | Value | - | - | -- | - | **PROJECT DETAILS** | | - | Subscription | Select your subscription. | - | Resource group | Select **myResourceGroup**. You created this in the previous section. | - | **INSTANCE DETAILS** | | - | Virtual machine name | Enter *myVm*. | - | Region | Select **West Europe**. | - | Availability options | Leave the default **No infrastructure redundancy required**. | - | Image | Select **Windows Server 2019 Datacenter**. | - | Size | Leave the default **Standard DS1 v2**. | - | **ADMINISTRATOR ACCOUNT** | | - | Username | Enter a username of your choosing. | - | Password | Enter a password of your choosing. The password must be at least 12 characters long and meet the [defined complexity requirements](../../virtual-machines/windows/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).| - | Confirm Password | Reenter password. | - | **INBOUND PORT RULES** | | - | Public inbound ports | Leave the default **None**. | - | **SAVE MONEY** | | - | Already have a Windows license? 
| Leave the default **No**. | ||| --1. Select **Next: Disks**. --1. In **Create a virtual machine - Disks**, leave the defaults and select **Next: Networking**. --1. In **Create a virtual machine - Networking**, select this information: -- | Setting | Value | - | - | -- | - | Virtual network | Leave the default **MyVirtualNetwork**. | - | Address space | Leave the default **10.1.0.0/24**.| - | Subnet | Leave the default **mySubnet (10.1.0.0/24)**.| - | Public IP | Leave the default **(new) myVm-ip**. | - | Public inbound ports | Select **Allow selected ports**. | - | Select inbound ports | Select **HTTP** and **RDP**.| - ||| ---1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration. --1. When you see the **Validation passed** message, select **Create**. --## Create an Azure Database for MySQL --In this section, you will create an Azure Database for MySQL server in Azure. --1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Databases** > **Azure Database for MySQL**. --1. In **Azure Database for MySQL**, provide this information: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **myResourceGroup**. You created this in the previous section.| - | **Server details** | | - |Server name | Enter *myServer*. If this name is taken, create a unique name.| - | Admin username| Enter an administrator name of your choosing. | - | Password | Enter a password of your choosing. The password must be at least 8 characters long and meet the defined requirements. | - | Location | Select an Azure region where you want your MySQL server to reside. | - |Version | Select the database version of the MySQL server that is required.| - | Compute + Storage| Select the pricing tier that is needed for the server based on the workload. | - ||| - -7. Select **OK**. -8. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration. -9. When you see the **Validation passed** message, select **Create**. --> [!NOTE] -> In some cases the Azure Database for MySQL and the VNet-subnet are in different subscriptions. In these cases you must ensure the following configurations: -> - Make sure that both subscriptions have the **Microsoft.DBforMySQL** resource provider registered. For more information, see [resource-manager-registration][resource-manager-portal] --## Create a private endpoint --In this section, you will add a private endpoint to the MySQL server you created earlier. --1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Networking** > **Private Link**. --2. In **Private Link Center - Overview**, on the option to **Build a private connection to a service**, select **Start**. -- :::image type="content" source="media/concepts-data-access-and-security-private-link/private-link-overview.png" alt-text="Private Link overview"::: --1. In **Create a private endpoint - Basics**, enter or select this information: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **myResourceGroup**. You created this in the previous section.| - | **Instance Details** | | - | Name | Enter *myPrivateEndpoint*. If this name is taken, create a unique name. | - |Region|Select **West Europe**.| - ||| --5. 
Select **Next: Resource**. -6. In **Create a private endpoint - Resource**, enter or select this information: -- | Setting | Value | - | - | -- | - |Connection method | Select **Connect to an Azure resource in my directory**.| - | Subscription| Select your subscription. | - | Resource type | Select **Microsoft.DBforMySQL/servers**. | - | Resource |Select *myServer*| - |Target sub-resource |Select *mysqlServer*| - ||| -7. Select **Next: Configuration**. -8. In **Create a private endpoint - Configuration**, enter or select this information: -- | Setting | Value | - | - | -- | - |**NETWORKING**| | - | Virtual network| Select *MyVirtualNetwork*. | - | Subnet | Select *mySubnet*. | - |**PRIVATE DNS INTEGRATION**|| - |Integrate with private DNS zone |Select **Yes**. | - |Private DNS Zone |Select *(New)privatelink.mysql.database.azure.com* | - ||| -- > [!NOTE] - > Use the predefined private DNS zone for your service or provide your preferred DNS zone name. Refer to the [Azure services DNS zone configuration](../../private-link/private-endpoint-dns.md) for details. --1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration. -2. When you see the **Validation passed** message, select **Create**. -- :::image type="content" source="media/concepts-data-access-and-security-private-link/show-mysql-private-link.png" alt-text="Private Link created"::: -- > [!NOTE] - > The FQDN in the customer DNS setting doesn't automatically resolve to the configured private IP. You must set up a DNS zone for the configured FQDN as shown [here](../../dns/dns-operations-recordsets-portal.md). --## Connect to a VM using Remote Desktop (RDP) ---After you've created **myVm**, connect to it from the internet as follows: --1. In the portal's search bar, enter *myVm*. --1. Select the **Connect** button. After selecting the **Connect** button, **Connect to virtual machine** opens. --1. Select **Download RDP File**. Azure creates a Remote Desktop Protocol (*.rdp*) file and downloads it to your computer. --1. Open the downloaded *.rdp* file. -- 1. If prompted, select **Connect**. -- 1. Enter the username and password you specified when creating the VM. -- > [!NOTE] - > You may need to select **More choices** > **Use a different account**, to specify the credentials you entered when you created the VM. --1. Select **OK**. --1. You may receive a certificate warning during the sign-in process. If you receive a certificate warning, select **Yes** or **Continue**. --1. Once the VM desktop appears, minimize it to go back to your local desktop. --## Access the MySQL server privately from the VM --1. In the Remote Desktop of *myVM*, open PowerShell. --2. Enter `nslookup myServer.privatelink.mysql.database.azure.com`. -- You'll receive a message similar to this: - ```azurepowershell - Server: UnKnown - Address: 168.63.129.16 - Non-authoritative answer: - Name: myServer.privatelink.mysql.database.azure.com - Address: 10.1.3.4 - ``` - > [!NOTE] - > Even if public access is disabled in the firewall settings of Azure Database for MySQL - Single Server, these ping and telnet tests will succeed. The tests validate network connectivity only, regardless of the firewall settings. --3. Test the private link connection for the MySQL server using any available client. The following example uses [MySQL Workbench](https://dev.mysql.com/doc/workbench/en/wb-installing-windows.html). --4. 
In **New connection**, enter or select this information: -- | Setting | Value | - | - | -- | - | Server type| Select **MySQL**.| - | Server name| Select *myServer.privatelink.mysql.database.azure.com* | - | User name | Enter username as username@servername which is provided during the MySQL server creation. | - |Password |Enter a password provided during the MySQL server creation. | - |SSL|Select **Required**.| - || --5. Select Connect. --6. Browse databases from left menu. --7. (Optionally) Create or query information from the MySQL server. --8. Close the remote desktop connection to myVm. --## Clean up resources -When you're done using the private endpoint, MySQL server, and the VM, delete the resource group and all of the resources it contains: --1. Enter *myResourceGroup* in the **Search** box at the top of the portal and select *myResourceGroup* from the search results. -2. Select **Delete resource group**. -3. Enter myResourceGroup for **TYPE THE RESOURCE GROUP NAME** and select **Delete**. --## Next steps --In this how-to, you created a VM on a virtual network, an Azure Database for MySQL, and a private endpoint for private access. You connected to one VM from the internet and securely communicated to the MySQL server using Private Link. To learn more about private endpoints, see [What is Azure private endpoint](../../private-link/private-endpoint-overview.md). --<!-- Link references, to text, Within this same GitHub repo. --> -[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md |
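If you also have the Azure CLI available, one way to double-check the private DNS integration created in the portal is to list the A records in the private zone. This is a supplementary sketch using the resource names from this article:

```azurecli
# List the A records created by the private DNS zone integration
az network private-dns record-set a list \
  --resource-group myResourceGroup \
  --zone-name privatelink.mysql.database.azure.com \
  --output table
```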
mysql | How To Configure Server Logs In Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-logs-in-cli.md | - Title: Access slow query logs - Azure CLI - Azure Database for MySQL -description: This article describes how to access the slow query logs in Azure Database for MySQL by using the Azure CLI. ------ Previously updated : 06/20/2022---# Configure and access slow query logs by using Azure CLI ----You can download the Azure Database for MySQL slow query logs by using Azure CLI, the Azure command-line utility. --## Prerequisites -To step through this how-to guide, you need: -- [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-cli.md)-- The [Azure CLI](/cli/azure/install-azure-cli) or Azure Cloud Shell in the browser--## Configure logging -You can configure the server to access the MySQL slow query log by taking the following steps: -1. Turn on slow query logging by setting the **slow\_query\_log** parameter to ON. -2. Select where to output the logs to using **log\_output**. To send logs to both local storage and Azure Monitor Diagnostic Logs, select **File**. To send logs only to Azure Monitor Logs, select **None** -3. Adjust other parameters, such as **long\_query\_time** and **log\_slow\_admin\_statements**. --To learn how to set the value of these parameters through Azure CLI, see [How to configure server parameters](how-to-configure-server-parameters-using-cli.md). --For example, the following CLI command turns on the slow query log, sets the long query time to 10 seconds, and then turns off the logging of the slow admin statement. Finally, it lists the configuration options for your review. -```azurecli-interactive -az mysql server configuration set --name slow_query_log --resource-group myresourcegroup --server mydemoserver --value ON -az mysql server configuration set --name log_output --resource-group myresourcegroup --server mydemoserver --value FILE -az mysql server configuration set --name long_query_time --resource-group myresourcegroup --server mydemoserver --value 10 -az mysql server configuration set --name log_slow_admin_statements --resource-group myresourcegroup --server mydemoserver --value OFF -az mysql server configuration list --resource-group myresourcegroup --server mydemoserver -``` --## List logs for Azure Database for MySQL server -If **log_output** is configured to "File", you can access logs directly from the server's local storage. To list the available slow query log files for your server, run the [az mysql server-logs list](/cli/azure/mysql/server-logs#az-mysql-server-logs-list) command. --You can list the log files for server **mydemoserver.mysql.database.azure.com** under the resource group **myresourcegroup**. Then direct the list of log files to a text file called **log\_files\_list.txt**. -```azurecli-interactive -az mysql server-logs list --resource-group myresourcegroup --server mydemoserver > log_files_list.txt -``` -## Download logs from the server -If **log_output** is configured to "File", you can download individual log files from your server with the [az mysql server-logs download](/cli/azure/mysql/server-logs#az-mysql-server-logs-download) command. --Use the following example to download the specific log file for the server **mydemoserver.mysql.database.azure.com** under the resource group **myresourcegroup** to your local environment. 
-```azurecli-interactive -az mysql server-logs download --name 20170414-mydemoserver-mysql.log --resource-group myresourcegroup --server mydemoserver -``` --## Next steps -- Learn about [slow query logs in Azure Database for MySQL](concepts-server-logs.md). |
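When many log files have accumulated, the list can be narrowed with a JMESPath query before choosing which file to download. A small sketch, not in the original article, using the same server:

```azurecli-interactive
# Print only the log file names so you can pick one to download
az mysql server-logs list \
  --resource-group myresourcegroup \
  --server mydemoserver \
  --query "[].name" \
  --output tsv
```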
mysql | How To Configure Server Logs In Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-logs-in-portal.md | - Title: Access slow query logs - Azure portal - Azure Database for MySQL -description: This article describes how to configure and access the slow logs in Azure Database for MySQL from the Azure portal. ----- Previously updated : 06/20/2022---# Configure and access slow query logs from the Azure portal ----You can configure, list, and download the [Azure Database for MySQL slow query logs](concepts-server-logs.md) from the Azure portal. --## Prerequisites -The steps in this article require that you have an [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md). --## Configure logging -Configure access to the MySQL slow query log. --1. Sign in to the [Azure portal](https://portal.azure.com/). --2. Select your Azure Database for MySQL server. --3. Under the **Monitoring** section in the sidebar, select **Server logs**. - :::image type="content" source="./media/how-to-configure-server-logs-in-portal/1-select-server-logs-configure.png" alt-text="Screenshot of Server logs options"::: --4. To see the server parameters, select **Click here to enable logs and configure log parameters**. --5. Turn **slow_query_log** to **ON**. --6. Select where to output the logs to using **log_output**. To send logs to both local storage and Azure Monitor Diagnostic Logs, select **File**. --7. Consider setting **long_query_time**, which represents the query time threshold (in seconds) for queries to be collected in the slow query log file. The minimum and default values of long_query_time are 0 and 10, respectively. --8. Adjust other parameters, such as **log_slow_admin_statements**, to log administrative statements. By default, administrative statements are not logged, nor are queries that do not use indexes for lookups. --9. Select **Save**. -- :::image type="content" source="./media/how-to-configure-server-logs-in-portal/3-save-discard.png" alt-text="Screenshot of slow query log parameters and save."::: --From the **Server Parameters** page, you can return to the list of logs by closing the page. --## View list and download logs --After logging begins, you can view a list of available slow query logs, and download individual log files. --1. Open the Azure portal. --2. Select your Azure Database for MySQL server. --3. Under the **Monitoring** section in the sidebar, select **Server logs**. The page shows a list of your log files. -- :::image type="content" source="./media/how-to-configure-server-logs-in-portal/4-server-logs-list.png" alt-text="Screenshot of Server logs page, with list of logs highlighted"::: -- > [!TIP] - > The naming convention of the log is **mysql-slow-<your server name>-yyyymmddhh.log**. The date and time used in the file name is the time when the log was issued. Log files are rotated every 24 hours or 7.5 GB, whichever comes first. --4. If needed, use the search box to quickly narrow down to a specific log, based on date and time. The search is on the name of the log. --5. To download individual log files, select the down-arrow icon next to each log file in the table row. -- :::image type="content" source="./media/how-to-configure-server-logs-in-portal/5-download.png" alt-text="Screenshot of Server logs page, with down-arrow icon highlighted"::: --## Set up diagnostic logs --1. Under the **Monitoring** section in the sidebar, select **Diagnostic settings** > **Add diagnostic settings**. 
-- :::image type="content" source="./media/how-to-configure-server-logs-in-portal/add-diagnostic-setting.png" alt-text="Screenshot of Diagnostic settings options"::: --2. Provide a diagnostic setting name. --3. Specify which data sinks to send the slow query logs (storage account, event hub, or Log Analytics workspace). --4. Select **MySqlSlowLogs** as the log type. --5. After you've configured the data sinks to pipe the slow query logs to, select **Save**. --6. Access the slow query logs by exploring them in the data sinks you configured. It can take up to 10 minutes for the logs to appear. --## Next steps -- See [Access slow query Logs in CLI](how-to-configure-server-logs-in-cli.md) to learn how to download slow query logs programmatically.-- Learn more about [slow query logs](concepts-server-logs.md) in Azure Database for MySQL.-- For more information about the parameter definitions and MySQL logging, see the MySQL documentation on [logs](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html). |
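As with audit logs, this diagnostic setting can be scripted instead of configured in the portal. The following sketch is not part of the original article; the storage account name is a placeholder, and the server names match the rest of this section:

```azurecli
# Route MySqlSlowLogs to a storage account (the account name is a placeholder)
serverId=$(az mysql server show --resource-group myresourcegroup --name mydemoserver --query id --output tsv)

az monitor diagnostic-settings create \
  --name mysql-slow-logs \
  --resource $serverId \
  --storage-account <storage-account-name-or-id> \
  --logs '[{"category":"MySqlSlowLogs","enabled":true}]'
```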
mysql | How To Configure Server Parameters Using Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-parameters-using-cli.md | - Title: Configure server parameters - Azure CLI - Azure Database for MySQL -description: This article describes how to configure the service parameters in Azure Database for MySQL using the Azure CLI command line utility. ----- Previously updated : 06/20/2022----# Configure server parameters in Azure Database for MySQL using the Azure CLI ----You can list, show, and update configuration parameters for an Azure Database for MySQL server by using Azure CLI, the Azure command-line utility. A subset of engine configurations is exposed at the server-level and can be modified. -->[!NOTE] -> Server parameters can be updated globally at the server-level, use the [Azure CLI](./how-to-configure-server-parameters-using-cli.md), [PowerShell](./how-to-configure-server-parameters-using-powershell.md), or [Azure portal](./how-to-server-parameters.md) --## Prerequisites -To step through this how-to guide, you need: -- [An Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-cli.md)-- [Azure CLI](/cli/azure/install-azure-cli) command-line utility or use the Azure Cloud Shell in the browser.--## List server configuration parameters for Azure Database for MySQL server -To list all modifiable parameters in a server and their values, run the [az mysql server configuration list](/cli/azure/mysql/server/configuration#az-mysql-server-configuration-list) command. --You can list the server configuration parameters for the server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup**. -```azurecli-interactive -az mysql server configuration list --resource-group myresourcegroup --server mydemoserver -``` -For the definition of each of the listed parameters, see the MySQL reference section on [Server System Variables](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html). --## Show server configuration parameter details -To show details about a particular configuration parameter for a server, run the [az mysql server configuration show](/cli/azure/mysql/server/configuration#az-mysql-server-configuration-show) command. --This example shows details of the **slow\_query\_log** server configuration parameter for server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup.** -```azurecli-interactive -az mysql server configuration show --name slow_query_log --resource-group myresourcegroup --server mydemoserver -``` -## Modify a server configuration parameter value -You can also modify the value of a certain server configuration parameter, which updates the underlying configuration value for the MySQL server engine. To update the configuration, use the [az mysql server configuration set](/cli/azure/mysql/server/configuration#az-mysql-server-configuration-set) command. --To update the **slow\_query\_log** server configuration parameter of server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup.** -```azurecli-interactive -az mysql server configuration set --name slow_query_log --resource-group myresourcegroup --server mydemoserver --value ON -``` -If you want to reset the value of a configuration parameter, omit the optional `--value` parameter, and the service applies the default value. 
For the example above, it would look like: -```azurecli-interactive -az mysql server configuration set --name slow_query_log --resource-group myresourcegroup --server mydemoserver -``` -This code resets the **slow\_query\_log** configuration to the default value **OFF**. --## Setting parameters not listed -If the server parameter you want to update is not listed in the Azure portal, you can optionally set the parameter at the connection level using `init_connect`. This sets the server parameters for each client connecting to the server. --Update the **init\_connect** server configuration parameter of server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup** to set values such as character set. -```azurecli-interactive -az mysql server configuration set --name init_connect --resource-group myresourcegroup --server mydemoserver --value "SET character_set_client=utf8;SET character_set_database=utf8mb4;SET character_set_connection=latin1;SET character_set_results=latin1;" -``` --## Working with the time zone parameter --### Populating the time zone tables --The time zone tables on your server can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench. --> [!NOTE] -> If you are running the `mysql.az_load_timezone` command from MySQL Workbench, you may need to turn off safe update mode first using `SET SQL_SAFE_UPDATES=0;`. --```sql -CALL mysql.az_load_timezone(); -``` --> [!IMPORTANT] -> You should restart the server to ensure the time zone tables are properly populated. To restart the server, use the [Azure portal](how-to-restart-server-portal.md) or [CLI](how-to-restart-server-cli.md). --To view available time zone values, run the following command: --```sql -SELECT name FROM mysql.time_zone_name; -``` --### Setting the global level time zone --The global level time zone can be set using the [az mysql server configuration set](/cli/azure/mysql/server/configuration#az-mysql-server-configuration-set) command. --The following command updates the **time\_zone** server configuration parameter of server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup** to **US/Pacific**. --```azurecli-interactive -az mysql server configuration set --name time_zone --resource-group myresourcegroup --server mydemoserver --value "US/Pacific" -``` --### Setting the session level time zone --The session level time zone can be set by running the `SET time_zone` command from a tool like the MySQL command line or MySQL Workbench. The example below sets the time zone to the **US/Pacific** time zone. --```sql -SET time_zone = 'US/Pacific'; -``` --Refer to the MySQL documentation for [Date and Time Functions](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_convert-tz). ---## Next steps --- How to configure [server parameters in Azure portal](how-to-server-parameters.md) |
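When deciding what a parameter can be set to, the configuration object itself carries the default and allowed values. A quick inspection sketch, not part of the original article, for the time zone parameter discussed above:

```azurecli-interactive
# Show the current, default, and allowed values for time_zone
az mysql server configuration show \
  --name time_zone \
  --resource-group myresourcegroup \
  --server mydemoserver \
  --query "{value:value, default:defaultValue, allowed:allowedValues}" \
  --output jsonc
```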
mysql | How To Configure Server Parameters Using Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-parameters-using-powershell.md | - Title: Configure server parameters - Azure PowerShell - Azure Database for MySQL -description: This article describes how to configure the service parameters in Azure Database for MySQL using PowerShell. ----- Previously updated : 06/20/2022----# Configure server parameters in Azure Database for MySQL using PowerShell ----You can list, show, and update configuration parameters for an Azure Database for MySQL server using -PowerShell. A subset of engine configurations is exposed at the server-level and can be modified. -->[!NOTE] -> Server parameters can be updated globally at the server-level, use the [Azure CLI](./how-to-configure-server-parameters-using-cli.md), [PowerShell](./how-to-configure-server-parameters-using-powershell.md), or [Azure portal](./how-to-server-parameters.md). --## Prerequisites --To complete this how-to guide, you need: --- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed locally or- [Azure Cloud Shell](https://shell.azure.com/) in the browser -- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)--> [!IMPORTANT] -> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az -> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`. -> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az -> PowerShell module releases and available natively from within Azure Cloud Shell. --If you choose to use PowerShell locally, connect to your Azure account using the -[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. ---## List server configuration parameters for Azure Database for MySQL server --To list all modifiable parameters in a server and their values, run the `Get-AzMySqlConfiguration` -cmdlet. --The following example lists the server configuration parameters for the server **mydemoserver** in -resource group **myresourcegroup**. --```azurepowershell-interactive -Get-AzMySqlConfiguration -ResourceGroupName myresourcegroup -ServerName mydemoserver -``` --For the definition of each of the listed parameters, see the MySQL reference section on -[Server System Variables](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html). --## Show server configuration parameter details --To show details about a particular configuration parameter for a server, run the -`Get-AzMySqlConfiguration` cmdlet and specify the **Name** parameter. --This example shows details of the **slow\_query\_log** server configuration parameter for server -**mydemoserver** under resource group **myresourcegroup**. --```azurepowershell-interactive -Get-AzMySqlConfiguration -Name slow_query_log -ResourceGroupName myresourcegroup -ServerName mydemoserver -``` --## Modify a server configuration parameter value --You can also modify the value of a certain server configuration parameter, which updates the -underlying configuration value for the MySQL server engine. To update the configuration, use the -`Update-AzMySqlConfiguration` cmdlet. --To update the **slow\_query\_log** server configuration parameter of server -**mydemoserver** under resource group **myresourcegroup**. 
--```azurepowershell-interactive -Update-AzMySqlConfiguration -Name slow_query_log -ResourceGroupName myresourcegroup -ServerName mydemoserver -Value On -``` --## Next steps --> [!div class="nextstepaction"] -> [Auto grow storage in Azure Database for MySQL server using PowerShell](how-to-auto-grow-storage-powershell.md). |
mysql | How To Configure Sign In Azure Ad Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-sign-in-azure-ad-authentication.md | - Title: Use Microsoft Entra ID - Azure Database for MySQL -description: Learn about how to set up Microsoft Entra ID for authentication with Azure Database for MySQL ----- Previously updated : 06/20/2022---# Use Microsoft Entra ID for authentication with MySQL ----This article walks you through the steps to configure Microsoft Entra ID access with Azure Database for MySQL, and how to connect using a Microsoft Entra token. --> [!IMPORTANT] -> Microsoft Entra authentication is only available for MySQL 5.7 and newer. --<a name='setting-the-azure-ad-admin-user'></a> --## Setting the Microsoft Entra Admin user --Only a Microsoft Entra Admin user can create/enable users for Microsoft Entra ID-based authentication. To create a Microsoft Entra Admin user, follow these steps: --1. In the Azure portal, select the instance of Azure Database for MySQL that you want to enable for Microsoft Entra ID. -2. Under Settings, select Active Directory Admin: --![set Microsoft Entra administrator][2] --3. Select a valid Microsoft Entra user in the customer tenant to be the Microsoft Entra administrator. --> [!IMPORTANT] -> When setting the administrator, a new user is added to the Azure Database for MySQL server with full administrator permissions. --Only one Microsoft Entra admin can be created per MySQL server; selecting another one overwrites the existing Microsoft Entra admin configured for the server. --After configuring the administrator, you can now sign in: --<a name='connecting-to-azure-database-for-mysql-using-azure-ad'></a> --## Connecting to Azure Database for MySQL using Microsoft Entra ID --The following high-level diagram summarizes the workflow of using Microsoft Entra authentication with Azure Database for MySQL: --![authentication flow][1] --We've designed the Microsoft Entra integration to work with common MySQL tools like the mysql CLI, which are not Microsoft Entra aware and only support specifying username and password when connecting to MySQL. We pass the Microsoft Entra token as the password as shown in the picture above. --We have currently tested the following clients: --- MySQL Workbench -- MySQL CLI--We have also tested the most common application drivers; you can see details at the end of this page. --The steps that a user or application needs to take to authenticate with Microsoft Entra ID are described below: --### Prerequisites --You can follow along in Azure Cloud Shell, an Azure VM, or on your local machine. Make sure you have the [Azure CLI installed](/cli/azure/install-azure-cli). --<a name='step-1-authenticate-with-azure-ad'></a> --### Step 1: Authenticate with Microsoft Entra ID --Start by authenticating with Microsoft Entra ID using the Azure CLI tool. This step is not required in Azure Cloud Shell. --``` -az login -``` --The command will launch a browser window to the Microsoft Entra authentication page. It requires you to provide your Microsoft Entra user ID and password. --<a name='step-2-retrieve-azure-ad-access-token'></a> --### Step 2: Retrieve Microsoft Entra access token --Invoke the Azure CLI tool to acquire an access token for the Microsoft Entra authenticated user from step 1 to access Azure Database for MySQL. 
--Example (for Public Cloud): --```azurecli-interactive -az account get-access-token --resource https://ossrdbms-aad.database.windows.net -``` -The above resource value must be specified exactly as shown. For other clouds, the resource value can be looked up using: --```azurecli-interactive -az cloud show -``` --For Azure CLI version 2.0.71 and later, the command can be specified in the following more convenient version for all clouds: --```azurecli-interactive -az account get-access-token --resource-type oss-rdbms -``` -In PowerShell, you can use the following command to acquire an access token: --```azurepowershell-interactive -$accessToken = Get-AzAccessToken -ResourceUrl https://ossrdbms-aad.database.windows.net -$accessToken.Token | out-file C:\temp\MySQLAccessToken.txt -``` ---After authentication is successful, Microsoft Entra ID will return an access token: --```json -{ - "accessToken": "TOKEN", - "expiresOn": "...", - "subscription": "...", - "tenant": "...", - "tokenType": "Bearer" -} -``` --The token is a Base64 string that encodes all the information about the authenticated user, and which is targeted to the Azure Database for MySQL service. --The access token validity is anywhere between ***5 minutes and 60 minutes***. We recommend you get the access token just before initiating the login to Azure Database for MySQL. You can use the following PowerShell command to see the token validity. --```azurepowershell-interactive -$accessToken.ExpiresOn.DateTime -``` --### Step 3: Use token as password for logging in with MySQL --When connecting, you need to use the access token as the MySQL user password. When using GUI clients such as MySQL Workbench, you can use the method described above to retrieve the token. --#### Using MySQL CLI -When using the CLI, you can use this short-hand to connect: --**Example (Linux/macOS):** -``` -mysql -h mydb.mysql.database.azure.com \ - --user user@tenant.onmicrosoft.com@mydb \ - --enable-cleartext-plugin \ - --password=`az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken` -``` -#### Using MySQL Workbench -* Launch MySQL Workbench, select the **Database** option, then select **Connect to database** -* In the hostname field, enter the MySQL FQDN, for example, mydb.mysql.database.azure.com -* In the username field, enter the MySQL Microsoft Entra administrator name and append the MySQL server name (not the FQDN), for example, user@tenant.onmicrosoft.com@mydb -* In the password field, select **Store in Vault** and paste in the access token from the file, for example, C:\temp\MySQLAccessToken.txt -* Select the **Advanced** tab and make sure **Enable Cleartext Authentication Plugin** is checked -* Select **OK** to connect to the database --#### Important considerations when connecting: --* `user@tenant.onmicrosoft.com` is the name of the Microsoft Entra user or group you are trying to connect as -* Always append the server name after the Microsoft Entra user/group name (e.g. `@mydb`) -* Make sure to use the exact way the Microsoft Entra user or group name is spelled -* Microsoft Entra user and group names are case sensitive -* When connecting as a group, use only the group name (e.g. `GroupName@mydb`) -* If the name contains spaces, use `\` before each space to escape it --Note the "enable-cleartext-plugin" setting: you need to use a similar configuration with other clients to make sure the token gets sent to the server without being hashed. --You are now authenticated to your MySQL server using Microsoft Entra authentication. 
--<a name='creating-azure-ad-users-in-azure-database-for-mysql'></a> --## Creating Microsoft Entra users in Azure Database for MySQL --To add a Microsoft Entra user to your Azure Database for MySQL database, perform the following steps after connecting (see the previous section on how to connect): --1. First ensure that the Microsoft Entra user `<user>@yourtenant.onmicrosoft.com` is a valid user in the Microsoft Entra tenant. -2. Sign in to your Azure Database for MySQL instance as the Microsoft Entra Admin user. -3. Create user `<user>@yourtenant.onmicrosoft.com` in Azure Database for MySQL. --**Example:** --```sql -CREATE AADUSER 'user1@yourtenant.onmicrosoft.com'; -``` --For user names that exceed 32 characters, we recommend using an alias instead when connecting: --Example: --```sql -CREATE AADUSER 'userWithLongName@yourtenant.onmicrosoft.com' as 'userDefinedShortName'; -``` -> [!NOTE] -> 1. MySQL ignores leading and trailing spaces, so the user name should not have any leading or trailing spaces. -> 2. Authenticating a user through Microsoft Entra ID does not give the user any permissions to access objects within the Azure Database for MySQL database. You must grant the user the required permissions manually. --<a name='creating-azure-ad-groups-in-azure-database-for-mysql'></a> --## Creating Microsoft Entra groups in Azure Database for MySQL --To enable a Microsoft Entra group for access to your database, use the same mechanism as for users, but instead specify the group name: --**Example:** --```sql -CREATE AADUSER 'Prod_DB_Readonly'; -``` --When logging in, members of the group use their personal access tokens, but sign in with the group name specified as the username. --## Token Validation --Microsoft Entra authentication in Azure Database for MySQL ensures that the user exists in the MySQL server, and it checks the validity of the token by validating its contents. The following token validation steps are performed: --- Token is signed by Microsoft Entra ID and has not been tampered with-- Token was issued by Microsoft Entra ID for the tenant associated with the server-- Token has not expired-- Token is for the Azure Database for MySQL resource (and not another Azure resource)--## Compatibility with application drivers --Most drivers are supported; however, make sure to use the settings for sending the password in cleartext, so the token gets sent without modification. --* C/C++ - * libmysqlclient: Supported - * mysql-connector-c++: Supported -* Java - * Connector/J (mysql-connector-java): Supported, must use the `useSSL` setting -* Python - * Connector/Python: Supported -* Ruby - * mysql2: Supported -* .NET - * mysql-connector-net: Supported, requires a plugin for mysql_clear_password - * mysql-net/MySqlConnector: Supported -* Node.js - * mysqljs: Not supported (does not send token in cleartext without patch) - * node-mysql2: Supported -* Perl - * DBD::mysql: Supported - * Net::MySQL: Not supported -* Go - * go-sql-driver: Supported, add `?tls=true&allowCleartextPasswords=true` to the connection string --## Next steps --* Review the overall concepts for [Microsoft Entra authentication with Azure Database for MySQL](concepts-azure-ad-authentication.md) --<!--Image references--> --[1]: ./media/concepts-azure-ad-authentication/authentication-flow.png -[2]: ./media/concepts-azure-ad-authentication/set-azure-ad-admin.png |
mysql | How To Configure Ssl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-ssl.md | - Title: Configure SSL - Azure Database for MySQL -description: Instructions for how to properly configure Azure Database for MySQL and associated applications to correctly use SSL connections ------# ms.devlang: csharp, golang, java, javascript, php, python, ruby - Previously updated : 06/20/2022---# Configure SSL connectivity in your application to securely connect to Azure Database for MySQL ----Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against "man in the middle" attacks by encrypting the data stream between the server and your application. --## Step 1: Obtain SSL certificate --Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and save the certificate file to your local drive (this tutorial uses c:\ssl for example). -**For Microsoft Internet Explorer and Microsoft Edge:** After the download has completed, rename the certificate to DigiCertGlobalRootG2.crt.pem. --See the following links for certificates for servers in sovereign clouds: [Azure Government](https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt.pem), [Microsoft Azure operated by 21Vianet](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt). --## Step 2: Bind SSL --For specific programming language connection strings, refer to the [sample code](how-to-configure-ssl.md#sample-code) below. --### Connecting to server using MySQL Workbench over SSL --Configure MySQL Workbench to connect securely over SSL. --1. From the Setup New Connection dialogue, navigate to the **SSL** tab. --1. Update the **Use SSL** field to "Require". --1. In the **SSL CA File:** field, enter the file location of the **DigiCertGlobalRootG2.crt.pem**. -- :::image type="content" source="./media/how-to-configure-ssl/mysql-workbench-ssl.png" alt-text="Save SSL configuration"::: --For existing connections, you can bind SSL by right-clicking on the connection icon and choosing edit. Then navigate to the **SSL** tab and bind the cert file. --### Connecting to server using the MySQL CLI over SSL --Another way to bind the SSL certificate is to use the MySQL command-line interface by executing the following commands. --```bash -mysql.exe -h mydemoserver.mysql.database.azure.com -u Username@mydemoserver -p --ssl-mode=REQUIRED --ssl-ca=c:\ssl\DigiCertGlobalRootG2.crt.pem -``` --> [!NOTE] -> When using the MySQL command-line interface on Windows, you may receive an error `SSL connection error: Certificate signature check failed`. If this occurs, replace the `--ssl-mode=REQUIRED --ssl-ca={filepath}` parameters with `--ssl`. --## Step 3: Enforcing SSL connections in Azure --### Using the Azure portal --Using the Azure portal, visit your Azure Database for MySQL server, and then click **Connection security**. Use the toggle button to enable or disable the **Enforce SSL connection** setting, and then click **Save**. Microsoft recommends always enabling the **Enforce SSL connection** setting for enhanced security. 
---### Using Azure CLI --You can enable or disable the **ssl-enforcement** parameter by using Enabled or Disabled values respectively in Azure CLI. --```azurecli-interactive -az mysql server update --resource-group myresource --name mydemoserver --ssl-enforcement Enabled -``` --## Step 4: Verify the SSL connection --Execute the mysql **status** command to verify that you have connected to your MySQL server using SSL: --```dos -mysql> status -``` --Confirm the connection is encrypted by reviewing the output, which should show: **SSL: Cipher in use is AES256-SHA** --## Sample code --To establish a secure connection to Azure Database for MySQL over SSL from your application, refer to the following code samples: --Refer to the list of [compatible drivers](concepts-compatibility.md) supported by the Azure Database for MySQL service. --### PHP --```php -$conn = mysqli_init(); -mysqli_ssl_set($conn,NULL,NULL, "/var/www/html/DigiCertGlobalRootG2.crt.pem", NULL, NULL); -mysqli_real_connect($conn, 'mydemoserver.mysql.database.azure.com', 'myadmin@mydemoserver', 'yourpassword', 'quickstartdb', 3306, MYSQLI_CLIENT_SSL); -if (mysqli_connect_errno()) { -die('Failed to connect to MySQL: '.mysqli_connect_error()); -} -``` --### PHP (Using PDO) --```phppdo -$options = array( - PDO::MYSQL_ATTR_SSL_CA => '/var/www/html/DigiCertGlobalRootG2.crt.pem' -); -$db = new PDO('mysql:host=mydemoserver.mysql.database.azure.com;port=3306;dbname=databasename', 'username@mydemoserver', 'yourpassword', $options); -``` --### Python (MySQLConnector Python) --```python -try: - conn = mysql.connector.connect(user='myadmin@mydemoserver', - password='yourpassword', - database='quickstartdb', - host='mydemoserver.mysql.database.azure.com', - ssl_ca='/var/www/html/DigiCertGlobalRootG2.crt.pem') -except mysql.connector.Error as err: - print(err) -``` --### Python (PyMySQL) --```python -conn = pymysql.connect(user='myadmin@mydemoserver', - password='yourpassword', - database='quickstartdb', - host='mydemoserver.mysql.database.azure.com', - ssl={'ca': '/var/www/html/DigiCertGlobalRootG2.crt.pem'}) -``` --### Django (PyMySQL) --```python -DATABASES = { - 'default': { - 'ENGINE': 'django.db.backends.mysql', - 'NAME': 'quickstartdb', - 'USER': 'myadmin@mydemoserver', - 'PASSWORD': 'yourpassword', - 'HOST': 'mydemoserver.mysql.database.azure.com', - 'PORT': '3306', - 'OPTIONS': { - 'ssl': {'ca': '/var/www/html/DigiCertGlobalRootG2.crt.pem'} - } - } -} -``` --### Ruby --```ruby -client = Mysql2::Client.new( - :host => 'mydemoserver.mysql.database.azure.com', - :username => 'myadmin@mydemoserver', - :password => 'yourpassword', - :database => 'quickstartdb', - :sslca => '/var/www/html/DigiCertGlobalRootG2.crt.pem' - ) -``` --### Golang --```go -rootCertPool := x509.NewCertPool() -pem, _ := ioutil.ReadFile("/var/www/html/DigiCertGlobalRootG2.crt.pem") -if ok := rootCertPool.AppendCertsFromPEM(pem); !ok { - log.Fatal("Failed to append PEM.") -} -mysql.RegisterTLSConfig("custom", &tls.Config{RootCAs: rootCertPool}) -var connectionString string -connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true&tls=custom", "myadmin@mydemoserver", "yourpassword", "mydemoserver.mysql.database.azure.com", "quickstartdb") -db, _ := sql.Open("mysql", connectionString) -``` --### Java (MySQL Connector for Java) --```java -// generate truststore and keystore in code --String importCert = " -import "+ - " -alias mysqlServerCACert "+ - " -file " + ssl_ca + - " -keystore truststore "+ - " -trustcacerts " + - " -storepass password -noprompt 
"; -String genKey = " -genkey -keyalg rsa " + - " -alias mysqlClientCertificate -keystore keystore " + - " -storepass password123 -keypass password " + - " -dname CN=MS "; -sun.security.tools.keytool.Main.main(importCert.trim().split("\\s+")); -sun.security.tools.keytool.Main.main(genKey.trim().split("\\s+")); --# use the generated keystore and truststore --System.setProperty("javax.net.ssl.keyStore","path_to_keystore_file"); -System.setProperty("javax.net.ssl.keyStorePassword","password"); -System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file"); -System.setProperty("javax.net.ssl.trustStorePassword","password"); --url = String.format("jdbc:mysql://%s/%s?serverTimezone=UTC&useSSL=true", 'mydemoserver.mysql.database.azure.com', 'quickstartdb'); -properties.setProperty("user", 'myadmin@mydemoserver'); -properties.setProperty("password", 'yourpassword'); -conn = DriverManager.getConnection(url, properties); -``` --### Java (MariaDB Connector for Java) --```java -# generate truststore and keystore in code --String importCert = " -import "+ - " -alias mysqlServerCACert "+ - " -file " + ssl_ca + - " -keystore truststore "+ - " -trustcacerts " + - " -storepass password -noprompt "; -String genKey = " -genkey -keyalg rsa " + - " -alias mysqlClientCertificate -keystore keystore " + - " -storepass password123 -keypass password " + - " -dname CN=MS "; -sun.security.tools.keytool.Main.main(importCert.trim().split("\\s+")); -sun.security.tools.keytool.Main.main(genKey.trim().split("\\s+")); --# use the generated keystore and truststore ---System.setProperty("javax.net.ssl.keyStore","path_to_keystore_file"); -System.setProperty("javax.net.ssl.keyStorePassword","password"); -System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file"); -System.setProperty("javax.net.ssl.trustStorePassword","password"); --url = String.format("jdbc:mariadb://%s/%s?useSSL=true&trustServerCertificate=true", 'mydemoserver.mysql.database.azure.com', 'quickstartdb'); -properties.setProperty("user", 'myadmin@mydemoserver'); -properties.setProperty("password", 'yourpassword'); -conn = DriverManager.getConnection(url, properties); -``` --### .NET (MySqlConnector) --```csharp -var builder = new MySqlConnectionStringBuilder -{ - Server = "mydemoserver.mysql.database.azure.com", - UserID = "myadmin@mydemoserver", - Password = "yourpassword", - Database = "quickstartdb", - SslMode = MySqlSslMode.VerifyCA, - SslCa = "DigiCertGlobalRootG2.crt.pem", -}; -using (var connection = new MySqlConnection(builder.ConnectionString)) -{ - connection.Open(); -} -``` --### Node.js --```node -var fs = require('fs'); -var mysql = require('mysql'); -const serverCa = [fs.readFileSync("/var/www/html/DigiCertGlobalRootG2.crt.pem", "utf8")]; -var conn=mysql.createConnection({ - host:"mydemoserver.mysql.database.azure.com", - user:"myadmin@mydemoserver", - password:"yourpassword", - database:"quickstartdb", - port:3306, - ssl: { - rejectUnauthorized: true, - ca: serverCa - } -}); -conn.connect(function(err) { - if (err) throw err; -}); -``` --## Next steps --* To learn about certificate expiry and rotation, refer [certificate rotation documentation](concepts-certificate-rotation.md) -* Review various application connectivity options following [Connection libraries for Azure Database for MySQL](../flexible-server/concepts-connection-libraries.md) |
mysql | How To Connect Overview Single Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connect-overview-single-server.md | - Title: Connect and query - Single Server MySQL -description: Links to quickstarts showing how to connect to your Azure Database for MySQL Single Server and run queries. ------ Previously updated : 06/20/2022---# Connect and query overview for Azure Database for MySQL - Single Server ----The following document includes links to examples showing how to connect and query with Azure Database for MySQL single server. This guide also includes TLS recommendations and libraries that you can use to connect to the server in the supported languages below. --## Quickstarts --| Quickstart | Description | -||| -|[MySQL workbench](connect-workbench.md)|This quickstart demonstrates how to use MySQL Workbench Client to connect to a database. You can then use MySQL statements to query, insert, update, and delete data in the database.| -|[Azure Cloud Shell](./quickstart-create-mysql-server-database-using-azure-cli.md#connect-to-azure-database-for-mysql-server-using-mysql-command-line-client)|This article shows how to run **mysql.exe** in [Azure Cloud Shell](../../cloud-shell/overview.md) to connect to your server and then run statements to query, insert, update, and delete data in the database.| -|[MySQL with Visual Studio](https://www.mysql.com/why-mysql/windows/visualstudio)|You can use MySQL for Visual Studio to connect to your MySQL server. MySQL for Visual Studio integrates directly into Server Explorer, making it easy to set up new connections and work with database objects.| -|[PHP](connect-php.md)|This quickstart demonstrates how to use PHP to create a program to connect to a database and use MySQL statements to query data.| -|[Java](connect-java.md)|This quickstart demonstrates how to use Java to connect to a database and then use MySQL statements to query data.| -|[Node.js](connect-nodejs.md)|This quickstart demonstrates how to use Node.js to create a program to connect to a database and use MySQL statements to query data.| -|[.NET(C#)](connect-csharp.md)|This quickstart demonstrates how to use .NET (C#) to create a C# program to connect to a database and use MySQL statements to query data.| -|[Go](connect-go.md)|This quickstart demonstrates how to use Go to connect to a database. SQL statements to query and modify data are also demonstrated.| -|[Python](connect-python.md)|This quickstart demonstrates how to use Python to connect to a database and use MySQL statements to query data. | -|[Ruby](connect-ruby.md)|This quickstart demonstrates how to use Ruby to create a program to connect to a database and use MySQL statements to query data.| -|[C++](connect-cpp.md)|This quickstart demonstrates how to use C++ to create a program to connect to a database and query data.| --## TLS considerations for database connectivity --Transport Layer Security (TLS) is used by all drivers that Microsoft supplies or supports for connecting to databases in Azure Database for MySQL. No special configuration is necessary, but do enforce TLS 1.2 for newly created servers. If you're using TLS 1.0 or 1.1, we recommend that you update the TLS version for your servers. See [How to configure TLS](how-to-tls-configurations.md) --## Libraries --Azure Database for MySQL uses the community edition of MySQL, the world's most popular open-source database. Hence, it's compatible with a wide variety of programming languages and drivers. 
The goal is to support the three most recent versions of MySQL drivers, and efforts continue with authors from the open-source community to constantly improve the functionality and usability of MySQL drivers. --See what [drivers](concepts-compatibility.md) are compatible with Azure Database for MySQL single server. --## Next steps --- [Migrate data using dump and restore](concepts-migrate-dump-restore.md)-- [Migrate data using import and export](concepts-migrate-import-export.md) |
mysql | How To Connect Webapp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connect-webapp.md | - Title: Connect to Azure App Service - Azure Database for MySQL -description: Instructions for how to properly connect an existing Azure App Service to Azure Database for MySQL ----- Previously updated : 06/20/2022---# Connect an existing Azure App Service to Azure Database for MySQL server ----This topic explains how to connect an existing Azure App Service to your Azure Database for MySQL server. --## Before you begin -Sign in to the [Azure portal](https://portal.azure.com). Create an Azure Database for MySQL server. For details, refer to [How to create Azure Database for MySQL server from Portal](quickstart-create-mysql-server-database-using-azure-portal.md) or [How to create Azure Database for MySQL server using CLI](quickstart-create-mysql-server-database-using-azure-cli.md). --Currently there are two solutions to enable access from an Azure App Service to an Azure Database for MySQL. Both solutions involve setting up server-level firewall rules. --## Solution 1 - Allow Azure services -Azure Database for MySQL provides access security using a firewall to protect your data. When connecting from an Azure App Service to Azure Database for MySQL server, keep in mind that the outbound IPs of App Service are dynamic in nature. Choosing the "Allow access to Azure services" option allows the app service to connect to the MySQL server. --1. On the MySQL server blade, under the Settings heading, click **Connection Security** to open the Connection Security blade for Azure Database for MySQL. -- :::image type="content" source="./media/how-to-connect-webapp/1-connection-security.png" alt-text="Azure portal - click Connection Security"::: --2. Select **ON** in **Allow access to Azure services**, then **Save**. - :::image type="content" source="./media/how-to-connect-webapp/allow-azure.png" alt-text="Azure portal - Allow Azure access"::: --## Solution 2 - Create a firewall rule to explicitly allow outbound IPs -You can explicitly add all the outbound IPs of your Azure App Service. --1. On the App Service Properties blade, view your **OUTBOUND IP ADDRESS**. -- :::image type="content" source="./media/how-to-connect-webapp/2-1-outbound-ip-address.png" alt-text="Azure portal - View outbound IPs"::: --2. On the MySQL Connection security blade, add outbound IPs one by one. -- :::image type="content" source="./media/how-to-connect-webapp/2-2-add-explicit-ips.png" alt-text="Azure portal - Add explicit IPs"::: --3. Remember to **Save** your firewall rules. --Though Azure App Service attempts to keep IP addresses constant over time, there are cases where the IP addresses may change. For example, this can occur when the app recycles or a scale operation occurs, or when new computers are added in Azure regional data centers to increase capacity. When the IP addresses change, the app could experience downtime if it can no longer connect to the MySQL server. Keep this consideration in mind when choosing one of the preceding solutions. --## SSL configuration -Azure Database for MySQL has SSL enabled by default. If your application is not using SSL to connect to the database, then you need to disable SSL on the MySQL server. For details on how to configure SSL, see [Using SSL with Azure Database for MySQL](how-to-configure-ssl.md). 
--### Django (PyMySQL) -```python -DATABASES = { - 'default': { - 'ENGINE': 'django.db.backends.mysql', - 'NAME': 'quickstartdb', - 'USER': 'myadmin@mydemoserver', - 'PASSWORD': 'yourpassword', - 'HOST': 'mydemoserver.mysql.database.azure.com', - 'PORT': '3306', - 'OPTIONS': { - 'ssl': {'ca': '/var/www/html/BaltimoreCyberTrustRoot.crt.pem'} - } - } -} -``` --## Next steps -For more information about connection strings, refer to [Connection Strings](how-to-connection-string.md). |
mysql | How To Connect With Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connect-with-managed-identity.md | - Title: Connect with Managed Identity - Azure Database for MySQL -description: Learn about how to connect and authenticate using Managed Identity for authentication with Azure Database for MySQL ------ Previously updated : 05/03/2023---# Connect with Managed Identity to Azure Database for MySQL ----This article shows you how to use a user-assigned identity for an Azure Virtual Machine (VM) to access an Azure Database for MySQL server. Managed Service Identities are automatically managed by Azure and enable you to authenticate to services that support Microsoft Entra authentication, without needing to insert credentials into your code. --You learn how to: --- Grant your VM access to an Azure Database for MySQL server-- Create a user in the database that represents the VM's user-assigned identity-- Get an access token using the VM identity and use it to query an Azure Database for MySQL server-- Implement the token retrieval in a C# example application--> [!IMPORTANT] -> Connecting with Managed Identity is only available for MySQL 5.7 and newer. --## Prerequisites --- If you're not familiar with the managed identities for Azure resources feature, see this [overview](../../../articles/active-directory/managed-identities-azure-resources/overview.md). If you don't have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue.-- To do the required resource creation and role management, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../../articles/role-based-access-control/role-assignments-portal.yml).-- You need an Azure VM (for example, running Ubuntu Linux) that you'd like to use to access your database using Managed Identity-- You need an Azure Database for MySQL database server that has [Microsoft Entra authentication](how-to-configure-sign-in-azure-ad-authentication.md) configured-- To follow the C# example, first complete the guide on how to [Connect using C#](connect-csharp.md)--## Creating a user-assigned managed identity for your VM --Create an identity in your subscription using the [az identity create](/cli/azure/identity#az-identity-create) command. You can use the same resource group that your virtual machine runs in, or a different one. --```azurecli-interactive -az identity create --resource-group myResourceGroup --name myManagedIdentity -``` --To configure the identity in the following steps, use the [az identity show](/cli/azure/identity#az-identity-show) command to store the identity's resource ID and client ID in variables. 
--```azurecli-interactive -# Get resource ID of the user-assigned identity --RESOURCE_ID=$(az identity show --resource-group myResourceGroup --name myManagedIdentity --query id --output tsv) --# Get client ID of the user-assigned identity ---CLIENT_ID=$(az identity show --resource-group myResourceGroup --name myManagedIdentity --query clientId --output tsv) -``` --We can now assign the user-assigned identity to the VM with the [az vm identity assign](/cli/azure/vm/identity#az-vm-identity-assign) command: --```azurecli-interactive -az vm identity assign --resource-group myResourceGroup --name myVM --identities $RESOURCE_ID -``` --To finish setup, show the value of the Client ID, which you'll need in the next few steps: --```bash -echo $CLIENT_ID -``` --## Creating a MySQL user for your Managed Identity --Now, connect as the Microsoft Entra administrator user to your MySQL database, and run the following SQL statements: --```sql -SET aad_auth_validate_oids_in_tenant = OFF; -CREATE AADUSER 'myuser' IDENTIFIED BY 'CLIENT_ID'; -``` --The managed identity now has access when authenticating with the username `myuser` (replace with a name of your choice). --## Retrieving the access token from Azure Instance Metadata service --Your application can now retrieve an access token from the Azure Instance Metadata service and use it for authenticating with the database. --This token retrieval is done by making an HTTP request to `http://169.254.169.254/metadata/identity/oauth2/token` and passing the following parameters: --- `api-version` = `2018-02-01`-- `resource` = `https://ossrdbms-aad.database.windows.net`-- `client_id` = `CLIENT_ID` (that you retrieved earlier)--You'll get back a JSON result that contains an `access_token` field - this long text value is the Managed Identity access token, which you should use as the password when connecting to the database. --For testing purposes, you can run the following commands in your shell. Note that you need `curl`, `jq`, and the `mysql` client installed. --```bash -# Retrieve the access token ---ACCESS_TOKEN=$(curl -s 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=CLIENT_ID' -H Metadata:true | jq -r .access_token) --# Connect to the database ---mysql -h SERVER --user USER@SERVER --enable-cleartext-plugin --password=$ACCESS_TOKEN -``` --You're now connected to the database you configured earlier. --## Connecting using Managed Identity in C# --This section shows how to get an access token using the VM's user-assigned managed identity and use it to call Azure Database for MySQL. Azure Database for MySQL natively supports Microsoft Entra authentication, so it can directly accept access tokens obtained using managed identities for Azure resources. When creating a connection to MySQL, you pass the access token in the password field. --Here's a .NET code example of opening a connection to MySQL using an access token. This code must run on the VM to access the VM's user-assigned managed identity's endpoint. .NET Framework 4.6 or higher or .NET Core 2.2 or higher is required to use the access token method. Replace the values of HOST, USER, DATABASE, and CLIENT_ID. 
--```csharp -using System; -using System.Net; -using System.IO; -using System.Collections; -using System.Collections.Generic; -using System.Text.Json; -using System.Text.Json.Serialization; -using System.Threading.Tasks; -using MySql.Data.MySqlClient; --namespace Driver -{ - class Script - { - // Obtain connection string information from the portal - // - private static string Host = "HOST"; - private static string User = "USER"; - private static string Database = "DATABASE"; - private static string ClientId = "CLIENT_ID"; -- static async Task Main(string[] args) - { - // - // Get an access token for MySQL. - // - Console.Out.WriteLine("Getting access token from Azure Instance Metadata service..."); -- // Azure AD resource ID for Azure Database for MySQL is https://ossrdbms-aad.database.windows.net/ - HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=" + ClientId); - request.Headers["Metadata"] = "true"; - request.Method = "GET"; - string accessToken = null; -- try - { - // Call managed identities for Azure resources endpoint. - HttpWebResponse response = (HttpWebResponse)request.GetResponse(); -- // Pipe response Stream to a StreamReader and extract access token. - StreamReader streamResponse = new StreamReader(response.GetResponseStream()); - string stringResponse = streamResponse.ReadToEnd(); - var list = JsonSerializer.Deserialize<Dictionary<string, string>>(stringResponse); - accessToken = list["access_token"]; - } - catch (Exception e) - { - Console.Out.WriteLine("{0} \n\n{1}", e.Message, e.InnerException != null ? e.InnerException.Message : "Acquire token failed"); - System.Environment.Exit(1); - } -- // - // Open a connection to the MySQL server using the access token. - // - var builder = new MySqlConnectionStringBuilder - { - Server = Host, - Database = Database, - UserID = User, - Password = accessToken, - SslMode = MySqlSslMode.Required, - }; -- using (var conn = new MySqlConnection(builder.ConnectionString)) - { - Console.Out.WriteLine("Opening connection using access token..."); - await conn.OpenAsync(); -- using (var command = conn.CreateCommand()) - { - command.CommandText = "SELECT VERSION()"; -- using (var reader = await command.ExecuteReaderAsync()) - { - while (await reader.ReadAsync()) - { - Console.WriteLine("\nConnected!\n\nMySQL version: {0}", reader.GetString(0)); - } - } - } - } - } - } -} -``` --When run, this command will give an output like this: --```output -Getting access token from Azure Instance Metadata service... -Opening connection using access token... --Connected! --MySQL version: 5.7.27 -``` --## Next steps --- Review the overall concepts for [Microsoft Entra authentication with Azure Database for MySQL](concepts-azure-ad-authentication.md) |
mysql | How To Connection String Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connection-string-powershell.md | - Title: Generate a connection string with PowerShell - Azure Database for MySQL -description: This article provides an Azure PowerShell example to generate a connection string for connecting to Azure Database for MySQL. ------ Previously updated : 06/20/2022---# How to generate an Azure Database for MySQL connection string with PowerShell ----This article demonstrates how to generate a connection string for an Azure Database for MySQL -server. You can use a connection string to connect to an Azure Database for MySQL from many -different applications. --## Requirements --This article uses the resources created in the following guide as a starting point: --* [Quickstart: Create an Azure Database for MySQL server using PowerShell](quickstart-create-mysql-server-database-using-azure-powershell.md) --## Get the connection string --The `Get-AzMySqlConnectionString` cmdlet is used to generate a connection string for connecting -applications to Azure Database for MySQL. The following example returns the connection string for a -PHP client from **mydemoserver**. --```azurepowershell-interactive -Get-AzMySqlConnectionString -Client PHP -Name mydemoserver -ResourceGroupName myresourcegroup -``` --```Output -$con=mysqli_init();mysqli_ssl_set($con, NULL, NULL, {ca-cert filename}, NULL, NULL); mysqli_real_connect($con, "mydemoserver.mysql.database.azure.com", "myadmin@mydemoserver", {your_password}, {your_database}, 3306); -``` --Valid values for the `Client` parameter include: --* ADO.NET -* JDBC -* Node.js -* PHP -* Python -* Ruby -* WebApp --## Next steps --> [!div class="nextstepaction"] -> [Customize Azure Database for MySQL server parameters using PowerShell](how-to-configure-server-parameters-using-powershell.md) |
mysql | How To Connection String | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connection-string.md | - Title: Connection strings - Azure Database for MySQL -description: This document lists the currently supported connection strings for applications to connect with Azure Database for MySQL, including ADO.NET (C#), JDBC, Node.js, ODBC, PHP, Python, and Ruby. ------ Previously updated : 06/20/2022---# How to connect applications to Azure Database for MySQL ----This topic lists the connection string types that are supported by Azure Database for MySQL, together with templates and examples. You might have different parameters and settings in your connection string. --- To obtain the certificate, see [How to configure SSL](./how-to-configure-ssl.md).-- {your_host} = \<servername>.mysql.database.azure.com-- {your_user}@{servername} = the user ID format required to authenticate correctly. If you use only the user ID, authentication will fail.--## ADO.NET -```ado.net -Server={your_host};Port={your_port};Database={your_database};Uid={username@servername};Pwd={your_password};[SslMode=Required;] -``` --In this example, the server name is `mydemoserver`, the database name is `wpdb`, the user name is `WPAdmin`, and the password is `mypassword!2`. As a result, the connection string should be: --```ado.net -Server= "mydemoserver.mysql.database.azure.com"; Port=3306; Database= "wpdb"; Uid= "WPAdmin@mydemoserver"; Pwd="mypassword!2"; SslMode=Required; -``` --## JDBC -```jdbc -String url = String.format("jdbc:mysql://%s:%s/%s[?verifyServerCertificate=true&useSSL=true&requireSSL=true]", {your_host}, {your_port}, {your_database}); myDbConn = DriverManager.getConnection(url, {username@servername}, {your_password}); -``` --## Node.js -```node.js -var conn = mysql.createConnection({host: {your_host}, user: {username@servername}, password: {your_password}, database: {your_database}, Port: {your_port}[, ssl:{ca:fs.readFileSync({ca-cert filename})}]}); -``` --## ODBC -```odbc -DRIVER={MySQL ODBC 5.3 UNICODE Driver};Server={your_host};Port={your_port};Database={your_database};Uid={username@servername};Pwd={your_password}; [sslca={ca-cert filename}; sslverify=1; Option=3;] -``` --## PHP -```php -$con=mysqli_init(); [mysqli_ssl_set($con, NULL, NULL, {ca-cert filename}, NULL, NULL);] mysqli_real_connect($con, {your_host}, {username@servername}, {your_password}, {your_database}, {your_port}); -``` --## Python -```python -cnx = mysql.connector.connect(user={username@servername}, password={your_password}, host={your_host}, port={your_port}, database={your_database}[, ssl_ca={ca-cert filename}, ssl_verify_cert=true]) -``` --## Ruby -```ruby -client = Mysql2::Client.new(username: {username@servername}, password: {your_password}, database: {your_database}, host: {your_host}, port: {your_port}[, sslca:{ca-cert filename}, sslverify:false, sslcipher:'AES256-SHA']) -``` --## Get the connection string details from the Azure portal -In the [Azure portal](https://portal.azure.com), go to your Azure Database for MySQL server, and then click **Connection strings** to get the string list for your instance: --The string provides details such as the driver, server, and other database connection parameters. Modify these examples to use your own parameters, such as database name, password, and so on. You can then use this string to connect to the server from your code and applications. 
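As a worked example, here is the Python template above filled in with the same hypothetical values used in the ADO.NET example (server `mydemoserver`, database `wpdb`, user `WPAdmin`); the certificate path is a placeholder for wherever you saved the CA file from [How to configure SSL](./how-to-configure-ssl.md).

```python
# Python connection-string template filled in with the sample values above.
# The certificate path is a placeholder; point it at your downloaded CA file.
import mysql.connector

cnx = mysql.connector.connect(
    user="WPAdmin@mydemoserver",
    password="mypassword!2",
    host="mydemoserver.mysql.database.azure.com",
    port=3306,
    database="wpdb",
    ssl_ca="/path/to/DigiCertGlobalRootG2.crt.pem",
    ssl_verify_cert=True,
)
cnx.close()
```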
--## Next steps -- For more information about connection libraries, see [Concepts - Connection libraries](../flexible-server/concepts-connection-libraries.md). |
mysql | How To Create Manage Server Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-create-manage-server-portal.md | - Title: Manage server - Azure portal - Azure Database for MySQL -description: Learn how to manage an Azure Database for MySQL server from the Azure portal. ----- Previously updated : 06/20/2022---# Manage an Azure Database for MySQL server using the Azure portal ----This article shows you how to manage your Azure Database for MySQL servers. Management tasks include compute and storage scaling, admin password reset, and viewing server details. --> [!NOTE] -> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. -> --## Sign in --Sign in to the [Azure portal](https://portal.azure.com). --## Create a server --Visit the [quickstart](quickstart-create-mysql-server-database-using-azure-portal.md) to learn how to create and get started with an Azure Database for MySQL server. --## Scale compute and storage --After server creation you can scale between the General Purpose and Memory Optimized tiers as your needs change. You can also scale compute and memory by increasing or decreasing vCores. Storage can be scaled up (however, you cannot scale storage down). --### Scale between General Purpose and Memory Optimized tiers --You can scale from General Purpose to Memory Optimized and vice-versa. Changing to and from the Basic tier after server creation is not supported. --1. Select your server in the Azure portal. Select **Pricing tier**, located in the **Settings** section. --2. Select **General Purpose** or **Memory Optimized**, depending on what you are scaling to. -- :::image type="content" source="./media/how-to-create-manage-server-portal/change-pricing-tier.png" alt-text="Screenshot of Azure portal to choose Basic, General Purpose, or Memory Optimized tier in Azure Database for MySQL"::: -- > [!NOTE] - > Changing tiers causes a server restart. --3. Select **OK** to save changes. --### Scale vCores up or down --1. Select your server in the Azure portal. Select **Pricing tier**, located in the **Settings** section. --2. Change the **vCore** setting by moving the slider to your desired value. -- :::image type="content" source="./media/how-to-create-manage-server-portal/scaling-compute.png" alt-text="Screenshot of Azure portal to choose vCore option in Azure Database for MySQL"::: -- > [!NOTE] - > Scaling vCores causes a server restart. --3. Select **OK** to save changes. --### Scale storage up --1. Select your server in the Azure portal. Select **Pricing tier**, located in the **Settings** section. --2. Change the **Storage** setting by moving the slider up to your desired value. -- :::image type="content" source="./media/how-to-create-manage-server-portal/scaling-storage.png" alt-text="Screenshot of Azure portal to choose Storage scale in Azure Database for MySQL"::: -- > [!NOTE] - > Storage cannot be scaled down. --3. Select **OK** to save changes. --## Update admin password --You can change the administrator role's password using the Azure portal. --1. Select your server in the Azure portal. In the **Overview** window select **Reset password**. -- :::image type="content" source="./media/how-to-create-manage-server-portal/overview-reset-password.png" alt-text="Screenshot of Azure portal to reset the password in Azure Database for MySQL"::: --2. Enter a new password and confirm the password. 
The textbox will prompt you about password complexity requirements. -- :::image type="content" source="./media/how-to-create-manage-server-portal/reset-password.png" alt-text="Screenshot of Azure portal to reset your password and save in Azure Database for MySQL"::: --3. Select **OK** to save the new password. - --> [!IMPORTANT] -> Resetting the server admin password automatically resets the server admin privileges to default. Consider resetting your server admin password if you accidentally revoked one or more of the server admin privileges. - -> [!NOTE] -> The server admin user has the following privileges by default: SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER --## Delete a server --You can delete your server if you no longer need it. --1. Select your server in the Azure portal. In the **Overview** window select **Delete**. -- :::image type="content" source="./media/how-to-create-manage-server-portal/overview-delete.png" alt-text="Screenshot of Azure portal to Delete the server in Azure Database for MySQL"::: --2. Type the name of the server into the input box to confirm that this is the server you want to delete. -- :::image type="content" source="./media/how-to-create-manage-server-portal/confirm-delete.png" alt-text="Screenshot of Azure portal to confirm the server delete in Azure Database for MySQL"::: -- > [!NOTE] - > Deleting a server is irreversible. --3. Select **Delete**. --## Next steps --- Learn about [backups and server restore](how-to-restore-server-portal.md)-- Learn about [tuning and monitoring options in Azure Database for MySQL](concepts-monitoring.md) |
mysql | How To Data Encryption Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-encryption-cli.md | - Title: Data encryption - Azure CLI - Azure Database for MySQL -description: Learn how to set up and manage data encryption for your Azure Database for MySQL by using the Azure CLI. ----- Previously updated : 06/20/2022----# Data encryption for Azure Database for MySQL by using the Azure CLI ----Learn how to use the Azure CLI to set up and manage data encryption for your Azure Database for MySQL. --## Prerequisites for Azure CLI --* You must have an Azure subscription and be an administrator on that subscription. -* Create a key vault and a key to use for a customer-managed key. Also enable purge protection and soft delete on the key vault. -- ```azurecli-interactive - az keyvault create -g <resource_group> -n <vault_name> --enable-soft-delete true --enable-purge-protection true - ``` --* In the created Azure Key Vault, create the key that will be used for the data encryption of the Azure Database for MySQL. -- ```azurecli-interactive - az keyvault key create --name <key_name> -p software --vault-name <vault_name> - ``` --* In order to use an existing key vault, it must have the following properties to use as a customer-managed key: -- * [Soft delete](../../key-vault/general/soft-delete-overview.md) -- ```azurecli-interactive - az resource update --id $(az keyvault show --name \ <key_vault_name> -o tsv | awk '{print $1}') --set \ properties.enableSoftDelete=true - ``` -- * [Purge protected](../../key-vault/general/soft-delete-overview.md#purge-protection) -- ```azurecli-interactive - az keyvault update --name <key_vault_name> --resource-group <resource_group_name> --enable-purge-protection true - ``` - * Retention days set to 90 days - ```azurecli-interactive - az keyvault update --name <key_vault_name> --resource-group <resource_group_name> --retention-days 90 - ``` --* The key must have the following attributes to use as a customer-managed key: - * No expiration date - * Not disabled - * Perform **get**, **wrap**, **unwrap** operations - * recoverylevel attribute set to **Recoverable** (this requires soft-delete enabled with retention period set to 90 days) - * Purge protection enabled --You can verify the above attributes of the key by using the following command: --```azurecli-interactive -az keyvault key show --vault-name <key_vault_name> -n <key_name> -``` -* The Azure Database for MySQL - Single Server should be on General Purpose or Memory Optimized pricing tier and on general purpose storage v2. Before you proceed further, refer to the limitations for [data encryption with customer managed keys](concepts-data-encryption-mysql.md#limitations). --## Set the right permissions for key operations --1. There are two ways of getting the managed identity for your Azure Database for MySQL. -- ### Create a new Azure Database for MySQL server with a managed identity. -- ```azurecli-interactive - az mysql server create --name <server_name> -g <resource_group> --location <location> --storage-size <size> -u <user> -p <pwd> --backup-retention 7 --sku-name <sku_name> --geo-redundant-backup <Enabled/Disabled> --assign-identity - ``` -- ### Update an existing Azure Database for MySQL server to get a managed identity. -- ```azurecli-interactive - az mysql server update --name <server name> -g <resource_group> --assign-identity - ``` --2. Set the **Key permissions** (**Get**, **Wrap**, **Unwrap**) for the **Principal**, which is the name of the MySQL server. 
-- ```azurecli-interactive - az keyvault set-policy --name <vault_name> -g <resource_group> --key-permissions get unwrapKey wrapKey --object-id <principal id of the server> - ``` --## Set data encryption for Azure Database for MySQL --1. Enable Data encryption for the Azure Database for MySQL using the key created in the Azure Key Vault. -- ```azurecli-interactive - az mysql server key create --name <server name> -g <resource_group> --kid <key url> - ``` -- Key url: `https://YourVaultName.vault.azure.net/keys/YourKeyName/01234567890123456789012345678901` --## Using Data encryption for restore or replica servers --After Azure Database for MySQL is encrypted with a customer's managed key stored in Key Vault, any newly created copy of the server is also encrypted. You can make this new copy either through a local or geo-restore operation, or through a replica (local/cross-region) operation. So for an encrypted MySQL server, you can use the following steps to create an encrypted restored server. --### Creating a restored/replica server --* [Create a restore server](how-to-restore-server-cli.md) -* [Create a read replica server](how-to-read-replicas-cli.md) --### Once the server is restored, revalidate data encryption on the restored server --* Assign identity for the replica server -```azurecli-interactive -az mysql server update --name <server name> -g <resource_group> --assign-identity -``` --* Get the existing key that has to be used for the restored/replica server --```azurecli-interactive -az mysql server key list --name '<server_name>' -g '<resource_group_name>' -``` --* Set the policy for the new identity for the restored/replica server - -```azurecli-interactive -az keyvault set-policy --name <keyvault> -g <resource_group> --key-permissions get unwrapKey wrapKey --object-id <principal id of the server returned by step 1> -``` --* Re-validate the restored/replica server with the encryption key --```azurecli-interactive -az mysql server key create --name <server name> -g <resource_group> --kid <key url> -``` --## Additional capabilities for the key being used for Azure Database for MySQL --### Get the Key used --```azurecli-interactive -az mysql server key show --name <server name> -g <resource_group> --kid <key url> -``` --Key url: `https://YourVaultName.vault.azure.net/keys/YourKeyName/01234567890123456789012345678901` --### List the Key used --```azurecli-interactive -az mysql server key list --name <server name> -g <resource_group> -``` --### Drop the key being used --```azurecli-interactive -az mysql server key delete -g <resource_group> --kid <key url> -``` --## Using an Azure Resource Manager template to enable data encryption --Apart from the Azure portal, you can also enable data encryption on your Azure Database for MySQL server using Azure Resource Manager templates for new and existing servers. --### For a new server --Use one of the pre-created Azure Resource Manager templates to provision the server with data encryption enabled: -[Example with Data encryption](https://github.com/Azure/azure-mysql/tree/master/arm-templates/ExampleWithDataEncryption) --This Azure Resource Manager template creates an Azure Database for MySQL server and uses the **KeyVault** and **Key** passed as parameters to enable data encryption on the server. --### For an existing server --Additionally, you can use Azure Resource Manager templates to enable data encryption on your existing Azure Database for MySQL servers. 
--* Pass the Resource ID of the Azure Key Vault key that you copied earlier under the `Uri` property in the properties object. --* Use *2020-01-01-preview* as the API version. --```json -{ - "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "location": { - "type": "string" - }, - "serverName": { - "type": "string" - }, - "keyVaultName": { - "type": "string", - "metadata": { - "description": "Key vault name where the key to use is stored" - } - }, - "keyVaultResourceGroupName": { - "type": "string", - "metadata": { - "description": "Key vault resource group name where it is stored" - } - }, - "keyName": { - "type": "string", - "metadata": { - "description": "Key name in the key vault to use as encryption protector" - } - }, - "keyVersion": { - "type": "string", - "metadata": { - "description": "Version of the key in the key vault to use as encryption protector" - } - } - }, - "variables": { - "serverKeyName": "[concat(parameters('keyVaultName'), '_', parameters('keyName'), '_', parameters('keyVersion'))]" - }, - "resources": [ - { - "type": "Microsoft.DBforMySQL/servers", - "apiVersion": "2017-12-01", - "kind": "", - "location": "[parameters('location')]", - "identity": { - "type": "SystemAssigned" - }, - "name": "[parameters('serverName')]", - "properties": { - } - }, - { - "type": "Microsoft.Resources/deployments", - "apiVersion": "2019-05-01", - "name": "addAccessPolicy", - "resourceGroup": "[parameters('keyVaultResourceGroupName')]", - "dependsOn": [ - "[resourceId('Microsoft.DBforMySQL/servers', parameters('serverName'))]" - ], - "properties": { - "mode": "Incremental", - "template": { - "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "resources": [ - { - "type": "Microsoft.KeyVault/vaults/accessPolicies", - "name": "[concat(parameters('keyVaultName'), '/add')]", - "apiVersion": "2018-02-14-preview", - "properties": { - "accessPolicies": [ - { - "tenantId": "[subscription().tenantId]", - "objectId": "[reference(resourceId('Microsoft.DBforMySQL/servers/', parameters('serverName')), '2017-12-01', 'Full').identity.principalId]", - "permissions": { - "keys": [ - "get", - "wrapKey", - "unwrapKey" - ] - } - } - ] - } - } - ] - } - } - }, - { - "name": "[concat(parameters('serverName'), '/', variables('serverKeyName'))]", - "type": "Microsoft.DBforMySQL/servers/keys", - "apiVersion": "2020-01-01-preview", - "dependsOn": [ - "addAccessPolicy", - "[resourceId('Microsoft.DBforMySQL/servers', parameters('serverName'))]" - ], - "properties": { - "serverKeyType": "AzureKeyVault", - "uri": "[concat(reference(resourceId(parameters('keyVaultResourceGroupName'), 'Microsoft.KeyVault/vaults/', parameters('keyVaultName')), '2018-02-14-preview', 'Full').properties.vaultUri, 'keys/', parameters('keyName'), '/', parameters('keyVersion'))]" - } - } - ] -} --``` --## Next steps --* [Validating data encryption for Azure Database for MySQL](how-to-data-encryption-validation.md) -* [Troubleshoot data encryption in Azure Database for MySQL](how-to-data-encryption-troubleshoot.md) -* [Data encryption with customer-managed key concepts](concepts-data-encryption-mysql.md). |
mysql | How To Data Encryption Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-encryption-portal.md | - Title: Data encryption - Azure portal - Azure Database for MySQL -description: Learn how to set up and manage data encryption for your Azure Database for MySQL by using the Azure portal. ------ Previously updated : 06/20/2022---# Data encryption for Azure Database for MySQL by using the Azure portal ----Learn how to use the Azure portal to set up and manage data encryption for your Azure Database for MySQL. --## Prerequisites for Azure CLI --* You must have an Azure subscription and be an administrator on that subscription. -* In Azure Key Vault, create a key vault and a key to use for a customer-managed key. -* The key vault must have the following properties to use as a customer-managed key: - * [Soft delete](../../key-vault/general/soft-delete-overview.md) -- ```azurecli-interactive - az resource update --id $(az keyvault show --name \ <key_vault_name> -o tsv | awk '{print $1}') --set \ properties.enableSoftDelete=true - ``` -- * [Purge protected](../../key-vault/general/soft-delete-overview.md#purge-protection) -- ```azurecli-interactive - az keyvault update --name <key_vault_name> --resource-group <resource_group_name> --enable-purge-protection true - ``` - * Retention days set to 90 days - - ```azurecli-interactive - az keyvault update --name <key_vault_name> --resource-group <resource_group_name> --retention-days 90 - ``` --* The key must have the following attributes to use as a customer-managed key: - * No expiration date - * Not disabled - * Perform **get**, **wrap**, **unwrap** operations - * recoverylevel attribute set to **Recoverable** (this requires soft-delete enabled with retention period set to 90 days) - * Purge protection enabled -- You can verify the above attributes of the key by using the following command: -- ```azurecli-interactive - az keyvault key show --vault-name <key_vault_name> -n <key_name> - ``` --* The Azure Database for MySQL - Single Server should be on General Purpose or Memory Optimized pricing tier and on general purpose storage v2. Before you proceed further, refer to the limitations for [data encryption with customer managed keys](concepts-data-encryption-mysql.md#limitations). -## Set the right permissions for key operations --1. In Key Vault, select **Access policies** > **Add Access Policy**. -- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-access-policy-overview.png" alt-text="Screenshot of Key Vault, with Access policies and Add Access Policy highlighted"::: --2. Select **Key permissions**, and select **Get**, **Wrap**, **Unwrap**, and the **Principal**, which is the name of the MySQL server. If your server principal can't be found in the list of existing principals, you need to register it. When you attempt to set up data encryption for the first time and it fails, you're prompted to register your server principal. -- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/access-policy-wrap-unwrap.png" alt-text="Access policy overview"::: --3. Select **Save**. --## Set data encryption for Azure Database for MySQL --1. In Azure Database for MySQL, select **Data encryption** to set up the customer-managed key. 
-- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/data-encryption-overview.png" alt-text="Screenshot of Azure Database for MySQL, with Data encryption highlighted"::: --2. You can either select a key vault and key pair, or enter a key identifier. -- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/setting-data-encryption.png" alt-text="Screenshot of Azure Database for MySQL, with data encryption options highlighted"::: --3. Select **Save**. --4. To ensure all files (including temp files) are fully encrypted, restart the server. --## Using Data encryption for restore or replica servers --After Azure Database for MySQL is encrypted with a customer's managed key stored in Key Vault, any newly created copy of the server is also encrypted. You can make this new copy either through a local or geo-restore operation, or through a replica (local/cross-region) operation. So for an encrypted MySQL server, you can use the following steps to create an encrypted restored server. --1. On your server, select **Overview** > **Restore**. -- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-restore.png" alt-text="Screenshot of Azure Database for MySQL, with Overview and Restore highlighted"::: -- Or for a replication-enabled server, under the **Settings** heading, select **Replication**. -- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/mysql-replica.png" alt-text="Screenshot of Azure Database for MySQL, with Replication highlighted"::: --2. After the restore operation is complete, the new server created is encrypted with the primary server's key. However, the features and options on the server are disabled, and the server is inaccessible. This prevents any data manipulation, because the new server's identity hasn't yet been given permission to access the key vault. -- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-restore-data-encryption.png" alt-text="Screenshot of Azure Database for MySQL, with Inaccessible status highlighted"::: --3. To make the server accessible, revalidate the key on the restored server. Select **Data encryption** > **Revalidate key**. -- > [!NOTE] - > The first attempt to revalidate will fail, because the new server's service principal needs to be given access to the key vault. To generate the service principal, select **Revalidate key**, which shows an error but generates the service principal. Thereafter, refer to [these steps](#set-the-right-permissions-for-key-operations) earlier in this article. -- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-revalidate-data-encryption.png" alt-text="Screenshot of Azure Database for MySQL, with revalidation step highlighted"::: -- You'll have to give the new server access to the key vault. For more information, see [Assign a Key Vault access policy](../../key-vault/general/assign-access-policy.md?tabs=azure-portal). --4. After registering the service principal, revalidate the key again, and the server resumes its normal functionality. 
-- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/restore-successful.png" alt-text="Screenshot of Azure Database for MySQL, showing restored functionality"::: --## Next steps --* [Validating data encryption for Azure Database for MySQL](how-to-data-encryption-validation.md) -* [Troubleshoot data encryption in Azure Database for MySQL](how-to-data-encryption-troubleshoot.md) -* [Data encryption with customer-managed key concepts](concepts-data-encryption-mysql.md). |
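If you script your deployments, the same permission and key assignment flow can be driven from the Azure CLI. The following is a minimal sketch only, using hypothetical names (`mydemoserver`, `myresourcegroup`, `mydemokeyvault`, `mydemokey`); verify the exact `az mysql server key create` parameter spellings with `--help` on your CLI version.

```azurecli-interactive
# Sketch only; server, resource group, vault, and key names are placeholders.
# 1. Give the server a managed identity.
az mysql server update --resource-group myresourcegroup --name mydemoserver --assign-identity

# 2. Grant that identity the required key permissions on the vault.
PRINCIPAL_ID=$(az mysql server show --resource-group myresourcegroup --name mydemoserver --query identity.principalId -o tsv)
az keyvault set-policy --name mydemokeyvault --object-id "$PRINCIPAL_ID" --key-permissions get wrapKey unwrapKey

# 3. Register the Key Vault key as the server's customer-managed key.
KEY_ID=$(az keyvault key show --vault-name mydemokeyvault --name mydemokey --query key.kid -o tsv)
az mysql server key create --resource-group myresourcegroup --server-name mydemoserver --kid "$KEY_ID"
```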
mysql | How To Data Encryption Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-encryption-troubleshoot.md | - Title: Troubleshoot data encryption - Azure Database for MySQL -description: Learn how to troubleshoot data encryption in Azure Database for MySQL ----- Previously updated : 06/20/2022---# Troubleshoot data encryption in Azure Database for MySQL ----This article describes how to identify and resolve common issues that can occur in Azure Database for MySQL when configured with data encryption using a customer-managed key. --## Introduction --When you configure data encryption to use a customer-managed key in Azure Key Vault, servers require continuous access to the key. If the server loses access to the customer-managed key in Azure Key Vault, it will deny all connections, return the appropriate error message, and change its state to ***Inaccessible*** in the Azure portal. --If you no longer need an inaccessible Azure Database for MySQL server, you can delete it to stop incurring costs. No other actions on the server are permitted until access to the key vault has been restored and the server is available. It's also not possible to change the data encryption option from `Yes`(customer-managed) to `No` (service-managed) on an inaccessible server when it's encrypted with a customer-managed key. You'll have to revalidate the key manually before the server is accessible again. This action is necessary to protect the data from unauthorized access while permissions to the customer-managed key are revoked. --## Common errors that cause the server to become inaccessible --The following misconfigurations cause most issues with data encryption that use Azure Key Vault keys: --- The key vault is unavailable or doesn't exist:- - The key vault was accidentally deleted. - - An intermittent network error causes the key vault to be unavailable. --- You don't have permissions to access the key vault or the key doesn't exist:- - The key expired or was accidentally deleted or disabled. - - The managed identity of the Azure Database for MySQL instance was accidentally deleted. - - The managed identity of the Azure Database for MySQL instance has insufficient key permissions. For example, the permissions don't include Get, Wrap, and Unwrap. - - The managed identity permissions to the Azure Database for MySQL instance were revoked or deleted. --## Identify and resolve common errors --### Errors on the key vault --#### Disabled key vault --- `AzureKeyVaultKeyDisabledMessage`-- **Explanation**: The operation couldn't be completed on server because the Azure Key Vault key is disabled.--#### Missing key vault permissions --- `AzureKeyVaultMissingPermissionsMessage`-- **Explanation**: The server doesn't have the required Get, Wrap, and Unwrap permissions to Azure Key Vault. Grant any missing permissions to the service principal with ID.--### Mitigation --- Confirm that the customer-managed key is present in the key vault.-- Identify the key vault, then go to the key vault in the Azure portal.-- Ensure that the key URI identifies a key that is present.--## Next steps --[Use the Azure portal to set up data encryption with a customer-managed key on Azure Database for MySQL](how-to-data-encryption-portal.md) |
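When triaging the disabled-key error described above from a terminal, a quick check of the key's attributes often confirms the cause. This is a diagnostic sketch with placeholder vault and key names:

```azurecli-interactive
# Check whether the key exists, is enabled, and has no expiration date.
az keyvault key show --vault-name mydemokeyvault --name mydemokey --query "{enabled:attributes.enabled, expires:attributes.expires}"

# If the key was accidentally disabled, re-enable it, then revalidate the key on the server.
az keyvault key set-attributes --vault-name mydemokeyvault --name mydemokey --enabled true
```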
mysql | How To Data Encryption Validation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-encryption-validation.md | - Title: How to ensure validation of the Azure Database for MySQL - Data encryption -description: Learn how to validate the encryption of the Azure Database for MySQL - Data encryption using the customer-managed key. ----- Previously updated : 06/20/2022---# Validating data encryption for Azure Database for MySQL ----This article helps you validate that data encryption using a customer-managed key for Azure Database for MySQL is working as expected. --## Check the encryption status --### From portal --1. If you want to verify that the customer's key is used for encryption, follow these steps: -- * In the Azure portal, navigate to **Azure Key Vault** -> **Keys**. - * Select the key used for server encryption. - * Set the **Enabled** status of the key to **No**. - - After some time (**~15 min**), the Azure Database for MySQL server **Status** should be **Inaccessible**. Any I/O operation done against the server will fail, which validates that the server is indeed encrypted with the customer's key and that the key is currently not valid. - - To make the server **Available** again, you can revalidate the key. - - * Set the **Enabled** status of the key in the Key Vault back to **Yes**. - * On the server's **Data Encryption** page, select **Revalidate key**. - * After the key is successfully revalidated, the server **Status** changes to **Available**. --2. In the Azure portal, if you can confirm that the encryption key is set, then data is encrypted using the customer's key. -- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/byok-validate.png" alt-text="Access policy overview"::: --### From CLI --1. You can use the *az CLI* to validate the key resources being used for the Azure Database for MySQL server. -- ```azurecli-interactive - az mysql server key list --name '<server_name>' -g '<resource_group_name>' - ``` -- For a server without data encryption set, this command returns an empty set `[]`. --### Azure audit reports --You can also review [Audit Reports](https://servicetrust.microsoft.com), which provide information about compliance with data protection standards and regulatory requirements. --## Next steps --To learn more about data encryption, see [Azure Database for MySQL data encryption with customer-managed key](concepts-data-encryption-mysql.md). |
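The same validation drill can be scripted end to end. This is a sketch with placeholder names, and it briefly makes the server inaccessible, so run it only against a test server; the `userVisibleState` property name is taken from the single-server resource model (the normal state reports as Ready) and is worth verifying on your CLI version.

```azurecli-interactive
# Disable the key to simulate revocation (test servers only; names are placeholders).
az keyvault key set-attributes --vault-name mydemokeyvault --name mydemokey --enabled false

# After ~15 minutes the server state should report Inaccessible.
az mysql server show --resource-group myresourcegroup --name mydemoserver --query userVisibleState -o tsv

# Re-enable the key, then revalidate it from the Data encryption page to restore the server.
az keyvault key set-attributes --vault-name mydemokeyvault --name mydemokey --enabled true
```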
mysql | How To Data In Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-in-replication.md | - Title: Configure Data-in Replication - Azure Database for MySQL -description: This article describes how to set up Data-in Replication for Azure Database for MySQL. ------ Previously updated : 05/03/2023---# How to configure Azure Database for MySQL Data-in Replication ----This article describes how to set up [Data-in Replication](concepts-data-in-replication.md) in Azure Database for MySQL by configuring the source and replica servers. This article assumes that you have some prior experience with MySQL servers and databases. --> [!NOTE] -> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. -> --To create a replica in the Azure Database for MySQL service, [Data-in Replication](concepts-data-in-replication.md) synchronizes data from a source MySQL server on-premises, in virtual machines (VMs), or in cloud database services. Data-in Replication is based on the binary log (binlog) file position-based or GTID-based replication native to MySQL. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html). --Review the [limitations and requirements](concepts-data-in-replication.md#limitations-and-considerations) of Data-in replication before performing the steps in this article. --## Create an Azure Database for MySQL single server instance to use as a replica --1. Create a new instance of Azure Database for MySQL single server (for example, `replica.mysql.database.azure.com`). Refer to [Create an Azure Database for MySQL server by using the Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md) for server creation. This server is the "replica" server for Data-in Replication. -- > [!IMPORTANT] - > The Azure Database for MySQL server must be created in the General Purpose or Memory Optimized pricing tiers as data-in replication is only supported in these tiers. - > GTID is supported on versions 5.7 and 8.0 and only on servers that support storage up to 16 TB (General purpose storage v2). --2. Create the same user accounts and corresponding privileges. -- User accounts aren't replicated from the source server to the replica server. If you plan on providing users with access to the replica server, you need to create all accounts and corresponding privileges manually on this newly created Azure Database for MySQL server. --3. Add the source server's IP address to the replica's firewall rules. -- Update firewall rules using the [Azure portal](how-to-manage-firewall-using-portal.md) or [Azure CLI](how-to-manage-firewall-using-cli.md). --4. 
**Optional** - If you wish to use [GTID-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html) from the source server to the Azure Database for MySQL replica server, you'll need to enable the following server parameters on the Azure Database for MySQL server as shown in the portal image below: -- :::image type="content" source="./media/how-to-data-in-replication/enable-gtid.png" alt-text="Enable GTID on Azure Database for MySQL server"::: --## Configure the source MySQL server --The following steps prepare and configure the MySQL server hosted on-premises, in a virtual machine, or database service hosted by other cloud providers for Data-in Replication. This server is the "source" for Data-in replication. --1. Review the [source server requirements](concepts-data-in-replication.md#requirements) before proceeding. --2. Ensure that the source server allows both inbound and outbound traffic on port 3306, and that it has a **public IP address**, that its DNS is publicly accessible, or that it has a fully qualified domain name (FQDN). -- Test connectivity to the source server by attempting to connect from a tool such as the MySQL command line hosted on another machine or from the [Azure Cloud Shell](../../cloud-shell/overview.md) available in the Azure portal. -- If your organization has strict security policies and won't allow all IP addresses on the source server to enable communication from Azure to your source server, you can potentially use the command below to determine the IP address of your MySQL server. -- 1. Sign in to your Azure Database for MySQL server using a tool such as the MySQL command line. -- 2. Execute the following query. -- ```sql - mysql> SELECT @@global.redirect_server_host; - ``` -- Below is some sample output: -- ```bash - +------------------------------------------------------------+ - | @@global.redirect_server_host | - +------------------------------------------------------------+ - | e299ae56f000.tr1830.westus1-a.worker.database.windows.net | - +------------------------------------------------------------+ - ``` -- 3. Exit from the MySQL command line. - 4. To get the IP address, execute the following command in the ping utility: -- ```bash - ping <output of step 2b> - ``` -- For example: -- ```cmd - C:\Users\testuser> ping e299ae56f000.tr1830.westus1-a.worker.database.windows.net - Pinging tr1830.westus1-a.worker.database.windows.net (**11.11.111.111**) 56(84) bytes of data. - ``` -- 5. Configure your source server's firewall rules to include the IP address from the previous step's output on port 3306. -- > [!NOTE] - > This IP address may change due to maintenance/deployment operations. This method of connectivity is only for customers who can't allow all IP addresses on port 3306. --3. Turn on binary logging. -- Check to see if binary logging has been enabled on the source by running the following command: -- ```sql - SHOW VARIABLES LIKE 'log_bin'; - ``` -- If the variable [`log_bin`](https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_log_bin) is returned with the value "ON", binary logging is enabled on your server. -- If `log_bin` is returned with the value "OFF" and your source server is running on-premises or on virtual machines where you can access the configuration file (my.cnf), you can follow the steps below: - 1. Locate your MySQL configuration file (my.cnf) in the source server. For example: /etc/my.cnf - 2. Open the configuration file to edit it and locate the **mysqld** section in the file. - 3. In the mysqld section, add the following line: -- ```config - log-bin=mysql-bin.log - ``` -- 4. 
Restart the MySQL source server for the changes to take effect. - 5. After the server is restarted, verify that binary logging is enabled by running the same query as before: -- ```sql - SHOW VARIABLES LIKE 'log_bin'; - ``` --4. Configure the source server settings. -- Data-in Replication requires the parameter `lower_case_table_names` to be consistent between the source and replica servers. This parameter is 1 by default in Azure Database for MySQL. -- ```sql - SET GLOBAL lower_case_table_names = 1; - ``` -- **Optional** - If you wish to use [GTID-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html), you'll need to check if GTID is enabled on the source server. Execute the following command against your source MySQL server to see if `gtid_mode` is ON. -- ```sql - show variables like 'gtid_mode'; - ``` -- >[!IMPORTANT] - > All servers have `gtid_mode` set to the default value OFF. You don't need to enable GTID on the source MySQL server specifically to set up Data-in Replication. If GTID is already enabled on the source server, you can optionally use GTID-based replication to set up Data-in Replication with Azure Database for MySQL single server as well. You can use file-based replication to set up Data-in replication for all servers regardless of the `gtid_mode` configuration on the source server. --5. Create a new replication role and set up permissions. -- Create a user account on the source server that is configured with replication privileges. This can be done through SQL commands or a tool such as MySQL Workbench. Consider whether you plan on replicating with SSL, as this will need to be specified when creating the user. Refer to the MySQL documentation to understand how to [add user accounts](https://dev.mysql.com/doc/refman/5.7/en/user-names.html) on your source server. -- In the following commands, the new replication role created can access the source from any machine, not just the machine that hosts the source itself. This is done by specifying "syncuser@'%'" in the create user command. See the MySQL documentation to learn more about [specifying account names](https://dev.mysql.com/doc/refman/5.7/en/account-names.html). -- **SQL Command** -- *Replication with SSL* -- To require SSL for all user connections, use the following command to create a user: -- ```sql - CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword'; - GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%' REQUIRE SSL; - ``` -- *Replication without SSL* -- If SSL isn't required for all connections, use the following command to create a user: -- ```sql - CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword'; - GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%'; - ``` -- **MySQL Workbench** -- To create the replication role in MySQL Workbench, open the **Users and Privileges** panel from the **Management** panel, and then select **Add Account**. -- :::image type="content" source="./media/how-to-data-in-replication/users-privileges.png" alt-text="Users and Privileges"::: -- Type the username into the **Login Name** field. -- :::image type="content" source="./media/how-to-data-in-replication/sync-user.png" alt-text="Sync user"::: -- Select the **Administrative Roles** panel and then select **Replication Slave** from the list of **Global Privileges**. Then select **Apply** to create the replication role. -- :::image type="content" source="./media/how-to-data-in-replication/replication-slave.png" alt-text="Replication Slave"::: --6. Set the source server to read-only mode. 
-- Before starting to dump out the database, the server needs to be placed in read-only mode. While in read-only mode, the source will be unable to process any write transactions. Evaluate the impact to your business and schedule the read-only window in an off-peak time if necessary. -- ```sql - FLUSH TABLES WITH READ LOCK; - SET GLOBAL read_only = ON; - ``` --7. Get binary log file name and offset. -- Run the [`show master status`](https://dev.mysql.com/doc/refman/5.7/en/show-master-status.html) command to determine the current binary log file name and offset. -- ```sql - show master status; - ``` -- The results should appear similar to the following. Make sure to note the binary file name for use in later steps. -- :::image type="content" source="./media/how-to-data-in-replication/master-status.png" alt-text="Master Status Results"::: --## Dump and restore the source server --1. Determine which databases and tables you want to replicate into Azure Database for MySQL and perform the dump from the source server. -- You can use mysqldump to dump databases from your primary server. For details, refer to [Dump & Restore](concepts-migrate-dump-restore.md). It's unnecessary to dump the MySQL library and test library. --2. **Optional** - If you wish to use [gtid-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html), you'll need to identify the GTID of the last transaction executed at the primary. You can use the following command to note the GTID of the last transaction executed on the master server. -- ```sql - show global variables like 'gtid_executed'; - ``` --3. Set source server to read/write mode. -- After the database has been dumped, change the source MySQL server back to read/write mode. -- ```sql - SET GLOBAL read_only = OFF; - UNLOCK TABLES; - ``` --4. Restore dump file to new server. -- Restore the dump file to the server created in the Azure Database for MySQL service. Refer to [Dump & Restore](concepts-migrate-dump-restore.md) for how to restore a dump file to a MySQL server. If the dump file is large, upload it to a virtual machine in Azure within the same region as your replica server. Restore it to the Azure Database for MySQL server from the virtual machine. --5. **Optional** - Note the GTID of the restored server on Azure Database for MySQL to ensure it is same as the primary server. You can use the following command to note the GTID of the GTID purged value on the Azure Database for MySQL replica server. The value of gtid_purged should be same as gtid_executed on master noted in step 2 for GTID-based replication to work. -- ```sql - show global variables like 'gtid_purged'; - ``` --## Link source and replica servers to start Data-in Replication --1. Set the source server. -- All Data-in Replication functions are done by stored procedures. You can find all procedures at [Data-in Replication Stored Procedures](./reference-stored-procedures.md). The stored procedures can be run in the MySQL shell or MySQL Workbench. -- To link two servers and start replication, login to the target replica server in the Azure Database for MySQL service and set the external instance as the source server. This is done by using the `mysql.az_replication_change_master` stored procedure on the Azure Database for MySQL server. 
-- ```sql - CALL mysql.az_replication_change_master('<master_host>', '<master_user>', '<master_password>', <master_port>, '<master_log_file>', <master_log_pos>, '<master_ssl_ca>'); - ``` -- **Optional** - If you wish to use [GTID-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html), you'll need to use the following command to link the two servers: -- ```sql - call mysql.az_replication_change_master_with_gtid('<master_host>', '<master_user>', '<master_password>', <master_port>, '<master_ssl_ca>'); - ``` -- - master_host: hostname of the source server - - master_user: username for the source server - - master_password: password for the source server - - master_port: port number on which the source server is listening for connections. (3306 is the default port on which MySQL is listening) - - master_log_file: binary log file name from running `show master status` - - master_log_pos: binary log position from running `show master status` - - master_ssl_ca: CA certificate's context. If not using SSL, pass in an empty string. -- It's recommended to pass this parameter in as a variable. For more information, see the following examples. -- > [!NOTE] - > If the source server is hosted in an Azure VM, set "Allow access to Azure services" to "ON" to allow the source and replica servers to communicate with each other. This setting can be changed from the **Connection security** options. For more information, see [Manage firewall rules using the portal](how-to-manage-firewall-using-portal.md). -- **Examples** -- *Replication with SSL* -- The variable `@cert` is created by running the following MySQL commands: -- ```sql - SET @cert = '-----BEGIN CERTIFICATE----- - PLACE YOUR PUBLIC KEY CERTIFICATE'S CONTEXT HERE - -----END CERTIFICATE-----' - ``` -- Replication with SSL is set up between a source server hosted in the domain "companya.com" and a replica server hosted in Azure Database for MySQL. This stored procedure is run on the replica. -- ```sql - CALL mysql.az_replication_change_master('master.companya.com', 'syncuser', 'P@ssword!', 3306, 'mysql-bin.000002', 120, @cert); - ``` -- *Replication without SSL* -- Replication without SSL is set up between a source server hosted in the domain "companya.com" and a replica server hosted in Azure Database for MySQL. This stored procedure is run on the replica. -- ```sql - CALL mysql.az_replication_change_master('master.companya.com', 'syncuser', 'P@ssword!', 3306, 'mysql-bin.000002', 120, ''); - ``` --2. Set up filtering. -- If you want to skip replicating some tables from your source server, update the `replicate_wild_ignore_table` server parameter on your replica server. You can provide more than one table pattern using a comma-separated list. -- Review the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#option_mysqld_replicate-wild-ignore-table) to learn more about this parameter. -- To update the parameter, you can use the [Azure portal](how-to-server-parameters.md) or [Azure CLI](how-to-configure-server-parameters-using-cli.md). --3. Start replication. -- Call the `mysql.az_replication_start` stored procedure to start replication. -- ```sql - CALL mysql.az_replication_start; - ``` --4. Check replication status. -- Call the [`show slave status`](https://dev.mysql.com/doc/refman/5.7/en/show-slave-status.html) command on the replica server to view the replication status. 
-- ```sql - show slave status; - ``` -- If the states of `Slave_IO_Running` and `Slave_SQL_Running` are "yes" and the value of `Seconds_Behind_Master` is "0", replication is working well. `Seconds_Behind_Master` indicates how late the replica is. If the value isn't "0", it means that the replica is processing updates. --## Other useful stored procedures for Data-in Replication operations --### Stop replication --To stop replication between the source and replica server, use the following stored procedure: --```sql -CALL mysql.az_replication_stop; -``` --### Remove replication relationship --To remove the relationship between the source and replica server, use the following stored procedure: --```sql -CALL mysql.az_replication_remove_master; -``` --### Skip replication error --To skip a replication error and allow replication to continue, use the following stored procedure: --```sql -CALL mysql.az_replication_skip_counter; -``` -- **Optional** - If you wish to use [GTID-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html), use the following stored procedure to skip a transaction: --```sql -CALL mysql.az_replication_skip_gtid_transaction('<transaction_gtid>'); -``` --The procedure can skip the transaction for the given GTID. If the GTID format isn't right or the GTID transaction has already been executed, the procedure fails to execute. The GTID for a transaction can be determined by parsing the binary log to check the transaction events. MySQL provides the [mysqlbinlog](https://dev.mysql.com/doc/refman/5.7/en/mysqlbinlog.html) utility to parse binary logs and display their contents in text format, which can be used to identify the GTID of the transaction. -->[!Important] ->This procedure can only be used to skip one transaction; it can't be used to skip a GTID set or to set `gtid_purged`. --To skip the next transaction after the current replication position, use the following command to identify the GTID of the next transaction, as shown below. --```sql -SHOW BINLOG EVENTS [IN 'log_name'] [FROM pos][LIMIT [offset,] row_count] -``` -- :::image type="content" source="./media/how-to-data-in-replication/show-binary-log.png" alt-text="Show binary log results"::: --## Next steps --- Learn more about [Data-in Replication](concepts-data-in-replication.md) for Azure Database for MySQL. |
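For reference, the linking, start, and status steps above can also be run from a shell with the mysql client. This is a sketch only; the replica host, admin login, source host, credentials, and binary log coordinates are placeholders carried over from the earlier examples.

```azurecli-interactive
# Placeholder values; substitute your own replica host, admin user, and source details.
REPLICA_HOST="replica.mysql.database.azure.com"
REPLICA_ADMIN="myadmin@replica"

# Link the replica to the source (file/position-based), then start replication.
mysql -h "$REPLICA_HOST" -u "$REPLICA_ADMIN" -p -e \
  "CALL mysql.az_replication_change_master('master.companya.com', 'syncuser', 'P@ssword!', 3306, 'mysql-bin.000002', 120, '');"
mysql -h "$REPLICA_HOST" -u "$REPLICA_ADMIN" -p -e "CALL mysql.az_replication_start;"

# Confirm both replication threads are running and the lag is 0.
mysql -h "$REPLICA_HOST" -u "$REPLICA_ADMIN" -p -e "SHOW SLAVE STATUS\G" | \
  grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
```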
mysql | How To Deny Public Network Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-deny-public-network-access.md | - Title: Deny Public Network Access - Azure portal - Azure Database for MySQL -description: Learn how to configure Deny Public Network Access using the Azure portal for your Azure Database for MySQL ----- Previously updated : 06/20/2022---# Deny Public Network Access in Azure Database for MySQL using Azure portal ----This article describes how you can configure an Azure Database for MySQL server to deny all public connections and allow only connections through private endpoints, to further enhance network security. --## Prerequisites --To complete this how-to guide, you need: --* An [Azure Database for MySQL](quickstart-create-mysql-server-database-using-azure-portal.md) server with the General Purpose or Memory Optimized pricing tier --## Set Deny Public Network Access --Follow these steps to set Deny Public Network Access on your MySQL server: --1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL server. --1. On the MySQL server page, under **Settings**, click **Connection security** to open the connection security configuration page. --1. In **Deny Public Network Access**, select **Yes** to enable deny public access for your MySQL server. -- :::image type="content" source="./media/how-to-deny-public-network-access/setting-deny-public-network-access.PNG" alt-text="Azure Database for MySQL Deny network access"::: --1. Click **Save** to save the changes. --1. A notification will confirm that the connection security setting was successfully enabled. -- :::image type="content" source="./media/how-to-deny-public-network-access/setting-deny-public-network-access-success.png" alt-text="Azure Database for MySQL Deny network access success"::: --## Next steps --Learn about [how to create alerts on metrics](how-to-alert-on-metric.md). |
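The same setting can be scripted. Because a dedicated `az mysql` flag for this option may vary by CLI version, the sketch below uses the generic `az resource update` path and assumes the single-server resource model exposes the `publicNetworkAccess` property; treat both as assumptions to verify.

```azurecli-interactive
# Sketch with sample names; properties.publicNetworkAccess is assumed from the resource model.
az resource update \
  --resource-type "Microsoft.DBforMySQL/servers" \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --set properties.publicNetworkAccess=Disabled
```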
mysql | How To Double Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-double-encryption.md | - Title: Infrastructure double encryption - Azure portal - Azure Database for MySQL -description: Learn how to set up and manage Infrastructure double encryption for your Azure Database for MySQL. ----- Previously updated : 06/20/2022---# Infrastructure double encryption for Azure Database for MySQL ----Learn how to set up and manage Infrastructure double encryption for your Azure Database for MySQL. --## Prerequisites --* You must have an Azure subscription and be an administrator on that subscription. -* The Azure Database for MySQL - Single Server should be on the General Purpose or Memory Optimized pricing tier and on general purpose storage v2. Before you proceed further, refer to the limitations for [infrastructure double encryption](concepts-infrastructure-double-encryption.md#limitations). --## Create an Azure Database for MySQL server with Infrastructure Double encryption - Portal --Follow these steps to create an Azure Database for MySQL server with Infrastructure double encryption from the Azure portal: --1. Select **Create a resource** (+) in the upper-left corner of the portal. --2. Select **Databases** > **Azure Database for MySQL**. You can also enter **MySQL** in the search box to find the service. -- :::image type="content" source="./media/quickstart-create-mysql-server-database-using-azure-portal/2-navigate-to-mysql.png" alt-text="Azure Database for MySQL option"::: --3. Provide the basic information for the server. Select **Additional settings** and enable the **Infrastructure double encryption** checkbox to set the parameter. -- :::image type="content" source="./media/how-to-double-encryption/infrastructure-encryption-selected.png" alt-text="Azure Database for MySQL selections"::: --4. Select **Review + create** to provision the server. -- :::image type="content" source="./media/how-to-double-encryption/infrastructure-encryption-summary.png" alt-text="Azure Database for MySQL summary"::: --5. Once the server is created, you can validate the infrastructure double encryption by checking the status in the **Data encryption** server blade. -- :::image type="content" source="./media/how-to-double-encryption/infrastructure-encryption-validation.png" alt-text="Azure Database for MySQL validation"::: --## Create an Azure Database for MySQL server with Infrastructure Double encryption - CLI --Follow these steps to create an Azure Database for MySQL server with Infrastructure double encryption from the CLI: --This example creates a resource group named `myresourcegroup` in the `westus` location. --```azurecli-interactive -az group create --name myresourcegroup --location westus -``` -The following example creates a MySQL 5.7 server in West US named `mydemoserver` in your resource group `myresourcegroup` with server admin login `myadmin`. This is a **Gen 5** **General Purpose** server with **2 vCores**. This command also enables infrastructure double encryption for the created server. Substitute the `<server_admin_password>` with your own value. 
--```azurecli-interactive -az mysql server create --resource-group myresourcegroup --name mydemoserver --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2 --version 5.7 --infrastructure-encryption Enabled -``` --## Next steps -- To learn more about data encryption, see [Azure Database for MySQL infrastructure double encryption](concepts-infrastructure-double-encryption.md). |
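To confirm the setting from the CLI after creation, you can query the server resource. The `infrastructureEncryption` property name here is taken from the server's resource model and is an assumption worth verifying against your CLI version.

```azurecli-interactive
# Verification sketch; expect "Enabled" for a server created with infrastructure double encryption.
az mysql server show --resource-group myresourcegroup --name mydemoserver --query infrastructureEncryption -o tsv
```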
mysql | How To Major Version Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-major-version-upgrade.md | - Title: Major version upgrade in Azure Database for MySQL - Single Server -description: This article describes how you can upgrade the major version of Azure Database for MySQL - Single Server ------ Previously updated : 06/20/2022---# Major version upgrade in Azure Database for MySQL single server ----> [!NOTE] -> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we will remove it from this article. --> [!IMPORTANT] -> Major version upgrade for Azure Database for MySQL single server is in public preview. --This article describes how you can upgrade your MySQL major version in-place in Azure Database for MySQL single server. --This feature enables customers to perform in-place upgrades of their MySQL 5.6 servers to MySQL 5.7 with the click of a button, without any data movement or application connection string changes. --> [!NOTE] -> * Major version upgrade is only available for major version upgrade from MySQL 5.6 to MySQL 5.7. -> * The server will be unavailable throughout the upgrade operation. It is therefore recommended to perform upgrades during your planned maintenance window. You can consider [performing a minimal downtime major version upgrade from MySQL 5.6 to MySQL 5.7 using read replicas](#perform-minimal-downtime-major-version-upgrade-from-mysql-56-to-mysql-57-using-read-replicas). --## Perform major version upgrade from MySQL 5.6 to MySQL 5.7 using Azure portal --Follow these steps to perform a major version upgrade for your Azure Database for MySQL 5.6 server using the Azure portal: --> [!IMPORTANT] -> We recommend performing the upgrade first on a restored copy of the server rather than upgrading production directly. See [how to perform point-in-time restore](how-to-restore-server-portal.md#point-in-time-restore). --1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.6 server. --2. From the **Overview** page, click the **Upgrade** button in the toolbar. --3. In the **Upgrade** section, select **OK** to upgrade the Azure Database for MySQL 5.6 server to 5.7. -- :::image type="content" source="./media/how-to-major-version-upgrade-portal/upgrade.png" alt-text="Azure Database for MySQL - overview - upgrade"::: --4. A notification will confirm that the upgrade is successful. ---## Perform major version upgrade from MySQL 5.6 to MySQL 5.7 using Azure CLI --Follow these steps to perform a major version upgrade for your Azure Database for MySQL 5.6 server using the Azure CLI: --> [!IMPORTANT] -> We recommend performing the upgrade first on a restored copy of the server rather than upgrading production directly. See [how to perform point-in-time restore](how-to-restore-server-cli.md#server-point-in-time-restore). --1. Install [Azure CLI for Windows](/cli/azure/install-azure-cli) or use Azure CLI in [Azure Cloud Shell](../../cloud-shell/overview.md) to run the upgrade commands. - - This upgrade requires version 2.16.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. Run `az version` to find the version and dependent libraries that are installed. To upgrade to the latest version, run `az upgrade`. --2. 
After you sign in, run the [az mysql server upgrade](/cli/azure/mysql/server#az-mysql-server-upgrade) command: -- ```azurecli - az mysql server upgrade --name testsvr --resource-group testgroup --subscription MySubscription --target-server-version 5.7 - ``` - - The command prompt shows the "-Running" message. After this message is no longer displayed, the version upgrade is complete. --## Perform major version upgrade from MySQL 5.6 to MySQL 5.7 on read replica using Azure portal --1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.6 read replica server. --2. From the **Overview** page, click the **Upgrade** button in the toolbar. --3. In the **Upgrade** section, select **OK** to upgrade the Azure Database for MySQL 5.6 read replica server to 5.7. -- :::image type="content" source="./media/how-to-major-version-upgrade-portal/upgrade.png" alt-text="Azure Database for MySQL - overview - upgrade"::: --4. A notification will confirm that the upgrade is successful. --5. From the **Overview** page, confirm that your Azure Database for MySQL read replica server version is 5.7. --6. Now go to your primary server and [perform the major version upgrade](#perform-major-version-upgrade-from-mysql-56-to-mysql-57-using-azure-portal) on it. --## Perform minimal downtime major version upgrade from MySQL 5.6 to MySQL 5.7 using read replicas --You can perform a minimal-downtime major version upgrade from MySQL 5.6 to MySQL 5.7 by utilizing read replicas. The idea is to upgrade the read replica of your server to 5.7 first and then fail over your application to point to the read replica, making it the new primary. --1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.6 server. --2. Create a [read replica](./concepts-read-replicas.md#create-a-replica) from your primary server. --3. [Upgrade your read replica](#perform-major-version-upgrade-from-mysql-56-to-mysql-57-on-read-replica-using-azure-portal) to version 5.7. --4. Once you confirm that the replica server is running on version 5.7, stop your application from connecting to your primary server. - -5. Check the replication status to make sure the replica has caught up with the primary, so all the data is in sync, and ensure there are no new operations performed on the primary. -- Call the [`show slave status`](https://dev.mysql.com/doc/refman/5.7/en/show-slave-status.html) command on the replica server to view the replication status. -- ```sql - SHOW SLAVE STATUS\G - ``` -- If the states of `Slave_IO_Running` and `Slave_SQL_Running` are "yes" and the value of `Seconds_Behind_Master` is "0", replication is working well. `Seconds_Behind_Master` indicates how late the replica is. If the value isn't "0", it means that the replica is processing updates. Once you confirm `Seconds_Behind_Master` is "0", it's safe to stop replication. --6. Promote your read replica to primary by [stopping replication](./how-to-read-replicas-portal.md#stop-replication-to-a-replica-server). --7. Point your application to the new primary (former replica), which is running MySQL 5.7. Each server has a unique connection string. Update your application to point to the (former) replica instead of the source. --> [!NOTE] -> This scenario will have downtime during steps 4, 5, and 6 only. ---## Frequently asked questions --### When will this upgrade feature be GA as we have MySQL v5.6 in our production environment that we need to upgrade? --The GA of this feature is planned before MySQL v5.6 retirement. 
However, the feature is production ready and fully supported by Azure, so you can run it with confidence in your environment. As a recommended best practice, we strongly suggest you run and test it first on a restored copy of the server, so you can estimate the downtime during the upgrade and perform application compatibility tests before you run it on production. For more information, see [how to perform point-in-time restore](how-to-restore-server-portal.md#point-in-time-restore) to create a point-in-time copy of your server. --### Will this cause downtime of the server and if so, how long? --Yes, the server will be unavailable during the upgrade process, so we recommend you perform this operation during your planned maintenance window. The estimated downtime depends on the database size, the storage size provisioned (IOPS provisioned), and the number of tables in the database. The upgrade time is directly proportional to the number of tables on the server. Upgrades of Basic SKU servers are expected to take longer because the Basic tier runs on the standard storage platform. To estimate the downtime for your server environment, we recommend first performing the upgrade on a restored copy of the server. Consider [performing a minimal downtime major version upgrade from MySQL 5.6 to MySQL 5.7 using read replicas](#perform-minimal-downtime-major-version-upgrade-from-mysql-56-to-mysql-57-using-read-replicas). --### What happens if we do not choose to upgrade our MySQL v5.6 server before February 5, 2021? --You can still continue running your MySQL v5.6 server as before. Azure **will never** perform a forced upgrade on your server. However, the restrictions documented in [Azure Database for MySQL versioning policy](../concepts-version-policy.md) will apply. --## Next steps --Learn about [Azure Database for MySQL versioning policy](../concepts-version-policy.md). |
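As a quick sanity check around the upgrade, you can confirm the server version from the CLI before and after, using the same sample server and resource group names as the upgrade command above.

```azurecli-interactive
# Confirm the version before the upgrade (expect 5.6).
az mysql server show --resource-group testgroup --name testsvr --query version -o tsv

# Run the in-place upgrade.
az mysql server upgrade --name testsvr --resource-group testgroup --target-server-version 5.7

# Confirm the version after the upgrade (expect 5.7).
az mysql server show --resource-group testgroup --name testsvr --query version -o tsv
```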
mysql | How To Manage Firewall Using Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-firewall-using-cli.md | - Title: Manage firewall rules - Azure CLI - Azure Database for MySQL -description: This article describes how to create and manage Azure Database for MySQL firewall rules using Azure CLI command-line. ----- Previously updated : 06/20/2022----# Create and manage Azure Database for MySQL firewall rules by using the Azure CLI ----Server-level firewall rules can be used to manage access to an Azure Database for MySQL Server from a specific IP address or a range of IP addresses. Using convenient Azure CLI commands, you can create, update, delete, list, and show firewall rules to manage your server. For an overview of Azure Database for MySQL firewalls, see [Azure Database for MySQL server firewall rules](./concepts-firewall-rules.md). --Virtual Network (VNet) rules can also be used to secure access to your server. Learn more about [creating and managing Virtual Network service endpoints and rules using the Azure CLI](how-to-manage-vnet-using-cli.md). --## Prerequisites -* [Install Azure CLI](/cli/azure/install-azure-cli). -* An [Azure Database for MySQL server and database](quickstart-create-mysql-server-database-using-azure-cli.md). --## Firewall rule commands: -The **az mysql server firewall-rule** command is used from the Azure CLI to create, delete, list, show, and update firewall rules. --Commands: -- **create**: Create an Azure MySQL server firewall rule.-- **delete**: Delete an Azure MySQL server firewall rule.-- **list**: List the Azure MySQL server firewall rules.-- **show**: Show the details of an Azure MySQL server firewall rule.-- **update**: Update an Azure MySQL server firewall rule.--## Sign in to Azure and list your Azure Database for MySQL Servers -Securely connect Azure CLI with your Azure account by using the **az login** command. --1. From the command-line, run the following command: - ```azurecli - az login - ``` - This command outputs a code to use in the next step. --2. Use a web browser to open the page [https://aka.ms/devicelogin](https://aka.ms/devicelogin), and then enter the code. --3. At the prompt, sign in using your Azure credentials. --4. After your login is authorized, a list of subscriptions is printed in the console. Copy the ID of the desired subscription to set the current subscription to use. Use the [az account set](/cli/azure/account#az-account-set) command. - ```azurecli-interactive - az account set --subscription <your subscription id> - ``` --5. List the Azure Databases for MySQL servers for your subscription and resource group if you are unsure of the names. Use the [az mysql server list](/cli/azure/mysql/server#az-mysql-server-list) command. -- ```azurecli-interactive - az mysql server list --resource-group myresourcegroup - ``` -- Note the name attribute in the listing, which you need to specify the MySQL server to work on. If needed, confirm the details for that server and using the name attribute to ensure it is correct. Use the [az mysql server show](/cli/azure/mysql/server#az-mysql-server-show) command. -- ```azurecli-interactive - az mysql server show --resource-group myresourcegroup --name mydemoserver - ``` --## List firewall rules on Azure Database for MySQL Server -Using the server name and the resource group name, list the existing server firewall rules on the server. 
Use the [az mysql server firewall list](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-list) command. Notice that the server name attribute is specified in the **--server** switch and not in the **--name** switch. -```azurecli-interactive -az mysql server firewall-rule list --resource-group myresourcegroup --server-name mydemoserver -``` -The output lists the rules, if any, in JSON format (by default). You can use the **--output table** switch to output the results in a more readable table format. -```azurecli-interactive -az mysql server firewall-rule list --resource-group myresourcegroup --server-name mydemoserver --output table -``` -## Create a firewall rule on Azure Database for MySQL Server -Using the Azure MySQL server name and the resource group name, create a new firewall rule on the server. Use the [az mysql server firewall create](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-create) command. Provide a name for the rule, as well as the start IP and end IP (to provide access to a range of IP addresses) for the rule. -```azurecli-interactive -az mysql server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1 --start-ip-address 13.83.152.0 --end-ip-address 13.83.152.15 -``` --To allow access for a single IP address, provide the same IP address as the Start IP and End IP, as in this example. -```azurecli-interactive -az mysql server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1 --start-ip-address 1.1.1.1 --end-ip-address 1.1.1.1 -``` --To allow applications from Azure IP addresses to connect to your Azure Database for MySQL server, provide the IP address 0.0.0.0 as the Start IP and End IP, as in this example. -```azurecli-interactive -az mysql server firewall-rule create --resource-group myresourcegroup --server mysql --name "AllowAllWindowsAzureIps" --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0 -``` --> [!IMPORTANT] -> This option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users. -> --Upon success, each create command output lists the details of the firewall rule you have created, in JSON format (by default). If there is a failure, the output shows error message text instead. --## Update a firewall rule on Azure Database for MySQL server -Using the Azure MySQL server name and the resource group name, update an existing firewall rule on the server. Use the [az mysql server firewall update](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-update) command. Provide the name of the existing firewall rule as input, as well as the start IP and end IP attributes to update. -```azurecli-interactive -az mysql server firewall-rule update --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1 --start-ip-address 13.83.152.0 --end-ip-address 13.83.152.1 -``` -Upon success, the command output lists the details of the firewall rule you have updated, in JSON format (by default). If there is a failure, the output shows error message text instead. --> [!NOTE] -> If the firewall rule does not exist, the rule is created by the update command. 
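Because the update command creates a missing rule, it works well as an upsert in small scripts. The sketch below uses this article's sample names; ifconfig.me is a third-party echo service shown only as one example way to discover your client IP, so substitute whatever lookup your environment allows.

```azurecli-interactive
# Upsert sketch: keep a client-IP rule current, relying on update's create-if-missing behavior.
MY_IP=$(curl -s https://ifconfig.me)
az mysql server firewall-rule update --resource-group myresourcegroup --server-name mydemoserver \
  --name AllowMyClientIP --start-ip-address "$MY_IP" --end-ip-address "$MY_IP"
```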
--## Show firewall rule details on Azure Database for MySQL Server -Using the Azure MySQL server name and the resource group name, show the existing firewall rule details from the server. Use the [az mysql server firewall show](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-show) command. Provide the name of the existing firewall rule as input. -```azurecli-interactive -az mysql server firewall-rule show --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1 -``` -Upon success, the command output lists the details of the firewall rule you have specified, in JSON format (by default). If there is a failure, the output shows error message text instead. --## Delete a firewall rule on Azure Database for MySQL Server -Using the Azure MySQL server name and the resource group name, remove an existing firewall rule from the server. Use the [az mysql server firewall delete](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-delete) command. Provide the name of the existing firewall rule. -```azurecli-interactive -az mysql server firewall-rule delete --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1 -``` -Upon success, there is no output. Upon failure, error message text displays. --## Next steps -- Understand more about [Azure Database for MySQL Server firewall rules](./concepts-firewall-rules.md).-- [Create and manage Azure Database for MySQL firewall rules using the Azure portal](./how-to-manage-firewall-using-portal.md).-- Further secure access to your server by [creating and managing Virtual Network service endpoints and rules using the Azure CLI](how-to-manage-vnet-using-cli.md). |
mysql | How To Manage Firewall Using Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-firewall-using-portal.md | - Title: Manage firewall rules - Azure portal - Azure Database for MySQL -description: Create and manage Azure Database for MySQL firewall rules using the Azure portal ----- Previously updated : 06/20/2022---# Create and manage Azure Database for MySQL firewall rules by using the Azure portal ----Server-level firewall rules can be used to manage access to an Azure Database for MySQL Server from a specified IP address or a range of IP addresses. --Virtual Network (VNet) rules can also be used to secure access to your server. Learn more about [creating and managing Virtual Network service endpoints and rules using the Azure portal](how-to-manage-vnet-using-portal.md). --> [!NOTE] -> Virtual Network (VNet) rules can only be used on General Purpose or Memory Optimized tiers. --## Create a server-level firewall rule in the Azure portal --1. On the MySQL server page, under Settings heading, click **Connection Security** to open the Connection Security page for the Azure Database for MySQL. -- :::image type="content" source="./media/how-to-manage-firewall-using-portal/1-connection-security.png" alt-text="Azure portal - click Connection security"::: --2. Click **Add My IP** on the toolbar. This automatically creates a firewall rule with the public IP address of your computer, as perceived by the Azure system. -- :::image type="content" source="./media/how-to-manage-firewall-using-portal/2-add-my-ip.png" alt-text="Azure portal - click Add My IP"::: --3. Verify your IP address before saving the configuration. In some situations, the IP address observed by Azure portal differs from the IP address used when accessing the internet and Azure servers. Therefore, you may need to change the Start IP and End IP to make the rule function as expected. -- Use a search engine or other online tool to check your own IP address. For example, search "what is my IP address". --4. Add additional address ranges. In the firewall rules for the Azure Database for MySQL, you can specify a single IP address or a range of addresses. If you want to limit the rule to a single IP address, type the same address in the Start IP and End IP fields. Opening the firewall enables administrators, users, and application to access any database on the MySQL server to which they have valid credentials. -- :::image type="content" source="./media/how-to-manage-firewall-using-portal/4-specify-addresses.png" alt-text="Azure portal - firewall rules"::: --5. Click **Save** on the toolbar to save this server-level firewall rule. Wait for the confirmation that the update to the firewall rules is successful. -- :::image type="content" source="./media/how-to-manage-firewall-using-portal/5-save-firewall-rule.png" alt-text="Azure portal - click Save"::: --## Connecting from Azure -To allow applications from Azure to connect to your Azure Database for MySQL server, Azure connections must be enabled. For example, to host an Azure Web Apps application, or an application that runs in an Azure VM, or to connect from an Azure Data Factory data management gateway. The resources do not need to be in the same Virtual Network (VNet) or Resource Group for the firewall rule to enable those connections. When an application from Azure attempts to connect to your database server, the firewall verifies that Azure connections are allowed. 
There are a couple of methods to enable these types of connections. A firewall setting with starting and ending address equal to 0.0.0.0 indicates these connections are allowed. Alternatively, you can set the **Allow access to Azure services** option to **ON** in the portal from the **Connection security** pane and hit **Save**. If the connection attempt is not allowed, the request does not reach the Azure Database for MySQL server. --> [!IMPORTANT] -> This option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users. -> --## Manage existing server-level firewall rules by using the Azure portal -Repeat the steps to manage the firewall rules. -* To add the current computer, click **+ Add My IP**. Click **Save** to save the changes. -* To add additional IP addresses, type in the **RULE NAME**, **START IP**, and **END IP**. Click **Save** to save the changes. -* To modify an existing rule, click any of the fields in the rule, and then modify. Click **Save** to save the changes. -* To delete an existing rule, click the ellipsis […], and then click **Delete**. Click **Save** to save the changes. ---## Next steps -- Similarly, you can script to [Create and manage Azure Database for MySQL firewall rules using Azure CLI](how-to-manage-firewall-using-cli.md).-- Further secure access to your server by [creating and managing Virtual Network service endpoints and rules using the Azure portal](how-to-manage-vnet-using-portal.md).-- For help in connecting to an Azure Database for MySQL server, see [Connection libraries for Azure Database for MySQL](../flexible-server/concepts-connection-libraries.md). |
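For reference, scripting the equivalent of the **Allow access to Azure services** option uses the special 0.0.0.0 rule described above; the server and resource group names here are samples.

```azurecli-interactive
# Equivalent of "Allow access to Azure services = ON" (see the security caution above).
az mysql server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver \
  --name AllowAllWindowsAzureIps --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
```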
mysql | How To Manage Single Server Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-single-server-cli.md | - Title: Manage server - Azure CLI - Azure Database for MySQL -description: Learn how to manage an Azure Database for MySQL server from the Azure CLI. ------ Previously updated : 06/20/2022---# Manage an Azure Database for MySQL single server using the Azure CLI ----This article shows you how to manage your Single Server instances deployed in Azure. Management tasks include compute and storage scaling, admin password reset, and viewing server details. --## Prerequisites -If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. This article requires that you're running the Azure CLI version 2.0 or later locally. To see the version installed, run the `az --version` command. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). --You'll need to log in to your account using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to the **Subscription ID** for your Azure account. --```azurecli-interactive -az login -``` --Select the specific subscription under your account using the [az account set](/cli/azure/account) command. Make a note of the **id** value from the **az login** output to use as the value for the **subscription** argument in the command. If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. To get all your subscriptions, use [az account list](/cli/azure/account#az-account-list). --```azurecli -az account set --subscription <subscription id> -``` --If you have not already created a server, refer to this [quickstart](quickstart-create-mysql-server-database-using-azure-cli.md) to create one. --## Scale compute and storage -You can easily scale up your pricing tier, compute, and storage using the following command. You can see all the server operations you can perform in the [az mysql server overview](/cli/azure/mysql/server). --```azurecli-interactive -az mysql server update --resource-group myresourcegroup --name mydemoserver --sku-name GP_Gen5_4 --storage-size 6144 -``` --Here are the details for the arguments above: --**Setting** | **Sample value** | **Description** -|| -name | mydemoserver | Enter a unique name for your Azure Database for MySQL server. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain from 3 to 63 characters. -resource-group | myresourcegroup | Provide the name of the Azure resource group. -sku-name|GP_Gen5_4|Enter the name of the pricing tier and compute configuration. Follows the convention {pricing tier}_{compute generation}_{vCores} in shorthand. See the [pricing tiers](./concepts-pricing-tiers.md) for more information. -storage-size | 6144 | The storage capacity of the server (unit is megabytes). Minimum 5120, and increases in 1024 increments. --> [!Important] -> - Storage can be scaled up (however, you cannot scale storage down) -> - Scaling up from Basic to General purpose or Memory optimized pricing tier is not supported. 
You can manually scale up with either [using a bash script](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/upgrade-from-basic-to-general-purpose-or-memory-optimized-tiers/ba-p/830404) or [using MySQL Workbench](https://techcommunity.microsoft.com/t5/azure-database-support-blog/how-to-scale-up-azure-database-for-mysql-from-basic-tier-to/ba-p/369134). ---## Manage MySQL databases on a server -You can use any of these commands to create, delete, list, and view the properties of a database on your server: --| Cmdlet | Usage| Description | -| | | | -|[az mysql db create](/cli/azure/mysql/db#az-mysql-db-create)|```az mysql db create -g myresourcegroup -s mydemoserver -n mydatabasename``` |Creates a database| -|[az mysql db delete](/cli/azure/mysql/db#az-mysql-db-delete)|```az mysql db delete -g myresourcegroup -s mydemoserver -n mydatabasename```|Deletes your database from your server. This command does not delete your server. | -|[az mysql db list](/cli/azure/mysql/db#az-mysql-db-list)|```az mysql db list -g myresourcegroup -s mydemoserver```|Lists all the databases on the server| -|[az mysql db show](/cli/azure/mysql/db#az-mysql-db-show)|```az mysql db show -g myresourcegroup -s mydemoserver -n mydatabasename```|Shows more details of the database| --## Update admin password -You can change the administrator role's password with this command: -```azurecli-interactive -az mysql server update --resource-group myresourcegroup --name mydemoserver --admin-password <new-password> -``` --> [!Important] -> Make sure the password is between 8 and 128 characters. -> It must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters. --## Delete a server -If you would just like to delete the MySQL single server, you can run the [az mysql server delete](/cli/azure/mysql/server#az-mysql-server-delete) command. --```azurecli-interactive -az mysql server delete --resource-group myresourcegroup --name mydemoserver -``` --## Next steps -- [Restart a server](how-to-restart-server-cli.md)-- [Restore a server in a bad state](how-to-restore-server-cli.md)-- [Monitor and tune the server](concepts-monitoring.md) |
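Putting the scale operation and a verification step together, here's a short sketch using this article's sample names; the query paths (`sku.name`, `storageProfile.storageMb`) follow the single-server resource model and are worth confirming on your CLI version.

```azurecli-interactive
# Scale up, then confirm the new SKU and storage allocation.
az mysql server update --resource-group myresourcegroup --name mydemoserver --sku-name GP_Gen5_4 --storage-size 6144
az mysql server show --resource-group myresourcegroup --name mydemoserver \
  --query "{sku:sku.name, storageMb:storageProfile.storageMb}"
```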
mysql | How To Manage Vnet Using Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-vnet-using-cli.md | - Title: Manage VNet endpoints - Azure CLI - Azure Database for MySQL -description: This article describes how to create and manage Azure Database for MySQL VNet service endpoints and rules using the Azure CLI. ----- Previously updated : 06/20/2022----# Create and manage Azure Database for MySQL VNet service endpoints using Azure CLI ----Virtual Network (VNet) service endpoints and rules extend the private address space of a Virtual Network to your Azure Database for MySQL server. Using convenient Azure CLI commands, you can create, update, delete, list, and show VNet service endpoints and rules to manage your server. For an overview of Azure Database for MySQL VNet service endpoints, including limitations, see [Azure Database for MySQL Server VNet service endpoints](concepts-data-access-and-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for MySQL. ----> [!NOTE] -> Support for VNet service endpoints is only for General Purpose and Memory Optimized servers. -> In the case of VNet peering, if traffic is flowing through a common VNet gateway with service endpoints and is supposed to flow to the peer, create an ACL/VNet rule to allow Azure Virtual Machines in the gateway VNet to access the Azure Database for MySQL server. --## Configure VNet service endpoints for Azure Database for MySQL --The [az network vnet](/cli/azure/network/vnet) commands are used to configure Virtual Networks. --If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. The account must have the necessary permissions to create a virtual network and service endpoint. -Service endpoints can be configured on virtual networks independently by a user with write access to the virtual network. --To secure Azure service resources to a VNet, the user must have permission to "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/" for the subnets being added. This permission is included in the built-in service administrator roles by default and can be modified by creating custom roles. --Learn more about [built-in roles](../../role-based-access-control/built-in-roles.md) and assigning specific permissions to [custom roles](../../role-based-access-control/custom-roles.md). --VNets and Azure service resources can be in the same or different subscriptions. If the VNet and Azure service resources are in different subscriptions, the resources should be under the same Active Directory (AD) tenant. Ensure that both subscriptions have the **Microsoft.Sql** and **Microsoft.DBforMySQL** resource providers registered. For more information, see [resource-manager-registration][resource-manager-portal]. --> [!IMPORTANT] -> It is highly recommended that you read this article about service endpoint configurations and considerations before running the sample script below or configuring service endpoints. **Virtual Network service endpoint:** A [Virtual Network service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) is a subnet whose property values include one or more formal Azure service type names. VNet service endpoints use the service type name **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure SQL Database, Azure Database for PostgreSQL, and Azure Database for MySQL services. Note that applying the **Microsoft.Sql** service tag to a VNet service endpoint configures service endpoint traffic for all Azure Database services on the subnet, including Azure SQL Database, Azure Database for PostgreSQL, and Azure Database for MySQL servers. --## Sample script ---### Run the script
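The sample script itself comes from an include file in the source article. A minimal sketch of the typical flow, assuming hypothetical resource names (`myresourcegroup`, `mydemoserver`, `myVNet`, `mySubnet`) and a placeholder password, might look like the following:

```azurecli
# Create a resource group and a server to secure
az group create --name myresourcegroup --location westus
az mysql server create --resource-group myresourcegroup --name mydemoserver \
    --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2

# Create a VNet and a subnet with the Microsoft.SQL service endpoint enabled
az network vnet create --resource-group myresourcegroup --name myVNet --address-prefixes 10.0.0.0/16
az network vnet subnet create --resource-group myresourcegroup --vnet-name myVNet --name mySubnet \
    --address-prefixes 10.0.1.0/24 --service-endpoints Microsoft.SQL

# Create a VNet rule on the server to secure it to the subnet
az mysql server vnet-rule create --resource-group myresourcegroup --server-name mydemoserver \
    --name myVNetRule --vnet-name myVNet --subnet mySubnet
```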
---## Clean up resources ---```azurecli -az group delete --name $resourceGroup -``` --<!-- Link references, to text, within this same GitHub repo. --> -[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md |
mysql | How To Manage Vnet Using Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-vnet-using-portal.md | - Title: Manage VNet endpoints - Azure portal - Azure Database for MySQL -description: Create and manage Azure Database for MySQL VNet service endpoints and rules using the Azure portal ----- Previously updated : 06/20/2022---# Create and manage Azure Database for MySQL VNet service endpoints and VNet rules by using the Azure portal ----Virtual Network (VNet) service endpoints and rules extend the private address space of a Virtual Network to your Azure Database for MySQL server. For an overview of Azure Database for MySQL VNet service endpoints, including limitations, see [Azure Database for MySQL Server VNet service endpoints](concepts-data-access-and-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for MySQL. --> [!NOTE] -> Support for VNet service endpoints is only for General Purpose and Memory Optimized servers. -> In the case of VNet peering, if traffic is flowing through a common VNet gateway with service endpoints and is supposed to flow to the peer, create an ACL/VNet rule to allow Azure Virtual Machines in the gateway VNet to access the Azure Database for MySQL server. ---## Create a VNet rule and enable service endpoints in the Azure portal --1. On the MySQL server page, under the Settings heading, select **Connection Security** to open the Connection Security pane for Azure Database for MySQL. --2. Ensure that the Allow access to Azure services control is set to **No**. --> [!Important] -> If you leave the control set to **Yes**, your Azure Database for MySQL server accepts communication from any subnet. Leaving the control set to **Yes** might grant excessive access from a security point of view. The Microsoft Azure Virtual Network service endpoint feature, in coordination with the virtual network rule feature of Azure Database for MySQL, can together reduce your security surface area. --3. Next, select **+ Add existing virtual network**. If you don't have an existing VNet, you can select **+ Create new virtual network** to create one. See [Quickstart: Create a virtual network using the Azure portal](../../virtual-network/quick-create-portal.md). -- :::image type="content" source="./media/how-to-manage-vnet-using-portal/1-connection-security.png" alt-text="Azure portal - click Connection security"::: --4. Enter a VNet rule name; select the subscription, virtual network, and subnet name; and then select **Enable**. This automatically enables VNet service endpoints on the subnet using the **Microsoft.SQL** service tag. -- :::image type="content" source="./media/how-to-manage-vnet-using-portal/2-configure-vnet.png" alt-text="Azure portal - configure VNet"::: -- The account must have the necessary permissions to create a virtual network and service endpoint. -- Service endpoints can be configured on virtual networks independently by a user with write access to the virtual network. - - To secure Azure service resources to a VNet, the user must have permission to "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/" for the subnets being added. This permission is included in the built-in service administrator roles by default and can be modified by creating custom roles. - - Learn more about [built-in roles](../../role-based-access-control/built-in-roles.md) and assigning specific permissions to [custom roles](../../role-based-access-control/custom-roles.md). - - VNets and Azure service resources can be in the same or different subscriptions. If the VNet and Azure service resources are in different subscriptions, the resources should be under the same Active Directory (AD) tenant. Ensure that both subscriptions have the **Microsoft.Sql** and **Microsoft.DBforMySQL** resource providers registered. For more information, see [resource-manager-registration][resource-manager-portal]. -- > [!IMPORTANT] - > It is highly recommended that you read this article about service endpoint configurations and considerations before configuring service endpoints. **Virtual Network service endpoint:** A [Virtual Network service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) is a subnet whose property values include one or more formal Azure service type names. VNet service endpoints use the service type name **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure SQL Database, Azure Database for PostgreSQL, and Azure Database for MySQL services. Note that applying the **Microsoft.Sql** service tag to a VNet service endpoint configures service endpoint traffic for all Azure Database services on the subnet, including Azure SQL Database, Azure Database for PostgreSQL, and Azure Database for MySQL servers. - > --5. Once enabled, select **OK**, and you'll see that VNet service endpoints are enabled along with a VNet rule. -- :::image type="content" source="./media/how-to-manage-vnet-using-portal/3-vnet-service-endpoints-enabled-vnet-rule-created.png" alt-text="VNet service endpoints enabled and VNet rule created":::
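To confirm the new rule from the command line, you can list the server's VNet rules with the Azure CLI. A minimal sketch, assuming the server is named `mydemoserver` in the resource group `myresourcegroup`:

```azurecli
# List the VNet rules configured on the server
az mysql server vnet-rule list --resource-group myresourcegroup --server-name mydemoserver --output table
```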
--## Next steps -- Similarly, you can script this process: see [Enable VNet service endpoints and create a VNet rule for Azure Database for MySQL using Azure CLI](how-to-manage-vnet-using-cli.md).-- For help in connecting to an Azure Database for MySQL server, see [Connection libraries for Azure Database for MySQL](../flexible-server/concepts-connection-libraries.md)--<!-- Link references, to text, within this same GitHub repo. --> -[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md |
mysql | How To Migrate Rds Mysql Workbench | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-migrate-rds-mysql-workbench.md | - Title: Migrate Amazon RDS for MySQL to Azure Database for MySQL using MySQL Workbench -description: This article describes how to migrate Amazon RDS for MySQL to Azure Database for MySQL by using the MySQL Workbench Migration Wizard. ----- Previously updated : 06/20/2022---# Migrate Amazon RDS for MySQL to Azure Database for MySQL using MySQL Workbench ----You can use various utilities, such as MySQL Workbench Export/Import, Azure Database Migration Service (DMS), and MySQL dump and restore, to migrate Amazon RDS for MySQL to Azure Database for MySQL. However, using the MySQL Workbench Migration Wizard provides an easy and convenient way to move your Amazon RDS for MySQL databases to Azure Database for MySQL. --With the Migration Wizard, you can conveniently select which schemas and objects to migrate. It also allows you to view server logs to identify errors and bottlenecks in real time. As a result, you can edit and modify tables or database structures and objects during the migration process when an error is detected, and then resume migration without having to restart from scratch. --> [!NOTE] -> You can also use the Migration Wizard to migrate other sources, such as Microsoft SQL Server, Oracle, PostgreSQL, MariaDB, etc., which are outside the scope of this article. --## Prerequisites --Before you start the migration process, it's recommended that you ensure that several parameters and features are configured properly, as described below. --- Make sure the character set of the source and target databases is the same.-- Set the wait timeout to a reasonable time depending on the amount of data or workload you want to import or migrate.-- Set the `max_allowed_packet` parameter to a reasonable value depending on the size of the database you want to import or migrate (see the example after this list).-- Verify that all of your tables use InnoDB, as Azure Database for MySQL Server only supports the InnoDB storage engine.-- Remove, replace, or modify all triggers, stored procedures, and other functions containing root user or super user definers (Azure Database for MySQL doesn't support the super user privilege). To replace the definers with the name of the admin user that is running the import process, run the following command:-- ``` - DELIMITER; ;/*!50003 CREATE*/ /*!50017 DEFINER=`root`@`127.0.0.1`*/ /*!50003 - DELIMITER; - /* Modified to */ - DELIMITER; - /*!50003 CREATE*//*!50017 DEFINER=`AdminUserName`@`ServerName`*/ /*!50003 - DELIMITER; -- ``` --- If User Defined Functions (UDFs) are running on your database server, you need to delete the privilege for the mysql database. To determine if any UDFs are running on your server, use the following query:-- ``` - SELECT * FROM mysql.func; - ``` -- If you discover that UDFs are running, you can drop the UDFs by using the following query: -- ``` - DROP FUNCTION your_UDFunction; - ``` --- Make sure that the server on which the tool is running, and ultimately the export location, has ample disk space and compute power (vCores, CPU, and memory) to perform the export operation, especially when exporting a very large database.-- Create a path between the on-premises or AWS instance and Azure Database for MySQL if the workload is behind firewalls or other network security layers.
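For example, on the target Azure Database for MySQL server you can adjust `max_allowed_packet` with the Azure CLI. This is a sketch; the server name, resource group, and the 512-MB value are assumptions to adapt to your workload:

```azurecli
# Raise max_allowed_packet (value in bytes) on the target server before a large import
az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver \
    --name max_allowed_packet --value 536870912
```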
--## Begin the migration process --1. To start the migration process, sign in to MySQL Workbench, and then select the home icon. -2. In the left-hand navigation bar, select the Migration Wizard icon, as shown in the screenshot below. -- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/begin-the-migration.png" alt-text="MySQL Workbench start screen"::: -- The **Overview** page of the Migration Wizard is displayed, as shown below. -- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/migration-wizard-welcome.png" alt-text="MySQL Workbench Migration Wizard welcome page"::: --3. Determine if you have an ODBC driver for MySQL Server installed by selecting **Open ODBC Administrator**. -- In our case, on the **Drivers** tab, you'll notice that there are already two MySQL Server ODBC drivers installed. -- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/obdc-administrator-page.png" alt-text="ODBC Data Source Administrator page"::: -- If a MySQL ODBC driver isn't installed, use the MySQL Installer you used to install MySQL Workbench to install the driver. For more information about MySQL ODBC driver installation, see the following resources: -- - [MySQL :: MySQL Connector/ODBC Developer Guide :: 4.1 Installing Connector/ODBC on Windows](https://dev.mysql.com/doc/connector-odbc/en/connector-odbc-installation-binary-windows.html) - - [ODBC Driver for MySQL: How to Install and Set up Connection (Step-by-step) – {coding}Sight (codingsight.com)](https://codingsight.com/install-and-configure-odbc-drivers-for-mysql/) --4. Close the **ODBC Data Source Administrator** dialog box, and then continue with the migration process. --## Configure source database server connection parameters --1. On the **Overview** page, select **Start Migration**. -- The **Source Selection** page appears. Use this page to provide information about the RDBMS you're migrating from and the parameters for the connection. --2. In the **Database System** field, select **MySQL**. -3. In the **Stored Connection** field, select one of the saved connection settings for that RDBMS. -- You can save connections by marking the checkbox at the bottom of the page and providing a name of your preference. --4. In the **Connection Method** field, select **Standard TCP/IP**. -5. In the **Hostname** field, specify the name of your source database server. -6. In the **Port** field, specify **3306**, and then enter the username and password for connecting to the server. -7. In the **Database** field, enter the name of the database you want to migrate if you know it; otherwise leave this field blank. -8. Select **Test Connection** to check the connection to your MySQL Server instance. -- If you've entered the correct parameters, a message appears indicating a successful connection attempt. -- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/source-connection-parameters.png" alt-text="Source database connection parameters page"::: --9. Select **Next**. --## Configure target database server connection parameters --1. On the **Target Selection** page, set the parameters to connect to your target MySQL Server instance using a process similar to that for setting up the connection to the source server. -2. To verify a successful connection, select **Test Connection**. -- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/target-connection-parameters.png" alt-text="Target database connection parameters page"::: --3. Select **Next**.
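If you don't have the target server's fully qualified host name handy for the **Hostname** field, you can look it up with the Azure CLI (a sketch; `mydemoserver` and `myresourcegroup` are assumed names):

```azurecli
# Retrieve the fully qualified domain name of the target Azure Database for MySQL server
az mysql server show --resource-group myresourcegroup --name mydemoserver \
    --query fullyQualifiedDomainName --output tsv
```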
--## Select the schemas to migrate --The Migration Wizard communicates with your MySQL Server instance and fetches a list of schemas from the source server. --1. Select **Show logs** to view this operation. -- The screenshot below shows how the schemas are being retrieved from the source database server. -- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/retrieve-schemas.png" alt-text="Fetch schemas list page"::: --2. Select **Next** to verify that all the schemas were successfully fetched. -- The screenshot below shows the list of fetched schemas. -- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/schemas-selection.png" alt-text="Schemas selection page"::: -- You can only migrate schemas that appear in this list. --3. Select the schemas that you want to migrate, and then select **Next**. --## Object migration --Next, specify the object(s) that you want to migrate. --1. Select **Show Selection**, and then, under **Available Objects**, select and add the objects that you want to migrate. -- When you've added the objects, they'll appear under **Objects to Migrate**, as shown in the screenshot below. -- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/source-objects.png" alt-text="Source objects selection page"::: -- In this scenario, we've selected all table objects. --2. Select **Next**. --## Edit data --In this section, you have the option of editing the objects that you want to migrate. --1. On the **Manual Editing** page, notice the **View** drop-down menu in the top-right corner. -- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/manual-editing.png" alt-text="Manual Editing selection page"::: -- The **View** drop-down box includes three items: -- - **All Objects** – Displays all objects. With this option, you can manually edit the generated SQL before applying it to the target database server. To do this, select the object, and then select **Show Code and Messages**. You can see (and edit!) the generated MySQL code that corresponds to the selected object. - - **Migration problems** – Displays any problems that occurred during the migration, which you can review and verify. - - **Column Mapping** – Displays column mapping information. You can use this view to edit the name and column mapping of the target object. --2. Select **Next**. --## Create the target database --1. Select the **Create schema in target RDBMS** check box. -- You can also choose to keep already existing schemas, so they won't be modified or updated. -- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/create-target-database.png" alt-text="Target Creation Options page"::: -- In this article, we've chosen to create the schema in the target RDBMS, but you can also select the **Create a SQL script file** check box to save the file on your local computer or for other purposes. --2. Select **Next**. --## Run the MySQL script to create the database objects --Since we've elected to create the schema in the target RDBMS, the migrated SQL script is executed on the target MySQL server. You can view its progress as shown in the screenshot below: ---1. After the creation of the schemas and their objects completes, select **Next**. -- On the **Create Target Results** page, you're presented with a list of the objects created and notification of any errors that were encountered while creating them, as shown in the following screenshot.
-- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/create-target-results.png" alt-text="Create Target Results page"::: --2. Review the detail on this page to verify that everything completed as intended. -- For this article, we don't have any errors. If there's no need to address any error messages, you can edit the migration script. --3. In the **Object** box, select the object that you want to edit. -4. Under **SQL CREATE script for selected object**, modify your SQL script, and then select **Apply** to save the changes. -5. Select **Recreate Objects** to run the script including your changes. -- If the script fails, you may need to edit the generated script. You can then manually fix the SQL script and run everything again. In this article, we're not changing anything, so we'll leave the script as it is. --6. Select **Next**. --## Transfer data --This part of the process moves data from the source MySQL Server database instance into your newly created target MySQL database instance. Use the **Data Transfer Setup** page to configure this process. ---This page provides options for setting up the data transfer. For the purposes of this article, we'll accept the default values. --1. To begin the actual process of transferring data, select **Next**. -- The progress of the data transfer process appears as shown in the following screenshot. -- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/bulk-data-transfer.png" alt-text="Bulk Data Transfer page"::: -- > [!NOTE] - > The duration of the data transfer process is directly related to the size of the database you're migrating. The larger the source database, the longer the process will take, potentially up to a few hours for larger databases. --2. After the transfer completes, select **Next**. -- The **Migration Report** page appears, providing a report summarizing the whole process, as shown in the screenshot below: -- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/migration-report.png" alt-text="Migration Progress Report page"::: --3. Select **Finish** to close the Migration Wizard. -- The migration is now successfully completed. --## Verify consistency of the migrated schemas and tables --1. Next, log in to your target MySQL database instance to verify that the migrated schemas and tables are consistent with your MySQL source database. -- In our case, you can see that all schemas (sakila, moda, items, customer, clothes, world, and world_x) from the Amazon RDS for MySQL: **MyjolieDB** database have been successfully migrated to the Azure Database for MySQL: **azmysql** instance. --2. To verify the table and row counts, run the following query on both instances: -- `SELECT COUNT(*) FROM sakila.actor;` -- You can see from the screenshot below that the row count for Amazon RDS MySQL is 200, which matches the Azure Database for MySQL instance. -- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/table-row-size-source.png" alt-text="Table and Row size source database"::: -- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/table-row-size-target.png" alt-text="Table and Row size target database"::: -- While you can run the above query on every single schema and table, that will be quite a bit of work if you're dealing with hundreds of thousands or even millions of tables. You can use the queries below to verify the schema (database) and table size instead.
--3. To check the database size, run the following query: -- ```sql - SELECT table_schema AS "Database", - ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS "Size (MB)" - FROM information_schema.TABLES - GROUP BY table_schema; - ``` --4. To check the table size, run the following query: -- ```sql - SELECT table_name AS "Table", - ROUND(((data_length + index_length) / 1024 / 1024), 2) AS "Size (MB)" - FROM information_schema.TABLES - WHERE table_schema = "database_name" - ORDER BY (data_length + index_length) DESC; - ``` -- You can see from the screenshots below that the schema (database) size on the source Amazon RDS MySQL instance is the same as that of the target Azure Database for MySQL instance. -- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/database-size-source.png" alt-text="Database size source database"::: -- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/database-size-target.png" alt-text="Database size target database"::: -- Since the schema (database) sizes are the same in both instances, it's not really necessary to check individual table sizes. In any case, you can always use the above query to check your table sizes, as necessary. -- You've now confirmed that your migration completed successfully. --## Next steps --- For more information about migrating databases to Azure Database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).-- View the video [Easily migrate MySQL/PostgreSQL apps to Azure managed service](https://medius.studios.ms/Embed/Video/THR2201?sid=THR2201), which contains a demo showing how to migrate MySQL apps to Azure Database for MySQL. |
mysql | How To Move Regions Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-move-regions-portal.md | - Title: Move Azure regions - Azure portal - Azure Database for MySQL -description: Move an Azure Database for MySQL server from one Azure region to another using a read replica and the Azure portal. ------ Previously updated : 06/20/2022-#Customer intent: As an Azure service administrator, I want to move my service resources to another Azure region. ---# Move an Azure Database for MySQL server to another region by using the Azure portal ----There are various scenarios for moving an existing Azure Database for MySQL server from one region to another. For example, you might want to move a production server to another region as part of your disaster recovery planning. --You can use an Azure Database for MySQL [cross-region read replica](concepts-read-replicas.md#cross-region-replication) to complete the move to another region. To do so, first create a read replica in the target region. Next, stop replication to the read replica server to make it a standalone server that accepts both read and write traffic. --> [!NOTE] -> This article focuses on moving your server to a different region. If you want to move your server to a different resource group or subscription, refer to the [move](../../azure-resource-manager/management/move-resource-group-and-subscription.md) article. --## Prerequisites --- The read replica feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers.--- Make sure that your Azure Database for MySQL source server is in the Azure region that you want to move from.--## Prepare to move --To create a cross-region read replica server in the target region using the Azure portal, use the following steps: --1. Sign in to the [Azure portal](https://portal.azure.com). -1. Select the existing Azure Database for MySQL server that you want to use as the source server. This action opens the **Overview** page. -1. Select **Replication** from the menu, under **SETTINGS**. -1. Select **Add Replica**. -1. Enter a name for the replica server. -1. Select the location for the replica server. The default location is the same as the source server's. Verify that you've selected the target location where you want the replica to be deployed. -1. Select **OK** to confirm creation of the replica. During replica creation, data is copied from the source server to the replica. Creation may take several minutes or more, in proportion to the size of the source server. -->[!NOTE] -> When you create a replica, it doesn't inherit the VNet service endpoints of the source server. These rules must be set up independently for the replica. --## Move --> [!IMPORTANT] -> The standalone server can't be made into a replica again. -> Before you stop replication on a read replica, ensure the replica has all the data that you require. --Stopping replication to the replica server causes it to become a standalone server. To stop replication to the replica from the Azure portal, use the following steps: --1. Once the replica has been created, locate and select your Azure Database for MySQL source server. -1. Select **Replication** from the menu, under **SETTINGS**. -1. Select the replica server. -1. Select **Stop replication**. -1. Confirm you want to stop replication by clicking **OK**.
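If you prefer to script this step, stopping replication has an Azure CLI equivalent (a sketch; `mydemoreplicaserver` and `myresourcegroup` are assumed names):

```azurecli-interactive
# Stop replication, making the replica a standalone read-write server (irreversible)
az mysql server replica stop --name mydemoreplicaserver --resource-group myresourcegroup
```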
--## Clean up source server --You may want to delete the source Azure Database for MySQL server. To do so, use the following steps: --1. Once the replica has been created, locate and select your Azure Database for MySQL source server. -1. In the **Overview** window, select **Delete**. -1. Type the name of the source server to confirm that you want to delete it. -1. Select **Delete**. --## Next steps --In this tutorial, you moved an Azure Database for MySQL server from one region to another by using the Azure portal and then cleaned up the unneeded source resources. --- Learn more about [read replicas](concepts-read-replicas.md)-- Learn more about [managing read replicas in the Azure portal](how-to-read-replicas-portal.md)-- Learn more about [business continuity](concepts-business-continuity.md) options |
mysql | How To Read Replicas Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-read-replicas-cli.md | - Title: Manage read replicas - Azure CLI, REST API - Azure Database for MySQL -description: Learn how to set up and manage read replicas in Azure Database for MySQL using the Azure CLI or REST API. ----- Previously updated : 06/20/2022----# How to create and manage read replicas in Azure Database for MySQL using the Azure CLI and REST API ----In this article, you will learn how to create and manage read replicas in the Azure Database for MySQL service using the Azure CLI and REST API. To learn more about read replicas, see the [overview](concepts-read-replicas.md). --## Azure CLI -You can create and manage read replicas using the Azure CLI. --### Prerequisites --- [Install Azure CLI 2.0](/cli/azure/install-azure-cli)-- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md) that will be used as the source server. --> [!IMPORTANT] -> The read replica feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers. --### Create a read replica --> [!IMPORTANT] -> If your source server has no existing replica servers, the source server might need a restart to prepare itself for replication, depending on the storage used (v1/v2). Plan for this restart and perform the operation during off-peak hours. See [Source Server restart](./concepts-read-replicas.md#source-server-restart) for more details. -> ->If GTID is enabled on a primary server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID-based replication. To learn more, see [Global transaction identifier (GTID)](concepts-read-replicas.md#global-transaction-identifier-gtid) --A read replica server can be created using the following command: --```azurecli-interactive -az mysql server replica create --name mydemoreplicaserver --source-server mydemoserver --resource-group myresourcegroup -``` --The `az mysql server replica create` command requires the following parameters: --| Setting | Example value | Description | -| | | | -| resource-group | myresourcegroup | The resource group where the replica server is created. | -| name | mydemoreplicaserver | The name of the new replica server that is created. | -| source-server | mydemoserver | The name or ID of the existing source server to replicate from. | --To create a cross-region read replica, use the `--location` parameter. The CLI example below creates the replica in West US. --```azurecli-interactive -az mysql server replica create --name mydemoreplicaserver --source-server mydemoserver --resource-group myresourcegroup --location westus -``` --> [!NOTE] -> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md). --> [!NOTE] -> * The `az mysql server replica create` command has a `--sku-name` argument, which allows you to specify the SKU (`{pricing_tier}_{compute generation}_{vCores}`) while you create a replica using the Azure CLI. <br /> -> * The primary server and read replica should be on the same pricing tier (General Purpose or Memory Optimized). <br /> -> * The replica server configuration can also be changed after it has been created. It's recommended to keep the replica server's configuration at equal or greater values than the source to ensure the replica can keep up with the master.
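For example, to pick a specific SKU for the replica at creation time (a sketch; `GP_Gen5_8` is an assumed target SKU):

```azurecli-interactive
# Create the replica with an explicit SKU instead of inheriting the source server's
az mysql server replica create --name mydemoreplicaserver --source-server mydemoserver \
    --resource-group myresourcegroup --sku-name GP_Gen5_8
```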
---### List replicas for a source server --To view all replicas for a given source server, run the following command: --```azurecli-interactive -az mysql server replica list --server-name mydemoserver --resource-group myresourcegroup -``` --The `az mysql server replica list` command requires the following parameters: --| Setting | Example value | Description | -| | | | -| resource-group | myresourcegroup | The resource group where the source server exists. | -| server-name | mydemoserver | The name or ID of the source server. | --### Stop replication to a replica server --> [!IMPORTANT] -> Stopping replication to a server is irreversible. Once replication has stopped between a source and replica, it cannot be undone. The replica server then becomes a standalone server and now supports both reads and writes. This server cannot be made into a replica again. --Replication to a read replica server can be stopped using the following command: --```azurecli-interactive -az mysql server replica stop --name mydemoreplicaserver --resource-group myresourcegroup -``` --The `az mysql server replica stop` command requires the following parameters: --| Setting | Example value | Description | -| | | | -| resource-group | myresourcegroup | The resource group where the replica server exists. | -| name | mydemoreplicaserver | The name of the replica server to stop replication on. | --### Delete a replica server --Deleting a read replica server can be done by running the **[az mysql server delete](/cli/azure/mysql/server)** command. --```azurecli-interactive -az mysql server delete --resource-group myresourcegroup --name mydemoreplicaserver -``` --### Delete a source server --> [!IMPORTANT] -> Deleting a source server stops replication to all replica servers and deletes the source server itself. Replica servers become standalone servers that now support both reads and writes. --To delete a source server, you can run the **[az mysql server delete](/cli/azure/mysql/server)** command. --```azurecli-interactive -az mysql server delete --resource-group myresourcegroup --name mydemoserver -``` ---## REST API -You can create and manage read replicas using the [Azure REST API](/rest/api/azure/). --### Create a read replica -You can create a read replica by using the [create API](/rest/api/mysql/flexibleserver/servers/create): --```http -PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{replicaName}?api-version=2017-12-01 -``` --```json -{ - "location": "southeastasia", - "properties": { - "createMode": "Replica", - "sourceServerId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{masterServerName}" - } -} -``` --> [!NOTE] -> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md). --A replica is created by using the same compute and storage settings as the master. After a replica is created, several settings can be changed independently from the source server: compute generation, vCores, storage, and backup retention period. The pricing tier can also be changed independently, except to or from the Basic tier.
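With the Azure CLI, for example, scaling only the replica's compute might look like the following (a sketch; the SKU value is an assumption):

```azurecli-interactive
# Scale just the replica; the source server is unaffected
az mysql server update --resource-group myresourcegroup --name mydemoreplicaserver --sku-name GP_Gen5_8
```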
 --- > [!IMPORTANT] -> Before a source server setting is updated to a new value, update the replica setting to an equal or greater value. This action helps the replica keep up with any changes made to the master. --### List replicas -You can view the list of replicas of a source server using the [replica list API](/rest/api/mysql/flexibleserver/replicas/list-by-server): --```http -GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{masterServerName}/Replicas?api-version=2017-12-01 -``` --### Stop replication to a replica server -You can stop replication between a source server and a read replica by using the [update API](/rest/api/mysql/flexibleserver/servers/update). --After you stop replication between a source server and a read replica, it can't be undone. The read replica becomes a standalone server that supports both reads and writes. The standalone server can't be made into a replica again. --```http -PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{masterServerName}?api-version=2017-12-01 -``` --```json -{ - "properties": { - "replicationRole":"None" - } -} -``` --### Delete a source or replica server -To delete a source or replica server, you use the [delete API](/rest/api/mysql/flexibleserver/servers/delete): --When you delete a source server, replication to all read replicas is stopped. The read replicas become standalone servers that now support both reads and writes. --```http -DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{serverName}?api-version=2017-12-01 -``` --### Known issue --Servers in the General Purpose and Memory Optimized tiers use one of two generations of storage: general purpose storage v1 (supports up to 4 TB) and general purpose storage v2 (supports up to 16 TB). -The source server and the replica server must have the same storage type. Because [general purpose storage v2](./concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage) isn't available in all regions, make sure you choose a supported replica region when you specify the location with the CLI or REST API for read replica creation. To identify the storage type of your source server, see [How can I determine which storage type my server is running on](./concepts-pricing-tiers.md#how-can-i-determine-which-storage-type-my-server-is-running-on). --If you choose a region where you cannot create a read replica for your source server, the deployment keeps running, as shown in the figure below, and then times out with the error *“The resource provision operation did not complete within the allowed timeout period.”* --[ :::image type="content" source="media/how-to-read-replicas-cli/replica-cli-known-issue.png" alt-text="Read replica CLI error.":::](media/how-to-read-replicas-cli/replica-cli-known-issue.png#lightbox) --## Next steps --- Learn more about [read replicas](concepts-read-replicas.md) |
mysql | How To Read Replicas Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-read-replicas-portal.md | - Title: Manage read replicas - Azure portal - Azure Database for MySQL -description: Learn how to set up and manage read replicas in Azure Database for MySQL using the Azure portal. ----- Previously updated : 06/20/2022---# How to create and manage read replicas in Azure Database for MySQL using the Azure portal ----In this article, you will learn how to create and manage read replicas in the Azure Database for MySQL service using the Azure portal. --## Prerequisites --- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md) that will be used as the source server.--> [!IMPORTANT] -> The read replica feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers. --## Create a read replica --> [!IMPORTANT] -> If your source server has no existing replica servers, the source server might need a restart to prepare itself for replication, depending on the storage used (v1/v2). Plan for this restart and perform the operation during off-peak hours. See [Source Server restart](./concepts-read-replicas.md#source-server-restart) for more details. -> ->If GTID is enabled on a primary server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID-based replication. To learn more, see [Global transaction identifier (GTID)](concepts-read-replicas.md#global-transaction-identifier-gtid) --A read replica server can be created using the following steps: --1. Sign in to the [Azure portal](https://portal.azure.com). --2. Select the existing Azure Database for MySQL server that you want to use as the master. This action opens the **Overview** page. --3. Select **Replication** from the menu, under **SETTINGS**. --4. Select **Add Replica**. -- :::image type="content" source="./media/how-to-read-replica-portal/add-replica-1.png" alt-text="Azure Database for MySQL - Replication"::: --5. Enter a name for the replica server. -- :::image type="content" source="./media/how-to-read-replica-portal/replica-name.png" alt-text="Azure Database for MySQL - Replica name"::: --6. Select the location for the replica server. The default location is the same as the source server's. -- :::image type="content" source="./media/how-to-read-replica-portal/replica-location.png" alt-text="Azure Database for MySQL - Replica location"::: -- > [!NOTE] - > To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md). --7. Select **OK** to confirm creation of the replica. --> [!NOTE] -> Read replicas are created with the same server configuration as the master. The replica server configuration can be changed after it has been created. The replica server is always created in the same resource group and same subscription as the source server. If you want the replica server in a different resource group or subscription, you can [move the replica server](../../azure-resource-manager/management/move-resource-group-and-subscription.md) after creation, as sketched below. It's recommended to keep the replica server's configuration at equal or greater values than the source to ensure the replica can keep up with the master.
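Moving the replica to another resource group can also be scripted. A minimal Azure CLI sketch, where the destination group `mynewresourcegroup` is an assumed name:

```azurecli
# Move the replica resource to a different resource group
az resource move --destination-group mynewresourcegroup \
    --ids $(az mysql server show --resource-group myresourcegroup --name mydemoreplicaserver --query id --output tsv)
```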
--Once the replica server has been created, it can be viewed from the **Replication** blade. -- :::image type="content" source="./media/how-to-read-replica-portal/list-replica.png" alt-text="Azure Database for MySQL - List replicas"::: --## Stop replication to a replica server --> [!IMPORTANT] -> Stopping replication to a server is irreversible. Once replication has stopped between a source and replica, it cannot be undone. The replica server then becomes a standalone server and now supports both reads and writes. This server cannot be made into a replica again. --To stop replication between a source and a replica server from the Azure portal, use the following steps: --1. In the Azure portal, select your source Azure Database for MySQL server. --2. Select **Replication** from the menu, under **SETTINGS**. --3. Select the replica server you wish to stop replication for. -- :::image type="content" source="./media/how-to-read-replica-portal/stop-replication-select.png" alt-text="Azure Database for MySQL - Stop replication select server"::: --4. Select **Stop replication**. -- :::image type="content" source="./media/how-to-read-replica-portal/stop-replication.png" alt-text="Azure Database for MySQL - Stop replication"::: --5. Confirm you want to stop replication by clicking **OK**. -- :::image type="content" source="./media/how-to-read-replica-portal/stop-replication-confirm.png" alt-text="Azure Database for MySQL - Stop replication confirm"::: --## Delete a replica server --To delete a read replica server from the Azure portal, use the following steps: --1. In the Azure portal, select your source Azure Database for MySQL server. --2. Select **Replication** from the menu, under **SETTINGS**. --3. Select the replica server you wish to delete. -- :::image type="content" source="./media/how-to-read-replica-portal/delete-replica-select.png" alt-text="Azure Database for MySQL - Delete replica select server"::: --4. Select **Delete replica**. -- :::image type="content" source="./media/how-to-read-replica-portal/delete-replica.png" alt-text="Azure Database for MySQL - Delete replica"::: --5. Type the name of the replica and click **Delete** to confirm deletion of the replica. -- :::image type="content" source="./media/how-to-read-replica-portal/delete-replica-confirm.png" alt-text="Azure Database for MySQL - Delete replica confirm"::: --## Delete a source server --> [!IMPORTANT] -> Deleting a source server stops replication to all replica servers and deletes the source server itself. Replica servers become standalone servers that now support both reads and writes. --To delete a source server from the Azure portal, use the following steps: --1. In the Azure portal, select your source Azure Database for MySQL server. --2. From the **Overview** page, select **Delete**. -- :::image type="content" source="./media/how-to-read-replica-portal/delete-master-overview.png" alt-text="Azure Database for MySQL - Delete master"::: --3. Type the name of the source server and click **Delete** to confirm deletion of the source server. -- :::image type="content" source="./media/how-to-read-replica-portal/delete-master-confirm.png" alt-text="Azure Database for MySQL - Delete master confirm"::: --## Monitor replication --1. In the [Azure portal](https://portal.azure.com), select the replica Azure Database for MySQL server you want to monitor. --2. Under the **Monitoring** section of the sidebar, select **Metrics**. --3. Select **Replication lag in seconds** from the dropdown list of available metrics.
 -- :::image type="content" source="./media/how-to-read-replica-portal/monitor-select-replication-lag-1.png" alt-text="Select Replication lag"::: --4. Select the time range you wish to view. The image below selects a 30-minute time range. -- :::image type="content" source="./media/how-to-read-replica-portal/monitor-replication-lag-time-range-1.png" alt-text="Select time range"::: --5. View the replication lag for the selected time range. The image below displays the last 30 minutes. -- :::image type="content" source="./media/how-to-read-replica-portal/monitor-replication-lag-time-range-thirty-mins-1.png" alt-text="Select time range 30 minutes"::: --## Next steps --- Learn more about [read replicas](concepts-read-replicas.md) |
mysql | How To Read Replicas Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-read-replicas-powershell.md | - Title: Manage read replicas - Azure PowerShell - Azure Database for MySQL -description: Learn how to set up and manage read replicas in Azure Database for MySQL using PowerShell. ----- Previously updated : 06/20/2022----# How to create and manage read replicas in Azure Database for MySQL using PowerShell ----In this article, you learn how to create and manage read replicas in the Azure Database for MySQL -service using PowerShell. To learn more about read replicas, see the -[overview](concepts-read-replicas.md). --## Azure PowerShell --You can create and manage read replicas using PowerShell. --## Prerequisites --To complete this how-to guide, you need: --- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed locally, or- [Azure Cloud Shell](https://shell.azure.com/) in the browser -- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)--> [!IMPORTANT] -> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az -> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`. -> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az -> PowerShell module releases and available natively from within Azure Cloud Shell. --If you choose to use PowerShell locally, connect to your Azure account using the -[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. ---> [!IMPORTANT] -> The read replica feature is only available for Azure Database for MySQL servers in the General -> Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing -> tiers. -> ->If GTID is enabled on a primary server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID-based replication. To learn more, see [Global transaction identifier (GTID)](concepts-read-replicas.md#global-transaction-identifier-gtid) --### Create a read replica --> [!IMPORTANT] -> If your source server has no existing replica servers, the source server might need a restart to prepare itself for replication, depending on the storage used (v1/v2). Plan for this restart and perform the operation during off-peak hours. See [Source Server restart](./concepts-read-replicas.md#source-server-restart) for more details. --A read replica server can be created using the following command: --```azurepowershell-interactive -Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup | - New-AzMySqlReplica -Name mydemoreplicaserver -ResourceGroupName myresourcegroup -``` --The `New-AzMySqlReplica` command requires the following parameters: --| Setting | Example value | Description | -| | | | -| ResourceGroupName | myresourcegroup | The resource group where the replica server is created. | -| Name | mydemoreplicaserver | The name of the new replica server that is created. | --To create a cross-region read replica, use the **Location** parameter. The following example creates -a replica in the **West US** region.
--```azurepowershell-interactive -Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup | - New-AzMySqlReplica -Name mydemoreplicaserver -ResourceGroupName myresourcegroup -Location westus -``` --> [!NOTE] -> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md). --By default, read replicas are created with the same server configuration as the source unless the -**Sku** parameter is specified. --> [!NOTE] -> It's recommended to keep the replica server's configuration at equal or greater -> values than the source to ensure the replica can keep up with the master. --### List replicas for a source server --To view all replicas for a given source server, run the following command: --```azurepowershell-interactive -Get-AzMySqlReplica -ResourceGroupName myresourcegroup -ServerName mydemoserver -``` --The `Get-AzMySqlReplica` command requires the following parameters: --| Setting | Example value | Description | -| | | | -| ResourceGroupName | myresourcegroup | The resource group where the source server exists. | -| ServerName | mydemoserver | The name or ID of the source server. | --### Delete a replica server --Deleting a read replica server can be done by running the `Remove-AzMySqlServer` cmdlet. --```azurepowershell-interactive -Remove-AzMySqlServer -Name mydemoreplicaserver -ResourceGroupName myresourcegroup -``` --### Delete a source server --> [!IMPORTANT] -> Deleting a source server stops replication to all replica servers and deletes the source server -> itself. Replica servers become standalone servers that now support both reads and writes. --To delete a source server, you can run the `Remove-AzMySqlServer` cmdlet. --```azurepowershell-interactive -Remove-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -``` --### Known Issue --Servers in the General Purpose and Memory Optimized tiers use one of two generations of storage: general purpose storage v1 (supports up to 4 TB) and general purpose storage v2 (supports up to 16 TB). -The source server and the replica server must have the same storage type. Because [general purpose storage v2](./concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage) isn't available in all regions, make sure you choose a supported replica region when you specify the location with PowerShell for read replica creation. To identify the storage type of your source server, see [How can I determine which storage type my server is running on](./concepts-pricing-tiers.md#how-can-i-determine-which-storage-type-my-server-is-running-on). --If you choose a region where you cannot create a read replica for your source server, the deployment keeps running, as shown in the figure below, and then times out with the error *“The resource provision operation did not complete within the allowed timeout period.”* --[ :::image type="content" source="media/how-to-read-replicas-powershell/replica-powershell-known-issue.png" alt-text="Read replica PowerShell error":::](media/how-to-read-replicas-powershell/replica-powershell-known-issue.png#lightbox) ---## Next steps --> [!div class="nextstepaction"] -> [Restart Azure Database for MySQL server using PowerShell](how-to-restart-server-powershell.md) |
mysql | How To Redirection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-redirection.md | - Title: Connect with redirection - Azure Database for MySQL -description: This article describes how you can configure your application to connect to Azure Database for MySQL with redirection. --- Previously updated : 05/03/2023-------# Connect to Azure Database for MySQL with redirection ----This article explains how to connect an application to your Azure Database for MySQL server with redirection mode. Redirection reduces network latency between client applications and MySQL servers by allowing applications to connect directly to backend server nodes. --## Before you begin --Sign in to the [Azure portal](https://portal.azure.com). Create an Azure Database for MySQL server with engine version 5.6, 5.7, or 8.0. --For details, refer to how to create an Azure Database for MySQL server using the [Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md) or [Azure CLI](quickstart-create-mysql-server-database-using-azure-cli.md). --> [!IMPORTANT] -> Redirection is currently not supported with [Private Link for Azure Database for MySQL](concepts-data-access-security-private-link.md). --## Enable redirection --On your Azure Database for MySQL server, configure the `redirect_enabled` parameter to `ON` to allow connections with redirection mode. To update this server parameter, use the [Azure portal](how-to-server-parameters.md) or [Azure CLI](how-to-configure-server-parameters-using-cli.md). --## PHP --Support for redirection in PHP applications is available through the [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension, developed by Microsoft. --The mysqlnd_azure extension is available to add to PHP applications through PECL, and it's highly recommended to install and configure the extension through the officially published [PECL package](https://pecl.php.net/package/mysqlnd_azure). --> [!IMPORTANT] -> Support for redirection in the PHP [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension is currently in preview. --### Redirection logic --> [!IMPORTANT] -> Redirection logic and behavior were updated beginning with version 1.1.0, and **it's recommended to use version 1.1.0+**. --The redirection behavior is determined by the value of `mysqlnd_azure.enableRedirect`. The table below outlines the behavior of redirection based on the value of this parameter beginning in **version 1.1.0+**. --If you're using an older version of the mysqlnd_azure extension (version 1.0.0-1.0.3), the redirection behavior is determined by the value of `mysqlnd_azure.enabled`. The valid values are `off` (acts similarly to the behavior outlined in the table below) and `on` (acts like `preferred` in the table below). --| **mysqlnd_azure.enableRedirect value** | **Behavior** | -| | | -| `off` or `0` | Redirection won't be used. | -| `on` or `1` | - If the connection doesn't use SSL on the driver side, no connection is made. The following error is returned: *"mysqlnd_azure.enableRedirect is on, but SSL option isn't set in the connection string. 
Redirection is only possible with SSL."*<br />- If SSL is used on the driver side, but the redirection isn't supported on the server, the first connection is aborted, and the following error is returned: *"Connection aborted because redirection isn't enabled on the MySQL server or the network package doesn't meet redirection protocol."*<br />- If the MySQL server supports redirection, but the redirected connection failed for any reason, the first proxy connection is also aborted, and the error of the redirected connection is returned. | -| `preferred` or `2`<br />(default value) | - mysqlnd_azure uses redirection if possible.<br />- If the connection doesn't use SSL on the driver side, the server doesn't support redirection, or the redirected connection fails to connect for any nonfatal reason while the proxy connection is still a valid one, it falls back to the first proxy connection. | --For a successful connection to Azure Database for MySQL single server using `mysqlnd_azure.enableRedirect`, you must follow the mandatory steps to combine your root certificates in line with the compliance requirements. For more details, see [Do I need to make any changes on my client to maintain connectivity](./concepts-certificate-rotation.md#do-i-need-to-make-any-changes-on-my-client-to-maintain-connectivity). --The subsequent sections of the document outline how to install the `mysqlnd_azure` extension using PECL and set the value of this parameter. --### [Ubuntu Linux](#tab/ubuntu) --**Prerequisites** --- PHP versions 7.2.15+ and 7.3.2+-- PHP PEAR-- php-mysql-- Azure Database for MySQL server--1. Install [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) with [PECL](https://pecl.php.net/package/mysqlnd_azure). It's recommended to use version 1.1.0+. -- ```bash - sudo pecl install mysqlnd_azure - ``` --1. Locate the extension directory (`extension_dir`) by running the following command: -- ```bash - php -i | grep "extension_dir" - ``` --1. Change directories to the returned folder and ensure `mysqlnd_azure.so` is located in this folder. --1. Locate the folder for .ini files by running the following command: -- ```bash - php -i | grep "dir for additional .ini files" - ``` --1. Change directories to this returned folder. --1. Create a new .ini file for `mysqlnd_azure`. Make sure the name sorts alphabetically after that of mysqlnd, since the modules are loaded according to the name order of the .ini files. For example, if the `mysqlnd` .ini file is named `10-mysqlnd.ini`, name the mysqlnd_azure .ini file `20-mysqlnd-azure.ini`. --1. Within the new .ini file, add the following lines to enable redirection. -- ```bash - extension=mysqlnd_azure - mysqlnd_azure.enableRedirect = on/off/preferred - ``` --### [Windows](#tab/windows) --**Prerequisites** --- PHP versions 7.2.15+ and 7.3.2+-- php-mysql-- Azure Database for MySQL server--1. Determine if you're running an x64 or x86 version of PHP by running the following command: -- ```cmd - php -i | findstr "Thread" - ``` --1. Download the corresponding x64 or x86 version of the [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) DLL from [PECL](https://pecl.php.net/package/mysqlnd_azure) that matches your version of PHP. It's recommended to use version 1.1.0+. --1. Extract the zip file and find the DLL named `php_mysqlnd_azure.dll`. --1. Locate the extension directory (`extension_dir`) by running the following command: -- ```cmd - php -i | find "extension_dir" - ``` --1. Copy the `php_mysqlnd_azure.dll` file into the directory returned in step 4. --1. 
Locate the PHP folder containing the `php.ini` file using the following command: -- ```cmd - php -i | find "Loaded Configuration File" - ``` --1. Modify the `php.ini` file and add the following extra lines to enable redirection. -- Under the Dynamic Extensions section: -- ```cmd - extension=mysqlnd_azure - ``` -- Under the Module Settings section: -- ```cmd - [mysqlnd_azure] - mysqlnd_azure.enableRedirect = on/off/preferred - ``` ----### Confirm redirection --You can also confirm that redirection is configured with the following sample PHP code. Create a PHP file called `mysqlConnect.php` and paste in the following code. Update the server name, username, and password with your own. --```php -<?php -$host = '<yourservername>.mysql.database.azure.com'; -$username = '<yourusername>@<yourservername>'; -$password = '<yourpassword>'; -$db_name = 'testdb'; - echo "mysqlnd_azure.enableRedirect: ", ini_get("mysqlnd_azure.enableRedirect"), "\n"; - $db = mysqli_init(); - //The connection must be configured with SSL for the redirection test - $link = mysqli_real_connect ($db, $host, $username, $password, $db_name, 3306, NULL, MYSQLI_CLIENT_SSL); - if (!$link) { - die ('Connect error (' . mysqli_connect_errno() . '): ' . mysqli_connect_error() . "\n"); - } - else { - echo $db->host_info, "\n"; //if redirection succeeds, the host_info will differ from the hostname you used to connect - $res = $db->query('SHOW TABLES;'); //test query with the connection - print_r ($res); - $db->close(); - } -?> -``` --## Next steps --- For more information about connection strings, see [Connection Strings](how-to-connection-string.md). |
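Before running the PHP sample above, a quick sanity check that the extension is loaded for the CLI runtime (a hedged sketch; output varies by PHP build):

```bash
# Confirm the extension is loaded by the CLI PHP runtime
php -m | grep mysqlnd_azure

# Print the effective redirection setting
php -r 'echo ini_get("mysqlnd_azure.enableRedirect"), PHP_EOL;'
```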
mysql | How To Restart Server Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-cli.md | - Title: Restart server - Azure CLI - Azure Database for MySQL -description: This article describes how you can restart an Azure Database for MySQL server using the Azure CLI. ----- Previously updated : 06/20/2022----# Restart Azure Database for MySQL server using the Azure CLI ----This topic describes how you can restart an Azure Database for MySQL server. You may need to restart your server for maintenance reasons, which causes a short outage as the server performs the operation. --The server restart will be blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores. --The time required to complete a restart depends on the MySQL recovery process. To decrease the restart time, we recommend you minimize the amount of activity occurring on the server prior to the restart. --## Prerequisites --To complete this how-to guide: --- You need an [Azure Database for MySQL server](quickstart-create-server-up-azure-cli.md).- --- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.-->[!NOTE] ->If the user restarting the server is part of [custom role](../../role-based-access-control/custom-roles.md) the user should have write privilege on the server. --## Restart the server --Restart the server with the following command: --```azurecli-interactive -az mysql server restart --name mydemoserver --resource-group myresourcegroup -``` --## Next steps --Learn about [how to set parameters in Azure Database for MySQL](how-to-configure-server-parameters-using-cli.md) |
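As a follow-up to the restart article above, you can confirm the restart completed by querying the server's state, a minimal sketch assuming the sample names used there (the server reports `Ready` once it's back online):

```azurecli-interactive
# Check the server's state after the restart (expected: "Ready")
az mysql server show --name mydemoserver --resource-group myresourcegroup --query userVisibleState --output tsv
```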
mysql | How To Restart Server Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-portal.md | - Title: Restart server - Azure portal - Azure Database for MySQL -description: This article describes how you can restart an Azure Database for MySQL server using the Azure portal. ----- Previously updated : 06/20/2022---# Restart Azure Database for MySQL server using Azure portal ----This topic describes how you can restart an Azure Database for MySQL server. You may need to restart your server for maintenance reasons, which causes a short outage as the server performs the operation. --The server restart will be blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores. --The time required to complete a restart depends on the MySQL recovery process. To decrease the restart time, we recommend you minimize the amount of activity occurring on the server prior to the restart. --## Prerequisites -To complete this how-to guide, you need: -- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md)-->[!NOTE] ->If the user restarting the server is part of [custom role](../../role-based-access-control/custom-roles.md) the user should have write privilege on the server. ---## Perform server restart --The following steps restart the MySQL server: --1. In the Azure portal, select your Azure Database for MySQL server. --2. In the toolbar of the server's **Overview** page, click **Restart**. -- :::image type="content" source="./media/how-to-restart-server-portal/2-server.png" alt-text="Azure Database for MySQL - Overview - Restart button"::: --3. Click **Yes** to confirm restarting the server. -- :::image type="content" source="./media/how-to-restart-server-portal/3-restart-confirm.png" alt-text="Azure Database for MySQL - Restart confirm"::: --4. Observe that the server status changes to "Restarting". -- :::image type="content" source="./media/how-to-restart-server-portal/4-restarting-status.png" alt-text="Azure Database for MySQL - Restart status"::: --5. Confirm server restart is successful. -- :::image type="content" source="./media/how-to-restart-server-portal/5-restart-success.png" alt-text="Azure Database for MySQL - Restart success"::: --## Next steps --[Quickstart: Create Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md) |
mysql | How To Restart Server Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-powershell.md | - Title: Restart server - Azure PowerShell - Azure Database for MySQL -description: This article describes how you can restart an Azure Database for MySQL server using PowerShell. ----- Previously updated : 06/20/2022----# Restart Azure Database for MySQL server using PowerShell ----This topic describes how you can restart an Azure Database for MySQL server. You may need to restart -your server for maintenance reasons, which causes a short outage during the operation. --The server restart is blocked if the service is busy. For example, the service may be processing a -previously requested operation such as scaling vCores. --The amount of time required to complete a restart depends on the MySQL recovery process. To reduce -the restart time, we recommend you minimize the amount of activity occurring on the server before -the restart. --## Prerequisites --To complete this how-to guide, you need: --- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed locally or- [Azure Cloud Shell](https://shell.azure.com/) in the browser -- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)--> [!IMPORTANT] -> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az -> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`. -> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az -> PowerShell module releases and available natively from within Azure Cloud Shell. -->[!NOTE] ->If the user restarting the server is part of [custom role](../../role-based-access-control/custom-roles.md) the user should have write privilege on the server. --If you choose to use PowerShell locally, connect to your Azure account using the -[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. ---## Restart the server --Restart the server with the following command: --```azurepowershell-interactive -Restart-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -``` ---## Next steps --> [!div class="nextstepaction"] -> [Create an Azure Database for MySQL server using PowerShell](quickstart-create-mysql-server-database-using-azure-powershell.md) |
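The same check is possible from PowerShell, a hedged sketch; `UserVisibleState` is assumed to be the state property exposed on the object returned by `Get-AzMySqlServer`:

```azurepowershell-interactive
# Verify the server is back online after the restart
Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
    Select-Object Name, UserVisibleState
```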
mysql | How To Restore Dropped Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-dropped-server.md | - Title: Restore a deleted Azure Database for MySQL server -description: This article describes how to restore a deleted server in Azure Database for MySQL using the Azure portal. ----- Previously updated : 06/20/2022---# Restore a deleted Azure Database for MySQL server ----When a server is deleted, the database server backup can be retained for up to five days in the service. The database backup can be accessed and restored only from the Azure subscription where the server originally resided. Follow these recommended steps to recover a deleted MySQL server resource within five days of the server's deletion. These steps work only if the backup for the server is still available and hasn't been deleted from the system. --## Prerequisites -To restore a deleted Azure Database for MySQL server, you need the following: -- The name of the Azure subscription hosting the original server-- The location where the server was created--## Steps to restore --1. Go to the [Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_ActivityLog/ActivityLogBlade) from the Monitor blade in the Azure portal. --2. In the Activity Log, click **Add filter** as shown and set the following filters: -- - **Subscription** = Your subscription hosting the deleted server - - **Resource Type** = Azure Database for MySQL servers (Microsoft.DBforMySQL/servers) - - **Operation** = Delete MySQL Server (Microsoft.DBforMySQL/servers/delete) - - [![Activity log filtered for delete MySQL server operation](./media/how-to-restore-dropped-server/activity-log.png)](./media/how-to-restore-dropped-server/activity-log.png#lightbox) - - 3. Double-click the Delete MySQL Server event, select the JSON tab, and note the "resourceId" and "submissionTimestamp" attributes in the JSON output. The resourceId is in the following format: /subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/TargetResourceGroup/providers/Microsoft.DBforMySQL/servers/deletedserver. - - 4. Go to the [Create Server REST API page](/rest/api/mysql/singleserver/servers(2017-12-01)/create), click the "Try It" tab highlighted in green, and sign in with your Azure account. - - 5. Provide the resourceGroupName, serverName (the deleted server's name), and subscriptionId derived from the resourceId attribute captured in step 3; the api-version is pre-populated, as shown in the image. - - [![Create server using REST API](./media/how-to-restore-dropped-server/create-server-from-rest-api.png)](./media/how-to-restore-dropped-server/create-server-from-rest-api.png#lightbox) - - 6. Scroll down to the Request Body section and paste the following: - - ```json - { - "location": "Dropped Server Location", - "properties": - { - "restorePointInTime": "submissionTimestamp - 15 minutes", - "createMode": "PointInTimeRestore", - "sourceServerId": "resourceId" - } - } - ``` -7. Replace the following values in the above request body: - * "Dropped Server Location" with the Azure region where the deleted server was originally created - * "submissionTimestamp" and "resourceId" with the values captured in step 3 - * For "restorePointInTime", specify a value of "submissionTimestamp" minus **15 minutes** to ensure the command does not error out. - -8. If you see Response Code 201 or 202, the restore request was submitted successfully. --9. 
The server creation can take time depending on the database size and compute resources provisioned on the original server. The restore status can be monitored from Activity log by filtering for - - **Subscription** = Your Subscription - - **Resource Type** = Azure Database for MySQL servers (Microsoft.DBforMySQL/servers) - - **Operation** = Update MySQL Server Create --## Next steps -- If you are trying to restore a server within five days, and still receive an error after accurately following the steps discussed earlier, open a support incident for assistance. If you are trying to restore a deleted server after five days, an error is expected since the backup file cannot be found. Do not open a support ticket in this scenario. The support team cannot provide any assistance if the backup is deleted from the system. -- To prevent accidental deletion of servers, we highly recommend using [Resource Locks](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/preventing-the-disaster-of-accidental-deletion-for-your-mysql/ba-p/825222). |
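If you prefer the command line to the portal's Activity Log blade used in the article above, the same deletion event can be located with the Azure CLI. A hedged sketch; the five-day `--offset` window and the JMESPath filter are illustrative:

```azurecli-interactive
# Find delete events for MySQL servers in the last five days and
# print the attributes needed for the restore request body
az monitor activity-log list --offset 5d \
    --query "[?operationName.value=='Microsoft.DBforMySQL/servers/delete'].{resourceId:resourceId, submissionTimestamp:submissionTimestamp}"
```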
mysql | How To Restore Server Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-cli.md | - Title: Backup and restore - Azure CLI - Azure Database for MySQL -description: Learn how to backup and restore a server in Azure Database for MySQL by using the Azure CLI. ----- Previously updated : 06/20/2022----# How to back up and restore a server in Azure Database for MySQL using the Azure CLI ----Azure Database for MySQL servers are backed up periodically to enable Restore features. Using this feature you may restore the server and all its databases to an earlier point-in-time, on a new server. --## Prerequisites --To complete this how-to guide: --- You need an [Azure Database for MySQL server and database](quickstart-create-mysql-server-database-using-azure-cli.md).---- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--## Set backup configuration --You make the choice between configuring your server for either locally redundant backups or geographically redundant backups at server creation. --> [!NOTE] -> After a server is created, the kind of redundancy it has, geographically redundant vs locally redundant, can't be switched. -> --While creating a server via the `az mysql server create` command, the `--geo-redundant-backup` parameter decides your Backup Redundancy Option. If `Enabled`, geo redundant backups are taken. Or if `Disabled` locally redundant backups are taken. --The backup retention period is set by the parameter `--backup-retention`. --For more information about setting these values during create, see the [Azure Database for MySQL server CLI Quickstart](quickstart-create-mysql-server-database-using-azure-cli.md). --The backup retention period of a server can be changed as follows: --```azurecli-interactive -az mysql server update --name mydemoserver --resource-group myresourcegroup --backup-retention 10 -``` --The preceding example changes the backup retention period of mydemoserver to 10 days. --The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. Point-in-time restore is described further in the next section. --## Server point-in-time restore -You can restore the server to a previous point in time. The restored data is copied to a new server, and the existing server is left as is. For example, if a table is accidentally dropped at noon today, you can restore to the time just before noon. Then, you can retrieve the missing table and data from the restored copy of the server. --To restore the server, use the Azure CLI [az mysql server restore](/cli/azure/mysql/server#az-mysql-server-restore) command. --### Run the restore command --To restore the server, at the Azure CLI command prompt, enter the following command: --```azurecli-interactive -az mysql server restore --resource-group myresourcegroup --name mydemoserver-restored --restore-point-in-time 2018-03-13T13:59:00Z --source-server mydemoserver -``` --The `az mysql server restore` command requires the following parameters: --| Setting | Suggested value | Description  | -| | | | -| resource-group |  myresourcegroup |  The resource group where the source server exists.  | -| name | mydemoserver-restored | The name of the new server that is created by the restore command. | -| restore-point-in-time | 2018-03-13T13:59:00Z | Select a point in time to restore to. 
This date and time must be within the source server's backup retention period. Use the ISO8601 date and time format. For example, you can use your own local time zone, such as `2018-03-13T05:59:00-08:00`. You can also use the UTC Zulu format, for example, `2018-03-13T13:59:00Z`. | -| source-server | mydemoserver | The name or ID of the source server to restore from. | --When you restore a server to an earlier point in time, a new server is created. The original server and its databases from the specified point in time are copied to the new server. --The location and pricing tier values for the restored server remain the same as the original server. --After the restore process finishes, locate the new server and verify that the data is restored as expected. The new server has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's **Overview** page. --Additionally, after the restore operation finishes, two server parameters are reset to default values (and aren't copied over from the primary server): -* time_zone - This value is set to the DEFAULT value **SYSTEM** -* event_scheduler - The event_scheduler is set to **OFF** on the restored server --You need to copy over the values from the primary server and set them on the restored server by reconfiguring the [server parameters](how-to-server-parameters.md) --The new server created during a restore does not have the VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server. Firewall rules from the original server are restored. --## Geo restore -If you configured your server for geographically redundant backups, a new server can be created from the backup of that existing server. This new server can be created in any region where Azure Database for MySQL is available. --To create a server using a geo redundant backup, use the Azure CLI `az mysql server georestore` command. --> [!NOTE] -> When a server is first created, it may not be immediately available for geo restore. It may take a few hours for the necessary metadata to be populated. -> --To geo restore the server, at the Azure CLI command prompt, enter the following command: --```azurecli-interactive -az mysql server georestore --resource-group myresourcegroup --name mydemoserver-georestored --source-server mydemoserver --location eastus --sku-name GP_Gen5_8 -``` -This command creates a new server called *mydemoserver-georestored* in East US that will belong to *myresourcegroup*. It is a General Purpose, Gen 5 server with 8 vCores. 
The server is created from the geo-redundant backup of *mydemoserver*, which is also in the resource group *myresourcegroup* --If you want to create the new server in a different resource group from the existing server, then in the `--source-server` parameter you would qualify the server name as in the following example: --```azurecli-interactive -az mysql server georestore --resource-group newresourcegroup --name mydemoserver-georestored --source-server "/subscriptions/$<subscription ID>/resourceGroups/$<resource group ID>/providers/Microsoft.DBforMySQL/servers/mydemoserver" --location eastus --sku-name GP_Gen5_8 --``` --The `az mysql server georestore` command requires the following parameters: --| Setting | Suggested value | Description  | -| | | | -|resource-group| myresourcegroup | The name of the resource group the new server will belong to.| -|name | mydemoserver-georestored | The name of the new server. | -|source-server | mydemoserver | The name of the existing server whose geo redundant backups are used. | -|location | eastus | The location of the new server. | -|sku-name| GP_Gen5_8 | This parameter sets the pricing tier, compute generation, and number of vCores of the new server. GP_Gen5_8 maps to a General Purpose, Gen 5 server with 8 vCores.| --When creating a new server by a geo restore, it inherits the same storage size and pricing tier as the source server. These values cannot be changed during creation. After the new server is created, its storage size can be scaled up. --After the restore process finishes, locate the new server and verify that the data is restored as expected. The new server has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's **Overview** page. --The new server created during a restore does not have the VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server. Firewall rules from the original server are restored. --## Next steps -- Learn more about the service's [backups](concepts-backup.md)-- Learn about [replicas](concepts-read-replicas.md)-- Learn more about [business continuity](concepts-business-continuity.md) options |
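To verify the outcome of either restore path in the article above, a minimal sketch assuming the restored server name used there (`earliestRestoreDate` also shows how far back future restores can go):

```azurecli-interactive
az mysql server show --name mydemoserver-restored --resource-group myresourcegroup \
    --query "{name:name, state:userVisibleState, earliestRestoreDate:earliestRestoreDate}"
```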
mysql | How To Restore Server Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-portal.md | - Title: Backup and restore - Azure portal - Azure Database for MySQL -description: This article describes how to restore a server in Azure Database for MySQL using the Azure portal. ----- Previously updated : 06/20/2022---# How to backup and restore a server in Azure Database for MySQL using the Azure portal ----## Backup happens automatically -Azure Database for MySQL servers are backed up periodically to enable Restore features. Using this feature you may restore the server and all its databases to an earlier point-in-time, on a new server. --## Prerequisites -To complete this how-to guide, you need: -- An [Azure Database for MySQL server and database](quickstart-create-mysql-server-database-using-azure-portal.md)--## Set backup configuration --You make the choice between configuring your server for either locally redundant backups or geographically redundant backups at server creation, in the **Pricing Tier** window. --> [!NOTE] -> After a server is created, the kind of redundancy it has, geographically redundant vs locally redundant, can't be switched. -> --While creating a server through the Azure portal, the **Pricing Tier** window is where you select either **Locally Redundant** or **Geographically Redundant** backups for your server. This window is also where you select the **Backup Retention Period** - how long (in number of days) you want the server backups stored for. -- :::image type="content" source="./media/how-to-restore-server-portal/pricing-tier.png" alt-text="Pricing Tier - Choose Backup Redundancy"::: --For more information about setting these values during create, see the [Azure Database for MySQL server quickstart](quickstart-create-mysql-server-database-using-azure-portal.md). --The backup retention period can be changed on a server through the following steps: -1. Sign in to the [Azure portal](https://portal.azure.com). -2. Select your Azure Database for MySQL server. This action opens the **Overview** page. -3. Select **Pricing Tier** from the menu, under **SETTINGS**. Using the slider you can change the **Backup Retention Period** to your preference between 7 and 35 days. -In the screenshot below it has been increased to 34 days. --4. Click **OK** to confirm the change. --The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. Point-in-time restore is described further in the following section. --## Point-in-time restore -Azure Database for MySQL allows you to restore the server back to a point-in-time and into a new copy of the server. You can use this new server to recover your data, or have your client applications point to this new server. --For example, if a table was accidentally dropped at noon today, you could restore it to the time just before noon and retrieve the missing table and data from that new copy of the server. Point-in-time restore is at the server level, not at the database level. --The following steps restore the sample server to a point-in-time: -1. In the Azure portal, select your Azure Database for MySQL server. --2. In the toolbar of the server's **Overview** page, select **Restore**. -- :::image type="content" source="./media/how-to-restore-server-portal/2-server.png" alt-text="Azure Database for MySQL - Overview - Restore button"::: --3. 
Fill out the Restore form with the required information: -- :::image type="content" source="./media/how-to-restore-server-portal/3-restore.png" alt-text="Azure Database for MySQL - Restore information"::: - - **Restore point**: Select the point-in-time you want to restore to. - - **Target server**: Provide a name for the new server. - - **Location**: You cannot select the region. By default, it is the same as the source server. - - **Pricing tier**: You cannot change these parameters when doing a point-in-time restore. It is the same as the source server. --4. Click **OK** to restore the server to the selected point-in-time. --5. Once the restore finishes, locate the new server that is created to verify the data was restored as expected. --The new server created by point-in-time restore has the same server admin login name and password that was valid for the existing server at the chosen point-in-time. You can change the password from the new server's **Overview** page. --Additionally, after the restore operation finishes, two server parameters are reset to default values (and aren't copied over from the primary server): -* time_zone - This value is set to the DEFAULT value **SYSTEM** -* event_scheduler - The event_scheduler is set to **OFF** on the restored server --You need to copy over the values from the primary server and set them on the restored server by reconfiguring the [server parameters](how-to-server-parameters.md) --The new server created during a restore does not have the VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server. Firewall rules from the original server are restored. --## Geo restore -If you configured your server for geographically redundant backups, a new server can be created from the backup of that existing server. This new server can be created in any region where Azure Database for MySQL is available. --1. Select the **Create a resource** button (+) in the upper-left corner of the portal. Select **Databases** > **Azure Database for MySQL**. -- :::image type="content" source="./media/how-to-restore-server-portal/1-navigate-to-mysql.png" alt-text="Navigate to Azure Database for MySQL."::: - -2. Provide the subscription, resource group, and name of the new server. --3. Select **Backup** as the **Data source**. This action loads a dropdown that provides a list of servers that have geo-redundant backups enabled. - - :::image type="content" source="./media/how-to-restore-server-portal/3-geo-restore.png" alt-text="Select data source."::: - - > [!NOTE] - > When a server is first created, it may not be immediately available for geo restore. It may take a few hours for the necessary metadata to be populated. - > --4. Select the **Backup** dropdown. - - :::image type="content" source="./media/how-to-restore-server-portal/4-geo-restore-backup.png" alt-text="Select backup dropdown."::: --5. Select the source server to restore from. - - :::image type="content" source="./media/how-to-restore-server-portal/5-select-backup.png" alt-text="Select backup."::: --6. The server defaults to values for the number of **vCores**, **Backup Retention Period**, **Backup Redundancy Option**, **Engine version**, and **Admin credentials**. Select **Continue**. - - :::image type="content" source="./media/how-to-restore-server-portal/6-accept-backup.png" alt-text="Continue with backup."::: --7. Fill out the rest of the form with your preferences. You can select any **Location**. 
-- After selecting the location, you can select **Configure server** to update the **Compute Generation** (if available in the region you have chosen), number of **vCores**, **Backup Retention Period**, and **Backup Redundancy Option**. Changing **Pricing Tier** (Basic, General Purpose, or Memory Optimized) or **Storage** size during restore is not supported. -- :::image type="content" source="./media/how-to-restore-server-portal/7-create.png" alt-text="Fill form."::: --8. Select **Review + create** to review your selections. --9. Select **Create** to provision the server. This operation may take a few minutes. --The new server created by geo restore has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's **Overview** page. --The new server created during a restore does not have the VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server. Firewall rules from the original server are restored. --## Next steps -- Learn more about the service's [backups](concepts-backup.md)-- Learn about [replicas](concepts-read-replicas.md)-- Learn more about [business continuity](concepts-business-continuity.md) options |
mysql | How To Restore Server Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-powershell.md | - Title: Backup and restore - Azure PowerShell - Azure Database for MySQL -description: Learn how to back up and restore a server in Azure Database for MySQL by using Azure PowerShell. ----- Previously updated : 06/20/2022----# How to back up and restore an Azure Database for MySQL server using PowerShell ----Azure Database for MySQL servers are backed up periodically to enable restore features. Using this -feature, you can restore the server and all its databases to an earlier point-in-time, on a new -server. --## Prerequisites --To complete this how-to guide, you need: --- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed locally or- [Azure Cloud Shell](https://shell.azure.com/) in the browser -- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)--> [!IMPORTANT] -> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az -> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`. -> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az -> PowerShell module releases and available natively from within Azure Cloud Shell. --If you choose to use PowerShell locally, connect to your Azure account using the -[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. ---## Set backup configuration --At server creation, you make the choice between configuring your server for either locally redundant -or geographically redundant backups. --> [!NOTE] -> After a server is created, the kind of redundancy it has, geographically redundant vs locally -> redundant, can't be changed. --While creating a server via the `New-AzMySqlServer` command, the **GeoRedundantBackup** -parameter decides your backup redundancy option. If **Enabled**, geo-redundant backups are taken; -if **Disabled**, locally redundant backups are taken. --The backup retention period is set by the **BackupRetentionDay** parameter. --For more information about setting these values during server creation, see -[Create an Azure Database for MySQL server using PowerShell](quickstart-create-mysql-server-database-using-azure-powershell.md). --The backup retention period of a server can be changed as follows: --```azurepowershell-interactive -Update-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -BackupRetentionDay 10 -``` --The preceding example changes the backup retention period of mydemoserver to 10 days. --The backup retention period governs how far back a point-in-time restore can be retrieved, since -it's based on available backups. Point-in-time restore is described further in the next section. --## Server point-in-time restore --You can restore the server to a previous point-in-time. The restored data is copied to a new server, -and the existing server is left unchanged. For example, if a table is accidentally dropped, you can -restore to the time just before the drop occurred. Then, you can retrieve the missing table and data from -the restored copy of the server. --To restore the server, use the `Restore-AzMySqlServer` PowerShell cmdlet. --### Run the restore command --To restore the server, run the following example from PowerShell. 
--```azurepowershell-interactive -$restorePointInTime = (Get-Date).AddMinutes(-10) -Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup | - Restore-AzMySqlServer -Name mydemoserver-restored -ResourceGroupName myresourcegroup -RestorePointInTime $restorePointInTime -UsePointInTimeRestore -``` --The **PointInTimeRestore** parameter set of the `Restore-AzMySqlServer` cmdlet requires the -following parameters: --| Setting | Suggested value | Description  | -| | | | -| ResourceGroupName |  myresourcegroup |  The resource group where the source server exists.  | -| Name | mydemoserver-restored | The name of the new server that is created by the restore command. | -| RestorePointInTime | 2020-03-13T13:59:00Z | Select a point in time to restore. This date and time must be within the source server's backup retention period. Use the ISO8601 date and time format. For example, you can use your own local time zone, such as **2020-03-13T05:59:00-08:00**. You can also use the UTC Zulu format, for example, **2018-03-13T13:59:00Z**. | -| UsePointInTimeRestore | `<SwitchParameter>` | Use point-in-time mode to restore. | --When you restore a server to an earlier point-in-time, a new server is created. The original server -and its databases from the specified point-in-time are copied to the new server. --The location and pricing tier values for the restored server remain the same as the original server. --After the restore process finishes, locate the new server and verify that the data is restored as -expected. The new server has the same server admin login name and password that was valid for the -existing server at the time the restore was started. The password can be changed from the new -server's **Overview** page. --The new server created during a restore does not have the VNet service endpoints that existed on the -original server. These rules must be set up separately for the new server. Firewall rules from the -original server are restored. --## Geo restore --If you configured your server for geographically redundant backups, a new server can be created from -the backup of the existing server. This new server can be created in any region that Azure Database -for MySQL is available. --To create a server using a geo redundant backup, use the `Restore-AzMySqlServer` command with the -**UseGeoRestore** parameter. --> [!NOTE] -> When a server is first created it may not be immediately available for geo restore. It may take a -> few hours for the necessary metadata to be populated. --To geo restore the server, run the following example from PowerShell: --```azurepowershell-interactive -Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup | - Restore-AzMySqlServer -Name mydemoserver-georestored -ResourceGroupName myresourcegroup -Location eastus -Sku GP_Gen5_8 -UseGeoRestore -``` --This example creates a new server called **mydemoserver-georestored** in the East US region that -belongs to **myresourcegroup**. It is a General Purpose, Gen 5 server with 8 vCores. The server is -created from the geo-redundant backup of **mydemoserver**, also in the resource group -**myresourcegroup**. 
--To create the new server in a different resource group from the existing server, specify the new -resource group name using the **ResourceGroupName** parameter as shown in the following example: --```azurepowershell-interactive -Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup | - Restore-AzMySqlServer -Name mydemoserver-georestored -ResourceGroupName newresourcegroup -Location eastus -Sku GP_Gen5_8 -UseGeoRestore -``` --The **GeoRestore** parameter set of the `Restore-AzMySqlServer` cmdlet requires the following -parameters: --| Setting | Suggested value | Description  | -| | | | -|ResourceGroupName | myresourcegroup | The name of the resource group the new server belongs to.| -|Name | mydemoserver-georestored | The name of the new server. | -|Location | eastus | The location of the new server. | -|UseGeoRestore | `<SwitchParameter>` | Use geo mode to restore. | --When creating a new server using geo restore, it inherits the same storage size and pricing tier as -the source server unless the **Sku** parameter is specified. --After the restore process finishes, locate the new server and verify that the data is restored as -expected. The new server has the same server admin login name and password that was valid for the -existing server at the time the restore was started. The password can be changed from the new -server's **Overview** page. --The new server created during a restore does not have the VNet service endpoints that existed on the -original server. These rules must be set up separately for this new server. Firewall rules from the -original server are restored. --## Next steps --> [!div class="nextstepaction"] -> [How to generate an Azure Database for MySQL connection string with PowerShell](how-to-connection-string-powershell.md) |
mysql | How To Server Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-server-parameters.md | - Title: Configure server parameters - Azure portal - Azure Database for MySQL -description: This article describes how to configure MySQL server parameters in Azure Database for MySQL using the Azure portal. ----- Previously updated : 06/20/2022---# Configure server parameters in Azure Database for MySQL using the Azure portal ----Azure Database for MySQL supports configuration of some server parameters. This article describes how to configure these parameters by using the Azure portal. Not all server parameters can be adjusted. -->[!NOTE] -> To update server parameters globally at the server level, use the [Azure CLI](./how-to-configure-server-parameters-using-cli.md), [PowerShell](./how-to-configure-server-parameters-using-powershell.md), or the [Azure portal](./how-to-server-parameters.md). --## Configure server parameters --1. Sign in to the [Azure portal](https://portal.azure.com), then locate your Azure Database for MySQL server. -2. Under the **SETTINGS** section, click **Server parameters** to open the server parameters page for the Azure Database for MySQL server. -3. Locate any settings you need to adjust. Review the **Description** column to understand the purpose and allowed values. -4. Click **Save** to save your changes. -5. If you have saved new values for the parameters, you can always revert everything back to the default values by selecting **Reset all to default**. --## Setting parameters not listed --If the server parameter you want to update is not listed in the Azure portal, you can optionally set the parameter at the connection level using `init_connect`. This sets the server parameters for each client connecting to the server. --1. Under the **SETTINGS** section, click **Server parameters** to open the server parameters page for the Azure Database for MySQL server. -2. Search for `init_connect`. -3. Add the server parameters in the format `SET parameter_name=YOUR_DESIRED_VALUE` in the value column. -- For example, you can change the character set of your server by setting `init_connect` to `SET character_set_client=utf8;SET character_set_database=utf8mb4;SET character_set_connection=latin1;SET character_set_results=latin1;` -4. Click **Save** to save your changes. -->[!NOTE] -> `init_connect` can be used to change parameters that do not require SUPER privilege(s) at the session level. To verify whether you can set a parameter using `init_connect`, execute the `SET SESSION parameter_name=YOUR_DESIRED_VALUE;` command; if it errors out with an **Access denied; you need SUPER privilege(s)** error, then you cannot set the parameter using `init_connect`. --## Working with the time zone parameter --### Populating the time zone tables --The time zone tables on your server can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench. --> [!NOTE] -> If you are running the `mysql.az_load_timezone` command from MySQL Workbench, you may need to turn off safe update mode first using `SET SQL_SAFE_UPDATES=0;`. --```sql -CALL mysql.az_load_timezone(); -``` --> [!IMPORTANT] -> You should restart the server to ensure the time zone tables are properly populated. To restart the server, use the [Azure portal](how-to-restart-server-portal.md) or [CLI](how-to-restart-server-cli.md). 
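For example, restarting with the Azure CLI (a minimal sketch reusing the sample names from earlier articles):

```azurecli-interactive
az mysql server restart --name mydemoserver --resource-group myresourcegroup
```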
--To view available time zone values, run the following command: --```sql -SELECT name FROM mysql.time_zone_name; -``` --### Setting the global level time zone --The global level time zone can be set from the **Server parameters** page in the Azure portal. The below sets the global time zone to the value "US/Pacific". ---### Setting the session level time zone --The session level time zone can be set by running the `SET time_zone` command from a tool like the MySQL command line or MySQL Workbench. The example below sets the time zone to the **US/Pacific** time zone. --```sql -SET time_zone = 'US/Pacific'; -``` --Refer to the MySQL documentation for [Date and Time Functions](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_convert-tz). --## Next steps --- [Connection libraries for Azure Database for MySQL](../flexible-server/concepts-connection-libraries.md). |
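As a follow-up to the time zone steps in the article above, you can compare the global and session values in a single query, a minimal sketch:

```sql
-- Returns the server-wide and per-session time zone settings
SELECT @@global.time_zone, @@session.time_zone;
```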
mysql | How To Stop Start Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-stop-start-server.md | - Title: Stop/start - Azure portal - Azure Database for MySQL server -description: This article describes how to stop/start operations in Azure Database for MySQL. ----- Previously updated : 06/20/2022---# Stop/Start an Azure Database for MySQL ----> [!IMPORTANT] -> When you **Stop** the server, it remains stopped for up to 7 days. If you do not manually **Start** it during this time, the server is automatically started at the end of the 7 days. You can **Stop** it again if you are not using the server. --This article provides a step-by-step procedure for stopping and starting a single server. --## Prerequisites --To complete this how-to guide, you need: --- An Azure Database for MySQL single server.--> [!NOTE] -> Refer to the limitations of the [stop/start operation](concepts-servers.md#limitations-of-stopstart-operation) --## How to stop/start the Azure Database for MySQL using Azure portal --### Stop a running server --1. In the [Azure portal](https://portal.azure.com/), choose your MySQL server that you want to stop. --2. From the **Overview** page, click the **Stop** button in the toolbar. -- :::image type="content" source="./media/how-to-stop-start-server/mysql-stop-server.png" alt-text="Azure Database for MySQL Stop server"::: -- > [!NOTE] - > Once the server is stopped, the other management operations are not available for the single server. --### Start a stopped server --1. In the [Azure portal](https://portal.azure.com/), choose your single server that you want to start. --2. From the **Overview** page, click the **Start** button in the toolbar. -- :::image type="content" source="./media/how-to-stop-start-server/mysql-start-server.png" alt-text="Azure Database for MySQL start server"::: -- > [!NOTE] - > Once the server is started, all management operations are now available for the single server. --## How to stop/start the Azure Database for MySQL using CLI --### Stop a running server --Run the following command to stop a running server: -- ```azurecli-interactive - az mysql server stop --name <server-name> -g <resource-group-name> - ``` - > [!NOTE] - > Once the server is stopped, the other management operations are not available for the single server. --### Start a stopped server --Run the following command to start a stopped server: -- ```azurecli-interactive - az mysql server start --name <server-name> -g <resource-group-name> - ``` - > [!NOTE] - > Once the server is started, all management operations are now available for the single server. --## Next steps -Learn about [how to create alerts on metrics](how-to-alert-on-metric.md). |
mysql | How To Tls Configurations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-tls-configurations.md | - Title: TLS configuration - Azure portal - Azure Database for MySQL -description: Learn how to set TLS configuration using Azure portal for your Azure Database for MySQL ----- Previously updated : 06/20/2022---# Configuring TLS settings in Azure Database for MySQL using Azure portal ----This article describes how to configure an Azure Database for MySQL server to enforce a minimum allowed TLS version for connections, denying all connections that use a lower TLS version than the configured minimum and thereby enhancing network security. --You can enforce the TLS version used to connect to your Azure Database for MySQL by setting the minimum TLS version for your database server. For example, setting the minimum TLS version to 1.0 means you allow clients to connect using TLS 1.0, 1.1, and 1.2. Alternatively, setting it to 1.2 means that you only allow clients to connect using TLS 1.2+, and all incoming connections with TLS 1.0 and TLS 1.1 are rejected. --## Prerequisites --To complete this how-to guide, you need: --* An [Azure Database for MySQL](quickstart-create-mysql-server-database-using-azure-portal.md) --## Set TLS configurations for Azure Database for MySQL --Follow these steps to set the MySQL server's minimum TLS version: --1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL server. --1. On the MySQL server page, under **Settings**, click **Connection security** to open the connection security configuration page. --1. In **Minimum TLS version**, select **1.2** to deny connections with a TLS version lower than TLS 1.2 for your MySQL server. -- :::image type="content" source="./media/how-to-tls-configurations/setting-tls-value.png" alt-text="Azure Database for MySQL TLS configuration"::: --1. Click **Save** to save the changes. --1. A notification confirms that the connection security setting was successfully enabled and took effect immediately. **No restart** of the server is required or performed. After the changes are saved, all new connections to the server are accepted only if the TLS version is greater than or equal to the minimum TLS version set on the portal. -- :::image type="content" source="./media/how-to-tls-configurations/setting-tls-value-success.png" alt-text="Azure Database for MySQL TLS configuration success"::: --## Next steps --- Learn about [how to create alerts on metrics](how-to-alert-on-metric.md) |
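After enforcing a minimum TLS version as described above, you can confirm what an individual connection actually negotiated from the MySQL side, a short sketch using standard session status variables:

```sql
-- Shows the TLS protocol version and cipher negotiated by the current connection
SHOW SESSION STATUS LIKE 'Ssl_version';
SHOW SESSION STATUS LIKE 'Ssl_cipher';
```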
mysql | How To Troubleshoot Common Connection Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-common-connection-issues.md | - Title: Troubleshoot connection issues - Azure Database for MySQL -description: Learn how to troubleshoot connection issues to Azure Database for MySQL, including transient errors requiring retries, firewall issues, and outages. -keywords: mysql connection,connection string,connectivity issues,transient error,connection error ----- Previously updated : 06/20/2022---# Troubleshoot connection issues to Azure Database for MySQL ----Connection problems may be caused by a variety of things, including: --* Firewall settings -* Connection time-out -* Incorrect login information -* Maximum limit reached on some Azure Database for MySQL resources -* Issues with the infrastructure of the service -* Maintenance being performed in the service -* The compute allocation of the server is changed by scaling the number of vCores or moving to a different service tier --Generally, connection issues to Azure Database for MySQL can be classified as follows: --* Transient errors (short-lived or intermittent) -* Persistent or non-transient errors (errors that regularly recur) --## Troubleshoot transient errors --Transient errors occur when maintenance is performed, the system encounters an error with the hardware or software, or you change the vCores or service tier of your server. The Azure Database for MySQL service has built-in high availability and is designed to mitigate these types of problems automatically. However, your application loses its connection to the server for a short period of time, typically less than 60 seconds. Some events can occasionally take longer to mitigate, such as when a large transaction causes a long-running recovery. --### Steps to resolve transient connectivity issues --1. Check the [Microsoft Azure Service Dashboard](https://azure.microsoft.com/status) for any known outages that occurred during the time in which the errors were reported by the application. -2. Applications that connect to a cloud service such as Azure Database for MySQL should expect transient errors and implement retry logic to handle these errors instead of surfacing these as application errors to users. Review [Handling of transient connectivity errors for Azure Database for MySQL](concepts-connectivity.md) for best practices and design guidelines for handling transient errors. -3. As a server approaches its resource limits, errors can seem to be a transient connectivity issue. See [Limitations in Azure Database for MySQL](concepts-limits.md). -4. If connectivity problems continue, if your application encounters the error for longer than 60 seconds, or if you see multiple occurrences of the error in a given day, file an Azure support request by selecting **Get Support** on the [Azure Support](https://azure.microsoft.com/support/options) site. --## Troubleshoot persistent errors --If the application persistently fails to connect to Azure Database for MySQL, it usually indicates an issue with one of the following: --* Server firewall configuration: Make sure that the Azure Database for MySQL server firewall is configured to allow connections from your client, including proxy servers and gateways. -* Client firewall configuration: The firewall on your client must allow connections to your database server. 
The IP addresses and ports of the server that you connect to must be allowed, as well as application names such as MySQL in some firewalls. -* User error: You might have mistyped connection parameters, such as the server name in the connection string or a missing *\@servername* suffix in the user name. --### Steps to resolve persistent connectivity issues --1. Set up [firewall rules](how-to-manage-firewall-using-portal.md) to allow the client IP address. For temporary testing purposes only, set up a firewall rule using 0.0.0.0 as the starting IP address and 255.255.255.255 as the ending IP address; this opens the server to all IP addresses (see the CLI sketch after this article). If this resolves your connectivity issue, remove this rule and create a firewall rule for an appropriately limited IP address or address range. -2. On all firewalls between the client and the internet, make sure that port 3306 is open for outbound connections. -3. Verify your connection string and other connection settings. Review [How to connect applications to Azure Database for MySQL](how-to-connection-string.md). -4. Check the service health in the dashboard. If you think there's a regional outage, see [Overview of business continuity with Azure Database for MySQL](concepts-business-continuity.md) for steps to recover to a new region. --## Next steps --* [Handling of transient connectivity errors for Azure Database for MySQL](concepts-connectivity.md) |
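The temporary allow-all rule from step 1 of the persistent-error checklist above can also be created and removed from the CLI. A hedged sketch; `AllowAllForTesting` is an illustrative rule name:

```azurecli-interactive
# Temporary rule for testing only: opens the server to all IP addresses
az mysql server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver \
    --name AllowAllForTesting --start-ip-address 0.0.0.0 --end-ip-address 255.255.255.255

# Remove the rule as soon as the connectivity test is done
az mysql server firewall-rule delete --resource-group myresourcegroup --server-name mydemoserver \
    --name AllowAllForTesting
```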
mysql | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md | - Title: Built-in policy definitions for Azure Database for MySQL -description: Lists Azure Policy built-in policy definitions for Azure Database for MySQL. These built-in policy definitions provide common approaches to managing your Azure resources. ------ Previously updated : 02/06/2024---# Azure Policy built-in definitions for Azure Database for MySQL ----This page is an index of [Azure Policy](../../governance/policy/overview.md) built-in policy definitions for Azure Database for MySQL. For additional Azure Policy built-ins for other services, see [Azure Policy built-in definitions](../../governance/policy/samples/built-in-policies.md). --The name of each built-in policy definition links to the policy definition in the Azure portal. Use the link in the **Version** column to view the source on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy). --## Azure Database for MySQL ---## Next steps --- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).-- Review the [Azure Policy definition structure](../../governance/policy/concepts/definition-structure.md).-- Review [Understanding policy effects](../../governance/policy/concepts/effects.md). |
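To browse the built-in definitions indexed above from the command line, a hedged sketch; filtering on the display name is an illustrative approach:

```azurecli-interactive
az policy definition list \
    --query "[?policyType=='BuiltIn' && contains(displayName, 'MySQL')].{name:name, displayName:displayName}" \
    --output table
```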
mysql | Quickstart Create Mysql Server Database Using Arm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-arm-template.md | - Title: 'Quickstart: Create an Azure Database for MySQL - ARM template' -description: In this Quickstart, learn how to create an Azure Database for MySQL server with virtual network integration, by using an Azure Resource Manager template. ------ Previously updated : 06/20/2022---# Quickstart: Use an ARM template to create an Azure Database for MySQL server ----Azure Database for MySQL is a managed service that you use to run, manage, and scale highly available MySQL databases in the cloud. In this quickstart, you use an Azure Resource Manager template (ARM template) to create an Azure Database for MySQL server with virtual network integration. You can create the server in the Azure portal, Azure CLI, or Azure PowerShell. ---If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal. ---## Prerequisites --# [Portal](#tab/azure-portal) --An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/). --# [PowerShell](#tab/PowerShell) --* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/). -* If you want to run the code locally, [Azure PowerShell](/powershell/azure/). --# [CLI](#tab/CLI) --* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/). -* If you want to run the code locally, [Azure CLI](/cli/azure/). ----## Review the template --You create an Azure Database for MySQL server with a defined set of compute and storage resources. To learn more, see [Azure Database for MySQL pricing tiers](concepts-pricing-tiers.md). You create the server within an [Azure resource group](../../azure-resource-manager/management/overview.md). --The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/managed-mysql-with-vnet/). ---The template defines five Azure resources: --* [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks) -* [**Microsoft.Network/virtualNetworks/subnets**](/azure/templates/microsoft.network/virtualnetworks/subnets) -* [**Microsoft.DBforMySQL/servers**](/azure/templates/microsoft.dbformysql/servers) -* [**Microsoft.DBforMySQL/servers/virtualNetworkRules**](/azure/templates/microsoft.dbformysql/servers/virtualnetworkrules) -* [**Microsoft.DBforMySQL/servers/firewallRules**](/azure/templates/microsoft.dbformysql/servers/firewallrules) --More Azure Database for MySQL template samples can be found in the [quickstart template gallery](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Dbformysql&pageNumber=1&sort=Popular). --## Deploy the template --# [Portal](#tab/azure-portal) --Select the following link to deploy the Azure Database for MySQL server template in the Azure portal: ---On the **Deploy Azure Database for MySQL with VNet** page: --1. For **Resource group**, select **Create new**, enter a name for the new resource group, and select **OK**. --2. If you created a new resource group, select a **Location** for the resource group and the new server. --3. Enter a **Server Name**, **Administrator Login**, and **Administrator Login Password**. 
-- :::image type="content" source="./media/quickstart-create-mysql-server-database-using-arm-template/deploy-azure-database-for-mysql-with-vnet.png" alt-text="Deploy Azure Database for MySQL with VNet window, Azure quickstart template, Azure portal"::: --4. Change the other default settings if you want: -- * **Subscription**: the Azure subscription you want to use for the server. - * **Sku Capacity**: the vCore capacity, which can be *2* (the default), *4*, *8*, *16*, *32*, or *64*. - * **Sku Name**: the SKU tier prefix, SKU family, and SKU capacity, joined by underscores, such as *B_Gen5_1*, *GP_Gen5_2* (the default), or *MO_Gen5_32*. - * **Sku Size MB**: the storage size, in megabytes, of the Azure Database for MySQL server (default *5120*). - * **Sku Tier**: the deployment tier, such as *Basic*, *GeneralPurpose* (the default), or *MemoryOptimized*. - * **Sku Family**: *Gen4* or *Gen5* (the default), which indicates hardware generation for server deployment. - * **Mysql Version**: the version of MySQL server to deploy, such as *5.6* or *5.7* (the default). - * **Backup Retention Days**: the desired period for geo-redundant backup retention, in days (default *7*). - * **Geo Redundant Backup**: *Enabled* or *Disabled* (the default), depending on geo-disaster recovery (Geo-DR) requirements. - * **Virtual Network Name**: the name of the virtual network (default *azure_mysql_vnet*). - * **Subnet Name**: the name of the subnet (default *azure_mysql_subnet*). - * **Virtual Network Rule Name**: the name of the virtual network rule allowing the subnet (default *AllowSubnet*). - * **Vnet Address Prefix**: the address prefix for the virtual network (default *10.0.0.0/16*). - * **Subnet Prefix**: the address prefix for the subnet (default *10.0.0.0/16*). --5. Read the terms and conditions, and then select **I agree to the terms and conditions stated above**. --6. Select **Purchase**. --# [PowerShell](#tab/PowerShell) --Use the following interactive code to create a new Azure Database for MySQL server using the template. The code prompts you for the new server name, the name and location of a new resource group, and an administrator account name and password. --To run the code in Azure Cloud Shell, select **Try it** at the upper corner of any code block. --```azurepowershell-interactive -$serverName = Read-Host -Prompt "Enter a name for the new Azure Database for MySQL server" -$resourceGroupName = Read-Host -Prompt "Enter a name for the new resource group where the server will exist" -$location = Read-Host -Prompt "Enter an Azure region (for example, centralus) for the resource group" -$adminUser = Read-Host -Prompt "Enter the Azure Database for MySQL server's administrator account name" -$adminPassword = Read-Host -Prompt "Enter the administrator password" -AsSecureString --New-AzResourceGroup -Name $resourceGroupName -Location $location # Use this command when you need to create a new resource group for your deployment -New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName ` - -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.dbformysql/managed-mysql-with-vnet/azuredeploy.json ` - -serverName $serverName ` - -administratorLogin $adminUser ` - -administratorLoginPassword $adminPassword --Read-Host -Prompt "Press [ENTER] to continue ..." -``` --# [CLI](#tab/CLI) --Use the following interactive code to create a new Azure Database for MySQL server using the template. 
The code prompts you for the new server name, the name and location of a new resource group, and an administrator account name and password. --To run the code in Azure Cloud Shell, select **Try it** at the upper corner of any code block. --```azurecli-interactive -echo "Enter a name for the new Azure Database for MySQL server:" && -read serverName && -echo "Enter a name for the new resource group where the server will exist:" && -read resourceGroupName && -echo "Enter an Azure region (for example, centralus) for the resource group:" && -read location && -echo "Enter the Azure Database for MySQL server's administrator account name:" && -read adminUser && -echo "Enter the administrator password:" && -read adminPassword && -params='serverName='$serverName' administratorLogin='$adminUser' administratorLoginPassword='$adminPassword && -az group create --name $resourceGroupName --location $location && -az deployment group create --resource-group $resourceGroupName --parameters $params --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.dbformysql/managed-mysql-with-vnet/azuredeploy.json && -echo "Press [ENTER] to continue ..." -``` ----## Review deployed resources --# [Portal](#tab/azure-portal) --Follow these steps to see an overview of your new Azure Database for MySQL server: --1. In the [Azure portal](https://portal.azure.com), search for and select **Azure Database for MySQL servers**. --2. In the database list, select your new server. The **Overview** page for your new Azure Database for MySQL server appears. --# [PowerShell](#tab/PowerShell) --Run the following interactive code to view details about your Azure Database for MySQL server. You'll have to enter the name of the new server. --```azurepowershell-interactive -$serverName = Read-Host -Prompt "Enter the name of your Azure Database for MySQL server" -Get-AzResource -ResourceType "Microsoft.DBforMySQL/servers" -Name $serverName | ft -Write-Host "Press [ENTER] to continue..." -``` --# [CLI](#tab/CLI) --Run the following interactive code to view details about your Azure Database for MySQL server. You'll have to enter the name and the resource group of the new server. --```azurecli-interactive -echo "Enter your Azure Database for MySQL server name:" && -read serverName && -echo "Enter the resource group where the Azure Database for MySQL server exists:" && -read resourcegroupName && -az resource show --resource-group $resourcegroupName --name $serverName --resource-type "Microsoft.DbForMySQL/servers" -``` ----## Exporting ARM template from the portal -You can [export an ARM template](../../azure-resource-manager/templates/export-template-portal.md) from the Azure portal. There are two ways to export a template: --- [Export from resource group or resource](../../azure-resource-manager/templates/export-template-portal.md#export-template-from-a-resource). This option generates a new template from existing resources. The exported template is a "snapshot" of the current state of the resource group. You can export an entire resource group or specific resources within that resource group.-- [Export before deployment or from history](../../azure-resource-manager/templates/export-template-portal.md#download-template-before-deployment). 
This option retrieves an exact copy of a template used for deployment.--When you export the template, the `administratorLogin` and `administratorLoginPassword` properties aren't included in the `"properties": { }` section of the MySQL server resource, for security reasons. You **MUST** add these parameters to your template before deploying it, or the deployment will fail. --```json -"resources": [ - { - "type": "Microsoft.DBforMySQL/servers", - "apiVersion": "2017-12-01", - "name": "[parameters('servers_name')]", - "location": "southcentralus", - "sku": { - "name": "B_Gen5_1", - "tier": "Basic", - "family": "Gen5", - "capacity": 1 - }, - "properties": { - "administratorLogin": "[parameters('administratorLogin')]", - "administratorLoginPassword": "[parameters('administratorLoginPassword')]", -``` --## Clean up resources --When it's no longer needed, delete the resource group, which deletes the resources in the resource group. --# [Portal](#tab/azure-portal) --1. In the [Azure portal](https://portal.azure.com), search for and select **Resource groups**. --2. In the resource group list, choose the name of your resource group. --3. In the **Overview** page of your resource group, select **Delete resource group**. --4. In the confirmation dialog box, type the name of your resource group, and then select **Delete**. --# [PowerShell](#tab/PowerShell) --```azurepowershell-interactive -$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name" -Remove-AzResourceGroup -Name $resourceGroupName -Write-Host "Press [ENTER] to continue..." -``` --# [CLI](#tab/CLI) --```azurecli-interactive -echo "Enter the Resource Group name:" && -read resourceGroupName && -az group delete --name $resourceGroupName && -echo "Press [ENTER] to continue ..." -``` ----## Next steps --For a step-by-step tutorial that guides you through the process of creating an ARM template, see: --> [!div class="nextstepaction"] -> [Tutorial: Create and deploy your first ARM template](../../azure-resource-manager/templates/template-tutorial-create-first-template.md) |
mysql | Quickstart Create Mysql Server Database Using Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-cli.md | - Title: 'Quickstart: Create a server - Azure CLI - Azure Database for MySQL' -description: This quickstart describes how to use the Azure CLI to create an Azure Database for MySQL server in an Azure resource group. ----- Previously updated : 06/20/2022----# Quickstart: Create an Azure Database for MySQL server using Azure CLI ----> [!TIP] -> Consider using the simpler [az mysql up](/cli/azure/mysql#az-mysql-up) Azure CLI command (currently in preview). Try out the [quickstart](./quickstart-create-server-up-azure-cli.md). --This quickstart shows how to use the [Azure CLI](/cli/azure/get-started-with-azure-cli) commands in [Azure Cloud Shell](https://shell.azure.com) to create an Azure Database for MySQL server in five minutes. ------ ```azurecli - az account set --subscription <subscription id> - ``` --## Create an Azure Database for MySQL server -Create an [Azure resource group](../../azure-resource-manager/management/overview.md) using the [az group create](/cli/azure/group) command and then create your MySQL server inside this resource group. Provide a unique name for the server. The following example creates a resource group named `myresourcegroup` in the `westus` location. --```azurecli-interactive -az group create --name myresourcegroup --location westus -``` --Create an Azure Database for MySQL server with the [az mysql server create](/cli/azure/mysql/server#az-mysql-server-create) command. A server can contain multiple databases. --```azurecli -az mysql server create --resource-group myresourcegroup --name mydemoserver --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2 -``` --Here are the details for the preceding arguments: --**Setting** | **Sample value** | **Description** -|| -name | mydemoserver | Enter a unique name for your Azure Database for MySQL server. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain from 3 to 63 characters. -resource-group | myresourcegroup | Provide the name of the Azure resource group. -location | westus | The Azure location for the server. -admin-user | myadmin | The username for the administrator login. It cannot be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**. -admin-password | *secure password* | The password of the administrator user. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters. -sku-name|GP_Gen5_2|Enter the name of the pricing tier and compute configuration. Follows the convention {pricing tier}_{compute generation}_{vCores} in shorthand. See the [pricing tiers](./concepts-pricing-tiers.md) for more information. -->[!IMPORTANT] ->- The default MySQL version on your server is 5.7. Versions 5.6 and 8.0 are also available. ->- To view all the arguments for the **az mysql server create** command, see this [reference document](/cli/azure/mysql/server#az-mysql-server-create). ->- SSL is enabled by default on your server. 
For more information on SSL, see [Configure SSL connectivity](how-to-configure-ssl.md). --## Configure a server-level firewall rule -By default, the new server is protected with firewall rules and isn't publicly accessible. You can configure a firewall rule on your server using the [az mysql server firewall-rule create](/cli/azure/mysql/server/firewall-rule) command. This allows you to connect to the server locally. --The following example creates a firewall rule called `AllowMyIP` that allows connections from a specific IP address, 192.168.0.1. Replace it with the IP address you'll be connecting from. You can use a range of IP addresses if needed. If you don't know your IP address, go to [https://whatismyipaddress.com/](https://whatismyipaddress.com/) to find it. --```azurecli-interactive -az mysql server firewall-rule create --resource-group myresourcegroup --server mydemoserver --name AllowMyIP --start-ip-address 192.168.0.1 --end-ip-address 192.168.0.1 -``` --> [!NOTE] -> Connections to Azure Database for MySQL communicate over port 3306. If you try to connect from within a corporate network, outbound traffic over port 3306 might not be allowed. If this is the case, you can't connect to your server unless your IT department opens port 3306. --## Get the connection information --To connect to your server, you need to provide host information and access credentials. --```azurecli-interactive -az mysql server show --resource-group myresourcegroup --name mydemoserver -``` --The result is in JSON format. Make a note of the **fullyQualifiedDomainName** and **administratorLogin**. -```json -{ - "administratorLogin": "myadmin", - "earliestRestoreDate": null, - "fullyQualifiedDomainName": "mydemoserver.mysql.database.azure.com", - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.DBforMySQL/servers/mydemoserver", - "location": "westus", - "name": "mydemoserver", - "resourceGroup": "myresourcegroup", - "sku": { - "capacity": 2, - "family": "Gen5", - "name": "GP_Gen5_2", - "size": null, - "tier": "GeneralPurpose" - }, - "sslEnforcement": "Enabled", - "storageProfile": { - "backupRetentionDays": 7, - "geoRedundantBackup": "Disabled", - "storageMb": 5120 - }, - "tags": null, - "type": "Microsoft.DBforMySQL/servers", - "userVisibleState": "Ready", - "version": "5.7" -} -``` --## Connect to Azure Database for MySQL server using mysql command-line client -You can connect to your server using the popular **[mysql.exe](https://dev.mysql.com/downloads/)** command-line tool in [Azure Cloud Shell](../../cloud-shell/overview.md). Alternatively, you can use the mysql command line in your local environment (a programmatic connection sketch also appears at the end of this article). -```bash - mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p -``` --## Clean up resources -If you don't need these resources for another quickstart or tutorial, you can delete them by running the following command: --```azurecli-interactive -az group delete --name myresourcegroup -``` --If you'd like to delete only the newly created server, run the [az mysql server delete](/cli/azure/mysql/server#az-mysql-server-delete) command. --```azurecli-interactive -az mysql server delete --resource-group myresourcegroup --name mydemoserver -``` --## Next steps --> [!div class="nextstepaction"] ->[Build a PHP app on Windows with MySQL](../../app-service/tutorial-php-mysql-app.md) |
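Beyond the `mysql` client, the connection values noted above can be used from application code. A minimal sketch in Python, assuming the `mysql-connector-python` package (the password is a placeholder):

```python
import mysql.connector

# Values from `az mysql server show`; single server logins use the user@server form.
conn = mysql.connector.connect(
    host="mydemoserver.mysql.database.azure.com",
    user="myadmin@mydemoserver",
    password="<server_admin_password>",
)
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone()[0])  # for example, a 5.7.x version string
conn.close()
```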
mysql | Quickstart Create Mysql Server Database Using Azure Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-powershell.md | - Title: 'Quickstart: Create a server - Azure PowerShell - Azure Database for MySQL' -description: This quickstart describes how to use PowerShell to create an Azure Database for MySQL server in an Azure resource group. ----- Previously updated : 06/20/2022----# Quickstart: Create an Azure Database for MySQL server using PowerShell ----This quickstart describes how to use PowerShell to create an Azure Database for MySQL server in an Azure resource group. You can use PowerShell to create and manage Azure resources interactively or in scripts. --## Prerequisites --If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. --If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell). --> [!IMPORTANT] -> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az -> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`. -> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az -> PowerShell module releases and available natively from within Azure Cloud Shell. --If this is your first time using the Azure Database for MySQL service, you must register the -**Microsoft.DBforMySQL** resource provider. --```azurepowershell-interactive -Register-AzResourceProvider -ProviderNamespace Microsoft.DBforMySQL -``` ---If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources should be billed. Select a specific subscription ID using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet. --```azurepowershell-interactive -Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000 -``` --## Create a resource group --Create an [Azure resource group](../../azure-resource-manager/management/overview.md) using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet. A resource group is a logical container in which Azure resources are deployed and managed as a group. --The following example creates a resource group named **myresourcegroup** in the **West US** region. --```azurepowershell-interactive -New-AzResourceGroup -Name myresourcegroup -Location westus -``` --## Create an Azure Database for MySQL server --Create an Azure Database for MySQL server with the `New-AzMySqlServer` cmdlet. A server can manage multiple databases. Typically, a separate database is used for each project or for each user. --The following table contains a list of commonly used parameters and sample values for the -`New-AzMySqlServer` cmdlet. --| **Setting** | **Sample value** | **Description** | -| -- | - | - | -| Name | mydemoserver | Choose a globally unique name in Azure that identifies your Azure Database for MySQL server. The server name can only contain letters, numbers, and the hyphen (-) character. Any uppercase characters that are specified are automatically converted to lowercase during the creation process. 
It must contain from 3 to 63 characters. | -| ResourceGroupName | myresourcegroup | Provide the name of the Azure resource group. | -| Sku | GP_Gen5_2 | The name of the SKU. Follows the convention **pricing-tier\_compute-generation\_vCores** in shorthand. For more information about the Sku parameter, see the information following this table. | -| BackupRetentionDay | 7 | How long a backup should be retained. Unit is days. Range is 7-35. | -| GeoRedundantBackup | Enabled | Whether geo-redundant backups should be enabled for this server or not. This value cannot be enabled for servers in the basic pricing tier and it cannot be changed after the server is created. Allowed values: Enabled, Disabled. | -| Location | westus | The Azure region for the server. | -| SslEnforcement | Enabled | Whether SSL should be enabled or not for this server. Allowed values: Enabled, Disabled. | -| StorageInMb | 51200 | The storage capacity of the server (unit is megabytes). Valid StorageInMb is a minimum of 5120 MB and increases in 1024 MB increments. For more information about storage size limits, see [Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md). | -| Version | 5.7 | The MySQL major version. | -| AdministratorUserName | myadmin | The username for the administrator login. It cannot be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**. | -| AdministratorLoginPassword | `<securestring>` | The password of the administrator user in the form of a secure string. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters. | --The **Sku** parameter value follows the convention **pricing-tier\_compute-generation\_vCores** as shown in the following examples. --- `-Sku B_Gen5_1` maps to Basic, Gen 5, and 1 vCore. This option is the smallest SKU available.-- `-Sku GP_Gen5_32` maps to General Purpose, Gen 5, and 32 vCores.-- `-Sku MO_Gen5_2` maps to Memory Optimized, Gen 5, and 2 vCores.--For information about valid **Sku** values by region and for tiers, see -[Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md). --The following example creates a MySQL server in the **West US** region named **mydemoserver** in the -**myresourcegroup** resource group with a server admin login of **myadmin**. It is a Gen 5 server in the general-purpose pricing tier with 2 vCores and geo-redundant backups enabled. Document the password used in the first line of the example as this is the password for the MySQL server admin account. --> [!TIP] -> A server name maps to a DNS name and must be globally unique in Azure. --```azurepowershell-interactive -$Password = Read-Host -Prompt 'Please enter your password' -AsSecureString -New-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -Sku GP_Gen5_2 -GeoRedundantBackup Enabled -Location westus -AdministratorUsername myadmin -AdministratorLoginPassword $Password -``` --Consider using the basic pricing tier if light compute and I/O are adequate for your workload. --> [!IMPORTANT] -> Servers created in the basic pricing tier cannot be later scaled to general-purpose or memory- -> optimized and cannot be geo-replicated. --## Configure a firewall rule --Create an Azure Database for MySQL server-level firewall rule using the `New-AzMySqlFirewallRule` cmdlet. 
A server-level firewall rule allows an external application, such as the `mysql` command-line tool or MySQL Workbench to connect to your server through the Azure Database for MySQL service firewall. --The following example creates a firewall rule named **AllowMyIP** that allows connections from a specific IP address, 192.168.0.1. Substitute an IP address or range of IP addresses that correspond to the location that you are connecting from. --```azurepowershell-interactive -New-AzMySqlFirewallRule -Name AllowMyIP -ResourceGroupName myresourcegroup -ServerName mydemoserver -StartIPAddress 192.168.0.1 -EndIPAddress 192.168.0.1 -``` --> [!NOTE] -> Connections to Azure Database for MySQL communicate over port 3306. If you try to connect from -> within a corporate network, outbound traffic over port 3306 might not be allowed. In this -> scenario, you can only connect to the server if your IT department opens port 3306. --## Configure SSL settings --By default, SSL connections between your server and client applications are enforced. This default ensures the security of _in-motion_ data by encrypting the data stream over the Internet. For this quickstart, disable SSL connections for your server. For more information, see [Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](./how-to-configure-ssl.md). --> [!WARNING] -> Disabling SSL is not recommended for production servers. --The following example disables SSL on your Azure Database for MySQL server. --```azurepowershell-interactive -Update-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -SslEnforcement Disabled -``` --## Get the connection information --To connect to your server, you need to provide host information and access credentials. Use the following example to determine the connection information. Make a note of the values for **FullyQualifiedDomainName** and **AdministratorLogin**. --```azurepowershell-interactive -Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup | - Select-Object -Property FullyQualifiedDomainName, AdministratorLogin -``` --```Output -FullyQualifiedDomainName AdministratorLogin - -mydemoserver.mysql.database.azure.com myadmin -``` --## Connect to the server using the mysql command-line tool --Connect to your server using the `mysql` command-line tool. To download and install the command-line tool, see [MySQL Community Downloads](https://dev.mysql.com/downloads/shell/). You can also access a pre-installed version of the `mysql` command-line tool in Azure Cloud Shell by selecting the **Try It** button on a code sample in this article. Other ways to access Azure Cloud Shell are to select the **>_** button on the upper-right toolbar in the Azure portal or by visiting [shell.azure.com](https://shell.azure.com/). --1. Connect to the server using the `mysql` command-line tool. -- ```azurepowershell-interactive - mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p - ``` --1. View server status. -- ```sql - mysql> status - ``` -- ```Output - C:\Users\>mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p - Enter password: ************* - Welcome to the MySQL monitor. Commands end with ; or \g. - Your MySQL connection id is 65512 - Server version: 5.6.42.0 MySQL Community Server (GPL) -- Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved. -- Oracle is a registered trademark of Oracle Corporation and/or its - affiliates. 
Other names may be trademarks of their respective - owners. -- Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. -- mysql> status - -- - mysql Ver 14.14 Distrib 5.7.29, for Win64 (x86_64) -- Connection id: 65512 - Current database: - Current user: myadmin@myipaddress - SSL: Not in use - Using delimiter: ; - Server version: 5.6.42.0 MySQL Community Server (GPL) - Protocol version: 10 - Connection: mydemoserver.mysql.database.azure.com via TCP/IP - Server characterset: latin1 - Db characterset: latin1 - Client characterset: utf8 - Conn. characterset: utf8 - TCP port: 3306 - Uptime: 1 hour 2 min 12 sec -- Threads: 7 Questions: 952 Slow queries: 0 Opens: 66 Flush tables: 3 Open tables: 16 Queries per second avg: 0.255 - -- -- mysql> - ``` --For additional commands, see [MySQL 5.7 Reference Manual - Chapter 4.5.1](https://dev.mysql.com/doc/refman/5.7/en/mysql.html). --## Connect to the server using MySQL Workbench --1. Launch the MySQL Workbench application on your client computer. To download and install MySQL Workbench, see [Download MySQL Workbench](https://dev.mysql.com/downloads/workbench/). --1. In the **Setup New Connection** dialog box, enter the following information on the **Parameters** - tab: -- :::image type="content" source="./media/quickstart-create-mysql-server-database-using-azure-powershell/setup-new-connection.png" alt-text="setup new connection"::: -- | **Setting** | **Suggested Value** | **Description** | - | -- | | - | - | Connection Name | My Connection | Specify a label for this connection | - | Connection Method | Standard (TCP/IP) | Use TCP/IP protocol to connect to Azure Database for MySQL | - | Hostname | `mydemoserver.mysql.database.azure.com` | Server name you previously noted | - | Port | 3306 | The default port for MySQL | - | Username | myadmin@mydemoserver | The server admin login you previously noted | - | Password | ************* | Use the admin account password you configured earlier | --1. To test if the parameters are configured correctly, click the **Test Connection** button. --1. Select the connection to connect to the server. --## Clean up resources --If the resources created in this quickstart aren't needed for another quickstart or tutorial, you can delete them by running the following example. --> [!CAUTION] -> The following example deletes the specified resource group and all resources contained within it. -> If resources outside the scope of this quickstart exist in the specified resource group, they will -> also be deleted. --```azurepowershell-interactive -Remove-AzResourceGroup -Name myresourcegroup -``` --To delete only the server created in this quickstart without deleting the resource group, use the -`Remove-AzMySqlServer` cmdlet. --```azurepowershell-interactive -Remove-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -``` --## Next steps --> [!div class="nextstepaction"] -> [Design an Azure Database for MySQL using PowerShell](tutorial-design-database-using-powershell.md) |
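If you keep SSL enforcement enabled instead of disabling it as shown in this quickstart, clients should verify the server certificate against the service's CA. A minimal Python sketch, assuming `mysql-connector-python` and a locally downloaded CA certificate file (the server name, credentials, and path are placeholders):

```python
import mysql.connector

conn = mysql.connector.connect(
    host="mydemoserver.mysql.database.azure.com",  # placeholder server name
    user="myadmin@mydemoserver",                   # placeholder admin login
    password="<server_admin_password>",
    ssl_ca="/path/to/ca-certificate.crt.pem",      # placeholder path to the CA certificate file
)
print(conn.is_connected())  # True when the TLS handshake and login succeed
conn.close()
```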
mysql | Quickstart Create Server Up Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-server-up-azure-cli.md | - Title: 'Quickstart: Create Azure Database for MySQL using az mysql up' -description: Quickstart guide to create Azure Database for MySQL server using Azure CLI (command line interface) up command. ----- Previously updated : 06/20/2022----# Quickstart: Create an Azure Database for MySQL using a simple Azure CLI command - az mysql up (preview) ----> [!IMPORTANT] -> The [az mysql up](/cli/azure/mysql#az-mysql-up) Azure CLI command is in preview. --Azure Database for MySQL is a managed service that enables you to run, manage, and scale highly available MySQL databases in the cloud. The Azure CLI is used to create and manage Azure resources from the command-line or in scripts. This quickstart shows you how to use the [az mysql up](/cli/azure/mysql#az-mysql-up) command to create an Azure Database for MySQL server using the Azure CLI. In addition to creating the server, the `az mysql up` command creates a sample database, a root user in the database, opens the firewall for Azure services, and creates default firewall rules for the client computer. This helps to expedite the development process. --## Prerequisites --If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. --This article requires that you're running the Azure CLI version 2.0 or later locally. To see the version installed, run the `az --version` command. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). --You'll need to sign in to your account using the [az login](/cli/azure/authenticate-azure-cli) command. Note the **id** property from the command output for the corresponding subscription name. --```azurecli -az login -``` --If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. Select the specific subscription ID under your account using the [az account set](/cli/azure/account) command. Substitute the **subscription ID** property from the **az login** output for your subscription into the subscription ID placeholder. --```azurecli -az account set --subscription <subscription id> -``` --## Create an Azure Database for MySQL server --To use the commands, install the [db-up](/cli/azure/mysql) extension. If an error is returned, ensure you have installed the latest version of the Azure CLI. See [Install Azure CLI](/cli/azure/install-azure-cli). --```azurecli -az extension add --name db-up -``` --Create an Azure Database for MySQL server using the following command: --```azurecli -az mysql up -``` --The server is created with the following default values (unless you manually override them): --**Setting** | **Default value** | **Description** -|| -server-name | System generated | A unique name that identifies your Azure Database for MySQL server. -resource-group | System generated | A new Azure resource group. -sku-name | GP_Gen5_2 | The name of the SKU. Follows the convention {pricing tier}\_{compute generation}\_{vCores} in shorthand. The default is a General Purpose Gen5 server with 2 vCores. See our [pricing page](https://azure.microsoft.com/pricing/details/mysql/) for more information about the tiers. -backup-retention | 7 | How long a backup should be retained. Unit is days. -geo-redundant-backup | Disabled | Whether geo-redundant backups should be enabled for this server or not. 
-location | westus2 | The Azure location for the server. -ssl-enforcement | Enabled | Whether SSL should be enabled or not for this server. -storage-size | 5120 | The storage capacity of the server (unit is megabytes). -version | 5.7 | The MySQL major version. -admin-user | System generated | The username for the administrator login. -admin-password | System generated | The password of the administrator user. --> [!NOTE] -> For more information about the `az mysql up` command and its additional parameters, see the [Azure CLI documentation](/cli/azure/mysql#az-mysql-up). --Once your server is created, it comes with the following settings: --- A firewall rule called "devbox" is created. The Azure CLI attempts to detect the IP address of the machine the `az mysql up` command is run from and allows that IP address.-- "Allow access to Azure services" is set to ON. This setting configures the server's firewall to accept connections from all Azure resources, including resources not in your subscription.-- The `wait_timeout` parameter is set to 8 hours-- An empty database named "sampledb" is created-- A new user named "root" with privileges to "sampledb" is created--> [!NOTE] -> Azure Database for MySQL communicates over port 3306. When connecting from within a corporate network, outbound traffic over port 3306 may not be allowed by your network's firewall. Have your IT department open port 3306 to connect to your server. --## Get the connection information --After the `az mysql up` command is completed, a list of connection strings for popular programming languages is returned to you. These connection strings are pre-configured with the specific attributes of your newly created Azure Database for MySQL server. --You can use the [az mysql show-connection-string](/cli/azure/mysql#az-mysql-show-connection-string) command to list these connection strings again. --## Clean up resources --Clean up all resources you created in the quickstart using the following command. This command deletes the Azure Database for MySQL server and the resource group. --```azurecli -az mysql down --delete-group -``` --If you would just like to delete the newly created server, you can run [az mysql down](/cli/azure/mysql#az-mysql-down) command. --```azurecli -az mysql down -``` --## Next steps --> [!div class="nextstepaction"] -> [Design a MySQL Database with Azure CLI](./tutorial-design-database-using-cli.md) |
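The connection strings that `az mysql up` returns can be used as-is from application code. As a quick sanity check of the defaults listed above, the following sketch connects to the generated `sampledb` database and confirms the `wait_timeout` setting (assumes `mysql-connector-python`; the server name and credentials are placeholders for the generated values):

```python
import mysql.connector

conn = mysql.connector.connect(
    host="<generated-server-name>.mysql.database.azure.com",  # placeholder for the generated server
    user="root@<generated-server-name>",                      # placeholder for the generated user
    password="<generated-password>",                          # placeholder for the generated password
    database="sampledb",
)
cur = conn.cursor()
cur.execute("SHOW VARIABLES LIKE 'wait_timeout'")
print(cur.fetchone())  # expected ('wait_timeout', '28800'), that is, 8 hours in seconds
conn.close()
```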
mysql | Reference Stored Procedures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/reference-stored-procedures.md | - Title: Management stored procedures - Azure Database for MySQL -description: Learn which stored procedures in Azure Database for MySQL are useful to help you configure data-in replication, set the timezone, and kill queries. ----- Previously updated : 06/20/2022---# Azure Database for MySQL management stored procedures ----Stored procedures are available on Azure Database for MySQL servers to help manage your MySQL server. These include managing your server's connections and queries, and setting up Data-in Replication. --## Data-in Replication stored procedures --Data-in Replication allows you to synchronize data from a MySQL server running on-premises, in virtual machines, or database services hosted by other cloud providers into the Azure Database for MySQL service. --The following stored procedures are used to set up or remove Data-in Replication between a source and replica. --|**Stored Procedure Name**|**Input Parameters**|**Output Parameters**|**Usage Note**| -|--|--|--|--| -|*mysql.az_replication_change_master*|master_host<br/>master_user<br/>master_password<br/>master_port<br/>master_log_file<br/>master_log_pos<br/>master_ssl_ca|N/A|To transfer data with SSL mode, pass in the CA certificate's content into the master_ssl_ca parameter. </br><br>To transfer data without SSL, pass in an empty string into the master_ssl_ca parameter.| -|*mysql.az_replication_start*|N/A|N/A|Starts replication.| -|*mysql.az_replication_stop*|N/A|N/A|Stops replication.| -|*mysql.az_replication_remove_master*|N/A|N/A|Removes the replication relationship between the source and replica.| -|*mysql.az_replication_skip_counter*|N/A|N/A|Skips one replication error.| --To set up Data-in Replication between a source and a replica in Azure Database for MySQL, refer to [how to configure Data-in Replication](how-to-data-in-replication.md). --## Other stored procedures --The following stored procedures are available in Azure Database for MySQL to manage your server. --|**Stored Procedure Name**|**Input Parameters**|**Output Parameters**|**Usage Note**| -|--|--|--|--| -|*mysql.az_kill*|processlist_id|N/A|Equivalent to the [`KILL CONNECTION`](https://dev.mysql.com/doc/refman/8.0/en/kill.html) command. Terminates the connection associated with the provided processlist_id after terminating any statement the connection is executing.| -|*mysql.az_kill_query*|processlist_id|N/A|Equivalent to the [`KILL QUERY`](https://dev.mysql.com/doc/refman/8.0/en/kill.html) command. Terminates the statement the connection is currently executing. Leaves the connection itself alive.| -|*mysql.az_load_timezone*|N/A|N/A|Loads time zone tables to allow the `time_zone` parameter to be set to named values (for example, "US/Pacific").| --For an example of calling these procedures from a client application, see the sketch at the end of this article. --## Next steps -- Learn how to set up [Data-in Replication](how-to-data-in-replication.md)-- Learn how to use the [time zone tables](how-to-server-parameters.md#working-with-the-time-zone-parameter) |
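As a concrete illustration, the following sketch calls *mysql.az_kill_query* from Python to end a long-running statement while keeping its connection alive. It assumes `mysql-connector-python`; the server name, credentials, and process list ID are placeholders:

```python
import mysql.connector

conn = mysql.connector.connect(
    host="mydemoserver.mysql.database.azure.com",  # placeholder server name
    user="myadmin@mydemoserver",                   # placeholder admin login
    password="<server_admin_password>",
)
cur = conn.cursor()
# List current connections and their statements to find the target processlist_id.
cur.execute("SHOW PROCESSLIST")
for row in cur.fetchall():
    print(row)
# End the running statement for that ID; the session itself stays alive.
cur.execute("CALL mysql.az_kill_query(%s)", (12345,))  # placeholder processlist_id
conn.close()
```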
mysql | Sample Scripts Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/sample-scripts-azure-cli.md | - Title: Azure CLI samples - Azure Database for MySQL | Microsoft Docs -description: This article lists the Azure CLI code samples available for interacting with Azure Database for MySQL. ------ Previously updated : 06/20/2022---# Azure CLI samples for Azure Database for MySQL ----The following table includes links to sample Azure CLI scripts for Azure Database for MySQL. --| Sample link | Description | -||| -|**Create a server**|| -| [Create a server and firewall rule](../scripts/sample-create-server-and-firewall-rule.md) | Azure CLI script that creates a single Azure Database for MySQL server and configures a server-level firewall rule. | -|**Scale a server**|| -| [Scale a server](../scripts/sample-scale-server.md) | Azure CLI script that scales a single Azure Database for MySQL server up or down to allow for changing performance needs. | -|**Change server configurations**|| -| [Change server configurations](../scripts/sample-change-server-configuration.md) | Azure CLI script that changes the configuration of a single Azure Database for MySQL server. | -|**Restore a server**|| -| [Restore a server](../scripts/sample-point-in-time-restore.md) | Azure CLI script that restores a single Azure Database for MySQL server to a previous point in time. | -|**Work with server logs**|| -| [Enable server logs](../scripts/sample-server-logs.md) | Azure CLI script that enables server logs of a single Azure Database for MySQL server. | |
mysql | Sample Scripts Java Connection Pooling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/sample-scripts-java-connection-pooling.md | - Title: Java samples to illustrate connection pooling -description: This article lists Java samples to illustrate connection pooling. ------ Previously updated : 06/20/2022---# Java sample to illustrate connection pooling ----The following sample code illustrates connection pooling in Java. --```java -import java.sql.Connection; -import java.sql.DriverManager; -import java.sql.ResultSet; -import java.sql.SQLException; -import java.sql.Statement; -import java.util.HashSet; -import java.util.Set; -import java.util.Stack; --public class MySQLConnectionPool { - private String databaseUrl; - private String userName; - private String password; - private int maxPoolSize = 10; - private int connNum = 0; -- private static final String SQL_VERIFYCONN = "select 1"; -- Stack<Connection> freePool = new Stack<>(); - Set<Connection> occupiedPool = new HashSet<>(); -- /** - * Constructor - * - * @param databaseUrl - * The connection url - * @param userName - * user name - * @param password - * password - * @param maxSize - * max size of the connection pool - */ - public MySQLConnectionPool(String databaseUrl, String userName, - String password, int maxSize) { - this.databaseUrl = databaseUrl; - this.userName = userName; - this.password = password; - this.maxPoolSize = maxSize; - } -- /** - * Get an available connection - * - * @return An available connection - * @throws SQLException - * When an available connection can't be obtained - */ - public synchronized Connection getConnection() throws SQLException { - Connection conn = null; -- if (isFull()) { - throw new SQLException("The connection pool is full."); - } -- conn = getConnectionFromPool(); -- // If there is no free connection, create a new one. - if (conn == null) { - conn = createNewConnectionForPool(); - } -- // For Azure Database for MySQL, if there is no activity on a connection for some - // time, the connection is lost. So make sure the connection is - // active; otherwise, reconnect it. - conn = makeAvailable(conn); - return conn; - } -- /** - * Return a connection to the pool - * - * @param conn - * The connection - * @throws SQLException - * When the connection has already been returned, or it wasn't - * obtained from this pool. - */ - public synchronized void returnConnection(Connection conn) - throws SQLException { - if (conn == null) { - throw new NullPointerException(); - } - if (!occupiedPool.remove(conn)) { - throw new SQLException( - "The connection has already been returned, or it isn't from this pool"); - } - freePool.push(conn); - } -- /** - * Verify whether the connection pool is full. - * - * @return whether the pool is full - */ - private synchronized boolean isFull() { - return ((freePool.size() == 0) && (connNum >= maxPoolSize)); - } -- /** - * Create a connection for the pool - * - * @return the newly created connection - * @throws SQLException - * When a new connection can't be created. - */ - private Connection createNewConnectionForPool() throws SQLException { - Connection conn = createNewConnection(); - connNum++; - occupiedPool.add(conn); - return conn; - } -- /** - * Create a new connection - * - * @return the newly created connection - * @throws SQLException - * When a new connection can't be created. 
- */ - private Connection createNewConnection() throws SQLException { - Connection conn = null; - conn = DriverManager.getConnection(databaseUrl, userName, password); - return conn; - } -- /** - * Get a connection from the pool. If there is no free connection, return - * null - * - * @return the connection. - */ - private Connection getConnectionFromPool() { - Connection conn = null; - if (freePool.size() > 0) { - conn = freePool.pop(); - occupiedPool.add(conn); - } - return conn; - } -- /** - * Make sure the connection is available now. Otherwise, reconnect it. - * - * @param conn - * The connection for verification. - * @return the available connection. - * @throws SQLException - * When an available connection can't be obtained - */ - private Connection makeAvailable(Connection conn) throws SQLException { - if (isConnectionAvailable(conn)) { - return conn; - } -- // If the connection isn't available, reconnect it. - occupiedPool.remove(conn); - connNum--; - conn.close(); -- conn = createNewConnection(); - occupiedPool.add(conn); - connNum++; - return conn; - } -- /** - * Run a SQL query to verify whether the connection is available - * - * @param conn - * The connection for verification - * @return whether the connection is currently available. - */ - private boolean isConnectionAvailable(Connection conn) { - try (Statement st = conn.createStatement()) { - st.executeQuery(SQL_VERIFYCONN); - return true; - } catch (SQLException e) { - return false; - } - } - - // Example usage - public static void main(String[] args) throws SQLException { - Connection conn = null; - MySQLConnectionPool pool = new MySQLConnectionPool( - "jdbc:mysql://mysqlaasdevintic-sha.cloudapp.net:3306/<Your DB name>", - "<Your user>", "<Your Password>", 2); - try { - conn = pool.getConnection(); - try (Statement statement = conn.createStatement()) - { - ResultSet res = statement.executeQuery("show tables"); - System.out.println("The following tables exist:"); - while (res.next()) { - String tblName = res.getString(1); - System.out.println(tblName); - } - } - } - finally { - if (conn != null) { - pool.returnConnection(conn); - } - } - } --} --``` |
mysql | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/security-controls-policy.md | - Title: Azure Policy Regulatory Compliance controls for Azure Database for MySQL -description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MySQL. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. ------ Previously updated : 02/06/2024---# Azure Policy Regulatory Compliance controls for Azure Database for MySQL ----[Regulatory Compliance in Azure Policy](../../governance/policy/concepts/regulatory-compliance.md) provides Microsoft created and managed initiative definitions, known as _built-ins_, for the **compliance domains** and **security controls** related to different compliance standards. This -page lists the **compliance domains** and **security controls** for Azure Database for MySQL. You can assign the built-ins for a **security control** individually to help make your Azure resources compliant with the specific standard. ----## Next steps --- Learn more about [Azure Policy Regulatory Compliance](../../governance/policy/concepts/regulatory-compliance.md).-- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy). |
mysql | Single Server Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/single-server-overview.md | - Title: Overview - Azure Database for MySQL single server -description: Learn about the Azure Database for MySQL single server, a relational database service in the Microsoft cloud based on the MySQL Community Edition. ------ Previously updated : 06/20/2022---# Azure Database for MySQL single server ----[Azure Database for MySQL](overview.md) powered by the MySQL community edition is available in two deployment modes: --- Flexible Server -- Single Server--In this article, we'll provide an overview and introduction to core concepts of the Single Server deployment model. To learn about the flexible server deployment mode, refer to the [flexible server overview](../flexible-server/index.yml). For information on how to decide what deployment option is appropriate for your workload, see [choosing the right MySQL server option in Azure](select-right-deployment-type.md). --## Overview -Azure Database for MySQL single server is a fully managed database service designed for minimal customization. The single server platform is designed to handle most of the database management functions, such as patching, backups, high availability, and security, with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability on a single availability zone. It supports the community versions of MySQL 5.6 (retired), 5.7, and 8.0. The service is generally available today in a wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/). --Single servers are best suited **only for existing applications already leveraging single server**. For all new developments or migrations, Flexible Server would be the recommended deployment option. To learn about the differences between the Flexible Server and Single Server deployment options, refer to the [select the right deployment option for you](select-right-deployment-type.md) documentation. --## High availability --The Single Server deployment model is optimized for built-in high availability and elasticity at reduced cost. The architecture separates compute and storage. The database engine runs on a proprietary compute container, while data files reside on Azure storage. The storage maintains three locally redundant synchronous copies of the database files, ensuring data durability. --During planned or unplanned failover events, if the server goes down, the service maintains high availability of the servers using the following automated procedure: --1. A new compute container is provisioned. -2. The storage with data files is mapped to the new container. -3. The MySQL database engine is brought online on the new compute container. -4. The gateway service ensures transparent failover, so no application-side changes are required. - -The typical failover time ranges from 60 to 120 seconds. The cloud native design of Single Server allows it to support 99.99% availability, eliminating the cost of a passive hot standby. --Azure's industry-leading 99.99% availability service level agreement (SLA), powered by a global network of Microsoft-managed datacenters, helps keep your applications running 24/7. ---## Automated Patching --The service performs automated patching of the underlying hardware, OS, and database engine. The patching includes security and software updates. For the MySQL engine, minor version upgrades are automatic and included as part of the patching cycle. 
No user action or configuration setting is required for patching. The patching frequency is service managed based on the criticality of the payload. In general, the service follows a monthly release schedule as part of the continuous integration and release. Users can subscribe to the [planned maintenance notification](concepts-monitoring.md) to receive notification of the upcoming maintenance 72 hours before the event. --## Automatic Backups --Single Server automatically creates server backups and stores them in user-configured locally redundant or geo-redundant storage. Backups can be used to restore your server to any point in time within the backup retention period. The default backup retention period is seven days. The retention can be optionally configured up to 35 days. All backups are encrypted using AES 256-bit encryption. Refer to [Backups](concepts-backup.md) for details. --## Adjust performance and scale within seconds --Single Server is available in three SKU tiers: Basic, General Purpose, and Memory Optimized. The Basic tier is best suited for low-cost development and low-concurrency workloads. The General Purpose and Memory Optimized tiers are better suited for production workloads requiring high concurrency, scale, and predictable performance. You can build your first app on a small database for a few dollars a month, and then adjust the scale to meet the needs of your solution. The storage scaling is online and supports storage autogrowth. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you consume. See [Pricing tiers](./concepts-pricing-tiers.md) for details. --## Enterprise grade Security, Compliance, and Governance --Single Server uses the FIPS 140-2 validated cryptographic module for storage encryption of data at rest. Data, including backups and temporary files created while running queries, is encrypted. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys can be system managed (default) or [customer managed](concepts-data-encryption-mysql.md). The service encrypts data in-motion with transport layer security (SSL/TLS) enforced by default. The service supports TLS versions 1.2, 1.1, and 1.0, with the ability to enforce a [minimum TLS version](concepts-ssl-connection-security.md). --The service allows private access to the servers using [private link](concepts-data-access-security-private-link.md) and offers threat protection through the optional [Microsoft Defender for open-source relational databases](../../defender-for-cloud/defender-for-databases-introduction.md) plan. Microsoft Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. --In addition to native authentication, Single Server supports [Microsoft Entra ID](../../active-directory/fundamentals/active-directory-whatis.md) authentication. Microsoft Entra authentication is a mechanism of connecting to the MySQL servers using identities defined and managed in Microsoft Entra ID. With Microsoft Entra authentication, you can manage database user identities and other Azure services in a central location, which simplifies and centralizes access control. --[Audit logging](concepts-audit-logs.md) is available to track all database-level activity. --Single Server is compliant with industry-leading certifications like FedRAMP, HIPAA, and PCI DSS. 
Visit the [Azure Trust Center](https://www.microsoft.com/trustcenter/security) for information about Azure's platform security. --For more information about Azure Database for MySQL security features, see the [security overview](concepts-security.md). --## Monitoring and alerting --Single Server is equipped with built-in performance monitoring and alerting features. All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics. The service allows configuring slow query logs and comes with a differentiated [Query store](concepts-query-store.md) feature. Query Store simplifies performance troubleshooting by helping you quickly find the longest-running and most resource-intensive queries. Using these tools, you can quickly optimize your workloads and configure your server for best performance. See [Monitoring](concepts-monitoring.md) for details. --## Migration --The service runs the community version of MySQL, which allows full application compatibility and requires minimal refactoring to migrate existing applications developed on the MySQL engine to Single Server. The migration to the single server can be performed using one of the following options: --- **Dump and Restore** – For offline migrations, where users can afford some downtime, dump and restore using community tools like mysqldump/mydumper provides the fastest way to migrate. See [Migrate using dump and restore](concepts-migrate-dump-restore.md) for details. -- **Azure Database Migration Service** – For seamless and simplified offline migrations to single server with high-speed data migration, [Azure Database Migration Service](../../dms/tutorial-mysql-azure-mysql-offline-portal.md) can be used. -- **Data-in replication** – For minimal-downtime migrations, data-in replication, which relies on binlog-based replication, can also be used. Data-in replication is preferred for minimal-downtime migrations by hands-on experts looking for more control over the migration. See [data-in replication](concepts-data-in-replication.md) for details.--## Feedback and support --For any questions or suggestions you might have about working with Azure Database for MySQL single server, consider the following points of contact: --- To contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).-- To fix an issue with your account, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.-- To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/d365community/forum/47b1e71d-ee24-ec11-b6e6-000d3a4f0da0).--## Next steps --Now that you've read an introduction to the Azure Database for MySQL - Single Server deployment mode, you're ready to: --- Create your first server.- - [Create an Azure Database for MySQL server using Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md) - - [Create an Azure Database for MySQL server using Azure CLI](quickstart-create-mysql-server-database-using-azure-cli.md) - - [Azure CLI samples for Azure Database for MySQL](sample-scripts-azure-cli.md) --- Build your first app using your preferred language:- - [Python](./connect-python.md) - - [Node.JS](./connect-nodejs.md) - - [Java](./connect-java.md) - - [Ruby](./connect-ruby.md) - - [PHP](./connect-php.md) - - [.NET (C#)](./connect-csharp.md) - - [Go](./connect-go.md) |
mysql | Single Server Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/single-server-whats-new.md | - Title: What's new in Azure Database for MySQL single server -description: Learn about recent updates to Azure Database for MySQL - Single Server, a relational database service in the Microsoft cloud based on the MySQL Community Edition. ------ Previously updated : 06/20/2022---# What's new in Azure Database for MySQL - Single Server? ----Azure Database for MySQL is a relational database service in the Microsoft cloud. The service is based on the [MySQL Community Edition](https://www.mysql.com/products/community/) (available under the GPLv2 license) database engine and supports versions 5.6 (retired), 5.7, and 8.0. [Azure Database for MySQL - Single Server](./overview.md#azure-database-for-mysqlsingle-server) is a deployment mode that provides a fully managed database service with minimal requirements for database customization. The Single Server platform is designed to handle most database management functions such as patching, backups, high availability, and security, all with minimal user configuration and control. --This article summarizes new releases and features in Azure Database for MySQL - Single Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first. --## September 2022 --Client devices using SSL to connect to Azure Database for MySQL – Single Server instances must have their CA certificates updated. To address compliance requirements, starting in October 2022 the CA certificates were changed from BaltimoreCyberTrustRoot to DigiCertGlobalRootG2. -To avoid interruption of your application's availability as a result of certificates being unexpectedly revoked, or to update a certificate that has been revoked, follow the steps explained in [this article](./concepts-certificate-rotation.md#create-a-combined-ca-certificate) to maintain connectivity. -Use the steps mentioned to [create a combined certificate](./concepts-certificate-rotation.md#create-a-combined-ca-certificate) and connect to your server, but don't remove the BaltimoreCyberTrustRoot certificate until we send a communication to remove it. --## May 2022 --Enabled the ability to change the server parameter innodb_ft_server_stopword_table from the Azure portal/CLI. -Users can now change the value of the innodb_ft_server_stopword_table parameter using the Azure portal and the CLI (see the example at the end of this section). This parameter helps to configure your own InnoDB FULLTEXT index stopword list for all InnoDB tables. For more information, see [innodb_ft_server_stopword_table](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_ft_server_stopword_table). --**Known Issues** --Customers using the PHP driver with [enableRedirect](./how-to-redirection.md) can no longer connect to Azure Database for MySQL single server, as the CA certificates of the host servers were changed from BaltimoreCyberTrustRoot to DigiCertGlobalRootG2 to address compliance requirements. For successful connections to your database using the PHP driver with enableRedirect, see [this article](./concepts-certificate-rotation.md#do-i-need-to-make-any-changes-on-my-client-to-maintain-connectivity).
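For example, the parameter can be set from the Azure CLI with the `az mysql server configuration set` command. This is a minimal sketch: the server name, resource group, and the `mysampledb/my_stopwords` stopword table are placeholders, and the value is assumed to follow MySQL's `db_name/table_name` format for this parameter.

```azurecli-interactive
# Point the InnoDB FULLTEXT stopword list at your own table (placeholder names)
az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver --name innodb_ft_server_stopword_table --value "mysampledb/my_stopwords"
```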
## March 2022 --This release of Azure Database for MySQL - Single Server includes the following updates. --**Bug Fixes** --The MySQL 8.0.27 client and newer versions are now compatible with Azure Database for MySQL - Single Server. --## February 2022 --This release of Azure Database for MySQL - Single Server includes the following updates. --**Known Issues** --Customers in Japan and East US received two maintenance notification emails for this month. The email notification sent for *05-Feb 2022* was sent by mistake, and no changes will be made to the service on this date. You can safely ignore it. We apologize for the inconvenience. --## December 2021 --This release of Azure Database for MySQL - Single Server includes the following updates: --- **Query Text removed in Query Performance Insights to avoid unauthorized access** --Starting December 2021, you're no longer able to see the query text of queries in the Query Performance Insight blade in the Azure portal. The query text is removed to avoid unauthorized access to the query text or the underlying schema, which can pose a security risk. The recommended steps to view the query text are shared below: --- Identify the query_id of the top queries from the Query Performance Insight blade in the Azure portal-- Sign in to your Azure Database for MySQL server from MySQL Workbench, the mysql.exe client, or your preferred query tool and execute the following queries-- ```sql - SELECT * FROM mysql.query_store WHERE query_id = '<query_id from the Query Performance Insight blade in the Azure portal>'; -- for queries in Query Store - SELECT * FROM mysql.query_store_wait_stats WHERE query_id = '<query_id from the Query Performance Insight blade in the Azure portal>'; -- for wait statistics - ``` --- You can browse the query_digest_text column to identify the query text for the corresponding query_id--The above steps ensure that only authenticated and authorized users have secure access to the query text. --## October 2021 --- **Known Issues**--The MySQL 8.0.27 client is incompatible with Azure Database for MySQL - Single Server. All connections from the MySQL 8.0.27 client created either via mysql.exe or MySQL Workbench will fail. As a workaround, consider using an earlier version of the client (prior to MySQL 8.0.27) or creating an instance of [Azure Database for MySQL - Flexible Server](../flexible-server/overview.md) instead. --## June 2021 - -This release of Azure Database for MySQL - Single Server includes the following updates. --- **Enabled the ability to change the server parameter `activate_all_roles_on_login` from Portal/CLI for MySQL 8.0**-- Users can now change the value of the activate_all_roles_on_login parameter using the Azure portal and CLI. This parameter helps to configure whether to enable automatic activation of all granted roles when users sign in to the server. For more information, see [Server System Variables](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html). --- **Addressed MySQL Community Bugs #29596969 and #94668**-- This release addresses an issue with the default expression being ignored in a CREATE TABLE query if the field was marked as PRIMARY KEY for MySQL 8.0. (MySQL Community Bug #29596969, Bug #94668). For more information, see [MySQL Bugs: #94668: Expression Default is made NULL during CREATE TABLE query, if field is made PK](https://bugs.mysql.com/bug.php?id=94668) --- **Addressed an issue with duplicate table names in "SHOW TABLE" query**-- We've introduced a new function to give fine-grained control of the table cache during table operations. Because of a code defect in the new feature, an entry in the directory cache might be misconfigured or added incorrectly, causing unexpected behavior such as returning two tables with the same name.
The directory cache only works for the "SHOW TABLE" related query; it won't impact any DML or DDL queries. This issue is completely resolved in this release. --- **Increased the default value for the server parameter `max_heap_table_size` to help reduce temp table spills to disk**-- With this release, the maximum allowed value for the parameter `max_heap_table_size` has been changed to 8589934592 for General Purpose 64 vCore and Memory Optimized 32 vCore. --- **Addressed an issue with setting the value of the parameter `sql_require_primary_key` from the portal**-- Users can now modify the value of the parameter `sql_require_primary_key` directly from the Azure portal. --- **General Availability of planned maintenance notification**-- This release provides General Availability of planned maintenance notifications in Azure Database for MySQL - Single Server. For more information, see the article [Planned maintenance notification](concepts-planned-maintenance-notification.md). --- **Enabled the parameter `redirect_enabled` by default**-- With this release, the parameter `redirect_enabled` is enabled by default. Redirection aims to reduce network latency between client applications and MySQL servers by allowing applications to connect directly to backend server nodes. Support for redirection in PHP applications is available through the [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension, developed by Microsoft. For more information, see the article [Connect to Azure Database for MySQL with redirection](how-to-redirection.md). -->[!NOTE] -> * Redirection does not work with a Private Link setup. If you are using Private Link for Azure Database for MySQL, you might encounter connection issues. To resolve the issue, make sure the parameter redirect_enabled is set to "OFF" and the client application is restarted.</br> -> * If you have a PHP application that uses the mysqlnd_azure redirection driver to connect to Azure Database for MySQL (with redirection enabled by default), you might face a data encoding issue that impacts your insert transactions.</br> -> To resolve this issue, either: -> - In the Azure portal, disable the redirection by setting the redirect_enabled parameter to "OFF", and restart the PHP application to clear the driver cache after the change. -> - Explicitly set the charset-related parameters at the session level, based on your settings after the connection is established (for example, "set names utf8mb4"). --## February 2021 --This release of Azure Database for MySQL - Single Server includes the following updates. --- Added new stored procedures to support the global transaction identifier (GTID) for data-in replication for the version 5.7 and 8.0 Large Storage servers.-- Updated supported MySQL versions to 5.6.50 and 5.7.32.--## January 2021 --This release of Azure Database for MySQL - Single Server includes the following updates.
--- Enabled "reset password" to automatically fix the first admin permission.-- Exposed the `auto_increment_increment` and `auto_increment_offset` server parameters and `session_track_gtids`.-- Added new stored procedures for controlling the InnoDB buffer pool dump/restore.-- Exposed the InnoDB warm-up related server parameters for large storage servers.--## Feedback and support --For any questions or suggestions you might have about working with Azure Database for MySQL single server, consider the following points of contact: --- To contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).-- To fix an issue with your account, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.-- To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/d365community/forum/47b1e71d-ee24-ec11-b6e6-000d3a4f0da0).--## Next steps --- Learn more about [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/server/).-- Browse the [public documentation](./index.yml) for Azure Database for MySQL – Single Server.-- Review details on [troubleshooting common errors](./how-to-troubleshoot-common-errors.md). |
mysql | Tutorial Design Database Using Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-cli.md | - Title: 'Tutorial: Design a server - Azure CLI - Azure Database for MySQL' -description: This tutorial explains how to create and manage an Azure Database for MySQL server and database using the Azure CLI from the command line. ------ Previously updated : 06/20/2022---# Tutorial: Design an Azure Database for MySQL using Azure CLI ----Azure Database for MySQL is a relational database service in the Microsoft cloud based on the MySQL Community Edition database engine. In this tutorial, you use the Azure CLI (command-line interface) and other utilities to learn how to: --> [!div class="checklist"] -> * Create an Azure Database for MySQL -> * Configure the server firewall -> * Use the [mysql command-line tool](https://dev.mysql.com/doc/refman/5.6/en/mysql.html) to create a database -> * Load sample data -> * Query data -> * Update data -> * Restore data ----- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--If you have multiple subscriptions, choose the appropriate subscription in which the resource exists or is billed. Select a specific subscription ID under your account using the [az account set](/cli/azure/account#az-account-set) command. -```azurecli-interactive -az account set --subscription 00000000-0000-0000-0000-000000000000 -``` --## Create a resource group -Create an [Azure resource group](../../azure-resource-manager/management/overview.md) with the [az group create](/cli/azure/group#az-group-create) command. A resource group is a logical container into which Azure resources are deployed and managed as a group. --The following example creates a resource group named `myresourcegroup` in the `westus` location. --```azurecli-interactive -az group create --name myresourcegroup --location westus -``` --## Create an Azure Database for MySQL server -Create an Azure Database for MySQL server with the az mysql server create command. A server can manage multiple databases. Typically, a separate database is used for each project or for each user. --The following example creates an Azure Database for MySQL server located in `westus` in the resource group `myresourcegroup` with the name `mydemoserver`. The server has an administrator user named `myadmin`. It is a General Purpose, Gen 5 server with 2 vCores. Replace `<server_admin_password>` with your own value. --```azurecli-interactive -az mysql server create --resource-group myresourcegroup --name mydemoserver --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2 --version 5.7 -``` -The sku-name parameter value follows the convention {pricing tier}\_{compute generation}\_{vCores} as in the examples below: -+ `--sku-name B_Gen5_2` maps to Basic, Gen 5, and 2 vCores. -+ `--sku-name GP_Gen5_32` maps to General Purpose, Gen 5, and 32 vCores. -+ `--sku-name MO_Gen5_2` maps to Memory Optimized, Gen 5, and 2 vCores. --See the [pricing tiers](./concepts-pricing-tiers.md) documentation to understand the valid values per region and per tier. --> [!IMPORTANT] -> The server admin login and password that you specify here are required to log in to the server and its databases later in this tutorial. Remember or record this information for later use.
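Server creation can take a few minutes. If you want to confirm that the server finished provisioning before you continue, one option is to query its state; this is a minimal sketch that assumes the server and resource group names used above:

```azurecli-interactive
# Returns "Ready" once the server has finished provisioning (assumed names from this tutorial)
az mysql server show --resource-group myresourcegroup --name mydemoserver --query userVisibleState --output tsv
```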
---## Configure firewall rule -Create an Azure Database for MySQL server-level firewall rule with the az mysql server firewall-rule create command. A server-level firewall rule allows an external application, such as the **mysql** command-line tool or MySQL Workbench, to connect to your server through the Azure MySQL service firewall. --The following example creates a firewall rule called `AllowMyIP` that allows connections from a specific IP address, 192.168.0.1. Substitute the IP address or range of IP addresses that corresponds to where you'll be connecting from. --```azurecli-interactive -az mysql server firewall-rule create --resource-group myresourcegroup --server mydemoserver --name AllowMyIP --start-ip-address 192.168.0.1 --end-ip-address 192.168.0.1 -``` --## Get the connection information --To connect to your server, you need to provide host information and access credentials. -```azurecli-interactive -az mysql server show --resource-group myresourcegroup --name mydemoserver -``` --The result is in JSON format. Make a note of the **fullyQualifiedDomainName** and **administratorLogin**. -```json -{ - "administratorLogin": "myadmin", - "administratorLoginPassword": null, - "fullyQualifiedDomainName": "mydemoserver.mysql.database.azure.com", - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.DBforMySQL/servers/mydemoserver", - "location": "westus", - "name": "mydemoserver", - "resourceGroup": "myresourcegroup", - "sku": { - "capacity": 2, - "family": "Gen5", - "name": "GP_Gen5_2", - "size": null, - "tier": "GeneralPurpose" - }, - "sslEnforcement": "Enabled", - "storageProfile": { - "backupRetentionDays": 7, - "geoRedundantBackup": "Disabled", - "storageMb": 5120 - }, - "tags": null, - "type": "Microsoft.DBforMySQL/servers", - "userVisibleState": "Ready", - "version": "5.7" -} -``` --## Connect to the server using mysql -Use the [mysql command-line tool](https://dev.mysql.com/doc/refman/5.6/en/mysql.html) to establish a connection to your Azure Database for MySQL server. In this example, the command is: -```cmd -mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p -``` --## Create a blank database -Once you're connected to the server, create a blank database. -```sql -mysql> CREATE DATABASE mysampledb; -``` --At the prompt, run the following command to switch the connection to this newly created database: -```sql -mysql> USE mysampledb; -``` --## Create tables in the database -Now that you know how to connect to the Azure Database for MySQL database, complete some basic tasks. --First, create a table and load it with some data. Let's create a table that stores inventory information. -```sql -CREATE TABLE inventory ( - id serial PRIMARY KEY, - name VARCHAR(50), - quantity INTEGER -); -``` --## Load data into the tables -Now that you have a table, insert some data into it. At the open command prompt window, run the following query to insert some rows of data. -```sql -INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150); -INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154); -``` --Now you have two rows of sample data in the table you created earlier. --## Query and update the data in the tables -Execute the following query to retrieve information from the database table. -```sql -SELECT * FROM inventory; -``` --You can also update the data in the tables.
-```sql -UPDATE inventory SET quantity = 200 WHERE name = 'banana'; -``` --The row gets updated accordingly when you retrieve data. -```sql -SELECT * FROM inventory; -``` --## Restore a database to a previous point in time -Imagine you have accidentally deleted this table. This is something you cannot easily recover from. Azure Database for MySQL allows you to go back to any point in time within the backup retention period (up to 35 days) and restore that point in time to a new server. You can use this new server to recover your deleted data. The following steps restore the sample server to a point before the table was added. --For the restore, you need the following information: --- Restore point: Select a point-in-time that occurs before the server was changed. Must be greater than or equal to the source database's Oldest backup value.-- Target server: Provide a new server name you want to restore to-- Source server: Provide the name of the server you want to restore from-- Location: You cannot select the region; by default, it is the same as the source server--```azurecli-interactive -az mysql server restore --resource-group myresourcegroup --name mydemoserver-restored --restore-point-in-time "2017-05-04T03:10:00Z" --source-server mydemoserver -``` --The `az mysql server restore` command needs the following parameters: --| Setting | Suggested value | Description | -| | | | -| resource-group | myresourcegroup | The resource group in which the source server exists. | -| name | mydemoserver-restored | The name of the new server that is created by the restore command. | -| restore-point-in-time | 2017-04-13T13:59:00Z | Select a point-in-time to restore to. This date and time must be within the source server's backup retention period. Use the ISO8601 date and time format. For example, you may use your own local timezone, such as `2017-04-13T05:59:00-08:00`, or use UTC Zulu format `2017-04-13T13:59:00Z`. | -| source-server | mydemoserver | The name or ID of the source server to restore from. | --Restoring a server to a point-in-time creates a new server, which is a copy of the original server as of the point in time you specify. The location and pricing tier values for the restored server are the same as the source server. --The command is synchronous, and returns after the server is restored. Once the restore finishes, locate the new server that was created. Verify that the data was restored as expected. --## Clean up resources -If you don't need these resources for another quickstart/tutorial, you can delete them by running the following command: --```azurecli-interactive -az group delete --name myresourcegroup -``` --If you just want to delete the newly created server, you can run the [az mysql server delete](/cli/azure/mysql/server#az-mysql-server-delete) command. --```azurecli-interactive -az mysql server delete --resource-group myresourcegroup --name mydemoserver -``` --## Next steps -In this tutorial you learned how to: -> [!div class="checklist"] -> * Create an Azure Database for MySQL server -> * Configure the server firewall -> * Use the mysql command-line tool to create a database -> * Load sample data -> * Query data -> * Update data -> * Restore data --> [!div class="nextstepaction"] -> [Azure Database for MySQL - Azure CLI samples](./sample-scripts-azure-cli.md) |
mysql | Tutorial Design Database Using Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-portal.md | - Title: 'Tutorial: Design a server - Azure portal - Azure Database for MySQL' -description: This tutorial explains how to create and manage an Azure Database for MySQL server and database using the Azure portal. ------ Previously updated : 06/20/2022---# Tutorial: Design an Azure Database for MySQL database using the Azure portal ----Azure Database for MySQL is a managed service that enables you to run, manage, and scale highly available MySQL databases in the cloud. Using the Azure portal, you can easily manage your server and design a database. --In this tutorial, you use the Azure portal to learn how to: --> [!div class="checklist"] -> * Create an Azure Database for MySQL -> * Configure the server firewall -> * Use the mysql command-line tool to create a database -> * Load sample data -> * Query data -> * Update data -> * Restore data --## Prerequisites --If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin. --## Sign in to the Azure portal --Open your favorite web browser, and sign in to the [Azure portal](https://portal.azure.com). Enter your credentials to sign in to the portal. The default view is your service dashboard. --## Create an Azure Database for MySQL server --An Azure Database for MySQL server is created with a defined set of [compute and storage resources](./concepts-pricing-tiers.md). The server is created within an [Azure resource group](../../azure-resource-manager/management/overview.md). --1. Select the **Create a resource** button (+) in the upper left corner of the portal. --2. Select **Databases** > **Azure Database for MySQL**. If you cannot find MySQL Server under the **Databases** category, click **See all** to show all available database services. You can also type **Azure Database for MySQL** in the search box to quickly find the service. - - :::image type="content" source="./media/tutorial-design-database-using-portal/1-navigate-to-mysql.png" alt-text="Navigate to MySQL"::: --3. Click the **Azure Database for MySQL** tile. Fill out the Azure Database for MySQL form. - - :::image type="content" source="./media/tutorial-design-database-using-portal/2-create-form.png" alt-text="Create form"::: -- **Setting** | **Suggested value** | **Field description** - ---|---|--- - Server name | Unique server name | Choose a unique name that identifies your Azure Database for MySQL server. For example, mydemoserver. The domain name *.mysql.database.azure.com* is appended to the server name you provide. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain from 3 to 63 characters. - Subscription | Your subscription | Select the Azure subscription that you want to use for your server. If you have multiple subscriptions, choose the subscription in which you get billed for the resource. - Resource group | *myresourcegroup* | Provide a new or existing resource group name. - Select source | *Blank* | Select *Blank* to create a new server from scratch. (You select *Backup* if you are creating a server from a geo-backup of an existing Azure Database for MySQL server). - Server admin login | myadmin | A sign-in account to use when you're connecting to the server. The admin sign-in name cannot be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**.
- Password | *Your choice* | Provide a new password for the server admin account. It must contain from 8 to 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on). - Confirm password | *Your choice*| Confirm the admin account password. - Location | *The region closest to your users*| Choose the location that is closest to your users or your other Azure applications. - Version | *The latest version*| The latest version (unless you have specific requirements that require another version). - Pricing tier | **General Purpose**, **Gen 5**, **2 vCores**, **5 GB**, **7 days**, **Geographically Redundant** | The compute, storage, and backup configurations for your new server. Select **Pricing tier**. Next, select the **General Purpose** tab. *Gen 5*, *2 vCores*, *5 GB*, and *7 days* are the default values for **Compute Generation**, **vCore**, **Storage**, and **Backup Retention Period**. You can leave those sliders as is. To enable your server backups in geo-redundant storage, select **Geographically Redundant** from the **Backup Redundancy Options**. To save this pricing tier selection, select **OK**. The next screenshot captures these selections. -- :::image type="content" source="./media/tutorial-design-database-using-portal/3-pricing-tier.png" alt-text="Pricing tier"::: -- > [!TIP] - > With **auto-growth** enabled, your server increases storage when you are approaching the allocated limit, without impacting your workload. --4. Click **Review + create**. You can click the **Notifications** button on the toolbar to monitor the deployment process. Deployment can take up to 20 minutes. --## Configure firewall --Azure Database for MySQL servers are protected by a firewall. By default, all connections to the server and the databases inside the server are rejected. Before connecting to Azure Database for MySQL for the first time, configure the firewall to add the client machine's public network IP address (or IP address range). --1. Click your newly created server, and then click **Connection security**. -- :::image type="content" source="./media/tutorial-design-database-using-portal/1-connection-security.png" alt-text="Connection security"::: -2. You can **Add My IP**, or configure firewall rules here. Remember to click **Save** after you have created the rules. -You can now connect to the server using the mysql command-line tool or the MySQL Workbench GUI tool. --> [!TIP] -> The Azure Database for MySQL server communicates over port 3306. If you are trying to connect from within a corporate network, outbound traffic over port 3306 may not be allowed by your network's firewall. If so, you cannot connect to your Azure MySQL server unless your IT department opens port 3306. --## Get connection information --Get the fully qualified **Server name** and **Server admin login name** for your Azure Database for MySQL server from the Azure portal. You use the fully qualified server name to connect to your server using the mysql command-line tool. --1. In the [Azure portal](https://portal.azure.com), click **All resources** from the left-hand menu, type the name, and search for your Azure Database for MySQL server. Select the server name to view the details. --2. From the **Overview** page, note down the **Server Name** and **Server admin login name**. You may click the copy button next to each field to copy it to the clipboard.
- :::image type="content" source="./media/tutorial-design-database-using-portal/2-server-properties.png" alt-text="4-2 server properties"::: --In this example, the server name is *mydemoserver.mysql.database.azure.com*, and the server admin login is *myadmin\@mydemoserver*. --## Connect to the server using mysql --Use the [mysql command-line tool](https://dev.mysql.com/doc/refman/5.7/en/mysql.html) to establish a connection to your Azure Database for MySQL server. You can run the mysql command-line tool from the Azure Cloud Shell in the browser or from your own machine using mysql tools installed locally. To launch the Azure Cloud Shell, click the `Try It` button on a code block in this article, or visit the Azure portal and click the `>_` icon in the top-right toolbar. --Type the command to connect: --```azurecli-interactive -mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p -``` --## Create a blank database --Once you're connected to the server, create a blank database to work with. --```sql -CREATE DATABASE mysampledb; -``` --At the prompt, run the following command to switch the connection to this newly created database: --```sql -USE mysampledb; -``` --## Create tables in the database --Now that you know how to connect to the Azure Database for MySQL database, you can complete some basic tasks. --First, create a table and load it with some data. Let's create a table that stores inventory information. --```sql -CREATE TABLE inventory ( - id serial PRIMARY KEY, - name VARCHAR(50), - quantity INTEGER -); -``` --## Load data into the tables --Now that you have a table, insert some data into it. At the open command prompt window, run the following query to insert some rows of data. --```sql -INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150); -INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154); -``` --Now you have two rows of sample data in the table you created earlier. --## Query and update the data in the tables --Execute the following query to retrieve information from the database table. --```sql -SELECT * FROM inventory; -``` --You can also update the data in the tables. --```sql -UPDATE inventory SET quantity = 200 WHERE name = 'banana'; -``` --The row gets updated accordingly when you retrieve data. --```sql -SELECT * FROM inventory; -``` --## Restore a database to a previous point in time --Imagine you have accidentally deleted an important database table, and cannot recover the data easily. Azure Database for MySQL allows you to restore the server to a point in time, creating a copy of the databases in a new server. You can use this new server to recover your deleted data. The following steps restore the sample server to a point before the table was added. --1. In the Azure portal, locate your Azure Database for MySQL. On the **Overview** page, click **Restore** on the toolbar. The Restore page opens. -- :::image type="content" source="./media/tutorial-design-database-using-portal/1-restore-a-db.png" alt-text="10-1 restore a database"::: --2. Fill out the **Restore** form with the required information. -- :::image type="content" source="./media/tutorial-design-database-using-portal/2-restore-form.png" alt-text="10-2 restore form"::: -- - **Restore point**: Select a point-in-time that you want to restore to, within the timeframe listed. Make sure to convert your local timezone to UTC. - - **Restore to new server**: Provide a new server name you want to restore to.
- - **Location**: The region is the same as the source server, and cannot be changed. - - **Pricing tier**: The pricing tier is the same as the source server, and cannot be changed. - -3. Click **OK** to [restore the server to a point in time](./how-to-restore-server-portal.md) before the table was deleted. Restoring a server creates a new copy of the server, as of the point in time you specify. --## Clean up resources --If you don't expect to need these resources in the future, you can delete them by deleting the resource group, or just delete the MySQL server. To delete the resource group, follow these steps: -1. In the Azure portal, search for and select **Resource groups**. -2. In the resource group list, choose the name of your resource group. -3. In the Overview page of your resource group, select **Delete resource group**. -4. In the confirmation dialog box, type the name of your resource group, and then select **Delete**. --## Next steps --In this tutorial, you used the Azure portal to learn how to: --> [!div class="checklist"] -> * Create an Azure Database for MySQL -> * Configure the server firewall -> * Use the mysql command-line tool to create a database -> * Load sample data -> * Query data -> * Update data -> * Restore data --> [!div class="nextstepaction"] -> [How to connect applications to Azure Database for MySQL](./how-to-connection-string.md) |
mysql | Tutorial Design Database Using Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-powershell.md | - Title: 'Tutorial: Design a server - Azure PowerShell - Azure Database for MySQL' -description: This tutorial explains how to create and manage an Azure Database for MySQL server and database using PowerShell. ------ Previously updated : 06/20/2022---# Tutorial: Design an Azure Database for MySQL using PowerShell ----Azure Database for MySQL is a relational database service in the Microsoft cloud based on the MySQL Community Edition database engine. In this tutorial, you use PowerShell and other utilities to learn how to: --> [!div class="checklist"] -> - Create an Azure Database for MySQL -> - Configure the server firewall -> - Use the [mysql command-line tool](https://dev.mysql.com/doc/refman/5.6/en/mysql.html) to create a database -> - Load sample data -> - Query data -> - Update data -> - Restore data --## Prerequisites --If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. --If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell). --> [!IMPORTANT] -> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az -> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`. -> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az -> PowerShell module releases and available natively from within Azure Cloud Shell. --If this is your first time using the Azure Database for MySQL service, you must register the -**Microsoft.DBforMySQL** resource provider. --```azurepowershell-interactive -Register-AzResourceProvider -ProviderNamespace Microsoft.DBforMySQL -``` ---If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources should be billed. Select a specific subscription ID using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet. --```azurepowershell-interactive -Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000 -``` --## Create a resource group --Create an [Azure resource group](../../azure-resource-manager/management/overview.md) using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet. A resource group is a logical container in which Azure resources are deployed and managed as a group. --The following example creates a resource group named **myresourcegroup** in the **West US** region. --```azurepowershell-interactive -New-AzResourceGroup -Name myresourcegroup -Location westus -``` --## Create an Azure Database for MySQL server --Create an Azure Database for MySQL server with the `New-AzMySqlServer` cmdlet. A server can manage multiple databases. Typically, a separate database is used for each project or for each user. --The following example creates a MySQL server in the **West US** region named **mydemoserver** in the -**myresourcegroup** resource group with a server admin login of **myadmin**. It is a Gen 5 server in the general-purpose pricing tier with 2 vCores and geo-redundant backups enabled.
Record the password used in the first line of the example, as this is the password for the MySQL server admin account. --> [!TIP] -> A server name maps to a DNS name and must be globally unique in Azure. --```azurepowershell-interactive -$Password = Read-Host -Prompt 'Please enter your password' -AsSecureString -New-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -Sku GP_Gen5_2 -GeoRedundantBackup Enabled -Location westus -AdministratorUsername myadmin -AdministratorLoginPassword $Password -``` --The **Sku** parameter value follows the convention **pricing-tier\_compute-generation\_vCores** as shown in the following examples. --- `-Sku B_Gen5_1` maps to Basic, Gen 5, and 1 vCore. This option is the smallest SKU available.-- `-Sku GP_Gen5_32` maps to General Purpose, Gen 5, and 32 vCores.-- `-Sku MO_Gen5_2` maps to Memory Optimized, Gen 5, and 2 vCores.--For information about valid **Sku** values by region and for tiers, see -[Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md). --Consider using the basic pricing tier if light compute and I/O are adequate for your workload. --> [!IMPORTANT] -> Servers created in the basic pricing tier cannot later be scaled to general-purpose or -> memory-optimized and cannot be geo-replicated. --## Configure a firewall rule --Create an Azure Database for MySQL server-level firewall rule using the `New-AzMySqlFirewallRule` cmdlet. A server-level firewall rule allows an external application, such as the `mysql` command-line tool or MySQL Workbench, to connect to your server through the Azure Database for MySQL service firewall. --The following example creates a firewall rule named **AllowMyIP** that allows connections from a specific IP address, 192.168.0.1. Substitute an IP address or range of IP addresses that correspond to the location that you are connecting from. --```azurepowershell-interactive -New-AzMySqlFirewallRule -Name AllowMyIP -ResourceGroupName myresourcegroup -ServerName mydemoserver -StartIPAddress 192.168.0.1 -EndIPAddress 192.168.0.1 -``` --> [!NOTE] -> Connections to Azure Database for MySQL communicate over port 3306. If you try to connect from -> within a corporate network, outbound traffic over port 3306 might not be allowed. In this -> scenario, you can only connect to the server if your IT department opens port 3306. --## Get the connection information --To connect to your server, you need to provide host information and access credentials. Use the following example to determine the connection information. Make a note of the values for **FullyQualifiedDomainName** and **AdministratorLogin**. --```azurepowershell-interactive -Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup | - Select-Object -Property FullyQualifiedDomainName, AdministratorLogin -``` --```Output -FullyQualifiedDomainName              AdministratorLogin ------------------------              ------------------ -mydemoserver.mysql.database.azure.com myadmin -``` --## Connect to the server using the mysql command-line tool --Connect to your server using the `mysql` command-line tool. To download and install the command-line tool, see [MySQL Community Downloads](https://dev.mysql.com/downloads/shell/). You can also access a pre-installed version of the `mysql` command-line tool in Azure Cloud Shell by selecting the **Try It** button on a code sample in this article. Other ways to access Azure Cloud Shell are to select the **>_** button on the upper-right toolbar in the Azure portal or by visiting [shell.azure.com](https://shell.azure.com/).
--```azurepowershell-interactive -mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p -``` --## Create a database --Once you're connected to the server, create a blank database. --```sql -mysql> CREATE DATABASE mysampledb; -``` --At the prompt, run the following command to switch the connection to this newly created database: --```sql -mysql> USE mysampledb; -``` --## Create tables in the database --Now that you know how to connect to the Azure Database for MySQL database, complete some basic tasks. --First, create a table and load it with some data. Let's create a table that stores inventory information. --```sql -CREATE TABLE inventory ( - id serial PRIMARY KEY, - name VARCHAR(50), - quantity INTEGER -); -``` --## Load data into the tables --Now that you have a table, insert some data into it. At the open command prompt window, run the following query to insert some rows of data. --```sql -INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150); -INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154); -``` --Now you have two rows of sample data in the table you created earlier. --## Query and update the data in the tables --Execute the following query to retrieve information from the database table. --```sql -SELECT * FROM inventory; -``` --You can also update the data in the tables. --```sql -UPDATE inventory SET quantity = 200 WHERE name = 'banana'; -``` --The row gets updated accordingly when you retrieve data. --```sql -SELECT * FROM inventory; -``` --## Restore a database to a previous point in time --You can restore the server to a previous point-in-time. The restored data is copied to a new server, and the existing server is left unchanged. For example, if a table is accidentally dropped, you can restore to the time just before the drop occurred. Then, you can retrieve the missing table and data from the restored copy of the server. --To restore the server, use the `Restore-AzMySqlServer` PowerShell cmdlet. --### Run the restore command --To restore the server, run the following example from PowerShell. --```azurepowershell-interactive -$restorePointInTime = (Get-Date).AddMinutes(-10) -Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup | - Restore-AzMySqlServer -Name mydemoserver-restored -ResourceGroupName myresourcegroup -RestorePointInTime $restorePointInTime -UsePointInTimeRestore -``` --When you restore a server to an earlier point-in-time, a new server is created. The original server and its databases from the specified point-in-time are copied to the new server. --The location and pricing tier values for the restored server remain the same as the original server. --After the restore process finishes, locate the new server and verify that the data is restored as expected. The new server has the same server admin login name and password that was valid for the existing server at the time the restore was started. The password can be changed from the new server's **Overview** page. --The new server created during a restore does not have the VNet service endpoints that existed on the original server. These rules must be set up separately for the new server. Firewall rules from the original server are restored. --## Clean up resources --If the resources created in this tutorial aren't needed for another quickstart or tutorial, you can delete them by running the following example. --> [!CAUTION] -> The following example deletes the specified resource group and all resources contained within it.
-> If resources outside the scope of this tutorial exist in the specified resource group, they will -> also be deleted. --```azurepowershell-interactive -Remove-AzResourceGroup -Name myresourcegroup -``` --To delete only the server created in this tutorial without deleting the resource group, use the -`Remove-AzMySqlServer` cmdlet. --```azurepowershell-interactive -Remove-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -``` --## Next steps --> [!div class="nextstepaction"] -> [How to back up and restore an Azure Database for MySQL server using PowerShell](how-to-restore-server-powershell.md) |
mysql | Tutorial Provision Mysql Server Using Azure Resource Manager Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-provision-mysql-server-using-azure-resource-manager-templates.md | - Title: 'Tutorial: Create Azure Database for MySQL - Azure Resource Manager template' -description: This tutorial explains how to provision and automate Azure Database for MySQL server deployments using an Azure Resource Manager template. ----- Previously updated : 06/20/2022----# Tutorial: Provision an Azure Database for MySQL server using an Azure Resource Manager template ----The [Azure Database for MySQL REST API](/rest/api/mysql/) enables DevOps engineers to automate and integrate provisioning, configuration, and operations of managed MySQL servers and databases in Azure. The API allows the creation, enumeration, management, and deletion of MySQL servers and databases on the Azure Database for MySQL service. --Azure Resource Manager leverages the underlying REST API to declare and program the Azure resources required for deployments at scale, aligning with the infrastructure-as-code concept. The template parameterizes the Azure resource name, SKU, network, firewall configuration, and settings, allowing it to be created one time and used multiple times. Azure Resource Manager templates can be easily created using the [Azure portal](../../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md) or [Visual Studio Code](../../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md?tabs=CLI). They enable application packaging, standardization, and deployment automation, which can be integrated in the DevOps CI/CD pipeline. For instance, if you are looking to quickly deploy a web app with an Azure Database for MySQL back end, you can perform the end-to-end deployment using this [QuickStart template](https://azure.microsoft.com/resources/templates/webapp-managed-mysql/) from the GitHub gallery. --In this tutorial, you use an Azure Resource Manager template and other utilities to learn how to: --> [!div class="checklist"] -> * Create an Azure Database for MySQL server with VNet Service Endpoint using an Azure Resource Manager template -> * Use the [mysql command-line tool](https://dev.mysql.com/doc/refman/5.6/en/mysql.html) to create a database -> * Load sample data -> * Query data -> * Update data --## Prerequisites --If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin. --## Create an Azure Database for MySQL server with VNet Service Endpoint using Azure Resource Manager template --To get the JSON template reference for an Azure Database for MySQL server, go to the [Microsoft.DBforMySQL servers](/azure/templates/microsoft.dbformysql/servers) template reference. Below is a sample JSON template that can be used to create a new server running Azure Database for MySQL with a VNet Service Endpoint.
-```json -{ - "apiVersion": "2017-12-01", - "type": "Microsoft.DBforMySQL/servers", - "name": "string", - "location": "string", - "tags": "string", - "properties": { - "version": "string", - "sslEnforcement": "string", - "administratorLogin": "string", - "administratorLoginPassword": "string", - "storageProfile": { - "storageMB": "string", - "backupRetentionDays": "string", - "geoRedundantBackup": "string" - } - }, - "sku": { - "name": "string", - "tier": "string", - "capacity": "string", - "family": "string" - }, - "resources": [ - { - "name": "AllowSubnet", - "type": "virtualNetworkRules", - "apiVersion": "2017-12-01", - "properties": { - "virtualNetworkSubnetId": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('virtualNetworkName'), parameters('subnetName'))]", - "ignoreMissingVnetServiceEndpoint": true - }, - "dependsOn": [ - "[concat('Microsoft.DBforMySQL/servers/', parameters('serverName'))]" - ] - } - ] -} -``` -In this request, the values that need to be customized are: -+ `name` - Specify the name of your MySQL server (without the domain name). -+ `location` - Specify a valid Azure data center region for your MySQL server. For example, westus2. -+ `properties/version` - Specify the MySQL server version to deploy. For example, 5.6 or 5.7. -+ `properties/administratorLogin` - Specify the MySQL admin login for the server. The admin sign-in name cannot be azure_superuser, admin, administrator, root, guest, or public. -+ `properties/administratorLoginPassword` - Specify the password for the MySQL admin user specified above. -+ `properties/sslEnforcement` - Specify Enabled/Disabled to enable/disable sslEnforcement. -+ `storageProfile/storageMB` - Specify the max provisioned storage size required for the server in megabytes. For example, 5120. -+ `storageProfile/backupRetentionDays` - Specify the desired backup retention period in days. For example, 7. -+ `storageProfile/geoRedundantBackup` - Specify Enabled/Disabled depending on Geo-DR requirements. -+ `sku/tier` - Specify the Basic, GeneralPurpose, or MemoryOptimized tier for deployment. -+ `sku/capacity` - Specify the vCore capacity. Possible values include 2, 4, 8, 16, 32 or 64. -+ `sku/family` - Specify Gen5 to choose the hardware generation for server deployment. -+ `sku/name` - Specify TierPrefix_family_capacity. For example, B_Gen5_1, GP_Gen5_16, or MO_Gen5_32. See the [pricing tiers](./concepts-pricing-tiers.md) documentation to understand the valid values per region and per tier. -+ `resources/properties/virtualNetworkSubnetId` - Specify the Azure identifier of the subnet in the VNet where the Azure MySQL server should be placed. -+ `tags` (optional) - Specify key-value pairs that you can use to categorize the resources for billing and similar purposes. --If you are looking to build an Azure Resource Manager template to automate Azure Database for MySQL deployments for your organization, the recommendation is to start from the sample [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.dbformysql/managed-mysql-with-vnet/azuredeploy.json) in the Azure Quickstart GitHub gallery and build on top of it. --If you are new to Azure Resource Manager templates and would like to try them, you can start by following these steps: -+ Clone or download the sample [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.dbformysql/managed-mysql-with-vnet/azuredeploy.json) from the Azure Quickstart gallery.
-+ Modify the azuredeploy.parameters.json to update the parameter values based on your preference and save the file. -+ Use the Azure CLI to create the Azure Database for MySQL server using the following commands. --You can use the Azure Cloud Shell in the browser, or install the Azure CLI on your own computer, to run the code blocks in this tutorial. ---```azurecli-interactive -az login -az group create -n ExampleResourceGroup -l westus2 -az deployment group create -g ExampleResourceGroup --template-file azuredeploy.json --parameters azuredeploy.parameters.json -``` --## Get the connection information -To connect to your server, you need to provide host information and access credentials. -```azurecli-interactive -az mysql server show --resource-group myresourcegroup --name mydemoserver -``` --The result is in JSON format. Make a note of the **fullyQualifiedDomainName** and **administratorLogin**. -```json -{ - "administratorLogin": "myadmin", - "administratorLoginPassword": null, - "fullyQualifiedDomainName": "mydemoserver.mysql.database.azure.com", - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.DBforMySQL/servers/mydemoserver", - "location": "westus2", - "name": "mydemoserver", - "resourceGroup": "myresourcegroup", - "sku": { - "capacity": 2, - "family": "Gen5", - "name": "GP_Gen5_2", - "size": null, - "tier": "GeneralPurpose" - }, - "sslEnforcement": "Enabled", - "storageProfile": { - "backupRetentionDays": 7, - "geoRedundantBackup": "Disabled", - "storageMb": 5120 - }, - "tags": null, - "type": "Microsoft.DBforMySQL/servers", - "userVisibleState": "Ready", - "version": "5.7" -} -``` --## Connect to the server using mysql -Use the [mysql command-line tool](https://dev.mysql.com/doc/refman/5.6/en/mysql.html) to establish a connection to your Azure Database for MySQL server. In this example, the command is: -```cmd -mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p -``` --## Create a blank database -Once you're connected to the server, create a blank database. -```sql -mysql> CREATE DATABASE mysampledb; -``` --At the prompt, run the following command to switch the connection to this newly created database: -```sql -mysql> USE mysampledb; -``` --## Create tables in the database -Now that you know how to connect to the Azure Database for MySQL database, complete some basic tasks. --First, create a table and load it with some data. Let's create a table that stores inventory information. -```sql -CREATE TABLE inventory ( - id serial PRIMARY KEY, - name VARCHAR(50), - quantity INTEGER -); -``` --## Load data into the tables -Now that you have a table, insert some data into it. At the open command prompt window, run the following query to insert some rows of data. -```sql -INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150); -INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154); -``` --Now you have two rows of sample data in the table you created earlier. --## Query and update the data in the tables -Execute the following query to retrieve information from the database table. -```sql -SELECT * FROM inventory; -``` --You can also update the data in the tables. -```sql -UPDATE inventory SET quantity = 200 WHERE name = 'banana'; -``` --The row gets updated accordingly when you retrieve data. -```sql -SELECT * FROM inventory; -``` --## Clean up resources --When it's no longer needed, delete the resource group, which deletes the resources in the resource group. --# [Portal](#tab/azure-portal) --1.
In the [Azure portal](https://portal.azure.com), search for and select **Resource groups**. --2. In the resource group list, choose the name of your resource group. --3. In the **Overview** page of your resource group, select **Delete resource group**. --4. In the confirmation dialog box, type the name of your resource group, and then select **Delete**. --# [PowerShell](#tab/PowerShell) --```azurepowershell-interactive -$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name" -Remove-AzResourceGroup -Name $resourceGroupName -Write-Host "Press [ENTER] to continue..." -``` --# [CLI](#tab/CLI) --```azurecli-interactive -echo "Enter the Resource Group name:" && -read resourceGroupName && -az group delete --name $resourceGroupName && -echo "Press [ENTER] to continue ..." -``` ----## Next steps -In this tutorial you learned to: -> [!div class="checklist"] -> * Create an Azure Database for MySQL server with VNet Service Endpoint using Azure Resource Manager template -> * Use the mysql command-line tool to create a database -> * Load sample data -> * Query data -> * Update data --> [!div class="nextstepaction"] -> [How to connect applications to Azure Database for MySQL](./how-to-connection-string.md) |
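For quick experimentation, the same template can also be deployed with inline parameters instead of a parameters file. This is a minimal sketch, not part of the original tutorial: the server name, login, and password values are illustrative placeholders, and any parameters you don't pass fall back to whatever defaults the template defines.

```azurecli-interactive
# Sketch only: all parameter values below are placeholders.
az group create -n ExampleResourceGroup -l westus2
az deployment group create -g ExampleResourceGroup \
    --template-file azuredeploy.json \
    --parameters serverName=mydemoserver administratorLogin=myadmin \
        administratorLoginPassword='<secure-password>'
```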
postgresql | Concepts Backup Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md | After you restore the database, you can perform the following tasks to get your - If the new server is meant to replace the original server, redirect clients and client applications to the new server. Change the server name of your connection string to point to the new server. -- Ensure that appropriate server-level firewall, private endpoints and virtual network rules are in place for user connections. In *public access* network, rules are copied over from the original server, but those might not ne the ones required in the restored environment. So, adjust them as per your requirements. Private endpoints are not carried over. Create any private endpoints you may need in the restored server. In *private access* virtual network, the restore doesn't copy over any network infrastructure artifacts from source to restored server networks. Anything related to configuration of VNET, subnets, or Network Security Groups, must be taken care of as a post-restore task.+- Ensure that appropriate server-level firewall, private endpoints and virtual network rules are in place for user connections. In *public access* network, rules are copied over from the original server, but those might not be the ones required in the restored environment. So, adjust them as per your requirements. Private endpoints are not carried over. Create any private endpoints you may need in the restored server. In *private access* virtual network, the restore doesn't copy over any network infrastructure artifacts from source to restored server networks. Anything related to configuration of VNET, subnets, or Network Security Groups, must be taken care of as a post-restore task. - Scale up or scale down the restored server's compute as needed. After you restore the database, you can perform the following tasks to get your - Configure alerts as appropriate. -- If you restored the database configured with high availability, and if you want to configure the restored server with high availability, you can then follow [the steps](./how-to-manage-high-availability-portal.md).+- If the source server from which you restored was configured with high availability, and you want to configure the restored server with high availability, you can then follow [these steps](./how-to-manage-high-availability-portal.md). ++- If the source server from which you restored was configured with read replicas, and you want to configure read replicas on the restored server, you can then follow [these steps](./how-to-read-replicas-portal.md). ## Long-term retention (preview) |
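As a concrete illustration of the firewall point above, here's a hedged Azure CLI sketch that re-creates a server-level firewall rule on a restored flexible server. The resource group, server name, rule name, and IP range are placeholders, not values from the article.

```azurecli-interactive
# Sketch: re-create a firewall rule on the restored server (all values are placeholders).
az postgres flexible-server firewall-rule create \
    --resource-group myresourcegroup \
    --name myrestoredserver \
    --rule-name AllowClientRange \
    --start-ip-address 203.0.113.0 \
    --end-ip-address 203.0.113.255
```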
search | Search Get Started Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-rest.md | You're pasting this endpoint into the `.rest` or `.http` file in a later step. Requests to the search endpoint must be authenticated and authorized. You can use API keys or roles for this task. Keys are easier to start with, but roles are more secure. +For a role-based connection, the following instructions have you connecting to Azure AI Search under your identity, not the identity of a client app. + ### Option 1: Use keys Select **Settings** > **Keys** and then copy an admin key. Admin keys are used to add, modify, and delete objects. There are two interchangeable admin keys. Copy either one. For more information, see [Connect to Azure AI Search using key authentication](search-security-api-keys.md). In this section, obtain your personal identity token using either the Azure CLI, az login ``` -1. Get your personal identity. +1. Get your personal identity token. ```azurecli- az ad signed-in-user show \ - --query id -o tsv + az account get-access-token --scope https://search.azure.com/.default ``` #### [Azure PowerShell](#tab/azure-powershell) In this section, obtain your personal identity token using either the Azure CLI, Connect-AzAccount ``` -1. Get your personal identity. +1. Get your personal identity token. ```azurepowershell- (Get-AzContext).Account.ExtendedProperties.HomeAccountId.Split('.')[0] + Get-AzAccessToken -ResourceUrl https://search.azure.com ``` #### [Azure portal](#tab/portal) Add a second request to your `.rest` file. [Create Index (REST)](/rest/api/searc ### Create a new index POST {{baseUrl}}/indexes?api-version=2023-11-01 HTTP/1.1 Content-Type: application/json- api-key: {{apiKey}} + Authorization: Bearer {{token}} { "name": "hotels-quickstart", The URI is extended to include the `docs` collections and `index` operation. ### Upload documents POST {{baseUrl}}/indexes/hotels-quickstart/docs/index?api-version=2023-11-01 HTTP/1.1 Content-Type: application/json- api-key: {{apiKey}} + Authorization: Bearer {{token}} { "value": [ The URI is extended to include a query expression, which is specified by using t ```http ### Run a query POST {{baseUrl}}/indexes/hotels-quickstart/docs/search?api-version=2023-11-01 HTTP/1.1- Content-Type: application/json - api-key: {{apiKey}} - - { - "search": "lake view", - "select": "HotelId, HotelName, Tags, Description", - "searchFields": "Description, Tags", - "count": true - } + Content-Type: application/json + Authorization: Bearer {{token}} + + { + "search": "lake view", + "select": "HotelId, HotelName, Tags, Description", + "searchFields": "Description, Tags", + "count": true + } ``` 1. Review the response in the adjacent pane. You should have a count that indicates the number of matches found in the index, a search score that indicates relevance, and values for each field listed in the `select` statement. You can also use [Get Statistics](/rest/api/searchservice/indexes/get-statistics ### Get index statistics GET {{baseUrl}}/indexes/hotels-quickstart/stats?api-version=2023-11-01 HTTP/1.1 Content-Type: application/json- api-key: {{apiKey}} + Authorization: Bearer {{token}} ``` 1. Review the response. This operation is an easy way to get details about index storage. |
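One practical note on the role-based option above: `az account get-access-token` returns a JSON object, while the `{{token}}` variable in the `.rest` file needs only the bare token string. A small sketch (assuming you're using the Azure CLI rather than PowerShell) that extracts just the `accessToken` field:

```azurecli
az account get-access-token \
    --scope https://search.azure.com/.default \
    --query accessToken --output tsv
```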
search | Search Get Started Vector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vector.md | If you don't have an Azure subscription, create a [free account](https://azure.m ## Prerequisites - [Visual Studio Code](https://code.visualstudio.com/download) with a [REST client](https://marketplace.visualstudio.com/items?itemName=humao.rest-client). If you need help with getting started, see [Quickstart: Text search using REST](search-get-started-rest.md).+ - [Azure AI Search](search-what-is-azure-search.md), in any region and on any tier. You can use the Free tier for this quickstart, but Basic or higher is recommended for larger data files. [Create](search-create-service-portal.md) or [find an existing Azure AI Search resource](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) under your current subscription. Most existing services support vector search. For a small subset of services created prior to January 2019, an index that contains vector fields fails on creation. In this situation, a new service must be created. - Optionally, to run the query example that invokes [semantic reranking](semantic-search-overview.md), your search service must be the Basic tier or higher, with [semantic ranking enabled](semantic-how-to-enable-disable.md).+ - Optionally, an [Azure OpenAI](https://aka.ms/oai/access) resource with a deployment of `text-embedding-ada-002`. The source `.rest` file includes an optional step for generating new text embeddings, but we provide pregenerated embeddings so that you can omit this dependency. ## Download files If you don't have an Azure subscription, create a [free account](https://azure.m You can also start a new file on your local system and create requests manually by using the instructions in this article. -## Copy a search service key and URL +## Get a search service endpoint ++You can find the search service endpoint in the Azure portal. ++1. Sign in to the [Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices). ++1. On the **Overview** home page, find the URL. An example endpoint might look like `https://mydemo.search.windows.net`. ++ :::image type="content" source="media/search-get-started-rest/get-endpoint.png" lightbox="media/search-get-started-rest/get-endpoint.png" alt-text="Screenshot of the URL property on the overview page."::: ++You're pasting this endpoint into the `.rest` or `.http` file in a later step. ++## Configure access ++Requests to the search endpoint must be authenticated and authorized. You can use API keys or roles for this task. Keys are easier to start with, but roles are more secure. ++For a role-based connection, the following instructions have you connecting to Azure AI Search under your identity, not the identity of a client app. ++### Option 1: Use keys ++Select **Settings** > **Keys** and then copy an admin key. Admin keys are used to add, modify, and delete objects. There are two interchangeable admin keys. Copy either one. For more information, see [Connect to Azure AI Search using key authentication](search-security-api-keys.md). +++You're pasting this key into the `.rest` or `.http` file in a later step. ++### Option 2: Use roles ++Make sure your search service is [configured for role-based access](search-security-enable-roles.md). 
You must have preconfigured [role-assignments for developer access](search-security-rbac.md#assign-roles-for-development). Your role assignments must grant permission to create, load, and query a search index. ++In this section, obtain your personal identity token using either the Azure CLI, Azure PowerShell, or the Azure portal. -REST calls require the search service endpoint and an API key on every request. You can get these values from the Azure portal. +#### [Azure CLI](#tab/azure-cli) -1. Sign in to the [Azure portal](https://portal.azure.com). Go to the **Overview** page and copy the URL. An example endpoint might look like `https://mydemo.search.windows.net`. +1. Sign in to Azure CLI. -1. Select **Settings** > **Keys** and copy an admin key. Admin keys are used to add, modify, and delete objects. There are two interchangeable admin keys. Copy either one. + ```azurecli + az login + ``` ++1. Get your personal identity token. ++ ```azurecli + az account get-access-token --scope https://search.azure.com/.default + ``` ++#### [Azure PowerShell](#tab/azure-powershell) ++1. Sign in with PowerShell. ++ ```azurepowershell + Connect-AzAccount + ``` ++1. Get your personal identity token. - :::image type="content" source="media/search-get-started-rest/get-url-key.png" alt-text="Screenshot that shows the URL and API keys in the Azure portal."::: + ```azurepowershell + Get-AzAccessToken -ResourceUrl https://search.azure.com + ``` ++#### [Azure portal](#tab/portal) ++Use the steps found here: [find the user object ID](/partner-center/find-ids-and-domain-names#find-the-user-object-id) in the Azure portal. ++++You're pasting your personal identity token into the `.rest` or `.http` file in a later step. ++> [!NOTE] +> This section assumes you're using a local client that connects to Azure AI Search on your behalf. An alternative approach is [getting a token for the client app](/entra/identity-platform/v2-oauth2-client-creds-grant-flow), assuming your application is [registered](/entra/identity-platform/quickstart-register-app) with Microsoft Entra ID. ## Create a vector index The index schema is organized around hotel content. Sample data consists of vect 1. Open a new text file in Visual Studio Code. -1. Set variables to the search endpoint and the API key that you collected earlier. +1. Set variables to the values you collected earlier. This example uses a personal identity token. ```http @baseUrl = PUT-YOUR-SEARCH-SERVICE-URL-HERE- @apiKey = PUT-YOUR-ADMIN-API-KEY-HERE + @token = PUT-YOUR-PERSONAL-IDENTITY-TOKEN-HERE ``` -1. Save the file with a `.rest` file extension. +1. Save the file with a `.rest` or `.http` file extension. 1. Paste in the following example to create the `hotels-vector-quickstart` index on your search service. The index schema is organized around hotel content. Sample data consists of vect ### Create a new index POST {{baseUrl}}/indexes?api-version=2023-11-01 HTTP/1.1 Content-Type: application/json- api-key: {{apiKey}} + Authorization: Bearer {{token}} { "name": "hotels-vector-quickstart", The URI is extended to include the `docs` collection and the `index` operation. 
### Upload documents POST {{baseUrl}}/indexes/hotels-vector-quickstart/docs/index?api-version=2023-11-01 HTTP/1.1 Content-Type: application/json-api-key: {{apiKey}} +Authorization: Bearer {{token}} { "value": [ The vector query string is semantically similar to the search string, but it inc ### Run a query POST {{baseUrl}}/indexes/hotels-vector-quickstart/docs/search?api-version=2023-11-01 HTTP/1.1 Content-Type: application/json- api-key: {{apiKey}} + Authorization: Bearer {{token}} { "count": true, You can add filters, but the filters are applied to the nonvector content in you ### Run a vector query with a filter POST {{baseUrl}}/indexes/hotels-vector-quickstart/docs/search?api-version=2023-11-01 HTTP/1.1 Content-Type: application/json- api-key: {{apiKey}} + Authorization: Bearer {{token}} { "count": true, Hybrid search consists of keyword queries and vector queries in a single search ### Run a hybrid query POST {{baseUrl}}/indexes/hotels-vector-quickstart/docs/search?api-version=2023-11-01 HTTP/1.1 Content-Type: application/json- api-key: {{apiKey}} + Authorization: Bearer {{token}} { "count": true, Here's the last query in the collection. This hybrid query with semantic ranking ### Run a hybrid query POST {{baseUrl}}/indexes/hotels-vector-quickstart/docs/search?api-version=2023-11-01 HTTP/1.1 Content-Type: application/json- api-key: {{apiKey}} + Authorization: Bearer {{token}} { "count": true, You can also try this `DELETE` command: ### Delete an index DELETE {{baseUrl}}/indexes/hotels-vector-quickstart?api-version=2023-11-01 HTTP/1.1 Content-Type: application/json- api-key: {{apiKey}} + Authorization: Bearer {{token}} ``` ## Next steps |
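If you want to verify the same role-based requests outside Visual Studio Code, they translate directly to curl. This is a sketch under stated assumptions: the endpoint is a placeholder, and the token is acquired with the Azure CLI as shown earlier in the quickstart.

```bash
# Sketch: run one of the quickstart queries from a shell (endpoint is a placeholder).
TOKEN=$(az account get-access-token --scope https://search.azure.com/.default \
    --query accessToken --output tsv)
curl -X POST "https://mydemo.search.windows.net/indexes/hotels-vector-quickstart/docs/search?api-version=2023-11-01" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $TOKEN" \
    -d '{ "count": true, "search": "historic hotel walk to restaurants" }'
```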
search | Search Howto Reindex | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-reindex.md | Queries continue to run, but if you're updating or removing existing fields, you + To update the contents of simple fields and subfields in complex types, list only the fields you want to change. For example, if you only need to update a description field, the payload should consist of the document key and the modified description. Omitting other fields retains their existing values. -+ To merge the inline changes into string collection, provide the entire value. Recall the `tags` field example from the previous section. New values overwrite the old values, and there's no merging at the field content level. ++ To merge inline changes into a string collection, provide the entire value. Recall the `tags` field example from the previous section. New values overwrite the old values for an entire field, and there's no merging within the content of a field. Here's a [REST API example](search-get-started-rest.md) demonstrating these tips: GET {{baseUrl}}/indexes/hotels-vector-quickstart/docs('1')?api-version=2023-11- Content-Type: application/json api-key: {{apiKey}} -### Change the description and city for Secret Point Hotel +### Change the description, city, and tags for Secret Point Hotel POST {{baseUrl}}/indexes/hotels-vector-quickstart/docs/search.index?api-version=2023-11-01 HTTP/1.1 Content-Type: application/json api-key: {{apiKey}} POST {{baseUrl}}/indexes/hotels-vector-quickstart/docs/search.index?api-version= { "@search.action": "mergeOrUpload", "HotelId": "1",- "Description": "Change the description and city for Secret Point Hotel. Keep everything else." + "Description": "I'm overwriting the description for Secret Point Hotel.", + "Tags": ["my old item", "my new item"], "Address": {- "City": "Miami" + "City": "Gotham City" } } ] GET {{baseUrl}}/indexes/hotels-vector-quickstart/docs('1')?api-version=2023-11- ## Change an index schema -The index schema defines the physical data structures created on the search service, so there aren't many schema changes that you can make without incurring a full rebuild. The following list enumerates the schema changes that can be introduced seamlessly into an existing index. Generally, the list includes new fields and functionality used during query executions. +The index schema defines the physical data structures created on the search service, so there aren't many schema changes that you can make without incurring a full rebuild. The following list enumerates the schema changes that can be introduced seamlessly into an existing index. Generally, the list includes new fields and functionality used during query execution. + Add a new field + Set the `retrievable` attribute on an existing field The order of operations is: 1. Revise the schema with updates from the previous list. -1. [Update index](/rest/api/searchservice/indexes/create-or-update). +1. [Update index schema](/rest/api/searchservice/indexes/create-or-update) on the search service. -1. [Index documents](/rest/api/searchservice/documents). +1. [Update index content](#update-content) to match your revised schema if you added a new field. For all other changes, the existing indexed content is used as-is. -When you update the index schema, existing documents in the index are given a null value for the new field. On the next index documents job, values from external source data replace the nulls added by Azure AI Search. 
+When you update an index schema to include a new field, existing documents in the index are given a null value for that field. On the next indexing job, values from external source data replace the nulls added by Azure AI Search. -There should be no query disruptions during the updates, but query results will change as the updates take effect. +There should be no query disruptions during the updates, but query results will vary as the updates take effect. ## Drop and rebuild an index -Some modifications require an index drop and rebuild. +Some modifications require an index drop and rebuild, replacing a current index with a new one. | Action | Description | |--|-| Some modifications require an index drop and rebuild. The order of operations is: -1. [Get an index definition](/rest/api/searchservice/indexes/get) in case you need it for future reference. +1. [Get an index definition](/rest/api/searchservice/indexes/get) in case you need it for future reference, or to use as the basis for a new version. 1. Consider using a backup and restore solution to preserve a copy of index content. There are solutions in [C#](https://github.com/liamc) and in [Python](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python/code/index-backup-restore). We recommend the Python version because it's more up to date. -1. [Drop the existing index](/rest/api/searchservice/indexes/delete). queries targeting the index are immediately dropped. Remember that deleting an index is irreversible, destroying physical storage for the fields collection and other constructs. + If you have capacity on your search service, keep the existing index while creating and testing the new one. -1. [Post a revised index](/rest/api/searchservice/indexes/create), where the body of the request includes changed or modified field definitions. +1. [Drop the existing index](/rest/api/searchservice/indexes/delete). Queries targeting the index are immediately dropped. Remember that deleting an index is irreversible, destroying physical storage for the fields collection and other constructs. -1. [Load the index with documents](/rest/api/searchservice/documents) from an external source. +1. [Post a revised index](/rest/api/searchservice/indexes/create), where the body of the request includes changed or modified field definitions and configurations. ++1. [Load the index with documents](/rest/api/searchservice/documents) from an external source. Documents are indexed using the field definitions and configurations of the new schema. When you create the index, physical storage is allocated for each field in the index schema, with an inverted index created for each searchable field and a vector index created for each vector field. Fields that aren't searchable can be used in filters or expressions, but don't have inverted indexes and aren't full-text or fuzzy searchable. On an index rebuild, these inverted indexes and vector indexes are deleted and recreated based on the index schema you provide. |
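The drop-and-rebuild order of operations maps cleanly onto three REST calls. Here's a hedged curl sketch: the endpoint, admin key, index name, and schema file are placeholders, and in practice you'd pause indexers and clients before the delete.

```bash
# Sketch: save the definition, drop the index, then post the revised schema.
SVC="https://mydemo.search.windows.net"   # placeholder endpoint
KEY="<admin-api-key>"                     # placeholder admin API key
curl -H "api-key: $KEY" "$SVC/indexes/hotels-vector-quickstart?api-version=2023-11-01" > index-backup.json
curl -X DELETE -H "api-key: $KEY" "$SVC/indexes/hotels-vector-quickstart?api-version=2023-11-01"
curl -X POST -H "api-key: $KEY" -H "Content-Type: application/json" \
    -d @revised-index.json "$SVC/indexes?api-version=2023-11-01"
```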
search | Search Manage Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-rest.md | You can only regenerate one admin API key at a time. ### Regenerate admin keys POST https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}/regenerateAdminKey/primary?api-version=2023-11-01 HTTP/1.1 Content-type: application/json- Authorization: Bearer + Authorization: Bearer {{token}} ``` ## Create query API keys |
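Because this call goes to the Azure Resource Manager endpoint rather than the search service itself, the bearer token must be scoped to `https://management.azure.com`. A sketch of one way to obtain a value for `{{token}}` with the Azure CLI:

```azurecli
az account get-access-token \
    --resource https://management.azure.com \
    --query accessToken --output tsv
```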
static-web-apps | Add Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/add-authentication.md | + + Title: Add authentication to your static site in Azure Static Web Apps +description: Learn to add authentication to your static site with Azure Static Web Apps. ++++ Last updated : 07/02/2024++++# Add authentication to your static site in Azure Static Web Apps ++This article is part two in a series that shows you how to deploy your first site to Azure Static Web Apps. Previously, you created and deployed a static site with the [web framework](./deploy-web-framework.md) of your choice. ++In this article, you add authentication to your site and run the site locally before deploying to the cloud. ++## Prerequisites ++This tutorial continues from the previous tutorial, and has the same [prerequisites](deploy-web-framework.md#prerequisites). ++## Authentication and authorization ++Azure Static Web Apps makes it easy to use common authentication providers like Microsoft Entra and Google without writing security-related code. ++> [!NOTE] +> You can optionally [register a custom provider and assign custom roles](./authentication-custom.md) for more fine-grained control when using backend APIs. ++In this article, you configure your site to use Microsoft Entra ID for authentication. ++## Add authentication ++In the last article, you created a `staticwebapp.config.json` file. This file [controls many features](./configuration.md) for Azure Static Web Apps, including authentication. ++1. Update the `staticwebapp.config.json` to match the following configuration. ++ ```json + { + "navigationFallback": { + "rewrite": "/index.html" + }, + "routes": [ + { + "route": "/*", + "allowedRoles": [ "authenticated" ] + } + ], + "responseOverrides": { + "401": { + "statusCode": 302, + "redirect": "/.auth/login/aad" + } + } + } + ``` ++ The `routes` section allows you to restrict access to named roles. There are two predefined roles: `authenticated` and `anonymous`. If the connected user doesn't have an allowed role, the server returns a "401 Unauthorized" response. ++ The values in the `responseOverrides` section configure your site so that, instead of seeing a server error, an unauthenticated user's browser is redirected to the sign-in page. ++1. Run the site locally. ++ To launch the site locally, run the [Static Web Apps CLI](https://azure.github.io/static-web-apps-cli) `start` command. ++ ```bash + npx swa start + ``` ++ This command starts the Azure Static Web Apps emulator on `http://localhost:4280`. ++ This URL is shown in your terminal window after the service starts up. ++1. Select the URL to go to the site. ++ Once you open the site in your browser, the local authentication sign-in page is displayed. ++ ![A screenshot of the local authentication sign-in page.](./media/add-authentication/local-auth-page.png) ++ The local authentication sign-in page provides an emulation of the real authentication experience without the need for external services. You can create a user ID and select which roles you'd like to apply to the user from this screen. ++1. Enter a username, then select **Login**. ++ Once you authenticate, your site is displayed. ++## Deploy the site to Azure ++Deploy your site in the same way as you did in the last tutorial. ++1. Build your site: ++ ```bash + npx swa build + ``` ++1. 
Deploy your site to the static web app: ++ ```bash + npx swa deploy --app-name swa-demo-site + ``` ++ The URL for your site is displayed once the deployment is finished. Select the site URL to open the site in the browser. The standard Microsoft Entra ID sign-in page is displayed: ++ ![Screenshot of the Microsoft authentication sign-in page.](./media/add-authentication/remote-auth-page.png) ++ Sign in with your Microsoft account. +++## Related content ++* [Authentication and authorization](./authentication-authorization.yml) +* [Custom authentication](./authentication-custom.md) |
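As a side exercise while the emulator is running, you can inspect the client principal it issues for the signed-in user. The `/.auth/me` endpoint is built into Static Web Apps and port 4280 is the emulator default from the tutorial; note that curl without the emulator's session cookie returns a null principal, so the signed-in browser tab is the simpler place to view it.

```bash
# Sketch: view the emulated client principal (easier from the signed-in browser tab).
curl -s http://localhost:4280/.auth/me
```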
static-web-apps | Deploy Web Framework | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-web-framework.md | + + Title: Deploy your web app to Azure Static Web Apps +description: Learn to deploy your web app to Azure Static Web Apps. ++++ Last updated : 07/02/2024++zone_pivot_groups: swa-web-framework +++# Deploy your web app to Azure Static Web Apps ++In this article, you create a new web app with the framework of your choice, run it locally, then deploy to Azure Static Web Apps. ++## Prerequisites ++To complete this tutorial, you need: ++| Resource | Description | +||| +| [Azure subscription][1] | If you don't have one, you can [create an account for free][1]. | +| [Node.js][2] | Install version 20.0 or later. | +| [Azure CLI][3] | Install version 2.6x or later. | ++You also need a text editor. For working with Azure, [Visual Studio Code][4] is recommended. ++You can run the app you create in this article on the platform of your choice, including Linux, macOS, Windows, or Windows Subsystem for Linux. ++## Create your web app ++1. Open a terminal window. +++2. Select an appropriate directory for your code, then run the following commands. ++ ```bash + npm create vite@latest swa-vanilla-demo -- --template=vanilla + cd swa-vanilla-demo + npm install + npm run dev + ``` ++ As you run these commands, the development server prints the URL of your website. Select the link to open it in your default browser. ++ ![Screenshot of the generated vanilla web application.][img-vanilla-js] ++++2. Select an appropriate directory for your code, then run the following commands. ++ ```bash + npx --package @angular/cli@latest ng new swa-angular-demo --ssr=false --defaults + cd swa-angular-demo + npm start + ``` ++ As you run these commands, the development server prints the URL of your website. Select the link to open it in your default browser. ++ ![Screenshot of the generated Angular web application.][img-angular] ++++2. Select an appropriate directory for your code, then run the following commands. ++ ```bash + npm create vite@latest swa-react-demo -- --template react + cd swa-react-demo + npm install + npm run dev + ``` ++ As you run these commands, the development server prints the URL of your website. Select the link to open it in your default browser. ++ ![Screenshot of the generated React web application.][img-react] ++++2. Select an appropriate directory for your code, then run the following commands. ++ ```bash + npm create vite@latest swa-vue-demo -- --template vue + cd swa-vue-demo + npm install + npm run dev + ``` ++ As you run these commands, the development server prints the URL of your website. Select the link to open it in your default browser. ++ ![Screenshot of the generated Vue web application.][img-vue] +++3. Select <kbd>Cmd/Ctrl</kbd>+<kbd>C</kbd> to stop the development server. +++++> [!WARNING] +> Angular v17 and later place the distributable files in a subdirectory of the output path that you can choose. The SWA CLI doesn't know the specific location of the directory. The following steps show you how to set this path correctly. ++Locate the generated *index.html* file in your project in the *dist/swa-angular-demo/browser* folder. ++1. 
Set the `SWA_CLI_OUTPUT_LOCATION` environment variable to the directory containing the *index.html* file: ++ # [bash](#tab/bash) ++ ```bash + export SWA_CLI_OUTPUT_LOCATION="dist/swa-angular-demo/browser" + ``` ++ # [csh](#tab/csh) ++ ```bash + setenv SWA_CLI_OUTPUT_LOCATION "dist/swa-angular-demo/browser" + ``` ++ # [PowerShell](#tab/pwsh) ++ ```powershell + $env:SWA_CLI_OUTPUT_LOCATION="dist/swa-angular-demo/browser" + ``` ++ # [CMD](#tab/cmd) ++ ```bash + set SWA_CLI_OUTPUT_LOCATION=dist/swa-angular-demo/browser + ``` ++ +++## Deploy your site to Azure ++Deploy your code to your static web app: ++```bash +npx swa deploy --env production +``` ++It might take a few minutes to deploy the application. Once complete, the URL of your site is displayed. ++![Screenshot of the deploy command.][img-deploy] ++On most systems, you can select the URL of the site to open it in your default browser. +++## Next steps ++> [!div class="nextstepaction"] +> [Add authentication](./add-authentication.md) ++## Related content ++* [Authentication and authorization](./authentication-authorization.yml) +* [Database connections](./database-overview.md) +* [Custom Domains](./custom-domain.md) ++<!-- Links --> +[1]: https://azure.microsoft.com/free +[2]: https://nodejs.org/ +[3]: /cli/azure/install-azure-cli +[4]: https://code.visualstudio.com ++<!-- Images --> +[img-deploy]: media/deploy-screenshot.png +[img-vanilla-js]: media/deploy-web-framework/vanilla-js-screenshot.png +[img-angular]: media/deploy-web-framework/angular-screenshot.png +[img-react]: media/deploy-web-framework/react-screenshot.png +[img-vue]: media/deploy-web-framework/vue-screenshot.png |
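Putting the Angular-specific pieces together, here's a minimal end-to-end sketch. It assumes the example project name used above; `npm run build` invoking `ng build` is the Angular CLI default, and your output path may differ.

```bash
# Sketch: build, point the SWA CLI at Angular's browser output folder, then deploy.
npm run build
export SWA_CLI_OUTPUT_LOCATION="dist/swa-angular-demo/browser"
npx swa deploy --env production
```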
storage | File Sync Agent Silent Installation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-agent-silent-installation.md | + + Title: How to install the Azure File Sync agent silently +description: Discusses how to perform a silent installation for a new Azure File Sync agent installation +++ Last updated : 07/03/2024++++# How to perform a silent installation for a new Azure File Sync agent installation ++This article covers how to silently install the Azure File Sync agent using default or custom settings. ++## Silent installation that uses default settings +To run a silent installation for a new agent installation that uses the default settings, run the following command at an elevated command prompt: ++``` +msiexec /i packagename.msi /qb /l*v AFSInstaller.log +``` +For example, to install the Azure File Sync agent for Windows Server 2016, run the following command: ++``` +msiexec /i StorageSyncAgent_WS2016.msi /qb /l*v AFSInstaller.log +``` ++> [!NOTE] +> Use the /qb switch to display restart prompts (if required), agent update, and server registration screens. To suppress the screens and automatically restart the server (if required), use the /qn switch. ++## Silent installation that uses custom settings +To run a silent installation for a new agent installation that uses custom settings, use the parameters that are documented in the table below. ++For example, to run a silent installation by using custom proxy settings, run the following command: ++``` +msiexec /i StorageSyncAgent_WS2016.msi USE_CUSTOM_PROXY_SETTINGS=1 PROXY_ADDRESS=10.0.0.1 PROXY_PORT=80 PROXY_AUTHREQUIRED_FLAG=1 PROXY_USERNAME=username PROXY_PASSWORD=password /qb /l*v AFSInstaller.log +``` ++For example, to run a silent installation by using an unattend answer file, run the following command: ++``` +msiexec /i StorageSyncAgent_WS2016.msi UNATTEND_ANSWER_FILE=c:\agent\unattend.ini /qb /l*v AFSInstaller.log +``` ++The unattend answer file should have the following format: ++ACCEPTEULA=1 +ENABLE_AZUREFILESYNC_FEATURE=1 +AGENTINSTALLDIR=%SYSTEMDRIVE%\Program Files\Azure\StorageSyncAgent +ENABLE_MU_ENROLL=1 +ENABLE_DATA_COLLECTION=1 +ENABLE_AGENT_UPDATE_POSTINSTALL=1 +USE_CUSTOM_PROXY_SETTINGS=1 +PROXY_ADDRESS=10.0.0.1 +PROXY_PORT=80 +PROXY_AUTHREQUIRED_FLAG=1 +PROXY_USERNAME=username +PROXY_PASSWORD=password ++**Azure File Sync agent installation parameters** ++| Parameter | Purpose | Values | Default Value | +|--||--|--| +|ACCEPTEULA|Azure File Sync license agreement|0 (Not accepted) or 1 (Accepted)|1| +|ENABLE_AZUREFILESYNC_FEATURE|Azure File Sync feature installation option|0 (Do not install) or 1 (Install)|1| +|AGENTINSTALLDIR|Agent installation directory|Local Path|%SYSTEMDRIVE%\Program Files\Azure\StorageSyncAgent| +|ENABLE_MU_ENROLL|Enroll in Microsoft Update|0 (Do not enroll) or 1 (Enroll)|1| +|ENABLE_DATA_COLLECTION|Collect data necessary to identify and fix problems|0 (No) or 1 (Yes)|1| +|ENABLE_AGENT_UPDATE_POSTINSTALL|Check for updates after agent installation completes|0 (No) or 1 (Yes)|1| +|USE_CUSTOM_PROXY_SETTINGS|Use default proxy settings (if configured) or custom proxy settings|0 (Default Proxy) or 1 (Custom Proxy)|0| +|PROXY_ADDRESS|Proxy server IP address|IP Address|| +|PROXY_PORT|Proxy server port number|Port Number|| +|PROXY_AUTHREQUIRED_FLAG|Proxy server requires credentials|0 (No) or 1 (Yes)|| +|PROXY_USERNAME|User name used for authentication|Username|| +|PROXY_PASSWORD|Password used for authentication|Password|| +|UNATTEND_ANSWER_FILE|Use 
unattend answer file|Path|| |
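If you want a fully quiet installation but prefer to control the reboot yourself, the documented `/qn` switch can be combined with the standard Windows Installer `/norestart` option. This is a sketch, not guidance from the article; confirm the restart behavior fits your change process.

```
msiexec /i StorageSyncAgent_WS2016.msi /qn /norestart /l*v AFSInstaller.log
```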
storage | File Sync Deployment Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-deployment-guide.md | The Azure File Sync agent is a downloadable package that enables Windows Server # [Portal](#tab/azure-portal) -You can download the agent from the [Microsoft Download Center](https://go.microsoft.com/fwlink/?linkid=858257). When the download is finished, double-click the MSI package to start the Azure File Sync agent installation. +You can download the agent from the [Microsoft Download Center](https://go.microsoft.com/fwlink/?linkid=858257). When the download is finished, double-click the MSI package to start the Azure File Sync agent installation. To silently install the agent, see [How to perform a silent installation for a new Azure File Sync agent installation](file-sync-agent-silent-installation.md). > [!IMPORTANT] > If you're using Azure File Sync with a Failover Cluster, the Azure File Sync agent must be installed on every node in the cluster. Each node in the cluster must be registered to work with Azure File Sync. |
storage | Isv File Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/primary-secondary-storage/isv-file-services.md | This article compares several ISV solutions that provide file services in Azure | **Panzura**| **CloudFS** is an enterprise global file system with added resiliency and high performance. Offers ransomware protection. | - Simplified legacy storage replacement <br> - Backup and disaster recovery, with granular recovery ability <br> - Cloud native access to unstructured data for Analytics, AI/ML. <br> - Multi-site file collaboration, with automatic file locking and real time global file consistency <br> - Global remote work with cloud VDI <br> - Accelerated cloud migration for legacy workloads | | **Qumulo** | **Qumulo** on Azure offers multiple petabytes (PiB) of storage capacity, and up to 20 GB/s of performance per file system. Windows (SMB) and Linux (NFS) are both natively supported, and Qumulo provides onboard real-time workload analytics. | - Primary file storage for High Performance Compute, Media & Entertainment, Genomics, Electronic design, and Financial modeling. | | **Tiger Technology** | **Tiger Bridge** is a data management software solution. Provides tiering between an NTFS file system and Azure Blob Storage or Azure managed disks. Creates a single namespace with local file locking. | - Cloud archive<br> - Continuous data protection (CDP) <br> - Disaster Recovery for Windows servers <br> - Multi-site sync and collaboration <br> - Remote workflows (VDI)<br> - Native access to cloud data for Analytics, AI, ML |+| **WEKA** | **WEKA Data Platform** provides a fast and scalable file storage system with a software-defined approach to data. It accelerates storage performance, reduces cloud storage costs, and simplifies data operations across on-premises and cloud environments. 
| - Enterprise and Generative AI <br> - High-Performance Computing (HPC) <br> - Containerized workloads <br> - Tiering, backup, and disaster recovery (DR) from on-premises WEKA systems | | **XenData** | **Cloud File Gateway** creates a highly scalable global file system using Windows file servers | - Global sharing of engineering and scientific files <br> - Collaborative video editing | ## ISV solutions comparison ### Supported protocols -| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | XenData | -|--|-|--||--|--|--| -| **SMB 2.1** | Yes | Yes | Yes | Yes | Yes | Yes | -| **SMB 3.0** | Yes | Yes | Yes | Yes | Yes | Yes | -| **SMB 3.1** | Yes | Yes | Yes | Yes | Yes | Yes | -| **NFS v3** | Yes | Yes | Yes | Yes | Yes | Yes | -| **NFS v4.1** | Yes | Yes | Yes | Yes | Yes | Yes | -| **iSCSI** | No | Yes | No | No | Yes | No | +| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | WEKA | XenData | +|--|-|--||--|--|--|--| +| **SMB 2.1** | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| **SMB 3.0** | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| **SMB 3.1** | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| **NFS v3** | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| **NFS v4.1** | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| **iSCSI** | No | Yes | No | No | Yes | No | No | ### Supported services for persistent storage -| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | XenData | -|--|-|--||--|--|--| -| **Managed disks** | No | Yes | Yes | No | Yes | No | -| **Unmanaged disks** | No | No | No | No | Yes | No | -| **Azure Storage Block blobs** | Yes | Yes (tiering) | Yes | No | Yes | Yes | -| **Azure Storage Page blobs** | No | Yes (for HA) | Yes | Yes | No | No | -| **Azure Archive tier support** | No | No | Yes | No | Yes | Yes | -| **Files accessible in non-opaque format** | No | No | No | No | Yes | Yes | +| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | WEKA | XenData | +|--|-|--||--|--|--|--| +| **Managed disks** | No | Yes | Yes | No | Yes | No | No | +| **Unmanaged disks** | No | No | No | No | Yes | No | No | +| **Azure Storage Block blobs** | Yes | Yes (tiering) | Yes | No | Yes | Yes (tiering, snapshots) | Yes | +| **Azure Storage Page blobs** | No | Yes (for HA) | Yes | Yes | No | No | No | +| **Azure Archive tier support** | No | No | Yes | No | Yes | No | Yes | +| **Files accessible in non-opaque format** | No | No | No | No | Yes | No | Yes | ### Extended features -| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | XenData | -|--|-|--||--|--|--| -| **Operating Environment** | UniFS | ONTAP | PFOS | Qumulo Core | Windows Server | Windows Server | -| **High-Availability** | Yes | Yes | Yes | Yes | Yes (requires setup) | Yes | -| **Automatic failover between nodes in the cluster** | Yes | Yes | Yes | Yes | Yes (windows cluster) | yes (windows cluster) | -| **Automatic failover across availability zones** | Yes | No | Yes | No | Yes (windows cluster) | yes (windows cluster) | -| **Automatic failover across regions** | Yes (with Nasuni support)| No | No | No | Yes (windows cluster) | yes (windows cluster) | -| **Snapshot support** | Yes | Yes | Yes | Yes | Yes | No | -| **Consistent snapshot support** | Yes | Yes | Yes | Yes | Yes | No | -| **Integrated backup** | Yes | Yes (Add-on) | No | Yes | Yes | Yes | -| **Versioning** | Yes | Yes | No | Yes | Yes | Yes | -| **File level restore** | Yes | Yes | Yes | Yes | Yes | Yes | -| **Volume level restore** | Yes | Yes | Yes | No | Yes | Yes | -| **WORM** | Yes | Yes | No | No | Yes | No 
| -| **Automatic tiering** | Yes | Yes | No | Yes | Yes | Yes | -| **Global file locking** | Yes | Yes (NetApp Global File Cache) | Yes | Yes | Yes | Yes | -| **Namespace aggregation over backend sources** | Yes | Yes | No | Yes | Yes | Yes | -| **Caching of active data** | Yes | Yes | Yes | Yes | yes | Yes | -| **Supported caching modes** | LRU, manually pinned | LRU | LRU, manually pinned | Predictive | LRU | LRU | -| **Encryption at rest** | Yes | Yes | Yes | Yes | Yes | No | -| **De-duplication** | Yes | Yes | Yes | No | No | No | -| **Compression** | Yes | Yes | Yes | No | No | No | +| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | WEKA | XenData | +|--|-|--||--|--|--|--| +| **Operating Environment** | UniFS | ONTAP | PFOS | Qumulo Core | Windows Server | WekaFS | Windows Server | +| **High-Availability** | Yes | Yes | Yes | Yes | Yes (requires setup) | Yes | Yes | +| **Automatic failover between nodes in the cluster** | Yes | Yes | Yes | Yes | Yes (Windows cluster) | Yes | Yes (Windows cluster) | +| **Automatic failover across availability zones** | Yes | No | Yes | No | Yes (Windows cluster) | No | Yes (Windows cluster) | +| **Automatic failover across regions** | Yes (with Nasuni support)| No | No | No | Yes (Windows cluster) | No | Yes (Windows cluster) | +| **Snapshot support** | Yes | Yes | Yes | Yes | Yes | Yes | No | +| **Consistent snapshot support** | Yes | Yes | Yes | Yes | Yes | Yes | No | +| **Integrated backup** | Yes | Yes (Add-on) | No | Yes | Yes | Yes | Yes | +| **Versioning** | Yes | Yes | No | Yes | Yes | Yes | Yes | +| **File level restore** | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| **Volume level restore** | Yes | Yes | Yes | No | Yes | Yes | Yes | +| **WORM** | Yes | Yes | No | No | Yes | Yes | No | +| **Automatic tiering** | Yes | Yes | No | Yes | Yes | Yes | Yes | +| **Global file locking** | Yes | Yes (NetApp Global File Cache) | Yes | Yes | Yes | No | Yes | +| **Namespace aggregation over backend sources** | Yes | Yes | No | Yes | Yes | Yes | Yes | +| **Caching of active data** | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| **Supported caching modes** | LRU, manually pinned | LRU | LRU, manually pinned | Predictive | LRU | | LRU | +| **Encryption at rest** | Yes | Yes | Yes | Yes | Yes | Yes | No | +| **Deduplication** | Yes | Yes | Yes | No | No | Yes | No | +| **Compression** | Yes | Yes | Yes | No | No | No | No | ### Authentication sources -| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | XenData | -|--|-|--||--|--|--| -| **Microsoft Entra ID support** | Yes (via AD DS) | Yes (via AD DS) | Yes (via AD DS) | Yes | Yes (via AD DS) | Yes (via AD DS) | -| **Active directory support** | Yes | Yes | Yes | Yes | Yes | Yes | -| **LDAP support** | Yes | Yes | No | Yes | Yes | Yes | +| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | WEKA | XenData | +|--|-|--||--|--|--|--| +| **Microsoft Entra ID support** | Yes (via AD DS) | Yes (via AD DS) | Yes (via AD DS) | Yes | Yes (via AD DS) | Yes (via AD DS) | Yes (via AD DS) | +| **Active Directory support** | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| **LDAP support** | Yes | Yes | No | Yes | Yes | Yes | Yes | ### Management -| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | XenData | -|--|-|--||--|--|--| -| **REST API** | Yes | Yes | Yes | Yes | Yes | No | -| **Web GUI** | Yes | Yes | Yes | Yes | No | No | +| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | WEKA | XenData | +|--|-|--||--|--|--|-| +| **REST API** | Yes | Yes | Yes | Yes | Yes | 
Yes | No | +| **Web GUI** | Yes | Yes | Yes | Yes | No | Yes | No | ### Scalability -| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | XenData | -|--|-|--||--|--|--| -| **Maximum number of nodes in a single cluster** | 100 | 2 (HA) | Tested up to 60 nodes | 100 | N / A | N / A | -| **Maximum number of volumes** | 800 | 1024 | Unlimited | N / A | N / A | 1 | -| **Maximum number of snapshots** | Unlimited | Unlimited | Unlimited | Unlimited | N / A | N / A | -| **Maximum size of a single namespace** | Unlimited | Depends on infrastructure | Unlimited | Unlimited | N / A | N / A | +| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | WEKA | XenData | +|--|-|--||--|--|--|--| +| **Maximum number of nodes in a single cluster** | 100 | 2 (HA) | Tested up to 60 nodes | 100 | N / A | 100 per Proximity Placement Group | N / A | +| **Maximum number of volumes** | 800 | 1024 | Unlimited | N / A | N / A | 1024 | 1 | +| **Maximum number of snapshots** | Unlimited | Unlimited | Unlimited | Unlimited | N / A | 24,000 per namespace | N / A | +| **Maximum size of a single namespace** | Unlimited | Depends on infrastructure | Unlimited | Unlimited | N / A | 14 EiB | N / A | ### Licensing -| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | XenData | -|--|-|--||--|--|--| -| **BYOL** | Yes | Yes | Yes | Yes | Yes | yes | -| **Azure Benefit Eligible** | No | Yes | Yes | Yes | No | No | -| **Deployment model (IaaS, SaaS)** | IaaS | IaaS | IaaS | SaaS | IaaS | IaaS | +| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | WEKA | XenData | +|--|-|--||--|--|--|--| +| **BYOL** | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| **Azure Benefit Eligible** | No | Yes | Yes | Yes | No | Yes | No | +| **Deployment model (IaaS, SaaS)** | IaaS | IaaS | IaaS | SaaS | IaaS | IaaS | IaaS | ### Other features This article compares several ISV solutions that provide file services in Azure - Support for REST and FTP **Tiger Technology**+- [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tigertechnologyinc1692954347771.tiger_bridge_saas_soft_only_us) - Invisible to applications - Partial Restore - Windows Shell integration (overlay, context menu, property sheet) This article compares several ISV solutions that provide file services in Azure - Partial write to objects - Ransomware protection +**WEKA** +- [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/weka1652213882079.weka_data_platform) +- Consistent high performance at scale to accelerate Enterprise AI, Generative AI, and HPC workflows +- WEKA Cloud Deployment Manager and Terraform-driven automated deployment and scaling +- Automated data tiering and snapshots to Azure Blob Storage +- Supports POSIX, NFS, SMB, and S3 protocols +- Supports data encryption in flight and at rest + **XenData** - The Azure Cosmos DB service provides fast synchronization of multiple gateways, including application-specific owner files for global collaboration - Each gateway has highly granular control of locally cached content |
update-manager | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/troubleshoot.md | Title: Troubleshoot known issues with Azure Update Manager description: This article provides details on known issues and how to troubleshoot any problems with Azure Update Manager. Previously updated : 06/04/2024 Last updated : 07/04/2024 If you see an `HRESULT` error code, double-click the exception displayed in red |Exception |Resolution or action | ||| |`Exception from HRESULT: 0x……C` | Search the relevant error code in the [Windows Update error code list](https://support.microsoft.com/help/938205/windows-update-error-code-list) to find more information about the cause of the exception. |-|`0x8024402C`</br>`0x8024401C`</br>`0x8024402F` | Indicates network connectivity problems. Make sure your machine has network connectivity to Update Management. For a list of required ports and addresses, see the [Network planning](../automation/update-management/plan-deployment.md#ports) section. | +|`0x8024402C`</br>`0x8024401C`</br>`0x8024402F` | Indicates network connectivity problems. Make sure your machine has network connectivity to Update Management. For a list of required ports and addresses, see the [Network planning](overview.md#network-planning) section. | |`0x8024001E`| The update operation didn't finish because the service or system was shutting down.| |`0x8024002E`| Windows Update service is disabled.| |`0x8024402C` | If you're using a WSUS server, make sure the registry values for `WUServer` and `WUStatusServer` under the `HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate` registry key specify the correct WSUS server. | |
virtual-desktop | Whats New Documentation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-documentation.md | description: Learn about new and updated articles to the Azure Virtual Desktop d Previously updated : 5/31/2024 Last updated : 07/04/2024 # What's new in documentation for Azure Virtual Desktop We update documentation for Azure Virtual Desktop regularly. In this article, we highlight articles for new features and where there are important updates to existing articles. +## June 2024 ++In June 2024, we made the following changes to the documentation: ++- Published a new article, [Configure the default chroma value](configure-default-chroma-value.md). ++- Published two new articles about the [Preferred application group type behavior for pooled host pools](preferred-application-group-type.md) and how to [Set the preferred application group type for a pooled host pool](set-preferred-application-group-type.md). ++- Added information about TLS 1.3 support in [Understanding Azure Virtual Desktop network connectivity](network-connectivity.md). ++- Updated [Use Microsoft Teams on Azure Virtual Desktop](teams-on-avd.md) to include New Teams SlimCore changes. ++- Added a section to [Use cases for Azure Virtual Desktop Insights](insights-use-cases.md) for how you can view [connection reliability](insights-use-cases.md#connection-reliability) information. ++- Rewrote [Configure RDP Shortpath](configure-rdp-shortpath.md) to include host pool settings and a better flow. ++- Rewrote [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md) to include more comprehensive information. This article is shared for Azure Virtual Desktop, Windows 365, Microsoft Dev Box, Remote Desktop Services, and remote PC connections. ++- Combined host pool load balancing information into the single article [Configure host pool load balancing](configure-host-pool-load-balancing.md) and added Azure CLI steps. ++- Consolidated information on [Azure Virtual Desktop business continuity and disaster recovery concepts](disaster-recovery-concepts.md) in the product documentation, directing readers to the more comprehensive information for Azure Virtual Desktop in the [Cloud Adoption Framework](/azure/cloud-adoption-framework/scenarios/wvd/eslz-business-continuity-and-disaster-recovery) and the [Azure Architecture Center](/azure/architecture/example-scenario/azure-virtual-desktop/azure-virtual-desktop-multi-region-bcdr). + ## May 2024 In May 2024, we made the following changes to the documentation: |