Updates from: 05/18/2022 01:14:39
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Azure Ad Pim Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/azure-ad-pim-approval-workflow.md
As a delegated approver, you'll receive an email notification when an Azure AD r
In the **Requests for role activations** section, you'll see a list of requests pending your approval.
-## View pending requests using Graph API
+## View pending requests using Microsoft Graph API
### HTTP request

````HTTP
-GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentScheduleRequests/filterByCurrentUser(on='approver')?$filter=status eq 'PendingApproval'
+GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests/filterByCurrentUser(on='approver')?$filter=status eq 'PendingApproval'
````

### HTTP response

````HTTP
{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#Collection(unifiedRoleAssignmentScheduleRequest)",
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#Collection(unifiedRoleAssignmentScheduleRequest)",
"value": [ { "@odata.type": "#microsoft.graph.unifiedRoleAssignmentScheduleRequest",
GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentSche
![Approve notification showing request was approved](./media/pim-resource-roles-approval-workflow/resources-approve-pane.png)
-## Approve pending requests using Graph API
+## Approve pending requests using Microsoft Graph API
### Get IDs for the steps that require approval
active-directory Pim Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-apis.md
You can perform Privileged Identity Management (PIM) tasks using the Microsoft G
For requests and other details about PIM APIs, check out:
-- [PIM for Azure AD roles API reference](/graph/api/resources/unifiedroleeligibilityschedulerequest?view=graph-rest-beta&preserve-view=true)
+- [PIM for Azure AD roles API reference](/graph/api/resources/privilegedidentitymanagementv3-overview)
- [PIM for Azure resource roles API reference](/rest/api/authorization/roleeligibilityschedulerequests)

## PIM API history
Under the /beta/privilegedRoles endpoint, Microsoft had a classic version of the
### Iteration 2 – Supports Azure AD roles and Azure resource roles
-Under the /beta/privilegedAccess endpoint, Microsoft supported both /aadRoles and /azureResources. This endpoint is still available in your tenant but Microsoft recommends against starting any new development with this API. This beta API will never be released to general availability and will be eventually deprecated.
+Under the `/beta/privilegedAccess` endpoint, Microsoft supported both `/aadRoles` and `/azureResources`. This endpoint is still available in your tenant, but Microsoft recommends against starting any new development with this API. This beta API will never be released to general availability and will eventually be deprecated.
### Current iteration – Azure AD roles in Microsoft Graph and Azure resource roles in Azure Resource Manager
-Now in beta, Microsoft has the final iteration of the PIM API before we release the API to general availability. Based on customer feedback, the Azure AD PIM API is now under the unifiedRoleManagement set of API and the Azure Resource PIM API is now under the Azure Resource Manager role assignment API. These locations also provide a few additional benefits including:
+Currently in general availability, this is the final iteration of the PIM API. Based on customer feedback, the PIM API for managing Azure AD roles is now under the **unifiedRoleManagement** set of APIs and the Azure Resource PIM API is now under the Azure Resource Manager role assignment API. These locations also provide a few additional benefits including:
- Alignment of the PIM API with the regular role assignment API for both Azure AD roles and Azure resource roles.
- Reducing the need to call additional PIM APIs to onboard a resource, get a resource, or get a role definition.
In the current iteration, there is no API support for PIM alerts and privileged
### Azure AD roles
- To call the PIM Graph API for Azure AD roles, you will need at least one of the following permissions:
+To understand the permissions that you need to call the PIM Microsoft Graph API for Azure AD roles, see [Role management permissions](/graph/permissions-reference#role-management-permissions).
-- RoleManagement.ReadWrite.Directory
-- RoleManagement.Read.Directory
-
- The easiest way to specify the required permissions is to use the Azure AD consent framework.
+The easiest way to specify the required permissions is to use the Azure AD consent framework.
### Azure resource roles
- The PIM API for Azure resource roles is developed on top of the Azure Resource Manager framework. You will need to give consent to Azure Resource Management but won't need any Graph API permission. You will also need to make sure the user or the service principal calling the API has at least the Owner or User Access Administrator role on the resource you are trying to administer.
+ The PIM API for Azure resource roles is developed on top of the Azure Resource Manager framework. You will need to give consent to Azure Resource Management but won't need any Microsoft Graph API permission. You will also need to make sure the user or the service principal calling the API has at least the Owner or User Access Administrator role on the resource you are trying to administer.
## Calling PIM API with an app-only token
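As a hedged sketch of what this involves: an app-only token for Microsoft Graph is typically acquired through the OAuth 2.0 client credentials flow against the Microsoft identity platform. The tenant ID, client ID, and client secret below are placeholders, and the app registration is assumed to already have the required application permissions.

````HTTP
POST https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

client_id={client-id}
&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default
&client_secret={client-secret}
&grant_type=client_credentials
````

The `access_token` in the response is then sent in the `Authorization: Bearer` header of the PIM requests shown in this article.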
In the current iteration, there is no API support for PIM alerts and privileged
PIM API consists of two categories that are consistent for both the API for Azure AD roles and Azure resource roles: assignment and activation API requests, and policy settings.
-### Assignment and activation API
+### Assignment and activation APIs
-To make eligible assignments, time-bound eligible/active assignments, and to activate assignments, PIM provides the following entities:
+To make eligible assignments, time-bound eligible or active assignments, and to activate eligible assignments, PIM provides the following resources:
-- RoleAssignmentSchedule
-- RoleEligibilitySchedule
-- RoleAssignmentScheduleInstance
-- RoleEligibilityScheduleInstance
-- RoleAssignmentScheduleRequest
-- RoleEligibilityScheduleRequest
+- [unifiedRoleAssignmentScheduleRequest](/graph/api/resources/unifiedroleassignmentschedulerequest)
+- [unifiedRoleEligibilityScheduleRequest](/graph/api/resources/unifiedroleeligibilityschedulerequest)
-These entities work alongside pre-existing roleDefinition and roleAssignment entities for both Azure AD roles and Azure roles to allow you to create end to end scenarios.
+These entities work alongside pre-existing **roleDefinition** and **roleAssignment** resources for both Azure AD roles and Azure roles to allow you to create end-to-end scenarios.
- If you are trying to create or retrieve a persistent (active) role assignment that does not have a schedule (start or end time), you should avoid these PIM entities and focus on the read/write operations under the roleAssignment entity
-- To create an eligible assignment with or without an expiration time you can use the write operation on roleEligibilityScheduleRequest
+- To create an eligible assignment with or without an expiration time you can use the write operation on the [unifiedRoleEligibilityScheduleRequest](/graph/api/resources/unifiedroleeligibilityschedulerequest) resource
+
+- To create a persistent (active) assignment with a schedule (start or end time), you can use the write operation on the [unifiedRoleAssignmentScheduleRequest](/graph/api/resources/unifiedroleassignmentschedulerequest) resource
+
+- To activate an eligible assignment, you should also use the [write operation on roleAssignmentScheduleRequest](/graph/api/rbacapplication-post-roleassignmentschedulerequests) with a `selfActivate` **action** property.
-- To create a persistent (active) assignment with a schedule (start or end time), you can use the write operation on roleAssignmentScheduleRequest
+Each of the request objects would create the following read-only objects:
-- To activate an eligible assignment, you should also use the write operation on roleAssignmentScheduleRequest with a modified action parameter called selfActivate
+- [unifiedRoleAssignmentSchedule](/graph/api/resources/unifiedroleassignmentschedule)
+- [unifiedRoleEligibilitySchedule](/graph/api/resources/unifiedroleeligibilityschedule)
+- [unifiedRoleAssignmentScheduleInstance](/graph/api/resources/unifiedroleassignmentscheduleinstance)
+- [unifiedRoleEligibilityScheduleInstance](/graph/api/resources/unifiedroleeligibilityscheduleinstance)
-Each of the request objects would either create a roleAssignmentSchedule or a roleEligibilitySchedule object. These objects are read-only and show a schedule of all the current and future assignments.
+The **unifiedRoleAssignmentSchedule** and **unifiedRoleEligibilitySchedule** objects show a schedule of all the current and future assignments.
-When an eligible assignment is activated, the roleEligibilityScheduleInstance continues to exist. The roleAssignmentScheduleRequest for the activation would create a separate roleAssignmentSchedule and roleAssignmentScheduleInstance for that activated duration.
+When an eligible assignment is activated, the **unifiedRoleEligibilityScheduleInstance** continues to exist. The **unifiedRoleAssignmentScheduleRequest** for the activation would create a separate **unifiedRoleAssignmentSchedule** object and a **unifiedRoleAssignmentScheduleInstance** for that activated duration.
The instance objects are the assignments that currently exist, whether eligible or active. Use the GET operation on the instance entity to retrieve a list of eligible or active assignments for a role or user.
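For example, a minimal sketch (assuming the v1.0 endpoint) that lists the signed-in user's current eligible assignment instances; the same `filterByCurrentUser` function is available on **roleAssignmentScheduleInstances** for active assignments:

````HTTP
GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleEligibilityScheduleInstances/filterByCurrentUser(on='principal')
````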
-### Policy setting API
+For more information about assignment and activation APIs, see [PIM API for managing role assignments and eligibilities](/graph/api/resources/privilegedidentitymanagementv3-overview#pim-api-for-managing-role-assignment).
+
+### Policy settings APIs
+
+To manage the settings of Azure AD roles, we provide the following entities:
-To manage the setting, we provide the following entities:
+- [unifiedRoleManagementPolicy](/graph/api/resources/unifiedrolemanagementpolicy)
+- [unifiedRoleManagementPolicyAssignment](/graph/api/resources/unifiedrolemanagementpolicyassignment)
-- roleManagementPolicy-- roleManagementPolicyAssignment
+The [unifiedRoleManagementPolicy](/graph/api/resources/unifiedrolemanagementpolicy) resource, through its **rules** relationship, defines the rules or settings of the Azure AD role: for example, whether MFA or approval is required, whether email notifications are sent and to whom, or whether permanent assignments are allowed. The [unifiedRoleManagementPolicyAssignment](/graph/api/resources/unifiedrolemanagementpolicyassignment) object attaches the policy to a specific role.
-The *role management policy* defines the setting of the rule. For example, whether MFA/approval is required, whether and who to send the email notifications to, or whether permanent assignments are allowed or not. The *policy assignment* attaches the policy to a specific role.
+Use the APIs supported by these resources to retrieve role management policy assignments for all Azure AD roles, or filter the list by a **roleDefinitionId**, and then update the rules or settings in the policy associated with the Azure AD role.
-Use this API is to get a list of all the roleManagementPolicyAssignments, filter it by the roleDefinitionID you want to modify, and then update the policy associated with the policyAssignment.
+For more information about the policy settings APIs, see [role settings and PIM](/graph/api/resources/privilegedidentitymanagementv3-overview#role-settings-and-pim).
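As a hedged sketch of that flow, assuming the v1.0 endpoint and a placeholder role definition ID, you can first retrieve the policy assignment for a directory role:

````HTTP
GET https://graph.microsoft.com/v1.0/policies/roleManagementPolicyAssignments?$filter=scopeId eq '/' and scopeType eq 'DirectoryRole' and roleDefinitionId eq '{role-definition-id}'
````

The **policyId** returned in the response can then be used to read or update individual rules under `policies/roleManagementPolicies/{policyId}/rules/{ruleId}`.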
## Relationship between PIM entities and role assignment entities
-The only link between the PIM entity and the role assignment entity for persistent (active) assignment for either Azure AD roles or Azure roles is the roleAssignmentScheduleInstance. There is a one-to-one mapping between the two entities. That mapping means roleAssignment and roleAssignmentScheduleInstance would both include:
+The only link between the PIM entity and the role assignment entity for persistent (active) assignment for either Azure AD roles or Azure roles is the unifiedRoleAssignmentScheduleInstance. There is a one-to-one mapping between the two entities. That mapping means roleAssignment and unifiedRoleAssignmentScheduleInstance would both include:
- Persistent (active) assignments made outside of PIM
- Persistent (active) assignments with a schedule made inside PIM
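As a hedged illustration of this one-to-one mapping, the following two separate requests (placeholder principal ID) let you compare the two views of a principal's persistent assignments, one through the role assignment entity and one through the PIM instance entity:

````HTTP
GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments?$filter=principalId eq '{principal-id}'

GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleInstances?$filter=principalId eq '{principal-id}'
````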
The only link between the PIM entity and the role assignment entity for persiste
## Next steps

-- [Azure AD Privileged Identity Management API reference](/graph/api/resources/privilegedidentitymanagement-root?view=graph-rest-beta&preserve-view=true)
+- [Azure AD Privileged Identity Management API reference](/graph/api/resources/privilegedidentitymanagementv3-overview)
active-directory Pim How To Activate Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-activate-role.md
When you need to assume an Azure AD role, you can request activation by opening
![Activation request is pending approval notification](./media/pim-resource-roles-activate-your-roles/resources-my-roles-activate-notification.png)
-## Activate a role using Graph API
+## Activate a role using Microsoft Graph API
+
+For more information about Microsoft Graph APIs for PIM, see [Overview of role management through the privileged identity management (PIM) API](/graph/api/resources/privilegedidentitymanagementv3-overview).
### Get all eligible roles that you can activate
-When a user gets their role eligibility via group membership, this Graph request doesn't return their eligibility.
+When a user gets their role eligibility via group membership, this Microsoft Graph request doesn't return their eligibility.
#### HTTP request

````HTTP
-GET https://graph.microsoft.com/beta/roleManagement/directory/roleEligibilityScheduleRequests/filterByCurrentUser(on='principal')
+GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleEligibilityScheduleRequests/filterByCurrentUser(on='principal')
````

#### HTTP response
GET https://graph.microsoft.com/beta/roleManagement/directory/roleEligibilitySch
To save space we're showing only the response for one role, but all eligible role assignments that you can activate will be listed.

````HTTP
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#Collection(unifiedRoleEligibilityScheduleRequest)",
- "value": [
- {
- "@odata.type": "#microsoft.graph.unifiedRoleEligibilityScheduleRequest",
- "id": "<request-ID-GUID>",
- "status": "Provisioned",
- "createdDateTime": "2021-07-15T19:39:53.33Z",
- "completedDateTime": "2021-07-15T19:39:53.383Z",
- "approvalId": null,
- "customData": null,
- "action": "AdminAssign",
- "principalId": "<principal-ID-GUID>",
- "roleDefinitionId": "<definition-ID-GUID>",
- "directoryScopeId": "/",
- "appScopeId": null,
- "isValidationOnly": false,
- "targetScheduleId": "<schedule-ID-GUID>",
- "justification": "test",
- "createdBy": {
- "application": null,
- "device": null,
- "user": {
- "displayName": null,
- "id": "<user-ID-GUID>"
- }
- },
- "scheduleInfo": {
- "startDateTime": "2021-07-15T19:39:53.3846704Z",
- "recurrence": null,
- "expiration": {
- "type": "noExpiration",
- "endDateTime": null,
- "duration": null
- }
- },
- "ticketInfo": {
- "ticketNumber": null,
- "ticketSystem": null
- }
- },
-}
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#Collection(unifiedRoleEligibilityScheduleRequest)",
+ "value": [
+ {
+ "@odata.type": "#microsoft.graph.unifiedRoleEligibilityScheduleRequest",
+ "id": "50d34326-f243-4540-8bb5-2af6692aafd0",
+ "status": "Provisioned",
+ "createdDateTime": "2022-04-12T18:26:08.843Z",
+ "completedDateTime": "2022-04-12T18:26:08.89Z",
+ "approvalId": null,
+ "customData": null,
+ "action": "adminAssign",
+ "principalId": "3fbd929d-8c56-4462-851e-0eb9a7b3a2a5",
+ "roleDefinitionId": "8424c6f0-a189-499e-bbd0-26c1753c96d4",
+ "directoryScopeId": "/",
+ "appScopeId": null,
+ "isValidationOnly": false,
+ "targetScheduleId": "50d34326-f243-4540-8bb5-2af6692aafd0",
+ "justification": "Assign Attribute Assignment Admin eligibility to myself",
+ "createdBy": {
+ "application": null,
+ "device": null,
+ "user": {
+ "displayName": null,
+ "id": "3fbd929d-8c56-4462-851e-0eb9a7b3a2a5"
+ }
+ },
+ "scheduleInfo": {
+ "startDateTime": "2022-04-12T18:26:08.8911834Z",
+ "recurrence": null,
+ "expiration": {
+ "type": "afterDateTime",
+ "endDateTime": "2024-04-10T00:00:00Z",
+ "duration": null
+ }
+ },
+ "ticketInfo": {
+ "ticketNumber": null,
+ "ticketSystem": null
+ }
+ }
+ ]
+}
````
-### Activate a role assignment with justification
+### Self-activate a role eligibility with justification
#### HTTP request

````HTTP
-POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentScheduleRequests
-
-{
- "action": "SelfActivate",
- "justification": "adssadasasd",
- "roleDefinitionId": "<definition-ID-GUID>",
- "directoryScopeId": "/",
- "principalId": "<principal-ID-GUID>"
-}
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests
+
+{
+ "action": "selfActivate",
+ "principalId": "071cc716-8147-4397-a5ba-b2105951cc0b",
+ "roleDefinitionId": "8424c6f0-a189-499e-bbd0-26c1753c96d4",
+ "directoryScopeId": "/",
+ "justification": "I need access to the Attribute Administrator role to manage attributes to be assigned to restricted AUs",
+ "scheduleInfo": {
+ "startDateTime": "2022-04-14T00:00:00.000Z",
+ "expiration": {
+ "type": "AfterDuration",
+ "duration": "PT5H"
+ }
+ },
+ "ticketInfo": {
+ "ticketNumber": "CONTOSO:Normal-67890",
+ "ticketSystem": "MS Project"
+ }
+}
````

#### HTTP response

````HTTP
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleAssignmentScheduleRequests/$entity",
- "id": "f1ccef03-8750-40e0-b488-5aa2f02e2e55",
- "status": "PendingApprovalProvisioning",
- "createdDateTime": "2021-07-15T19:51:07.1870599Z",
- "completedDateTime": "2021-07-15T19:51:17.3903028Z",
- "approvalId": "<approval-ID-GUID>",
- "customData": null,
- "action": "SelfActivate",
- "principalId": "<principal-ID-GUID>",
- "roleDefinitionId": "<definition-ID-GUID>",
- "directoryScopeId": "/",
- "appScopeId": null,
- "isValidationOnly": false,
- "targetScheduleId": "<schedule-ID-GUID>",
- "justification": "test",
- "createdBy": {
- "application": null,
- "device": null,
- "user": {
- "displayName": null,
- "id": "<user-ID-GUID>"
- }
- },
- "scheduleInfo": {
- "startDateTime": null,
- "recurrence": null,
- "expiration": {
- "type": "afterDuration",
- "endDateTime": null,
- "duration": "PT5H30M"
- }
- },
- "ticketInfo": {
- "ticketNumber": null,
- "ticketSystem": null
- }
-}
+HTTP/1.1 201 Created
+Content-Type: application/json
+
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#roleManagement/directory/roleAssignmentScheduleRequests/$entity",
+ "id": "911bab8a-6912-4de2-9dc0-2648ede7dd6d",
+ "status": "Granted",
+ "createdDateTime": "2022-04-13T08:52:32.6485851Z",
+ "completedDateTime": "2022-04-14T00:00:00Z",
+ "approvalId": null,
+ "customData": null,
+ "action": "selfActivate",
+ "principalId": "071cc716-8147-4397-a5ba-b2105951cc0b",
+ "roleDefinitionId": "8424c6f0-a189-499e-bbd0-26c1753c96d4",
+ "directoryScopeId": "/",
+ "appScopeId": null,
+ "isValidationOnly": false,
+ "targetScheduleId": "911bab8a-6912-4de2-9dc0-2648ede7dd6d",
+ "justification": "I need access to the Attribute Administrator role to manage attributes to be assigned to restricted AUs",
+ "createdBy": {
+ "application": null,
+ "device": null,
+ "user": {
+ "displayName": null,
+ "id": "071cc716-8147-4397-a5ba-b2105951cc0b"
+ }
+ },
+ "scheduleInfo": {
+ "startDateTime": "2022-04-14T00:00:00Z",
+ "recurrence": null,
+ "expiration": {
+ "type": "afterDuration",
+ "endDateTime": null,
+ "duration": "PT5H"
+ }
+ },
+ "ticketInfo": {
+ "ticketNumber": "CONTOSO:Normal-67890",
+ "ticketSystem": "MS Project"
+ }
+}
````

## View the status of activation requests
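As a hedged sketch, assuming the v1.0 endpoint, you can check the status of your own activation requests with the same `filterByCurrentUser` function used above:

````HTTP
GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests/filterByCurrentUser(on='principal')
````

The **status** property of each returned request reports values such as `Granted` or `Provisioned`, as shown in the samples above.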
active-directory Pim How To Add Role To User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-add-role-to-user.md
For certain roles, the scope of the granted permissions can be restricted to a s
For more information about creating administrative units, see [Add and remove administrative units](../roles/admin-units-manage.md).
-## Assign a role using Graph API
+## Assign a role using Microsoft Graph API
+
+For more information about Microsoft Graph APIs for PIM, see [Overview of role management through the privileged identity management (PIM) API](/graph/api/resources/privilegedidentitymanagementv3-overview).
For permissions required to use the PIM API, see [Understand the Privileged Identity Management APIs](pim-apis.md).

### Eligible with no end date
-The following is a sample HTTP request to create an eligible assignment with no end date. For details on the API commands including samples such as C# and JavaScript, see [Create unifiedRoleEligibilityScheduleRequest](/graph/api/unifiedroleeligibilityschedulerequest-post-unifiedroleeligibilityschedulerequests?view=graph-rest-beta&tabs=http&preserve-view=true).
+The following is a sample HTTP request to create an eligible assignment with no end date. For details on the API commands including request samples in languages such as C# and JavaScript, see [Create roleEligibilityScheduleRequests](/graph/api/rbacapplication-post-roleeligibilityschedulerequests).
#### HTTP request

````HTTP
-POST https://graph.microsoft.com/beta/rolemanagement/directory/roleEligibilityScheduleRequests
-
- "action": "AdminAssign",
- "justification": "abcde",
- "directoryScopeId": "/",
- "principalId": "<principal-ID-GUID>",
- "roleDefinitionId": "<definition-ID-GUID>",
- "scheduleInfo": {
- "startDateTime": "2021-07-15T19:15:08.941Z",
- "expiration": {
- "type": "NoExpiration" }
- }
-{
-}
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleEligibilityScheduleRequests
+Content-Type: application/json
+
+{
+ "action": "adminAssign",
+ "justification": "Permanently assign the Global Reader to the auditor",
+ "roleDefinitionId": "f2ef992c-3afb-46b9-b7cf-a126ee74c451",
+ "directoryScopeId": "/",
+ "principalId": "071cc716-8147-4397-a5ba-b2105951cc0b",
+ "scheduleInfo": {
+ "startDateTime": "2022-04-10T00:00:00Z",
+ "expiration": {
+ "type": "noExpiration"
+ }
+ }
+}
````

#### HTTP response
POST https://graph.microsoft.com/beta/rolemanagement/directory/roleEligibilitySc
The following is an example of the response. The response object shown here might be shortened for readability.

````HTTP
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleEligibilityScheduleRequests/$entity",
- "id": "<schedule-ID-GUID>",
- "status": "Provisioned",
- "createdDateTime": "2021-07-15T19:47:41.0939004Z",
- "completedDateTime": "2021-07-15T19:47:42.4376681Z",
- "approvalId": null,
- "customData": null,
- "action": "AdminAssign",
- "principalId": "<principal-ID-GUID>",
- "roleDefinitionId": "<definition-ID-GUID>",
- "directoryScopeId": "/",
- "appScopeId": null,
- "isValidationOnly": false,
- "targetScheduleId": "<schedule-ID-GUID>",
- "justification": "test",
- "createdBy": {
- "application": null,
- "device": null,
- "user": {
- "displayName": null,
- "id": "<user-ID-GUID>"
- }
- },
- "scheduleInfo": {
- "startDateTime": "2021-07-15T19:47:42.4376681Z",
- "recurrence": null,
- "expiration": {
- "type": "noExpiration",
- "endDateTime": null,
- "duration": null
- }
- },
- "ticketInfo": {
- "ticketNumber": null,
- "ticketSystem": null
- }
-}
+HTTP/1.1 201 Created
+Content-Type: application/json
+
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#roleManagement/directory/roleEligibilityScheduleRequests/$entity",
+ "id": "42159c11-45a9-4631-97e4-b64abdd42c25",
+ "status": "Provisioned",
+ "createdDateTime": "2022-05-13T13:40:33.2364309Z",
+ "completedDateTime": "2022-05-13T13:40:34.6270851Z",
+ "approvalId": null,
+ "customData": null,
+ "action": "adminAssign",
+ "principalId": "071cc716-8147-4397-a5ba-b2105951cc0b",
+ "roleDefinitionId": "f2ef992c-3afb-46b9-b7cf-a126ee74c451",
+ "directoryScopeId": "/",
+ "appScopeId": null,
+ "isValidationOnly": false,
+ "targetScheduleId": "42159c11-45a9-4631-97e4-b64abdd42c25",
+ "justification": "Permanently assign the Global Reader to the auditor",
+ "createdBy": {
+ "application": null,
+ "device": null,
+ "user": {
+ "displayName": null,
+ "id": "3fbd929d-8c56-4462-851e-0eb9a7b3a2a5"
+ }
+ },
+ "scheduleInfo": {
+ "startDateTime": "2022-05-13T13:40:34.6270851Z",
+ "recurrence": null,
+ "expiration": {
+ "type": "noExpiration",
+ "endDateTime": null,
+ "duration": null
+ }
+ },
+ "ticketInfo": {
+ "ticketNumber": null,
+ "ticketSystem": null
+ }
+}
````

### Active and time-bound
-The following is a sample HTTP request to create an active assignment that's time-bound. For details on the API commands including samples such as C# and JavaScript, see [Create unifiedRoleEligibilityScheduleRequest](/graph/api/unifiedroleeligibilityschedulerequest-post-unifiedroleeligibilityschedulerequests?view=graph-rest-beta&tabs=http&preserve-view=true).
+The following is a sample HTTP request to create an active assignment that's time-bound. For details on the API commands including request samples in languages such as C# and JavaScript, see [Create roleAssignmentScheduleRequests](/graph/api/rbacapplication-post-roleassignmentschedulerequests).
#### HTTP request

````HTTP
-POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentScheduleRequests
-
-{
- "action": "AdminAssign",
- "justification": "abcde",
- "directoryScopeId": "/",
- "principalId": "<principal-ID-GUID>",
- "roleDefinitionId": "<definition-ID-GUID>",
- "scheduleInfo": {
- "startDateTime": "2021-07-15T19:15:08.941Z",
- "expiration": {
- "type": "AfterDuration",
- "duration": "PT3H"
- }
- }
-}
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests
+
+{
+ "action": "adminAssign",
+ "justification": "Assign the Exchange Recipient Administrator to the mail admin",
+ "roleDefinitionId": "31392ffb-586c-42d1-9346-e59415a2cc4e",
+ "directoryScopeId": "/",
+ "principalId": "071cc716-8147-4397-a5ba-b2105951cc0b",
+ "scheduleInfo": {
+ "startDateTime": "2022-04-10T00:00:00Z",
+ "expiration": {
+ "type": "afterDuration",
+ "duration": "PT3H"
+ }
+ }
+}
````

#### HTTP response
POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentSch
The following is an example of the response. The response object shown here might be shortened for readability.

````HTTP
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleAssignmentScheduleRequests/$entity",
- "id": "<schedule-ID-GUID>",
- "status": "Provisioned",
- "createdDateTime": "2021-07-15T19:15:09.7093491Z",
- "completedDateTime": "2021-07-15T19:15:11.4437343Z",
- "approvalId": null,
- "customData": null,
- "action": "AdminAssign",
- "principalId": "<principal-ID-GUID>",
- "roleDefinitionId": "<definition-ID-GUID>",
- "directoryScopeId": "/",
- "appScopeId": null,
- "isValidationOnly": false,
- "targetScheduleId": "<schedule-ID-GUID>",
- "justification": "test",
- "createdBy": {
- "application": null,
- "device": null,
- "user": {
- "displayName": null,
- "id": "<user-ID-GUID>"
- }
- },
- "scheduleInfo": {
- "startDateTime": "2021-07-15T19:15:11.4437343Z",
- "recurrence": null,
- "expiration": {
- "type": "afterDuration",
- "endDateTime": null,
- "duration": "PT3H"
- }
- },
- "ticketInfo": {
- "ticketNumber": null,
- "ticketSystem": null
- }
-}
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#roleManagement/directory/roleAssignmentScheduleRequests/$entity",
+ "id": "ac643e37-e75c-4b42-960a-b0fc3fbdf4b3",
+ "status": "Provisioned",
+ "createdDateTime": "2022-05-13T14:01:48.0145711Z",
+ "completedDateTime": "2022-05-13T14:01:49.8589701Z",
+ "approvalId": null,
+ "customData": null,
+ "action": "adminAssign",
+ "principalId": "071cc716-8147-4397-a5ba-b2105951cc0b",
+ "roleDefinitionId": "31392ffb-586c-42d1-9346-e59415a2cc4e",
+ "directoryScopeId": "/",
+ "appScopeId": null,
+ "isValidationOnly": false,
+ "targetScheduleId": "ac643e37-e75c-4b42-960a-b0fc3fbdf4b3",
+ "justification": "Assign the Exchange Recipient Administrator to the mail admin",
+ "createdBy": {
+ "application": null,
+ "device": null,
+ "user": {
+ "displayName": null,
+ "id": "3fbd929d-8c56-4462-851e-0eb9a7b3a2a5"
+ }
+ },
+ "scheduleInfo": {
+ "startDateTime": "2022-05-13T14:01:49.8589701Z",
+ "recurrence": null,
+ "expiration": {
+ "type": "afterDuration",
+ "endDateTime": null,
+ "duration": "PT3H"
+ }
+ },
+ "ticketInfo": {
+ "ticketNumber": null,
+ "ticketSystem": null
+ }
+}
````

## Update or remove an existing role assignment
Follow these steps to update or remove an existing role assignment. **Azure AD P
1. Select **Update** or **Remove** to update or remove the role assignment.
-## Remove eligible assignment via API
+## Remove eligible assignment via Microsoft Graph API
+
+The following is a sample HTTP request to revoke an eligible assignment to a role from a principal. For details on the API commands including request samples in languages such as C# and JavaScript, see [Create roleEligibilityScheduleRequests](/graph/api/rbacapplication-post-roleeligibilityschedulerequests).
### Request

````HTTP
-POST https://graph.microsoft.com/beta/roleManagement/directory/roleEligibilityScheduleRequests
-
-
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleEligibilityScheduleRequests
{ "action": "AdminRemove",
POST https://graph.microsoft.com/beta/roleManagement/directory/roleEligibilitySc
````HTTP
{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleEligibilityScheduleRequests/$entity",
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#roleManagement/directory/roleEligibilityScheduleRequests/$entity",
"id": "fc7bb2ca-b505-4ca7-ad2a-576d152633de", "status": "Revoked", "createdDateTime": "2021-07-15T20:23:23.85453Z",
active-directory Pim How To Renew Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-renew-extend.md
To extend a role assignment, browse to the role or assignment view in Privileged
![Azure AD Roles - Assignments page listing eligible roles with links to extend](./media/pim-how-to-renew-extend/extend-admin-extend.png)
-## Extend role assignments using Graph API
+## Extend role assignments using Microsoft Graph API
-Extend an active assignment using Graph API.
+In the following request, an administrator extends an active assignment using Microsoft Graph API.
#### HTTP request

````HTTP
-POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentScheduleRequests
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests
-{
- "action": "AdminExtend",
- "justification": "abcde",
- "roleDefinitionId": "<definition-ID-GUID>",
- "directoryScopeId": "/",
- `"principalId": "<principal-ID-GUID>",
- "scheduleInfo": {
- "startDateTime": "2021-07-15T19:15:08.941Z",
- "expiration": {
- "type": "AfterDuration",
- "duration": "PT3H"
- }
- }
+{
+ "action": "adminExtend",
+ "justification": "TEST",
+ "roleDefinitionId": "31392ffb-586c-42d1-9346-e59415a2cc4e",
+ "directoryScopeId": "/",
+ "principalId": "071cc716-8147-4397-a5ba-b2105951cc0b",
+ "scheduleInfo": {
+ "startDateTime": "2022-04-10T00:00:00Z",
+ "expiration": {
+ "type": "afterDuration",
+ "duration": "PT3H"
+ }
+ }
}
````

#### HTTP response

````HTTP
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleAssignmentScheduleRequests/$entity",
- "id": "<assignment-ID-GUID>",
- "status": "Provisioned",
- "createdDateTime": "2021-07-15T20:26:44.865248Z",
- "completedDateTime": "2021-07-15T20:26:47.9434068Z",
- "approvalId": null,
- "customData": null,
- "action": "AdminExtend",
- "principalId": "<principal-ID-GUID>",
- "roleDefinitionId": "<definition-ID-GUID>",
- "directoryScopeId": "/",
- "appScopeId": null,
- "isValidationOnly": false,
- "targetScheduleId": "<schedule-ID-GUID>",
- "justification": "test",
- "createdBy": {
- "application": null,
- "device": null,
- "user": {
- "displayName": null,
- "id": "<user-ID-GUID>"
- }
- },
- "scheduleInfo": {
- "startDateTime": "2021-07-15T20:26:47.9434068Z",
- "recurrence": null,
- "expiration": {
- "type": "afterDuration",
- "endDateTime": null,
- "duration": "PT3H"
- }
- },
- "ticketInfo": {
- "ticketNumber": null,
- "ticketSystem": null
- }
-}
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#roleManagement/directory/roleAssignmentScheduleRequests/$entity",
+ "id": "c3a3aa36-22e2-4240-8e4c-ea2a3af7c30f",
+ "status": "Provisioned",
+ "createdDateTime": "2022-05-13T16:18:36.3647674Z",
+ "completedDateTime": "2022-05-13T16:18:40.0835993Z",
+ "approvalId": null,
+ "customData": null,
+ "action": "adminExtend",
+ "principalId": "071cc716-8147-4397-a5ba-b2105951cc0b",
+ "roleDefinitionId": "31392ffb-586c-42d1-9346-e59415a2cc4e",
+ "directoryScopeId": "/",
+ "appScopeId": null,
+ "isValidationOnly": false,
+ "targetScheduleId": "c3a3aa36-22e2-4240-8e4c-ea2a3af7c30f",
+ "justification": "TEST",
+ "createdBy": {
+ "application": null,
+ "device": null,
+ "user": {
+ "displayName": null,
+ "id": "3fbd929d-8c56-4462-851e-0eb9a7b3a2a5"
+ }
+ },
+ "scheduleInfo": {
+ "startDateTime": "2022-05-13T16:18:40.0835993Z",
+ "recurrence": null,
+ "expiration": {
+ "type": "afterDuration",
+ "endDateTime": null,
+ "duration": "PT3H"
+ }
+ },
+ "ticketInfo": {
+ "ticketNumber": null,
+ "ticketSystem": null
+ }
+}
````

## Renew role assignments
advisor Advisor Reference Operational Excellence Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md
Learn more about [Kubernetes service - EnableClusterAutoscaler (Enable the Clust
Some of the subnets for this cluster's node pools are full and cannot take any more worker nodes. Using the Azure CNI plugin requires reserving IP addresses for each node and all of its pods at node provisioning time. If there is not enough IP address space in the subnet, no worker nodes can be deployed. Additionally, the AKS cluster cannot be upgraded if the node subnet is full.
-Learn more about [Kubernetes service - NodeSubnetIsFull (The AKS node pool subnet is full)](../aks/use-multiple-node-pools.md#add-a-node-pool-with-a-unique-subnet-preview).
+Learn more about [Kubernetes service - NodeSubnetIsFull (The AKS node pool subnet is full)](../aks/use-multiple-node-pools.md#add-a-node-pool-with-a-unique-subnet).
### Disable the Application Routing Addon
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
Title: Configure Azure CNI networking in Azure Kubernetes Service (AKS)
description: Learn how to configure Azure CNI (advanced) networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnet. Previously updated : 06/03/2019 Last updated : 05/16/2022
The following screenshot from the Azure portal shows an example of configuring t
![Advanced networking configuration in the Azure portal][portal-01-networking-advanced]
-## Dynamic allocation of IPs and enhanced subnet support (preview)
-
+## Dynamic allocation of IPs and enhanced subnet support
A drawback with the traditional CNI is the exhaustion of pod IP addresses as the AKS cluster grows, resulting in the need to rebuild the entire cluster in a bigger subnet. The new dynamic IP allocation capability in Azure CNI solves this problem by allotting pod IPs from a subnet separate from the subnet hosting the AKS cluster. It offers the following benefits:
The [prerequisites][prerequisites] already listed for Azure CNI still apply, but
* Only Linux node clusters and node pools are supported.
* AKS Engine and DIY clusters are not supported.
-
-### Install the `aks-preview` Azure CLI
-
-You will need the *aks-preview* Azure CLI extension. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
-
-### Register the `PodSubnetPreview` preview feature
-
-To use the feature, you must also enable the `PodSubnetPreview` feature flag on your subscription.
-
-Register the `PodSubnetPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "PodSubnetPreview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/PodSubnetPreview')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+* Azure CLI version `2.37.0` or later.
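For example, a hedged sketch of creating a cluster that uses dynamic IP allocation, with placeholder resource names and assuming the node and pod subnets already exist in the same virtual network:

```azurecli-interactive
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --vnet-subnet-id <node-subnet-resource-id> \
    --pod-subnet-id <pod-subnet-resource-id>
```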
### Planning IP addressing
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
Title: Use multiple node pools in Azure Kubernetes Service (AKS)
description: Learn how to create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS) Previously updated : 02/11/2021 Last updated : 05/16/2022
The following example output shows that *mynodepool* has been successfully creat
> [!TIP]
> If no *VmSize* is specified when you add a node pool, the default size is *Standard_D2s_v3* for Windows node pools and *Standard_DS2_v2* for Linux node pools. If no *OrchestratorVersion* is specified, it defaults to the same version as the control plane.
-### Add a node pool with a unique subnet (preview)
+### Add a node pool with a unique subnet
A workload may require splitting a cluster's nodes into separate pools for logical isolation. This isolation can be supported with separate subnets dedicated to each node pool in the cluster. This can address requirements such as having non-contiguous virtual network address space to split across node pools.
+> [!NOTE]
+> Make sure to use Azure CLI version `2.35.0` or later.
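A hedged sketch of adding such a node pool, with placeholder names; per the limitations that follow, the subnet must belong to the same virtual network as the cluster's other node pool subnets:

```azurecli-interactive
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool2 \
    --node-count 3 \
    --vnet-subnet-id <subnet-resource-id>
```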
+
#### Limitations

* All subnets assigned to nodepools must belong to the same virtual network.
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resources providers that are marked with **- registered** are registered by
| Microsoft.Logic | [Logic Apps](../../logic-apps/index.yml) |
| Microsoft.MachineLearning | [Machine Learning Studio](../../machine-learning/classic/index.yml) |
| Microsoft.MachineLearningServices | [Azure Machine Learning](../../machine-learning/index.yml) |
-| Microsoft.Maintenance | [Azure Maintenance](../../virtual-machines/maintenance-control-cli.md) |
+| Microsoft.Maintenance | [Azure Maintenance](../../virtual-machines/maintenance-configurations.md) |
| Microsoft.ManagedIdentity | [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/index.yml) |
| Microsoft.ManagedNetwork | Virtual networks managed by PaaS services |
| Microsoft.ManagedServices | [Azure Lighthouse](../../lighthouse/index.yml) |
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md
Title: Lock resources to prevent changes
-description: Prevent users from updating or deleting Azure resources by applying a lock for all users and roles.
+ Title: Protect your Azure resources with a lock
+description: You can safeguard Azure resources from updates or deletions by locking down all users and roles.
Previously updated : 04/13/2022 Last updated : 05/13/2022
-# Lock resources to prevent unexpected changes
+# Lock your resources to protect your infrastructure
-As an administrator, you can lock a subscription, resource group, or resource to prevent other users in your organization from accidentally deleting or modifying critical resources. The lock overrides any permissions the user might have.
+As an administrator, you can lock an Azure subscription, resource group, or resource to protect them from accidental user deletions and modifications. The lock overrides any user permissions.
-You can set the lock level to **CanNotDelete** or **ReadOnly**. In the portal, the locks are called **Delete** and **Read-only** respectively.
+You can set locks that prevent either deletions or modifications. In the portal, these locks are called Delete and Read-only. In the command line, these locks are called **CanNotDelete** or **ReadOnly**. In the left navigation panel, the subscription lock feature's name is **Resource locks**, while the resource group lock feature's name is **Locks**.
-- **CanNotDelete** means authorized users can still read and modify a resource, but they can't delete the resource.
-- **ReadOnly** means authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the **Reader** role.
+- **CanNotDelete** means authorized users can read and modify a resource, but they can't delete it.
+- **ReadOnly** means authorized users can read a resource, but they can't delete or update it. Applying this lock is similar to restricting all authorized users to the permissions that the **Reader** role provides.
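For example, a minimal sketch of applying each lock level with the Azure CLI, using placeholder names:

```azurecli-interactive
# Block deletion of a resource group and everything in it
az lock create --name LockGroup --lock-type CanNotDelete --resource-group exampleresourcegroup

# Make a resource group read-only
az lock create --name LockGroupReadOnly --lock-type ReadOnly --resource-group exampleresourcegroup
```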
Unlike role-based access control, you use management locks to apply a restriction across all users and roles. To learn about setting permissions for users and roles, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md). ## Lock inheritance
-When you apply a lock at a parent scope, all resources within that scope inherit the same lock. Even resources you add later inherit the lock from the parent. The most restrictive lock in the inheritance takes precedence.
+When you apply a lock at a parent scope, all resources within that scope inherit the same lock. Even resources you add later inherit the same parent lock. The most restrictive lock in the inheritance takes precedence.
-If you have a **Delete** lock on a resource and attempt to delete its resource group, the whole delete operation is blocked. Even if the resource group or other resources in the resource group aren't locked, the deletion doesn't happen. You never have a partial deletion.
+If you have a **Delete** lock on a resource and attempt to delete its resource group, the feature blocks the whole delete operation. Even if the resource group or other resources in the resource group are unlocked, the deletion doesn't happen. You never have a partial deletion.
-When you [cancel an Azure subscription](../../cost-management-billing/manage/cancel-azure-subscription.md#what-happens-after-subscription-cancellation), the resources are initially deactivated but not deleted. A resource lock doesn't block canceling the subscription. After a waiting period, the resources are permanently deleted. The resource lock doesn't prevent the permanent deletion of the resources.
+When you [cancel an Azure subscription](../../cost-management-billing/manage/cancel-azure-subscription.md#what-happens-after-subscription-cancellation):
+* A resource lock doesn't block the subscription cancellation.
+* Azure preserves your resources by deactivating them instead of immediately deleting them.
+* Azure only deletes your resources permanently after a waiting period.
## Understand scope of locks

> [!NOTE]
-> It's important to understand that locks don't apply to all types of operations. Azure operations can be divided into two categories - control plane and data plane. **Locks only apply to control plane operations**.
+> Locks only apply to control plane Azure operations and not data plane operations.
-Control plane operations are operations sent to `https://management.azure.com`. Data plane operations are operations sent to your instance of a service, such as `https://myaccount.blob.core.windows.net/`. For more information, see [Azure control plane and data plane](control-plane-and-data-plane.md). To discover which operations use the control plane URL, see the [Azure REST API](/rest/api/azure/).
+Azure control plane operations go to `https://management.azure.com`. Azure data plane operations go to your service instance, such as `https://myaccount.blob.core.windows.net/`. See [Azure control plane and data plane](control-plane-and-data-plane.md). To discover which operations use the control plane URL, see the [Azure REST API](/rest/api/azure/).
-This distinction means locks prevent changes to a resource, but they don't restrict how resources perform their own functions. For example, a ReadOnly lock on a SQL Database logical server prevents you from deleting or modifying the server. It doesn't prevent you from creating, updating, or deleting data in the databases on that server. Data transactions are permitted because those operations aren't sent to `https://management.azure.com`.
+The distinction means locks protect a resource from changes, but they don't restrict how a resource performs its functions. A ReadOnly lock, for example, on an SQL Database logical server, protects it from deletions or modifications. It allows you to create, update, or delete data in the server database. Data plane operations allow data transactions. These requests don't go to `https://management.azure.com`.
-More examples of the differences between control and data plane operations are described in the next section.
+## Considerations before applying your locks
-## Considerations before applying locks
+Applying locks can lead to unexpected results. Some operations, which don't seem to modify a resource, require blocked actions. Locks prevent the POST method from sending data to the Azure Resource Manager API. Some common examples of blocked operations are:
-Applying locks can lead to unexpected results because some operations that don't seem to modify the resource actually require actions that are blocked by the lock. Locks will prevent any operations that require a POST request to the Azure Resource Manager API. Some common examples of the operations that are blocked by locks are:
+- A read-only lock on a **storage account** prevents users from listing the account keys. The Azure Storage [List Keys](/rest/api/storagerp/storageaccounts/listkeys) operation is handled through a POST request to protect access to the account keys, which provide complete access to data in the storage account. When a read-only lock is configured for a storage account, users who don't have the account keys must use Azure AD credentials to access blob or queue data. A read-only lock also prevents the assignment of Azure RBAC roles that are scoped to the storage account or to a data container (blob container or queue).
-- A read-only lock on a **storage account** prevents users from listing the account keys. The Azure Storage [List Keys](/rest/api/storagerp/storageaccounts/listkeys) operation is handled through a POST request to protect access to the account keys, which provide complete access to data in the storage account. When a read-only lock is configured for a storage account, users who don't have the account keys must use Azure AD credentials to access blob or queue data. A read-only lock also prevents the assignment of Azure RBAC roles that are scoped to the storage account or to a data container (blob container or queue).
+- A read-only lock on a **storage account** protects Azure Role-Based Access Control (RBAC) assignments scoped for a storage account or a data container (blob container or queue).
-- A cannot-delete lock on a **storage account** doesn't prevent data within that account from being deleted or modified. This type of lock only protects the storage account itself from being deleted. If a request uses [data plane operations](control-plane-and-data-plane.md#data-plane), the lock on the storage account doesn't protect blob, queue, table, or file data within that storage account. However, if the request uses [control plane operations](control-plane-and-data-plane.md#control-plane), the lock protects those resources.
+- A cannot-delete lock on a **storage account** doesn't protect account data from deletion or modification. It only protects the storage account from deletion. If a request uses [data plane operations](control-plane-and-data-plane.md#data-plane), the lock on the storage account doesn't protect blob, queue, table, or file data within that storage account. If the request uses [control plane operations](control-plane-and-data-plane.md#control-plane), however, the lock protects those resources.
- For example, if a request uses [File Shares - Delete](/rest/api/storagerp/file-shares/delete), which is a control plane operation, the deletion is denied. If the request uses [Delete Share](/rest/api/storageservices/delete-share), which is a data plane operation, the deletion succeeds. We recommend that you use the control plane operations.
+ If a request uses [File Shares - Delete](/rest/api/storagerp/file-shares/delete), for example, which is a control plane operation, the deletion fails. If the request uses [Delete Share](/rest/api/storageservices/delete-share), which is a data plane operation, the deletion succeeds. We recommend that you use a control plane operation.
-- A read-only lock on a **storage account** doesn't prevent data within that account from being deleted or modified. This type of lock only protects the storage account itself from being deleted or modified, and doesn't protect blob, queue, table, or file data within that storage account.
+- A read-only lock on a **storage account** doesn't prevent its data from deletion or modification. It also doesn't protect its blob, queue, table, or file data.
- A read-only lock on an **App Service** resource prevents Visual Studio Server Explorer from displaying files for the resource because that interaction requires write access.

-- A read-only lock on a **resource group** that contains an **App Service plan** prevents you from [scaling up or out the plan](../../app-service/manage-scale-up.md).
+- A read-only lock on a **resource group** that contains an **App Service plan** prevents you from [scaling the plan up or out](../../app-service/manage-scale-up.md).
-- A read-only lock on a **resource group** that contains a **virtual machine** prevents all users from starting or restarting the virtual machine. These operations require a POST request.
+- A read-only lock on a **resource group** that contains a **virtual machine** prevents all users from starting or restarting a virtual machine. These operations require a POST method request.
-- A read-only lock on a **resource group** that contains an **automation account** prevents all runbooks from starting. These operations require a POST request.
+- A read-only lock on a **resource group** that contains an **automation account** prevents all runbooks from starting. These operations require a POST method request.
-- A cannot-delete lock on a **resource group** prevents Azure Resource Manager from [automatically deleting deployments](../templates/deployment-history-deletions.md) in the history. If you reach 800 deployments in the history, your deployments will fail.
+- A cannot-delete lock on a **resource group** prevents Azure Resource Manager from [automatically deleting deployments](../templates/deployment-history-deletions.md) in the history. If you reach 800 deployments in the history, your deployments fail.
- A cannot-delete lock on the **resource group** created by **Azure Backup Service** causes backups to fail. The service supports a maximum of 18 restore points. When locked, the backup service can't clean up restore points. For more information, see [Frequently asked questions-Back up Azure VMs](../../backup/backup-azure-vm-backup-faq.yml).
Applying locks can lead to unexpected results because some operations that don't
- A read-only lock on a **subscription** prevents **Azure Advisor** from working correctly. Advisor is unable to store the results of its queries.

-- A read-only lock on an **Application Gateway** prevents you from getting the backend health of the application gateway. That [operation uses POST](/rest/api/application-gateway/application-gateways/backend-health), which is blocked by the read-only lock.
+- A read-only lock on an **Application Gateway** prevents you from getting the backend health of the application gateway. That [operation uses a POST method](/rest/api/application-gateway/application-gateways/backend-health), which a read-only lock blocks.
-- A read-only lock on a **AKS cluster** prevents all users from accessing any cluster resources from the **Kubernetes Resources** section of AKS cluster on the left of the Azure portal. These operations require a POST request for authentication.
+- A read-only lock on an AKS cluster limits how you can access cluster resources through the portal. A read-only lock prevents you from using the AKS cluster's **Kubernetes Resources** section in the Azure portal to choose a cluster resource. These operations require a POST method request for authentication.
## Who can create or delete locks
-To create or delete management locks, you must have access to `Microsoft.Authorization/*` or `Microsoft.Authorization/locks/*` actions. Of the built-in roles, only **Owner** and **User Access Administrator** are granted those actions.
+To create or delete management locks, you need access to `Microsoft.Authorization/*` or `Microsoft.Authorization/locks/*` actions. Only the **Owner** and the **User Access Administrator** built-in roles can create and delete management locks. You can create a custom role with the required permissions.
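As a hedged sketch, a custom role granting only lock management might look like the following; the role name and subscription ID are placeholders:

```json
{
  "Name": "Lock Administrator",
  "IsCustom": true,
  "Description": "Can create and delete management locks.",
  "Actions": [
    "Microsoft.Authorization/locks/*"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```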
-## Managed Applications and locks
+## Managed applications and locks
-Some Azure services, such as Azure Databricks, use [managed applications](../managed-applications/overview.md) to implement the service. In that case, the service creates two resource groups. One resource group contains an overview of the service and isn't locked. The other resource group contains the infrastructure for the service and is locked.
+Some Azure services, such as Azure Databricks, use [managed applications](../managed-applications/overview.md) to implement the service. In that case, the service creates two resource groups. One is an unlocked resource group that contains a service overview. The other is a locked resource group that contains the service infrastructure.
-If you try to delete the infrastructure resource group, you get an error stating that the resource group is locked. If you try to delete the lock for the infrastructure resource group, you get an error stating that the lock can't be deleted because it's owned by a system application.
+If you try to delete the infrastructure resource group, you get an error stating that the resource group is locked. If you try to delete the lock for the infrastructure resource group, you get an error stating that the lock can't be deleted because a system application owns it.
Instead, delete the service, which also deletes the infrastructure resource group.
For managed applications, select the service you deployed.
![Select service](./media/lock-resources/select-service.png)
-Notice the service includes a link for a **Managed Resource Group**. That resource group holds the infrastructure and is locked. It can't be directly deleted.
+Notice the service includes a link for a **Managed Resource Group**. That resource group holds the infrastructure and is locked. You can only delete it indirectly.
![Show managed group](./media/lock-resources/show-managed-group.png)
-To delete everything for the service, including the locked infrastructure resource group, select **Delete** for the service.
+To delete everything for the service, including the locked infrastructure resource group, choose **Delete** for the service.
![Delete service](./media/lock-resources/delete-service.png)
### Template
-When using an Azure Resource Manager template (ARM template) or Bicep file to deploy a lock, you need to be aware of the scope of the lock and the scope of the deployment. To apply a lock at the deployment scope, such as locking a resource group or subscription, don't set the scope property. When locking a resource within the deployment scope, set the scope property.
+When using an Azure Resource Manager template (ARM template) or Bicep file to deploy a lock, it's good to understand how the deployment scope and the lock scope work together. To apply a lock at the deployment scope, such as locking a resource group or a subscription, leave the scope property unset. When locking a resource within the deployment scope, set the scope property on the lock.
-The following template applies a lock to the resource group it's deployed to. Notice there isn't a scope property on the lock resource because the scope of the lock matches the scope of deployment. This template is deployed at the resource group level.
+The following template applies a lock to the resource group it's deployed to. Notice there isn't a scope property on the lock resource because the lock scope matches the deployment scope. Deploy this template at the resource group level.
# [JSON](#tab/json)
resource createRgLock 'Microsoft.Authorization/locks@2016-09-01' = {
When applying a lock to a **resource** within the resource group, add the scope property. Set scope to the name of the resource to lock.
-The following example shows a template that creates an app service plan, a website, and a lock on the website. The scope of the lock is set to the website.
+The following example shows a template that creates an app service plan, a website, and a lock on the website. The lock's scope is set to the website.
# [JSON](#tab/json)
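As a rough sketch rather than the article's full template, a resource-scoped lock in ARM JSON sets `scope` to the website resource; the lock name and the `siteName` parameter are illustrative:

```json
{
  "type": "Microsoft.Authorization/locks",
  "apiVersion": "2016-09-01",
  "name": "siteLock",
  "scope": "[format('Microsoft.Web/sites/{0}', parameters('siteName'))]",
  "dependsOn": [
    "[resourceId('Microsoft.Web/sites', parameters('siteName'))]"
  ],
  "properties": {
    "level": "CanNotDelete",
    "notes": "Prevent deletion of the website."
  }
}
```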
az lock delete --ids $lockid
### REST API
-You can lock deployed resources with the [REST API for management locks](/rest/api/resources/managementlocks). The REST API enables you to create and delete locks, and retrieve information about existing locks.
+You can lock deployed resources with the [REST API for management locks](/rest/api/resources/managementlocks). The REST API lets you create and delete locks and retrieve information about existing locks.
To create a lock, run:
```http
PUT https://management.azure.com/{scope}/providers/Microsoft.Authorization/locks/{lock-name}?api-version={api-version}
```
-The scope could be a subscription, resource group, or resource. The lock-name is whatever you want to call the lock. For api-version, use **2016-09-01**.
+The scope could be a subscription, resource group, or resource. The lock name is whatever you want to call it. For the api-version, use **2016-09-01**.
-In the request, include a JSON object that specifies the properties for the lock.
+In the request, include a JSON object that specifies the lock properties.
```json
{
  "properties": {
    "level": "CanNotDelete",
    "notes": "Optional description for this lock."
  }
}
```
communication-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/reference.md
For each area, we have external pages to track and review our SDKs. You can cons
| Chat | [npm](https://www.npmjs.com/package/@azure/communication-chat) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Chat) | [PyPi](https://pypi.org/project/azure-communication-chat/) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | - |
| Common | [npm](https://www.npmjs.com/package/@azure/communication-common) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Common/) | N/A | [Maven](https://search.maven.org/search?q=a:azure-communication-common) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-common) | - |
| Identity | [npm](https://www.npmjs.com/package/@azure/communication-identity) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Identity) | [PyPi](https://pypi.org/project/azure-communication-identity/) | [Maven](https://search.maven.org/search?q=a:azure-communication-identity) | - | - | - |
-| Network Traversal | [npm](https://www.npmjs.com/package/@azure/communication-network-traversal) | [NuGet](https://www.nuget.org/packages/Azure.Communication.NetworkTraversal) | [PyPi]https://pypi.org/project/azure-communication-networktraversal/) | [Maven](https://search.maven.org/search?q=a:azure-communication-networktraversal) | - | - | - |
+| Network Traversal | [npm](https://www.npmjs.com/package/@azure/communication-network-traversal) | [NuGet](https://www.nuget.org/packages/Azure.Communication.NetworkTraversal) | [PyPi](https://pypi.org/project/azure-communication-networktraversal/) | [Maven](https://search.maven.org/search?q=a:azure-communication-networktraversal) | - | - | - |
| Phone numbers | [npm](https://www.npmjs.com/package/@azure/communication-phone-numbers) | [NuGet](https://www.nuget.org/packages/Azure.Communication.phonenumbers) | [PyPi](https://pypi.org/project/azure-communication-phonenumbers/) | [Maven](https://search.maven.org/search?q=a:azure-communication-phonenumbers) | - | - | - |
| Signaling | [npm](https://www.npmjs.com/package/@azure/communication-signaling) | - | | - | - | - | - |
| SMS | [npm](https://www.npmjs.com/package/@azure/communication-sms) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Sms) | [PyPi](https://pypi.org/project/azure-communication-sms/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-sms) | - | - | - |
communication-services Learn Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/learn-modules.md
+
+ Title: Microsoft Learn modules for Azure Communication Services
+description: Learn about the available Learn modules for Azure Communication Services.
+ Last updated: 06/30/2021
+# Learn modules
+
+If you're looking for more guided experiences that teach you how to use Azure Communication Services, we have several Learn modules at your disposal. These modules provide a structured, step-by-step guide to particular topics. Check them out, and let us know what you think.
+
+- [Introduction to Communication Services](/learn/modules/intro-azure-communication-services/)
+- [Send an SMS message from a C# console application with Azure Communication Services](/learn/modules/communication-service-send-sms-console-app/)
+- [Create a voice calling web app with Azure Communication Services](/learn/modules/communication-services-voice-calling-web-app)
connectors Apis List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/apis-list.md
Title: Connectors overview for Azure Logic Apps
-description: Learn about connectors and how they help you quickly and easily build automated integration workflows using Azure Logic Apps.
+ Title: Overview about connectors in Azure Logic Apps
+description: Learn about connectors to create automated integration workflows in Azure Logic Apps.
ms.suite: integration
Previously updated: 09/13/2021
Last updated: 05/10/2022

# About connectors in Azure Logic Apps
-When you build workflows using Azure Logic Apps, you can use *connectors* to help you quickly and easily access data, events, and resources in other apps, services, systems, protocols, and platforms - often without writing any code. A connector provides prebuilt operations that you can use as steps in your workflows. Azure Logic Apps provides hundreds of connectors that you can use. If no connector is available for the resource that you want to access, you can use the generic HTTP operation to communicate with the service, or you can [create a custom connector](#custom-apis-and-connectors).
+When you build workflows using Azure Logic Apps, you can use *connectors* to help you quickly and easily access data, events, and resources in other apps, services, systems, protocols, and platforms - often without writing any code. A connector provides prebuilt operations that you can use as steps in your workflows. Azure Logic Apps provides hundreds of connectors that you can use. If no connector is available for the resource that you want to access, you can use the generic HTTP operation to communicate with the service, or you can [create a custom connector](#custom-connectors-and-apis).
-This overview offers an introduction to connectors, how they generally work, and the more popular and commonly used connectors in Azure Logic Apps. For more information, review the following documentation:
+This overview provides a high-level introduction to connectors and how they generally work. For information about the more popular and commonly used connectors in Azure Logic Apps, review the following documentation:
-* [Connectors overview for Azure Logic Apps, Microsoft Power Automate, and Microsoft Power Apps](/connectors)
* [Connectors reference for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
+* [Built-in connectors for Azure Logic Apps](built-in.md)
+* [Managed connectors in Azure Logic Apps](managed.md)
* [Pricing and billing models in Azure Logic Apps](../logic-apps/logic-apps-pricing.md)
* [Azure Logic Apps pricing details](https://azure.microsoft.com/pricing/details/logic-apps/)

## What are connectors?
-Technically, a connector is a proxy or a wrapper around an API that the underlying service uses to communicate with Azure Logic Apps. This connector provides operations that you use in your workflows to perform tasks. An operation is available either as a *trigger* or *action* with properties you can configure. Some triggers and actions also require that you first [create and configure a connection](#connection-configuration) to the underlying service or system, for example, so that you can authenticate access to a user account.
+Technically, a connector is a proxy or a wrapper around an API that the underlying service uses to communicate with Azure Logic Apps. This connector provides operations that you use in your workflows to perform tasks. An operation is available either as a *trigger* or *action* with properties you can configure. Some triggers and actions also require that you first [create and configure a connection](#connection-configuration) to the underlying service or system, for example, so that you can authenticate access to a user account. For more overview information, review [Connectors overview for Azure Logic Apps, Microsoft Power Automate, and Microsoft Power Apps](/connectors).
### Triggers

A *trigger* specifies the event that starts the workflow and is always the first step in any workflow. Each trigger also follows a specific firing pattern that controls how the trigger monitors and responds to events. Usually, a trigger follows the *polling* pattern or *push* pattern, but sometimes, a trigger is available in both versions.

- *Polling triggers* regularly check a specific service or system on a specified schedule for new data or a specific event. If new data is available, or the specific event happens, these triggers create and run a new instance of your workflow. This new instance can then use the data that's passed as input.

- *Push triggers* listen for new data or for an event to happen, without polling. When new data is available, or when the event happens, these triggers create and run a new instance of your workflow. This new instance can then use the data that's passed as input.

For example, you might want to build a workflow that does something when a file is uploaded to your FTP server. As the first step in your workflow, you can use the FTP trigger named **When a file is added or modified**, which follows the polling pattern. You can then specify a schedule to regularly check for upload events.
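For a concrete picture of the push pattern, here's a minimal sketch of a workflow definition that starts with a **Request** trigger; the trigger name and empty schema are illustrative:

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {
      "manual": {
        "type": "Request",
        "kind": "Http",
        "inputs": {
          "schema": {}
        }
      }
    },
    "actions": {},
    "outputs": {}
  }
}
```

A polling trigger instead carries a `recurrence` object that controls how often Azure Logic Apps checks the target service or system for new data.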
A trigger also passes along any inputs and other required data into your workflow.
### Actions
-An *action* is an operation that follows the trigger and performs some kind of task in your workflow. You can use multiple actions in your workflow. For example, you might start the workflow with a SQL trigger that detects new customer data in a SQL database. Following the trigger, your workflow can have a SQL action that gets the customer data. Following the SQL action, your workflow can have another action, not necessarily SQL, that processes the data.
+An *action* is an operation that follows the trigger and performs some kind of task in your workflow. You can use multiple actions in your workflow. For example, you might start the workflow with a SQL trigger that detects new customer data in an SQL database. Following the trigger, your workflow can have a SQL action that gets the customer data. Following the SQL action, your workflow can have another action, not necessarily SQL, that processes the data.
## Connector categories
-In Azure Logic Apps, most triggers and actions are available in either a *built-in* version or *managed connector* version. A few triggers and actions are available in both versions. The versions available depend on whether you create a multi-tenant logic app or a single-tenant logic app, which is currently available only in [single-tenant Azure Logic Apps](../logic-apps/single-tenant-overview-compare.md).
+In Azure Logic Apps, most triggers and actions are available in either a *built-in* version or *managed connector* version. A few triggers and actions are available in both versions. The versions available depend on whether you create a *Consumption* logic app that runs in multi-tenant Azure Logic Apps, or a *Standard* logic app that runs in single-tenant Azure Logic Apps.
-[Built-in triggers and actions](built-in.md) run natively on the Logic Apps runtime, don't require creating connections, and perform these kinds of tasks:
+* [Built-in connectors](built-in.md) run natively on the Azure Logic Apps runtime.
-- [Run code in your workflows](built-in.md#run-code-from-workflows).-- [Organize and control your data](built-in.md#control-workflow).-- [Manage or manipulate data](built-in.md#manage-or-manipulate-data).
+* [Managed connectors](managed.md) are deployed, hosted, and managed by Microsoft. These connectors provide triggers and actions for cloud services, on-premises systems, or both.
-[Managed connectors](managed.md) are deployed, hosted, and managed by Microsoft. These connectors provide triggers and actions for cloud services, on-premises systems, or both. Managed connectors are available in these categories:
+ In a *Standard* logic app, all managed connectors are organized as **Azure** connectors. However, in a *Consumption* logic app, managed connectors are organized as **Standard** or **Enterprise**, based on pricing level.
-- [On-premises connectors](managed.md#on-premises-connectors) that help you access data and resources in on-premises systems.-- [Enterprise connectors](managed.md#enterprise-connectors) that provide access to enterprise systems.-- [Integration account connectors](managed.md#integration-account-connectors) that support business-to-business (B2B) communication scenarios.-- [Integration service environment (ISE) connectors](managed.md#ise-connectors) that are a small group of [managed connectors available only for ISEs](#ise-and-connectors).
+For more information about logic app types, review [Resource types and host environment differences](../logic-apps/logic-apps-overview.md#resource-environment-differences).
<a name="connection-configuration"></a> ## Connection configuration
-To create or manage logic app resources and connections, you need certain permissions, which are provided through roles using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). You can assign built-in or customized roles to members who have access to your Azure subscription. Azure Logic Apps has these specific roles:
+In Consumption logic apps, before you can create or manage logic apps and their connections, you need specific permissions. For more information about these permissions, review [Secure operations - Secure access and data in Azure Logic Apps](../logic-apps/logic-apps-securing-a-logic-app.md#secure-operations).
+
+Before you can use a managed connector's triggers or actions in your workflow, many connectors require that you first create a *connection* to the target service or system. To create a connection from within the logic app workflow designer, you have to authenticate your identity with account credentials and sometimes other connection information. For example, before your workflow can access and work with your Office 365 Outlook email account, you must authorize a connection to that account. For some built-in connectors and managed connectors, you can [set up and use a managed identity for authentication](../logic-apps/create-managed-service-identity.md#triggers-actions-managed-identity), rather than provide your credentials.
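+As an illustration, an HTTP action in a workflow definition's `actions` section can authenticate with a managed identity through its `authentication` object. The action name, URI, and audience in this sketch are hypothetical:

```json
"HTTP_Get_secret": {
  "type": "Http",
  "inputs": {
    "method": "GET",
    "uri": "https://contoso-vault.vault.azure.net/secrets/sample-secret?api-version=7.3",
    "authentication": {
      "type": "ManagedServiceIdentity",
      "audience": "https://vault.azure.net"
    }
  },
  "runAfter": {}
}
```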
-* [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor): Lets you manage logic apps, but you can't change access to them.
+Although you create connections within a workflow, these connections are actually separate Azure resources with their own resource definitions. To review these connection resource definitions, follow these steps based on whether you have a Consumption or Standard logic app:
-* [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator): Lets you read, enable, and disable logic apps, but you can't edit or update them.
+* Consumption: To view these connections in the Azure portal, review [View connections for Consumption logic apps in the Azure portal](../logic-apps/manage-logic-apps-with-azure-portal.md#view-connections).
-* [Contributor](../role-based-access-control/built-in-roles.md#contributor): Grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries.
+ To view and manage these connections in Visual Studio, review [Manage Consumption logic apps with Visual Studio](../logic-apps/manage-logic-apps-with-visual-studio.md), and download your logic app from Azure into Visual Studio. For more information about connection resource definitions for Consumption logic apps, review [Connection resource definitions](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md#connection-resource-definitions).
- For example, suppose you have to work with a logic app that you didn't create and authenticate connections used by that logic app's workflow. Your Azure subscription requires Contributor permissions for the resource group that contains that logic app resource. If you create a logic app resource, you automatically have Contributor access.
+* Standard: To view these connections in the Azure portal, review [View connections for Standard logic apps in the Azure portal](../logic-apps/create-single-tenant-workflows-azure-portal.md#view-connections).
-Before you can use a connector's triggers or actions in your workflow, most connectors require that you first create a *connection* to the target service or system. To create a connection from within a logic app workflow, you have to authenticate your identity with account credentials and sometimes other connection information. For example, before your workflow can access and work with your Office 365 Outlook email account, you must authorize a connection to that account. For a small number of built-in operations and managed connectors, you can [set up and use a managed identity for authentication](../logic-apps/create-managed-service-identity.md#triggers-actions-managed-identity), rather than provide your credentials.
+ To view and manage these connections in Visual Studio Code, review [View your logic app in Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md#manage-deployed-apps-vs-code). The **connections.json** file contains the required configuration for the connections created by connectors.
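For a rough idea of that file's shape, the following trimmed sketch shows a single service provider connection; the connection name and app setting are illustrative:

```json
{
  "serviceProviderConnections": {
    "serviceBus": {
      "displayName": "my-servicebus-connection",
      "parameterValues": {
        "connectionString": "@appsetting('serviceBus_connectionString')"
      },
      "serviceProvider": {
        "id": "/serviceProviders/serviceBus"
      }
    }
  },
  "managedApiConnections": {}
}
```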
<a name="connection-security-encryption"></a>
Before you can use a connector's triggers or actions in your workflow, most conn
Connection configuration details, such as server address, username, password, credentials, and secrets, are [encrypted and stored in the secured Azure environment](../security/fundamentals/encryption-overview.md). This information can be used only in logic app resources and by clients who have permissions for the connection resource, which is enforced using linked access checks. Connections that use Azure Active Directory Open Authentication (Azure AD OAuth), such as Office 365, Salesforce, and GitHub, require that you sign in, but Azure Logic Apps stores only access and refresh tokens as secrets, not sign-in credentials.
-Established connections can access the target service or system for as long as that service or system allows. For services that use Azure AD OAuth connections, such as Office 365 and Dynamics, the Logic Apps service refreshes access tokens indefinitely. Other services might have limits on how long Logic Apps can use a token without refreshing. Some actions, such as changing your password, invalidate all access tokens.
-
-Although you create connections from within a workflow, connections are separate Azure resources with their own resource definitions. To review these connection resource definitions, [download your logic app from Azure into Visual Studio](../logic-apps/manage-logic-apps-with-visual-studio.md). This method is the easiest way to create a valid, parameterized logic app template that's mostly ready for deployment.
+Established connections can access the target service or system for as long as that service or system allows. For services that use Azure AD OAuth connections, such as Office 365 and Dynamics, Azure Logic Apps refreshes access tokens indefinitely. Other services might have limits on how long Azure Logic Apps can use a token without refreshing. Some actions, such as changing your password, invalidate all access tokens.
> [!TIP]
-> If your organization doesn't permit you to access specific resources through Logic Apps connectors, you can [block the capability to create such connections](../logic-apps/block-connections-connectors.md) using [Azure Policy](../governance/policy/overview.md).
+> If your organization doesn't permit you to access specific resources through connectors in Azure Logic Apps, you can [block the capability to create such connections](../logic-apps/block-connections-connectors.md) using [Azure Policy](../governance/policy/overview.md).
For more information about securing logic apps and connections, review [Secure access and data in Azure Logic Apps](../logic-apps/logic-apps-securing-a-logic-app.md).
### Firewall access for connections
-If you use a firewall that limits traffic, and your logic app workflows need to communicate through that firewall, you have to set up your firewall to allow access for both the [inbound](../logic-apps/logic-apps-limits-and-config.md#inbound) and [outbound](../logic-apps/logic-apps-limits-and-config.md#outbound) IP addresses used by the Logic Apps service or runtime in the Azure region where your logic app workflows exist. If your workflows also use managed connectors, such as the Office 365 Outlook connector or SQL connector, or use custom connectors, your firewall also needs to allow access for *all* the [managed connector outbound IP addresses](/connectors/common/outbound-ip-addresses#azure-logic-apps) in your logic app's Azure region. For more information, review [Firewall configuration](../logic-apps/logic-apps-limits-and-config.md#firewall-configuration-ip-addresses-and-service-tags).
+If you use a firewall that limits traffic, and your logic app workflows need to communicate through that firewall, you have to set up your firewall to allow access for both the [inbound](../logic-apps/logic-apps-limits-and-config.md#inbound) and [outbound](../logic-apps/logic-apps-limits-and-config.md#outbound) IP addresses used by the Azure Logic Apps platform or runtime in the Azure region where your logic app workflows exist. If your workflows also use managed connectors, such as the Office 365 Outlook connector or SQL connector, or use custom connectors, your firewall also needs to allow access for *all* the [managed connector outbound IP addresses](/connectors/common/outbound-ip-addresses#azure-logic-apps) in your logic app's Azure region. For more information, review [Firewall configuration](../logic-apps/logic-apps-limits-and-config.md#firewall-configuration-ip-addresses-and-service-tags).
## Recurrence behavior
-Recurring built-in triggers, such as the [Recurrence trigger](connectors-native-recurrence.md), run natively on the Logic Apps runtime and differ from recurring connection-based triggers, such as the Office 365 Outlook connector trigger where you need to create a connection first.
+Recurring built-in triggers, such as the [Recurrence trigger](connectors-native-recurrence.md), run natively on the Azure Logic Apps runtime and differ from recurring connection-based triggers, such as the Office 365 Outlook connector trigger where you need to create a connection first.
For both kinds of triggers, if a recurrence doesn't specify a specific start date and time, the first recurrence runs immediately when you save or deploy the logic app, despite your trigger's recurrence setup. To avoid this behavior, provide a start date and time for when you want the first recurrence to run.
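For example, a minimal sketch of a workflow definition whose Recurrence trigger sets an explicit start date, time, and time zone might look like this; the schedule values are illustrative:

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {
      "Recurrence": {
        "type": "Recurrence",
        "recurrence": {
          "frequency": "Week",
          "interval": 1,
          "startTime": "2022-05-16T08:00:00",
          "timeZone": "Pacific Standard Time"
        }
      }
    },
    "actions": {},
    "outputs": {}
  }
}
```

When you specify a `timeZone`, the `startTime` value omits the UTC offset suffix.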
+Some managed connectors have both recurrence-based and webhook-based triggers, so if you use a recurrence-based trigger, review the [Recurrence behavior overview](apis-list.md#recurrence-behavior).
### Recurrence for built-in triggers

Recurring built-in triggers follow the schedule that you set, including any specified time zone. However, if a recurrence doesn't specify other advanced scheduling options, such as specific times to run future recurrences, those recurrences are based on the last trigger execution. As a result, the start times for those recurrences might drift due to factors such as latency during storage calls.
To make sure that your workflow runs at your specified start time and doesn't miss a recurrence, try these solutions:
* Consider using a [**Sliding Window** trigger](connectors-native-sliding-window.md) instead of a **Recurrence** trigger to avoid missed recurrences.
-## Custom APIs and connectors
+## Custom connectors and APIs
+
+In Consumption logic apps that run in multi-tenant Azure Logic Apps, you can call Swagger-based or SOAP-based APIs that aren't available as out-of-the-box connectors. You can also run custom code by creating custom API Apps. For more information, review the following documentation:
+
+* [Swagger-based or SOAP-based custom connectors for Consumption logic apps](../logic-apps/custom-connector-overview.md#custom-connector-consumption)
+
+* Create a [Swagger-based](/connectors/custom-connectors/define-openapi-definition) or [SOAP-based](/connectors/custom-connectors/create-register-logic-apps-soap-connector) custom connector, which makes these APIs available to any Consumption logic app in your Azure subscription. To make your custom connector public for anyone to use in Azure, [submit your connector for Microsoft certification](/connectors/custom-connectors/submit-certification).
+
+* [Create custom API Apps](../logic-apps/logic-apps-create-api-app.md)
+
+In Standard logic apps that run in single-tenant Azure Logic Apps, you can create natively running service provider-based custom built-in connectors that are available to any Standard logic app. For more information, review the following documentation:
+
+* [Service provider-based custom built-in connectors for Standard logic apps](../logic-apps/custom-connector-overview.md#custom-connector-standard)
-To call APIs that run custom code or aren't available as connectors, you can extend the Logic Apps platform by [creating custom API Apps](../logic-apps/logic-apps-create-api-app.md). You can also [create custom connectors](../logic-apps/custom-connector-overview.md) for any REST or SOAP-based APIs, which make those APIs available to any logic app in your Azure subscription. To make custom API Apps or connectors public for anyone to use in Azure, you can [submit connectors for Microsoft certification](/connectors/custom-connectors/submit-certification).
+* [Create service provider-based custom built-in connectors for Standard logic apps](../logic-apps/create-custom-built-in-connector-standard.md)
## ISE and connectors
For workflows that need direct access to resources in an Azure virtual network, you can create a dedicated integration service environment (ISE).
Custom connectors created within an ISE don't work with the on-premises data gateway. However, these connectors can directly access on-premises data sources that are connected to an Azure virtual network hosting the ISE. So, logic apps in an ISE most likely don't need the data gateway when communicating with those resources. If you have custom connectors that you created outside an ISE that require the on-premises data gateway, logic apps in an ISE can use those connectors.
-In the Logic Apps Designer, when you browse the built-in triggers and actions or managed connectors that you want to use for logic apps in an ISE, the **CORE** label appears on built-in triggers and actions, while the **ISE** label appears on managed connectors that are designed to work with an ISE.
+In the workflow designer, when you browse the built-in connectors or managed connectors that you want to use for logic apps in an ISE, the **CORE** label appears on built-in connectors, while the **ISE** label appears on managed connectors that are designed to work with an ISE.
:::row::: :::column:::
In the Logic Apps Designer, when you browse the built-in triggers and actions or
**CORE** \ \
- Built-in triggers and actions with this label run in the same ISE as your logic apps.
+ Built-in connectors with this label run in the same ISE as your logic apps.
:::column-end::: :::column::: ![Example ISE connector](./media/apis-list/example-ise-connector.png)
In the Logic Apps Designer, when you browse the built-in triggers and actions or
Managed connectors with this label run in the same ISE as your logic apps. \ \
- If you have an on-premises system that's connected to an Azure virtual network, an ISE lets your workflows directly access that system without using the [on-premises data gateway](../logic-apps/logic-apps-gateway-connection.md). Instead, you can either use that system's **ISE** connector if available, an HTTP action, or a [custom connector](#custom-apis-and-connectors).
+ If you have an on-premises system that's connected to an Azure virtual network, an ISE lets your workflows directly access that system without using the [on-premises data gateway](../logic-apps/logic-apps-gateway-connection.md). Instead, you can either use that system's **ISE** connector if available, an HTTP action, or a [custom connector](#custom-connectors-and-apis).
\ \ For on-premises systems that don't have **ISE** connectors, use the on-premises data gateway. To find available ISE connectors, review [ISE connectors](#ise-and-connectors).
connectors Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/built-in.md
Title: Built-in triggers and actions
-description: Use built-in triggers and actions to create automated workflows that integrate apps, data, services, and systems, to control workflows, and to manage data using Azure Logic Apps.
+ Title: Overview about built-in connectors in Azure Logic Apps
+description: Learn about built-in connectors that run natively to create automated integration workflows in Azure Logic Apps.
ms.suite: integration
Previously updated: 04/15/2021
Last updated: 05/10/2022
-# Built-in triggers and actions in Azure Logic Apps
+# Built-in connectors in Azure Logic Apps
-[Built-in triggers and actions](apis-list.md) provide ways for you to [control your workflow's schedule and structure](#control-workflow), [run your own code](#run-code-from-workflows), [manage or manipulate data](#manage-or-manipulate-data), and complete other tasks in your workflows. Different from [managed connectors](managed.md), many built-in operations aren't tied to a specific service, system, or protocol. For example, you can start almost any workflow on a schedule by using the Recurrence trigger. Or, you can have your workflow wait until called by using the Request trigger. All built-in operations run natively in Azure Logic Apps, and most don't require that you create a connection before you use them.
+Built-in connectors provide ways for you to control your workflow's schedule and structure, run your own code, manage or manipulate data, and complete other tasks in your workflows. Different from managed connectors, some built-in connectors aren't tied to a specific service, system, or protocol. For example, you can start almost any workflow on a schedule by using the Recurrence trigger. Or, you can have your workflow wait until called by using the Request trigger. All built-in connectors run natively on the Azure Logic Apps runtime. Some don't require that you create a connection before you use them.
-For a smaller number of services, systems and protocols, Azure Logic Apps provides built-in operations, such as Azure API Management, Azure App Services, Azure Functions, and for calling other Azure Logic Apps logic app workflows. The number and range available vary based on whether you create a Consumption plan-based logic app resource that runs in multi-tenant Azure Logic Apps, or a Standard plan-based logic app resource that runs in single-tenant Azure Logic Apps. For more information, review [Single-tenant versus multi-tenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md). In most cases, the built-in version provides better performance, capabilities, pricing, and so on.
+For a smaller number of services, systems and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app that runs in multi-tenant Azure Logic Apps, or a Standard logic app that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app type and not the other.
-For example, if you create a single-tenant logic app, both built-in operations and [managed connector operations](managed.md) are available for a few services, specifically Azure Blob, Azure Event Hubs, Azure Cosmos DB, Azure Service Bus, DB2, MQ, and SQL Server. In few cases, some built-in operations are available only for one logic app resource type. For example, Batch operations are currently available only for Consumption logic app workflows. In most cases, the built-in version provides better performance, capabilities, pricing, and so on.
+For example, a Standard logic app provides both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, and SQL Server, while a Consumption logic app doesn't have the built-in versions. A Consumption logic app provides built-in connectors for Azure API Management, Azure App Services, and Batch, while a Standard logic app doesn't have these built-in connectors. For more information, review the following documentation: [Managed connectors in Azure Logic Apps](managed.md) and [Single-tenant versus multi-tenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md).
-The following list describes only some of the tasks that you can accomplish with [built-in triggers and actions](#general-built-in-triggers-and-actions):
+This article provides a general overview about built-in connectors in Consumption logic apps versus Standard logic apps.
-- Run workflows using custom and advanced schedules. For more information about scheduling, review the [recurrence behavior section in the connector overview for Azure Logic Apps](apis-list.md#recurrence-behavior).
+<a name="built-in-operations-lists"></a>
-- Organize and control your workflow's structure, for example, using loops and conditions.
+## Built-in connectors in Consumption versus Standard
-- Work with variables, dates, data operations, content transformations, and batch operations.
+| Consumption | Standard |
+|-|-|
+| Azure API Management<br>Azure App Services <br>Azure Functions <br>Azure Logic Apps <br>Batch <br>Control <br>Data Operations <br>Date Time <br>Flat File <br>HTTP <br>Inline Code <br>Integration Account <br>Liquid <br>Request <br>Schedule <br>Variables <br>XML | Azure Blob <br>Azure Cosmos DB <br>Azure Functions <br>Azure Table Storage <br>Control <br>Data Operations <br>Date Time <br>DB2 <br>Event Hubs <br>Flat File <br>FTP <br>HTTP <br>IBM Host File <br>Inline Code <br>Liquid operations <br>MQ <br>Request <br>Schedule <br>Service Bus <br>SFTP <br>SQL Server <br>Variables <br>Workflow operations <br>XML operations |
+|||
-- Communicate with other endpoints using HTTP triggers and actions.
+<a name="custom-built-in"></a>
-- Receive and respond to requests.
+## Custom built-in connectors
-- Call your own functions (Azure Functions), web apps (Azure App Services), APIs (Azure API Management), other Azure Logic Apps workflows that can receive requests, and so on.
+For Standard logic apps, if a built-in connector isn't available for your scenario, you can create your own built-in connector. You can use the same [*service provider interface implementation*](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation) that's used by service provider-based built-in connectors, such as SQL Server, Service Bus, Blob Storage, and Event Hubs. This interface implementation is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and lets you create custom built-in connectors that anyone can use in Standard logic apps.
-## General built-in triggers and actions
+For more information, review the following documentation:
-Azure Logic Apps provides the following built-in triggers and actions:
+* [Custom connectors for Standard logic apps](../logic-apps/custom-connector-overview.md#custom-connector-standard)
+* [Create custom built-in connectors for Standard logic apps](../logic-apps/create-custom-built-in-connector-standard.md)
+
+<a name="general-built-in"></a>
+
+## General built-in connectors
+
+You can use the following built-in connectors to perform general tasks, for example:
+
+* Run workflows using custom and advanced schedules. For more information about scheduling, review the [Recurrence behavior in the connector overview for Azure Logic Apps](apis-list.md#recurrence-behavior).
+
+* Organize and control your workflow's structure, for example, using loops and conditions (see the sketch after this list).
+
+* Work with variables, dates, data operations, content transformations, and batch operations.
+
+* Communicate with other endpoints using HTTP triggers and actions.
+
+* Receive and respond to requests.
+
+* Call your own functions (Azure Functions) or other Azure Logic Apps workflows that can receive requests, and so on.
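+To make the control and data operations concrete, here's a minimal sketch from a workflow definition's `actions` section that wraps a **Compose** data operation inside a **Condition** action; the action names and expression are illustrative:

```json
"Check_amount": {
  "type": "If",
  "expression": {
    "and": [
      { "greater": ["@triggerBody()?['amount']", 100] }
    ]
  },
  "actions": {
    "Compose_note": {
      "type": "Compose",
      "inputs": "High-value order",
      "runAfter": {}
    }
  },
  "else": {
    "actions": {}
  },
  "runAfter": {}
}
```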
:::row::: :::column:::
Azure Logic Apps provides the following built-in triggers and actions:
[**Recurrence**][schedule-recurrence-doc]: Trigger a workflow based on the specified recurrence. \ \
- [**Sliding Window**][schedule-sliding-window-doc]: Trigger a workflow that needs to handle data in continuous chunks.
+ [**Sliding Window**][schedule-sliding-window-doc]<br>(*Consumption logic app only*): <br>Trigger a workflow that needs to handle data in continuous chunks.
\ \ [**Delay**][schedule-delay-doc]: Pause your workflow for the specified duration.
Azure Logic Apps provides the following built-in triggers and actions:
:::column-end::: :::row-end:::
-## Service-based built-in trigger and actions
+<a name="service-built-in"></a>
+
+## Service-based built-in connectors
-Azure Logic Apps provides the following built-in actions for the following
+Some services provide both a built-in connector and a managed connector, and the two versions might differ in capability.
:::row::: :::column::: [![Azure API Management icon][azure-api-management-icon]][azure-api-management-doc] \ \
- [**Azure API Management**][azure-api-management-doc]
+ [**Azure API Management**][azure-api-management-doc]<br>(*Consumption logic app only*)
\ \ Call your own triggers and actions in APIs that you define, manage, and publish using [Azure API Management](../api-management/api-management-key-concepts.md). <p><p>**Note**: Not supported when using [Consumption tier for API Management](../api-management/api-management-features.md).
Azure Logic Apps provides the following built-in actions for the following servi
[![Azure App Services icon][azure-app-services-icon]][azure-app-services-doc] \ \
- [**Azure App Services**][azure-app-services-doc]
+ [**Azure App Services**][azure-app-services-doc]<br>(*Consumption logic app only*)
\ \ Call apps that you create and host on [Azure App Service](../app-service/overview.md), for example, API Apps and Web Apps.
Azure Logic Apps provides the following built-in actions for the following servi
\ Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents. :::column-end::: :::column::: [![Azure Functions icon][azure-functions-icon]][azure-functions-doc] \
Azure Logic Apps provides the following built-in actions for the following servi
\ Call [Azure-hosted functions](../azure-functions/functions-overview.md) to run your own *code snippets* (C# or Node.js) within your workflow. :::column-end::: :::column::: [![Azure Logic Apps icon][azure-logic-apps-icon]][nested-logic-app-doc] \ \
- [**Azure Logic Apps**][nested-logic-app-doc]
+ [**Azure Logic Apps**][nested-logic-app-doc]<br>(*Consumption logic app*) <br><br>-or-<br><br>[**Workflow operations**][nested-logic-app-doc]<br>(*Standard logic app*)
\ \ Call other workflows that start with the Request trigger named **When a HTTP request is received**.
Azure Logic Apps provides the following built-in actions for the following servi
Manage asynchronous messages, queues, sessions, topics, and topic subscriptions. :::column-end::: :::column:::
- ![Azure Table Storage icon][azure-table-storage-icon]
+ [![Azure Table Storage icon][azure-table-storage-icon]][azure-table-storage-doc]
\ \
- **Azure Table Storage**<br>(*Standard logic app only*)
+ [**Azure Table Storage**][azure-table-storage-doc]<br>(*Standard logic app only*)
\ \
- Connect to your Azure Table Storage account so you can create and manage tables.
+ Connect to your Azure Storage account so that you can create, update, query, and manage tables.
+ :::column-end:::
+ :::column:::
+ [![Azure Event Hubs icon][azure-event-hubs-icon]][azure-event-hubs-doc]
+ \
+ \
+ [**Event Hubs**][azure-event-hubs-doc]<br>(*Standard logic app only*)
+ \
+ \
+ Consume and publish events through an event hub. For example, get output from your logic app with Event Hubs, and then send that output to a real-time analytics provider.
:::column-end::: :::column::: [![IBM DB2 icon][ibm-db2-icon]][ibm-db2-doc] \
Azure Logic Apps provides the following built-in actions for the following servi
\ Connect to IBM DB2 in the cloud or on-premises. Update a row, get a table, and more. :::column-end::: :::column:::
- [![Azure Event Hubs icon][azure-event-hubs-icon]][azure-event-hubs-doc]
+ ![IBM Host File icon][ibm-host-file-icon]
\ \
- [**Event Hubs**][azure-event-hubs-doc]<br>(*Standard logic app only*)
+ **IBM Host File**<br>(*Standard logic app only*)
\ \
- Consume and publish events through an event hub. For example, get output from your logic app with Event Hubs, and then send that output to a real-time analytics provider.
+ Connect to IBM Host File and generate or parse contents.
:::column-end::: :::column::: [![IBM MQ icon][ibm-mq-icon]][ibm-mq-doc]
Azure Logic Apps provides the following built-in actions for the following servi
[**SQL Server**][sql-server-doc]<br>(*Standard logic app only*) \ \
- Connect to your SQL Server on premises or an Azure SQL Database in the cloud so that you can manage records, run stored procedures, or perform queries. <p>**Note**: Single-tenant Azure Logic Apps provides both SQL built-in and managed connector operations, while multi-tenant Azure Logic Apps provides only managed connector operations. <p>For more information, review [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](../logic-apps/single-tenant-overview-compare.md).
+ Connect to your SQL Server on premises or an Azure SQL Database in the cloud so that you can manage records, run stored procedures, or perform queries.
+ :::column-end:::
+ :::column:::
+ :::column-end:::
+ :::column:::
:::column-end::: :::row-end:::
Azure Logic Apps provides the following built-in actions for working with data o
:::column-end::: :::row-end:::
-## Integration account built-in actions
+<a name="integration-account-built-in"></a>
+
+## Integration account built-in connectors
+
+Integration account operations specifically support business-to-business (B2B) communication scenarios in Azure Logic Apps. After you create an integration account and define your B2B artifacts, such as trading partners, agreements, maps, and schemas, you can use integration account built-in actions to encode and decode messages, transform content, and more.
+
+* Consumption logic apps
+
+ Before you use any integration account operations in a Consumption logic app, you have to [link your logic app to your integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md).
+
+* Standard logic apps
+
+ Integration account operations don't require that you link your logic app to your integration account. Instead, you create a connection to your integration account when you add the operation to your Standard logic app workflow. In fact, the built-in Liquid operations and XML operations don't need an integration account at all. However, you have to upload Liquid maps, XML maps, or XML schemas through the respective operations in the Azure portal, or add these files to your Visual Studio Code project's **Artifacts** folder using the respective **Maps** and **Schemas** folders.
-Azure Logic Apps provides the following built-in actions, which either require an integration account when using multi-tenant, Consumption plan-based Azure Logic Apps or don't require an integration account when using single-tenant, Standard plan-based Azure Logic Apps:
+For more information, review the following documentation:
-> [!NOTE]
-> Before you can use integration account action in multi-tenant, Consumption plan-based Azure Logic Apps, you must
-> [link your logic app resource to an integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md).
-> However, in single-tenant, Standard plan-based Azure Logic Apps, some integration account operations don't require linking your
-> logic app resource to an integration account, for example, Liquid operations and XML operations. To use these actions, you need
-> to have Liquid maps, XML maps, or XML schemas that you can upload through the respective actions in the Azure portal or add to
-> your Visual Studio Code project's **Artifacts** folder using the respective **Maps** and **Schemas** folders.
+* [Business-to-business (B2B) enterprise integration workflows](../logic-apps/logic-apps-enterprise-integration-overview.md)
+* [Create and manage integration accounts for B2B workflows](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md)
:::row::: :::column:::
Azure Logic Apps provides the following built-in actions, which either require a
[![Integration account icon][integration-account-icon]][integration-account-doc] \ \
- [**Integration Account Artifact Lookup**<br>(*Multi-tenant only*)][integration-account-doc]
+ [**Integration Account Artifact Lookup**][integration-account-doc]<br>(*Consumption logic app only*)
\ \ Get custom metadata for artifacts, such as trading partners, agreements, schemas, and so on, in your integration account.
Azure Logic Apps provides the following built-in actions, which either require a
[http-swagger-icon]: ./media/apis-list/http-swagger.png [http-webhook-icon]: ./media/apis-list/http-webhook.png [ibm-db2-icon]: ./media/apis-list/ibm-db2.png
+[ibm-host-file-icon]: ./media/apis-list/ibm-host-file.png
[ibm-mq-icon]: ./media/apis-list/ibm-mq.png [inline-code-icon]: ./media/apis-list/inline-code.png [schedule-icon]: ./media/apis-list/recurrence.png
Azure Logic Apps provides the following built-in actions, which either require a
[azure-event-hubs-doc]: ./connectors-create-api-azure-event-hubs.md "Connect to Azure Event Hubs so that you can receive and send events between logic apps and Event Hubs" [azure-functions-doc]: ../logic-apps/logic-apps-azure-functions.md "Integrate logic apps with Azure Functions" [azure-service-bus-doc]: ./connectors-create-api-servicebus.md "Manage messages from Service Bus queues, topics, and topic subscriptions"
+[azure-table-storage-doc]: /connectors/azuretables/ "Connect to your Azure Storage account so that you can create, update, and query tables and more"
[batch-doc]: ../logic-apps/logic-apps-batch-process-send-receive-messages.md "Process messages in groups, or as batches" [condition-doc]: ../logic-apps/logic-apps-control-flow-conditional-statement.md "Evaluate a condition and run different actions based on whether the condition is true or false" [data-operations-doc]: ../logic-apps/logic-apps-perform-data-operations.md "Perform data operations such as filtering arrays or creating CSV and HTML tables"
Azure Logic Apps provides the following built-in actions, which either require a
[schedule-sliding-window-doc]: ./connectors-native-sliding-window.md "Run logic apps that need to handle data in contiguous chunks" [scope-doc]: ../logic-apps/logic-apps-control-flow-run-steps-group-scopes.md "Organize actions into groups, which get their own status after the actions in group finish running" [sftp-ssh-doc]: ./connectors-sftp-ssh.md "Connect to your SFTP account by using SSH. Upload, get, delete files, and more"
-[sql-server-doc]: ./connectors-create-api-sqlazure.md "Connect to Azure SQL Database or SQL Server. Create, update, get, and delete entries in a SQL database table"
+[sql-server-doc]: ./connectors-create-api-sqlazure.md "Connect to Azure SQL Database or SQL Server. Create, update, get, and delete entries in an SQL database table"
[switch-doc]: ../logic-apps/logic-apps-control-flow-switch-statement.md "Organize actions into cases, which are assigned unique values. Run only the case whose value matches the result from an expression, object, or token. If no matches exist, run the default case" [terminate-doc]: ../logic-apps/logic-apps-workflow-actions-triggers.md#terminate-action "Stop or cancel an actively running workflow for your logic app" [until-doc]: ../logic-apps/logic-apps-control-flow-loops.md#until-loop "Repeat actions until the specified condition is true or some state has changed"
connectors Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/managed.md
Title: Managed connector operations
-description: Use Microsoft-managed triggers and actions to create automated workflows that integrate other apps, data, services, and systems using Azure Logic Apps.
+ Title: Overview about managed connectors in Azure Logic Apps
+description: Learn about Microsoft-managed connectors to create automated integration workflows in Azure Logic Apps.
ms.suite: integration
Previously updated: 05/16/2021
Last updated: 05/10/2022

# Managed connectors in Azure Logic Apps
-[Managed connectors](apis-list.md) provide ways for you to access other services and systems where [built-in triggers and actions](built-in.md) aren't available. You can use these triggers and actions to create workflows that integrate data, apps, cloud-based services, and on-premises systems. Compared to built-in triggers and actions, these connectors are usually tied to a specific service or system such as Azure Blob Storage, Office 365, SQL, Salesforce, or SFTP servers. Managed by Microsoft and hosted in Azure, managed connectors usually require that you first create a connection from your workflow and authenticate your identity. Both recurrence-based and webhook-based triggers are available, so if you use a recurrence-based trigger, review the [Recurrence behavior overview](apis-list.md#recurrence-behavior).
+Managed connectors provide ways for you to access other services and systems where built-in connectors aren't available. You can use these triggers and actions to create workflows that integrate data, apps, cloud-based services, and on-premises systems. Different from built-in connectors, managed connectors are usually tied to a specific service or system such as Office 365, SharePoint, Azure Key Vault, Salesforce, Azure Automation, and so on. Managed by Microsoft and hosted in Azure, managed connectors usually require that you first create a connection from your workflow and authenticate your identity.
-For a small number of services, systems and protocols, Azure Logic Apps provides built-in operations along with their [managed connector versions](managed.md). The number and range available vary based on whether you create a Consumption plan-based logic app resource that runs in multi-tenant Azure Logic Apps, or a Standard plan-based logic app resource that runs in single-tenant Azure Logic Apps. For more information, review [Single-tenant versus multi-tenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md). In most cases, the built-in version provides better performance, capabilities, pricing, and so on.
+For a smaller number of services, systems and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app that runs in multi-tenant Azure Logic Apps, or a Standard logic app that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app type, and not the other.
-For example, if you create a single-tenant logic app, built-in operations are available for Azure Service Bus, Azure Event Hubs, SQL Server, and MQ. In a few cases, both a built-in version and a managed connector version are available. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. If you create a multi-tenant logic app, built-in operations are available for Azure Functions, Azure App Services, and Azure API Management.
+For example, a Standard logic app provides both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, and SQL Server, while a Consumption logic app doesn't have the built-in versions. A Consumption logic app provides built-in connectors for Azure API Management, Azure App Services, and Batch, while a Standard logic app doesn't have these built-in connectors. For more information, review the following documentation: [Built-in connectors in Azure Logic Apps](built-in.md) and [Single-tenant versus multi-tenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md).
-Some managed connectors in Azure Logic Apps belong to multiple sub-categories. For example, the SAP connector is both an [enterprise connector](#enterprise-connectors) and an [on-premises connector](#on-premises-connectors).
+This article provides a general overview about managed connectors and how they're organized in Consumption logic apps versus Standard logic apps with examples. For technical reference information about each managed connector in Azure Logic Apps, review [Connectors reference for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors).
+
+## Managed connector categories
+
+In a *Standard* logic app, all managed connectors are organized into the **Azure** group. In a *Consumption* logic app, managed connectors are organized into the **Standard** group or **Enterprise** group. However, pricing for managed connectors works the same in both Standard and Consumption logic apps. For more information, review [Trigger and action operations in the Consumption model](../logic-apps/logic-apps-pricing.md#consumption-operations) and [Trigger and action operations in the Standard model](../logic-apps/logic-apps-pricing.md#standard-operations).
* [Standard connectors](#standard-connectors) provide access to services such as Azure Blob Storage, Office 365, SharePoint, Salesforce, Power BI, OneDrive, and many more.
-* [Enterprise connectors](#enterprise-connectors) provide access to enterprise systems, such as SAP, IBM MQ, and IBM 3270.
+
+* [Enterprise connectors](#enterprise-connectors) provide access to enterprise systems, such as SAP, IBM MQ, and IBM 3270 for an additional cost.
+
+Some managed connectors also belong to the following informal groups:
+ * [On-premises connectors](#on-premises-connectors) provide access to on-premises systems such as SQL Server, SharePoint Server, SAP, Oracle DB, file shares, and others.
+
+ * [Integration account connectors](#integration-account-connectors) help you transform and validate XML, encode and decode flat files, and process business-to-business (B2B) messages using AS2, EDIFACT, and X12 protocols.
-* [Integration service environment connectors](#ise-connectors) and are designed to run specifically in an ISE and offer benefits over their non-ISE versions.
+
+* [Integration service environment connectors](#ise-connectors) are designed to run specifically in an ISE and provide benefits over their non-ISE versions.
+
+<a name="standard-connectors"></a>
## Standard connectors
-Azure Logic Apps provides these popular Standard connectors for building automated workflows using these services and systems. Some Standard connectors also support [on-premises systems](#on-premises-connectors) or [integration accounts](#integration-account-connectors).
+For a *Consumption* logic app, this section lists *some* of the popular connectors in the **Standard** group. In a *Standard* logic app, all managed connectors are in the **Azure** group, but pricing works the same as in a Consumption logic app. For more information, review [Trigger and action operations in the Standard model](../logic-apps/logic-apps-pricing.md#standard-operations).
:::row::: :::column:::
- [![Azure Service Bus icon][azure-service-bus-icon]][azure-service-bus-doc]
+ [![Azure Blob Storage icon][azure-blob-storage-icon]][azure-blob-storage-doc]
\ \
- [**Azure Service Bus**][azure-service-bus-doc]
+ [**Azure Blob Storage**][azure-blob-storage-doc]
\ \
- Manage asynchronous messages, sessions, and topic subscriptions with the most commonly used connector in Logic Apps.
+ Connect to your Azure Storage account so that you can create and manage blob content.
:::column-end::: :::column:::
- [![SQL Server icon][sql-server-icon]][sql-server-doc]
+ [![Azure Event Hubs icon][azure-event-hubs-icon]][azure-event-hubs-doc]
\ \
- [**SQL Server**][sql-server-doc]
+ [**Azure Event Hubs**][azure-event-hubs-doc]
\ \
- Connect to your SQL Server on premises or an Azure SQL Database in the cloud so that you can manage records, run stored procedures, or perform queries.
+ Consume and publish events through an event hub. For example, get output from your logic app with Event Hubs, and then send that output to a real-time analytics provider.
:::column-end::: :::column:::
- [![Azure Blog Storage icon][azure-blob-storage-icon]][azure-blob-storage-doc]
+ [![Azure Queues icon][azure-queues-icon]][azure-queues-doc]
\ \
- [**Azure Blob Storage**][azure-blob-storage-doc]
+ [**Azure Queues**][azure-queues-doc]
\ \
- Connect to your Azure Storage account so that you can create and manage blob content.
+ Connect to your Azure Storage account so that you can create and manage queues and messages.
:::column-end::: :::column:::
- [![Office 365 Outlook icon][office-365-outlook-icon]][office-365-outlook-doc]
+ [![Azure Service Bus icon][azure-service-bus-icon]][azure-service-bus-doc]
\ \
- [**Office 365 Outlook**][office-365-outlook-doc]
+ [**Azure Service Bus**][azure-service-bus-doc]
\ \
- Connect to your work or school email account so that you can create and manage emails, tasks, calendar events and meetings, contacts, requests, and more.
+ Manage asynchronous messages, sessions, and topic subscriptions with the most commonly used connector in Logic Apps.
:::column-end::: :::row-end::: :::row::: :::column:::
- [![STFP-SSH icon][sftp-ssh-icon]][sftp-ssh-doc]
+ [![Azure Table Storage icon][azure-table-storage-icon]][azure-table-storage-doc]
\ \
- [**STFP-SSH**][sftp-ssh-doc]
+ [**Azure Table Storage**][azure-table-storage-doc]
\ \
- Connect to SFTP servers that you can access from the internet by using SSH so that you can work with your files and folders.
+ Connect to your Azure Storage account so that you can create, update, query, and manage tables.
:::column-end::: :::column:::
- [![SharePoint Online icon][sharepoint-online-icon]][sharepoint-online-doc]
+ [![File System icon][file-system-icon]][file-system-doc]
\ \
- [**SharePoint Online**][sharepoint-online-doc]
+ [**File System**][file-system-doc]
\ \
- Connect to SharePoint Online so that you can manage files, attachments, folders, and more.
+ Connect to your on-premises file share so that you can create and manage files.
:::column-end::: :::column:::
- [![Azure Queues icon][azure-queues-icon]][azure-queues-doc]
+ [![FTP icon][ftp-icon]][ftp-doc]
\ \
- [**Azure Queues**][azure-queues-doc]
+ [**FTP**][ftp-doc]
\ \
- Connect to your Azure Storage account so that you can create and manage queues and messages.
+ Connect to FTP servers you can access from the internet so that you can work with your files and folders.
:::column-end::: :::column:::
- [![FTP icon][ftp-icon]][ftp-doc]
+ [![Office 365 Outlook icon][office-365-outlook-icon]][office-365-outlook-doc]
\ \
- [**FTP**][ftp-doc]
+ [**Office 365 Outlook**][office-365-outlook-doc]
\ \
- Connect to FTP servers you can access from the internet so that you can work with your files and folders.
+ Connect to your work or school email account so that you can create and manage emails, tasks, calendar events and meetings, contacts, requests, and more.
:::column-end::: :::row-end::: :::row::: :::column:::
- [![File System icon][file-system-icon]][file-system-doc]
+ [![Salesforce icon][salesforce-icon]][salesforce-doc]
\ \
- [**File System**][file-system-doc]
+ [**Salesforce**][salesforce-doc]
\ \
- Connect to your on-premises file share so that you can create and manage files.
+ Connect to your Salesforce account so that you can create and manage items such as records, jobs, objects, and more.
:::column-end::: :::column:::
- [![Azure Event Hubs icon][azure-event-hubs-icon]][azure-event-hubs-doc]
+ [![SharePoint Online icon][sharepoint-online-icon]][sharepoint-online-doc]
\ \
- [**Azure Event Hubs**][azure-event-hubs-doc]
+ [**SharePoint Online**][sharepoint-online-doc]
\ \
- Consume and publish events through an Event Hub. For example, get output from your logic app with Event Hubs, and then send that output to a real-time analytics provider.
+ Connect to SharePoint Online so that you can manage files, attachments, folders, and more.
:::column-end::: :::column:::
- [![Azure Event Grid icon][azure-event-grid-icon]][azure-event-grid-doc]
+ [![SFTP-SSH icon][sftp-ssh-icon]][sftp-ssh-doc]
\ \
- [**Azure Event Grid**][azure-event-grid-doc]
+ [**SFTP-SSH**][sftp-ssh-doc]
\ \
- Monitor events published by an Event Grid, for example, when Azure resources or third-party resources change.
+ Connect to SFTP servers that you can access from the internet by using SSH so that you can work with your files and folders.
:::column-end::: :::column:::
- [![Salesforce icon][salesforce-icon]][salesforce-doc]
+ [![SQL Server icon][sql-server-icon]][sql-server-doc]
\ \
- [**Salesforce**][salesforce-doc]
+ [**SQL Server**][sql-server-doc]
\ \
- Connect to your Salesforce account so that you can create and manage items such as records, jobs, objects, and more.
+ Connect to your SQL Server on premises or an Azure SQL Database in the cloud so that you can manage records, run stored procedures, or perform queries.
+ :::column-end:::
+
+<a name="enterprise-connectors"></a>
+
+## Enterprise connectors
+
+For a *Consumption* logic app, this section lists connectors in the **Enterprise** group, which can access enterprise systems for an additional cost. In a *Standard* logic app, all managed connectors are in the **Azure** group, but pricing is the same as for Consumption logic apps. For more information, review [Trigger and action operations in the Standard model](../logic-apps/logic-apps-pricing.md#standard-operations).
+
+ :::column:::
+ [![IBM 3270 icon][ibm-3270-icon]][ibm-3270-doc]
+ \
+ \
+ [**IBM 3270**][ibm-3270-doc]
+ :::column-end:::
+ :::column:::
+ [![IBM MQ icon][ibm-mq-icon]][ibm-mq-doc]
+ \
+ \
+ [**MQ**][ibm-mq-doc]
+ :::column-end:::
+ :::column:::
+ [![SAP icon][sap-icon]][sap-connector-doc]
+ \
+ \
+ [**SAP**][sap-connector-doc]
+ :::column-end:::
+ :::column:::
:::column-end::: :::row-end:::
+<a name="on-premises-connectors"></a>
+ ## On-premises connectors Before you can create a connection to an on-premises system, you must first [download, install, and set up an on-premises data gateway][gateway-doc]. This gateway provides a secure communication channel so that you don't have to set up the necessary network infrastructure.
-The following connectors are some commonly used [Standard connectors](#standard-connectors) that Azure Logic Apps provides for accessing data and resources in on-premises systems. For the on-premises connectors list, see [Supported data sources](../logic-apps/logic-apps-gateway-connection.md#supported-connections).
+For a *Consumption* logic app, this section lists example [Standard connectors](#standard-connectors) that can access on-premises systems. For the expanded on-premises connectors list, review [Supported data sources](../logic-apps/logic-apps-gateway-connection.md#supported-connections).
:::row:::
+ :::column:::
+ [![Apache Impala icon][apache-impala-icon]][apache-impala-doc]
+ \
+ \
+ [**Apache Impala**][apache-impala-doc]
+ :::column-end:::
:::column::: [![Biztalk Server icon][biztalk-server-icon]][biztalk-server-doc] \
The following connectors are some commonly used [Standard connectors](#standard-
\ [**IBM Informix**][ibm-informix-doc] :::column-end::: :::column::: [![MySQL icon][mysql-icon]][mysql-doc] \ \ [**MySQL**][mysql-doc] :::column-end::: :::column::: [![Oracle DB icon][oracle-db-icon]][oracle-db-doc] \
The following connectors are some commonly used [Standard connectors](#standard-
\ [**PostgreSQL**][postgre-sql-doc] :::column-end:::
+ :::column:::
+ [![SAP icon][sap-icon]][sap-connector-doc]
+ \
+ \
+ [**SAP**][sap-connector-doc]
+ :::column-end:::
:::column::: [![SharePoint Server icon][sharepoint-server-icon]][sharepoint-server-doc] \ \ [**SharePoint Server**][sharepoint-server-doc] :::column-end::: :::column::: [![SQL Server icon][sql-server-icon]][sql-server-doc] \
The following connectors are some commonly used [Standard connectors](#standard-
\ [**Teradata**][teradata-doc] :::column-end:::
- :::column:::
- :::column-end:::
- :::column:::
- :::column-end:::
:::row-end::: <a name="integration-account-connectors"></a> ## Integration account connectors
-Integration account connectors specifically support [business-to-business (B2B) communication scenarios](../logic-apps/logic-apps-enterprise-integration-overview.md) in Azure Logic Apps. After you [create an integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md) and define your B2B artifacts, such as trading partners, agreements, maps, and schemas, you can use integration account connectors to encode and decode messages, transform content, and more.
+Integration account operations specifically support business-to-business (B2B) communication scenarios in Azure Logic Apps. After you create an integration account and define your B2B artifacts, such as trading partners, agreements, maps, and schemas, you can use integration account connectors to encode and decode messages, transform content, and more.
-For example, if you use Microsoft BizTalk Server, you can create a connection from your workflow using the [BizTalk Server on-premises connector](#on-premises-connectors). You can then extend or perform BizTalk-like operations in your workflow by using these integration account connectors.
+For example, if you use Microsoft BizTalk Server, you can create a connection from your workflow using the [on-premises BizTalk Server connector](/connectors/biztalk/). You can then extend or perform BizTalk-like operations in your workflow by using these integration account connectors.
-> [!NOTE]
-> Before you can use integration account connectors in multi-tenant, Consumption plan-based Azure Logic Apps, you must
-> [link your logic app resource to an integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md).
+* Consumption logic apps
+
+ Before you use any integration account operations in a Consumption logic app, you have to [link your logic app to your integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md).
+
+* Standard logic apps
+
+ Integration account operations don't require that you link your logic app to your integration account. Instead, you create a connection to your integration account when you add the operation to your Standard logic app workflow.
+
+For more information, review the following documentation:
+
+* [Business-to-business (B2B) enterprise integration workflows](../logic-apps/logic-apps-enterprise-integration-overview.md)
+* [Create and manage integration accounts for B2B workflows](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md)
:::row::: :::column:::
For example, if you use Microsoft BizTalk Server, you can create a connection fr
:::column-end::: :::row-end:::
-## Enterprise connectors
-
-The following connectors provide access to enterprise systems for an additional cost:
-
- :::column:::
- [![IBM 3270 icon][ibm-3270-icon]][ibm-3270-doc]
- \
- \
- [**IBM 3270**][ibm-3270-doc]
- :::column-end:::
- :::column:::
- [![IBM MQ icon][ibm-mq-icon]][ibm-mq-doc]
- \
- \
- [**IBM MQ**][ibm-mq-doc]
- :::column-end:::
- :::column:::
- [![SAP icon][sap-icon]][sap-connector-doc]
- \
- \
- [**SAP**][sap-connector-doc]
- :::column-end:::
- :::column:::
- :::column-end:::
- ## ISE connectors In an integration service environment (ISE), these managed connectors also have [ISE versions](apis-list.md#ise-and-connectors), which have different capabilities than their multi-tenant versions:
For more information, see these topics:
> [Create custom APIs you can call from Logic Apps](../logic-apps/logic-apps-create-api-app.md) <!--Managed connector icons-->
+[apache-impala-icon]: ./media/apis-list/apache-impala.png
[appfigures-icon]: ./media/apis-list/appfigures.png [asana-icon]: ./media/apis-list/asana.png [azure-automation-icon]: ./media/apis-list/azure-automation.png
For more information, see these topics:
[youtube-icon]: ./media/apis-list/youtube.png <!--Managed connector doc links-->
+[apache-impala-doc]: /connectors/azureimpala/ "Connect to your Impala database to read data from tables"
[azure-automation-doc]: /connectors/azureautomation/ "Create and manage automation jobs for your cloud and on-premises infrastructure" [azure-blob-storage-doc]: ./connectors-create-api-azureblobstorage.md "Manage files in your blob container with Azure blob storage connector" [azure-cosmos-db-doc]: ./connectors-create-api-cosmos-db.md "Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents"
For more information, see these topics:
[slack-doc]: ./connectors-create-api-slack.md "Connect to Slack and post messages to Slack channels" [smtp-doc]: ./connectors-create-api-smtp.md "Connect to an SMTP server and send email with attachments" [sparkpost-doc]: ./connectors-create-api-sparkpost.md "Connects to SparkPost for communication"
-[sql-server-doc]: ./connectors-create-api-sqlazure.md "Connect to Azure SQL Database or SQL Server. Create, update, get, and delete entries in a SQL database table"
+[sql-server-doc]: ./connectors-create-api-sqlazure.md "Connect to Azure SQL Database or SQL Server. Create, update, get, and delete entries in an SQL database table"
[teradata-doc]: /connectors/teradata/ "Connect to your Teradata database to read data from tables" [twilio-doc]: ./connectors-create-api-twilio.md "Connect to Twilio. Send and get messages, get available numbers, manage incoming phone numbers, and more" [youtube-doc]: ./connectors-create-api-youtube.md "Connect to YouTube. Manage your videos and channels"
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
Based on your needs, you can "plug in" certain Dapr component types like state s
# [YAML](#tab/yaml)
-When defining a Dapr component via YAML, you will pass your component manifest into the Azure CLI. For example, deploy a `pubsub.yaml` component using the following command:
+When defining a Dapr component via YAML, you will pass your component manifest into the Azure CLI. When configuring multiple components, you will need to create a separate YAML file and run the Azure CLI command for each component.
+
+For example, deploy a `pubsub.yaml` component using the following command:
```azurecli
-az containerapp env dapr-component set --name ENVIRONMENT_NAME --resource-group RESOURCE_GROUP_NAME --dapr-component-name pubsub--yaml "./pubsub.yaml"
+az containerapp env dapr-component set --name ENVIRONMENT_NAME --resource-group RESOURCE_GROUP_NAME --dapr-component-name pubsub --yaml "./pubsub.yaml"
``` The `pubsub.yaml` spec will be scoped to the dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`.
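For reference, the following sketch shows one possible shape for that `pubsub.yaml` manifest, using the Azure Container Apps schema for Dapr components. The Azure Service Bus component type, secret name, and connection string are illustrative assumptions, not values defined by this article:

```yaml
# Illustrative pub/sub component manifest (Container Apps Dapr component schema).
componentType: pubsub.azure.servicebus
version: v1
metadata:
- name: connectionString
  secretRef: sb-connection-string
secrets:
- name: sb-connection-string
  value: "<service-bus-connection-string>"
# Scope the component to the two Dapr-enabled container apps.
scopes:
- publisher-app
- subscriber-app
```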
The `pubsub.yaml` spec will be scoped to the dapr-enabled container apps with ap
# [Bicep](#tab/bicep)
-This resource defines a Dapr component called `dapr-pubsub` via Bicep. The Dapr component is defined as a child resource of your Container Apps environment. The `dapr-pubsub` component is scoped to the Dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`:
+This resource defines a Dapr component called `dapr-pubsub` via Bicep. The Dapr component is defined as a child resource of your Container Apps environment. To define multiple components, you can add a `daprComponent` resource for each Dapr component.
+
+The `dapr-pubsub` component is scoped to the Dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`:
```bicep resource daprComponent 'daprComponents@2022-01-01-preview' = {
resource daprComponent 'daprComponents@2022-01-01-preview' = {
# [ARM](#tab/arm)
-This resource defines a Dapr component called `dapr-pubsub` via ARM. The Dapr component is defined as a child resource of your Container Apps environment. The `dapr-pubsub` component will be scoped to the Dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`:
+A Dapr component is defined as a child resource of your Container Apps environment. To define multiple components, you can add a `daprComponent` resource for each Dapr component.
+
+This resource defines a Dapr component called `dapr-pubsub` via ARM. The `dapr-pubsub` component will be scoped to the Dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`:
```json {
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
If you've changed the `STORAGE_ACCOUNT_CONTAINER` variable from its original val
Navigate to the directory in which you stored the *statestore.yaml* file and run the following command to configure the Dapr component in the Container Apps environment.
-If you need to add multiple components, run the `az containerapp env dapr-component set` command multiple times to add each component.
+If you need to add multiple components, create a separate YAML file for each component and run the `az containerapp env dapr-component set` command multiple times to add each component. For more information about configuring Dapr components, see [Configure Dapr components](dapr-overview.md#configure-dapr-components).
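For example, to add both a state store and a pub/sub component, you might run the command once per component file, as in this sketch where the environment and resource group names are placeholders:

```azurecli
az containerapp env dapr-component set --name ENVIRONMENT_NAME --resource-group RESOURCE_GROUP_NAME --dapr-component-name statestore --yaml "./statestore.yaml"
az containerapp env dapr-component set --name ENVIRONMENT_NAME --resource-group RESOURCE_GROUP_NAME --dapr-component-name pubsub --yaml "./pubsub.yaml"
```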
+ # [Bash](#tab/bash)
databox-online Azure Stack Edge Gpu Deploy Add Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-add-shares.md
Previously updated : 02/22/2021 Last updated : 05/09/2022 # Customer intent: As an IT admin, I need to understand how to add and connect to shares on Azure Stack Edge Pro so I can use it to transfer data to Azure.
To create a share, do the following procedure:
The type of service you select depends on which format you want the data to use in Azure. In this example, because we want to store the data as block blobs in Azure, we select **Block Blob**. If you select **Page Blob**, make sure that your data is 512 bytes aligned. For example, a VHDX is always 512 bytes aligned. > [!IMPORTANT]
- > Make sure that the Azure Storage account that you use does not have immutability policies set on it if you are using it with a Azure Stack Edge Pro or Data Box Gateway device. For more information, see [Set and manage immutability policies for blob storage](../storage/blobs/immutable-policy-configure-version-scope.md).
+ > Make sure that the Azure Storage account that you use does not have immutability policies or archiving policies set on it if you are using it with an Azure Stack Edge Pro or Data Box Gateway device. If the blob policies are immutable or if the blobs are aggressively archived, you'll experience upload errors when a blob is changed in the share. For more information, see [Set and manage immutability policies for blob storage](../storage/blobs/immutable-policy-configure-version-scope.md).
e. Create a new blob container or use an existing one from the dropdown list. If creating a blob container, provide a container name. If a container doesn't already exist, it's created in the storage account with the newly created share name.
In this tutorial, you learned about the following Azure Stack Edge Pro topics:
To learn how to transform your data by using Azure Stack Edge Pro, advance to the next tutorial: > [!div class="nextstepaction"]
-> [Transform data with Azure Stack Edge Pro](./azure-stack-edge-j-series-deploy-configure-compute.md)
+> [Transform data with Azure Stack Edge Pro](./azure-stack-edge-j-series-deploy-configure-compute.md)
defender-for-cloud Just In Time Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-overview.md
Title: Understanding just-in-time virtual machine access in Microsoft Defender for Cloud description: This document explains how just-in-time VM access in Microsoft Defender for Cloud helps you control access to your Azure virtual machines-- Previously updated : 11/09/2021 Last updated : 05/15/2022 # Understanding just-in-time (JIT) VM access
This page explains the principles behind Microsoft Defender for Cloud's just-in-
To learn how to apply JIT to your VMs using the Azure portal (either Defender for Cloud or Azure Virtual Machines) or programmatically, see [How to secure your management ports with JIT](just-in-time-access-usage.md). - ## The risk of open management ports on a virtual machine Threat actors actively hunt accessible machines with open management ports, like RDP or SSH. All of your virtual machines are potential targets for an attack. When a VM is successfully compromised, it's used as the entry point to attack further resources within your environment. -- ## Why JIT VM access is the solution As with all cybersecurity prevention techniques, your goal should be to reduce the attack surface. In this case, that means having fewer open ports, especially management ports.
Your legitimate users also use these ports, so it's not practical to keep them c
To solve this dilemma, Microsoft Defender for Cloud offers JIT. With JIT, you can lock down the inbound traffic to your VMs, reducing exposure to attacks while providing easy access to connect to VMs when needed.
+## How JIT operates with network resources in Azure and AWS
-
-## How JIT operates with network security groups and Azure Firewall
-
-When you enable just-in-time VM access, you can select the ports on the VM to which inbound traffic will be blocked. Defender for Cloud ensures "deny all inbound traffic" rules exist for your selected ports in the [network security group](../virtual-network/network-security-groups-overview.md#security-rules) (NSG) and [Azure Firewall rules](../firewall/rule-processing.md). These rules restrict access to your Azure VMsΓÇÖ management ports and defend them from attack.
+In Azure, you can block inbound traffic on specific ports by enabling just-in-time VM access. Defender for Cloud ensures "deny all inbound traffic" rules exist for your selected ports in the [network security group](../virtual-network/network-security-groups-overview.md#security-rules) (NSG) and [Azure Firewall rules](../firewall/rule-processing.md). These rules restrict access to your Azure VMs' management ports and defend them from attack.
If other rules already exist for the selected ports, then those existing rules take priority over the new "deny all inbound traffic" rules. If there are no existing rules on the selected ports, then the new rules take top priority in the NSG and Azure Firewall.
-When a user requests access to a VM, Defender for Cloud checks that the user has [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md) permissions for that VM. If the request is approved, Defender for Cloud configures the NSGs and Azure Firewall to allow inbound traffic to the selected ports from the relevant IP address (or range), for the amount of time that was specified. After the time has expired, Defender for Cloud restores the NSGs to their previous states. Connections that are already established are not interrupted.
+In AWS, when you enable JIT access, Defender for Cloud revokes the relevant rules for the selected ports in the attached EC2 security groups, which blocks inbound traffic on those specific ports.
+
+When a user requests access to a VM, Defender for Cloud checks that the user has [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md) permissions for that VM. If the request is approved, Defender for Cloud configures the NSGs and Azure Firewall to allow inbound traffic to the selected ports from the relevant IP address (or range), for the amount of time that was specified. In AWS, Defender for Cloud creates a new EC2 security group that allows inbound traffic to the specified ports. After the time has expired, Defender for Cloud restores the NSGs to their previous states. Connections that are already established are not interrupted.
> [!NOTE] > JIT does not support VMs protected by Azure Firewalls controlled by [Azure Firewall Manager](../firewall-manager/overview.md). The Azure Firewall must be configured with Rules (Classic) and cannot use Firewall policies. --- ## How Defender for Cloud identifies which VMs should have JIT applied The diagram below shows the logic that Defender for Cloud applies when deciding how to categorize your supported VMs:
+### [**Azure**](#tab/defender-for-container-arch-aks)
[![Just-in-time (JIT) virtual machine (VM) logic flow.](media/just-in-time-explained/jit-logic-flow.png)](media/just-in-time-explained/jit-logic-flow.png#lightbox)
+### [**AWS**](#tab/defender-for-container-arch-eks)
+++ When Defender for Cloud finds a machine that can benefit from JIT, it adds that machine to the recommendation's **Unhealthy resources** tab. ![Just-in-time (JIT) virtual machine (VM) access recommendation.](./media/just-in-time-explained/unhealthy-resources.png) - ## FAQ - Just-in-time virtual machine access ### What permissions are needed to configure and use JIT?
JIT Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introd
If you want to create custom roles that can work with JIT, you'll need the details from the table below.
+If you are setting up JIT on your Amazon Web Services (AWS) VM, you will need to [connect your AWS account](quickstart-onboard-aws.md) to Microsoft Defender for Cloud.
+ > [!TIP] > To create a least-privileged role for users that need to request JIT access to a VM, and perform no other JIT operations, use the [Set-JitLeastPrivilegedRole script](https://github.com/Azure/Azure-Security-Center/tree/main/Powershell%20scripts/JIT%20Scripts/JIT%20Custom%20Role) from the Defender for Cloud GitHub community pages.
If you want to create custom roles that can work with JIT, you'll need the detai
|Request JIT access to a VM | *Assign these actions to the user:* <ul><li> `Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action` </li><li> `Microsoft.Security/locations/jitNetworkAccessPolicies/*/read` </li><li> `Microsoft.Compute/virtualMachines/read` </li><li> `Microsoft.Network/networkInterfaces/*/read` </li><li> `Microsoft.Network/publicIPAddresses/read` </li></ul> |
|Read JIT policies| *Assign these actions to the user:* <ul><li>`Microsoft.Security/locations/jitNetworkAccessPolicies/read`</li><li>`Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action`</li><li>`Microsoft.Security/policies/read`</li><li>`Microsoft.Security/pricings/read`</li><li>`Microsoft.Compute/virtualMachines/read`</li><li>`Microsoft.Network/*/read`</li></ul>|
+> [!Note]
+> Only the `Microsoft.Security` permissions are relevant for AWS.
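For illustration only, a custom role that grants just the request actions from the preceding table might look like the following sketch. The role name and assignable scope are placeholders:

```json
{
  "Name": "JIT VM Access Requester (example)",
  "IsCustom": true,
  "Description": "Can request just-in-time network access to a VM.",
  "Actions": [
    "Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action",
    "Microsoft.Security/locations/jitNetworkAccessPolicies/*/read",
    "Microsoft.Compute/virtualMachines/read",
    "Microsoft.Network/networkInterfaces/*/read",
    "Microsoft.Network/publicIPAddresses/read"
  ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
```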
## Next steps
defender-for-cloud Just In Time Access Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-usage.md
Title: Just-in-time virtual machine access in Microsoft Defender for Cloud | Microsoft Docs description: Learn how just-in-time VM access (JIT) in Microsoft Defender for Cloud helps you control access to your Azure virtual machines. Previously updated : 01/06/2022 Last updated : 05/17/2022 # Secure your management ports with just-in-time access
For a full explanation of the privilege requirements, see [What permissions are
This page teaches you how to include JIT in your security program. You'll learn how to: -- **Enable JIT on your VMs** - You can enable JIT with your own custom options for one or more VMs using Defender for Cloud, PowerShell, or the REST API. Alternatively, you can enable JIT with default, hard-coded parameters, from Azure virtual machines. When enabled, JIT locks down inbound traffic to your Azure VMs by creating a rule in your network security group.
+- **Enable JIT on your VMs** - You can enable JIT with your own custom options for one or more VMs using Defender for Cloud, PowerShell, or the REST API. Alternatively, you can enable JIT with default, hard-coded parameters, from Azure virtual machines. When enabled, JIT locks down inbound traffic to your Azure and AWS VMs by creating a rule in your network security group.
- **Request access to a VM that has JIT enabled** - The goal of JIT is to ensure that even though your inbound traffic is locked down, Defender for Cloud still provides easy access to connect to VMs when needed. You can request access to a JIT-enabled VM from Defender for Cloud, Azure virtual machines, PowerShell, or the REST API. - **Audit the activity** - To ensure your VMs are secured appropriately, review the accesses to your JIT-enabled VMs as part of your regular security checks. ## Availability
-|Aspect|Details|
-|-|:-|
-| Release state: | General availability (GA) |
-| Supported VMs: | :::image type="icon" source="./medi). |
+| Aspect | Details |
+|--|:-|
+| Release state: | General availability (GA) |
+| Supported VMs: | :::image type="icon" source="./medi). <br> :::image type="icon" source="./media/icons/yes-icon.png"::: AWS EC2 instances (Preview) |
| Required roles and permissions: | **Reader** and **SecurityReader** roles can both view the JIT status and parameters.<br>To create custom roles that can work with JIT, see [What permissions are needed to configure and use JIT?](just-in-time-access-overview.md#what-permissions-are-needed-to-configure-and-use-jit).<br>To create a least-privileged role for users that need to request JIT access to a VM, and perform no other JIT operations, use the [Set-JitLeastPrivilegedRole script](https://github.com/Azure/Azure-Security-Center/tree/main/Powershell%20scripts/JIT%20Scripts/JIT%20Custom%20Role) from the Defender for Cloud GitHub community pages. |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts |
-
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts (Preview) |
<sup><a name="footnote1"></a>1</sup> For any VM protected by Azure Firewall, JIT will only fully protect the machine if it's in the same VNET as the firewall. VMs using VNET peering will not be fully protected.
From Defender for Cloud, you can enable and configure the JIT VM access.
1. Select **Save**. -- ### Edit the JIT configuration on a JIT-enabled VM using Defender for Cloud <a name="jit-modify"></a> You can modify a VM's just-in-time configuration by adding and configuring a new port to protect for that VM, or by changing any other setting related to an already protected port.
To edit the existing JIT rules for a VM:
1. When you've finished editing the ports, select **Save**. -- ### [**Azure virtual machines**](#tab/jit-config-avm) ### Enable JIT on your VMs from Azure virtual machines
When a VM has a JIT enabled, you have to request access to connect to it. You ca
> [!NOTE] > If a user who is requesting access is behind a proxy, the option **My IP** may not work. You may need to define the full IP address range of the organization. -- ### [**Azure virtual machines**](#tab/jit-request-avm) ### Request access to a JIT-enabled VM from the Azure virtual machine's connect page
To request access from Azure virtual machines:
> [!NOTE] > After a request is approved for a VM protected by Azure Firewall, Defender for Cloud provides the user with the proper connection details (the port mapping from the DNAT table) to use to connect to the VM. -- ### [**PowerShell**](#tab/jit-request-powershell) ### Request access to a JIT-enabled VM using PowerShell
Run the following in PowerShell:
Learn more in the [PowerShell cmdlet documentation](/powershell/scripting/developer/cmdlet/cmdlet-overview). -- ### [**REST API**](#tab/jit-request-api) ### Request access to a JIT-enabled VMs using the REST API
You can gain insights into VM activities using log search. To view the logs:
1. To download the log information, select **Download as CSV**. -- ## Next steps In this article, you learned _how_ to configure and use just-in-time VM access. To learn _why_ JIT should be used, read the concept article explaining the threats it defends against:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 05/16/2022 Last updated : 05/17/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in May include: - [Multi-cloud settings of Servers plan are now available in connector level](#multi-cloud-settings-of-servers-plan-are-now-available-in-connector-level)
+- [JIT is now available for AWS (Preview)](#jit-is-now-available-for-aws-preview)
### Multi-cloud settings of Servers plan are now available in connector level
Updates in the UI include a reflection of the selected pricing tier and the requ
:::image type="content" source="media/release-notes/auto-provision.png" alt-text="Screenshot of the auto-provision page with the multi-cloud connector enabled.":::
+### JIT is now available for AWS (Preview)
+
+Just-in-time VM access (JIT) is now available in preview to protect your AWS EC2 instances.
+
+Learn how [JIT protects](just-in-time-access-overview.md#how-jit-operates-with-network-resources-in-azure-and-aws) your AWS EC2 instances.
+ ## April 2022 Updates in April include:
expressroute Expressroute Howto Set Global Reach https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-set-global-reach.md
You can run the Get operation to verify the status.
After the previous operation is complete, you no longer have connectivity between your on-premises network through your ExpressRoute circuits.
+## Update connectivity configuration
+
+To update the Global Reach connectivity configuration run the following command against one of the ExpressRoute circuits.
+
+```azurepowershell-interactive
+$ckt_1 = Get-AzExpressRouteCircuit -Name "Your_circuit_1_name" -ResourceGroupName "Your_resource_group"
+$ckt_2 = Get-AzExpressRouteCircuit -Name "Your_circuit_2_name" -ResourceGroupName "Your_resource_group"
+$addressSpace = 'aa:bb::0/125'
+$addressPrefixType = 'IPv6'
+Set-AzExpressRouteCircuitConnectionConfig -Name "Your_connection_name" -ExpressRouteCircuit $ckt_1 -PeerExpressRouteCircuitPeering $ckt_2.Peerings[0].Id -AddressPrefix $addressSpace -AddressPrefixType $addressPrefixType
+Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt_1
+```
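To verify the updated configuration, you can run the corresponding Get operation against the same circuit, as in this sketch that reuses the placeholder names from the previous example:

```azurepowershell-interactive
$ckt_1 = Get-AzExpressRouteCircuit -Name "Your_circuit_1_name" -ResourceGroupName "Your_resource_group"
Get-AzExpressRouteCircuitConnectionConfig -Name "Your_connection_name" -ExpressRouteCircuit $ckt_1
```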
+ ## Next steps 1. [Learn more about ExpressRoute Global Reach](expressroute-global-reach.md) 2. [Verify ExpressRoute connectivity](expressroute-troubleshooting-expressroute-overview.md)
expressroute Expressroute Monitoring Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md
description: Learn about Azure ExpressRoute monitoring, metrics, and alerts usin
Previously updated : 09/14/2021 Last updated : 05/10/2022 # ExpressRoute monitoring, metrics, and alerts
This article helps you understand ExpressRoute monitoring, metrics, and alerts u
## ExpressRoute metrics
-To view **Metrics**, navigate to the *Azure Monitor* page and select *Metrics*. To view **ExpressRoute** metrics, filter by Resource Type *ExpressRoute circuits*. To view **Global Reach** metrics, filter by Resource Type *ExpressRoute circuits* and select an ExpressRoute circuit resource that has Global Reach enabled. To view **ExpressRoute Direct** metrics, filter Resource Type by *ExpressRoute Ports*.
+To view **Metrics**, go to the *Azure Monitor* page and select *Metrics*. To view **ExpressRoute** metrics, filter by Resource Type *ExpressRoute circuits*. To view **Global Reach** metrics, filter by Resource Type *ExpressRoute circuits* and select an ExpressRoute circuit resource that has Global Reach enabled. To view **ExpressRoute Direct** metrics, filter Resource Type by *ExpressRoute Ports*.
Once a metric is selected, the default aggregation will be applied. Optionally, you can apply splitting, which will show the metric with different dimensions.
Metrics explorer supports SUM, MAX, MIN, AVG and COUNT as [aggregation types](..
| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? | | | | | | | | |
-| [Arp Availability](#arp) | Availability | Percent | Average | ARP Availability from MSEE towards all peers. | PeeringType, Peer | Yes |
-| [Bgp Availability](#bgp) | Availability | Percent | Average | BGP Availability from MSEE towards all peers. | PeeringType, Peer | Yes |
-| [BitsInPerSecond](#circuitbandwidth) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | PeeringType | No |
-| [BitsOutPerSecond](#circuitbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | PeeringType | No |
+| [Arp Availability](#arp) | Availability | Percent | Average | ARP Availability from MSEE towards all peers. | Peering Type, Peer | Yes |
+| [Bgp Availability](#bgp) | Availability | Percent | Average | BGP Availability from MSEE towards all peers. | Peering Type, Peer | Yes |
+| [BitsInPerSecond](#circuitbandwidth) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | Peering Type | No |
+| [BitsOutPerSecond](#circuitbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | Peering Type | No |
| DroppedInBitsPerSecond | Traffic | BitsPerSecond | Average | Ingress bits of data dropped per second | Peering Type | Yes | | DroppedOutBitsPerSecond | Traffic | BitPerSecond | Average | Egress bits of data dropped per second | Peering Type | Yes | | GlobalReachBitsInPerSecond | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | PeeredCircuitSKey | No |
When you deploy an ExpressRoute gateway, Azure manages the compute and functions
* Frequency of routes changed * Number of VMs in the virtual network
-It's highly recommended you set alerts for each of these metrics so that you are aware of when your gateway could be seeing performance issues.
+It's highly recommended you set alerts for each of these metrics so that you're aware of when your gateway could be seeing performance issues.
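As one way to do that from the command line, the following Azure CLI sketch creates an alert on gateway CPU utilization. The metric name, threshold, and resource ID are illustrative assumptions; confirm the exact metric names in your gateway's metrics view:

```azurecli
az monitor metrics alert create --name "ergw-cpu-alert" \
    --resource-group "Your_resource_group" \
    --scopes "/subscriptions/<subscription-id>/resourceGroups/Your_resource_group/providers/Microsoft.Network/virtualNetworkGateways/Your_gateway_name" \
    --condition "avg ExpressRouteGatewayCpuUtilization > 80" \
    --description "ExpressRoute gateway CPU utilization is above 80 percent"
```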
### <a name = "gwbits"></a>Bits received per second - Split by instance
This metric captures the number of inbound packets traversing the ExpressRoute g
### <a name = "advertisedroutes"></a>Count of Routes Advertised to Peer - Split by instance
-Aggregation type: *Count*
+Aggregation type: *Max*
-This metric is the count for the number of routes the ExpressRoute gateway is advertising to the circuit. The address spaces may include virtual networks that are connected using VNet peering and uses remote ExpressRoute gateway. You should expect the number of routes to remain consistent unless there are frequent changes to the virtual network address spaces. Set an alert for when the number of advertised routes drop below the threshold for the number of virtual network address spaces you're aware of.
+This metric shows the number of routes the ExpressRoute gateway is advertising to the circuit. The address spaces may include virtual networks that are connected using VNet peering and uses remote ExpressRoute gateway. You should expect the number of routes to remain consistent unless there are frequent changes to the virtual network address spaces. Set an alert for when the number of advertised routes drop below the threshold for the number of virtual network address spaces you're aware of.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/count-of-routes-advertised-to-peer.png" alt-text="Screenshot of count of routes advertised to peer.":::
This metric shows the bits per second for ingress and egress to Azure through th
## Alerts for ExpressRoute gateway connections
-1. To configure alerts, navigate to **Azure Monitor**, then select **Alerts**.
+1. To set up alerts, go to **Azure Monitor**, then select **Alerts**.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/eralertshowto.jpg" alt-text="alerts"::: 2. Select **+Select Target** and select the ExpressRoute gateway connection resource.
This metric shows the bits per second for ingress and egress to Azure through th
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/basedpeering.jpg" alt-text="each peering":::
-## Configure alerts for activity logs on circuits
+## Set up alerts for activity logs on circuits
In the **Alert Criteria**, you can select **Activity Log** for the Signal Type and select the Signal.
In the **Alert Criteria**, you can select **Activity Log** for the Signal Type a
## More metrics in Log Analytics
-You can also view ExpressRoute metrics by navigating to your ExpressRoute circuit resource and selecting the *Logs* tab. For any metrics you query, the output will contain the columns below.
+You can also view ExpressRoute metrics by going to your ExpressRoute circuit resource and selecting the *Logs* tab. For any metrics you query, the output will contain the columns below.
| **Column** | **Type** | **Description** | | | | |
You can also view ExpressRoute metrics by navigating to your ExpressRoute circui
## Next steps
-Configure your ExpressRoute connection.
+Set up your ExpressRoute connection.
* [Create and modify a circuit](expressroute-howto-circuit-arm.md) * [Create and modify peering configuration](expressroute-howto-routing-arm.md)
logic-apps Create Custom Built In Connector Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-custom-built-in-connector-standard.md
+
+ Title: Create built-in connectors for Standard logic apps
+description: Create your own custom built-in connectors for Standard workflows in single-tenant Azure Logic Apps.
+
+ms.suite: integration
++ Last updated : 05/17/2022
+# As a developer, I want to learn how to create my own custom built-in connector operations to use and run in my Standard logic app workflows.
++
+# Create custom built-in connectors for Standard logic apps in single-tenant Azure Logic Apps
+
+If you need connectors that aren't available in Standard logic app workflows, you can create your own built-in connectors using the same extensibility model that's used by the [*service provider-based built-in connectors*](custom-connector-overview.md#service-provider-interface-implementation) available for Standard workflows in single-tenant Azure Logic Apps. This extensibility model is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md).
+
+This article shows how to create an example custom built-in Cosmos DB connector, which has a single Azure Functions-based trigger and no actions. The trigger fires when a new document is added to the lease collection or container in Cosmos DB and then runs a workflow that uses the input payload as the Cosmos document.
+
+| Operation | Operation details | Description |
+|--|-|-|
+| Trigger | When a document is received | This trigger operation runs when an insert operation happens in the specified Cosmos DB database and collection. |
+| Action | None | This connector doesn't define any action operations. |
+||||
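For example, when a document like the following illustrative sample (not a required schema) is inserted into the monitored collection, the trigger fires and the workflow receives the document as its input payload:

```json
{
  "id": "1234",
  "category": "inventory",
  "description": "Example document added to the monitored collection"
}
```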
+
+This sample connector uses the same functionality as the [Azure Cosmos DB trigger for Azure Functions](../azure-functions/functions-bindings-cosmosdb-v2-trigger.md), which is based on [Azure Functions triggers and bindings](../azure-functions/functions-triggers-bindings.md). For the complete sample, review [Sample custom built-in Cosmos DB connector - Azure Logic Apps Connector Extensions](https://github.com/Azure/logicapps-connector-extensions/tree/CosmosDB/src/CosmosDB).
+
+For more information, review the following documentation:
+
+* [Custom connectors for Standard logic apps](custom-connector-overview.md#custom-connector-standard)
+* [Service provider-based built-in connectors](custom-connector-overview.md#service-provider-interface-implementation)
+* [Single-tenant Azure Logic Apps](single-tenant-overview-compare.md)
+
+## Prerequisites
+
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* Basic knowledge about single-tenant Azure Logic Apps, Standard logic app workflows, connectors, and how to use Visual Studio Code for creating single tenant-based workflows. For more information, review the following documentation:
+
+ * [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)
+
+ * [Create an integration workflow with single-tenant Azure Logic Apps (Standard) - Azure portal](create-single-tenant-workflows-azure-portal.md)
+
+* [Visual Studio Code with the Azure Logic Apps (Standard) extension and other prerequisites installed](create-single-tenant-workflows-azure-portal.md#prerequisites). Your installation should already include the [NuGet package for Microsoft.Azure.Workflows.WebJobs.Extension](https://www.nuget.org/packages/Microsoft.Azure.Workflows.WebJobs.Extension/).
+
+* An Azure Cosmos account, database, and container or collection. For more information, review [Quickstart: Create an Azure Cosmos account, database, container and items from the Azure portal](../cosmos-db/sql/create-cosmosdb-resources-portal.md).
+
+## High-level steps
+
+The following outline describes the high-level steps to build the example connector:
+
+1. Create a class library project.
+
+1. In your project, add the **Microsoft.Azure.Workflows.WebJobs.Extension** NuGet package as a NuGet reference.
+
+1. Provide the operations for your built-in connector by using the NuGet package to implement the methods for the interfaces named [**IServiceOperationsProvider**](custom-connector-overview.md#iserviceoperationsprovider) and [**IServiceOperationsTriggerProvider**](custom-connector-overview.md#iserviceoperationstriggerprovider).
+
+1. Register your custom built-in connector with the Azure Functions runtime extension.
+
+1. Install the connector for use.
+
+## Create your class library project
+
+1. In Visual Studio Code, create a .NET Core 3.1 class library project.
+
+1. In your project, add the NuGet package named **Microsoft.Azure.Workflows.WebJobs.Extension** as a NuGet reference. One way to add the package is shown after these steps.
+
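For example, using the .NET CLI from your project directory:

```dotnetcli
dotnet add package Microsoft.Azure.Workflows.WebJobs.Extension
```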
+## Implement the service provider interface
+
+To provide the operations for the sample built-in connector, implement the methods for the following interfaces from the **Microsoft.Azure.Workflows.WebJobs.Extension** NuGet package. The following diagram shows the interfaces with the method implementations that the Azure Logic Apps designer and runtime expect for a custom built-in connector that has an Azure Functions-based trigger:
+
+![Conceptual class diagram showing method implementation for sample Cosmos DB custom built-in connector.](./media/create-custom-built-in-connector-standard/service-provider-cosmos-db-example.png)
+
+### IServiceOperationsProvider
+
+This interface includes the following methods that provide the operation manifest and perform your service provider's specific tasks or actual business logic in your custom built-in connector. For more information, review [IServiceOperationsProvider](custom-connector-overview.md#iserviceoperationsprovider).
+
+* [**GetService()**](#getservice)
+
+ The designer in Azure Logic Apps requires the [**GetService()**](#getservice) method to retrieve the high-level metadata for your custom service, including the service description, connection input parameters required in the designer, capabilities, brand color, icon URL, and so on.
+
+* [**GetOperations()**](#getoperations)
+
+ The designer in Azure Logic Apps requires the [**GetOperations()**](#getoperations) method to retrieve the operations implemented by your custom service. The operations list is based on Swagger schema. The designer also uses the operation metadata to understand the input parameters for specific operations and generate the outputs as property tokens, based on the schema of the output for an operation.
+
+* [**GetBindingConnectionInformation()**](#getbindingconnectioninformation)
+
+ If your trigger is an Azure Functions-based trigger type, the runtime in Azure Logic Apps requires the [**GetBindingConnectionInformation()**](#getbindingconnectioninformation) method to provide the required connection parameters information to the Azure Functions trigger binding.
+
+* [**InvokeOperation()**](#invokeoperation)
+
+ If your connector has actions, the runtime in Azure Logic Apps requires the [**InvokeOperation()**](#invokeoperation) method to call each action in your connector that runs during workflow execution. If your connector doesn't have actions, you don't have to implement the **InvokeOperation()** method.
+
+ In this example, the Cosmos DB custom built-in connector doesn't have actions. However, the method is included in this example for completeness.
+
+For more information about these methods and their implementation, review these methods later in this article.
+
+### IServiceOperationsTriggerProvider
+
+You can add or expose an [Azure Functions trigger or action](../azure-functions/functions-bindings-example.md) as a service provider trigger in your custom built-in connector. To use the Azure Functions-based trigger type and the same Azure Functions binding as the Azure managed connector trigger, implement the following methods to provide the connection information and trigger bindings as required by Azure Functions. For more information, review [IServiceOperationsTriggerProvider](custom-connector-overview.md#iserviceoperationstriggerprovider).
+
+* The [**GetFunctionTriggerType()**](#getfunctiontriggertype) method is required to return the string that's the same as the **type** parameter in the Azure Functions trigger binding.
+
+* The [**GetFunctionTriggerDefinition()**](#getfunctiontriggerdefinition) method has a default implementation, so you don't need to explicitly implement it. However, if you want to update the trigger's default behavior, such as providing extra parameters that the designer doesn't expose, you can implement this method and override the default behavior.
+
+### Methods to implement
+
+The following sections describe the methods that the example connector implements. For the complete sample, review [Sample CosmosDbServiceOperationProvider.cs](https://github.com/Azure/logicapps-connector-extensions/blob/CosmosDB/src/CosmosDB/Providers/CosmosDbServiceOperationProvider.cs).
+
+#### GetService()
+
+The designer requires the following method to get the high-level description for your service:
+
+```csharp
+public ServiceOperationApi GetService()
+{
+ return this.CosmosDBApis.ServiceOperationServiceApi();
+}
+```
+
+#### GetOperations()
+
+The designer requires the following method to get the operations implemented by your service. This operations list is based on Swagger schema.
+
+```csharp
+public IEnumerable<ServiceOperation> GetOperations(bool expandManifest)
+{
+ return expandManifest ? serviceOperationsList : GetApiOperations();
+}
+```
+
+#### GetBindingConnectionInformation()
+
+To use the Azure Functions-based trigger type, the following method provides the required connection parameter information to the Azure Functions trigger binding.
+
+```csharp
+public string GetBindingConnectionInformation(string operationId, InsensitiveDictionary<JToken> connectionParameters)
+{
+ return ServiceOperationsProviderUtilities
+ .GetRequiredParameterValue(
+ serviceId: ServiceId,
+ operationId: operationId,
+ parameterName: "connectionString",
+ parameters: connectionParameters)?
+ .ToValue<string>();
+}
+```
+
+#### InvokeOperation()
+
+The example Cosmos DB custom built-in connector doesn't have actions, but the following method is included for completeness:
+
+```csharp
+public Task<ServiceOperationResponse> InvokeOperation(string operationId, InsensitiveDictionary<JToken> connectionParameters, ServiceOperationRequest serviceOperationRequest)
+{
+ throw new NotImplementedException();
+}
+```
+
+#### GetFunctionTriggerType()
+
+To use an Azure Functions-based trigger as a trigger in your connector, you have to return the string that's the same as the **type** parameter in the Azure Functions trigger binding.
+
+The following example returns the string for the out-of-the-box built-in Azure Cosmos DB trigger, `"type": "cosmosDBTrigger"`:
+
+```csharp
+public string GetFunctionTriggerType()
+{
+ return "CosmosDBTrigger";
+}
+```
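For context, this string matches the **type** property in an Azure Functions trigger binding definition. A typical **function.json** binding for the Azure Cosmos DB trigger looks like the following sketch, where the app setting, database, and collection names are placeholders:

```json
{
  "type": "cosmosDBTrigger",
  "name": "input",
  "direction": "in",
  "connectionStringSetting": "CosmosDbConnectionString",
  "databaseName": "<your-database>",
  "collectionName": "<your-collection>",
  "leaseCollectionName": "leases",
  "createLeaseCollectionIfNotExists": true
}
```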
+
+#### GetFunctionTriggerDefinition()
+
+This method has a default implementation, so you don't need to explicitly implement it. However, if you want to update the trigger's default behavior, such as providing extra parameters that the designer doesn't expose, you can implement this method and override the default behavior.
+
+<a name="register-connector"></a>
+
+## Register your connector
+
+To load your custom built-in connector extension during the Azure Functions runtime start process, you have to add the Azure Functions extension registration as a startup job and register your connector as a service provider in the service provider list. Based on the type of data that your built-in trigger needs as inputs, optionally add a converter. This example converts the **Document** data type for Cosmos DB documents to a **JObject** array.
+
+The following sections show how to register your custom built-in connector as an Azure Functions extension.
+
+### Create the startup job
+
+1. Create a startup class using the assembly attribute named **[assembly:WebJobsStartup]**.
+
+1. Implement the **IWebJobsStartup** interface. In the **Configure()** method, register the extension and inject the service provider.
+
+ For example, the following code snippet shows the startup class implementation for the sample custom built-in Cosmos DB connector:
+
+ ```csharp
+ using Microsoft.Azure.WebJobs;
+ using Microsoft.Azure.WebJobs.Hosting;
+ using Microsoft.Extensions.DependencyInjection.Extensions;
+
+   [assembly: Microsoft.Azure.WebJobs.Hosting.WebJobsStartup(typeof(ServiceProviders.CosmosDb.Extensions.CosmosDbServiceProviderStartup))]
+
+ namespace ServiceProviders.CosmosDb.Extensions
+ {
+ public class CosmosDbServiceProviderStartup : IWebJobsStartup
+ {
+         // Register the extension and inject the service provider.
+ public void Configure(IWebJobsBuilder builder)
+ {
+ // Register the extension.
+            builder.AddExtension<CosmosDbServiceProvider>();
+
+ // Use dependency injection (DI) for the trigger service operation provider.
+ builder.Services.TryAddSingleton<CosmosDbTriggerServiceOperationsProvider>();
+ }
+ }
+ }
+ ```
+
+ For more information, review [Register services - Use dependency injection in .NET Azure Functions](../azure-functions/functions-dotnet-dependency-injection.md#register-services).
+
+### Register the service provider
+
+Now, register the service provider implementation as an Azure Functions extension with the Azure Logic Apps engine. This example uses the built-in [Azure Cosmos DB trigger for Azure Functions](../azure-functions/functions-bindings-cosmosdb-v2-trigger.md?tabs=in-process%2Cfunctionsv2&pivots=programming-language-csharp) as a new trigger. This example also registers the new Cosmos DB service provider for an existing list of service providers, which is already part of the Azure Logic Apps extension. For more information, review [Register Azure Functions binding extensions](../azure-functions/functions-bindings-register.md).
+
+```csharp
+using Microsoft.Azure.Documents;
+using Microsoft.Azure.WebJobs.Description;
+using Microsoft.Azure.WebJobs.Host.Config;
+using Microsoft.Azure.Workflows.ServiceProviders.Abstractions;
+using Microsoft.WindowsAzure.ResourceStack.Common.Extensions;
+using Microsoft.WindowsAzure.ResourceStack.Common.Json;
+using Microsoft.WindowsAzure.ResourceStack.Common.Storage.Cosmos;
+using Newtonsoft.Json.Linq;
+using System;
+using System.Collections.Generic;
+
+namespace ServiceProviders.CosmosDb.Extensions
+{
+ [Extension("CosmosDbServiceProvider", configurationSection: "CosmosDbServiceProvider")]
+ public class CosmosDbServiceProvider : IExtensionConfigProvider
+ {
+ // Initialize a new instance for the CosmosDbServiceProvider class.
+ public CosmosDbServiceProvider(ServiceOperationsProvider serviceOperationsProvider, CosmosDbTriggerServiceOperationsProvider operationsProvider)
+ {
+ serviceOperationsProvider.RegisterService(serviceName: CosmosDBServiceOperationsProvider.ServiceName, serviceOperationsProviderId: CosmosDBServiceOperationsProvider.ServiceId, serviceOperationsProviderInstance: operationsProvider);
+ }
+
+ // Convert the Cosmos Document array to a generic JObject array.
+ public static JObject[] ConvertDocumentToJObject(IReadOnlyList<Document> data)
+ {
+ List<JObject> jobjects = new List<JObject>();
+
+ foreach(var doc in data)
+ {
+ jobjects.Add((JObject)doc.ToJToken());
+ }
+
+ return jobjects.ToArray();
+ }
+
+ // In the Initialize method, you can add any custom implementation.
+ public void Initialize(ExtensionConfigContext context)
+ {
+ // Convert the Cosmos Document list to a JObject array.
+ context.AddConverter<IReadOnlyList<Document>, JObject[]>(ConvertDocumentToJObject);
+ }
+ }
+}
+```
+
+### Add a converter
+
+Azure Logic Apps has a generic way to handle any Azure Functions built-in trigger by using the **JObject** array. However, if you want to convert the read-only list of Azure Cosmos DB documents into a **JObject** array, you can add a converter. When the converter is ready, register the converter as part of **ExtensionConfigContext** as shown earlier in this example:
+
+```csharp
+// Convert the Cosmos document list to a JObject array.
+context.AddConverter<IReadOnlyList<Document>, JObject[]>(ConvertDocumentToJObject);
+```
+
+### Class library diagram for implemented classes
+
+When you're done, review the following class diagram that shows the implementation for all the classes in the **Microsoft.Azure.Workflows.ServiceProvider.Extensions.CosmosDB.dll** extension bundle:
+
+* **CosmosDbServiceOperationsProvider**
+* **CosmosDbServiceProvider**
+* **CosmosDbServiceProviderStartup**
+
+![Conceptual code map diagram that shows complete class implementation.](./media/create-custom-built-in-connector-standard/methods-implementation-code-map-diagram.png)
+
+## Install your connector
+
+To add the NuGet reference from the previous section, in the extension bundle named **Microsoft.Azure.Workflows.ServiceProvider.Extensions.CosmosDB.dll**, update the **extensions.json** file. For more information, go to the **Azure/logicapps-connector-extensions** repo, and review the PowerShell script named [**add-extension.ps1**](https://github.com/Azure/logicapps-connector-extensions/blob/main/src/Common/tools/add-extension.ps1).
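+
+For illustration only, an entry in **extensions.json** follows the Azure Functions extension format. The **name** and the assembly-qualified **typeName** values in this sketch are examples that must match your own startup class and assembly:
+
+```json
+{
+  "extensions": [
+    {
+      "name": "CosmosDbServiceProvider",
+      "typeName": "ServiceProviders.CosmosDb.Extensions.CosmosDbServiceProviderStartup, Microsoft.Azure.Workflows.ServiceProvider.Extensions.CosmosDB, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"
+    }
+  ]
+}
+```
+
+The **add-extension.ps1** script described in the following steps makes this update for you.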
+
+1. Update the extension bundle to include the custom built-in connector.
+
+1. In Visual Studio Code, which should have the **Azure Logic Apps (Standard) for Visual Studio Code** extension installed, create a logic app project, and install the extension package by running the following **dotnet** command from a PowerShell prompt:
+
+ **PowerShell**
+
+ ```powershell
+ dotnet add package "Microsoft.Azure.Workflows.ServiceProvider.Extensions.CosmosDB" --version 1.0.0 --source $extensionPath
+ ```
+
+ Alternatively, from your logic app project's directory using a PowerShell prompt, run the PowerShell script named [**add-extension.ps1**](https://github.com/Azure/logicapps-connector-extensions/blob/main/src/Common/tools/add-extension.ps1):
+
+ ```powershell
+ .\add-extension.ps1 {Cosmos-DB-output-bin-NuGet-folder-path} CosmosDB
+ ```
+
+ **Bash**
+
+ To use Bash instead, from your logic app project's directory, run the PowerShell script with the following command:
+
+ ```bash
+ powershell -file add-extension.ps1 {Cosmos-DB-output-bin-NuGet-folder-path} CosmosDB
+ ```
+
+ If the extension for your custom built-in connector was successfully installed, you get output that looks similar to the following example:
+
+ ```output
+    C:\Users\{your-user-name}\Desktop\demoproj\cdbproj>powershell -file C:\myrepo\github\logicapps-connector-extensions\src\Common\tools\add-extension.ps1 C:\myrepo\github\logicapps-connector-extensions\src\CosmosDB\bin\Debug\ CosmosDB
+
+ Nuget extension path is C:\myrepo\github\logicapps-connector-extensions\src\CosmosDB\bin\Debug\
+ Extension dll path is C:\myrepo\github\logicapps-connector-extensions\src\CosmosDB\bin\Debug\netcoreapp3.1\Microsoft.Azure.Workflows.ServiceProvider.Extensions.CosmosDB.dll
+    Extension bundle module path is C:\Users\{your-user-name}\.azure-functions-core-tools\Functions\ExtensionBundles\Microsoft.Azure.Functions.ExtensionBundle.Workflows\1.1.9
+ EXTENSION PATH is C:\Users\{your-user-name}\.azure-functions-core-tools\Functions\ExtensionBundles\Microsoft.Azure.Functions.ExtensionBundle.Workflows\1.1.9\bin\extensions.json and dll Path is C:\myrepo\github\logicapps-connector-extensions\src\CosmosDB\bin\Debug\netcoreapp3.1\Microsoft.Azure.Workflows.ServiceProvider.Extensions.CosmosDB.dll
+ SUCCESS: The process "func.exe" with PID 26692 has been terminated.
+ Determining projects to restore...
+    Writing C:\Users\{your-user-name}\AppData\Local\Temp\tmpD343.tmp
+ info : Adding PackageReference for package 'Microsoft.Azure.Workflows.ServiceProvider.Extensions.CosmosDB' into project 'C:\Users\{your-user-name}\Desktop\demoproj\cdbproj.csproj'.
+ info : Restoring packages for C:\Users\{your-user-name}\Desktop\demoproj\cdbproj.csproj...
+ info : Package 'Microsoft.Azure.Workflows.ServiceProvider.Extensions.CosmosDB' is compatible with all the specified frameworks in project 'C:\Users\{your-user-name}\Desktop\demoproj\cdbproj.csproj'.
+ info : PackageReference for package 'Microsoft.Azure.Workflows.ServiceProvider.Extensions.CosmosDB' version '1.0.0' updated in file 'C:\Users\{your-user-name}\Desktop\demoproj\cdbproj.csproj'.
+ info : Committing restore...
+ info : Generating MSBuild file C:\Users\{your-user-name}\Desktop\demoproj\cdbproj\obj\cdbproj.csproj.nuget.g.props.
+ info : Generating MSBuild file C:\Users\{your-user-name}\Desktop\demoproj\cdbproj\obj\cdbproj.csproj.nuget.g.targets.
+ info : Writing assets file to disk. Path: C:\Users\{your-user-name}\Desktop\demoproj\cdbproj\obj\project.assets.json.
+ log : Restored C:\Users\{your-user-name}\Desktop\demoproj\cdbproj\cdbproj.csproj (in 1.5 sec).
+ Extension CosmosDB is successfully added.
+
+ C:\Users\{your-user-name}\Desktop\demoproj\cdbproj\>
+ ```
+
+1. If any **func.exe** process is running, make sure to close or exit that process before you continue to the next step.
+
+## Test your connector
+
+1. In Visual Studio Code, open your Standard logic app and blank workflow in the designer.
+
+1. On the designer surface, select **Choose an operation** to open the connector operations picker.
+
+1. Under the operations search box, select **Built-in**. In the search box, enter **cosmos db**.
+
+ The operations picker shows your custom built-in connector and trigger, for example:
+
+ ![Screenshot showing Visual Studio Code and the designer for a Standard logic app workflow with the new custom built-in Cosmos DB connector.](./media/create-custom-built-in-connector-standard/visual-studio-code-built-in-connector-picker.png)
+
+1. From the **Triggers** list, select your custom built-in trigger to start your workflow.
+
+1. On the connection pane, provide the following property values to create a connection, for example:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*Cosmos-DB-connection-name*> | The name for the Cosmos DB connection to create |
+ | **Connection String** | Yes | <*Cosmos-DB-connection-string*> | The connection string for the Azure Cosmos DB database collection or lease collection where you want to add each new received document. |
+ |||||
+
+ ![Screenshot showing the connection pane when using the connector for the first time.](./media/create-custom-built-in-connector-standard/visual-studio-code-built-in-connector-create-connection.png)
+
+1. When you're done, select **Create**.
+
+1. On the trigger properties pane, provide the following property values for your trigger, for example:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Database name** | Yes | <*Cosmos-DB-database-name*> | The name for the Cosmos DB database to use |
+ | **Collection name** | Yes | <*Cosmos-DB-collection-name*> | The name for the Cosmos DB collection where you want to add each new received document. |
+ |||||
+
+ ![Screenshot showing the trigger properties pane.](./media/create-custom-built-in-connector-standard/visual-studio-code-built-in-connector-trigger-properties.png)
+
+ For this example, in code view, the workflow definition, which is in the **workflow.json** file, has a `triggers` JSON object that appears similar to the following sample:
+
+ ```json
+ {
+ "definition": {
+ "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
+ "actions": {},
+ "contentVersion": "1.0.0.0",
+ "outputs": {},
+ "triggers": {
+ "When_a_document_is_received": {
+ "inputs":{
+ "parameters": {
+ "collectionName": "States",
+ "databaseName": "SampleCosmosDB"
+ },
+ "serviceProviderConfiguration": {
+ "connectionName": "cosmosDb",
+ "operationId": "whenADocumentIsReceived",
+ "serviceProviderId": "/serviceProviders/CosmosDb"
+ },
+ "splitOn": "@triggerOutputs()?['body']",
+ "type": "ServiceProvider"
+ }
+ }
+ }
+ },
+ "kind": "Stateful"
+ }
+ ```
+
+ The connection definition, which is in the **connections.json** file, has a `serviceProviderConnections` JSON object that appears similar to the following sample:
+
+ ```json
+ {
+ "serviceProviderConnections": {
+ "cosmosDb": {
+ "parameterValues": {
+ "connectionString": "@appsetting('cosmosDb_connectionString')"
+ },
+ "serviceProvider": {
+ "id": "/serviceProviders/CosmosDb"
+ },
+ "displayName": "myCosmosDbConnection"
+ }
+ },
+ "managedApiConnections": {}
+ }
+ ```
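+
+   The `@appsetting('cosmosDb_connectionString')` expression resolves the connection string from your project's app settings. For local debugging, as a sketch where the connection string value is only a placeholder, your **local.settings.json** file needs a matching entry whose name matches the one referenced in **connections.json**:
+
+   ```json
+   {
+     "IsEncrypted": false,
+     "Values": {
+       "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+       "cosmosDb_connectionString": "AccountEndpoint=https://{your-account}.documents.azure.com:443/;AccountKey={your-account-key};"
+     }
+   }
+   ```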
+
+1. In Visual Studio Code, on the **Run** menu, select **Start Debugging**, or press F5.
+
+1. To trigger your workflow, in the Azure portal, open your Azure Cosmos DB account. On the account menu, select **Data Explorer**. Browse to the database and collection that you specified in the trigger. Add an item to the collection.
+
+ ![Screenshot showing the Azure portal, Cosmos DB account, and Data Explorer open to the specified database and collection.](./media/create-custom-built-in-connector-standard/cosmos-db-account-test-add-item.png)
+
+## Next steps
+
+* [Source for sample custom built-in Cosmos DB connector - Azure Logic Apps Connector Extensions](https://github.com/Azure/logicapps-connector-extensions/tree/CosmosDB/src/CosmosDB)
+
+* [Built-in Service Bus trigger: batching and session handling](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/azure-logic-apps-running-anywhere-built-in-service-bus-trigger/ba-p/2079995)
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
To debug a stateless workflow more easily, you can enable the run history for th
1. To disable the run history when you're done, either set the `Workflows.{yourWorkflowName}.OperationOptions` property to `None`, or delete the property and its value.
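
   Both settings are app settings for your Standard logic app resource. As a sketch, assuming a hypothetical stateless workflow named `MyStatelessWorkflow`, the setting that enables run history looks like the following entry in your project's **local.settings.json** file when you develop locally. In the Azure portal, you add the same setting name and value under your logic app's **Configuration** > **Application settings** instead:

   ```json
   {
     "IsEncrypted": false,
     "Values": {
       "Workflows.MyStatelessWorkflow.OperationOptions": "WithStatelessRunHistory"
     }
   }
   ```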
+<a name="view-connections"></a>
+
+## View connections
+
+When you create connections within a workflow using [managed connectors](../connectors/managed.md) or [service provider based, built-in connectors](../connectors/built-in.md), these connections are actually separate Azure resources with their own resource definitions.
+
+1. From your logic app's menu, under **Workflows**, select **Connections**.
+
+1. Based on the connection type that you want to view, select one of the following options:
+
+ | Option | Description |
+ |--|-|
+ | **API Connections** | Connections created by managed connectors |
+   | **Service Provider Connections** | Connections created by built-in connectors based on the service provider interface implementation. Select a specific connection instance to view more information about that connection. To view the selected connection's underlying resource definition, select **JSON View**. |
+ | **JSON View** | The underlying resource definitions for all connections in the logic app |
+ |||
<a name="delete-from-designer"></a>

## Delete items from the designer
logic-apps Custom Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/custom-connector-overview.md
- Title: Custom connector topic links
-description: Links to topics about how to create, use, share, and certify custom connectors in Azure Logic Apps.
+ Title: Custom connectors
+description: Learn about creating custom connectors in Azure Logic Apps.
ms.suite: integration
Previously updated : 1/30/2018
Last updated : 05/17/2022
+# As a developer, I want to learn about the capability to create custom connectors with operations that I can use in my Azure Logic Apps workflows.
-# Custom connectors in Logic Apps
+# Custom connectors in Azure Logic Apps
-Without writing any code, you can build workflows and apps with
-[Azure Logic Apps](https://azure.microsoft.com/services/logic-apps),
-[Power Automate](https://flow.microsoft.com),
-and [Power Apps](https://powerapps.microsoft.com).
-To help you integrate apps, data, and business processes,
-these services offer [~200 connectors](/connectors/) -
-for Microsoft services and products, as well as other services,
-like GitHub, Salesforce, Twitter, and more.
+Without writing any code, you can quickly create automated integration workflows when you use the prebuilt connector operations in Azure Logic Apps. A connector helps your workflows connect and access data, events, and actions across other apps, services, systems, protocols, and platforms. Each connector offers operations as triggers, actions, or both that you can add to your workflows. By using these operations, you expand the capabilities for your cloud apps and on-premises apps to work with new and existing data.
-Sometimes though, you might want to call APIs, services, and systems that aren't available as prebuilt connectors.
-To support more tailored scenarios, you can build *custom connectors* with their own triggers and actions.
-The Connectors documentation site has complete basic and advanced tutorials about custom connectors.
-You can start with the [custom connector overview](/connectors/custom-connectors/),
-but you can also go directly to these topics for details about a specific area:
+Connectors in Azure Logic Apps are either *built in* or *managed*. A *built-in* connector runs natively on the Azure Logic Apps runtime, which means it's hosted in the same process as the runtime and provides higher throughput, low latency, and local connectivity. A *managed connector* is a proxy or a wrapper around an API, such as Office 365 or Salesforce, that helps the underlying service talk to Azure Logic Apps. Managed connectors are powered by the connector infrastructure in Azure and are deployed, hosted, run, and managed by Microsoft. You can choose from [hundreds of managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) to use with your workflows in Azure Logic Apps.
-* [Create a Logic Apps connector](/connectors/custom-connectors/create-logic-apps-connector)
+When you use a connector operation for the first time in a workflow, some connectors don't require that you create a connection first, but many other connectors require this step. Each connection that you create is actually a separate Azure resource that provides access to the target app, service, system, protocol, or platform.
-* [Create a custom connector from an OpenAPI definition](/connectors/custom-connectors/define-openapi-definition)
+Sometimes though, you might want to call REST APIs that aren't available as prebuilt connectors. To support more tailored scenarios, you can create your own [*custom connectors*](/connectors/custom-connectors/) to offer triggers and actions that aren't available as prebuilt operations.
-* [Create a custom connector from a Postman collection](/connectors/custom-connectors/define-postman-collection)
+This article provides an overview of custom connectors for [Consumption logic app workflows and Standard logic app workflows](logic-apps-overview.md). Each logic app type is powered by a different Azure Logic Apps runtime, hosted in multi-tenant Azure and single-tenant Azure respectively. For more information about connectors in Azure Logic Apps, review the following documentation:
-* [Use a custom connector from a logic app](/connectors/custom-connectors/use-custom-connector-logic-apps)
+* [About connectors in Azure Logic Apps](../connectors/apis-list.md)
+* [Built-in connectors in Azure Logic Apps](../connectors/built-in.md)
+* [Managed connectors in Azure Logic Apps](../connectors/managed.md)
+* [Connector overview](/connectors/connectors)
+* [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)
-* [Share custom connectors in your organization](/connectors/custom-connectors/share)
+<a name="custom-connector-consumption"></a>
-* [Submit your connectors for Microsoft certification](/connectors/custom-connectors/submit-certification)
+## Consumption logic apps
-* [Custom connector FAQ](/connectors/custom-connectors/faq)
+In [multi-tenant Azure Logic Apps](logic-apps-overview.md), you can create [custom connectors from Swagger-based or SOAP-based APIs](/connectors/custom-connectors/) up to [specific limits](../logic-apps/logic-apps-limits-and-config.md#custom-connector-limits) for use in Consumption logic app workflows. The [Connectors documentation](/connectors/connectors) provides more overview information about how to create custom connectors for Consumption logic apps, including complete basic and advanced tutorials. The following list also provides direct links to information about custom connectors for Consumption logic apps:
+
+ * [Create an Azure Logic Apps connector](/connectors/custom-connectors/create-logic-apps-connector)
+ * [Create a custom connector from an OpenAPI definition](/connectors/custom-connectors/define-openapi-definition)
+ * [Create a custom connector from a Postman collection](/connectors/custom-connectors/define-postman-collection)
+ * [Use a custom connector from a logic app](/connectors/custom-connectors/use-custom-connector-logic-apps)
+ * [Share custom connectors in your organization](/connectors/custom-connectors/share)
+ * [Submit your connectors for Microsoft certification](/connectors/custom-connectors/submit-certification)
+ * [Custom connector FAQ](/connectors/custom-connectors/faq)
+
+<a name="custom-connector-standard"></a>
+
+## Standard logic apps
+
+In [single-tenant Azure Logic Apps](logic-apps-overview.md), the redesigned Azure Logic Apps runtime powers Standard logic app workflows. This runtime differs from the multi-tenant Azure Logic Apps runtime that powers Consumption logic app workflows. The single-tenant runtime uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md), which provides a key capability for you to create your own [built-in connectors](../connectors/built-in.md) for anyone to use in Standard workflows. In most cases, the built-in version provides better performance, capabilities, pricing, and so on.
+
+When single-tenant Azure Logic Apps was officially released, new built-in connectors included Azure Blob Storage, Azure Event Hubs, Azure Service Bus, and SQL Server. Over time, this list of built-in connectors continues to grow. However, if you need connectors that aren't available in Standard logic app workflows, you can [create your own built-in connectors](create-custom-built-in-connector-standard.md) using the same extensibility model that's used by built-in connectors in Standard workflows.
+
+<a name="service-provider-interface-implementation"></a>
+
+### Built-in connectors as service providers
+
+In single-tenant Azure Logic Apps, a built-in connector that has the following attributes is called a *service provider*:
+
+* Is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md).
+
+* Provides access from a Standard logic app workflow to a service, such as Azure Blob Storage, Azure Service Bus, Azure Event Hubs, SFTP, and SQL Server.
+
+ Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Azure Active Directory (Azure AD), or a managed identity.
+
+* Runs in the same process as the redesigned Azure Logic Apps runtime.
+
+A built-in connector that's *not a service provider* has the following attributes:
+
+* Isn't based on the Azure Functions extensibility model.
+
+* Is directly implemented as a job within the Azure Logic Apps runtime, such as Schedule, HTTP, Request, and XML operations.
+
+No capability is currently available to create a non-service provider built-in connector or a new job type that runs directly in the Azure Logic Apps runtime. However, you can create your own built-in connectors using the service provider infrastructure.
+
+The following section provides more information about how the extensibility model works for custom built-in connectors.
+
+<a name="built-in-connector-extensibility-model"></a>
+
+### Built-in connector extensibility model
+
+Based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md), the built-in connector extensibility model in single-tenant Azure Logic Apps has a service provider infrastructure that you can use to [create, package, register, and install your own built-in connectors](create-custom-built-in-connector-standard.md) as Azure Functions extensions that anyone can use in their Standard workflows. This model includes custom built-in trigger capabilities that support exposing an [Azure Functions trigger or action](../azure-functions/functions-bindings-example.md) as a service provider trigger in your custom built-in connector.
+
+The following diagram shows the method implementations that the Azure Logic Apps designer and runtime expects for a custom built-in connector with an [Azure Functions-based trigger](../azure-functions/functions-bindings-example.md):
+
+![Conceptual diagram showing Azure Functions-based service provider infrastructure.](./media/custom-connector-overview/service-provider-azure-functions-based.png)
+
+The following sections provide more information about the interfaces that your connector needs to implement.
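+
+At a glance, a connector's provider class brings both interfaces together. The following skeleton is only a sketch with a hypothetical class name that assembles the method signatures shown later in this article. Verify the exact interface definitions in the **Microsoft.Azure.Workflows.ServiceProviders.Abstractions** package before you implement them:
+
+```csharp
+// Hypothetical skeleton only: the method signatures match the sections that follow.
+// Usings are omitted; the sample CosmosDbServiceOperationProvider.cs shows the full namespaces.
+public class MyServiceOperationsProvider : IServiceOperationsProvider, IServiceOperationsTriggerProvider
+{
+    // Operations manifest: high-level service metadata for the designer.
+    public ServiceOperationApi GetService() => throw new NotImplementedException();
+
+    // Operations manifest: the operations that the designer shows for your connector.
+    public IEnumerable<ServiceOperation> GetOperations(bool expandManifest) => throw new NotImplementedException();
+
+    // Operation invocation: connection details for the Azure Functions trigger binding.
+    public string GetBindingConnectionInformation(string operationId, InsensitiveDictionary<JToken> connectionParameters) => throw new NotImplementedException();
+
+    // Operation invocation: called for each action that runs during workflow execution.
+    public Task<ServiceOperationResponse> InvokeOperation(string operationId, InsensitiveDictionary<JToken> connectionParameters, ServiceOperationRequest serviceOperationRequest) => throw new NotImplementedException();
+
+    // Trigger support: returns the same string as the "type" parameter in the Azure Functions trigger binding.
+    public string GetFunctionTriggerType() => throw new NotImplementedException();
+
+    // GetFunctionTriggerDefinition() has a default implementation, so it's omitted here.
+}
+```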
+
+#### IServiceOperationsProvider
+
+This interface includes the methods that provide the operations manifest for your custom built-in connector.
+
+* Operations manifest
+
+ The operations manifest includes metadata about the implemented operations in your custom built-in connector. The Azure Logic Apps designer primarily uses this metadata to drive the authoring and monitoring experiences for your connector's operations. For example, the designer uses operation metadata to understand the input parameters required by a specific operation and to facilitate generating the outputs' property tokens, based on the schema for the operation's outputs.
+
+   The designer requires and uses the [**GetService()**](#getservice) and [**GetOperations()**](#getoperations) methods to query the operations that your connector provides and to show them on the designer surface. The **GetService()** method also specifies the connection's input parameters that are required by the designer.
+
+ For more information about these methods and their implementation, review the [Methods to implement](#method-implementation) section later in this article.
+
+* Operation invocations
+
+ Operation invocations are the method implementations used during workflow execution by the Azure Logic Apps runtime to call the specified operations in the workflow definition.
+
+ * If your trigger is an Azure Functions-based trigger type, the [**GetBindingConnectionInformation()**](#getbindingconnectioninformation) method is used by the runtime in Azure Logic Apps to provide the required connection parameters information to the Azure Functions trigger binding.
+
+ * If your connector has actions, the [**InvokeOperation()**](#invokeoperation) method is used by the runtime to call each action in your connector that runs during workflow execution. Otherwise, you don't have to implement this method.
+
+For more information about these methods and their implementation, review the [Methods to implement](#method-implementation) section later in this article.
+
+#### IServiceOperationsTriggerProvider
+
+Custom built-in trigger capabilities support adding or exposing an [Azure Functions trigger or action](../azure-functions/functions-bindings-example.md) as a service provider trigger in your custom built-in connector. To use the Azure Functions-based trigger type and the same Azure Functions binding as the Azure managed connector trigger, implement the following methods to provide the connection information and trigger bindings as required by Azure Functions.
+
+* The [**GetFunctionTriggerType()**](#getfunctiontriggertype) method is required to return the string that's the same as the **type** parameter in the Azure Functions trigger binding.
+
+* The [**GetFunctionTriggerDefinition()**](#getfunctiontriggerdefinition) method has a default implementation, so you don't need to explicitly implement this method. However, if you want to update the trigger's default behavior, such as providing extra parameters that the designer doesn't expose, you can implement this method and override the default behavior.
+
+<a name="method-implementation"></a>
+
+### Methods to implement
+
+The following sections provide more information about the methods that your connector needs to implement. For the complete sample, review [Sample CosmosDbServiceOperationProvider.cs](https://github.com/Azure/logicapps-connector-extensions/blob/CosmosDB/src/CosmosDB/Providers/CosmosDbServiceOperationProvider.cs) and [Create custom built-in connectors for Standard logic apps in single-tenant Azure Logic Apps](create-custom-built-in-connector-standard.md).
+
+#### GetService()
+
+The designer requires this method to get the high-level metadata for your service, including the service description, connection input parameters, capabilities, brand color, icon URL, and so on.
+
+```csharp
+public ServiceOperationApi GetService()
+{
+ return this.{custom-service-name-apis}.ServiceOperationServiceApi();
+}
+```
+
+For more information, review [Sample CosmosDbServiceOperationProvider.cs](https://github.com/Azure/logicapps-connector-extensions/blob/CosmosDB/src/CosmosDB/Providers/CosmosDbServiceOperationProvider.cs).
+
+#### GetOperations()
+
+The designer requires this method to get the operations implemented by your service. The operations list is based on the Swagger schema. The designer also uses the operation metadata to understand the input parameters for specific operations and generate the outputs as property tokens, based on the schema of the output for an operation.
+
+```csharp
+public IEnumerable<ServiceOperation> GetOperations(bool expandManifest)
+{
+ return expandManifest ? serviceOperationsList : GetApiOperations();
+}
+```
+
+For more information, review [Sample CosmosDbServiceOperationProvider.cs](https://github.com/Azure/logicapps-connector-extensions/blob/CosmosDB/src/CosmosDB/Providers/CosmosDbServiceOperationProvider.cs).
+
+#### GetBindingConnectionInformation()
+
+If you want to use the Azure Functions-based trigger type, this method provides the required connection parameters information to the Azure Functions trigger binding.
+
+```csharp
+public string GetBindingConnectionInformation(string operationId, InsensitiveDictionary<JToken> connectionParameters)
+{
+ return ServiceOperationsProviderUtilities
+ .GetRequiredParameterValue(
+ serviceId: ServiceId,
+            operationId: operationId,
+ parameterName: "connectionString",
+ parameters: connectionParameters)?
+ .ToValue<string>();
+}
+```
+
+For more information, review [Sample CosmosDbServiceOperationProvider.cs](https://github.com/Azure/logicapps-connector-extensions/blob/CosmosDB/src/CosmosDB/Providers/CosmosDbServiceOperationProvider.cs).
+
+#### InvokeOperation()
+
+If your custom built-in connector only has a trigger, you don't have to implement this method. However, if your connector has actions to implement, you have to implement the **InvokeOperation()** method, which is called for each action in your connector that runs during workflow execution. You can use any client, such as FTPClient, HTTPClient, and so on, as required by your connector's actions. This example uses HTTPClient.
+
+```csharp
+public async Task<ServiceOperationResponse> InvokeOperation(string operationId, InsensitiveDictionary<JToken> connectionParameters, ServiceOperationRequest serviceOperationRequest)
+{
+    JObject response;
+    using (var client = new HttpClient())
+    {
+        // Build 'httpRequestMessage' from 'serviceOperationRequest' as your action requires.
+        var httpResponse = await client.SendAsync(httpRequestMessage).ConfigureAwait(false);
+        response = JObject.Parse(await httpResponse.Content.ReadAsStringAsync().ConfigureAwait(false));
+    }
+    return new ServiceOperationResponse(body: response);
+}
+```
+
+For more information, review [Sample CosmosDbServiceOperationProvider.cs](https://github.com/Azure/logicapps-connector-extensions/blob/CosmosDB/src/CosmosDB/Providers/CosmosDbServiceOperationProvider.cs).
+
+#### GetFunctionTriggerType()
+
+To use an Azure Functions-based trigger as a trigger in your connector, you have to return the string that's the same as the **type** parameter in the Azure Functions trigger binding.
+
+The following example returns the string for the out-of-the-box built-in Azure Cosmos DB trigger, `"type": "cosmosDBTrigger"`:
+
+```csharp
+public string GetFunctionTriggerType()
+{
+ return "CosmosDBTrigger";
+}
+```
+
+For more information, review [Sample CosmosDbServiceOperationProvider.cs](https://github.com/Azure/logicapps-connector-extensions/blob/CosmosDB/src/CosmosDB/Providers/CosmosDbServiceOperationProvider.cs).
+
+#### GetFunctionTriggerDefinition()
+
+This method has a default implementation, so you don't need to implement it explicitly. However, if you want to update the trigger's default behavior, such as providing extra parameters that the designer doesn't expose, you can implement this method and override the default behavior.
+
+## Next steps
+
+When you're ready to start the implementation steps, continue to the following article:
+
+* [Create custom built-in connectors for Standard logic apps in single-tenant Azure Logic Apps](create-custom-built-in-connector-standard.md)
logic-apps Logic Apps Enterprise Integration Flatfile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-flatfile.md
Last updated 11/02/2021
# Encode and decode flat files in Azure Logic Apps
-Before you send XML content to a business partner in a business-to-business (B2B) scenario, you might want to encode that content first. By building a logic app workflow, you can encode and decode flat files by using the [built-in](../connectors/built-in.md#integration-account-built-in-actions) **Flat File** actions.
+Before you send XML content to a business partner in a business-to-business (B2B) scenario, you might want to encode that content first. By building a logic app workflow, you can encode and decode flat files by using the [built-in](../connectors/built-in.md#integration-account-built-in) **Flat File** actions.
Although no **Flat File** triggers are available, you can use a different trigger or action to get or feed the XML content from various sources into your workflow for encoding or decoding. For example, you can use the Request trigger, another app, or other [connectors supported by Azure Logic Apps](../connectors/apis-list.md). You can use **Flat File** actions with workflows in the [**Logic App (Consumption)** and **Logic App (Standard)** resource types](single-tenant-overview-compare.md).
This article shows how to add the Flat File encoding and decoding actions to an
* A [link to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md#link-account).
- * If you're using use the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), you don't store schemas in your integration account. Instead, you can [directly add schemas to your logic app resource](logic-apps-enterprise-integration-schemas.md) using either the Azure portal or Visual Studio Code. You can then use these schemas across multiple workflows within the *same logic app resource*.
+ * If you're using the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), you don't store schemas in your integration account. Instead, you can [directly add schemas to your logic app resource](logic-apps-enterprise-integration-schemas.md) using either the Azure portal or Visual Studio Code. You can then use these schemas across multiple workflows within the *same logic app resource*.
You still need an integration account to store other artifacts, such as partners, agreements, and certificates, along with using the [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), and [EDIFACT](logic-apps-enterprise-integration-edifact.md) operations. However, you don't need to link your logic app resource to your integration account, so the linking capability doesn't exist. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
The following table lists the values for custom connectors:
| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
|-|-|-|-|-|
| Custom connectors | 1,000 per Azure subscription | Unlimited | 1,000 per Azure subscription ||
-| Custom connectors - Number of APIs | SOAP-based: 50 | Not applicable | SOAP-based: 50 ||
+| APIs per service | SOAP-based: 50 | Not applicable | SOAP-based: 50 ||
+| Parameters per API | SOAP-based: 50 | Not applicable | SOAP-based: 50 ||
| Requests per minute for a custom connector | 500 requests per minute per connection | Based on your implementation | 2,000 requests per minute per *custom connector* ||
| Connection timeout | 2 min | Idle connection: <br>4 min <p><p>Active connection: <br>10 min | 2 min ||
||||||
logic-apps Logic Apps Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-pricing.md
The following table summarizes how the Consumption model handles metering and bi
|-|-|-|
| [*Built-in*](../connectors/built-in.md) | These operations run directly and natively with the Azure Logic Apps runtime. In the designer, you can find these operations under the **Built-in** label. <p>For example, the HTTP trigger and Request trigger are built-in triggers. The HTTP action and Response action are built-in actions. Other built-in operations include workflow control actions such as loops and conditions, data operations, batch operations, and others. | The Consumption model includes an *initial number of free built-in operations*, per Azure subscription, that a workflow can run. Above this number, built-in operation executions follow the [*Actions* pricing](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Note**: Some managed connector operations are *also* available as built-in operations, which are included in the initial free operations. Above the initially free operations, billing follows the [*Actions* pricing](https://azure.microsoft.com/pricing/details/logic-apps/), not the [*Standard* or *Enterprise* connector pricing](https://azure.microsoft.com/pricing/details/logic-apps/). |
| [*Managed connector*](../connectors/managed.md) | These operations run separately in Azure. In the designer, you can find these operations under the **Standard** or **Enterprise** label. | These operation executions follow the [*Standard* or *Enterprise* connector pricing](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Note**: Preview Enterprise connector operation executions follow the [Consumption *Standard* connector pricing](https://azure.microsoft.com/pricing/details/logic-apps/). |
-| [*Custom connector*](../connectors/apis-list.md#custom-apis-and-connectors) | These operations run separately in Azure. In the designer, you can find these operations under the **Custom** label. For limits number of connectors, throughput, and timeout, review [Custom connector limits in Azure Logic Apps](logic-apps-limits-and-config.md#custom-connector-limits). | These operation executions follow the [*Standard* connector pricing](https://azure.microsoft.com/pricing/details/logic-apps/). |
+| [*Custom connector*](../connectors/apis-list.md#custom-connectors-and-apis) | These operations run separately in Azure. In the designer, you can find these operations under the **Custom** label. For limits number of connectors, throughput, and timeout, review [Custom connector limits in Azure Logic Apps](logic-apps-limits-and-config.md#custom-connector-limits). | These operation executions follow the [*Standard* connector pricing](https://azure.microsoft.com/pricing/details/logic-apps/). |
||||

For more information about how the Consumption model works with operations that run inside other operations such as loops, process multiple items such as arrays, and retry policies, review [Other operation behavior](#other-operation-behavior).
The following table summarizes how the Standard model handles metering and billi
| Component | Metering and billing |
|--|-|
| Virtual CPU (vCPU) and memory | The Standard model *requires* that your logic app uses the **Workflow Standard** hosting plan and a pricing tier, which determines the resource levels and pricing rates that apply to compute and memory capacity. For more information, review [Pricing tiers in the Standard model](#standard-pricing-tiers). |
-| Trigger and action operations | The Standard model includes an *unlimited number* of free built-in operations that your workflow can run. <p>If your workflow uses any managed connector operations, metering for those operations applies to *each call*, while billing follows the [same *Standard* or *Enterprise* connector pricing as the Consumption plan](https://azure.microsoft.com/pricing/details/logic-apps). For more information, review [Trigger and action operations in the Standard model](#standard-operations). |
+| Trigger and action operations | The Standard model includes an *unlimited number* of free built-in operations that your workflow can run. <p>If your workflow uses any managed connector operations, metering applies to *each call*, while billing follows the [same *Standard* or *Enterprise* connector pricing as the Consumption plan](https://azure.microsoft.com/pricing/details/logic-apps). For more information, review [Trigger and action operations in the Standard model](#standard-operations). |
| Storage operations | Metering applies to any storage operations run by Azure Logic Apps. For example, storage operations run when the service saves inputs and outputs from your workflow's run history. Billing follows your chosen [pricing tier](#standard-pricing-tiers). For more information, review [Storage operations](#storage-operations). |
| Integration accounts | If you create an integration account for your logic app to use, metering is based on the integration account type that you create. Billing follows the [*Integration Account* pricing](https://azure.microsoft.com/pricing/details/logic-apps/). For more information, review [Integration accounts](#integration-accounts). |
|||
The following table summarizes how the Standard model handles metering and billi
|-|-|-|
| [*Built-in*](../connectors/built-in.md) | These operations run directly and natively with the Azure Logic Apps runtime. In the designer, you can find these operations under the **Built-in** label. <p>For example, the HTTP trigger and Request trigger are built-in triggers. The HTTP action and Response action are built-in actions. Other built-in operations include workflow control actions such as loops and conditions, data operations, batch operations, and others. | The Standard model includes unlimited free built-in operations. <p><p>**Note**: Some managed connector operations are *also* available as built-in operations. While built-in operations are free, the Standard model still meters and bills managed connector operations using the [same *Standard* or *Enterprise* connector pricing as the Consumption model](https://azure.microsoft.com/pricing/details/logic-apps/). |
| [*Managed connector*](../connectors/managed.md) | These operations run separately in Azure. In the designer, you can find these operations under the combined **Azure** label. | The Standard model meters and bills managed connector operations based on the [same *Standard* and *Enterprise* connector pricing as the Consumption model](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Note**: Preview Enterprise connector operations follow the [Consumption *Standard* connector pricing](https://azure.microsoft.com/pricing/details/logic-apps/). |
-| [*Custom connector*](../connectors/apis-list.md#custom-apis-and-connectors) | Currently, you can create and use only [custom built-in connector operations](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272) in single-tenant based logic app workflows. | The Standard model includes unlimited free built-in operations. For limits on throughput and timeout, review [Custom connector limits in Azure Logic Apps](logic-apps-limits-and-config.md#custom-connector-limits). |
+| [*Custom connector*](../connectors/apis-list.md#custom-connectors-and-apis) | Currently, you can create and use only [custom built-in connector operations](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272) in single-tenant based logic app workflows. | The Standard model includes unlimited free built-in operations. For limits on throughput and timeout, review [Custom connector limits in Azure Logic Apps](logic-apps-limits-and-config.md#custom-connector-limits). |
||||

For more information about how the Standard model works with operations that run inside other operations such as loops, process multiple items such as arrays, and retry policies, review [Other operation behavior](#other-operation-behavior).
The following table summarizes how the ISE model handles the following operation
|-|-|-|
| [*Built-in*](../connectors/built-in.md) | These operations run directly and natively with the Azure Logic Apps runtime and in the same ISE as your logic app workflow. In the designer, you can find these operations under the **Built-in** label, but each operation also displays the **CORE** label. <p>For example, the HTTP trigger and Request trigger are built-in triggers. The HTTP action and Response action are built-in actions. Other built-in operations include workflow control actions such as loops and conditions, data operations, batch operations, and others. | The ISE model includes these operations *for free*, but are subject to the [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). |
| [*Managed connector*](../connectors/managed.md) | Whether *Standard* or *Enterprise*, managed connector operations run in either your ISE or multi-tenant Azure, based on whether the connector or operation displays the **ISE** label. <p><p>- **ISE** label: These operations run in the same ISE as your logic app and work without requiring the [on-premises data gateway](#data-gateway). <p><p>- No **ISE** label: These operations run in multi-tenant Azure. | The ISE model includes both **ISE** and no **ISE** labeled operations *for free*, but are subject to the [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). |
-| [*Custom connector*](../connectors/apis-list.md#custom-apis-and-connectors) | In the designer, you can find these operations under the **Custom** label. | The ISE model includes these operations *for free*, but are subject to [custom connector limits in Azure Logic Apps](logic-apps-limits-and-config.md#custom-connector-limits). |
+| [*Custom connector*](../connectors/apis-list.md#custom-connectors-and-apis) | In the designer, you can find these operations under the **Custom** label. | The ISE model includes these operations *for free*, but are subject to [custom connector limits in Azure Logic Apps](logic-apps-limits-and-config.md#custom-connector-limits). |
||||

For more information about how the ISE model works with operations that run inside other operations such as loops, process multiple items such as arrays, and retry policies, review [Other operation behavior](#other-operation-behavior).
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
For more information about security in Azure, review these topics:
## Access to logic app operations
-On Consumption logic apps only, you can set up permissions so that only specific users or groups can run specific tasks, such as managing, editing, and viewing logic apps. To control their permissions, use [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). You can assign built-in or customized roles to members who have access to your Azure subscription. Azure Logic Apps has the following specific roles:
+For Consumption logic apps only, before you can create or manage logic apps and their connections, you need specific permissions, which are provided through roles using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). You can also set up permissions so that only specific users or groups can run specific tasks, such as managing, editing, and viewing logic apps. To control their permissions, you can assign built-in or customized roles to members who have access to your Azure subscription. Azure Logic Apps has the following specific roles:
* [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor): Lets you manage logic apps, but you can't change access to them.
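
  For example, the following Azure CLI sketch assigns the **Logic App Contributor** role at the scope of a single Consumption logic app. The user, subscription ID, resource group, and logic app name are placeholders for illustration:

  ```azurecli
  az role assignment create --assignee "user@contoso.com" \
      --role "Logic App Contributor" \
      --scope "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Logic/workflows/{logic-app-name}"
  ```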
logic-apps Manage Logic Apps With Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/manage-logic-apps-with-azure-portal.md
ms.suite: integration
Previously updated : 01/28/2022
Last updated : 04/01/2022

# Manage logic apps in the Azure portal
-You can manage logic apps using the [Azure portal](https://portal.azure.com) or [Visual Studio](manage-logic-apps-with-visual-studio.md). This article shows how to edit, disable, enable, or delete logic apps in the Azure portal. If you're new to Azure Logic Apps, see [What is Azure Logic Apps](logic-apps-overview.md)?
+This article shows how to edit, disable, enable, or delete Consumption logic apps with the Azure portal. You can also [manage Consumption logic apps in Visual Studio](manage-logic-apps-with-visual-studio.md).
+
+To manage Standard logic apps, review [Create a Standard workflow with single-tenant Azure Logic Apps in the Azure portal](create-single-tenant-workflows-azure-portal.md). If you're new to Azure Logic Apps, review [What is Azure Logic Apps](logic-apps-overview.md)?
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An existing logic app. To learn how to create a logic app in the Azure portal, see [Quickstart: Create your first workflow by using Azure Logic Apps - Azure portal](quickstart-create-first-logic-app-workflow.md).
+* An existing logic app. To learn how to create a logic app in the Azure portal, review [Quickstart: Create your first workflow by using Azure Logic Apps - Azure portal](quickstart-create-first-logic-app-workflow.md).
<a name="find-logic-app"></a>
You can manage logic apps using the [Azure portal](https://portal.azure.com) or
1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
-1. In the portal search box, enter `logic apps`, and select **Logic apps**.
+1. In the portal search box, enter **logic apps**, and select **Logic apps**.
1. From the logic apps list, find your logic app by either browsing or filtering the list.
You can manage logic apps using the [Azure portal](https://portal.azure.com) or
* **Access endpoint IP addresses** * **Connector outgoing IP addresses**
+<a name="view-connections"></a>
+
+## View connections
+
+When you create connections within a workflow using [managed connectors](../connectors/managed.md), these connections are actually separate Azure resources with their own resource definitions. To view and manage these connections, follow these steps:
+
+1. In the Azure portal, [find and open your logic app](#find-logic-app).
+
+1. From your logic app's menu, under **Development tools**, select **API Connections**.
+
+1. On the **API Connections** pane, select a specific connection instance, which shows more information about that connection. To view the underlying connection resource definition, select **JSON View**.
+ <a name="disable-enable-logic-apps"></a> ## Disable or enable logic apps
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/single-tenant-overview-compare.md
This table specifies the child workflow's behavior based on whether the parent a
The single-tenant model and **Logic App (Standard)** resource type include many current and new capabilities, for example:
-* Create logic apps and their workflows from [400+ managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) apps and services plus connectors for on-premises systems.
+* Create logic apps and their workflows from [hundreds of managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) apps and services plus connectors for on-premises systems.
* More managed connectors are now available as built-in operations and run similarly to other built-in operations, such as Azure Functions. Built-in operations run natively on the single-tenant Azure Logic Apps runtime. For example, new built-in operations include Azure Service Bus, Azure Event Hubs, SQL Server, MQ, DB2, and IBM Host File.
The single-tenant model and **Logic App (Standard)** resource type include many
> For the built-in SQL Server version, only the **Execute Query** action can directly connect to Azure > virtual networks without using the [on-premises data gateway](logic-apps-gateway-connection.md).
- * You can create your own built-in connectors for any service that you need by using the [single-tenant Azure Logic Apps extensibility framework](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). Similarly to built-in operations such as Azure Service Bus and SQL Server but unlike [custom managed connectors](../connectors/apis-list.md#custom-apis-and-connectors), which aren't currently supported, custom built-in connectors provide higher throughput, low latency, and local connectivity because they run in the same process as the single-tenant runtime.
+ * You can create your own built-in connectors for any service that you need by using the [single-tenant Azure Logic Apps extensibility framework](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). Similarly to built-in operations such as Azure Service Bus and SQL Server but unlike [custom managed connectors](../connectors/apis-list.md#custom-connectors-and-apis), which aren't currently supported, custom built-in connectors provide higher throughput, low latency, and local connectivity because they run in the same process as the single-tenant runtime.
The authoring capability is currently available only in Visual Studio Code, but isn't enabled by default. To create these connectors, [switch your project from extension bundle-based (Node.js) to NuGet package-based (.NET)](create-single-tenant-workflows-visual-studio-code.md#enable-built-in-connector-authoring). For more information, see [Azure Logic Apps Running Anywhere - Built-in connector extensibility](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272).
For the **Logic App (Standard)** resource, these capabilities have changed, or t
* The Gmail connector currently isn't supported.
- * [Custom managed connectors](../connectors/apis-list.md#custom-apis-and-connectors) currently aren't currently supported. However, you can create *custom built-in operations* when you use Visual Studio Code. For more information, review [Create single-tenant based workflows using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#enable-built-in-connector-authoring).
+ * [Custom managed connectors](../connectors/apis-list.md#custom-connectors-and-apis) aren't currently supported. However, you can create *custom built-in operations* when you use Visual Studio Code. For more information, review [Create single-tenant based workflows using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#enable-built-in-connector-authoring).
* **Authentication**: The following authentication types are currently unavailable for the **Logic App (Standard)** resource type:
mysql App Development Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/app-development-best-practices.md
- Title: App development best practices - Azure Database for MySQL
-description: Learn about best practices for building an app by using Azure Database for MySQL.
- Previously updated: 08/11/2020
-# Best practices for building an application with Azure Database for MySQL
--
-Here are some best practices to help you build a cloud-ready application by using Azure Database for MySQL. These best practices can reduce development time for your app.
-
-## Configuration of application and database resources
-
-### Keep the application and database in the same region
-
-Make sure all your dependencies are in the same region when deploying your application in Azure. Spreading instances across regions or availability zones creates network latency, which might affect the overall performance of your application.
-
-### Keep your MySQL server secure
-
-Configure your MySQL server to be [secure](./concepts-security.md) and not accessible publicly. Use one of these options to secure your server:
-- [Firewall rules](./concepts-firewall-rules.md)
-- [Virtual networks](./concepts-data-access-and-security-vnet.md)
-- [Azure Private Link](./concepts-data-access-security-private-link.md)
-
-For security, you must always connect to your MySQL server over SSL and configure your MySQL server and your application to use TLS 1.2. See [How to configure SSL/TLS](./concepts-ssl-connection-security.md).
-
-### Use advanced networking with AKS
-
-When accelerated networking is enabled on a VM, there is lower latency, reduced jitter, and decreased CPU utilization on the VM. To learn more, see [Best practices for Azure Kubernetes Service and Azure Database for MySQL](concepts-aks.md).
-
-### Tune your server parameters
-
-For read-heavy workloads, tuning the server parameters `tmp_table_size` and `max_heap_table_size` can help optimize performance. To calculate the values required for these variables, look at the total per-connection memory values and the base memory. The sum of per-connection memory parameters, excluding `tmp_table_size`, combined with the base memory accounts for the total memory of the server.
-
-To calculate the largest possible size of `tmp_table_size` and `max_heap_table_size`, use the following formula:
-
-`(total memory - (base memory + (sum of per-connection memory * # of connections))) / # of connections`
-
-> [!NOTE]
-> Total memory indicates the total amount of memory that the server has across the provisioned vCores. For example, in a General Purpose two-vCore Azure Database for MySQL server, the total memory will be 5 GB * 2. You can find more details about memory for each tier in the [pricing tier](./concepts-pricing-tiers.md) documentation.
->
-> Base memory indicates the memory variables, like `query_cache_size` and `innodb_buffer_pool_size`, that MySQL will initialize and allocate at server start. Per-connection memory, like `sort_buffer_size` and `join_buffer_size`, is memory that's allocated only when a query needs it.
-
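-For illustration, here's a minimal Azure CLI sketch that applies a computed value (in bytes) to both parameters. The resource group, server name, and the 64-MB value are placeholder assumptions, not recommendations; derive the real number from the formula above for your SKU:
-
-```azurecli
-# Placeholder names and value: compute the real number from the formula above.
-az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver --name tmp_table_size --value 67108864
-az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver --name max_heap_table_size --value 67108864
-```
-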
-### Create non-admin users
-
-[Create non-admin users](./howto-create-users.md) for each database. A common convention is to name each user after the database it accesses.
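-
-For example, a minimal SQL sketch of creating a non-admin user scoped to one database (the user name, database name, and password are placeholders):
-
-```sql
-CREATE USER 'appdb_user'@'%' IDENTIFIED BY '<secure_password>';
-GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appdb_user'@'%'; -- grant only the privileges the app needs
-FLUSH PRIVILEGES;
-```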
-
-### Reset your password
-
-You can [reset your password](./howto-create-manage-server-portal.md#update-admin-password) for your MySQL server by using the Azure portal.
-
-Resetting your server password for a production database can bring down your application. It's a good practice to reset the password for any production workloads at off-peak hours to minimize the impact on your application's users.
-
-## Performance and resiliency
-
-Here are a few tools and practices that you can use to help debug performance issues with your application.
-
-### Enable slow query logs to identify performance issues
-
-You can enable [slow query logs](./concepts-server-logs.md) and [audit logs](./concepts-audit-logs.md) on your server. Analyzing slow query logs can help identify performance bottlenecks for troubleshooting.
-
-Audit logs are also available through Azure Diagnostics logs in Azure Monitor logs, Azure Event Hubs, and storage accounts. See [How to troubleshoot query performance issues](./howto-troubleshoot-query-performance.md).
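-
-As a sketch, you can turn on the slow query log from the Azure CLI (resource names are placeholders; `long_query_time` is in seconds):
-
-```azurecli
-az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver --name slow_query_log --value ON
-az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver --name long_query_time --value 10
-```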
-
-### Use connection pooling
-
-Managing database connections can have a significant impact on the performance of the application as a whole. To optimize performance, you must reduce the number of times that connections are established and the time for establishing connections in key code paths. Use [connection pooling](./concepts-connectivity.md#access-databases-by-using-connection-pooling-recommended) to connect to Azure Database for MySQL to improve resiliency and performance.
-
-You can use the [ProxySQL](https://proxysql.com/) connection pooler to efficiently manage connections. Using a connection pooler can decrease idle connections and reuse existing connections, which helps you avoid hitting connection limits. See [How to set up ProxySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/connecting-efficiently-to-azure-database-for-mysql-with-proxysql/ba-p/1279842) to learn more.
-
-### Retry logic to handle transient errors
-
-Your application might experience [transient errors](./concepts-connectivity.md#handling-transient-errors) where connections to the database are dropped or lost intermittently. In such situations, the server is typically up and running again after one to two retries, within 5 to 10 seconds.
-
-A good practice is to wait for 5 seconds before your first retry. Then follow each retry by increasing the wait gradually, up to 60 seconds. Limit the maximum number of retries at which point your application considers the operation failed, so you can then further investigate. See [How to troubleshoot connection errors](./howto-troubleshoot-common-connection-issues.md) to learn more.
-
-### Enable read replication to mitigate failovers
-
-You can use [Data-in Replication](./howto-data-in-replication.md) for failover scenarios. When you're using read replicas, no automated failover between source and replica servers occurs.
-
-You'll notice a lag between the source and the replica because the replication is asynchronous. Replication lag can be influenced by many factors, like the size of the workload running on the source server and the latency between datacenters. In most cases, replica lag ranges from a few seconds to a couple of minutes.
-
-## Database deployment
-
-### Configure an Azure Database for MySQL task in your CI/CD deployment pipeline
-
-Occasionally, you need to deploy changes to your database. In such cases, you can use continuous integration (CI) and continuous delivery (CD) through [Azure Pipelines](https://azure.microsoft.com/services/devops/pipelines/) and use a task for [your MySQL server](/azure/devops/pipelines/tasks/deploy/azure-mysql-deployment) to update the database by running a custom script against it.
-
-### Use an effective process for manual database deployment
-
-During manual database deployment, follow these steps to minimize downtime or reduce the risk of failed deployment:
-
-1. Create a copy of a production database on a new database by using [mysqldump](https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html) or [MySQL Workbench](https://dev.mysql.com/doc/workbench/en/wb-admin-export-import-management.html).
-2. Update the new database with your new schema changes or updates needed for your database.
-3. Put the production database in a read-only state. You should not have write operations on the production database until deployment is completed.
-4. Test your application with the newly updated database from step 1.
-5. Deploy your application changes and make sure the application is now using the new database that has the latest updates.
-6. Keep the old production database so that you can roll back the changes. You can then evaluate to either delete the old production database or export it on Azure Storage if needed.
-
-> [!NOTE]
-> If the application is like an e-commerce app and you can't put it in read-only state, deploy the changes directly on the production database after making a backup. These changes should occur during off-peak hours with low traffic to the app to minimize the impact, because some users might experience failed requests.
->
-> Make sure your application code also handles any failed requests.
-
-### Use MySQL native metrics to see if your workload is exceeding in-memory temporary table sizes
-
-With a read-heavy workload, queries running against your MySQL server might exceed the in-memory temporary table sizes. Exceeding these sizes causes your server to switch to writing temporary tables to disk, which affects the performance of your application. To determine if your server is writing to disk as a result of exceeding temporary table size, look at the following metrics:
-
-```sql
-show global status like 'created_tmp_disk_tables';
-show global status like 'created_tmp_tables';
-```
-
-The `created_tmp_disk_tables` metric indicates how many tables were created on disk. The `created_tmp_tables` metric tells you how many temporary tables were formed in memory, given your workload. To determine if running a specific query will use temporary tables, run the [EXPLAIN](https://dev.mysql.com/doc/refman/8.0/en/explain.html) statement on the query. The detail in the `extra` column indicates `Using temporary` if the query will run using temporary tables.
-
-To calculate the percentage of your workload with queries spilling to disks, use your metric values in the following formula:
-
-`(created_tmp_disk_tables / (created_tmp_disk_tables + created_tmp_tables)) * 100`
-
-Ideally, this percentage should be less than 25%. If you see that the percentage is 25% or greater, we suggest modifying two server parameters, `tmp_table_size` and `max_heap_table_size`.
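-
-As a sketch, you can compute this percentage directly in SQL, assuming `performance_schema` is enabled (on MySQL 5.7 and later these counters are exposed in `performance_schema.global_status`):
-
-```sql
-SELECT (disk.val / (disk.val + mem.val)) * 100 AS pct_tmp_tables_on_disk
-FROM (SELECT VARIABLE_VALUE AS val FROM performance_schema.global_status
-      WHERE VARIABLE_NAME = 'Created_tmp_disk_tables') AS disk,
-     (SELECT VARIABLE_VALUE AS val FROM performance_schema.global_status
-      WHERE VARIABLE_NAME = 'Created_tmp_tables') AS mem;
-```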
-
-## Database schema and queries
-
-Here are a few tips to keep in mind when you build your database schema and your queries.
-
-### Use the right datatype for your table columns
-
-Using the right datatype based on the type of data you want to store can optimize storage and reduce errors that can occur because of incorrect datatypes.
-
-### Use indexes
-
-To avoid slow queries, you can use indexes. Indexes can help find rows with specific columns quickly. See [How to use indexes in MySQL](https://dev.mysql.com/doc/refman/8.0/en/mysql-indexes.html).
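-
-For example, a hypothetical `orders` table that's frequently filtered by customer would benefit from a secondary index (the table and column names are illustrative):
-
-```sql
-CREATE INDEX idx_orders_customer_id ON orders (customer_id);
-```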
-
-### Use EXPLAIN for your SELECT queries
-
-Use the `EXPLAIN` statement to get insights on what MySQL is doing to run your query. It can help you detect bottlenecks or issues with your query. See [How to use EXPLAIN to profile query performance](./howto-troubleshoot-query-performance.md).
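-
-A minimal sketch, again using the hypothetical `orders` table; check the `key` column of the output to confirm the expected index is used, and the `Extra` column for flags such as `Using temporary` or `Using filesort`:
-
-```sql
-EXPLAIN SELECT order_id, order_date FROM orders WHERE customer_id = 42;
-```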
mysql Concept Monitoring Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concept-monitoring-best-practices.md
- Title: Monitoring best practices - Azure Database for MySQL
-description: This article describes the best practices to monitor your Azure Database for MySQL.
- Previously updated: 11/23/2020
-# Best practices for monitoring Azure Database for MySQL - Single server
--
-Learn about the best practices that can be used to monitor your database operations and ensure that the performance is not compromised as data size grows. As we add new capabilities to the platform, we will continue to refine the best practices detailed in this section.
-
-## Layout of the current monitoring toolkit
-
-Azure Database for MySQL provides tools and methods you can use to easily monitor usage, add or remove resources (such as CPU, memory, or I/O), troubleshoot potential problems, and help improve the performance of a database. You can [monitor performance metrics](concepts-monitoring.md#metrics) on a regular basis to see the average, maximum, and minimum values for a variety of time ranges.
-
-You can [set up alerts](howto-alert-on-metric.md#create-an-alert-rule-on-a-metric-from-the-azure-portal) for a metric threshold, so that you're informed when the server reaches those limits and can take appropriate action.
-
-Monitor the database server to make sure that the resources assigned to the database can handle the application workload. If the database is hitting resource limits, consider:
-
-* Identifying and optimizing the top resource-consuming queries.
-* Adding more resources by upgrading the service tier.
-
-### CPU utilization
-
-Monitor CPU usage to see if the database is exhausting CPU resources. If CPU usage is 90% or more, you should scale up your compute by increasing the number of vCores or scale to the next pricing tier. Make sure that the throughput or concurrency is as expected as you scale the CPU up or down.
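-
-As an illustrative sketch, you can create such an alert from the Azure CLI; the resource names and IDs are placeholders, and `cpu_percent` is the CPU metric emitted by Azure Database for MySQL:
-
-```azurecli
-az monitor metrics alert create --name mysql-cpu-high --resource-group myresourcegroup --scopes /subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.DBforMySQL/servers/mydemoserver --condition "avg cpu_percent > 90" --description "CPU usage is at or above 90 percent"
-```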
-
-### Memory
-
-The amount of memory available for the database server is proportional to the [number of vCores](concepts-pricing-tiers.md). Make sure the memory is enough for the workload. Load test your application to verify the memory is sufficient for read and write operations. If the database memory consumption frequently grows beyond a defined threshold, you should upgrade your instance by increasing vCores or moving to a higher performance tier. Use [Query Store](concepts-query-store.md) and [Query Performance Recommendations](concepts-performance-recommendations.md) to identify the longest-running and most frequently executed queries, and explore opportunities to optimize them.
-
-### Storage
-
-The [amount of storage](howto-create-manage-server-portal.md#scale-compute-and-storage) provisioned for the MySQL server determines the IOPS for your server. The storage used by the service includes the database files, transaction logs, the server logs, and backup snapshots. Ensure that the consumed disk space does not consistently exceed 85 percent of the total provisioned disk space. If it does, you need to delete or archive data from the database server to free up space.
-
-### Network traffic
-
-**Network Receive Throughput, Network Transmit Throughput** - The rate of network traffic to and from the MySQL instance, in megabytes per second. Evaluate the throughput requirement for the server, and continuously monitor the traffic to determine whether throughput is lower than expected.
-
-### Database connections
-
-**Database Connections** - The number of client sessions connected to the Azure Database for MySQL should be aligned with the [connection limits for the selected SKU](concepts-server-parameters.md#max_connections) size.
-
-## Next steps
-
-* [Best practice for performance of Azure Database for MySQL](concept-performance-best-practices.md)
-* [Best practice for server operations using Azure Database for MySQL](concept-operation-excellence-best-practices.md)
mysql Concept Operation Excellence Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concept-operation-excellence-best-practices.md
- Title: MySQL server operational best practices - Azure Database for MySQL
-description: This article describes the best practices to operate your MySQL database on Azure.
- Previously updated: 11/23/2020
-# Best practices for server operations on Azure Database for MySQL -Single server
--
-Learn about the best practices for working with Azure Database for MySQL. As we add new capabilities to the platform, we will continue to focus on refining the best practices detailed in this section.
-
-## Azure Database for MySQL Operational Guidelines
-
-The following are operational guidelines that should be followed when working with your Azure Database for MySQL to improve the performance of your database:
-
-* **Co-location**: To reduce network latency, place the client and the database server in the same Azure region.
-
-* **Monitor your memory, CPU, and storage usage**: You can [set up alerts](howto-alert-on-metric.md) to notify you when usage patterns change or when you approach the capacity of your deployment, so that you can maintain system performance and availability.
-
-* **Scale up your DB instance**: You can [scale up](howto-create-manage-server-portal.md) when you are approaching storage capacity limits. You should have some buffer in storage and memory to accommodate unforeseen increases in demand from your applications. You can also turn on the [storage autogrow](howto-auto-grow-storage-portal.md) feature to ensure that the service automatically scales the storage as it nears the storage limit (see the CLI sketch after this list).
-
-* **Configure backups**: Enable [local or geo-redundant backups](howto-restore-server-portal.md#set-backup-configuration) based on the requirements of the business. You can also modify the retention period to control how long backups are available for business continuity.
-
-* **Increase I/O capacity**: If your database workload requires more I/O than you have provisioned, recovery or other transactional operations for your database will be slow. To increase the I/O capacity of a server instance, do any or all of the following:
-
 * Azure Database for MySQL provides IOPS scaling at the rate of three IOPS per GB of storage provisioned. [Increase the provisioned storage](howto-create-manage-server-portal.md#scale-storage-up) to scale the IOPS for better performance.
-
- * If you are already using Provisioned IOPS storage, provision [additional throughput capacity](howto-create-manage-server-portal.md#scale-storage-up).
-
-* **Scale compute**: A database workload can also be limited by CPU or memory, which can seriously affect transaction processing. Note that compute (pricing tier) can be scaled up or down only between the [General Purpose and Memory Optimized](concepts-pricing-tiers.md) tiers.
-
-* **Test for failover**: Manually test failover for your server instance to understand how long the process takes for your use case and to ensure that the application that accesses your server instance can automatically connect to the new server instance after failover.
-
-* **Use primary key**: Make sure your tables have a primary or unique key as you operate on Azure Database for MySQL. This helps considerably with operations such as taking backups and creating replicas, and it improves performance.
-
-* **Configure TTL value**: If your client application is caching the Domain Name Service (DNS) data of your server instances, set a time-to-live (TTL) value of less than 30 seconds. Because the underlying IP address of a server instance can change after a failover, caching the DNS data for an extended time can lead to connection failures if your application tries to connect to an IP address that no longer is in service.
-
-* Use connection pooling to avoid hitting the [maximum connection limits](concepts-server-parameters.md#max_connections) and use retry logic to avoid intermittent connection issues.
-
-* If you are using a replica, use [ProxySQL to balance load](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/scaling-an-azure-database-for-mysql-workload-running-on/ba-p/1105847) between the primary server and the readable secondary replica server.
-
-* When provisioning the resource, make sure you [enable autogrow](howto-auto-grow-storage-portal.md) for your Azure Database for MySQL. Autogrow doesn't add any additional cost, and it protects the database from storage bottlenecks that you might run into.
--
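-As a sketch of the autogrow and storage-scaling guidelines above in Azure CLI form (the resource group, server name, and storage size are placeholder assumptions):
-
-```azurecli
-# Turn on storage autogrow (placeholder resource names).
-az mysql server update --resource-group myresourcegroup --name mydemoserver --auto-grow Enabled
-
-# Scale provisioned storage to 512,000 MB (~500 GB); at three IOPS per GB this yields roughly 1,500 IOPS.
-az mysql server update --resource-group myresourcegroup --name mydemoserver --storage-size 512000
-```
-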
-### Using InnoDB with Azure Database for MySQL
-
-* The `ibdata1` system tablespace data file can't shrink or be purged by dropping data from a table, or by moving the table to file-per-table tablespaces.
-
-* For a database greater than 1 TB in size, you should create tables in the **innodb_file_per_table** tablespace. For a single table that is larger than 1 TB in size, you should [partition](https://dev.mysql.com/doc/refman/5.7/en/partitioning.html) the table.
-
-* For a server that has a large number of tablespaces, engine startup will be very slow due to the sequential tablespace scan during MySQL startup or failover.
-
-* Set `innodb_file_per_table = ON` before you create a table, if the total number of tables is less than 500.
-
-* If you have more than 500 tables in a database, review the size of each individual table. For a large table, you should still consider using the file-per-table tablespace to avoid having the system tablespace file reach the maximum storage limit.
-
-> [!NOTE]
-> For tables smaller than 5 GB in size, consider using the system tablespace:
-> ```sql
-> CREATE TABLE tbl_name ... TABLESPACE = innodb_system;
-> ```
-
-* [Partition](https://dev.mysql.com/doc/refman/5.7/en/partitioning.html) your table at table creation if you have a very large table that might grow beyond 1 TB.
-
-* Use multiple MySQL servers and spread the tables across those servers. Avoid putting too many tables on a single server if you have around 10,000 tables or more.
-
-## Next steps
-- [Best practice for performance of Azure Database for MySQL](concept-performance-best-practices.md)
-- [Best practice for monitoring your Azure Database for MySQL](concept-monitoring-best-practices.md)
mysql Concept Performance Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concept-performance-best-practices.md
- Title: Performance best practices - Azure Database for MySQL
-description: This article describes some recommendations to monitor and tune performance for your Azure Database for MySQL.
- Previously updated: 1/28/2021
-# Best practices for optimal performance of your Azure Database for MySQL - Single server
--
-Learn how to get best performance while working with your Azure Database for MySQL - Single server. As we add new capabilities to the platform, we will continue refining our recommendations in this section.
-
-## Physical Proximity
-
 Make sure you deploy an application and the database in the same region. A quick check before starting any performance benchmarking run is to determine the network latency between the client and the database by using a simple `SELECT 1` query.
-
-## Accelerated Networking
-
-Use accelerated networking for the application server if you are using Azure virtual machine, Azure Kubernetes, or App Services. Accelerated Networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the datapath, reducing latency, jitter, and CPU utilization, for use with the most demanding network workloads on supported VM types.
-
-## Connection Efficiency
-
-Establishing a new connection is always an expensive and time-consuming task. When an application requests a database connection, it should prioritize allocating existing idle database connections rather than creating a new one. Here are some options for good connection practices:
-- **ProxySQL**: Use [ProxySQL](https://proxysql.com/), which provides built-in connection pooling and can [load balance your workload](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/load-balance-read-replicas-using-proxysql-in-azure-database-for/ba-p/880042) to multiple read replicas on demand, without requiring changes in application code.
-- **Heimdall Data Proxy**: Alternatively, you can leverage Heimdall Data Proxy, a vendor-neutral proprietary proxy solution. It supports query caching and read/write split with replication lag detection. You can also refer to how to [accelerate MySQL performance with the Heimdall proxy](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/accelerate-mysql-performance-with-the-heimdall-proxy/ba-p/1063349).
-- **Persistent or long-lived connections**: If your application has short transactions or queries, typically with execution time under 5-10 ms, replace short connections with persistent connections. This replacement requires only minor changes to the code, but it has a major effect in terms of improving performance in many typical application scenarios. Make sure to set the timeout or close the connection when the transaction is complete.
-- **Replica**: If you are using a replica, use [ProxySQL](https://proxysql.com/) to balance load between the primary server and the readable secondary replica server. Learn [how to set up ProxySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/scaling-an-azure-database-for-mysql-workload-running-on/ba-p/1105847).
-
-## Data Import configurations
-- You can temporarily scale your instance to a higher SKU size before starting a data import operation, and then scale it down when the import is successful.
-- You can import your data with minimal downtime by using [Azure Database Migration Service (DMS)](https://datamigration.microsoft.com/) for online or offline migrations.
-
-## Azure Database for MySQL Memory Recommendations
-
-An Azure Database for MySQL performance best practice is to allocate enough RAM so that your working set resides almost completely in memory.
-- Check if the memory percentage being used is reaching the [limits](./concepts-pricing-tiers.md) by using the [metrics for the MySQL server](./concepts-monitoring.md).
-- Set up alerts on those numbers to ensure that you can take prompt action as the server reaches its limits. Based on the limits defined, check whether scaling up the database SKU, either to a higher compute size or to a better pricing tier, results in a dramatic increase in performance.
-- Scale up until your performance numbers no longer drop dramatically after a scaling operation. For information on monitoring a DB instance's metrics, see [MySQL DB Metrics](./concepts-monitoring.md#metrics).
-
-## Use InnoDB Buffer Pool Warmup
-
-After restarting an Azure Database for MySQL server, the data pages residing in storage are loaded as the tables are queried, which leads to increased latency and slower performance for the first execution of the queries. This may not be acceptable for latency-sensitive workloads.
-
-Utilizing InnoDB buffer pool warmup shortens the warmup period by reloading disk pages that were in the buffer pool before the restart rather than waiting for DML or SELECT operations to access corresponding rows.
-
-You can reduce the warmup period after restarting your Azure Database for MySQL server by configuring [InnoDB buffer pool server parameters](https://dev.mysql.com/doc/refman/8.0/en/innodb-preload-buffer-pool.html), which represents a performance advantage. InnoDB saves a percentage of the most recently used pages for each buffer pool at server shutdown and restores these pages at server startup.
-
-It is also important to note that improved performance comes at the expense of longer start-up time for the server. When this parameter is enabled, server startup and restart time is expected to increase depending on the IOPS provisioned on the server.
-
-We recommend testing and monitoring the restart time to ensure that the start-up/restart performance is acceptable, because the server is unavailable during that time. We don't recommend using this parameter with fewer than 1,000 provisioned IOPS (in other words, when the storage provisioned is less than 335 GB).
-
-To save the state of the buffer pool at server shutdown, set server parameter `innodb_buffer_pool_dump_at_shutdown` to `ON`. Similarly, set server parameter `innodb_buffer_pool_load_at_startup` to `ON` to restore the buffer pool state at server startup. You can control the impact on start-up/restart time by lowering and fine-tuning the value of server parameter `innodb_buffer_pool_dump_pct`. By default, this parameter is set to `25`.
-
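-As an illustrative sketch, both parameters can be set from the Azure CLI (placeholder resource names):
-
-```azurecli
-az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver --name innodb_buffer_pool_dump_at_shutdown --value ON
-az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver --name innodb_buffer_pool_load_at_startup --value ON
-```
-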
-> [!Note]
-> InnoDB buffer pool warmup parameters are only supported in general purpose storage servers with up to 16-TB storage. Learn more about [Azure Database for MySQL storage options here](./concepts-pricing-tiers.md#storage).
-
-## Next steps
-- [Best practice for server operations using Azure Database for MySQL](concept-operation-excellence-best-practices.md)
-- [Best practice for monitoring your Azure Database for MySQL](concept-monitoring-best-practices.md)
-- [Get started with Azure Database for MySQL](quickstart-create-mysql-server-database-using-azure-portal.md)
-
mysql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concept-reserved-pricing.md
- Title: Prepay for compute with reserved capacity - Azure Database for MySQL
-description: Prepay for Azure Database for MySQL compute resources with reserved capacity
- Previously updated: 10/06/2021
-# Prepay for Azure Database for MySQL compute resources with reserved instances
--
-Azure Database for MySQL now helps you save money by prepaying for compute resources, compared to pay-as-you-go prices. With Azure Database for MySQL reserved instances, you make an upfront commitment on a MySQL server for a one- or three-year period to get a significant discount on the compute costs. To purchase Azure Database for MySQL reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term.
-
-## How does the instance reservation work?
-You do not need to assign the reservation to specific Azure Database for MySQL servers. Already running Azure Database for MySQL servers, as well as newly deployed ones, automatically get the benefit of reserved pricing. By purchasing a reservation, you are prepaying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure Database for MySQL compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. A reservation does not cover software, networking, or storage charges associated with the MySQL database server. At the end of the reservation term, the billing benefit expires, and the Azure Database for MySQL servers are billed at the pay-as-you-go price. Reservations do not auto-renew. For pricing information, see the [Azure Database for MySQL reserved capacity offering](https://azure.microsoft.com/pricing/details/mysql/).
-
-You can buy Azure Database for MySQL reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](../cost-management-billing/reservations/prepare-buy-reservation.md). To buy the reserved capacity:
-
-* You must be in the owner role for at least one Enterprise or individual subscription with pay-as-you-go rates.
-* For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin on the subscription.
-* For Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Azure Database for MySQL reserved capacity. </br>
-
-For details on how enterprise customers and pay-as-you-go customers are charged for reservation purchases, see [Understand Azure reservation usage for your Enterprise enrollment](../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md) and [Understand Azure reservation usage for your Pay-As-You-Go subscription](../cost-management-billing/reservations/understand-reserved-instance-usage.md).
-
-## Reservation exchanges and refunds
-
-You can exchange a reservation for another reservation of the same type. You can also exchange a reservation from Azure Database for MySQL - Single Server to Flexible Server. It's also possible to refund a reservation if you no longer need it. The Azure portal can be used to exchange or refund a reservation. For more information, see [Self-service exchanges and refunds for Azure Reservations](../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md).
-
-## Reservation discount
-
-You may save up to 67% on compute costs with reserved instances. In order to find the discount for your case, please visit the [Reservation blade on the Azure portal](https://aka.ms/reservations) and check the savings per pricing tier and per region. Reserved instances help you manage your workloads, budget, and forecast better with an upfront payment for a one-year or three-year term. You can also exchange or cancel reservations as business needs change.
--
-## Determine the right database size before purchase
-
-The size of the reservation should be based on the total amount of compute used by the existing or soon-to-be-deployed servers within a specific region, using the same performance tier and hardware generation.
-
-For example, let's suppose that you are running one general purpose, Gen5 32-vCore MySQL database, and two memory optimized, Gen5 16-vCore MySQL databases. Further, let's suppose that you plan to deploy, within the next month, an additional general purpose, Gen5 32-vCore database server and one memory optimized, Gen5 16-vCore database server. Let's also suppose that you know you will need these resources for at least one year. In this case, you should purchase a 64 (2x32) vCore, one-year reservation for single database general purpose - Gen5, and a 48 (2x16 + 16) vCore, one-year reservation for single database memory optimized - Gen5.
--
-## Buy Azure Database for MySQL reserved capacity
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Select **All services** > **Reservations**.
-3. Select **Add** and then in the Purchase reservations pane, select **Azure Database for MySQL** to purchase a new reservation for your MySQL databases.
-4. Fill in the required fields. Existing or new databases that match the attributes you select qualify to get the reserved capacity discount. The actual number of Azure Database for MySQL servers that get the discount depends on the scope and quantity selected.
----
-The following table describes required fields.
-
-| Field | Description |
-| : | :- |
-| Subscription | The subscription used to pay for the Azure Database for MySQL reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for MySQL reserved capacity reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.
-| Scope | The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select: </br></br> **Shared**, the vCore reservation discount is applied to Azure Database for MySQL servers running in any subscriptions within your billing context. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For Pay-As-You-Go customers, the shared scope is all Pay-As-You-Go subscriptions created by the account administrator.</br></br> **Single subscription**, the vCore reservation discount is applied to Azure Database for MySQL servers in this subscription. </br></br> **Single resource group**, the reservation discount is applied to Azure Database for MySQL servers in the selected subscription and the selected resource group within that subscription.
-| Region | The Azure region that's covered by the Azure Database for MySQL reserved capacity reservation.
-| Deployment Type | The Azure Database for MySQL resource type that you want to buy the reservation for.
-| Performance Tier | The service tier for the Azure Database for MySQL servers.
-| Term | One year
-| Quantity | The amount of compute resources being purchased within the Azure Database for MySQL reserved capacity reservation. The quantity is a number of vCores in the selected Azure region and Performance tier that are being reserved and will get the billing discount. For example, if you are running or planning to run an Azure Database for MySQL servers with the total compute capacity of Gen5 16 vCores in the East US region, then you would specify quantity as 16 to maximize the benefit for all servers.
-
-## Reserved instances API support
-
-Use Azure APIs to programmatically get information for your organization about Azure service or software reservations. For example, use the APIs to:
-- Find reservations to buy
-- Buy a reservation
-- View purchased reservations
-- View and manage reservation access
-- Split or merge reservations
-- Change the scope of reservations
-
-For more information, see [APIs for Azure reservation automation](../cost-management-billing/reservations/reservation-apis.md).
-
-## vCore size flexibility
-
-vCore size flexibility helps you scale up or down within a performance tier and region, without losing the reserved capacity benefit.
-
-## How to view reserved instance purchase details
-
-You can view your reserved instance purchase details via the [Reservations menu on the left side of the Azure portal](https://aka.ms/reservations). For more information, see [How a reservation discount is applied to Azure Database for MySQL](../cost-management-billing/reservations/understand-reservation-charges-mysql.md).
-
-## Reserved instance expiration
-
-You'll receive email notifications: the first one 30 days prior to reservation expiry, and another at expiration. Once the reservation expires, deployed VMs will continue to run and be billed at a pay-as-you-go rate. For more information, see [Reserved Instances for Azure Database for MySQL](../cost-management-billing/reservations/understand-reservation-charges-mysql.md).
-
-## Need help? Contact us
-
-If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-
-## Next steps
-
-The vCore reservation discount is applied automatically to the number of Azure Database for MySQL servers that match the Azure Database for MySQL reserved capacity reservation scope and attributes. You can update the scope of the Azure Database for MySQL reserved capacity reservation through the Azure portal, PowerShell, the CLI, or the API.
-To learn how to manage the Azure Database for MySQL reserved capacity, see manage Azure Database for MySQL reserved capacity.
-
-To learn more about Azure Reservations, see the following articles:
-
-* [What are Azure Reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md)?
-* [Manage Azure Reservations](../cost-management-billing/reservations/manage-reserved-vm-instance.md)
-* [Understand Azure Reservations discount](../cost-management-billing/reservations/understand-reservation-charges.md)
-* [Understand reservation usage for your Pay-As-You-Go subscription](../cost-management-billing/reservations/understand-reservation-charges-mysql.md)
-* [Understand reservation usage for your Enterprise enrollment](../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
-* [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
mysql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-aks.md
- Title: Connect to Azure Kubernetes Service - Azure Database for MySQL
-description: Learn about connecting Azure Kubernetes Service with Azure Database for MySQL
- Previously updated: 07/14/2020
-# Best practices for Azure Kubernetes Service and Azure Database for MySQL
--
-Azure Kubernetes Service (AKS) provides a managed Kubernetes cluster you can use in Azure. Below are some options to consider when using AKS and Azure Database for MySQL together to create an application.
-
-## Create Database before creating the AKS cluster
-
-Azure Database for MySQL has two deployment options:
-- Single Server
-- Flexible Server
-
-Single Server supports a single availability zone, and Flexible Server supports multiple availability zones. AKS also supports enabling single or multiple availability zones. Create the database server first to see which availability zone the server is in, and then create the AKS cluster in the same availability zone. This can improve performance for the application by reducing networking latency.
-
-## Use Accelerated networking
-
-Use accelerated networking-enabled underlying VMs in your AKS cluster. When accelerated networking is enabled on a VM, there is lower latency, reduced jitter, and decreased CPU utilization on the VM. Learn more about how accelerated networking works, the supported OS versions, and supported VM instances for [Linux](../virtual-network/create-vm-accelerated-networking-cli.md).
-
-Since November 2018, AKS has supported accelerated networking on those supported VM instances. Accelerated networking is enabled by default on new AKS clusters that use those VMs.
-
-You can confirm whether your AKS cluster has accelerated networking:
-
-1. Go to the Azure portal and select your AKS cluster.
-2. Select the Properties tab.
-3. Copy the name of the **Infrastructure Resource Group**.
-4. Use the portal search bar to locate and open the infrastructure resource group.
-5. Select a VM in that resource group.
-6. Go to the VM's **Networking** tab.
-7. Confirm whether **Accelerated networking** is 'Enabled.'
-
-Or through the Azure CLI using the following two commands:
-
-```azurecli
-az aks show --resource-group myResourceGroup --name myAKSCluster --query "nodeResourceGroup"
-```
-
-The output will be the generated resource group that AKS creates containing the network interface. Take the "nodeResourceGroup" name and use it in the next command. **EnableAcceleratedNetworking** will either be true or false:
-
-```azurecli
-az network nic list --resource-group nodeResourceGroup -o table
-```
-
-## Use Azure premium fileshare
-
 Use [Azure premium fileshare](../storage/files/storage-how-to-create-file-share.md?tabs=azure-portal) for persistent storage that can be used by one or many pods and can be dynamically or statically provisioned. Azure premium fileshare gives you the best performance for your application if you expect a large number of I/O operations on the file storage. To learn more, see [how to enable Azure Files](../aks/azure-files-dynamic-pv.md).
-
-## Next steps
-
-Create an AKS cluster [using the Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md).
mysql Concepts Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-audit-logs.md
- Title: Audit logs - Azure Database for MySQL
-description: Describes the audit logs available in Azure Database for MySQL, and the available parameters for enabling logging levels.
- Previously updated: 6/24/2020
-# Audit Logs in Azure Database for MySQL
--
-In Azure Database for MySQL, the audit log is available to users. The audit log can be used to track database-level activity and is commonly used for compliance.
-
-## Configure audit logging
-
->[!IMPORTANT]
-> It is recommended to only log the event types and users required for your auditing purposes, to ensure that your server's performance is not heavily impacted and that the minimum amount of data is collected.
-
-By default the audit log is disabled. To enable it, set `audit_log_enabled` to ON.
-
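-For example, a minimal Azure CLI sketch (placeholder resource names):
-
-```azurecli
-az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver --name audit_log_enabled --value ON
-```
-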
-Other parameters you can adjust include:
-- `audit_log_events`: Controls the events to be logged. See the table below for specific audit events.
-- `audit_log_include_users`: MySQL users to be included for logging. The default value for this parameter is empty, which will include all users for logging. This parameter has higher priority than `audit_log_exclude_users`. The maximum length of the parameter is 512 characters.
-- `audit_log_exclude_users`: MySQL users to be excluded from logging. The maximum length of the parameter is 512 characters.
-
-> [!NOTE]
-> `audit_log_include_users` has higher priority than `audit_log_exclude_users`. For example, if `audit_log_include_users` = `demouser` and `audit_log_exclude_users` = `demouser`, the user will be included in the audit logs because `audit_log_include_users` has higher priority.
-
-| **Event** | **Description** |
-|||
-| `CONNECTION` | - Connection initiation (successful or unsuccessful) <br> - User reauthentication with different user/password during session <br> - Connection termination |
-| `DML_SELECT`| SELECT queries |
-| `DML_NONSELECT` | INSERT/DELETE/UPDATE queries |
-| `DML` | DML = DML_SELECT + DML_NONSELECT |
-| `DDL` | Queries like "DROP DATABASE" |
-| `DCL` | Queries like "GRANT PERMISSION" |
-| `ADMIN` | Queries like "SHOW STATUS" |
-| `GENERAL` | All in DML_SELECT, DML_NONSELECT, DML, DDL, DCL, and ADMIN |
-| `TABLE_ACCESS` | - Available for MySQL 5.7 and MySQL 8.0 <br> - Table read statements, such as SELECT or INSERT INTO ... SELECT <br> - Table delete statements, such as DELETE or TRUNCATE TABLE <br> - Table insert statements, such as INSERT or REPLACE <br> - Table update statements, such as UPDATE |
-
-## Access audit logs
-
-Audit logs are integrated with Azure Monitor Diagnostic Logs. Once you've enabled audit logs on your MySQL server, you can emit them to Azure Monitor logs, Event Hubs, or Azure Storage. To learn more about how to enable diagnostic logs in the Azure portal, see the [audit log portal article](howto-configure-audit-logs-portal.md#set-up-diagnostic-logs).
-
->[!Note]
->Premium storage accounts are not supported if you send the logs to Azure Storage via diagnostic settings.
-
-## Diagnostic Logs Schemas
-
-The following sections describe what's output by MySQL audit logs based on the event type. Depending on the output method, the fields included and the order in which they appear may vary.
-
-### Connection
-
-| **Property** | **Description** |
-|||
-| `TenantId` | Your tenant ID |
-| `SourceSystem` | `Azure` |
-| `TimeGenerated [UTC]` | Time stamp when the log was recorded in UTC |
-| `Type` | Type of the log. Always `AzureDiagnostics` |
-| `SubscriptionId` | GUID for the subscription that the server belongs to |
-| `ResourceGroup` | Name of the resource group the server belongs to |
-| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` |
-| `ResourceType` | `Servers` |
-| `ResourceId` | Resource URI |
-| `Resource` | Name of the server |
-| `Category` | `MySqlAuditLogs` |
-| `OperationName` | `LogEvent` |
-| `LogicalServerName_s` | Name of the server |
-| `event_class_s` | `connection_log` |
-| `event_subclass_s` | `CONNECT`, `DISCONNECT`, `CHANGE USER` (only available for MySQL 5.7) |
-| `connection_id_d` | Unique connection ID generated by MySQL |
-| `host_s` | Blank |
-| `ip_s` | IP address of client connecting to MySQL |
-| `user_s` | Name of user executing the query |
-| `db_s` | Name of database connected to |
-| `\_ResourceId` | Resource URI |
-
-### General
-
-The schema below applies to the GENERAL, DML_SELECT, DML_NONSELECT, DML, DDL, DCL, and ADMIN event types.
-
-> [!NOTE]
-> For `sql_text`, the log will be truncated if it exceeds 2048 characters.
-
-| **Property** | **Description** |
-|||
-| `TenantId` | Your tenant ID |
-| `SourceSystem` | `Azure` |
-| `TimeGenerated [UTC]` | Time stamp when the log was recorded in UTC |
-| `Type` | Type of the log. Always `AzureDiagnostics` |
-| `SubscriptionId` | GUID for the subscription that the server belongs to |
-| `ResourceGroup` | Name of the resource group the server belongs to |
-| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` |
-| `ResourceType` | `Servers` |
-| `ResourceId` | Resource URI |
-| `Resource` | Name of the server |
-| `Category` | `MySqlAuditLogs` |
-| `OperationName` | `LogEvent` |
-| `LogicalServerName_s` | Name of the server |
-| `event_class_s` | `general_log` |
-| `event_subclass_s` | `LOG`, `ERROR`, `RESULT` (only available for MySQL 5.6) |
-| `event_time` | Query start time in UTC timestamp |
-| `error_code_d` | Error code if query failed. `0` means no error |
-| `thread_id_d` | ID of thread that executed the query |
-| `host_s` | Blank |
-| `ip_s` | IP address of client connecting to MySQL |
-| `user_s` | Name of user executing the query |
-| `sql_text_s` | Full query text |
-| `\_ResourceId` | Resource URI |
-
-### Table access
-
-> [!NOTE]
-> Table access logs are only output for MySQL 5.7.<br>For `sql_text`, the log will be truncated if it exceeds 2048 characters.
-
-| **Property** | **Description** |
-|||
-| `TenantId` | Your tenant ID |
-| `SourceSystem` | `Azure` |
-| `TimeGenerated [UTC]` | Time stamp when the log was recorded in UTC |
-| `Type` | Type of the log. Always `AzureDiagnostics` |
-| `SubscriptionId` | GUID for the subscription that the server belongs to |
-| `ResourceGroup` | Name of the resource group the server belongs to |
-| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` |
-| `ResourceType` | `Servers` |
-| `ResourceId` | Resource URI |
-| `Resource` | Name of the server |
-| `Category` | `MySqlAuditLogs` |
-| `OperationName` | `LogEvent` |
-| `LogicalServerName_s` | Name of the server |
-| `event_class_s` | `table_access_log` |
-| `event_subclass_s` | `READ`, `INSERT`, `UPDATE`, or `DELETE` |
-| `connection_id_d` | Unique connection ID generated by MySQL |
-| `db_s` | Name of database accessed |
-| `table_s` | Name of table accessed |
-| `sql_text_s` | Full query text |
-| `\_ResourceId` | Resource URI |
-
-## Analyze logs in Azure Monitor Logs
-
-Once your audit logs are piped to Azure Monitor Logs through Diagnostic Logs, you can perform further analysis of your audited events. Below are some sample queries to help you get started. Make sure to update the queries below with your server name.
-- List GENERAL events on a particular server
-
- ```kusto
- AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
- | where Category == 'MySqlAuditLogs' and event_class_s == "general_log"
- | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
- | order by TimeGenerated asc nulls last
- ```
-- List CONNECTION events on a particular server
-
- ```kusto
- AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
- | where Category == 'MySqlAuditLogs' and event_class_s == "connection_log"
- | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
- | order by TimeGenerated asc nulls last
- ```
-- Summarize audited events on a particular server
-
- ```kusto
- AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
- | where Category == 'MySqlAuditLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
- | summarize count() by event_class_s, event_subclass_s, user_s, ip_s
- ```
-- Graph the audit event type distribution on a particular server
-
- ```kusto
- AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
- | where Category == 'MySqlAuditLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
- | summarize count() by LogicalServerName_s, bin(TimeGenerated, 5m)
- | render timechart
- ```
-- List audited events across all MySQL servers with Diagnostic Logs enabled for audit logs
-
- ```kusto
- AzureDiagnostics
- | where Category == 'MySqlAuditLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
- | order by TimeGenerated asc nulls last
- ```
-
-## Next steps
--- [How to configure audit logs in the Azure portal](howto-configure-audit-logs-portal.md)
mysql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-azure-ad-authentication.md
- Title: Active Directory authentication - Azure Database for MySQL
-description: Learn about the concepts of Azure Active Directory for authentication with Azure Database for MySQL
- Previously updated: 07/23/2020
-# Use Azure Active Directory for authenticating with MySQL
--
-Microsoft Azure Active Directory (Azure AD) authentication is a mechanism of connecting to Azure Database for MySQL using identities defined in Azure AD.
-With Azure AD authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management.
-
-Benefits of using Azure AD include:
-- Authentication of users across Azure Services in a uniform way
-- Management of password policies and password rotation in a single place
-- Multiple forms of authentication supported by Azure Active Directory, which can eliminate the need to store passwords
-- Customers can manage database permissions using external (Azure AD) groups.
-- Azure AD authentication uses MySQL database users to authenticate identities at the database level
-- Support of token-based authentication for applications connecting to Azure Database for MySQL
-
-To configure and use Azure Active Directory authentication, use the following process:
-
-1. Create and populate Azure Active Directory with user identities as needed.
-2. Optionally associate or change the Active Directory currently associated with your Azure subscription.
-3. Create an Azure AD administrator for the Azure Database for MySQL server.
-4. Create database users in your database mapped to Azure AD identities.
-5. Connect to your database by retrieving a token for an Azure AD identity and logging in.
-
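-As an illustration of step 4, here's a sketch that assumes the `CREATE AADUSER` syntax supported by Azure Database for MySQL, with a placeholder identity; run it while connected as the Azure AD administrator:
-
-```sql
-CREATE AADUSER 'user1@yourtenant.onmicrosoft.com';
-```
-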
-> [!NOTE]
-> To learn how to create and populate Azure AD, and then configure Azure AD with Azure Database for MySQL, see [Configure and sign in with Azure AD for Azure Database for MySQL](howto-configure-sign-in-azure-ad-authentication.md).
-
-## Architecture
-
-The following high-level diagram summarizes how authentication works using Azure AD authentication with Azure Database for MySQL. The arrows indicate communication pathways.
-
-![authentication flow][1]
-
-## Administrator structure
-
-When using Azure AD authentication, there are two Administrator accounts for the MySQL server; the original MySQL administrator and the Azure AD administrator. Only the administrator based on an Azure AD account can create the first Azure AD contained database user in a user database. The Azure AD administrator login can be an Azure AD user or an Azure AD group. When the administrator is a group account, it can be used by any group member, enabling multiple Azure AD administrators for the MySQL server. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the MySQL server. Only one Azure AD administrator (a user or group) can be configured at any time.
-
-![admin structure][2]
-
-## Permissions
-
-To create new users that can authenticate with Azure AD, you must be the designated Azure AD administrator. This user is assigned by configuring the Azure AD Administrator account for a specific Azure Database for MySQL server.
-
-To create a new Azure AD database user, you must connect as the Azure AD administrator. This is demonstrated in [Configure and Login with Azure AD for Azure Database for MySQL](howto-configure-sign-in-azure-ad-authentication.md).
-
-Azure AD authentication is only possible if an Azure AD admin was created for the Azure Database for MySQL server. If the Azure Active Directory admin was removed from the server, existing Azure Active Directory users created previously can no longer connect to the database by using their Azure Active Directory credentials.
-
-## Connecting using Azure AD identities
-
-Azure Active Directory authentication supports the following methods of connecting to a database using Azure AD identities:
-- Azure Active Directory Password
-- Azure Active Directory Integrated
-- Azure Active Directory Universal with MFA
-- Using Active Directory Application certificates or client secrets
-- [Managed Identity](howto-connect-with-managed-identity.md)
-
-Once you have authenticated against the Active Directory, you then retrieve a token. This token is your password for logging in.
-
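-For example, a sketch of retrieving such a token with the Azure CLI (sign in with `az login` first); you then supply the printed token as the password when you connect:
-
-```azurecli
-az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv
-```
-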
-Please note that management operations, such as adding new users, are only supported for Azure AD user roles at this point.
-
-> [!NOTE]
-> For more details on how to connect with an Active Directory token, see [Configure and sign in with Azure AD for Azure Database for MySQL](howto-configure-sign-in-azure-ad-authentication.md).
-
-## Additional considerations
-- Azure Active Directory authentication is only available for MySQL 5.7 and newer.
-- Only one Azure AD administrator can be configured for an Azure Database for MySQL server at any time.
-- Only an Azure AD administrator for MySQL can initially connect to the Azure Database for MySQL using an Azure Active Directory account. The Active Directory administrator can configure subsequent Azure AD database users.
-- If a user is deleted from Azure AD, that user will no longer be able to authenticate with Azure AD, and therefore it will no longer be possible to acquire an access token for that user. In this case, although the matching user will still be in the database, it will not be possible to connect to the server with that user.
-> [!NOTE]
-> Sign-in with a deleted Azure AD user is still possible until the token expires (up to 60 minutes from token issuance). If you also remove the user from Azure Database for MySQL, this access is revoked immediately.
-- If the Azure AD admin is removed from the server, the server will no longer be associated with an Azure AD tenant, and therefore all Azure AD logins will be disabled for the server. Adding a new Azure AD admin from the same tenant will re-enable Azure AD logins.
-- Azure Database for MySQL matches access tokens to the Azure Database for MySQL user using the user's unique Azure AD user ID, as opposed to using the username. This means that if an Azure AD user is deleted in Azure AD and a new user is created with the same name, Azure Database for MySQL considers that a different user. Therefore, if a user is deleted from Azure AD and a new user with the same name is added, the new user will not be able to connect as the existing database user.
-
-## Next steps
-
-- To learn how to create and populate Azure AD, and then configure Azure AD with Azure Database for MySQL, see [Configure and sign in with Azure AD for Azure Database for MySQL](howto-configure-sign-in-azure-ad-authentication.md).
-- For an overview of logins, and database users for Azure Database for MySQL, see [Create users in Azure Database for MySQL](howto-create-users.md).
-
-<!--Image references-->
-
-[1]: ./media/concepts-azure-ad-authentication/authentication-flow.png
-[2]: ./media/concepts-azure-ad-authentication/admin-structure.png
mysql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-azure-advisor-recommendations.md
- Title: Azure Advisor for MySQL
-description: Learn about Azure Advisor recommendations for MySQL.
-
-Previously updated: 04/08/2021
-
-# Azure Advisor for MySQL
--
-Learn about how Azure Advisor is applied to Azure Database for MySQL and get answers to common questions.
-## What is Azure Advisor for MySQL?
-The Azure Advisor system uses telemetry to issue performance and reliability recommendations for your MySQL database.
-Advisor recommendations are split among our MySQL database offerings:
-* Azure Database for MySQL - Single Server
-* Azure Database for MySQL - Flexible Server
-
-Some recommendations are common to multiple product offerings, while other recommendations are based on product-specific optimizations.
-## Where can I view my recommendations?
-Recommendations are available from the **Overview** navigation sidebar in the Azure portal. A preview will appear as a banner notification, and details can be viewed in the **Notifications** section located just below the resource usage graphs.
--
-## Recommendation types
-Azure Database for MySQL prioritizes the following types of recommendations:
-* **Performance**: To improve the speed of your MySQL server. This includes CPU usage, memory pressure, connection pooling, disk utilization, and product-specific server parameters. For more information, see [Advisor Performance recommendations](../advisor/advisor-performance-recommendations.md).
-* **Reliability**: To ensure and improve the continuity of your business-critical databases. This includes storage limit and connection limit recommendations. For more information, see [Advisor Reliability recommendations](../advisor/advisor-high-availability-recommendations.md).
-* **Cost**: To optimize and reduce your overall Azure spending. This includes server right-sizing recommendations. For more information, see [Advisor Cost recommendations](../advisor/advisor-cost-recommendations.md).
-
-## Understanding your recommendations
-* **Daily schedule**: For Azure MySQL databases, we check server telemetry and issue recommendations on a daily schedule. If you make a change to your server configuration, existing recommendations will remain visible until we re-examine telemetry on the following day.
-* **Performance history**: Some of our recommendations are based on performance history. These recommendations will only appear after a server has been operating with the same configuration for 7 days. This allows us to detect patterns of heavy usage (e.g. high CPU activity or high connection volume) over a sustained time period. If you provision a new server or change to a new vCore configuration, these recommendations will be paused temporarily. This prevents legacy telemetry from triggering recommendations on a newly reconfigured server. However, this also means that performance history-based recommendations may not be identified immediately.
-
-## Next steps
-For more information, see [Azure Advisor Overview](../advisor/advisor-overview.md).
mysql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-backup.md
- Title: Backup and restore - Azure Database for MySQL
-description: Learn about automatic backups and restoring your Azure Database for MySQL server.
-
-Previously updated: 3/27/2020
-
-# Backup and restore in Azure Database for MySQL
--
-Azure Database for MySQL automatically creates server backups and stores them in user-configured locally redundant or geo-redundant storage. Backups can be used to restore your server to a point-in-time. Backup and restore are an essential part of any business continuity strategy because they protect your data from accidental corruption or deletion.
-
-## Backups
-
-Azure Database for MySQL takes backups of the data files and the transaction log. These backups allow you to restore a server to any point-in-time within your configured backup retention period. The default backup retention period is seven days. You can [optionally configure it](howto-restore-server-portal.md#set-backup-configuration) up to 35 days. All backups are encrypted using AES 256-bit encryption.
-
-These backup files are not user-exposed and cannot be exported. These backups can only be used for restore operations in Azure Database for MySQL. You can use [mysqldump](concepts-migrate-dump-restore.md) to copy a database.
-
-The backup type and frequency depend on the backend storage option for the server.
-
-### Backup type and frequency
-
-#### Basic storage servers
-
-Basic storage is the backend storage option that supports [Basic tier servers](concepts-pricing-tiers.md). Backups on Basic storage servers are snapshot-based. A full database snapshot is performed daily. There are no differential backups performed for Basic storage servers; all snapshot backups are full database backups only.
-
-Transaction log backups occur every five minutes.
-
-#### General purpose storage v1 servers (supports up to 4-TB storage)
-
-General purpose storage is the backend storage option that supports [General Purpose](concepts-pricing-tiers.md) and [Memory Optimized tier](concepts-pricing-tiers.md) servers. For servers with general purpose storage up to 4 TB, full backups occur once every week. Differential backups occur twice a day. Transaction log backups occur every five minutes. Backups on general purpose storage up to 4 TB aren't snapshot-based and consume IO bandwidth at the time of backup. For large databases (> 1 TB) on 4-TB storage, we recommend that you consider one of the following options:
-
-- Provisioning more IOPS to account for backup IOs, OR
-- Alternatively, migrate to general purpose storage that supports up to 16-TB storage if the underlying storage infrastructure is available in your preferred [Azure regions](./concepts-pricing-tiers.md#storage). There is no additional cost for general purpose storage that supports up to 16-TB storage. For assistance with migration to 16-TB storage, please open a support ticket from the Azure portal.
-
-#### General purpose storage v2 servers (supports up to 16-TB storage)
-
-In a subset of [Azure regions](./concepts-pricing-tiers.md#storage), all newly provisioned servers can support general purpose storage up to 16 TB. In other words, 16-TB storage is the default general purpose storage for all the [regions](concepts-pricing-tiers.md#storage) where it's supported. Backups on these 16-TB storage servers are snapshot-based. The first snapshot backup is scheduled immediately after a server is created. Snapshot backups are taken once daily. Transaction log backups occur every five minutes.
-
-For more information about Basic and General purpose storage, refer to the [storage documentation](./concepts-pricing-tiers.md#storage).
-
-### Backup retention
-
-Backups are retained based on the backup retention period setting on the server. You can select a retention period of 7 to 35 days. The default retention period is 7 days. You can set the retention period during server creation or later by updating the backup configuration using [Azure portal](./howto-restore-server-portal.md#set-backup-configuration) or [Azure CLI](./howto-restore-server-cli.md#set-backup-configuration).
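-
-For example, a minimal Azure CLI sketch (with placeholder resource group and server names) that updates the retention period to 14 days:
-
-```azurecli
-# Placeholder names; sets the backup retention period to 14 days.
-az mysql server update \
-    --resource-group myresourcegroup \
-    --name mydemoserver \
-    --backup-retention 14
-```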
-
-The backup retention period governs how far back in time a point-in-time restore can go, since it's based on the backups available. The backup retention period can also be treated as a recovery window from a restore perspective. All backups required to perform a point-in-time restore within the backup retention period are retained in backup storage. For example, if the backup retention period is set to 7 days, the recovery window is the last 7 days. In this scenario, all the backups required to restore the server in the last 7 days are retained. With a backup retention window of seven days:
-
-- General purpose storage v1 servers (supporting up to 4-TB storage) will retain up to 2 full database backups, all the differential backups, and transaction log backups performed since the earliest full database backup.
-- General purpose storage v2 servers (supporting up to 16-TB storage) will retain the full database snapshots and transaction log backups in the last 8 days.
-
-#### Long-term retention
-
-Long-term retention of backups beyond 35 days is currently not natively supported by the service. You have the option to use mysqldump to take backups and store them for long-term retention. Our support team has published a [step-by-step article](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/automate-backups-of-your-azure-database-for-mysql-server-to/ba-p/1791157) that shows how you can achieve this.
-
-### Backup redundancy options
-
-Azure Database for MySQL provides the flexibility to choose between locally redundant or geo-redundant backup storage in the General Purpose and Memory Optimized tiers. When the backups are stored in geo-redundant backup storage, they are not only stored within the region in which your server is hosted, but are also replicated to a [paired data center](../availability-zones/cross-region-replication-azure.md). This geo-redundancy provides better protection and ability to restore your server in a different region in the event of a disaster. The Basic tier only offers locally redundant backup storage.
-
-> [!NOTE]
->For the following regions - Central India, France Central, UAE North, South Africa North - General purpose storage v2 is in Public Preview. If you create a source server on General purpose storage v2 (supporting up to 16-TB storage) in the regions mentioned above, enabling geo-redundant backup isn't supported.
-
-#### Moving from locally redundant to geo-redundant backup storage
-
-Configuring locally redundant or geo-redundant storage for backup is only allowed during server creation. Once the server is provisioned, you can't change the backup storage redundancy option. To move your backup storage from locally redundant storage to geo-redundant storage, creating a new server and migrating the data using [dump and restore](concepts-migrate-dump-restore.md) is the only supported option.
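-
-As a sketch of that path (all names and the password below are placeholders), the new server must be created with geo-redundant backup enabled before you migrate the data into it:
-
-```azurecli
-# Geo-redundant backup can only be chosen at server create time.
-az mysql server create \
-    --resource-group myresourcegroup \
-    --name mynewserver \
-    --location westus2 \
-    --admin-user myadmin \
-    --admin-password <server_admin_password> \
-    --sku-name GP_Gen5_2 \
-    --geo-redundant-backup Enabled
-```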
-
-### Backup storage cost
-
-Azure Database for MySQL provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any additional backup storage used is charged in GB per month. For example, if you have provisioned a server with 250 GB of storage, you have 250 GB of additional storage available for server backups at no additional charge. Storage consumed for backups more than 250 GB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/mysql/).
-
-You can use the [Backup Storage used](concepts-monitoring.md) metric in Azure Monitor available via the Azure portal to monitor the backup storage consumed by a server. The Backup Storage used metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained earlier. Heavy transactional activity on the server can cause backup storage usage to increase irrespective of the total database size. For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.
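-
-As an illustrative sketch, you can also query this metric from the Azure CLI; the resource ID below is a placeholder, and `backup_storage_used` is assumed to be the metric name emitted by the service:
-
-```azurecli
-# Placeholder subscription, resource group, and server names.
-az monitor metrics list \
-    --resource "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.DBforMySQL/servers/mydemoserver" \
-    --metric backup_storage_used \
-    --interval PT1H
-```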
-
-The primary means of controlling the backup storage cost is by setting the appropriate backup retention period and choosing the right backup redundancy options to meet your desired recovery goals. You can select a retention period from a range of 7 to 35 days. General Purpose and Memory Optimized servers can choose to have geo-redundant storage for backups.
-
-## Restore
-
-In Azure Database for MySQL, performing a restore creates a new server from the original server's backups and restores all databases contained in the server.
-
-There are two types of restore available:
-
-- **Point-in-time restore** is available with either backup redundancy option and creates a new server in the same region as your original server utilizing the combination of full and transaction log backups.
-- **Geo-restore** is available only if you configured your server for geo-redundant storage and it allows you to restore your server to a different region utilizing the most recent backup taken.
-
-The estimated time for the recovery of the server depends on several factors:
-* The size of the databases
-* The number of transaction logs involved
-* The amount of activity that needs to be replayed to recover to the restore point
-* The network bandwidth if the restore is to a different region
-* The number of concurrent restore requests being processed in the target region
-* The presence of a primary key in the tables in the database. For faster recovery, consider adding a primary key to all the tables in your database. To check whether your tables have a primary key, you can use the following query:
-```sql
-select tab.table_schema as database_name, tab.table_name
-from information_schema.tables tab
-left join information_schema.table_constraints tco
-    on tab.table_schema = tco.table_schema
-    and tab.table_name = tco.table_name
-    and tco.constraint_type = 'PRIMARY KEY'
-where tco.constraint_type is null
-    and tab.table_schema not in ('mysql', 'information_schema', 'performance_schema', 'sys')
-    and tab.table_type = 'BASE TABLE'
-order by tab.table_schema, tab.table_name;
-```
-For a large or very active database, the restore might take several hours. If there is a prolonged outage in a region, it's possible that a high number of geo-restore requests will be initiated for disaster recovery. When there are many requests, the recovery time for individual databases can increase. Most database restores finish in less than 12 hours.
-
-> [!IMPORTANT]
-> Deleted servers can be restored only within **five days** of deletion, after which the backups are deleted. The database backup can be accessed and restored only from the Azure subscription hosting the server. To restore a dropped server, refer to the [documented steps](howto-restore-dropped-server.md). To protect server resources post deployment from accidental deletion or unexpected changes, administrators can use [management locks](../azure-resource-manager/management/lock-resources.md).
-
-### Point-in-time restore
-
-Independent of your backup redundancy option, you can perform a restore to any point in time within your backup retention period. A new server is created in the same Azure region as the original server. It is created with the original server's configuration for the pricing tier, compute generation, number of vCores, storage size, backup retention period, and backup redundancy option.
-
-> [!NOTE]
-> There are two server parameters that are reset to default values (and aren't copied over from the primary server) after the restore operation:
->
-> - time_zone - This value is set to the DEFAULT value **SYSTEM**
-> - event_scheduler - The event_scheduler is set to **OFF** on the restored server
->
-> You'll need to set these server parameters again by reconfiguring the [server parameters](howto-server-parameters.md)
-
-Point-in-time restore is useful in multiple scenarios. For example, when a user accidentally deletes data, drops an important table or database, or when an application accidentally overwrites good data with bad data because of an application defect.
-
-You may need to wait for the next transaction log backup to be taken before you can restore to a point in time within the last five minutes.
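-
-A minimal Azure CLI sketch of a point-in-time restore (placeholder names and timestamp; the time must be within your retention period and expressed in UTC):
-
-```azurecli
-# Restores mydemoserver to the given UTC point in time as a new server.
-az mysql server restore \
-    --resource-group myresourcegroup \
-    --name mydemoserver-restored \
-    --source-server mydemoserver \
-    --restore-point-in-time "2021-03-13T13:59:00Z"
-```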
-
-### Geo-restore
-
-You can restore a server to another Azure region where the service is available if you have configured your server for geo-redundant backups.
-- General purpose storage v1 servers (supporting up to 4-TB storage) can be restored to the geo-paired region, or to any Azure region that supports the Azure Database for MySQL Single Server service.
-- General purpose storage v2 servers (supporting up to 16-TB storage) can only be restored to Azure regions that support the General purpose storage v2 server infrastructure.
-Review [Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md#storage) for the list of supported regions.
-
-Geo-restore is the default recovery option when your server is unavailable because of an incident in the region where the server is hosted. If a large-scale incident in a region results in unavailability of your database application, you can restore a server from the geo-redundant backups to a server in any other region. Geo-restore utilizes the most recent backup of the server. There is a delay between when a backup is taken and when it is replicated to a different region. This delay can be up to an hour, so, if a disaster occurs, there can be up to one hour of data loss.
-
-> [!IMPORTANT]
->If a geo-restore is performed for a newly created server, the initial backup synchronization may take more than 24 hours depending on the data size, because the initial full snapshot backup copy takes much longer. Subsequent snapshot backups are incremental copies, so restores are faster 24 hours after server creation. If you're evaluating geo-restores to define your RTO, we recommend that you evaluate geo-restore **only after 24 hours** of server creation for better estimates.
-
-During geo-restore, the server configurations that can be changed include compute generation, vCore, backup retention period, and backup redundancy options. Changing pricing tier (Basic, General Purpose, or Memory Optimized) or storage size during geo-restore is not supported.
-
-The estimated time of recovery depends on several factors including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time. The recovery time is usually less than 12 hours.
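-
-A minimal Azure CLI sketch of a geo-restore (placeholder names; the target location must be a region that supports the server's storage type):
-
-```azurecli
-# Creates a new server in West US 2 from the geo-redundant backups
-# of mydemoserver.
-az mysql server georestore \
-    --resource-group mytargetgroup \
-    --name mydemoserver-georestored \
-    --source-server mydemoserver \
-    --location westus2
-```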
-
-### Perform post-restore tasks
-
-After a restore from either recovery mechanism, you should perform the following tasks to get your users and applications back up and running:
-
-- If the new server is meant to replace the original server, redirect clients and client applications to the new server.
-- Ensure appropriate VNet rules are in place for users to connect. These rules aren't copied over from the original server (see the sketch after this list).
-- Ensure appropriate logins and database-level permissions are in place.
-- Configure alerts, as appropriate.
-
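-For example, a sketch of re-creating a firewall rule on the restored server (placeholder names and IP range):
-
-```azurecli
-# Firewall and VNet rules aren't copied from the original server,
-# so re-create them on the restored server.
-az mysql server firewall-rule create \
-    --resource-group myresourcegroup \
-    --server-name mydemoserver-restored \
-    --name AllowAppClients \
-    --start-ip-address 203.0.113.0 \
-    --end-ip-address 203.0.113.255
-```
-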
-## Next steps
-
-- To learn more about business continuity, see the [business continuity overview](concepts-business-continuity.md).
-- To restore to a point-in-time using the Azure portal, see [restore server to a point-in-time using the Azure portal](howto-restore-server-portal.md).
-- To restore to a point-in-time using Azure CLI, see [restore server to a point-in-time using CLI](howto-restore-server-cli.md).
mysql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-business-continuity.md
- Title: Business continuity - Azure Database for MySQL
-description: Learn about business continuity (point-in-time restore, data center outage, geo-restore) when using Azure Database for MySQL service.
-
-Previously updated: 7/7/2020
-
-# Overview of business continuity with Azure Database for MySQL - Single Server
--
-This article describes the capabilities that Azure Database for MySQL provides for business continuity and disaster recovery. Learn about options for recovering from disruptive events that could cause data loss or cause your database and application to become unavailable. Learn what to do when a user or application error affects data integrity, an Azure region has an outage, or your application requires maintenance.
-
-## Features that you can use to provide business continuity
-
-As you develop your business continuity plan, you need to understand the maximum acceptable time before the application fully recovers after the disruptive event - this is your Recovery Time Objective (RTO). You also need to understand the maximum amount of recent data updates (time interval) the application can tolerate losing when recovering after the disruptive event - this is your Recovery Point Objective (RPO).
-
-Azure Database for MySQL Single Server provides business continuity and disaster recovery features that include geo-redundant backups with the ability to initiate geo-restore, and deploying read replicas in a different region. Each has different characteristics for the recovery time and the potential data loss. With the [Geo-restore](concepts-backup.md) feature, a new server is created using the backup data that is replicated from another region. The overall time it takes to restore and recover depends on the size of the database and the amount of logs to recover. The overall time to establish the server varies from a few minutes to a few hours. With [read replicas](concepts-read-replicas.md), transaction logs from the primary are asynchronously streamed to the replica. In the event of a primary database outage due to a zone-level or a region-level fault, failing over to the replica provides a shorter RTO and reduced data loss.
-
-> [!NOTE]
-> The lag between the primary and the replica depends on the latency between the sites, the amount of data to be transmitted and most importantly on the write workload of the primary server. Heavy write workloads can generate significant lag.
->
-> Because of the asynchronous nature of replication used for read replicas, they **should not** be considered a high availability (HA) solution, since higher lag can mean higher RTO and RPO. Only for workloads where the lag remains small through the peak and non-peak times of the workload can read replicas act as an HA alternative. Otherwise, read replicas are intended for true read-scale for read-heavy workloads and for disaster recovery (DR) scenarios.
-
-The following table compares RTO and RPO in a **typical workload** scenario:
-
-| **Capability** | **Basic** | **General Purpose** | **Memory optimized** |
-| :-: | :-: | :-: | :-: |
-| Point in Time Restore from backup | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min| Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min |
-| Geo-restore from geo-replicated backups | Not supported | RTO - Varies <br/>RPO < 1 h | RTO - Varies <br/>RPO < 1 h |
-| Read replicas | RTO - Minutes* <br/>RPO < 5 min* | RTO - Minutes* <br/>RPO < 5 min*| RTO - Minutes* <br/>RPO < 5 min*|
-
- \* RTO and RPO **can be much higher** in some cases depending on various factors including latency between sites, the amount of data to be transmitted, and importantly primary database write workload.
-
-## Recover a server after a user or application error
-
-You can use the service's backups to recover a server from various disruptive events. A user may accidentally delete some data, inadvertently drop an important table, or even drop an entire database. An application might accidentally overwrite good data with bad data due to an application defect, and so on.
-
-You can perform a point-in-time-restore to create a copy of your server to a known good point in time. This point in time must be within the backup retention period you have configured for your server. After the data is restored to the new server, you can either replace the original server with the newly restored server or copy the needed data from the restored server into the original server.
-
-> [!IMPORTANT]
-> Deleted servers can be restored only within **five days** of deletion, after which the backups are deleted. The database backup can be accessed and restored only from the Azure subscription hosting the server. To restore a dropped server, refer to the [documented steps](howto-restore-dropped-server.md). To protect server resources post deployment from accidental deletion or unexpected changes, administrators can use [management locks](../azure-resource-manager/management/lock-resources.md).
-
-## Recover from an Azure regional data center outage
-
-Although rare, an Azure data center can have an outage. When an outage occurs, it causes a business disruption that might only last a few minutes, but could last for hours.
-
-One option is to wait for your server to come back online when the data center outage is over. This works for applications that can afford to have the server offline for some period of time, for example a development environment. When a data center has an outage, you don't know how long the outage might last, so this option only works if you don't need your server for a while.
-
-## Geo-restore
-
-The geo-restore feature restores the server using geo-redundant backups. The backups are hosted in your server's [paired region](../availability-zones/cross-region-replication-azure.md). These backups are accessible even when the region your server is hosted in is offline. You can restore from these backups to any other region and bring your server back online. Learn more about geo-restore from the [backup and restore concepts article](concepts-backup.md).
-
-> [!IMPORTANT]
-> Geo-restore is only possible if you provisioned the server with geo-redundant backup storage. If you wish to switch from locally redundant to geo-redundant backups for an existing server, you must take a dump using mysqldump of your existing server and restore it to a newly created server configured with geo-redundant backups.
-
-## Cross-region read replicas
-
-You can use cross-region read replicas to enhance your business continuity and disaster recovery planning. Read replicas are updated asynchronously using MySQL's binary log replication technology. Learn more about read replicas, available regions, and how to fail over from the [read replicas concepts article](concepts-read-replicas.md).
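-
-As a minimal sketch (placeholder names), you can create a cross-region read replica with the Azure CLI:
-
-```azurecli
-# Creates a read replica of mydemoserver in East US.
-az mysql server replica create \
-    --resource-group myresourcegroup \
-    --name mydemoserver-replica \
-    --source-server mydemoserver \
-    --location eastus
-```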
-
-## FAQ
-
-### Where does Azure Database for MySQL store customer data?
-By default, Azure Database for MySQL doesn't move or store customer data out of the region it is deployed in. However, customers can optionally choose to enable [geo-redundant backups](concepts-backup.md#backup-redundancy-options) or create a [cross-region read replica](concepts-read-replicas.md#cross-region-replication) for storing data in another region.
-
-## Next steps
-
-- Learn more about the [automated backups in Azure Database for MySQL](concepts-backup.md).
-- Learn how to restore using [the Azure portal](howto-restore-server-portal.md) or [the Azure CLI](howto-restore-server-cli.md).
-- Learn about [read replicas in Azure Database for MySQL](concepts-read-replicas.md).
mysql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-certificate-rotation.md
- Title: Certificate rotation for Azure Database for MySQL
-description: Learn about the upcoming changes of root certificate changes that will affect Azure Database for MySQL
-
-Previously updated: 04/08/2021
-
-# Understanding the changes in the Root CA change for Azure Database for MySQL Single Server
--
-Azure Database for MySQL Single Server successfully completed the root certificate change on **February 15, 2021 (02/15/2021)** as part of standard maintenance and security best practices. This article gives you more details about the changes, the resources affected, and the steps needed to ensure that your application maintains connectivity to your database server.
-
-> [!NOTE]
-> This article applies to [Azure Database for MySQL - Single Server](single-server-overview.md) ONLY. For [Azure Database for MySQL - Flexible Server](flexible-server/overview.md), the certificate needed to communicate over SSL is [DigiCert Global Root CA](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem)
->
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
->
-
-#### Why is a root certificate update required?
-
-Azure Database for MySQL users can only use the predefined certificate to connect to their MySQL server, which is located [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem). However, [Certificate Authority (CA) Browser forum](https://cabforum.org/) recently published reports of multiple certificates issued by CA vendors to be non-compliant.
-
-Per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs, and signed by CA certificates from those compliant CAs. Since Azure Database for MySQL used one of these non-compliant certificates, we needed to rotate the certificate to the compliant version to minimize the potential threat to your MySQL servers.
-
-The new certificate is rolled out and in effect as of February 15, 2021 (02/15/2021).
-
-#### What change was performed on February 15, 2021 (02/15/2021)?
-
-On February 15, 2021, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) was replaced with a **compliant version** of the same [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) to ensure existing customers don't need to change anything and there's no impact to their connections to the server. During this change, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) was **not replaced** with [DigiCertGlobalRootG2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and that change is deferred to allow more time for customers to make the change.
-
-#### Do I need to make any changes on my client to maintain connectivity?
-
-No change is required on client side. If you followed our previous recommendation below, you can continue to connect as long as **BaltimoreCyberTrustRoot certificate is not removed** from the combined CA certificate. **To maintain connectivity, we recommend that you retain the BaltimoreCyberTrustRoot in your combined CA certificate until further notice.**
-
-###### Previous recommendation
-
-To avoid interruption of your application's availability as a result of certificates being unexpectedly revoked, or to update a certificate that has been revoked, use the following steps. The idea is to create a new *.pem* file that combines the current certificate and the new one, so that during SSL certificate validation, either of the allowed values can be used. Refer to the following steps:
-
-1. Download BaltimoreCyberTrustRoot & DigiCertGlobalRootG2 Root CA from the following links:
-
- * [https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem)
- * [https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem)
-
-2. Generate a combined CA certificate store with both **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** certificates included.
-
- * For Java (MySQL Connector/J) users, execute:
-
- ```console
- keytool -importcert -alias MySQLServerCACert -file D:\BaltimoreCyberTrustRoot.crt.pem -keystore truststore -storepass password -noprompt
- ```
-
- ```console
- keytool -importcert -alias MySQLServerCACert2 -file D:\DigiCertGlobalRootG2.crt.pem -keystore truststore -storepass password -noprompt
- ```
-
- Then replace the original keystore file with the new generated one:
-
- * System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file");
- * System.setProperty("javax.net.ssl.trustStorePassword","password");
-
- * For .NET (MySQL Connector/NET, MySQLConnector) users, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in Windows Certificate Store, Trusted Root Certification Authorities. If any certificates don't exist, import the missing certificate.
-
- :::image type="content" source="media/overview/netconnecter-cert.png" alt-text="Azure Database for MySQL .NET cert diagram":::
-
- * For .NET users on Linux using SSL_CERT_DIR, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in the directory indicated by SSL_CERT_DIR. If any certificates don't exist, create the missing certificate file.
-
- * For other (MySQL Client/MySQL Workbench/C/C++/Go/Python/Ruby/PHP/NodeJS/Perl/Swift) users, you can merge two CA certificate files into the following format:
-
- ```
-    -----BEGIN CERTIFICATE-----
-    (Root CA1: BaltimoreCyberTrustRoot.crt.pem)
-    -----END CERTIFICATE-----
-    -----BEGIN CERTIFICATE-----
-    (Root CA2: DigiCertGlobalRootG2.crt.pem)
-    -----END CERTIFICATE-----
- ```
-
-3. Replace the original root CA pem file with the combined root CA file and restart your application/client.
-
- In the future, after the new certificate is deployed on the server side, you can change your CA pem file to DigiCertGlobalRootG2.crt.pem.
-
-> [!NOTE]
-> Please don't drop or alter **Baltimore certificate** until the cert change is made. We'll send a communication after the change is done, and then it will be safe to drop the **Baltimore certificate**.
-
-#### Why was BaltimoreCyberTrustRoot certificate not replaced to DigiCertGlobalRootG2 during this change on February 15, 2021?
-
-We evaluated the customer readiness for this change and realized that many customers were looking for extra lead time to manage this change. To provide more lead time to customers for readiness, we decided to defer the certificate change to DigiCertGlobalRootG2 for at least a year, providing sufficient lead time to the customers and end users.
-
-Our recommendation is to follow the aforementioned steps to create a combined certificate and connect to your server, but don't remove the BaltimoreCyberTrustRoot certificate until we send a communication to remove it.
-
-#### What if we removed the BaltimoreCyberTrustRoot certificate?
-
-You'll start to encounter connectivity errors while connecting to your Azure Database for MySQL server. You'll need to [configure SSL](howto-configure-ssl.md) with the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate again to maintain connectivity.
-
-## Frequently asked questions
-
-#### If I'm not using SSL/TLS, do I still need to update the root CA?
-
- No actions are required if you aren't using SSL/TLS.
-
-#### If I'm using SSL/TLS, do I need to restart my database server to update the root CA?
-
-No, you don't need to restart the database server to start using the new certificate. This root certificate is a client-side change and the incoming client connections need to use the new certificate to ensure that they can connect to the database server.
-
-#### How do I know if I'm using SSL/TLS with root certificate verification?
-
-You can identify whether your connections verify the root certificate by reviewing your connection string.
-
-* If your connection string includes `sslmode=verify-ca` or `sslmode=verify-identity`, you need to update the certificate.
-* If your connection string includes `sslmode=disable`, `sslmode=allow`, `sslmode=prefer`, or `sslmode=require`, you don't need to update certificates.
-* If your connection string doesn't specify sslmode, you don't need to update certificates.
-
-If you're using a client that abstracts the connection string away, review the client's documentation to understand whether it verifies certificates.
-
-#### What is the impact of using App Service with Azure Database for MySQL?
-
-For Azure App Services connecting to Azure Database for MySQL, there are two possible scenarios depending on how you're using SSL with your application.
-
-* This new certificate has been added to App Service at platform level. If you're using the SSL certificates included on App Service platform in your application, then no action is needed. This is the most common scenario.
-* If you're explicitly including the path to SSL cert file in your code, then you would need to download the new cert and produce a combined certificate as mentioned above and use the certificate file. A good example of this scenario is when you use custom containers in App Service as shared in the [App Service documentation](../app-service/tutorial-multi-container-app.md#configure-database-variables-in-wordpress). This is an uncommon scenario but we have seen some users using this.
-
-#### What is the impact of using Azure Kubernetes Services (AKS) with Azure Database for MySQL?
-
-If you're trying to connect to Azure Database for MySQL using Azure Kubernetes Service (AKS), it's similar to access from a dedicated customer host environment. Refer to the steps [here](../aks/ingress-own-tls.md).
-
-#### What is the impact of using Azure Data Factory to connect to Azure Database for MySQL?
-
-For a connector using Azure Integration Runtime, the connector uses certificates in the Windows Certificate Store in the Azure-hosted environment. These certificates are already compatible to the newly applied certificates, and therefore no action is needed.
-
-For a connector using Self-hosted Integration Runtime where you explicitly include the path to SSL cert file in your connection string, you'll need to download the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and update the connection string to use it.
-
-#### Do I need to plan a database server maintenance downtime for this change?
-
-No. Since the change is only on the client side to connect to the database server, there's no maintenance downtime needed for the database server for this change.
-
-#### If I create a new server after February 15, 2021 (02/15/2021), will I be impacted?
-
-For servers created after February 15, 2021 (02/15/2021), you will continue to use the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) for your applications to connect using SSL.
-
-#### How often does Microsoft update their certificates or what is the expiry policy?
-
-The certificates used by Azure Database for MySQL are provided by trusted Certificate Authorities (CAs), so support for these certificates is tied to their support by the CA. The [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate is scheduled to expire in 2025, so Microsoft will need to perform a certificate change before the expiry. Also, if there are unforeseen bugs in these predefined certificates, Microsoft will need to rotate the certificate as soon as possible, similar to the change performed on February 15, 2021, to ensure the service is secure and compliant at all times.
-
-#### If I'm using read replicas, do I need to perform this update only on source server or the read replicas?
-
-Since this update is a client-side change, if clients are used to read data from the replica server, you'll need to apply the changes to those clients as well.
-
-#### If I'm using Data-in replication, do I need to perform any action?
-
-If you're using [Data-in replication](concepts-data-in-replication.md) to connect to Azure Database for MySQL, there are two things to consider:
-
-* If the data-replication is from a virtual machine (on-premises or an Azure virtual machine) to Azure Database for MySQL, check whether SSL is being used to create the replica. Run **SHOW SLAVE STATUS** and check the following settings.
-
-    ```console
- Master_SSL_Allowed : Yes
- Master_SSL_CA_File : ~\azure_mysqlservice.pem
- Master_SSL_CA_Path :
- Master_SSL_Cert : ~\azure_mysqlclient_cert.pem
- Master_SSL_Cipher :
- Master_SSL_Key : ~\azure_mysqlclient_key.pem
- ```
-
- If you see that the certificate is provided for the CA_file, SSL_Cert, and SSL_Key, you'll need to update the file by adding the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and create a combined cert file.
-
-* If the data-replication is between two Azure Database for MySQL servers, then you'll need to reset the replica by executing **CALL mysql.az_replication_change_master** and provide the new dual root certificate as the last parameter [master_ssl_ca](howto-data-in-replication.md#link-source-and-replica-servers-to-start-data-in-replication).
-
-#### Is there a server-side query to determine whether SSL is being used?
-
-To verify whether you're using an SSL connection to connect to the server, refer to [SSL verification](howto-configure-ssl.md#step-4-verify-the-ssl-connection).
-
-#### Is there an action needed if I already have the DigiCertGlobalRootG2 in my certificate file?
-
-No. There's no action needed if your certificate file already has the **DigiCertGlobalRootG2**.
-
-#### What if I have further questions?
-
-For questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforMySQL@service.microsoft.com). If you have a support plan and you need technical help, [contact us](mailto:AzureDatabaseforMySQL@service.microsoft.com).
mysql Concepts Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-compatibility.md
- Title: Driver and tools compatibility - Azure Database for MySQL
-description: This article describes the MySQL drivers and management tools that are compatible with Azure Database for MySQL.
-
-Previously updated: 11/4/2021
-
-# MySQL drivers and management tools compatible with Azure Database for MySQL
--
-This article describes the drivers and management tools that are compatible with Azure Database for MySQL Single Server.
-
-> [!NOTE]
-> This article is only applicable to Azure Database for MySQL Single Server to ensure drivers are compatible with [connectivity architecture](concepts-connectivity-architecture.md) of Single Server service. [Azure Database for MySQL Flexible Server](./flexible-server/overview.md) is compatible with all the drivers and tools supported and compatible with MySQL community edition.
-
-## MySQL Drivers
-Azure Database for MySQL uses the world's most popular community edition of the MySQL database. As such, it's compatible with a wide variety of programming languages and drivers. The goal is to support the three most recent versions of MySQL drivers, and efforts with authors from the open-source community to constantly improve the functionality and usability of MySQL drivers continue. A list of drivers that have been tested and found to be compatible with Azure Database for MySQL 5.6 and 5.7 is provided in the following table:
-
-| **Programming Language** | **Driver** | **Links** | **Compatible Versions** | **Incompatible Versions** | **Notes** |
-| :-- | : | :-- | :- | : | :-- |
-| PHP | mysqli, pdo_mysql, mysqlnd | https://secure.php.net/downloads.php | 5.5, 5.6, 7.x | 5.3 | For PHP 7.0 connection with SSL MySQLi, add MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT in the connection string. <br> ```mysqli_real_connect($conn, $host, $username, $password, $db_name, 3306, NULL, MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT);```<br> PDO set: ```PDO::MYSQL_ATTR_SSL_VERIFY_SERVER_CERT``` option to false.|
-| .NET | Async MySQL Connector for .NET | https://github.com/mysql-net/MySqlConnector <br> [Installation package from NuGet](https://www.nuget.org/packages/MySqlConnector/) | 0.27 and after | 0.26.5 and before | |
-| .NET | MySQL Connector/NET | https://github.com/mysql/mysql-connector-net | 6.6.3, 7.0, 8.0 | | An encoding bug may cause connections to fail on some non-UTF8 Windows systems. |
-| Node.js | mysqljs | https://github.com/mysqljs/mysql/ <br> Installation package from NPM:<br> Run `npm install mysql` from NPM | 2.15 | 2.14.1 and before | |
-| Node.js | node-mysql2 | https://github.com/sidorares/node-mysql2 | 1.3.4+ | | |
-| Go | Go MySQL Driver | https://github.com/go-sql-driver/mysql/releases | 1.3, 1.4 | 1.2 and before | Use `allowNativePasswords=true` in the connection string for version 1.3. Version 1.4 contains a fix and `allowNativePasswords=true` is no longer required. |
-| Python | MySQL Connector/Python | https://pypi.python.org/pypi/mysql-connector-python | 1.2.3, 2.0, 2.1, 2.2, use 8.0.16+ with MySQL 8.0 | 1.2.2 and before | |
-| Python | PyMySQL | https://pypi.org/project/PyMySQL/ | 0.7.11, 0.8.0, 0.8.1, 0.9.3+ | 0.9.0 - 0.9.2 (regression in web2py) | |
-| Java | MariaDB Connector/J | https://downloads.mariadb.org/connector-java/ | 2.1, 2.0, 1.6 | 1.5.5 and before | |
-| Java | MySQL Connector/J | https://github.com/mysql/mysql-connector-j | 5.1.21+, use 8.0.17+ with MySQL 8.0 | 5.1.20 and below | |
-| C | MySQL Connector/C (libmysqlclient) | https://dev.mysql.com/doc/c-api/5.7/en/c-api-implementations.html | 6.0.2+ | | |
-| C | MySQL Connector/ODBC (myodbc) | https://github.com/mysql/mysql-connector-odbc | 3.51.29+ | | |
-| C++ | MySQL Connector/C++ | https://github.com/mysql/mysql-connector-cpp | 1.1.9+ | 1.1.3 and below | |
-| C++ | MySQL++| https://github.com/tangentsoft/mysqlpp | 3.2.3+ | | |
-| Ruby | mysql2 | https://github.com/brianmario/mysql2 | 0.4.10+ | | |
-| R | RMySQL | https://github.com/rstats-db/RMySQL | 0.10.16+ | | |
-| Swift | mysql-swift | https://github.com/novi/mysql-swift | 0.7.2+ | | |
-| Swift | vapor/mysql | https://github.com/vapor/mysql-kit | 2.0.1+ | | |
-
-## Management Tools
-The compatibility advantage extends to database management tools as well. Your existing tools should continue to work with Azure Database for MySQL, as long as the database manipulation operates within the confines of user permissions. Four common database management tools that have been tested and found to be compatible with Azure Database for MySQL 5.6 and 5.7 are listed in the following table:
-
-| | **MySQL Workbench 6.x and up** | **Navicat 12** | **PHPMyAdmin 4.x and up** | **dbForge Studio for MySQL 9.0** |
-| :- | :-- | :- | :-| :- |
-| **Create, Update, Read, Write, Delete** | X | X | X | X |
-| **SSL Connection** | X | X | X | X |
-| **SQL Query Auto Completion** | X | X | | X |
-| **Import and Export Data** | X | X | X | X |
-| **Export to Multiple Formats** | X | X | X | X |
-| **Backup and Restore** | | X | | X |
-| **Display Server Parameters** | X | X | X | X |
-| **Display Client Connections** | X | X | X | X |
-
-## Next steps
-
-- [Troubleshoot connection issues to Azure Database for MySQL](howto-troubleshoot-common-connection-issues.md)
mysql Concepts Connection Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-connection-libraries.md
- Title: Connection libraries - Azure Database for MySQL
-description: This article lists each library or driver that client programs can use when connecting to Azure Database for MySQL.
-
-Previously updated: 8/3/2020
-
-# Connection libraries for Azure Database for MySQL
--
-This article lists each library or driver that client programs can use when connecting to Azure Database for MySQL.
-
-## Client interfaces
-MySQL offers standard database driver connectivity for using MySQL with applications and tools that are compatible with industry standards ODBC and JDBC. Any system that works with ODBC or JDBC can use MySQL.
-
-| **Language** | **Platform** | **Additional Resource** | **Download** |
-| :-- | :| :--| :|
-| PHP | Windows, Linux | [MySQL native driver for PHP - mysqlnd](https://dev.mysql.com/downloads/connector/php-mysqlnd/) | [Download](https://secure.php.net/downloads.php) |
-| ODBC | Windows, Linux, macOS X, and Unix platforms | [MySQL Connector/ODBC Developer Guide](https://dev.mysql.com/doc/connector-odbc/en/) | [Download](https://dev.mysql.com/downloads/connector/odbc/) |
-| ADO.NET | Windows | [MySQL Connector/Net Developer Guide](https://dev.mysql.com/doc/connector-net/en/) | [Download](https://dev.mysql.com/downloads/connector/net/) |
-| JDBC | Platform independent | [MySQL Connector/J 5.1 Developer Guide](https://dev.mysql.com/doc/connector-j/5.1/en/) | [Download](https://dev.mysql.com/downloads/connector/j/) |
-| Node.js | Windows, Linux, macOS X | [sidorares/node-mysql2](https://github.com/sidorares/node-mysql2/tree/master/documentation) | [Download](https://github.com/sidorares/node-mysql2) |
-| Python | Windows, Linux, macOS X | [MySQL Connector/Python Developer Guide](https://dev.mysql.com/doc/connector-python/en/) | [Download](https://dev.mysql.com/downloads/connector/python/) |
-| C++ | Windows, Linux, macOS X | [MySQL Connector/C++ Developer Guide](https://dev.mysql.com/doc/connector-cpp/en/) | [Download](https://dev.mysql.com/downloads/connector/cpp/) |
-| C | Windows, Linux, macOS X | [MySQL Connector/C Developer Guide](https://dev.mysql.com/doc/c-api/8.0/en/) | [Download](https://dev.mysql.com/downloads/connector/c/) |
-| Perl | Windows, Linux, macOS X, and Unix platforms | [DBD::MySQL](https://metacpan.org/pod/DBD::mysql) | [Download](https://metacpan.org/pod/DBD::mysql) |
--
-## Next steps
-Read these quickstarts on how to connect to and query Azure Database for MySQL by using your language of choice:
-
-- [PHP](./connect-php.md)
-- [Java](./connect-java.md)
-- [.NET (C#)](./connect-csharp.md)
-- [Python](./connect-python.md)
-- [Node.JS](./connect-nodejs.md)
-- [Ruby](./connect-ruby.md)
-- [C++](connect-cpp.md)
-- [Go](./connect-go.md)
mysql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-connectivity-architecture.md
- Title: Connectivity architecture - Azure Database for MySQL
-description: Describes the connectivity architecture for your Azure Database for MySQL server.
-
-Previously updated: 10/15/2021
-
-# Connectivity architecture in Azure Database for MySQL
--
-This article explains the Azure Database for MySQL connectivity architecture and how the traffic is directed to your Azure Database for MySQL instance from clients both within and outside Azure.
-
-## Connectivity architecture
-Connection to your Azure Database for MySQL is established through a gateway that is responsible for routing incoming connections to the physical location of your server in our clusters. The following diagram illustrates the traffic flow.
--
-As a client connects to the database, the connection string to the server resolves to the gateway IP address. The gateway listens on the IP address on port 3306. Inside the database cluster, traffic is forwarded to the appropriate Azure Database for MySQL server. Therefore, in order to connect to your server, such as from corporate networks, it's necessary to open up the **client-side firewall to allow outbound traffic to be able to reach our gateways**. Below you can find a complete list of the IP addresses used by our gateways per region.
-
-## Azure Database for MySQL gateway IP addresses
-
-The gateway service is hosted on a group of stateless compute nodes sitting behind an IP address, which your client reaches first when trying to connect to an Azure Database for MySQL server.
-
-As part of ongoing service maintenance, we'll periodically refresh the compute hardware hosting the gateways to ensure we provide the most secure and performant experience. When the gateway hardware is refreshed, a new ring of compute nodes is built out first. This new ring serves the traffic for all the newly created Azure Database for MySQL servers, and it has a different IP address from older gateway rings in the same region to differentiate the traffic. Once the new ring is fully functional, the older gateway hardware serving existing servers is planned for decommissioning. Before decommissioning gateway hardware, customers running their servers and connecting to older gateway rings are notified via email and in the Azure portal three months in advance of decommissioning. The decommissioning of gateways can impact connectivity to your servers if:
-
-* You hard-code the gateway IP addresses in the connection string of your application. This is **not recommended**. You should use the fully qualified domain name (FQDN) of your server, in the format `<servername>.mysql.database.azure.com`, in the connection string for your application.
-* You don't update the newer gateway IP addresses in the client-side firewall to allow outbound traffic to be able to reach our new gateway rings.
-
-The following table lists the gateway IP addresses of the Azure Database for MySQL gateway for all data regions. The most up-to-date information on the gateway IP addresses for each region is maintained in the table below, in which the columns represent the following:
-
-* **Gateway IP addresses:** This column lists the current IP addresses of the gateways hosted on the latest generation of hardware. If you're provisioning a new server, we recommend that you open the client-side firewall to allow outbound traffic for the IP addresses listed in this column.
-* **Gateway IP addresses (decommissioning):** This column lists the IP addresses of the gateways hosted on an older generation of hardware that is currently being decommissioned. If you're provisioning a new server, you can ignore these IP addresses. If you have an existing server, continue to retain the outbound firewall rule for these IP addresses, as we haven't decommissioned them yet. If you drop the firewall rules for these IP addresses, you may get connectivity errors. Instead, proactively add the new IP addresses listed in the Gateway IP addresses column to the outbound firewall rule as soon as you receive the notification for decommissioning. This ensures that when your server is migrated to the latest gateway hardware, there's no interruption in connectivity to your server.
-* **Gateway IP addresses (decommissioned):** This column lists the IP addresses of the gateway rings, which are decommissioned and are no longer in operations. You can safely remove these IP addresses from your outbound firewall rule.
-
-| **Region name** | **Gateway IP addresses** | **Gateway IP addresses (decommissioning)** | **Gateway IP addresses (decommissioned)** |
-|||--|--|
-| Australia Central | 20.36.105.0 | | |
-| Australia Central2 | 20.36.113.0 | | |
-| Australia East | 13.75.149.87, 40.79.161.1 | | |
-| Australia South East | 13.73.109.251, 13.77.49.32, 13.77.48.10 | | |
-| Brazil South | 191.233.201.8, 191.233.200.16 | | 104.41.11.5 |
-| Canada Central | 13.71.168.32|| 40.85.224.249, 52.228.35.221 |
-| Canada East | 40.86.226.166, 52.242.30.154 | | |
-| Central US | 23.99.160.139, 52.182.136.37, 52.182.136.38 | 13.67.215.62 | |
-| China East | 139.219.130.35 | | |
-| China East 2 | 40.73.82.1, 52.130.120.89 | | |
-| China East 3 | 52.131.155.192 | | |
-| China North | 139.219.15.17 | | |
-| China North 2 | 40.73.50.0 | | |
-| China North 3 | 52.131.27.192 | | |
-| East Asia | 13.75.33.20, 52.175.33.150, 13.75.33.21 | | |
-| East US | 40.71.8.203, 40.71.83.113 | 40.121.158.30 | 191.238.6.43 |
-| East US 2 | 40.70.144.38, 52.167.105.38 | 52.177.185.181 | |
-| France Central | 40.79.137.0, 40.79.129.1 | | |
-| France South | 40.79.177.0 | | |
-| Germany Central | 51.4.144.100 | | |
-| Germany North | 51.116.56.0 | | |
-| Germany North East | 51.5.144.179 | | |
-| Germany West Central | 51.116.152.0 | | |
-| India Central | 104.211.96.159 | | |
-| India South | 104.211.224.146 | | |
-| India West | 104.211.160.80 | | |
-| Japan East | 40.79.192.23, 40.79.184.8 | 13.78.61.196 | |
-| Japan West | 191.238.68.11, 40.74.96.6, 40.74.96.7 | 104.214.148.156 | |
-| Korea Central | 52.231.17.13 | 52.231.32.42 | |
-| Korea South | 52.231.145.3, 52.231.151.97 | 52.231.200.86 | |
-| North Central US | 52.162.104.35, 52.162.104.36 | 23.96.178.199 | |
-| North Europe | 52.138.224.6, 52.138.224.7 | 40.113.93.91 | 191.235.193.75 |
-| South Africa North | 102.133.152.0 | | |
-| South Africa West | 102.133.24.0 | | |
-| South Central US | 104.214.16.39, 20.45.120.0 | 13.66.62.124 | 23.98.162.75 |
-| South East Asia | 40.78.233.2, 23.98.80.12 | 104.43.15.0 | |
-| Switzerland North | 51.107.56.0 | | |
-| Switzerland West | 51.107.152.0 | | |
-| UAE Central | 20.37.72.64 | | |
-| UAE North | 65.52.248.0 | | |
-| UK South | 51.140.144.32, 51.105.64.0 | 51.140.184.11 | |
-| UK West | 51.141.8.11 | | |
-| West Central US | 13.78.145.25, 52.161.100.158 | | |
-| West Europe | 13.69.105.208, 104.40.169.187 | 40.68.37.158 | 191.237.232.75 |
-| West US | 13.86.216.212, 13.86.217.212 | 104.42.238.205 | 23.99.34.75 |
-| West US 2 | 13.66.136.195, 13.66.136.192, 13.66.226.202 | | |
-| West US 3 | 20.150.184.2 | | |
-
-## Connection redirection
-
-Azure Database for MySQL supports an additional connection policy, **redirection**, that helps to reduce network latency between client applications and MySQL servers. With redirection, after the initial TCP session is established to the Azure Database for MySQL server, the server returns the backend address of the node hosting the MySQL server to the client. Thereafter, all subsequent packets flow directly to that node, bypassing the gateway. Because packets flow directly to the server, latency is reduced and throughput improves.
-
-This feature is supported in Azure Database for MySQL servers with engine versions 5.6, 5.7, and 8.0.
-
-Support for redirection is available in the PHP [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension, developed by Microsoft and available on [PECL](https://pecl.php.net/package/mysqlnd_azure). See the [configuring redirection](./howto-redirection.md) article for more information on how to use redirection in your applications.
--
-> [!IMPORTANT]
-> Support for redirection in the PHP [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension is currently in preview.
-
-## Frequently asked questions
-
-### What do you need to know about this planned maintenance?
-This is a DNS change only, which makes it transparent to clients. While the IP address for the FQDN is changed on the DNS server, the local DNS cache is refreshed within 5 minutes, automatically, by the operating system. After the local DNS refresh, all new connections connect to the new IP address, and all existing connections remain connected to the old IP address with no interruption until the old IP addresses are fully decommissioned. The old IP address takes roughly three to four weeks to be decommissioned; therefore, the change should have no effect on your client applications.
-
-### What are we decommissioning?
-Only gateway nodes will be decommissioned. When users connect to their servers, the first stop of the connection is a gateway node, before the connection is forwarded to the server. We're decommissioning old gateway rings (not the tenant rings where the server is running); refer to the [connectivity architecture](#connectivity-architecture) section for more clarification.
-
-### How can you validate if your connections are going to old gateway nodes or new gateway nodes?
-Ping your server's FQDN, for example ``ping xxx.mysql.database.azure.com``. If the returned IP address is one of the IPs listed under Gateway IP addresses (decommissioning) in the table above, it means your connection is going through the old gateway. Conversely, if the returned IP address is one of the IPs listed under Gateway IP addresses, it means your connection is going through the new gateway.
-
-You may also test by using [PSPing](/sysinternals/downloads/psping) or TCPPing to reach the database server from your client application over port 3306, and ensure that the returned IP address isn't one of the decommissioning IP addresses. A programmatic version of this check is sketched below.
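-
-As a rough programmatic equivalent of these checks, the following Java sketch resolves the server FQDN and compares the result against a decommissioning list; the host name and the example IP are placeholders drawn from the table above:
-
-```java
-// Sketch: resolve the server FQDN and compare against the gateway IP lists above.
-import java.net.InetAddress;
-import java.util.Set;
-
-public class GatewayCheck {
-    public static void main(String[] args) throws Exception {
-        // Example decommissioning IP for Central US, from the table above; use your region's values.
-        Set<String> decommissioning = Set.of("13.67.215.62");
-        String ip = InetAddress.getByName("xxx.mysql.database.azure.com").getHostAddress();
-        System.out.println(ip + (decommissioning.contains(ip)
-                ? " -> still resolving to an old (decommissioning) gateway"
-                : " -> resolving to a current gateway"));
-    }
-}
-```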
-
-### How do I know when the maintenance is over and will I get another notification when old IP addresses are decommissioned?
-You'll receive an email informing you when we start the maintenance work. The maintenance can take up to one month, depending on the number of servers we need to migrate in all regions. Prepare your client to connect to the database server using the FQDN or using the new IP address from the table above.
-
-### What do I do if my client applications are still connecting to the old gateway server?
-This indicates that your applications connect to the server using a static IP address instead of the FQDN. Review connection strings and connection pooling settings, AKS settings, and even the source code.
-
-### Is there any impact on my application connections?
-This maintenance is just a DNS change, so it's transparent to the client. Once the DNS cache is refreshed in the client (done automatically by the operating system), all new connections connect to the new IP address, and all existing connections keep working until the old IP address is fully decommissioned several weeks later. Retry logic isn't required for this case, but it's good practice for applications to have retry logic configured. Either use the FQDN to connect to the database server or allowlist the new gateway IP addresses in your application connection string.
-This maintenance operation won't drop existing connections. It only makes new connection requests go to the new gateway ring.
-
-### Can I request a specific time window for the maintenance?
-Because the migration is transparent and has no impact on connectivity, we expect no issues for most users. Still, review your application proactively and ensure that you either use the FQDN to connect to the database server or allowlist the new gateway IP addresses in your application connection string.
-
-### I'm using Private Link; will my connections be affected?
-No. This is a gateway hardware decommissioning and has no relation to Private Link or private IP addresses. It only affects the public IP addresses listed under the decommissioning IP addresses.
---
-## Next steps
-* [Create and manage Azure Database for MySQL firewall rules using the Azure portal](./howto-manage-firewall-using-portal.md)
-* [Create and manage Azure Database for MySQL firewall rules using Azure CLI](./howto-manage-firewall-using-cli.md)
-* [Configure redirection with Azure Database for MySQL](./howto-redirection.md)
mysql Concepts Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-connectivity.md
- Title: Transient connectivity errors - Azure Database for MySQL
-description: Learn how to handle transient connectivity errors and connect efficiently to Azure Database for MySQL.
-keywords: mysql connection,connection string,connectivity issues,transient error,connection error,connect efficiently
----- Previously updated : 3/18/2020--
-# Handle transient errors and connect efficiently to Azure Database for MySQL
--
-This article describes how to handle transient errors and connect efficiently to Azure Database for MySQL.
-
-## Transient errors
-
-A transient error, also known as a transient fault, is an error that resolves itself. Most typically, these errors manifest as a dropped connection to the database server, or as an inability to open new connections to a server. Transient errors can occur, for example, when a hardware or network failure happens, or when a new version of a PaaS service is being rolled out. Most of these events are automatically mitigated by the system in less than 60 seconds. A best practice for designing and developing applications in the cloud is to expect transient errors: assume they can happen in any component at any time, and have the appropriate logic in place to handle these situations.
-
-## Handling transient errors
-
-Transient errors should be handled using retry logic. Situations that must be considered:
-
-* An error occurs when you try to open a connection.
-* An idle connection is dropped on the server side. When you try to issue a command, it can't be executed.
-* An active connection that is currently executing a command is dropped.
-
-The first and second cases are fairly straightforward to handle: try to open the connection again. When you succeed, the transient error has been mitigated by the system, and you can use your Azure Database for MySQL again. We recommend waiting before retrying the connection and backing off if the initial retries fail; this way, the system can use all available resources to overcome the error situation. A good pattern to follow (sketched in code after this list) is:
-
-* Wait for 5 seconds before your first retry.
-* For each following retry, increase the wait exponentially, up to 60 seconds.
-* Set a max number of retries at which point your application considers the operation failed.
-
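-A minimal Java sketch of this pattern, using plain JDBC and the illustrative wait values above, might look like the following:
-
-```java
-// Sketch of the wait-and-retry pattern: 5-second first wait, exponential growth
-// capped at 60 seconds, and a fixed maximum number of attempts.
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.SQLException;
-
-public class RetryExample {
-    static Connection connectWithRetry(String url, String user, String pwd) throws Exception {
-        long waitMillis = 5_000;   // wait 5 seconds before the first retry
-        final int maxRetries = 5;  // afterwards, consider the operation failed
-        for (int attempt = 0; ; attempt++) {
-            try {
-                return DriverManager.getConnection(url, user, pwd);
-            } catch (SQLException e) {
-                if (attempt >= maxRetries) throw e;
-                Thread.sleep(waitMillis);
-                waitMillis = Math.min(waitMillis * 2, 60_000); // back off exponentially, cap at 60s
-            }
-        }
-    }
-}
-```
-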
-When a connection with an active transaction fails, it is more difficult to handle the recovery correctly. There are two cases: if the transaction was read-only in nature, it is safe to reopen the connection and retry the transaction. If, however, the transaction was also writing to the database, you must determine whether the transaction was rolled back or whether it succeeded before the transient error happened. In the latter case, you might simply not have received the commit acknowledgment from the database server.
-
-One way of doing this is to generate a unique ID on the client that is used for all the retries. You pass this unique ID as part of the transaction to the server and store it in a column with a unique constraint. This way you can safely retry the transaction: it will succeed if the previous transaction was rolled back and the client-generated unique ID does not yet exist in the system, and it will fail with a duplicate key violation if the unique ID was previously stored because the previous transaction completed successfully.
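-
-A minimal Java sketch of this technique follows; the `orders` table and its unique `client_token` column are hypothetical, and autocommit is assumed to be disabled:
-
-```java
-// Sketch: idempotent retry with a client-generated unique ID.
-// The caller generates 'token' (for example, UUID.randomUUID().toString()) once,
-// before any retries, and reuses it on every attempt.
-import java.sql.Connection;
-import java.sql.PreparedStatement;
-import java.sql.SQLIntegrityConstraintViolationException;
-
-public class IdempotentWriteExample {
-    static void writeOnce(Connection con, String token, String payload) throws Exception {
-        try (PreparedStatement ps = con.prepareStatement(
-                "INSERT INTO orders (client_token, payload) VALUES (?, ?)")) {
-            ps.setString(1, token);
-            ps.setString(2, payload);
-            ps.executeUpdate();
-            con.commit();
-        } catch (SQLIntegrityConstraintViolationException dup) {
-            // Duplicate key on client_token: an earlier attempt already committed.
-        }
-    }
-}
-```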
-
-When your program communicates with Azure Database for MySQL through third-party middleware, ask the vendor whether the middleware contains retry logic for transient errors.
-
-Make sure to test your retry logic. For example, try to execute your code while scaling up or down the compute resources of your Azure Database for MySQL server. Your application should handle the brief downtime that is encountered during this operation without any problems.
-
-## Connect efficiently to Azure Database for MySQL
-
-Database connections are a limited resource, so making effective use of connection pooling to access Azure Database for MySQL optimizes performance. The sections below explain how to use connection pooling or persistent connections to access Azure Database for MySQL more effectively.
-
-## Access databases by using connection pooling (recommended)
-
-Managing database connections can have a significant impact on the performance of the application as a whole. To optimize the performance of your application, the goal should be to reduce the number of times connections are established and the time spent establishing connections in key code paths. We strongly recommend using database connection pooling or persistent connections to connect to Azure Database for MySQL. Database connection pooling handles the creation, management, and allocation of database connections. When a program requests a database connection, it prioritizes the allocation of existing idle database connections rather than the creation of a new connection. After the program has finished using the database connection, the connection is returned to the pool for further use rather than simply being closed.
-
-For better illustration, this article provides [a piece of sample code](./sample-scripts-java-connection-pooling.md) that uses Java as an example. For more information, see [Apache Commons DBCP](https://commons.apache.org/proper/commons-dbcp/).
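-
-As a minimal sketch of the pooling approach, assuming Apache Commons DBCP2 is on the classpath and with placeholder connection details:
-
-```java
-// Minimal connection-pooling sketch with Apache Commons DBCP2 (placeholder credentials).
-import java.sql.Connection;
-import org.apache.commons.dbcp2.BasicDataSource;
-
-public class DbcpPoolExample {
-    public static void main(String[] args) throws Exception {
-        BasicDataSource ds = new BasicDataSource();
-        ds.setUrl("jdbc:mysql://<servername>.mysql.database.azure.com:3306/mydb?useSSL=true");
-        ds.setUsername("myadmin@<servername>");
-        ds.setPassword("yourpassword");
-        ds.setInitialSize(5);  // connections created up front
-        ds.setMaxTotal(20);    // upper bound on pooled connections
-
-        // Borrowing and closing returns the connection to the pool instead of dropping it.
-        try (Connection con = ds.getConnection()) {
-            // execute your query here
-        }
-    }
-}
-```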
-
-> [!NOTE]
-> The server configures a timeout mechanism to close a connection that has been in an idle state for some time to free up resources. Be sure to set up the verification system to ensure the effectiveness of persistent connections when you are using them. For more information, see [Configure verification systems on the client side to ensure the effectiveness of persistent connections](concepts-connectivity.md#configure-verification-mechanisms-in-clients-to-confirm-the-effectiveness-of-persistent-connections).
-
-## Access databases by using persistent connections (recommended)
-
-The concept of persistent connections is similar to that of connection pooling. Replacing short connections with persistent connections requires only minor changes to the code, but it has a major effect in terms of improving performance in many typical application scenarios.
-
-## Access databases by using a wait-and-retry mechanism with short connections
-
-If you have resource limitations, we strongly recommend that you use connection pooling or persistent connections to access databases. If your application uses short connections and experiences connection failures when you approach the upper limit on the number of concurrent connections, you can try a wait-and-retry mechanism. Set an appropriate wait time, with a shorter wait time after the first attempt, and increase the wait on subsequent attempts.
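-
-A minimal Java sketch of such a mechanism, assuming the server reports MySQL error 1040 ("Too many connections") when the limit is reached, might look like this:
-
-```java
-// Sketch: retry only on the concurrent-connection limit (MySQL error 1040);
-// other failures surface immediately. Wait times here are illustrative.
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.SQLException;
-
-public class ShortConnectionRetry {
-    static Connection connect(String url, String user, String pwd) throws Exception {
-        long waitMillis = 1_000; // short first wait, lengthened on each later attempt
-        for (int attempt = 0; ; attempt++) {
-            try {
-                return DriverManager.getConnection(url, user, pwd);
-            } catch (SQLException e) {
-                if (e.getErrorCode() != 1040 || attempt >= 3) throw e;
-                Thread.sleep(waitMillis);
-                waitMillis *= 2;
-            }
-        }
-    }
-}
-```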
-
-## Configure verification mechanisms in clients to confirm the effectiveness of persistent connections
-
-The server configures a timeout mechanism to close a connection that's been in an idle state for some time to free up resources. When the client accesses the database again, it's equivalent to creating a new connection request between the client and the server. To ensure the effectiveness of connections during the process of using them, configure a verification mechanism on the client. As shown in the following example, you can use Tomcat JDBC connection pooling to configure this verification mechanism.
-
-When you set the TestOnBorrow parameter, the connection pool automatically verifies the effectiveness of any available idle connection before serving a new request. If the connection is effective, it's returned directly; otherwise, the pool withdraws the connection, creates a new effective connection, and returns that instead. This process ensures that the database is accessed efficiently.
-
-For information on the specific settings, see the [JDBC connection pool official introduction document](https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#Common_Attributes). You mainly need to set the following three parameters: TestOnBorrow (set to true), ValidationQuery (set to SELECT 1), and ValidationQueryTimeout (set to 1). The specific sample code is shown below:
-
-```java
-import java.sql.Connection;
-
-import org.apache.tomcat.jdbc.pool.DataSource;
-import org.apache.tomcat.jdbc.pool.PoolProperties;
-
-public class SimpleTestOnBorrowExample {
- public static void main(String[] args) throws Exception {
- PoolProperties p = new PoolProperties();
- p.setUrl("jdbc:mysql://localhost:3306/mysql");
- p.setDriverClassName("com.mysql.jdbc.Driver");
- p.setUsername("root");
- p.setPassword("password");
- // The indication of whether objects will be validated by the idle object evictor (if any).
- // If an object fails to validate, it will be dropped from the pool.
- // NOTE - for a true value to have any effect, the validationQuery or validatorClassName parameter must be set to a non-null string.
- p.setTestOnBorrow(true);
-
- // The SQL query that will be used to validate connections from this pool before returning them to the caller.
- // If specified, this query does not have to return any data, it just can't throw a SQLException.
- p.setValidationQuery("SELECT 1");
-
- // The timeout in seconds before a connection validation query fails.
- // This works by calling java.sql.Statement.setQueryTimeout(seconds) on the statement that executes the validationQuery.
- // The pool itself doesn't timeout the query, it is still up to the JDBC driver to enforce query timeouts.
- // A value less than or equal to zero will disable this feature.
- p.setValidationQueryTimeout(1);
- // set other useful pool properties.
- DataSource datasource = new DataSource();
- datasource.setPoolProperties(p);
-
- Connection con = null;
- try {
- con = datasource.getConnection();
- // execute your query here
- } finally {
- if (con!=null) try {con.close();}catch (Exception ignore) {}
- }
- }
- }
-```
-
-## Next steps
-
-* [Troubleshoot connection issues to Azure Database for MySQL](howto-troubleshoot-common-connection-issues.md)
mysql Concepts Data Access And Security Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-data-access-and-security-vnet.md
- Title: VNet service endpoints - Azure Database for MySQL
-description: 'Describes how VNet service endpoints work for your Azure Database for MySQL server.'
----- Previously updated : 7/17/2020-
-# Use Virtual Network service endpoints and rules for Azure Database for MySQL
--
-*Virtual network rules* are one firewall security feature that controls whether your Azure Database for MySQL server accepts communications that are sent from particular subnets in virtual networks. This article explains why the virtual network rule feature is sometimes your best option for securely allowing communication to your Azure Database for MySQL server.
-
-To create a virtual network rule, there must first be a [virtual network][vm-virtual-network-overview] (VNet) and a [virtual network service endpoint][vm-virtual-network-service-endpoints-overview-649d] for the rule to reference. The following picture illustrates how a Virtual Network service endpoint works with Azure Database for MySQL:
--
-> [!NOTE]
-> This feature is available in all regions of Azure where Azure Database for MySQL is deployed for General Purpose and Memory Optimized servers.
-> In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, please create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for MySQL server.
-
-You can also consider using [Private Link](concepts-data-access-security-private-link.md) for connections. Private Link provides a private IP address in your VNet for the Azure Database for MySQL server.
-
-<a name="anch-terminology-and-description-82f"></a>
-
-## Terminology and description
-
-**Virtual network:** You can have virtual networks associated with your Azure subscription.
-
-**Subnet:** A virtual network contains **subnets**. Any Azure virtual machines (VMs) that you have are assigned to subnets. One subnet can contain multiple VMs or other compute nodes. Compute nodes that are outside of your virtual network cannot access your virtual network unless you configure your security to allow access.
-
-**Virtual Network service endpoint:** A [Virtual Network service endpoint][vm-virtual-network-service-endpoints-overview-649d] is a subnet whose property values include one or more formal Azure service type names. In this article, we are interested in the type name of **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure Database for MySQL and PostgreSQL services. It's important to note that applying the **Microsoft.Sql** service tag to a VNet service endpoint configures service endpoint traffic for all Azure SQL Database, Azure Database for MySQL, and Azure Database for PostgreSQL servers on the subnet.
-
-**Virtual network rule:** A virtual network rule for your Azure Database for MySQL server is a subnet that is listed in the access control list (ACL) of your Azure Database for MySQL server. To be in the ACL for your Azure Database for MySQL server, the subnet must contain the **Microsoft.Sql** type name.
-
-A virtual network rule tells your Azure Database for MySQL server to accept communications from every node that is on the subnet.
-------
-<a name="anch-benefits-of-a-vnet-rule-68b"></a>
-
-## Benefits of a virtual network rule
-
-Until you take action, the VMs on your subnets cannot communicate with your Azure Database for MySQL server. One action that establishes the communication is the creation of a virtual network rule. The rationale for choosing the VNet rule approach requires a compare-and-contrast discussion involving the competing security options offered by the firewall.
-
-### A. Allow access to Azure services
-
-The Connection security pane has an **ON/OFF** button that is labeled **Allow access to Azure services**. The **ON** setting allows communications from all Azure IP addresses and all Azure subnets. These Azure IPs or subnets might not be owned by you. This **ON** setting is probably more open than you want your Azure Database for MySQL server to be. The virtual network rule feature offers much finer granular control.
-
-### B. IP rules
-
-The Azure Database for MySQL firewall allows you to specify IP address ranges from which communications are accepted into the Azure Database for MySQL Database. This approach is fine for stable IP addresses that are outside the Azure private network. But many nodes inside the Azure private network are configured with *dynamic* IP addresses. Dynamic IP addresses might change, such as when your VM is restarted. It would be folly to specify a dynamic IP address in a firewall rule, in a production environment.
-
-You can salvage the IP option by obtaining a *static* IP address for your VM. For details, see [Configure private IP addresses for a virtual machine by using the Azure portal][vm-configure-private-ip-addresses-for-a-virtual-machine-using-the-azure-portal-321w].
-
-However, the static IP approach can become difficult to manage, and it is costly when done at scale. Virtual network rules are easier to establish and to manage.
-
-<a name="anch-details-about-vnet-rules-38q"></a>
-
-## Details about virtual network rules
-
-This section describes several details about virtual network rules.
-
-### Only one geographic region
-
-Each Virtual Network service endpoint applies to only one Azure region. The endpoint does not enable other regions to accept communication from the subnet.
-
-Any virtual network rule is limited to the region that its underlying endpoint applies to.
-
-### Server-level, not database-level
-
-Each virtual network rule applies to your whole Azure Database for MySQL server, not just to one particular database on the server. In other words, a virtual network rule applies at the server level, not at the database level.
-
-### Security administration roles
-
-There is a separation of security roles in the administration of Virtual Network service endpoints. Action is required from each of the following roles:
-
-- **Network Admin:** Turn on the endpoint.
-- **Database Admin:** Update the access control list (ACL) to add the given subnet to the Azure Database for MySQL server.
-
-*Azure RBAC alternative:*
-
-The roles of Network Admin and Database Admin have more capabilities than are needed to manage virtual network rules. Only a subset of their capabilities is needed.
-
-You have the option of using [Azure role-based access control (Azure RBAC)][rbac-what-is-813s] in Azure to create a single custom role that has only the necessary subset of capabilities. The custom role could be used instead of involving either the Network Admin or the Database Admin. The surface area of your security exposure is lower if you add a user to a custom role, versus adding the user to the other two major administrator roles.
-
-> [!NOTE]
-> In some cases the Azure Database for MySQL and the VNet-subnet are in different subscriptions. In these cases you must ensure the following configurations:
-> - Both subscriptions must be in the same Azure Active Directory tenant.
-> - The user has the required permissions to initiate operations, such as enabling service endpoints and adding a VNet-subnet to the given Server.
-> - Make sure that both subscriptions have the **Microsoft.Sql** and **Microsoft.DBforMySQL** resource providers registered. For more information, refer to [resource-manager-registration][resource-manager-portal].
-
-## Limitations
-
-For Azure Database for MySQL, the virtual network rules feature has the following limitations:
-- A Web App can be mapped to a private IP in a VNet/subnet. Even if service endpoints are turned ON from the given VNet/subnet, connections from the Web App to the server will have an Azure public IP source, not a VNet/subnet source. To enable connectivity from a Web App to a server that has VNet firewall rules, you must turn on **Allow access to Azure services** on the server.
-
-- In the firewall for your Azure Database for MySQL, each virtual network rule references a subnet. All these referenced subnets must be hosted in the same geographic region that hosts the Azure Database for MySQL.
-
-- Each Azure Database for MySQL server can have up to 128 ACL entries for any given virtual network.
-
-- Virtual network rules apply only to Azure Resource Manager virtual networks, not to [classic deployment model][arm-deployment-model-568f] networks.
-
-- Turning ON virtual network service endpoints to Azure Database for MySQL using the **Microsoft.Sql** service tag also enables the endpoints for all Azure SQL Database, Azure Database for MySQL, and Azure Database for PostgreSQL servers on the subnet.
-
-- Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
-
-- If **Microsoft.Sql** is enabled in a subnet, it indicates that you only want to use VNet rules to connect. [Non-VNet firewall rules](concepts-firewall-rules.md) of resources in that subnet will not work.
-
-- On the firewall, IP address ranges do apply to the following networking items, but virtual network rules do not:
- - [Site-to-Site (S2S) virtual private network (VPN)][vpn-gateway-indexmd-608y]
- - On-premises via [ExpressRoute][expressroute-indexmd-744v]
-
-## ExpressRoute
-
-If your network is connected to the Azure network through use of [ExpressRoute][expressroute-indexmd-744v], each circuit is configured with two public IP addresses at the Microsoft Edge. The two IP addresses are used to connect to Microsoft Services, such as to Azure Storage, by using Azure Public Peering.
-
-To allow communication from your circuit to Azure Database for MySQL, you must create IP network rules for the public IP addresses of your circuits. In order to find the public IP addresses of your ExpressRoute circuit, open a support ticket with ExpressRoute by using the Azure portal.
-
-## Adding a VNet firewall rule to your server without turning on VNet service endpoints
-
-Merely setting a VNet firewall rule does not help secure the server to the VNet. You must also turn VNet service endpoints **On** for the security to take effect. When you turn service endpoints **On**, your VNet-subnet experiences downtime until it completes the transition from **Off** to **On**. This is especially true in the context of large VNets. You can use the **IgnoreMissingServiceEndpoint** flag to reduce or eliminate the downtime during transition.
-
-You can set the **IgnoreMissingServiceEndpoint** flag by using the Azure CLI or portal.
-
-## Related articles
-- [Azure virtual networks][vm-virtual-network-overview]
-- [Azure virtual network service endpoints][vm-virtual-network-service-endpoints-overview-649d]
-
-## Next steps
-For articles on creating VNet rules, see:
-- [Create and manage Azure Database for MySQL VNet rules using the Azure portal](howto-manage-vnet-using-portal.md)
-- [Create and manage Azure Database for MySQL VNet rules using Azure CLI](howto-manage-vnet-using-cli.md)
-
-<!-- Link references, to text, Within this same GitHub repo. -->
-[arm-deployment-model-568f]: ../azure-resource-manager/management/deployment-models.md
-
-[vm-virtual-network-overview]: ../virtual-network/virtual-networks-overview.md
-
-[vm-virtual-network-service-endpoints-overview-649d]: ../virtual-network/virtual-network-service-endpoints-overview.md
-
-[vm-configure-private-ip-addresses-for-a-virtual-machine-using-the-azure-portal-321w]: ../virtual-network/ip-services/virtual-networks-static-private-ip-arm-pportal.md
-
-[rbac-what-is-813s]: ../role-based-access-control/overview.md
-
-[vpn-gateway-indexmd-608y]: ../vpn-gateway/index.yml
-
-[expressroute-indexmd-744v]: ../expressroute/index.yml
-
-[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
mysql Concepts Data Access Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-data-access-security-private-link.md
- Title: Private Link - Azure Database for MySQL
-description: Learn how Private link works for Azure Database for MySQL.
----- Previously updated : 03/10/2020--
-# Private Link for Azure Database for MySQL
--
-Private Link allows you to connect to various PaaS services in Azure via a private endpoint. Azure Private Link essentially brings Azure services inside your private Virtual Network (VNet). The PaaS resources can be accessed using the private IP address just like any other resource in the VNet.
-
-For a list of PaaS services that support Private Link functionality, review the Private Link [documentation](../private-link/index.yml). A private endpoint is a private IP address within a specific [VNet](../virtual-network/virtual-networks-overview.md) and subnet.
-
-> [!NOTE]
-> The private link feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers.
-
-## Data exfiltration prevention
-
-Data exfiltration in Azure Database for MySQL is when an authorized user, such as a database admin, is able to extract data from one system and move it to another location or system outside the organization. For example, the user moves the data to a storage account owned by a third party.
-
-Consider a scenario with a user running MySQL Workbench inside an Azure Virtual Machine (VM) that is connecting to an Azure Database for MySQL server provisioned in West US. The example below shows how to limit access with public endpoints on Azure Database for MySQL using network access controls.
-
-* Disable all Azure service traffic to Azure Database for MySQL via the public endpoint by setting *Allow Azure Services* to OFF. Ensure no IP addresses or ranges are allowed to access the server either via [firewall rules](./concepts-firewall-rules.md) or [virtual network service endpoints](./concepts-data-access-and-security-vnet.md).
-
-* Only allow traffic to the Azure Database for MySQL using the Private IP address of the VM. For more information, see the articles on [Service Endpoint](concepts-data-access-and-security-vnet.md) and [VNet firewall rules](howto-manage-vnet-using-portal.md).
-
-* On the Azure VM, narrow down the scope of outgoing connections by using Network Security Groups (NSGs) and service tags as follows:
-
- * Specify an NSG rule to allow traffic for *Service Tag = SQL.WestUs* - only allowing connection to Azure Database for MySQL in West US
- * Specify an NSG rule (with a higher priority) to deny traffic for *Service Tag = SQL*, denying connections to Azure Database for MySQL in all regions
-
-At the end of this setup, the Azure VM can connect only to Azure Database for MySQL in the West US region. However, the connectivity isn't restricted to a single Azure Database for MySQL. The VM can still connect to any Azure Database for MySQL in the West US region, including the databases that aren't part of the subscription. While we've reduced the scope of data exfiltration in the above scenario to a specific region, we haven't eliminated it altogether.
-
-With Private Link, you can now set up network access controls like NSGs to restrict access to the private endpoint. Individual Azure PaaS resources are then mapped to specific private endpoints. A malicious insider can only access the mapped PaaS resource (for example an Azure Database for MySQL) and no other resource.
-
-## On-premises connectivity over private peering
-
-When you connect to the public endpoint from on-premises machines, your IP address needs to be added to the IP-based firewall using a server-level firewall rule. While this model works well for allowing access to individual machines for dev or test workloads, it's difficult to manage in a production environment.
-
-With Private Link, you can enable cross-premises access to the private endpoint using [ExpressRoute](https://azure.microsoft.com/services/expressroute/) (ER), private peering, or a [VPN tunnel](../vpn-gateway/index.yml). You can subsequently disable all access via the public endpoint and not use the IP-based firewall.
-
-> [!NOTE]
-> In some cases the Azure Database for MySQL and the VNet-subnet are in different subscriptions. In these cases you must ensure the following configurations:
-> - Make sure that both subscriptions have the **Microsoft.DBforMySQL** resource provider registered. For more information, refer to [resource-manager-registration][resource-manager-portal].
-
-## Configure Private Link for Azure Database for MySQL
-
-### Creation Process
-
-Private endpoints are required to enable Private Link. This can be done using the following how-to guides.
-
-* [Azure portal](./howto-configure-privatelink-portal.md)
-* [CLI](./howto-configure-privatelink-cli.md)
-
-### Approval Process
-Once the network admin creates the private endpoint (PE), the MySQL admin can manage the private endpoint connection (PEC) to Azure Database for MySQL. This separation of duties between the network admin and the DBA is helpful for management of the Azure Database for MySQL connectivity.
-
-* Navigate to the Azure Database for MySQL server resource in the Azure portal.
- * Select **Private endpoint connections** in the left pane.
- * A list of all private endpoint connections (PECs) is shown, along with the corresponding private endpoint (PE) created for each.
--
-* Select an individual PEC from the list.
--
-* The MySQL server admin can choose to approve or reject a PEC and optionally add a short text response.
--
-* After approval or rejection, the list will reflect the appropriate state along with the response text
--
-## Use cases of Private Link for Azure Database for MySQL
-
-Clients can connect to the private endpoint from the same VNet, [peered VNet](../virtual-network/virtual-network-peering-overview.md) in same region or across regions, or via [VNet-to-VNet connection](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) across regions. Additionally, clients can connect from on-premises using ExpressRoute, private peering, or VPN tunneling. Below is a simplified diagram showing the common use cases.
--
-### Connecting from an Azure VM in Peered Virtual Network (VNet)
-Configure [VNet peering](../virtual-network/tutorial-connect-virtual-networks-powershell.md) to establish connectivity to the Azure Database for MySQL from an Azure VM in a peered VNet.
-
-### Connecting from an Azure VM in VNet-to-VNet environment
-Configure [VNet-to-VNet VPN gateway connection](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) to establish connectivity to an Azure Database for MySQL from an Azure VM in a different region or subscription.
-
-### Connecting from an on-premises environment over VPN
-To establish connectivity from an on-premises environment to the Azure Database for MySQL, choose and implement one of the options:
-
-* [Point-to-Site connection](../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md)
-* [Site-to-Site VPN connection](../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md)
-* [ExpressRoute circuit](../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md)
-
-## Private Link combined with firewall rules
-
-The following situations and outcomes are possible when you use Private Link in combination with firewall rules:
-
-* If you don't configure any firewall rules, then by default, no traffic will be able to access the Azure Database for MySQL.
-
-* If you configure public traffic or a service endpoint and you create private endpoints, then different types of incoming traffic are authorized by the corresponding type of firewall rule.
-
-* If you don't configure any public traffic or service endpoint and you create private endpoints, then the Azure Database for MySQL is accessible only through the private endpoints. If you don't configure public traffic or a service endpoint, after all approved private endpoints are rejected or deleted, no traffic will be able to access the Azure Database for MySQL.
-
-## Deny public access for Azure Database for MySQL
-
-If you want to rely only on private endpoints for accessing your Azure Database for MySQL, you can disable all public endpoints (that is, [firewall rules](concepts-firewall-rules.md) and [VNet service endpoints](concepts-data-access-and-security-vnet.md)) by setting the **Deny Public Network Access** configuration on the database server.
-
-When this setting is set to *YES*, only connections via private endpoints are allowed to your Azure Database for MySQL. When this setting is set to *NO*, clients can connect to your Azure Database for MySQL based on your firewall or VNet service endpoint settings. Additionally, once this setting is configured, customers cannot add or update existing firewall rules and VNet service endpoint rules.
-
-> [!Note]
-> This feature is available in all Azure regions where Azure Database for MySQL - Single server supports General Purpose and Memory Optimized pricing tiers.
->
-> This setting does not have any impact on the SSL and TLS configurations for your Azure Database for MySQL.
-
-To learn how to set the **Deny Public Network Access** for your Azure Database for MySQL from Azure portal, refer to [How to configure Deny Public Network Access](howto-deny-public-network-access.md).
-
-## Next steps
-
-To learn more about Azure Database for MySQL security features, see the following articles:
-
-* To configure a firewall for Azure Database for MySQL, see [Firewall support](./concepts-firewall-rules.md).
-
-* To learn how to configure a virtual network service endpoint for your Azure Database for MySQL, see [Configure access from virtual networks](./concepts-data-access-and-security-vnet.md).
-
-* For an overview of Azure Database for MySQL connectivity, see [Azure Database for MySQL Connectivity Architecture](./concepts-connectivity-architecture.md)
-
-<!-- Link references, to text, Within this same GitHub repo. -->
-[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
mysql Concepts Data Encryption Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-data-encryption-mysql.md
- Title: Data encryption with customer-managed key - Azure Database for MySQL
-description: Azure Database for MySQL data encryption with a customer-managed key enables you to Bring Your Own Key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data.
----- Previously updated : 01/13/2020--
-# Azure Database for MySQL data encryption with a customer-managed key
--
-Data encryption with customer-managed keys for Azure Database for MySQL enables you to bring your own key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you're responsible for, and in full control of, a key's lifecycle, key usage permissions, and auditing of operations on keys.
-
-Data encryption with customer-managed keys for Azure Database for MySQL is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault](../key-vault/general/security-features.md) instance. The key encryption key (KEK) and data encryption key (DEK) are described in more detail later in this article.
-
-Key Vault is a cloud-based, external key management system. It's highly available and provides scalable, secure storage for RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs). It doesn't allow direct access to a stored key, but does provide services of encryption and decryption to authorized entities. Key Vault can generate the key, import it, or [have it transferred from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md).
-
-> [!NOTE]
-> This feature is supported only on "General Purpose storage v2 (supports up to 16 TB)" storage, available in the General Purpose and Memory Optimized pricing tiers. Refer to [Storage concepts](concepts-pricing-tiers.md#storage) for more details. For other limitations, refer to the [limitations](concepts-data-encryption-mysql.md#limitations) section.
-
-## Benefits
-
-Data encryption with customer-managed keys for Azure Database for MySQL provides the following benefits:
-
-* Data access is fully controlled by you: you can remove the key and make the database inaccessible
-* Full control over the key-lifecycle, including rotation of the key to align with corporate policies
-* Central management and organization of keys in Azure Key Vault
-* Ability to implement separation of duties between security officers, and DBA and system administrators
--
-## Terminology and description
-
-**Data encryption key (DEK)**: A symmetric AES256 key used to encrypt a partition or block of data. Encrypting each block of data with a different key makes crypto analysis attacks more difficult. Access to DEKs is needed by the resource provider or application instance that is encrypting and decrypting a specific block. When you replace a DEK with a new key, only the data in its associated block must be re-encrypted with the new key.
-
-**Key encryption key (KEK)**: An encryption key used to encrypt the DEKs. A KEK that never leaves Key Vault allows the DEKs themselves to be encrypted and controlled. The entity that has access to the KEK might be different than the entity that requires the DEK. Because the KEK is required to decrypt the DEKs, deleting the KEK effectively deletes the DEKs as well.
-
-The DEKs, encrypted with the KEKs, are stored separately. Only an entity with access to the KEK can decrypt these DEKs. For more information, see [Security in encryption at rest](../security/fundamentals/encryption-atrest.md).
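-
-As a conceptual, stand-alone illustration only (in the service itself the RSA KEK lives in Key Vault and never leaves it), the following Java sketch shows a symmetric DEK being wrapped and unwrapped by an asymmetric KEK with the standard JCA APIs:
-
-```java
-// Conceptual sketch of the KEK/DEK relationship: an RSA 2048 KEK wraps an AES-256 DEK.
-import javax.crypto.Cipher;
-import javax.crypto.KeyGenerator;
-import javax.crypto.SecretKey;
-import java.security.KeyPair;
-import java.security.KeyPairGenerator;
-import java.util.Arrays;
-
-public class EnvelopeEncryptionSketch {
-    public static void main(String[] args) throws Exception {
-        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA"); // the KEK (RSA 2048)
-        kpg.initialize(2048);
-        KeyPair kek = kpg.generateKeyPair();
-
-        KeyGenerator kg = KeyGenerator.getInstance("AES");          // the DEK (AES-256)
-        kg.init(256);
-        SecretKey dek = kg.generateKey();
-
-        // wrapKey: encrypt the DEK under the KEK's public key; the wrapped DEK is what gets stored.
-        Cipher wrap = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
-        wrap.init(Cipher.WRAP_MODE, kek.getPublic());
-        byte[] wrappedDek = wrap.wrap(dek);
-
-        // unwrapKey: recover the DEK with the KEK's private key when data must be decrypted.
-        Cipher unwrap = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
-        unwrap.init(Cipher.UNWRAP_MODE, kek.getPrivate());
-        SecretKey recovered = (SecretKey) unwrap.unwrap(wrappedDek, "AES", Cipher.SECRET_KEY);
-        System.out.println("DEK round-trip OK: " + Arrays.equals(dek.getEncoded(), recovered.getEncoded()));
-    }
-}
-```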
-
-## How data encryption with a customer-managed key works
--
-For a MySQL server to use customer-managed keys stored in Key Vault for encryption of the DEK, a Key Vault administrator gives the following access rights to the server:
-
-* **get**: For retrieving the public part and properties of the key in the key vault.
-* **wrapKey**: To be able to encrypt the DEK. The encrypted DEK is stored in the Azure Database for MySQL.
-* **unwrapKey**: To be able to decrypt the DEK. Azure Database for MySQL needs the decrypted DEK to encrypt/decrypt the data
-
-The key vault administrator can also [enable logging of Key Vault audit events](../azure-monitor/insights/key-vault-insights-overview.md), so they can be audited later.
-
-When the server is configured to use the customer-managed key stored in the key vault, the server sends the DEK to the key vault for encryption. Key Vault returns the encrypted DEK, which is stored in the user database. Similarly, when needed, the server sends the protected DEK to the key vault for decryption. Auditors can use Azure Monitor to review Key Vault audit event logs, if logging is enabled.
-
-## Requirements for configuring data encryption for Azure Database for MySQL
-
-The following are requirements for configuring Key Vault:
-
-* Key Vault and Azure Database for MySQL must belong to the same Azure Active Directory (Azure AD) tenant. Cross-tenant Key Vault and server interactions aren't supported. Moving Key Vault resource afterwards requires you to reconfigure the data encryption.
-* Enable the [soft-delete](../key-vault/general/soft-delete-overview.md) feature on the key vault with retention period set to **90 days**, to protect from data loss if an accidental key (or Key Vault) deletion happens. Soft-deleted resources are retained for 90 days by default, unless the retention period is explicitly set to <=90 days. The recover and purge actions have their own permissions associated in a Key Vault access policy. The soft-delete feature is off by default, but you can enable it through PowerShell or the Azure CLI (note that you can't enable it through the Azure portal).
-* Enable the [Purge Protection](../key-vault/general/soft-delete-overview.md#purge-protection) feature on the key vault with retention period set to **90 days**. Purge protection can only be enabled once soft-delete is enabled. It can be turned on via Azure CLI or PowerShell. When purge protection is on, a vault or an object in the deleted state cannot be purged until the retention period has passed. Soft-deleted vaults and objects can still be recovered, ensuring that the retention policy will be followed.
-* Grant the Azure Database for MySQL access to the key vault with the get, wrapKey, and unwrapKey permissions by using its unique managed identity. In the Azure portal, the unique 'Service' identity is automatically created when data encryption is enabled on the MySQL. See [Configure data encryption for MySQL](howto-data-encryption-portal.md) for detailed, step-by-step instructions when you're using the Azure portal.
-
-The following are requirements for configuring the customer-managed key:
-
-* The customer-managed key to be used for encrypting the DEK can be only asymmetric, RSA 2048.
-* The key activation date (if set) must be a date and time in the past. The expiration date not set.
-* The key must be in the *Enabled* state.
-* The key must have [soft delete](../key-vault/general/soft-delete-overview.md) enabled, with the retention period set to **90 days**. This implicitly sets the required key attribute recoveryLevel: "Recoverable". If the retention is set to fewer than 90 days, the recoveryLevel becomes "CustomizedRecoverable", which doesn't meet the requirement, so make sure the retention period is set to **90 days**.
-* The key must have [purge protection enabled](../key-vault/general/soft-delete-overview.md#purge-protection).
-* If you're [importing an existing key](/rest/api/keyvault/keys/import-key/import-key) into the key vault, make sure to provide it in the supported file formats (`.pfx`, `.byok`, `.backup`).
-
-## Recommendations
-
-When you're using data encryption by using a customer-managed key, here are recommendations for configuring Key Vault:
-
-* Set a resource lock on Key Vault to control who can delete this critical resource and prevent accidental or unauthorized deletion.
-* Enable auditing and reporting on all encryption keys. Key Vault provides logs that are easy to inject into other security information and event management tools. Azure Monitor Log Analytics is one example of a service that's already integrated.
-* Ensure that Key Vault and Azure Database for MySQL reside in the same region, to ensure faster access for DEK wrap and unwrap operations.
-* Lock down the Azure Key Vault to only **private endpoint and selected networks** and allow only *trusted Microsoft services* to secure the resources.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/keyvault-trusted-service.png" alt-text="trusted-service-with-AKV":::
-
-Here are recommendations for configuring a customer-managed key:
-
-* Keep a copy of the customer-managed key in a secure place, or escrow it to the escrow service.
-
-* If Key Vault generates the key, create a key backup before using the key for the first time. You can only restore the backup to Key Vault. For more information about the backup command, see [Backup-AzKeyVaultKey](/powershell/module/az.keyVault/backup-azkeyVaultkey).
-
-## Inaccessible customer-managed key condition
-
-When you configure data encryption with a customer-managed key in Key Vault, continuous access to this key is required for the server to stay online. If the server loses access to the customer-managed key in Key Vault, the server begins denying all connections within 10 minutes. The server issues a corresponding error message and changes the server state to *Inaccessible*. Some of the reasons why the server can reach this state are:
-
-* If we create a Point In Time Restore server for your Azure Database for MySQL, which has data encryption enabled, the newly created server will be in *Inaccessible* state. You can fix this through [Azure portal](howto-data-encryption-portal.md#using-data-encryption-for-restore-or-replica-servers) or [CLI](howto-data-encryption-cli.md#using-data-encryption-for-restore-or-replica-servers).
-* If we create a read replica for your Azure Database for MySQL, which has data encryption enabled, the replica server will be in *Inaccessible* state. You can fix this through [Azure portal](howto-data-encryption-portal.md#using-data-encryption-for-restore-or-replica-servers) or [CLI](howto-data-encryption-cli.md#using-data-encryption-for-restore-or-replica-servers).
-* If you delete the KeyVault, the Azure Database for MySQL will be unable to access the key and will move to *Inaccessible* state. Recover the [Key Vault](../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the server *Available*.
-* If we delete the key from the KeyVault, the Azure Database for MySQL will be unable to access the key and will move to *Inaccessible* state. Recover the [Key](../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the server *Available*.
-* If the key stored in the Azure KeyVault expires, the key will become invalid and the Azure Database for MySQL will transition into *Inaccessible* state. Extend the key expiry date using [CLI](/cli/azure/keyvault/key#az-keyvault-key-set-attributes) and then revalidate the data encryption to make the server *Available*.
-
-### Accidental key access revocation from Key Vault
-
-It might happen that someone with sufficient access rights to Key Vault accidentally disables server access to the key by:
-
-* Revoking the key vault's `get`, `wrapKey`, and `unwrapKey` permissions from the server.
-* Deleting the key.
-* Deleting the key vault.
-* Changing the key vault's firewall rules.
-* Deleting the managed identity of the server in Azure AD.
-
-## Monitor the customer-managed key in Key Vault
-
-To monitor the database state, and to enable alerting for the loss of transparent data encryption protector access, configure the following Azure features:
-
-* [Azure Resource Health](../service-health/resource-health-overview.md): An inaccessible database that has lost access to the customer key shows as "Inaccessible" after the first connection to the database has been denied.
-* [Activity log](../service-health/alerts-activity-log-service-notifications-portal.md): When access to the customer key in the customer-managed Key Vault fails, entries are added to the activity log. You can reinstate access as soon as possible, if you create alerts for these events.
-
-* [Action groups](../azure-monitor/alerts/action-groups.md): Define these groups to send you notifications and alerts based on your preferences.
-
-## Restore and replicate with a customer's managed key in Key Vault
-
-After Azure Database for MySQL is encrypted with a customer's managed key stored in Key Vault, any newly created copy of the server is also encrypted. You can make this new copy either through a local or geo-restore operation, or through read replicas. However, the copy can be changed to reflect a new customer's managed key for encryption. When the customer-managed key is changed, old backups of the server start using the latest key.
-
-To avoid issues while setting up customer-managed data encryption during restore or read replica creation, it's important to follow these steps on the source and restored/replica servers:
-
-* Initiate the restore or read replica creation process from the source Azure Database for MySQL.
-* Keep the newly created server (restored/replica) in an inaccessible state, because its unique identity hasn't yet been given permissions to Key Vault.
-* On the restored/replica server, revalidate the customer-managed key in the data encryption settings to ensure that the newly created server is given wrap and unwrap permissions to the key stored in Key Vault.
-
-## Limitations
-
-For Azure Database for MySQL, the support for encryption of data at rest using a customer-managed key (CMK) has a few limitations:
-
-* Support for this functionality is limited to **General Purpose** and **Memory Optimized** pricing tiers.
-* This feature is only supported in regions and servers, which support general purpose storage v2 (up to 16 TB). For the list of Azure regions supporting storage up to 16 TB, refer to the storage section in documentation [here](concepts-pricing-tiers.md#storage)
-
- > [!NOTE]
- > - For all new MySQL servers created in the [Azure regions](concepts-pricing-tiers.md#storage) supporting general purpose storage v2, support for encryption with customer-managed keys is **available**. A Point In Time Restored (PITR) server or read replica will not qualify, though in theory they are 'new'.
- > - To validate whether your provisioned server is on general purpose storage v2, go to the pricing tier blade in the portal and check the max storage size supported by your provisioned server. If you can move the slider up to 4 TB, your server is on general purpose storage v1 and will not support encryption with customer-managed keys. However, the data is encrypted using service-managed keys at all times. Please reach out to AskAzureDBforMySQL@service.microsoft.com if you have any questions.
-
-* Encryption is only supported with an RSA 2048 cryptographic key.
-
-## Next steps
-
-* Learn how to set up data encryption with a customer-managed key for your Azure database for MySQL by using the [Azure portal](howto-data-encryption-portal.md) and [Azure CLI](howto-data-encryption-cli.md).
-* Learn about the storage type support for [Azure Database for MySQL - Single Server](concepts-pricing-tiers.md#storage)
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-data-in-replication.md
- Title: Data-in Replication - Azure Database for MySQL
-description: Learn about using Data-in Replication to synchronize from an external server into the Azure Database for MySQL service.
----- Previously updated : 04/08/2021--
-# Replicate data into Azure Database for MySQL
--
-Data-in Replication allows you to synchronize data from an external MySQL server into the Azure Database for MySQL service. The external server can be on-premises, in virtual machines, or a database service hosted by other cloud providers. Data-in Replication is based on the binary log (binlog) file position-based or GTID-based replication native to MySQL. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
-
-## When to use Data-in Replication
-
-The main scenarios in which to consider using Data-in Replication are:
-
-- **Hybrid Data Synchronization:** With Data-in Replication, you can keep data synchronized between your on-premises servers and Azure Database for MySQL. This synchronization is useful for creating hybrid applications. This method is appealing when you have an existing local database server but want to move the data to a region closer to end users.
-- **Multi-Cloud Synchronization:** For complex cloud solutions, use Data-in Replication to synchronize data between Azure Database for MySQL and different cloud providers, including virtual machines and database services hosted in those clouds.
-
-For migration scenarios, use the [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) (DMS).
-
-## Limitations and considerations
-
-### Data not replicated
-
-The [*mysql system database*](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) on the source server isn't replicated. In addition, changes to accounts and permissions on the source server aren't replicated. If you create an account on the source server and this account needs to access the replica server, manually create the same account on the replica server. To understand what tables are contained in the system database, see the [MySQL manual](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html).
-
-### Filtering
-
-To skip replicating tables from your source server (hosted on-premises, in virtual machines, or a database service hosted by other cloud providers), the `replicate_wild_ignore_table` parameter is supported. Optionally, update this parameter on the replica server hosted in Azure using the [Azure portal](howto-server-parameters.md) or [Azure CLI](howto-configure-server-parameters-using-cli.md).
-
-To learn more about this parameter, review the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#option_mysqld_replicate-wild-ignore-table).
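-
-For example, the following Azure CLI sketch sets the parameter on the replica server to skip every table in a hypothetical `analytics` database; the resource group and server names are placeholders:
-
-```bash
-# Skip replicating all tables in the analytics database on the replica server.
-# Resource group, server, and database names are illustrative placeholders.
-az mysql server configuration set \
-    --resource-group myresourcegroup \
-    --server-name mydemoreplica \
-    --name replicate_wild_ignore_table \
-    --value "analytics.%"
-```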
-
-### Supported in General Purpose or Memory Optimized tier only
-
-Data-in Replication is only supported in General Purpose and Memory Optimized pricing tiers.
-
->[!Note]
->GTID is supported on versions 5.7 and 8.0 and only on servers that support storage up to 16 TB (General purpose storage v2).
-
-### Requirements
-
-- The source server version must be at least MySQL version 5.6.
-- The source and replica server versions must be the same. For example, both must be MySQL version 5.6 or both must be MySQL version 5.7.
-- Each table must have a primary key.
-- The source server should use the MySQL InnoDB engine.
-- The user must have permissions to configure binary logging and create new users on the source server.
-- If the source server has SSL enabled, ensure the SSL CA certificate provided for the domain has been included in the `mysql.az_replication_change_master` or `mysql.az_replication_change_master_with_gtid` stored procedure (see the sketch after this list). Refer to the following [examples](./howto-data-in-replication.md#link-source-and-replica-servers-to-start-data-in-replication) and the `master_ssl_ca` parameter.
-- Ensure that the source server's IP address has been added to the Azure Database for MySQL replica server's firewall rules. Update firewall rules using the [Azure portal](./howto-manage-firewall-using-portal.md) or [Azure CLI](./howto-manage-firewall-using-cli.md).
-- Ensure that the machine hosting the source server allows both inbound and outbound traffic on port 3306.
-- Ensure that the source server has a **public IP address**, that DNS is publicly accessible, or that the source server has a fully qualified domain name (FQDN).
-
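-As a minimal sketch of linking the servers (host names, credentials, and binary log coordinates are placeholders; take the real file name and position from `SHOW MASTER STATUS` on the source):
-
-```bash
-# Link the replica to the source and start Data-in Replication.
-mysql -h mydemoreplica.mysql.database.azure.com -u myadmin@mydemoreplica -p -e "
-CALL mysql.az_replication_change_master(
-    'source.example.com',  -- source server host name (placeholder)
-    'syncuser',            -- replication user created on the source
-    'secret-password',     -- replication user password (placeholder)
-    3306,                  -- MySQL port
-    'mysql-bin.000002',    -- binary log file name
-    120,                   -- binary log position
-    '');                   -- SSL CA certificate; empty string when SSL is not used
-CALL mysql.az_replication_start;"
-```
-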
-## Next steps
-
-- Learn how to [set up data-in replication](howto-data-in-replication.md)
-- Learn about [replicating in Azure with read replicas](concepts-read-replicas.md)
-- Learn about how to [migrate data with minimal downtime using DMS](howto-migrate-online.md)
mysql Concepts Database Application Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-database-application-development.md
- Title: Application development - Azure Database for MySQL
-description: Introduces design considerations that a developer should follow when writing application code to connect to Azure Database for MySQL
- Previously updated: 3/18/2020
-# Application development overview for Azure Database for MySQL
--
-This article discusses design considerations that a developer should follow when writing application code to connect to Azure Database for MySQL.
-
-> [!TIP]
-> For a tutorial showing you how to create a server, create a server-based firewall, view server properties, create a database, and connect and query by using MySQL Workbench and mysql.exe, see [Design your first Azure Database for MySQL database](tutorial-design-database-using-portal.md).
-
-## Language and platform
-There are code samples available for various programming languages and platforms. You can find links to the code samples at:
-[Connectivity libraries used to connect to Azure Database for MySQL](concepts-connection-libraries.md)
-
-## Tools
-Azure Database for MySQL uses the MySQL community version, compatible with MySQL common management tools such as Workbench or MySQL utilities such as mysql.exe, [phpMyAdmin](https://www.phpmyadmin.net/), [Navicat](https://www.navicat.com/products/navicat-for-mysql), [dbForge Studio for MySQL](https://www.devart.com/dbforge/mysql/studio/) and others. You can also use the Azure portal, Azure CLI, and REST APIs to interact with the database service.
-
-## Resource limitations
-Azure Database for MySQL manages the resources available to a server by using two different mechanisms:
-- Resource governance.
-- Enforcement of limits.
-
-## Security
-Azure Database for MySQL provides resources for limiting access, protecting data, configuring users and roles, and monitoring activities on a MySQL database.
-
-## Authentication
-Azure Database for MySQL supports server authentication of users and logins.
-
-## Resiliency
-When a transient error occurs while connecting to a MySQL database, your code should retry the call. We recommend that the retry logic use backoff so that it does not overwhelm the database with multiple clients retrying simultaneously.
-
-- Code samples: For code samples that illustrate retry logic, see samples for the language of your choice at [Connectivity libraries used to connect to Azure Database for MySQL](concepts-connection-libraries.md). A minimal shell sketch of the pattern follows.
-
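-As a language-neutral illustration of the pattern those samples demonstrate, here's a minimal shell sketch that retries a connection with exponential backoff and jitter. The server, user, and database names are placeholders:
-
-```bash
-#!/bin/bash
-# Retry a connection attempt with exponential backoff and jitter.
-# Server, user, and database names are illustrative placeholders.
-MAX_ATTEMPTS=5
-DELAY=2
-for attempt in $(seq 1 "$MAX_ATTEMPTS"); do
-    if mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver \
-             -p"$MYSQL_PASSWORD" -e "SELECT 1;" testdb; then
-        echo "Connected on attempt $attempt"
-        exit 0
-    fi
-    sleep $((DELAY + RANDOM % 3))   # back off, plus 0-2 seconds of jitter
-    DELAY=$((DELAY * 2))            # double the base delay each round
-done
-echo "Giving up after $MAX_ATTEMPTS attempts" >&2
-exit 1
-```
-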
-## Managing connections
-Database connections are a limited resource, so we recommend sensible use of connections when accessing your MySQL database to achieve better performance.
-- Access the database by using connection pooling or persistent connections.
-- Access the database by using a short connection life span.
-- Use retry logic in your application at the point of the connection attempt, to catch failures resulting from concurrent connections having reached the maximum allowed. In the retry logic, set a short delay, and then wait for a random time before the additional connection attempts.
mysql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-firewall-rules.md
- Title: Firewall rules - Azure Database for MySQL
-description: Learn about using firewall rules to enable connections to your Azure Database for MySQL server.
- Previously updated: 07/17/2020
-# Azure Database for MySQL server firewall rules
--
-Firewalls prevent all access to your database server until you specify which computers have permission. The firewall grants access to the server based on the originating IP address of each request.
-
-To configure a firewall, create firewall rules that specify ranges of acceptable IP addresses. You can create firewall rules at the server level.
-
-**Firewall rules:** These rules enable clients to access your entire Azure Database for MySQL server, that is, all the databases within the same logical server. Server-level firewall rules can be configured by using the Azure portal or Azure CLI commands. To create server-level firewall rules, you must be the subscription owner or a subscription contributor.
-
-## Firewall overview
-All database access to your Azure Database for MySQL server is by default blocked by the firewall. To begin using your server from another computer, you need to specify one or more server-level firewall rules to enable access to your server. Use the firewall rules to specify which IP address ranges from the Internet to allow. Access to the Azure portal website itself is not impacted by the firewall rules.
-
-Connection attempts from the Internet and Azure must first pass through the firewall before they can reach your Azure Database for MySQL database, as shown in the following diagram:
--
-## Connecting from the Internet
-Server-level firewall rules apply to all databases on the Azure Database for MySQL server.
-
-If the IP address of the request is within one of the ranges specified in the server-level firewall rules, then the connection is granted.
-
-If the IP address of the request is outside the ranges specified in any of the server-level firewall rules, then the connection request fails.
-
-## Connecting from Azure
-It is recommended that you find the outgoing IP address of any application or service and explicitly allow access to those individual IP addresses or ranges. For example, you can find the outgoing IP address of an Azure App Service or use a public IP tied to a virtual machine or other resource (see below for info on connecting with a virtual machine's private IP over service endpoints).
-
-If a fixed outgoing IP address isn't available for your Azure service, you can consider enabling connections from all Azure datacenter IP addresses. This setting can be enabled from the Azure portal by setting the **Allow access to Azure services** option to **ON** from the **Connection security** pane and hitting **Save**. From the Azure CLI, a firewall rule setting with starting and ending address equal to 0.0.0.0 does the equivalent. If the connection attempt is not allowed, the request does not reach the Azure Database for MySQL server.
-
-> [!IMPORTANT]
-> The **Allow access to Azure services** option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
->
--
-### Connecting from a VNet
-To connect securely to your Azure Database for MySQL server from a VNet, consider using [VNet service endpoints](./concepts-data-access-and-security-vnet.md).
-
-## Programmatically managing firewall rules
-In addition to the Azure portal, firewall rules can be managed programmatically by using the Azure CLI. See [Create and manage Azure Database for MySQL firewall rules using Azure CLI](./howto-manage-firewall-using-cli.md).
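-
-As a hedged sketch (the resource group, server name, and IP addresses are placeholders), creating a rule for a single client IP, plus the special 0.0.0.0 rule that corresponds to **Allow access to Azure services**, looks like this:
-
-```bash
-# Allow a single client IP address (placeholder values throughout).
-az mysql server firewall-rule create \
-    --resource-group myresourcegroup \
-    --server-name mydemoserver \
-    --name AllowMyClientIP \
-    --start-ip-address 203.0.113.5 \
-    --end-ip-address 203.0.113.5
-
-# A 0.0.0.0 rule is the CLI equivalent of "Allow access to Azure services".
-az mysql server firewall-rule create \
-    --resource-group myresourcegroup \
-    --server-name mydemoserver \
-    --name AllowAllAzureIPs \
-    --start-ip-address 0.0.0.0 \
-    --end-ip-address 0.0.0.0
-```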
-
-## Troubleshooting firewall issues
-Consider the following points when access to the Microsoft Azure Database for MySQL server service does not behave as expected:
-
-* **Changes to the allow list have not taken effect yet:** There may be as much as a five-minute delay for changes to the Azure Database for MySQL Server firewall configuration to take effect.
-
-* **The login is not authorized or an incorrect password was used:** If a login does not have permissions on the Azure Database for MySQL server or the password used is incorrect, the connection to the Azure Database for MySQL server is denied. Creating a firewall setting only provides clients with an opportunity to attempt connecting to your server; each client must provide the necessary security credentials.
-
-* **Dynamic IP address:** If you have an Internet connection with dynamic IP addressing and you are having trouble getting through the firewall, you can try one of the following solutions:
-
- * Ask your Internet Service Provider (ISP) for the IP address range assigned to your client computers that access the Azure Database for MySQL server, and then add the IP address range as a firewall rule.
-
- * Get static IP addressing instead for your client computers, and then add the IP addresses as firewall rules.
-
-* **Server's IP appears to be public:** Connections to the Azure Database for MySQL server are routed through a publicly accessible Azure gateway. However, the actual server IP is protected by the firewall. For more information, visit the [connectivity architecture article](concepts-connectivity-architecture.md).
-
-* **Cannot connect from Azure resource with allowed IP:** Check whether the **Microsoft.Sql** service endpoint is enabled for the subnet you are connecting from. If **Microsoft.Sql** is enabled, it indicates that you only want to use [VNet service endpoint rules](concepts-data-access-and-security-vnet.md) on that subnet.
-
- For example, you may see the following error if you are connecting from an Azure VM in a subnet that has **Microsoft.Sql** enabled but has no corresponding VNet rule:
- `FATAL: Client from Azure Virtual Networks is not allowed to access the server`
-
-* **Firewall rules are not available in IPv6 format:** Firewall rules must be in IPv4 format. If you specify firewall rules in IPv6 format, you'll see a validation error.
-
-## Next steps
-
-* [Create and manage Azure Database for MySQL firewall rules using the Azure portal](./howto-manage-firewall-using-portal.md)
-* [Create and manage Azure Database for MySQL firewall rules using Azure CLI](./howto-manage-firewall-using-cli.md)
-* [VNet service endpoints in Azure Database for MySQL](./concepts-data-access-and-security-vnet.md)
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-high-availability.md
- Title: High availability - Azure Database for MySQL
-description: This article provides information on high availability in Azure Database for MySQL
- Previously updated: 7/7/2020
-# High availability in Azure Database for MySQL
--
-The Azure Database for MySQL service provides a guaranteed high level of availability with the financially backed service level agreement (SLA) of [99.99%](https://azure.microsoft.com/support/legal/sla/mysql) uptime. Azure Database for MySQL provides high availability during planned events, such as user-initiated scale compute operations, and during unplanned events, such as underlying hardware, software, or network failures. Azure Database for MySQL can quickly recover from most critical circumstances, ensuring virtually no application downtime when using this service.
-
-Azure Database for MySQL is suitable for running mission critical databases that require high uptime. Built on Azure architecture, the service has inherent high availability, redundancy, and resiliency capabilities to mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components.
-
-## Components in Azure Database for MySQL
-
-| **Component** | **Description**|
-| | -- |
-| <b>MySQL Database Server | Azure Database for MySQL provides security, isolation, resource safeguards, and fast restart capability for database servers. These capabilities facilitate operations such as scaling and database server recovery after an outage, which complete in 60-120 seconds depending on the transactional activity on the database. <br/> Data modifications in the database server typically occur in the context of a database transaction. All database changes are recorded synchronously in the form of write-ahead logs (ib_log) on Azure Storage, which is attached to the database server. During the database [checkpoint](https://dev.mysql.com/doc/refman/5.7/en/innodb-checkpoints.html) process, data pages from the database server memory are also flushed to the storage. |
-| <b>Remote Storage | All MySQL physical data files and log files are stored on Azure Storage, which is architected to store three copies of data within a region to ensure data redundancy, availability, and reliability. The storage layer is also independent of the database server. It can be detached from a failed database server and reattached to a new database server within 60 seconds. Also, Azure Storage continuously monitors for any storage faults. If a block corruption is detected, it is automatically fixed by instantiating a new storage copy. |
-| <b>Gateway | The Gateway acts as a database proxy and routes all client connections to the database server. |
-
-## Planned downtime mitigation
-Azure Database for MySQL is architected to provide high availability during planned downtime operations.
--
-Here are some planned maintenance scenarios:
-
-| **Scenario** | **Description**|
-| | -- |
-| <b>Compute scale up/down | When the user performs compute scale up/down operation, a new database server is provisioned using the scaled compute configuration. In the old database server, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, and then it is shut down. The storage is then detached from the old database server and attached to the new database server. When the client application retries the connection, or tries to make a new connection, the Gateway directs the connection request to the new database server.|
-| <b>Scaling Up Storage | Scaling up the storage is an online operation and does not interrupt the database server.|
-| <b>New Software Deployment (Azure) | New features rollout or bug fixes automatically happen as part of the service's planned maintenance. For more information, refer to the [documentation](concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
-| <b>Minor version upgrades | Azure Database for MySQL automatically patches database servers to the minor version determined by Azure. It happens as part of the service's planned maintenance. During planned maintenance, there can be database server restarts or failovers, which might lead to brief unavailability of the database servers for end users. Azure Database for MySQL servers run in containers, so database server restarts typically complete in 60-120 seconds. The entire planned maintenance event, including each server restart, is carefully monitored by the engineering team. The server failover time depends on the database recovery time, which can make the database take longer to come online if there's heavy transactional activity on the server at the time of failover. To avoid a longer restart time, avoid long-running transactions (bulk loads) during planned maintenance events. For more information, refer to the [documentation](concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
--
-## Unplanned downtime mitigation
-
-Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in 60-120 seconds. The remote storage is automatically attached to the new database server. MySQL engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While an unplanned downtime cannot be avoided, Azure Database for MySQL mitigates the downtime by automatically performing recovery operations at both database server and storage layers without requiring human intervention.
---
-### Unplanned downtime: failure scenarios and service recovery
-Here are some failure scenarios and how Azure Database for MySQL automatically recovers:
-
-| **Scenario** | **Automatic recovery** |
-| - | - |
-| <B>Database server failure | If the database server is down because of some underlying hardware fault, active connections are dropped, and any inflight transactions are aborted. A new database server is automatically deployed, and the remote data storage is attached to the new database server. After the database recovery is complete, clients can connect to the new database server through the Gateway. <br /> <br /> Applications using the MySQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. When the application retries, the Gateway transparently redirects the connection to the newly created database server. |
-| <B>Storage failure | Applications do not see any impact from storage-related issues, such as a disk failure or a physical block corruption. As the data is stored in three copies, a surviving copy serves the data. Block corruptions are automatically corrected. If a copy of data is lost, a new copy of the data is automatically created. |
-
-Here are some failure scenarios that require user action to recover:
-
-| **Scenario** | **Recovery plan** |
-| - | - |
-| <b> Region failure | Failure of a region is a rare event. However, if you need protection from a region failure, you can configure one or more read replicas in other regions for disaster recovery (DR). (See [this article](howto-read-replicas-portal.md) about creating and managing read replicas for details). In the event of a region-level failure, you can manually promote the read replica configured on the other region to be your production database server. |
-| <b> Logical/user errors | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](concepts-backup.md) (PITR), by restoring and recovering the data until the time just before the error had occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [mysqldump](concepts-migrate-dump-restore.md), and then use [restore](concepts-migrate-dump-restore.md#restore-your-mysql-database-using-command-line-or-mysql-workbench) to restore those tables into your database. |
---
-## Summary
-
-Azure Database for MySQL provides fast restart capability of database servers, redundant storage, and efficient routing from the Gateway. For additional data protection, you can configure backups to be geo-replicated, and also deploy one or more read replicas in other regions. With inherent high availability capabilities, Azure Database for MySQL protects your databases from most common outages, and offers an industry-leading, financially backed [99.99% uptime SLA](https://azure.microsoft.com/support/legal/sla/mysql). All these availability and reliability capabilities enable Azure to be the ideal platform to run your mission-critical applications.
-
-## Next steps
-- Learn about [Azure regions](../availability-zones/az-overview.md)
-- Learn about [handling transient connectivity errors](concepts-connectivity.md)
-- Learn how to [replicate your data with read replicas](howto-read-replicas-portal.md)
mysql Concepts Infrastructure Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-infrastructure-double-encryption.md
- Title: Infrastructure double encryption - Azure Database for MySQL
-description: Learn about using Infrastructure double encryption to add a second layer of encryption with service-managed keys.
- Previously updated: 6/30/2020
-# Azure Database for MySQL Infrastructure double encryption
--
-Azure Database for MySQL uses storage [encryption of data at-rest](concepts-security.md#at-rest) using Microsoft's managed keys. Data, including backups, is encrypted on disk, and this encryption is always on and can't be disabled. The encryption uses a FIPS 140-2 validated cryptographic module and an AES 256-bit cipher for Azure storage encryption.
-
-Infrastructure double encryption adds a second layer of encryption using service-managed keys. It uses a FIPS 140-2 validated cryptographic module, but with a different encryption algorithm, which provides an additional layer of protection for your data at rest. The key used in infrastructure double encryption is also managed by the Azure Database for MySQL service. Infrastructure double encryption is not enabled by default, since the additional layer of encryption can have a performance impact.
-
-> [!NOTE]
-> Like data encryption at rest, this feature is supported only on "General Purpose storage v2 (support up to 16TB)" storage available in the General Purpose and Memory Optimized pricing tiers. Refer to [Storage concepts](concepts-pricing-tiers.md#storage) for more details. For other limitations, refer to the [limitations](concepts-infrastructure-double-encryption.md#limitations) section.
-
-Infrastructure Layer encryption has the benefit of being implemented at the layer closest to the storage device or network wires. Azure Database for MySQL implements the two layers of encryption using service-managed keys. Although still technically in the service layer, it is very close to hardware that stores the data at rest. You can still optionally enable data encryption at rest using [customer managed key](concepts-data-encryption-mysql.md) for the provisioned MySQL server.
-
-Implementation at the infrastructure layer also supports a diversity of keys. Infrastructure must be aware of different clusters of machines and networks. As such, different keys are used to minimize the blast radius of infrastructure attacks and of a variety of hardware and network failures.
-
-> [!NOTE]
-> Using Infrastructure double encryption will have 5-10% impact on the throughput of your Azure Database for MySQL server due to the additional encryption process.
-
-## Benefits
-
-Infrastructure double encryption for Azure Database for MySQL provides the following benefits:
-
-1. **Additional diversity of crypto implementation** - The planned move to hardware-based encryption will further diversify the implementations by providing a hardware-based implementation in addition to the software-based implementation.
-2. **Implementation errors** - Two layers of encryption at the infrastructure layer protect against errors in caching or memory management at higher layers that could expose plaintext data. The two layers also protect against errors in the implementation of the encryption in general.
-
-The combination of these provides strong protection against common threats and weaknesses used to attack cryptography.
-
-## Supported scenarios with infrastructure double encryption
-
-The encryption capabilities that are provided by Azure Database for MySQL can be used together. Below is a summary of the various scenarios that you can use:
-
-| Scenario | Default encryption | Infrastructure double encryption | Data encryption using Customer-managed keys |
-|:--:|:--:|:--:|:--:|
-| 1 | *Yes* | *No* | *No* |
-| 2 | *Yes* | *Yes* | *No* |
-| 3 | *Yes* | *No* | *Yes* |
-| 4 | *Yes* | *Yes* | *Yes* |
-
-> [!Important]
-> - Scenarios 2 and 4 can introduce a 5-10 percent drop in throughput, based on the workload type, for the Azure Database for MySQL server, due to the additional layer of infrastructure encryption.
-> - Configuring infrastructure double encryption for Azure Database for MySQL is only allowed during server creation. Once the server is provisioned, you cannot change the storage encryption. However, you can still enable data encryption using customer-managed keys for a server created with or without infrastructure double encryption.
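-
-Because the setting is create-time only, it has to be passed when the server is provisioned. Here's a hedged Azure CLI sketch with placeholder names; verify the `--infrastructure-encryption` flag against the current `az mysql server create` reference before relying on it:
-
-```bash
-# Provision a server with infrastructure double encryption enabled.
-# Resource names, credentials, and the region are illustrative placeholders.
-az mysql server create \
-    --resource-group myresourcegroup \
-    --name mydemoserver \
-    --location westus2 \
-    --admin-user myadmin \
-    --admin-password 'ChangeMe!123' \
-    --sku-name GP_Gen5_2 \
-    --infrastructure-encryption Enabled
-```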
-
-## Limitations
-
-For Azure Database for MySQL, support for infrastructure double encryption has a few limitations:
-
-* Support for this functionality is limited to **General Purpose** and **Memory Optimized** pricing tiers.
-* This feature is only supported in regions and servers that support general purpose storage v2 (up to 16 TB). For the list of Azure regions supporting storage up to 16 TB, refer to the storage section in the documentation [here](concepts-pricing-tiers.md#storage)
-
- > [!NOTE]
- > - For all new MySQL servers created in the [Azure regions](concepts-pricing-tiers.md#storage) supporting general purpose storage v2, encryption with customer managed keys is **available**. A Point-In-Time Restored (PITR) server or read replica doesn't qualify, though in theory it's 'new'.
- > - To validate whether your provisioned server is on general purpose storage v2, go to the pricing tier blade in the portal and check the maximum storage size supported by your provisioned server. If you can only move the slider up to 4TB, your server is on general purpose storage v1 and doesn't support encryption with customer managed keys. However, the data is encrypted using service managed keys at all times. Please reach out to AskAzureDBforMySQL@service.microsoft.com if you have any questions.
---
-## Next steps
-
-Learn how to [set up infrastructure double encryption for Azure Database for MySQL](howto-double-encryption.md).
mysql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-limits.md
- Title: Limitations - Azure Database for MySQL
-description: This article describes limitations in Azure Database for MySQL, such as number of connection and storage engine options.
- Previously updated: 10/1/2020
-# Limitations in Azure Database for MySQL
--
-The following sections describe capacity, storage engine support, privilege support, data manipulation statement support, and functional limits in the database service. Also see [general limitations](https://dev.mysql.com/doc/mysql-reslimits-excerpt/5.6/en/limits.html) applicable to the MySQL database engine.
-
-## Server parameters
-
-> [!NOTE]
-> If you are looking for min/max values for server parameters like `max_connections` and `innodb_buffer_pool_size`, this information has moved to the **[server parameters](./concepts-server-parameters.md)** article.
-
-Azure Database for MySQL supports tuning the values of server parameters. The minimum and maximum values of some parameters (for example, `max_connections`, `join_buffer_size`, `query_cache_size`) are determined by the pricing tier and vCores of the server. Refer to [server parameters](./concepts-server-parameters.md) for more information about these limits.
-
-Upon initial deployment, an Azure Database for MySQL server includes system tables for time zone information, but these tables aren't populated. The time zone tables can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench. Refer to the [Azure portal](howto-server-parameters.md#working-with-the-time-zone-parameter) or [Azure CLI](howto-configure-server-parameters-using-cli.md#working-with-the-time-zone-parameter) articles for how to call the stored procedure and set the global or session-level time zones.
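-
-As a sketch (server, admin, and resource group names are placeholders), populating the tables and then setting a global time zone looks like this:
-
-```bash
-# Populate the time zone tables (run as the server admin user).
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
-      -e "CALL mysql.az_load_timezone();"
-
-# Then set the global time zone server parameter through the Azure CLI.
-az mysql server configuration set \
-    --resource-group myresourcegroup \
-    --server-name mydemoserver \
-    --name time_zone \
-    --value "US/Pacific"
-```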
-
-Password plugins such as "validate_password" and "caching_sha2_password" aren't supported by the service.
-
-## Storage engines
-
-MySQL supports many storage engines. On Azure Database for MySQL, the following storage engines are supported and unsupported:
-
-### Supported
-- [InnoDB](https://dev.mysql.com/doc/refman/5.7/en/innodb-introduction.html)
-- [MEMORY](https://dev.mysql.com/doc/refman/5.7/en/memory-storage-engine.html)
-
-### Unsupported
-- [MyISAM](https://dev.mysql.com/doc/refman/5.7/en/myisam-storage-engine.html)
-- [BLACKHOLE](https://dev.mysql.com/doc/refman/5.7/en/blackhole-storage-engine.html)
-- [ARCHIVE](https://dev.mysql.com/doc/refman/5.7/en/archive-storage-engine.html)
-- [FEDERATED](https://dev.mysql.com/doc/refman/5.7/en/federated-storage-engine.html)
-
-## Privileges & data manipulation support
-
-Many server parameters and settings can inadvertently degrade server performance or negate ACID properties of the MySQL server. To maintain the service integrity and SLA at a product level, this service doesn't expose multiple roles.
-
-The MySQL service doesn't allow direct access to the underlying file system. Some data manipulation commands aren't supported.
-
-### Unsupported
-
-The following are unsupported:
-- DBA role: Restricted. Alternatively, you can use the administrator user (created during new server creation), which allows you to perform most DDL and DML statements.
-- SUPER privilege: Similarly, the [SUPER privilege](https://dev.mysql.com/doc/refman/5.7/en/privileges-provided.html#priv_super) is restricted.
-- DEFINER: Requires super privileges to create and is restricted. If importing data using a backup, remove the `CREATE DEFINER` commands manually or by using the `--skip-definer` option when performing a [mysqlpump](https://dev.mysql.com/doc/refman/5.7/en/mysqlpump.html).
-- System databases: The [mysql system database](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) is read-only and used to support various PaaS functionality. You can't make changes to the `mysql` system database.
-- `SELECT ... INTO OUTFILE`: Not supported in the service.
-- `LOAD_FILE(file_name)`: Not supported in the service.
-- [BACKUP_ADMIN](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_backup-admin) privilege: Granting the BACKUP_ADMIN privilege is not supported for taking backups using any [utility tools](./how-to-decide-on-right-migration-tools.md).
-
-### Supported
-- `LOAD DATA INFILE` is supported, but the `[LOCAL]` parameter must be specified and directed to a UNC path (Azure storage mounted through SMB). Additionally, if you're using MySQL client version 8.0 or later, you need to include the `--local-infile=1` parameter in your connection string. A sketch follows.
-
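-A hedged example of loading a local CSV file this way; server, database, table, and file names are illustrative placeholders:
-
-```bash
-# Enable local infile on the client, then load a CSV file into a table.
-mysql --local-infile=1 -h mydemoserver.mysql.database.azure.com \
-      -u myadmin@mydemoserver -p testdb \
-      -e "LOAD DATA LOCAL INFILE 'data.csv' INTO TABLE mytable
-          FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';"
-```
-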
-## Functional limitations
-
-### Scale operations
-- Dynamic scaling to and from the Basic pricing tiers is currently not supported.
-- Decreasing server storage size is not supported.
-
-### Major version upgrades
-- [Major version upgrade is supported for v5.6 to v5.7 upgrades only](how-to-major-version-upgrade.md). Upgrades to v8.0 are not supported yet.
-
-### Point-in-time-restore
-- When using the PITR feature, the new server is created with the same configurations as the server it is based on.
-- Restoring a deleted server is not supported.
-
-### VNet service endpoints
-- Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
-
-### Storage size
-- Please refer to [pricing tiers](concepts-pricing-tiers.md#storage) for the storage size limits per pricing tier.
-
-## Current known issues
-- The MySQL server instance displays the wrong server version after the connection is established. To get the correct server instance engine version, use the `SELECT VERSION();` command.
-
-## Next steps
-- [What's available in each service tier](concepts-pricing-tiers.md)
-- [Supported MySQL database versions](concepts-supported-versions.md)
mysql Concepts Migrate Dbforge Studio For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-migrate-dbforge-studio-for-mysql.md
- Title: Use dbForge Studio for MySQL to migrate a MySQL database to Azure Database for MySQL
-description: The article demonstrates how to migrate to Azure Database for MySQL by using dbForge Studio for MySQL.
- Previously updated: 03/03/2021
-# Migrate data to Azure Database for MySQL with dbForge Studio for MySQL
--
-Looking to move your MySQL databases to Azure Database for MySQL? Consider using the migration tools in dbForge Studio for MySQL. With it, database transfer can be configured, saved, edited, automated, and scheduled.
-
-To complete the examples in this article, you'll need to download and install [dbForge Studio for MySQL](https://www.devart.com/dbforge/mysql/studio/).
-
-## Connect to Azure Database for MySQL
-
-1. In dbForge Studio for MySQL, select **New Connection** from the **Database** menu.
-
-1. Provide a host name and sign-in credentials.
-
-1. Select **Test Connection** to check the configuration.
--
-## Migrate with the Backup and Restore functionality
-
-You can choose from many options when using dbForge Studio for MySQL to migrate databases to Azure. If you need to move the entire database, it's best to use the **Backup and Restore** functionality.
-
-In this example, we migrate the *sakila* database from MySQL server to Azure Database for MySQL. The logic behind using the **Backup and Restore** functionality is to create a backup of the MySQL database and then restore it in Azure Database for MySQL.
-
-### Back up the database
-
-1. In dbForge Studio for MySQL, select **Backup Database** from the **Backup and Restore** menu. The **Database Backup Wizard** appears.
-
-1. On the **Backup content** tab of the **Database Backup Wizard**, select database objects you want to back up.
-
-1. On the **Options** tab, configure the backup process to fit your requirements.
-
- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/back-up-wizard-options.png" alt-text="Screenshot showing the options pane of the Backup wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/back-up-wizard-options.png":::
-
-1. Select **Next**, and then specify error processing behavior and logging options.
-
-1. Select **Backup**.
-
-### Restore the database
-
-1. In dbForge Studio for MySQL, connect to Azure Database for MySQL. [Refer to the instructions](#connect-to-azure-database-for-mysql).
-
-1. Select **Restore Database** from the **Backup and Restore** menu. The **Database Restore Wizard** appears.
-
-1. In the **Database Restore Wizard**, select a file with a database backup.
-
- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/restore-step-1.png" alt-text="Screenshot showing the Restore step of the Database Restore wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/restore-step-1.png":::
-
-1. Select **Restore**.
-
-1. Check the result.
-
-## Migrate with the Copy Databases functionality
-
-The **Copy Databases** functionality in dbForge Studio for MySQL is similar to **Backup and Restore**, except that it doesn't require two steps to migrate a database. It also lets you transfer two or more databases at once.
-
->[!NOTE]
-> The **Copy Databases** functionality is only available in the Enterprise edition of dbForge Studio for MySQL.
-
-In this example, we migrate the *world_x* database from MySQL server to Azure Database for MySQL.
-
-To migrate a database using the Copy Databases functionality:
-
-1. In dbForge Studio for MySQL, select **Copy Databases** from the **Database** menu.
-
-1. On the **Copy Databases** tab, specify the source and target connection. Also select the databases to be migrated.
-
- We enter the Azure MySQL connection and select the *world_x* database. Select the green arrow to start the process.
-
-1. Check the result.
-
-You'll see that the *world_x* database has successfully appeared in Azure MySQL.
--
-## Migrate a database with schema and data comparison
-
-You can choose from many options when using dbForge Studio for MySQL to migrate databases, schemas, and/or data to Azure. If you need to move selective tables from a MySQL database to Azure, it's best to use the **Schema Comparison** and the **Data Comparison** functionality.
-
-In this example, we migrate the *world* database from MySQL server to Azure Database for MySQL.
-
-
-The logic behind this approach is to create an empty database in Azure Database for MySQL and synchronize it with the source MySQL database. We first use the **Schema Comparison** tool, and next we use the **Data Comparison** functionality. These steps ensure that the MySQL schemas and data are accurately moved to Azure.
-
-To complete this exercise, you'll first need to [connect to Azure Database for MySQL](#connect-to-azure-database-for-mysql) and create an empty database.
-
-### Schema synchronization
-
-1. On the **Comparison** menu, select **New Schema Comparison**. The **New Schema Comparison Wizard** appears.
-
-1. Choose your source and target, and then specify the schema comparison options. Select **Compare**.
-
-1. In the comparison results grid that appears, select objects for synchronization. Select the green arrow button to open the **Schema Synchronization Wizard**.
-
-1. Walk through the steps of the wizard to configure synchronization. Select **Synchronize** to deploy the changes.
-
- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/schema-sync-wizard.png" alt-text="Screenshot showing the schema synchronization wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/schema-sync-wizard.png":::
-
-### Data Comparison
-
-1. On the **Comparison** menu, select **New Data Comparison**. The **New Data Comparison Wizard** appears.
-
-1. Choose your source and target, and then specify the data comparison options. Change mappings if necessary, and then select **Compare**.
-
-1. In the comparison results grid that appears, select objects for synchronization. Select the green arrow button to open the **Data Synchronization Wizard**.
-
- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/data-comp-result.png" alt-text="Screenshot showing the results of the data comparison." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/data-comp-result.png":::
-
-1. Walk through the steps of the wizard configuring synchronization. Select **Synchronize** to deploy the changes.
-
-1. Check the result.
-
- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/data-sync-result.png" alt-text="Screenshot showing the results of the Data Synchronization wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/data-sync-result.png":::
-
-## Next steps
-- [MySQL overview](overview.md)
mysql Concepts Migrate Dump Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-migrate-dump-restore.md
- Title: Migrate using dump and restore - Azure Database for MySQL
-description: This article explains two common ways to back up and restore databases in your Azure Database for MySQL, using tools such as mysqldump, MySQL Workbench, and PHPMyAdmin.
- Previously updated: 10/30/2020
-# Migrate your MySQL database to Azure Database for MySQL using dump and restore
--
-This article explains two common ways to back up and restore databases in your Azure Database for MySQL:
-- Dump and restore from the command-line (using mysqldump)
-- Dump and restore using phpMyAdmin
-
-You can also refer to the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide) for detailed information and use cases about migrating databases to Azure Database for MySQL. The guide will help you successfully plan and execute a MySQL migration to Azure.
-
-## Before you begin
-To step through this how-to guide, you need to have:
-
-- [Create Azure Database for MySQL server - Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md)
-- [mysqldump](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) command-line utility installed on a machine.
-- [MySQL Workbench](https://dev.mysql.com/downloads/workbench/) or another third-party MySQL tool to run dump and restore commands.
-
-> [!TIP]
-> If you're looking to migrate large databases, with sizes larger than 1 TB, consider using community tools like **mydumper/myloader**, which support parallel export and import. Learn [how to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
--
-## Common use-cases for dump and restore
-
-The most common use cases are:
-
-- **Moving from another managed service provider** - Most managed service providers may not provide access to the physical storage file for security reasons, so logical backup and restore is the only option to migrate.
-- **Migrating from an on-premises environment or virtual machine** - Azure Database for MySQL doesn't support restore of physical backups, which makes logical backup and restore the ONLY approach.
-- **Moving your backup storage from locally redundant to geo-redundant storage** - Azure Database for MySQL allows configuring locally redundant or geo-redundant storage for backup only during server creation. Once the server is provisioned, you cannot change the backup storage redundancy option. In order to move your backup storage from locally redundant storage to geo-redundant storage, dump and restore is the ONLY option.
-- **Migrating from alternative storage engines to InnoDB** - Azure Database for MySQL supports only the InnoDB storage engine, and therefore does not support alternative storage engines. If your tables are configured with other storage engines, convert them into the InnoDB engine format before migration to Azure Database for MySQL.
-
- For example, if you have a WordPress or WebApp using the MyISAM tables, first convert those tables by migrating into InnoDB format before restoring to Azure Database for MySQL. Use the clause `ENGINE=InnoDB` to set the engine used when creating a new table, then transfer the data into the compatible table before the restore.
-
- ```sql
- -- Copy rows from the MyISAM table into an InnoDB table created with ENGINE=InnoDB
- INSERT INTO innodb_table SELECT * FROM myisam_table ORDER BY primary_key_columns;
- ```
-> [!Important]
-> - To avoid any compatibility issues, ensure the same version of MySQL is used on the source and destination systems when dumping databases. For example, if your existing MySQL server is version 5.7, then you should migrate to Azure Database for MySQL configured to run version 5.7. The `mysql_upgrade` command does not function in an Azure Database for MySQL server, and is not supported.
-> - If you need to upgrade across MySQL versions, first dump or export your lower version database into a higher version of MySQL in your own environment. Then run `mysql_upgrade`, before attempting migration into an Azure Database for MySQL.
-
-## Performance considerations
-To optimize performance, take note of these considerations when dumping large databases; a combined example follows the list:
-- Use the `skip-triggers` option in mysqldump when dumping databases. Excluding triggers from dump files avoids the trigger commands firing during the data restore.
-- Use the `single-transaction` option to set the transaction isolation mode to REPEATABLE READ and send a START TRANSACTION SQL statement to the server before dumping data. Dumping many tables within a single transaction causes some extra storage to be consumed during restore. The `single-transaction` option and the `lock-tables` option are mutually exclusive, because LOCK TABLES causes any pending transactions to be committed implicitly. To dump large tables, combine the `single-transaction` option with the `quick` option.
-- Use the `extended-insert` multiple-row syntax that includes several VALUE lists. This results in a smaller dump file and speeds up inserts when the file is reloaded.
-- Use the `order-by-primary` option in mysqldump when dumping databases, so that the data is scripted in primary key order.
-- Use the `disable-keys` option in mysqldump when dumping data, to disable foreign key constraints before load. Disabling foreign key checks provides performance gains. Enable the constraints and verify the data after the load to ensure referential integrity.
-- Use partitioned tables when appropriate.
-- Load data in parallel. Avoid too much parallelism, which would cause you to hit a resource limit, and monitor resources using the metrics available in the Azure portal.
-- Use the `defer-table-indexes` option in mysqlpump when dumping databases, so that index creation happens after the table data is loaded.
-- Use the `skip-definer` option in mysqlpump to omit definer and SQL SECURITY clauses from the create statements for views and stored procedures. When you reload the dump file, it creates objects that use the default DEFINER and SQL SECURITY values.
-- Copy the backup files to an Azure blob store and perform the restore from there, which should be a lot faster than performing the restore across the Internet.
-
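-For illustration, here's a hedged mysqldump invocation combining the mysqldump-applicable options above; the database and file names are placeholders:
-
-```bash
-# Dump testdb with the performance-oriented mysqldump options discussed above.
-mysqldump -u root -p \
-    --single-transaction --quick \
-    --extended-insert --order-by-primary \
-    --disable-keys --skip-triggers \
-    testdb > testdb_backup.sql
-```
-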
-## Create a database on the target Azure Database for MySQL server
-Create an empty database on the target Azure Database for MySQL server where you want to migrate the data. Use a tool such as MySQL Workbench or mysql.exe to create the database. The database can have the same name as the database that contained the dumped data, or you can create a database with a different name.
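-
-For example, a quick way to create the empty database from the command line (server, admin, and database names are placeholders):
-
-```bash
-# Create an empty target database on the Azure Database for MySQL server.
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
-      -e "CREATE DATABASE testdb;"
-```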
-
-To get connected, locate the connection information in the **Overview** of your Azure Database for MySQL.
--
-Add the connection information into your MySQL Workbench.
--
-## Preparing the target Azure Database for MySQL server for fast data loads
-To prepare the target Azure Database for MySQL server for faster data loads, change the following server parameters and configuration:
-- max_allowed_packet – set to 1073741824 (that is, 1 GB) to prevent any overflow issues caused by long rows.
-- slow_query_log – set to OFF to turn off the slow query log. This eliminates the overhead caused by slow query logging during data loads.
-- query_store_capture_mode – set to NONE to turn off the Query Store. This eliminates the overhead caused by sampling activities by Query Store.
-- innodb_buffer_pool_size – Scale up the server to the 32 vCore Memory Optimized SKU from the Pricing tier of the portal during migration to increase the innodb_buffer_pool_size. Innodb_buffer_pool_size can only be increased by scaling up compute for an Azure Database for MySQL server.
-- innodb_io_capacity & innodb_io_capacity_max – Change to 9000 from the Server parameters in the Azure portal to improve IO utilization and optimize for migration speed.
-- innodb_read_io_threads & innodb_write_io_threads – Change to 4 from the Server parameters in the Azure portal to improve the speed of migration.
-- Scale up the storage tier – The IOPS for an Azure Database for MySQL server increase progressively with the storage tier. For faster loads, you may want to increase the storage tier to increase the IOPS provisioned. Remember that storage can only be scaled up, not down.
-
-Once the migration is completed, you can revert the server parameters and compute tier configuration to their previous values.
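-
-As a hedged sketch of applying the parameter changes with the Azure CLI (resource group and server names are placeholders):
-
-```bash
-# Apply the suggested server parameters before the load; revert them afterwards.
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name max_allowed_packet --value 1073741824
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name slow_query_log --value OFF
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name innodb_io_capacity --value 9000
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name innodb_io_capacity_max --value 9000
-```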
-
-## Dump and restore using mysqldump utility
-
-### Create a backup file from the command-line using mysqldump
-To back up an existing MySQL database on the local on-premises server or in a virtual machine, run the following command:
-```bash
-$ mysqldump --opt -u [uname] -p[pass] [dbname] > [backupfile.sql]
-```
-
-The parameters to provide are:
-- [uname] Your database username
-- [pass] The password for your database (note there is no space between -p and the password)
-- [dbname] The name of your database
-- [backupfile.sql] The filename for your database backup
-- [--opt] The mysqldump option
-
-For example, to back up a database named 'testdb' on your MySQL server with the username 'testuser' to a file named testdb_backup.sql, use the following commands. The backup file contains all the SQL statements needed to re-create the database. First, make sure that the username 'testuser' has at least the SELECT privilege for dumped tables, SHOW VIEW for dumped views, TRIGGER for dumped triggers, and LOCK TABLES if the `--single-transaction` option is not used:
-
-```sql
-GRANT SELECT, LOCK TABLES, SHOW VIEW ON *.* TO 'testuser'@'hostname' IDENTIFIED BY 'password';
-```
-Now run mysqldump to create the backup of the `testdb` database:
-
-```bash
-$ mysqldump -u root -p testdb > testdb_backup.sql
-```
-To select specific tables in your database to back up, list the table names separated by spaces. For example, to back up only table1 and table2 tables from the 'testdb', follow this example:
-
-```bash
-$ mysqldump -u root -p testdb table1 table2 > testdb_tables_backup.sql
-```
-To back up more than one database at once, use the `--databases` switch and list the database names, separated by spaces.
-```bash
-$ mysqldump -u root -p --databases testdb1 testdb3 testdb5 > testdb135_backup.sql
-```
-
-### Restore your MySQL database using command-line or MySQL Workbench
-Once you have created the target database, you can use the mysql command or MySQL Workbench to restore the data into the specific newly created database from the dump file.
-```bash
-mysql -h [hostname] -u [uname] -p[pass] [db_to_restore] < [backupfile.sql]
-```
-In this example, restore the data into the newly created database on the target Azure Database for MySQL server.
-
-Here's an example of how to use **mysql** for **Single Server**:
-
-```bash
-$ mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p testdb < testdb_backup.sql
-```
-Here's an example of how to use **mysql** for **Flexible Server**:
-
-```bash
-$ mysql -h mydemoserver.mysql.database.azure.com -u myadmin -p testdb < testdb_backup.sql
-```
--
-## Dump and restore using phpMyAdmin
-Follow these steps to dump and restore a database using phpMyAdmin.
-
-> [!NOTE]
-> For Single Server, the username must be in the format 'username@servername'. For Flexible Server, you can just use 'username'. If you use 'username@servername' for Flexible Server, the connection will fail.
-
-### Export with phpMyAdmin
-To export, you can use the common tool phpMyAdmin, which you may already have installed locally in your environment. To export your MySQL database using phpMyAdmin:
-1. Open phpMyAdmin.
-2. Select your database. Click the database name in the list on the left.
-3. Click the **Export** link. A new page appears to view the dump of the database.
-4. In the Export area, click the **Select All** link to choose the tables in your database.
-5. In the SQL options area, click the appropriate options.
-6. Click the **Save as file** option and the corresponding compression option and then click the **Go** button. A dialog box should appear prompting you to save the file locally.
-
-### Import using phpMyAdmin
-Importing your database is similar to exporting. Do the following actions:
-1. Open phpMyAdmin.
-2. In the phpMyAdmin setup page, click **Add** to add your Azure Database for MySQL server. Provide the connection details and login information.
-3. Create an appropriately named database and select it on the left of the screen. To rewrite the existing database, click the database name, select all the check boxes beside the table names, and select **Drop** to delete the existing tables.
-4. Click the **SQL** link to show the page where you can type in SQL commands, or upload your SQL file.
-5. Use the **browse** button to find the database file.
-6. Click the **Go** button to import the backup, execute the SQL commands, and re-create your database.
-
-## Known issues
-For known issues, tips, and tricks, we recommend that you check out our [techcommunity blog](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/tips-and-tricks-in-using-mysqldump-and-mysql-restore-to-azure/ba-p/916912).
-
-## Next steps
-- [Connect applications to Azure Database for MySQL](./howto-connection-string.md).
-- For more information about migrating databases to Azure Database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
-- If you're looking to migrate large databases, with sizes larger than 1 TB, consider using community tools like **mydumper/myloader**, which support parallel export and import. Learn [how to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
mysql Concepts Migrate Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-migrate-import-export.md
- Title: Import and export - Azure Database for MySQL
-description: This article explains common ways to import and export databases in Azure Database for MySQL, by using tools such as MySQL Workbench.
- Previously updated: 10/30/2020
-# Migrate your MySQL database by using import and export
--
-This article explains two common approaches to importing and exporting data to an Azure Database for MySQL server by using MySQL Workbench.
-
-For detailed and comprehensive migration guidance, see the [migration guide resources](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
-
-For other migration scenarios, see the [Database Migration Guide](https://datamigration.microsoft.com/).
-
-## Prerequisites
-
-Before you begin migrating your MySQL database, you need to:
-
-- Create an [Azure Database for MySQL server by using the Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md).
-- Download and install [MySQL Workbench](https://dev.mysql.com/downloads/workbench/) or another third-party MySQL tool for importing and exporting.
-
-## Create a database on the Azure Database for MySQL server
-
-Create an empty database on the Azure Database for MySQL server by using MySQL Workbench, Toad, or Navicat. The database can have the same name as the database that contains the dumped data, or you can create a database with a different name.
-
-To get connected, do the following:
-
-1. In the Azure portal, look for the connection information on the **Overview** pane of your Azure Database for MySQL.
-
- :::image type="content" source="./media/concepts-migrate-import-export/1_server-overview-name-login.png" alt-text="Screenshot of the Azure Database for MySQL server connection information in the Azure portal.":::
-
-1. Add the connection information to MySQL Workbench.
-
- :::image type="content" source="./media/concepts-migrate-import-export/2_setup-new-connection.png" alt-text="Screenshot of the MySQL Workbench connection string.":::
-
-## Determine when to use import and export techniques
-
-> [!TIP]
-> For scenarios where you want to dump and restore the entire database, use the [dump and restore](concepts-migrate-dump-restore.md) approach instead.
-
-In the following scenarios, use MySQL tools to import and export databases into your MySQL database. For other tools, go to the "Migration Methods" section (page 22) of the [MySQL to Azure Database migration guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
-- When you need to selectively choose a few tables to import from an existing MySQL database into your Azure MySQL database, it's best to use the import and export technique. By doing so, you can omit any unneeded tables from the migration to save time and resources. For example, use the `--include-tables` or `--exclude-tables` switch with [mysqlpump](https://dev.mysql.com/doc/refman/5.7/en/mysqlpump.html#option_mysqlpump_include-tables), and the `--tables` switch with [mysqldump](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_tables).
-- When you're moving database objects other than tables, explicitly create those objects. Include constraints (primary key, foreign key, and indexes), views, functions, procedures, triggers, and any other database objects that you want to migrate.
-- When you're migrating data from external data sources other than a MySQL database, create flat files and import them by using [mysqlimport](https://dev.mysql.com/doc/refman/5.7/en/mysqlimport.html).
-
-> [!Important]
-> Both Single Server and Flexible Server support only the InnoDB storage engine. Make sure that all tables in the database use the InnoDB storage engine when you're loading data into your Azure database for MySQL.
->
-> If your source database uses another storage engine, convert to the InnoDB engine before you migrate the database. For example, if you have a WordPress or web app that uses the MyISAM engine, first convert the tables by migrating the data into InnoDB tables. Use the clause `ENGINE=INNODB` to set the engine for creating a table, and then transfer the data into the compatible table before the migration.
-
- ```sql
- INSERT INTO innodb_table SELECT * FROM myisam_table ORDER BY primary_key_columns
- ```
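-
-As an alternative to creating a new table and copying rows into it, you can often convert a table to InnoDB in place with `ALTER TABLE`. The following is a minimal sketch that assumes hypothetical server, database, and table names:
-
-```bash
-# Convert a MyISAM table to InnoDB in place, then verify the engine (hypothetical names)
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
-  -e "ALTER TABLE mydb.mytable ENGINE=InnoDB; SHOW TABLE STATUS FROM mydb LIKE 'mytable';"
-```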
-
-## Performance recommendations for import and export
-
-For optimal data import and export performance, we recommend that you do the following:
-- Create clustered indexes and primary keys before you load data. Load the data in primary key order.
-- Delay the creation of secondary indexes until after the data is loaded.
-- Disable foreign key constraints before you load the data. Disabling foreign key checks provides significant performance gains; a sketch of this pattern follows this list. Enable the constraints and verify the data after the load to ensure referential integrity.
-- Load data in parallel. Avoid too much parallelism that would cause you to hit a resource limit, and monitor resources by using the metrics available in the Azure portal.
-- Use partitioned tables when appropriate.
-
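-A minimal sketch of the foreign key pattern above, assuming a hypothetical dump file named data_load.sql and placeholder connection values:
-
-```bash
-# Run a data load with foreign_key_checks disabled for the session, then re-enable it (hypothetical file and placeholder names)
-( echo "SET foreign_key_checks = 0;"; cat data_load.sql; echo "SET foreign_key_checks = 1;" ) \
-  | mysql -h <servername> -u <username> -p <Db_name>
-```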
-## Import and export data by using MySQL Workbench
-
-There are two ways to export and import data in MySQL Workbench: from the object browser context menu or from the Navigator pane. Each method serves a different purpose.
-
-> [!NOTE]
-> If you're adding a connection to MySQL Single Server or Flexible Server on MySQL Workbench, do the following:
->
-> - For MySQL Single Server, make sure that the user name is in the format *\<username@servername>*.
-> - For MySQL Flexible Server, use *\<username>* only. If you use *\<username@servername>* to connect, the connection will fail.
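-
-For example, with the mysql command-line client and a hypothetical server named mydemoserver with admin user myadmin, the two formats look like this:
-
-```bash
-# Single Server: the user name includes @servername
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
-
-# Flexible Server: the user name only
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin -p
-```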
-
-### Run the table data export and import wizards from the object browser context menu
--
-The table data wizards support import and export operations by using CSV and JSON files. The wizards include several configuration options, such as separators, column selection, and encoding selection. You can run each wizard against local or remotely connected MySQL servers. The import action includes table, column, and type mapping.
-
-To access these wizards from the object browser context menu, right-click a table, and then select **Table Data Export Wizard** or **Table Data Import Wizard**.
-
-#### The table data export wizard
-
-To export a table to a CSV file:
-
-1. Right-click the table of the database to be exported.
-1. Select **Table Data Export Wizard**. Select the columns to be exported, row offset (if any), and count (if any).
-1. On the **Select data for export** pane, select **Next**. Select the file path, CSV, or JSON file type. Also select the line separator, method of enclosing strings, and field separator.
-1. On the **Select output file location** pane, select **Next**.
-1. On the **Export data** pane, select **Next**.
-
-#### The table data import wizard
-
-To import a table from a CSV file:
-
-1. Right-click the table of the database to be imported, and then select **Table Data Import Wizard**.
-1. Look for and select the CSV file to be imported, and then select **Next**.
-1. Select the destination table (new or existing), select or clear the **Truncate table before import** check box, and then select **Next**.
-1. Select the encoding and the columns to be imported, and then select **Next**.
-1. On the **Import data** pane, select **Next**. The wizard imports the data.
-
-### Run the SQL data export and import wizards from the Navigator pane
-
-Use a wizard to export or import SQL data that's generated from MySQL Workbench or from the mysqldump command. You can access the wizards from the **Navigator** pane or you can select **Server** from the main menu.
-
-#### Export data
--
-You can use the **Data Export** pane to export your MySQL data.
-
-1. In MySQL Workbench, on the **Navigator** pane, select **Data Export**.
-
-1. On the **Data Export** pane, select each schema that you want to export.
-
- For each schema, you can select specific schema objects or tables to export. Configuration options include export to a project folder or a self-contained SQL file, dump stored routines and events, or skip table data.
-
- Alternatively, use **Export a Result Set** to export a specific result set in the SQL editor to another format, such as CSV, JSON, HTML, and XML.
-
-1. Select the database objects to export, and configure the related options.
-1. Select **Refresh** to load the current objects.
-1. Optionally, select **Advanced Options** at the upper right to refine the export operation. For example, add table locks, use `replace` instead of `insert` statements, and quote identifiers with backtick characters.
-1. Select **Start Export** to begin the export process.
--
-#### Import data
--
-You can use the **Data Import** pane to import or restore exported data from the data export operation or from the mysqldump command.
-
-1. In MySQL Workbench, on the **Navigator** pane, select **Data Import/Restore**.
-1. Select the project folder or self-contained SQL file, select the schema to import into, or select the **New** button to define a new schema.
-1. Select **Start Import** to begin the import process.
-
-## Next steps
-- For another migration approach, see [Migrate your MySQL database to an Azure database for MySQL by using dump and restore](concepts-migrate-dump-restore.md).
-- For more information about migrating databases to an Azure database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
mysql Concepts Migrate Mydumper Myloader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-migrate-mydumper-myloader.md
- Title: Migrate large databases to Azure Database for MySQL using mydumper/myloader
-description: This article explains two common ways to back up and restore databases in your Azure Database for MySQL by using the mydumper/myloader tools.
-Previously updated: 06/18/2021
-# Migrate large databases to Azure Database for MySQL using mydumper/myloader
--
-Azure Database for MySQL is a managed service that you use to run, manage, and scale highly available MySQL databases in the cloud. To migrate MySQL databases larger than 1 TB to Azure Database for MySQL, consider using community tools such as [mydumper/myloader](https://centminmod.com/mydumper.html), which provide the following benefits:
-
-* Parallelism, to help reduce the migration time.
-* Better performance, by avoiding expensive character set conversion routines.
-* An output format, with separate files for tables, metadata, and so on, that makes it easy to view and parse data.
-* Consistency, by maintaining a snapshot across all threads.
-* Accurate primary and replica log positions.
-* Easy management, as they support Perl Compatible Regular Expressions (PCRE) for specifying database and tables inclusions and exclusions.
-* Schema and data are migrated together; there's no need to handle them separately, as with other logical migration tools.
-
-This quickstart shows you how to install these tools, and then use them to back up and restore a MySQL database.
-
-## Prerequisites
-
-Before you begin migrating your MySQL database, you need to:
-
-1. Create an Azure Database for MySQL server by using the [Azure portal](./flexible-server/quickstart-create-server-portal.md).
-
-2. Create an Azure VM running Linux by using the [Azure portal](../virtual-machines/linux/quick-create-portal.md) (preferably Ubuntu).
- > [!Note]
- > Prior to installing the tools, consider the following points:
- >
- > * If your source is on-premises and has a high bandwidth connection to Azure (using ExpressRoute), consider installing the tool on an Azure VM.<br>
-    > * If bandwidth between the source and target is limited, consider installing mydumper near the source and myloader near the target server. You can use a tool like **[AzCopy](../storage/common/storage-use-azcopy-v10.md)** to move the data from on-premises or other cloud solutions to Azure.
-
-3. Install the mysql client by doing the following steps:
-
- * Update the package index on the Azure VM running Linux by running the following command:
- ```bash
- $ sudo apt update
- ```
- * Install the mysql client package by running the following command:
- ```bash
- $ sudo apt install mysql-client
- ```
-
-## Install mydumper/myloader
-
-To install mydumper/myloader, do the following steps.
-
-1. Depending on your OS distribution, download the appropriate mydumper/myloader package by running the following command:
-
- ```bash
- $ wget https://github.com/maxbube/mydumper/releases/download/v0.10.1/mydumper_0.10.1-2.$(lsb_release -cs)_amd64.deb
- ```
-
- > [!Note]
-    > `$(lsb_release -cs)` helps identify your distribution's codename.
-
-2. To install the .deb package for mydumper, run the following command:
-
-    ```bash
-    $ sudo dpkg -i mydumper_0.10.1-2.$(lsb_release -cs)_amd64.deb
-    ```
-
- > [!Tip]
-    > The command you use to install the package differs based on your Linux distribution, because the installers differ. mydumper/myloader is available for the following distributions: Fedora, Red Hat, Ubuntu, Debian, CentOS, openSUSE, and macOS. For more information, see **[How to install mydumper](https://github.com/maxbube/mydumper#how-to-install-mydumpermyloader)**.
-
-## Create a backup using mydumper
-
-* To create a backup using mydumper, run the following command:
-
- ```bash
- $ mydumper --host=<servername> --user=<username> --password=<Password> --outputdir=./backup --rows=100000 --compress --build-empty-files --threads=16 --compress-protocol --trx-consistency-only --ssl --regex '^(<Db_name>\.)' -L mydumper-logs.txt
- ```
-
-This command uses the following variables:
-
-* **--host:** The host to connect to
-* **--user:** Username with the necessary privileges
-* **--password:** User password
-* **--rows:** Try to split tables into chunks of this many rows
-* **--outputdir:** Directory to dump output files to
-* **--regex:** Regular expression for Database matching.
-* **--trx-consistency-only:** Transactional consistency only
-* **--threads:** Number of threads to use (default: 4). We recommend a value equal to twice the number of vCores on the computer.
-
-    >[!Note]
-    >For more information on other options you can use with mydumper, run **mydumper --help**. For more details, see the [mydumper/myloader documentation](https://centminmod.com/mydumper.html).<br>
-    >To dump multiple databases in parallel, modify the regex variable as shown in this example: **--regex '^(DbName1\.|DbName2\.)'**. A full command is sketched after this note.
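-
-    For example, a sketch of the same backup command adjusted to dump two hypothetical databases in parallel:
-
-    ```bash
-    $ mydumper --host=<servername> --user=<username> --password=<Password> --outputdir=./backup --rows=100000 --compress --build-empty-files --threads=16 --compress-protocol --trx-consistency-only --ssl --regex '^(DbName1\.|DbName2\.)' -L mydumper-logs.txt
-    ```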
-
-## Restore your database using myloader
-
-* To restore the database that you backed up using mydumper, run the following command:
-
- ```bash
- $ myloader --host=<servername> --user=<username> --password=<Password> --directory=./backup --queries-per-transaction=500 --threads=16 --compress-protocol --ssl --verbose=3 -e 2>myloader-logs.txt
- ```
-
-This command uses the following variables:
-
-* **--host:** The host to connect to
-* **--user:** Username with the necessary privileges
-* **--password:** User password
-* **--directory:** Location where the backup is stored.
-* **--queries-per-transaction:** Number of queries per transaction. We recommend a value of no more than 500.
-* **--threads:** Number of threads to use (default: 4). We recommend a value equal to twice the number of vCores on the computer.
-
-> [!Tip]
-> For more information on other options you can use with myloader, run the following command:
-**myloader --help**
-
-After the database is restored, it's always recommended to validate the data consistency between the source and target databases; a minimal row-count check is sketched below.
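-
-A minimal sketch of such a check, comparing row counts for a hypothetical table on the source and target servers (all names are placeholders):
-
-```bash
-# Compare row counts between the source server and the Azure Database for MySQL target
-mysql -h <source-servername> -u <username> -p -e "SELECT COUNT(*) FROM <Db_name>.<table_name>;"
-mysql -h <servername> -u <username> -p -e "SELECT COUNT(*) FROM <Db_name>.<table_name>;"
-```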
-
-> [!Note]
-> Submit any issues or feedback regarding the mydumper/myloader tools **[here](https://github.com/maxbube/mydumper/issues)**.
-
-## Next steps
-
-* Learn more about the [mydumper/myloader project in GitHub](https://github.com/maxbube/mydumper).
-* Learn [How to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
-* [Tutorial: Minimal Downtime Migration of Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server](howto-migrate-single-flexible-minimum-downtime.md)
-* Learn more about Data-in replication: [Replicate data into Azure Database for MySQL Flexible Server](flexible-server/concepts-data-in-replication.md) and [Configure Azure Database for MySQL Flexible Server Data-in replication](./flexible-server/how-to-data-in-replication.md)
-* Commonly encountered [migration errors](./howto-troubleshoot-common-errors.md)
mysql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-monitoring.md
- Title: Monitoring - Azure Database for MySQL
-description: This article describes the metrics for monitoring and alerting for Azure Database for MySQL, including CPU, storage, and connection statistics.
-Previously updated: 10/21/2020
-# Monitoring in Azure Database for MySQL
-
-Monitoring data about your servers helps you troubleshoot and optimize for your workload. Azure Database for MySQL provides various metrics that give insight into the behavior of your server.
-
-## Metrics
-
-All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics; a CLI sketch follows. For step-by-step guidance, see [How to set up alerts](howto-alert-on-metric.md). Other tasks include setting up automated actions, performing advanced analytics, and archiving history. For more information, see the [Azure Metrics Overview](../azure-monitor/data-platform.md).
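-
-For example, one way to script such an alert with the Azure CLI; the resource group, server, and alert names here are assumptions, not values from this article:
-
-```bash
-# Create a metric alert that fires when average CPU exceeds 80 percent (assumed names)
-az monitor metrics alert create \
-  --name mysql-cpu-over-80 \
-  --resource-group myresourcegroup \
-  --scopes "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.DBforMySQL/servers/mydemoserver" \
-  --condition "avg cpu_percent > 80" \
-  --description "Alert when average CPU exceeds 80 percent"
-```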
-
-### List of metrics
-
-These metrics are available for Azure Database for MySQL:
-
-|Metric|Metric Display Name|Unit|Description|
-|---|---|---|---|
-|cpu_percent|CPU percent|Percent|The percentage of CPU in use.|
-|memory_percent|Memory percent|Percent|The percentage of memory in use.|
-|io_consumption_percent|IO percent|Percent|The percentage of IO in use. (Not applicable for Basic tier servers)|
-|storage_percent|Storage percentage|Percent|The percentage of storage used out of the server's maximum.|
-|storage_used|Storage used|Bytes|The amount of storage in use. The storage used by the service may include the database files, transaction logs, and the server logs.|
-|serverlog_storage_percent|Server Log storage percent|Percent|The percentage of server log storage used out of the server's maximum server log storage.|
-|serverlog_storage_usage|Server Log storage used|Bytes|The amount of server log storage in use.|
-|serverlog_storage_limit|Server Log storage limit|Bytes|The maximum server log storage for this server.|
-|storage_limit|Storage limit|Bytes|The maximum storage for this server.|
-|active_connections|Active Connections|Count|The number of active connections to the server.|
-|connections_failed|Failed Connections|Count|The number of failed connections to the server.|
-|seconds_behind_master|Replication lag in seconds|Count|The number of seconds the replica server is lagging against the source server. (Not applicable for Basic tier servers)|
-|network_bytes_egress|Network Out|Bytes|Network Out across active connections.|
-|network_bytes_ingress|Network In|Bytes|Network In across active connections.|
-|backup_storage_used|Backup Storage Used|Bytes|The amount of backup storage used. This metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained in the [concepts article](concepts-backup.md). For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.|
-
-## Server logs
-
-You can enable slow query and audit logging on your server. These logs are also available through Azure Diagnostic Logs in Azure Monitor logs, Event Hubs, and Storage Account. To learn more about logging, visit the [audit logs](concepts-audit-logs.md) and [slow query logs](concepts-server-logs.md) articles.
-
-## Query Store
-
-[Query Store](concepts-query-store.md) is a feature that keeps track of query performance over time including query runtime statistics and wait events. The feature persists query runtime performance information in the **mysql** schema. You can control the collection and storage of data via various configuration knobs.
-
-## Query Performance Insight
-
-[Query Performance Insight](concepts-query-performance-insight.md) works in conjunction with Query Store to provide visualizations accessible from the Azure portal. These charts enable you to identify key queries that impact performance. Query Performance Insight is accessible in the **Intelligent Performance** section of your Azure Database for MySQL server's portal page.
-
-## Performance Recommendations
-
-The [Performance Recommendations](concepts-performance-recommendations.md) feature identifies opportunities to improve workload performance. Performance Recommendations provides you with recommendations for creating new indexes that have the potential to improve the performance of your workloads. To produce index recommendations, the feature takes into consideration various database characteristics, including its schema and the workload as reported by Query Store. After implementing any performance recommendation, customers should test performance to evaluate the impact of those changes.
-
-## Planned maintenance notification
-
-[Planned maintenance notifications](./concepts-planned-maintenance-notification.md) allow you to receive alerts for upcoming planned maintenance to your Azure Database for MySQL. These notifications are integrated with [Service Health's](../service-health/overview.md) planned maintenance and allow you to view all scheduled maintenance for your subscriptions in one place. They also help you scale notifications to the right audiences for different resource groups, as you may have different contacts responsible for different resources. You will receive notification of upcoming maintenance 72 hours before the event.
-
-Learn more about how to set up notifications in the [planned maintenance notifications](./concepts-planned-maintenance-notification.md) document.
-
-## Next steps
-- See [How to set up alerts](howto-alert-on-metric.md) for guidance on creating an alert on a metric.
-- For more information on how to access and export metrics using the Azure portal, REST API, or CLI, see the [Azure Metrics Overview](../azure-monitor/data-platform.md).
-- Read our blog on [best practices for monitoring your server](https://azure.microsoft.com/blog/best-practices-for-alerting-on-metrics-with-azure-database-for-mysql-monitoring/).
-- Learn more about [planned maintenance notifications](./concepts-planned-maintenance-notification.md) in Azure Database for MySQL - Single Server
mysql Concepts Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-performance-recommendations.md
- Title: Performance recommendations - Azure Database for MySQL
-description: This article describes the Performance Recommendation feature in Azure Database for MySQL
-Previously updated: 6/3/2020
-# Performance Recommendations in Azure Database for MySQL
--
-**Applies to:** Azure Database for MySQL 5.7, 8.0
-
-The Performance Recommendations feature analyzes your databases to create customized suggestions for improved performance. To produce the recommendations, the analysis looks at various database characteristics including schema. Enable [Query Store](concepts-query-store.md) on your server to fully utilize the Performance Recommendations feature. If performance schema is OFF, turning on Query Store enables performance_schema and a subset of performance schema instruments required for the feature. After implementing any performance recommendation, you should test performance to evaluate the impact of those changes.
-
-## Permissions
-
-**Owner** or **Contributor** permissions are required to run analysis using the Performance Recommendations feature.
-
-## Performance recommendations
-
-The [Performance Recommendations](concepts-performance-recommendations.md) feature analyzes workloads across your server to identify indexes with the potential to improve performance.
-
-Open **Performance Recommendations** from the **Intelligent Performance** section of the menu bar on the Azure portal page for your MySQL server.
--
-Select **Analyze** and choose a database to begin the analysis. Depending on your workload, the analysis may take several minutes to complete. Once the analysis is done, there will be a notification in the portal. Analysis performs a deep examination of your database. We recommend that you perform analysis during off-peak periods.
-
-The **Recommendations** window will show a list of recommendations if any were found and the related query ID that generated this recommendation. With the query ID, you can use the [mysql.query_store](concepts-query-store.md#mysqlquery_store) view to learn more about the query.
--
-Recommendations are not automatically applied. To apply a recommendation, copy the query text and run it from your client of choice, as sketched below. Remember to test and monitor to evaluate the recommendation.
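-
-For example, if the analysis recommends an index, applying it from the mysql command-line client might look like the following; the server, database, table, and index names are hypothetical:
-
-```bash
-# Apply a hypothetical index recommendation copied from the Recommendations window
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p mydb \
-  -e "CREATE INDEX idx_orders_customer_id ON orders (customer_id);"
-```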
-
-## Recommendation types
-
-### Index recommendations
-
-*Create Index* recommendations suggest new indexes to speed up the most frequently run or time-consuming queries in the workload. This recommendation type requires [Query Store](concepts-query-store.md) to be enabled. Query Store collects query information and provides the detailed query runtime and frequency statistics that the analysis uses to make the recommendation.
-
-### Query recommendations
-
-Query recommendations suggest optimizations and rewrites for queries in the workload. By identifying MySQL query anti-patterns and fixing them syntactically, the performance of time-consuming queries can be improved. This recommendation type requires Query Store to be enabled. Query Store collects query information and provides the detailed query runtime and frequency statistics that the analysis uses to make the recommendation.
-
-## Next steps
-- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for MySQL.
mysql Concepts Planned Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-planned-maintenance-notification.md
- Title: Planned maintenance notification - Azure Database for MySQL - Single Server
-description: This article describes the Planned maintenance notification feature in Azure Database for MySQL - Single Server
-Previously updated: 10/21/2020
-# Planned maintenance notification in Azure Database for MySQL - Single Server
--
-Learn how to prepare for planned maintenance events on your Azure Database for MySQL.
-
-## What is a planned maintenance?
-
-The Azure Database for MySQL service performs automated patching of the underlying hardware, OS, and database engine. Patching includes new service features, security updates, and software updates. For the MySQL engine, minor version upgrades are automatic and included as part of the patching cycle. No user action or configuration setting is required for patching. Patches are tested extensively and rolled out using safe deployment practices.
-
-A planned maintenance is a maintenance window when these service updates are deployed to servers in a given Azure region. During planned maintenance, a notification event is created to inform customers when the service update is deployed in the Azure region hosting their servers. The minimum duration between two planned maintenance windows is 30 days. You receive a notification of the next maintenance window 72 hours in advance.
-
-## Planned maintenance - duration and customer impact
-
-A planned maintenance for a given Azure region is typically expected to run for 15 hours. The window also includes buffer time to execute a rollback plan if necessary. During planned maintenance, there can be database server restarts or failovers, which might lead to brief unavailability of the database servers for end users. Azure Database for MySQL servers run in containers, so database server restarts are typically quick, expected to complete in 60-120 seconds. The entire planned maintenance event, including each server restart, is carefully monitored by the engineering team. Server failover time depends on database recovery time, which can cause the database to take longer to come online if you have heavy transactional activity on the server at the time of failover. To avoid longer restart times, it is recommended to avoid any long-running transactions (bulk loads) during planned maintenance events.
-
-In summary, while the planned maintenance event runs for 15 hours, the individual server impact generally lasts 60 seconds depending on the transactional activity on the server. A notification is sent 72 calendar hours before planned maintenance starts and another one while maintenance is in progress for a given region.
-
-## How can I get notified of planned maintenance?
-
-You can utilize the planned maintenance notifications feature to receive alerts for an upcoming planned maintenance event. You will receive the notification about the upcoming maintenance 72 calendar hours before the event and another one while maintenance is in-progress for a given region.
-
-### Planned maintenance notification
-
-> [!IMPORTANT]
-> Planned maintenance notifications are currently available in preview in all regions **except** West Central US
-
-**Planned maintenance notifications** allow you to receive alerts for an upcoming planned maintenance event to your Azure Database for MySQL. These notifications are integrated with [Service Health's](../service-health/overview.md) planned maintenance and allow you to view all scheduled maintenance for your subscriptions in one place. They also help you scale notifications to the right audiences for different resource groups, as you may have different contacts responsible for different resources. You will receive notification of the upcoming maintenance 72 calendar hours before the event.
-
-We will make every attempt to provide 72 hours' notice for all **planned maintenance notification** events. However, in cases of critical or security patches, notifications might be sent closer to the event or be omitted.
-
-You can either check the planned maintenance notification on Azure portal or configure alerts to receive notification.
-
-### Check planned maintenance notification from Azure portal
-
-1. In the [Azure portal](https://portal.azure.com), select **Service Health**.
-2. Select the **Planned Maintenance** tab.
-3. Select **Subscription**, **Region**, and **Service** for which you want to check the planned maintenance notification.
-
-### To receive planned maintenance notification
-
-1. In the [portal](https://portal.azure.com), select **Service Health**.
-2. In the **Alerts** section, select **Health alerts**.
-3. Select **+ Add service health alert**.
-4. Fill out the required fields.
-5. For **Event type**, select **Planned maintenance** or **Select all**.
-6. In **Action groups**, define how you would like to receive the alert (get an email, trigger a logic app, and so on).
-7. Ensure that **Enable rule upon creation** is set to **Yes**.
-8. Select **Create alert rule** to complete your alert.
-
-For detailed steps on how to create **service health alerts**, refer to [Create activity log alerts on service notifications](../service-health/alerts-activity-log-service-notifications-portal.md).
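-
-If you prefer scripting, a hedged Azure CLI equivalent might look like the following; the alert name, resource group, and action group are assumptions:
-
-```bash
-# Create an activity log alert for Service Health planned maintenance events (assumed names)
-az monitor activity-log alert create \
-  --name mysql-planned-maintenance \
-  --resource-group myresourcegroup \
-  --condition category=ServiceHealth and properties.incidentType=Maintenance \
-  --action-group "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/microsoft.insights/actionGroups/myactiongroup"
-```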
-
-## Can I cancel or postpone planned maintenance?
-
-Maintenance is needed to keep your server secure, stable, and up to date. The planned maintenance event cannot be canceled or postponed. Once the notification is sent to a given Azure region, no patching schedule changes can be made for any individual server in that region. The patch is rolled out for the entire region at once. The Azure Database for MySQL - Single Server service is designed for cloud-native applications that don't require granular control or customization of the service. If you want the ability to schedule maintenance for your servers, consider [Flexible servers](./flexible-server/overview.md).
-
-## Are all the Azure regions patched at the same time?
-
-No. Azure regions are patched during deployment-wise maintenance windows, which generally stretch from 5 PM to 8 AM (local time) the next day in a given Azure region. Geo-paired Azure regions are patched on different days. For high availability and business continuity of database servers, leveraging [cross region read replicas](./concepts-read-replicas.md#cross-region-replication) is recommended.
-
-## Retry logic
-
-A transient error, also known as a transient fault, is an error that will resolve itself. [Transient errors](./concepts-connectivity.md#transient-errors) can occur during maintenance. Most of these events are automatically mitigated by the system in less than 60 seconds. Transient errors should be handled using [retry logic](./concepts-connectivity.md#handling-transient-errors).
--
-## Next steps
-- For any questions or suggestions you might have about working with Azure Database for MySQL, send an email to the Azure Database for MySQL Team at AskAzureDBforMySQL@service.microsoft.com.
-- See [How to set up alerts](howto-alert-on-metric.md) for guidance on creating an alert on a metric.
-- [Troubleshoot connection issues to Azure Database for MySQL - Single Server](howto-troubleshoot-common-connection-issues.md)
-- [Handle transient errors and connect efficiently to Azure Database for MySQL - Single Server](concepts-connectivity.md)
mysql Concepts Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-pricing-tiers.md
- Title: Pricing tiers - Azure Database for MySQL
-description: Learn about the various pricing tiers for Azure Database for MySQL including compute generations, storage types, storage size, vCores, memory, and backup retention periods.
-Previously updated: 02/07/2022
-# Azure Database for MySQL pricing tiers
--
-You can create an Azure Database for MySQL server in one of three different pricing tiers: Basic, General Purpose, and Memory Optimized. The pricing tiers are differentiated by the amount of compute in vCores that can be provisioned, memory per vCore, and the storage technology used to store the data. All resources are provisioned at the MySQL server level. A server can have one or many databases.
-
-| Attribute | **Basic** | **General Purpose** | **Memory Optimized** |
-|:---|:---|:---|:---|
-| Compute generation | Gen 4, Gen 5 | Gen 4, Gen 5 | Gen 5 |
-| vCores | 1, 2 | 2, 4, 8, 16, 32, 64 |2, 4, 8, 16, 32 |
-| Memory per vCore | 2 GB | 5 GB | 10 GB |
-| Storage size | 5 GB to 1 TB | 5 GB to 16 TB | 5 GB to 16 TB |
-| Database backup retention period | 7 to 35 days | 7 to 35 days | 7 to 35 days |
-
-To choose a pricing tier, use the following table as a starting point.
-
-| Pricing tier | Target workloads |
-|:-|:--|
-| Basic | Workloads that require light compute and I/O performance. Examples include servers used for development or testing or small-scale infrequently used applications. |
-| General Purpose | Most business workloads that require balanced compute and memory with scalable I/O throughput. Examples include servers for hosting web and mobile apps and other enterprise applications.|
-| Memory Optimized | High-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency. Examples include servers for processing real-time data and high-performance transactional or analytical apps.|
-
-> [!NOTE]
-> Dynamic scaling to and from the Basic pricing tier is currently not supported. Servers in the Basic tier can't be scaled up to the General Purpose or Memory Optimized tiers.
-
-After you create a General Purpose or Memory Optimized server, the number of vCores, hardware generation, and pricing tier can be changed up or down within seconds. You also can independently adjust the amount of storage up and the backup retention period up or down with no application downtime. You can't change the backup storage type after a server is created. For more information, see the [Scale resources](#scale-resources) section.
-
-## Compute generations and vCores
-
-Compute resources are provided as vCores, which represent the logical CPU of the underlying hardware. China East 1, China North 1, US DoD Central, and US DoD East utilize Gen 4 logical CPUs that are based on Intel E5-2673 v3 (Haswell) 2.4-GHz processors. All other regions utilize Gen 5 logical CPUs that are based on Intel E5-2673 v4 (Broadwell) 2.3-GHz processors.
-
-## Storage
-
-The storage you provision is the amount of storage capacity available to your Azure Database for MySQL server. The storage is used for the database files, temporary files, transaction logs, and the MySQL server logs. The total amount of storage you provision also defines the I/O capacity available to your server.
-
-Azure Database for MySQL - Single Server supports the following backend storage options.
-
-| Storage type | Basic | General purpose v1 | General purpose v2 |
-|:---|:---|:---|:---|
-| Storage size | 5 GB to 1 TB | 5 GB to 4 TB | 5 GB to 16 TB |
-| Storage increment size | 1 GB | 1 GB | 1 GB |
-| IOPS | Variable |3 IOPS/GB<br/>Min 100 IOPS<br/>Max 6000 IOPS | 3 IOPS/GB<br/>Min 100 IOPS<br/>Max 20,000 IOPS |
-
->[!NOTE]
-> Basic storage does not provide an IOPS guarantee. In General Purpose storage, the IOPS scale with the provisioned storage size in a 3:1 ratio.
-
-### Basic storage
-Basic storage is the backend storage supporting Basic pricing tier servers. Basic storage leverages Azure standard storage, where provisioned IOPS are not guaranteed and latency is variable. The Basic tier is best suited for workloads that require light compute, low cost, and light I/O performance, such as development or small-scale, infrequently used applications.
-
-### General purpose storage
-General purpose storage is the backend storage supporting General Purpose and Memory Optimized tier servers. In general purpose storage, the IOPS scale with the provisioned storage size in a 3:1 ratio. There are two generations of general purpose storage, as described below:
-
-#### General purpose storage v1 (supports up to 4 TB)
-General purpose storage v1 is based on legacy storage technology and can support up to 4 TB of storage and 6,000 IOPS per server. General purpose storage v1 is optimized to leverage memory from the compute nodes running the MySQL engine for local caching and backups. The backup process on general purpose storage v1 reads the data and log files in the memory of the compute nodes and copies them to the target backup storage for retention of up to 35 days. As a result, the memory and I/O consumption of storage during backups is relatively higher.
-
-All Azure regions support general purpose storage v1.
-
-For a General Purpose or Memory Optimized server on general purpose storage v1, we recommend that you:
-
-* Plan the compute SKU tier to account for 10-30% excess memory for storage caching and backup buffers.
-* Provision 10% more IOPS than your database workload requires, to account for backup I/Os.
-* Alternatively, migrate to general purpose storage v2 (described below), which supports up to 16 TB of storage, if the underlying storage infrastructure is available in your preferred Azure region (see the region list below).
-
-#### General purpose storage v2 (supports up to 16 TB of storage)
-General purpose storage v2 is based on the latest storage infrastructure, which can support up to 16 TB of storage and 20,000 IOPS. In a subset of Azure regions where the infrastructure is available, all newly provisioned servers land on general purpose storage v2 by default. General purpose storage v2 does not consume any memory from the compute node of MySQL and provides more predictable I/O latencies than general purpose storage v1. Backups on general purpose storage v2 servers are snapshot-based, with no additional I/O overhead. On general purpose storage v2, MySQL server performance is expected to be higher than on general purpose storage v1 for the same provisioned storage and IOPS. There is no additional cost for general purpose storage that supports up to 16 TB of storage. For assistance with migration to 16-TB storage, open a support ticket from the Azure portal.
-
-General purpose storage v2 is supported in the following Azure regions:
-
-| Region | General purpose storage v2 availability |
-|---|---|
-| Australia East | :heavy_check_mark: |
-| Australia South East | :heavy_check_mark: |
-| Brazil South | :heavy_check_mark: |
-| Canada Central | :heavy_check_mark: |
-| Canada East | :heavy_check_mark: |
-| Central US | :heavy_check_mark: |
-| East US | :heavy_check_mark: |
-| East US 2 | :heavy_check_mark: |
-| East Asia | :heavy_check_mark: |
-| Japan East | :heavy_check_mark: |
-| Japan West | :heavy_check_mark: |
-| Korea Central | :heavy_check_mark: |
-| Korea South | :heavy_check_mark: |
-| North Europe | :heavy_check_mark: |
-| North Central US | :heavy_check_mark: |
-| South Central US | :heavy_check_mark: |
-| Southeast Asia | :heavy_check_mark: |
-| UK South | :heavy_check_mark: |
-| UK West | :heavy_check_mark: |
-| West Central US | :heavy_check_mark: |
-| West US | :heavy_check_mark: |
-| West US 2 | :heavy_check_mark: |
-| West Europe | :heavy_check_mark: |
-| Central India* | :heavy_check_mark: |
-| France Central* | :heavy_check_mark: |
-| UAE North* | :heavy_check_mark: |
-| South Africa North* | :heavy_check_mark: |
-
-> [!Note]
-> *Regions where Azure Database for MySQL has general purpose storage v2 in public preview.<br />
-> *In these Azure regions, you have the option to create servers on both general purpose storage v1 and v2. For servers created with general purpose storage v2 in public preview, the following limitations apply: <br />
-> * Geo-redundant backup is not supported.<br />
-> * Replica servers must be in regions that support general purpose storage v2. <br />
-
-
-### How can I determine which storage type my server is running on?
-
-You can find the storage type of your server by going to the **Pricing tier** blade in the Azure portal.
-* If the server is provisioned using the Basic SKU, the storage type is Basic storage.
-* If the server is provisioned using a General Purpose or Memory Optimized SKU, the storage type is General Purpose storage.
-   * If the maximum storage that can be provisioned on your server is up to 4 TB, the storage type is General Purpose storage v1.
-   * If the maximum storage that can be provisioned on your server is up to 16 TB, the storage type is General Purpose storage v2.
-
-### Can I move from general purpose storage v1 to general purpose storage v2? If yes, how? Is there any additional cost?
-Yes. Migration from general purpose storage v1 to v2 is supported if the underlying storage infrastructure is available in the Azure region of the source server. The migration and v2 storage are available at no additional cost.
-
-### Can I grow storage size after server is provisioned?
-You can add additional storage capacity during and after the creation of the server, and allow the system to grow storage automatically based on the storage consumption of your workload.
-
->[!IMPORTANT]
-> Storage can only be scaled up, not down.
-
-### Monitoring IO consumption
-You can monitor your I/O consumption in the Azure portal or by using Azure CLI commands; a CLI sketch follows. The relevant metrics to monitor are [storage limit, storage percentage, storage used, and IO percent](concepts-monitoring.md). Note that the monitoring metrics for a MySQL server on general purpose storage v1 report the memory and I/O consumed by the MySQL engine, but may not capture the memory and I/O consumption of the storage layer, which is a limitation.
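-
-A sketch of such a query with the Azure CLI; the subscription, resource group, and server names are assumptions:
-
-```bash
-# List recent storage and I/O metrics for a server (assumed names)
-az monitor metrics list \
-  --resource "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.DBforMySQL/servers/mydemoserver" \
-  --metric storage_percent io_consumption_percent \
-  --interval PT1H
-```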
-
-### Reaching the storage limit
-
-Servers with less than or equal to 100 GB provisioned storage are marked read-only if the free storage is less than 5% of the provisioned storage size. Servers with more than 100 GB provisioned storage are marked read only when the free storage is less than 5 GB.
-
-For example, if you have provisioned 110 GB of storage, and the actual utilization goes over 105 GB, the server is marked read-only. Alternatively, if you have provisioned 5 GB of storage, the server is marked read-only when the free storage reaches less than 256 MB.
-
-While the service attempts to make the server read-only, all new write transaction requests are blocked and existing active transactions will continue to execute. When the server is set to read-only, all subsequent write operations and transaction commits fail. Read queries will continue to work uninterrupted. After you increase the provisioned storage, the server will be ready to accept write transactions again.
-
-We recommend that you turn on storage auto-grow or set up an alert to notify you when your server storage is approaching the threshold, so you can avoid getting into the read-only state. For more information, see the documentation on [how to set up an alert](howto-alert-on-metric.md).
-
-### Storage auto-grow
-
-Storage auto-grow prevents your server from running out of storage and becoming read-only. If storage auto grow is enabled, the storage automatically grows without impacting the workload. For servers with less than or equal to 100 GB provisioned storage, the provisioned storage size is increased by 5 GB when the free storage is below 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. Maximum storage limits as specified above apply.
-
-For example, if you have provisioned 1,000 GB of storage and the actual utilization goes over 990 GB, the server storage size is increased to 1,050 GB. Alternatively, if you have provisioned 10 GB of storage, the storage size is increased to 15 GB when less than 1 GB of storage is free.
-
-Remember that storage can only be scaled up, not down.
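-
-A hedged example of enabling auto-grow with the Azure CLI; the resource group and server names are assumptions:
-
-```bash
-# Enable storage auto-grow on an existing server (assumed names)
-az mysql server update \
-  --resource-group myresourcegroup \
-  --name mydemoserver \
-  --auto-grow Enabled
-```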
-
-## Backup storage
-
-Azure Database for MySQL provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any backup storage you use in excess of this amount is charged in GB per month. For example, if you provision a server with 250 GB of storage, you'll have 250 GB of additional storage available for server backups at no charge. Storage for backups in excess of the 250 GB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/mysql/). To understand the factors that influence backup storage usage, and how to monitor and control backup storage cost, refer to the [backup documentation](concepts-backup.md).
-
-## Scale resources
-
-After you create your server, you can independently change the vCores, the hardware generation, the pricing tier (except to and from Basic), the amount of storage, and the backup retention period. You can't change the backup storage type after a server is created. The number of vCores can be scaled up or down. The backup retention period can be scaled up or down from 7 to 35 days. The storage size can only be increased. Scaling of the resources can be done either through the portal or Azure CLI. For an example of scaling by using Azure CLI, see [Monitor and scale an Azure Database for MySQL server by using Azure CLI](scripts/sample-scale-server.md).
-
-When you change the number of vCores, the hardware generation, or the pricing tier, a copy of the original server is created with the new compute allocation. After the new server is up and running, connections are switched over to the new server. While the system switches over to the new server, no new connections can be established, and all uncommitted transactions are rolled back. This downtime during scaling is typically around 60-120 seconds. It depends on database recovery time, which can cause the database to take longer to come online if there is heavy transactional activity on the server at the time of the scaling operation. To avoid a longer restart time, we recommend performing scaling operations during periods of low transactional activity on the server.
-
-Scaling storage and changing the backup retention period are true online operations. There is no downtime, and your application isn't affected. As IOPS scale with the size of the provisioned storage, you can increase the IOPS available to your server by scaling up storage.
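-
-For illustration, scaling compute and storage with the Azure CLI might look like the following; the SKU, size, and resource names are assumptions:
-
-```bash
-# Scale to 8 vCores on General Purpose Gen 5 and grow storage to 512 GB (524288 MB) (assumed names and values)
-az mysql server update \
-  --resource-group myresourcegroup \
-  --name mydemoserver \
-  --sku-name GP_Gen5_8 \
-  --storage-size 524288
-```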
-
-## Pricing
-
-For the most up-to-date pricing information, see the service [pricing page](https://azure.microsoft.com/pricing/details/mysql/). To see the cost for the configuration you want, the [Azure portal](https://portal.azure.com/#create/Microsoft.MySQLServer) shows the monthly cost on the **Pricing tier** tab based on the options you select. If you don't have an Azure subscription, you can use the Azure pricing calculator to get an estimated price. On the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) website, select **Add items**, expand the **Databases** category, and choose **Azure Database for MySQL** to customize the options.
-
-## Next steps
-- Learn how to [create a MySQL server in the portal](howto-create-manage-server-portal.md).
-- Learn about [service limits](concepts-limits.md).
-- Learn how to [scale out with read replicas](howto-read-replicas-portal.md).
mysql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-query-performance-insight.md
- Title: Query Performance Insight - Azure Database for MySQL
-description: This article describes the Query Performance Insight feature in Azure Database for MySQL
-Previously updated: 01/12/2022
-# Query Performance Insight in Azure Database for MySQL
--
-**Applies to:** Azure Database for MySQL 5.7, 8.0
-
-Query Performance Insight helps you to quickly identify what your longest running queries are, how they change over time, and what waits are affecting them.
-
-## Common scenarios
-
-### Long running queries
-- Identifying longest running queries in the past X hours
-- Identifying top N queries that are waiting on resources
-
-### Wait statistics
-- Understanding the nature of waits for a query
-- Understanding trends for resource waits and where resource contention exists
-
-## Prerequisites
-
-For Query Performance Insight to function, data must exist in the [Query Store](concepts-query-store.md).
-
-## Viewing performance insights
-
-The [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal will surface visualizations on key information from Query Store.
-
-In the portal page of your Azure Database for MySQL server, select **Query Performance Insight** under the **Intelligent Performance** section of the menu bar.
-
-### Long running queries
-
-The **Long running queries** tab shows the top 5 Query IDs by average duration per execution, aggregated in 15-minute intervals. You can view more Query IDs by selecting from the **Number of Queries** drop down. The chart colors may change for a specific Query ID when you do this.
-
-> [!Note]
-> Displaying the Query Text is no longer supported and will show as empty. The query text is removed to avoid unauthorized access to the query text or underlying schema which can pose a security risk.
-
-The recommended steps to view the query text are as follows:
-1. Identify the query_id of the top queries from the Query Performance Insight blade in the Azure portal.
-1. Log in to your Azure Database for MySQL server from MySQL Workbench, the mysql.exe client, or your preferred query tool, and execute the following queries.
-
-```sql
-    SELECT * FROM mysql.query_store WHERE query_id = '<insert query id from Query Performance Insight blade in Azure portal>'; -- for queries in Query Store
-    SELECT * FROM mysql.query_store_wait_stats WHERE query_id = '<insert query id from Query Performance Insight blade in Azure portal>'; -- for wait statistics
-```
-
-You can click and drag in the chart to narrow down to a specific time window. Alternatively, use the zoom in and out icons to view a smaller or larger time period respectively.
--
-### Wait statistics
-
-> [!NOTE]
-> Wait statistics are meant for troubleshooting query performance issues. It is recommended to be turned on only for troubleshooting purposes. <br>If you receive the error message in the Azure portal "*The issue encountered for 'Microsoft.DBforMySQL'; cannot fulfill the request. If this issue continues or is unexpected, please contact support with this information.*" while viewing wait statistics, use a smaller time period.
-
-Wait statistics provides a view of the wait events that occur during the execution of a specific query. Learn more about the wait event types in the [MySQL engine documentation](https://go.microsoft.com/fwlink/?linkid=2098206).
-
-Select the **Wait Statistics** tab to view the corresponding visualizations on waits in the server.
-
-Queries displayed in the wait statistics view are grouped by the queries that exhibit the largest waits during the specified time interval.
-
-> [!Note]
-> Displaying the Query Text is no longer supported and will show as empty. The query text is removed to avoid unauthorized access to the query text or underlying schema which can pose a security risk.
-
-The recommended steps to view the query text are as follows:
-1. Identify the query_id of the top queries from the Query Performance Insight blade in the Azure portal.
-1. Log in to your Azure Database for MySQL server from MySQL Workbench, the mysql.exe client, or your preferred query tool, and execute the following queries.
-
-```sql
-    SELECT * FROM mysql.query_store WHERE query_id = '<insert query id from Query Performance Insight blade in Azure portal>'; -- for queries in Query Store
-    SELECT * FROM mysql.query_store_wait_stats WHERE query_id = '<insert query id from Query Performance Insight blade in Azure portal>'; -- for wait statistics
-```
--
-## Next steps
--- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for MySQL.
mysql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-query-store.md
-
Title: Query Store - Azure Database for MySQL
-description: Learn about the Query Store feature in Azure Database for MySQL to help you track performance over time.
-Previously updated: 5/12/2020
-# Monitor Azure Database for MySQL performance with Query Store
--
-**Applies to:** Azure Database for MySQL 5.7, 8.0
-
-The Query Store feature in Azure Database for MySQL provides a way to track query performance over time. Query Store simplifies performance troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Query Store automatically captures a history of queries and runtime statistics, and it retains them for your review. It separates data by time windows so that you can see database usage patterns. Data for all users, databases, and queries is stored in the **mysql** schema database in the Azure Database for MySQL instance.
-
-## Common scenarios for using Query Store
-
-Query store can be used in a number of scenarios, including the following:
-- Detecting regressed queries
-- Determining the number of times a query was executed in a given time window
-- Comparing the average execution time of a query across time windows to see large deltas
-
-## Enabling Query Store
-
-Query Store is an opt-in feature, so it isn't active by default on a server. The query store is enabled or disabled globally for all the databases on a given server and cannot be turned on or off per database.
-
-### Enable Query Store using the Azure portal
-
-1. Sign in to the Azure portal and select your Azure Database for MySQL server.
-1. Select **Server Parameters** in the **Settings** section of the menu.
-1. Search for the query_store_capture_mode parameter.
-1. Set the value to ALL and **Save**.
-
-To enable wait statistics in your Query Store:
-
-1. Search for the query_store_wait_sampling_capture_mode parameter.
-1. Set the value to ALL and **Save**.
-
-Allow up to 20 minutes for the first batch of data to persist in the mysql database.
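-
-You can also set these server parameters with the Azure CLI, and then confirm that data is flowing; the resource group and server names are assumptions:
-
-```bash
-# Enable Query Store and wait sampling via server parameters (assumed names)
-az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver --name query_store_capture_mode --value ALL
-az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver --name query_store_wait_sampling_capture_mode --value ALL
-
-# After about 20 minutes, check that rows are appearing in the query store
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p -e "SELECT COUNT(*) FROM mysql.query_store;"
-```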
-
-## Information in Query Store
-
-Query Store has two stores:
-- A runtime statistics store for persisting the query execution statistics information.
-- A wait statistics store for persisting wait statistics information.
-
-To minimize space usage, the runtime execution statistics in the runtime statistics store are aggregated over a fixed, configurable time window. The information in these stores is visible by querying the query store views.
-
-The following query returns information about queries in Query Store:
-
-```sql
-SELECT * FROM mysql.query_store;
-```
-
-Or this query for wait statistics:
-
-```sql
-SELECT * FROM mysql.query_store_wait_stats;
-```
-
-## Finding wait queries
-
-> [!NOTE]
-> Wait statistics should not be enabled during peak workload hours or be turned on indefinitely for sensitive workloads. <br>For workloads running with high CPU utilization or on servers configured with fewer vCores, use caution when enabling wait statistics, and avoid leaving them turned on indefinitely.
-
-Wait event types combine different wait events into buckets by similarity. Query Store provides the wait event type, specific wait event name, and the query in question. Being able to correlate this wait information with the query runtime statistics means you can gain a deeper understanding of what contributes to query performance characteristics.
-
-Here are some examples of how you can gain more insights into your workload using the wait statistics in Query Store:
-
-| **Observation** | **Action** |
-|---|---|
-|High Lock waits | Check the query texts for the affected queries and identify the target entities. Look in Query Store for other queries modifying the same entity that are executed frequently and/or have a high duration. After identifying these queries, consider changing the application logic to improve concurrency, or use a less restrictive isolation level. |
-|High Buffer IO waits | Find the queries with a high number of physical reads in Query Store. If they match the queries with high IO waits, consider introducing an index on the underlying entity, to do seeks instead of scans. This would minimize the IO overhead of the queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations for this server that would optimize the queries. |
-|High Memory waits | Find the top memory consuming queries in Query Store. These queries are probably delaying further progress of the affected queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations that would optimize these queries. |
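-
-As a rough starting point for the observations above, you can see which query IDs accumulate the most wait samples. The connection values are placeholders; only the query_id column documented in this article is assumed:
-
-```bash
-# Find the query IDs that appear most often in the wait statistics store
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
-  -e "SELECT query_id, COUNT(*) AS wait_rows FROM mysql.query_store_wait_stats GROUP BY query_id ORDER BY wait_rows DESC LIMIT 10;"
-```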
-
-## Configuration options
-
-When Query Store is enabled it saves data in 15-minute aggregation windows, up to 500 distinct queries per window.
-
-The following options are available for configuring Query Store parameters.
-
-| **Parameter** | **Description** | **Default** | **Range** |
-|||||
-| query_store_capture_mode | Turns the Query Store feature ON or OFF. Note: If performance_schema is OFF, turning on query_store_capture_mode also turns on performance_schema and the subset of performance schema instruments required for this feature. | ALL | NONE, ALL |
-| query_store_capture_interval | The Query Store capture interval in minutes. Specifies the interval over which query metrics are aggregated. | 15 | 5 - 60 |
-| query_store_capture_utility_queries | Determines whether the utility queries executing in the system are captured. | NO | YES, NO |
-| query_store_retention_period_in_days | Time window in days to retain the data in the query store. | 7 | 1 - 30 |
-
-The following options apply specifically to wait statistics.
-
-| **Parameter** | **Description** | **Default** | **Range** |
-|||||
-| query_store_wait_sampling_capture_mode | Allows turning ON / OFF the wait statistics. | NONE | NONE, ALL |
-| query_store_wait_sampling_frequency | Alters frequency of wait-sampling in seconds. 5 to 300 seconds. | 30 | 5-300 |
-
-> [!NOTE]
-> Currently, **query_store_capture_mode** supersedes this configuration, meaning both **query_store_capture_mode** and **query_store_wait_sampling_capture_mode** have to be set to ALL for wait statistics to work. If **query_store_capture_mode** is turned off, then wait statistics are turned off as well, since wait statistics rely on the performance_schema being enabled and on the query_text captured by Query Store.
-
-Use the [Azure portal](howto-server-parameters.md) or [Azure CLI](howto-configure-server-parameters-using-cli.md) to get or set a different value for a parameter.
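-
-For example, here's a sketch that reads the current capture interval and then widens it to 30 minutes; the resource group and server names are placeholders:
-
-```bash
-# Show the current value of the capture interval
-az mysql server configuration show --resource-group myresourcegroup \
-    --server-name mydemoserver --name query_store_capture_interval
-
-# Aggregate query metrics over 30-minute windows instead of the default 15
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name query_store_capture_interval --value 30
-```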
-
-## Views and functions
-
-View and manage Query Store using the following views and functions. Anyone in the [select privilege public role](howto-create-users.md) can use these views to see the data in Query Store. These views are only available in the **mysql** database.
-
-Queries are normalized by looking at their structure after removing literals and constants. If two queries are identical except for literal values, they will have the same hash.
-
-### mysql.query_store
-
-This view returns all the data in Query Store. There is one row for each distinct database ID, user ID, and query ID.
-
-| **Name** | **Data Type** | **IS_NULLABLE** | **Description** |
-|||||
-| `schema_name`| varchar(64) | NO | Name of the schema |
-| `query_id`| bigint(20) | NO| Unique ID generated for the specific query. If the same query executes in a different schema, a new ID is generated |
-| `timestamp_id` | timestamp| NO| Timestamp at which the query executed. This is based on the query_store_capture_interval configuration|
-| `query_digest_text`| longtext| NO| The normalized query text after removing all the literals|
-| `query_sample_text` | longtext| NO| First appearance of the actual query with literals|
-| `query_digest_truncated` | bit| YES| Whether the query text has been truncated. Value will be Yes if the query is longer than 1 KB|
-| `execution_count` | bigint(20)| NO| The number of times the query got executed for this timestamp ID / during the configured interval period|
-| `warning_count` | bigint(20)| NO| Number of warnings this query generated during the interval|
-| `error_count` | bigint(20)| NO| Number of errors this query generated during the interval|
-| `sum_timer_wait` | double| YES| Total execution time of this query during the interval|
-| `avg_timer_wait` | double| YES| Average execution time for this query during the interval|
-| `min_timer_wait` | double| YES| Minimum execution time for this query|
-| `max_timer_wait` | double| YES| Maximum execution time|
-| `sum_lock_time` | bigint(20)| NO| Total amount of time spent for all the locks for this query execution during this time window|
-| `sum_rows_affected` | bigint(20)| NO| Number of rows affected|
-| `sum_rows_sent` | bigint(20)| NO| Number of rows sent to client|
-| `sum_rows_examined` | bigint(20)| NO| Number of rows examined|
-| `sum_select_full_join` | bigint(20)| NO| Number of full joins|
-| `sum_select_scan` | bigint(20)| NO| Number of select scans |
-| `sum_sort_rows` | bigint(20)| NO| Number of rows sorted|
-| `sum_no_index_used` | bigint(20)| NO| Number of times when the query did not use any indexes|
-| `sum_no_good_index_used` | bigint(20)| NO| Number of times when the query execution engine did not use any good indexes|
-| `sum_created_tmp_tables` | bigint(20)| NO| Total number of temp tables created|
-| `sum_created_tmp_disk_tables` | bigint(20)| NO| Total number of temp tables created in disk (generates I/O)|
-| `first_seen` | timestamp| NO| The first occurrence (UTC) of the query during the aggregation window|
-| `last_seen` | timestamp| NO| The last occurrence (UTC) of the query during this aggregation window|
-
-### mysql.query_store_wait_stats
-
-This view returns wait events data in Query Store. There is one row for each distinct database ID, user ID, query ID, and event.
-
-| **Name**| **Data Type** | **IS_NULLABLE** | **Description** |
-|||||
-| `interval_start` | timestamp | NO| Start of the interval (15-minute increment)|
-| `interval_end` | timestamp | NO| End of the interval (15-minute increment)|
-| `query_id` | bigint(20) | NO| Generated unique ID on the normalized query (from query store)|
-| `query_digest_id` | varchar(32) | NO| The digest (hash string) computed on the normalized query (from Query Store) |
-| `query_digest_text` | longtext | NO| The normalized query text after removing all the literals (from Query Store) |
-| `event_type` | varchar(32) | NO| Category of the wait event |
-| `event_name` | varchar(128) | NO| Name of the wait event |
-| `count_star` | bigint(20) | NO| Number of wait events sampled during the interval for the query |
-| `sum_timer_wait_ms` | double | NO| Total wait time (in milliseconds) of this query during the interval |
-
-### Functions
-
-| **Name**| **Description** |
-|||
-| `mysql.az_purge_querystore_data(TIMESTAMP)` | Purges all query store data before the given time stamp |
-| `mysql.az_procedure_purge_querystore_event(TIMESTAMP)` | Purges all wait event data before the given time stamp |
-| `mysql.az_procedure_purge_recommendation(TIMESTAMP)` | Purges recommendations whose expiration is before the given time stamp |
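-
-As an illustration, you can call the purge procedure from the mysql client. This sketch assumes placeholder host and admin names and an example cutoff of June 1, 2021 (UTC):
-
-```bash
-# Purge all Query Store data recorded before the given timestamp
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
-    -e "CALL mysql.az_purge_querystore_data('2021-06-01 00:00:00');"
-```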
-
-## Limitations and known issues
-
-- If a MySQL server has the parameter `read_only` on, Query Store cannot capture data.
-- Query Store functionality can be interrupted if it encounters long Unicode queries (\>= 6000 bytes).
-- The retention period for wait statistics is 24 hours.
-- Wait statistics uses sampling to capture a fraction of events. The frequency can be modified using the parameter `query_store_wait_sampling_frequency`.
-
-## Next steps
--- Learn more about [Query Performance Insights](concepts-query-performance-insight.md)
mysql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-read-replicas.md
- Title: Read replicas - Azure Database for MySQL
-description: 'Learn about read replicas in Azure Database for MySQL: choosing regions, creating replicas, connecting to replicas, monitoring replication, and stopping replication.'
-Previously updated: 06/17/2021
-# Read replicas in Azure Database for MySQL
--
-The read replica feature allows you to replicate data from an Azure Database for MySQL server to a read-only server. You can replicate from the source server to up to five replicas. Replicas are updated asynchronously using the MySQL engine's native binary log (binlog) file position-based replication technology. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
-
-Replicas are new servers that you manage similarly to regular Azure Database for MySQL servers. For each read replica, you're billed for the provisioned compute in vCores and storage in GB/month.
-
-To learn more about MySQL replication features and issues, see the [MySQL replication documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html).
-
-> [!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
->
-
-## When to use a read replica
-
-The read replica feature helps to improve the performance and scale of read-intensive workloads. Read workloads can be isolated to the replicas, while write workloads can be directed to the source.
-
-A common scenario is to have BI and analytical workloads use the read replica as the data source for reporting.
-
-Because replicas are read-only, they don't directly reduce write-capacity burdens on the source. This feature isn't targeted at write-intensive workloads.
-
-The read replica feature uses MySQL asynchronous replication. The feature isn't meant for synchronous replication scenarios. There will be a measurable delay between the source and the replica. The data on the replica eventually becomes consistent with the data on the source. Use this feature for workloads that can accommodate this delay.
-
-## Cross-region replication
-
-You can create a read replica in a different region from your source server. Cross-region replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users.
-
-You can have a source server in any [Azure Database for MySQL region](https://azure.microsoft.com/global-infrastructure/services/?products=mysql). A source server can have a replica in its [paired region](./../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) or the universal replica regions. The following sections show which replica regions are available depending on your source region.
-
-### Universal replica regions
-
-You can create a read replica in any of the following regions, regardless of where your source server is located. The supported universal replica regions include:
-
-| Region | Replica availability |
-| | |
-| Australia East | :heavy_check_mark: |
-| Australia South East | :heavy_check_mark: |
-| Brazil South | :heavy_check_mark: |
-| Canada Central | :heavy_check_mark: |
-| Canada East | :heavy_check_mark: |
-| Central US | :heavy_check_mark: |
-| East US | :heavy_check_mark: |
-| East US 2 | :heavy_check_mark: |
-| East Asia | :heavy_check_mark: |
-| Japan East | :heavy_check_mark: |
-| Japan West | :heavy_check_mark: |
-| Korea Central | :heavy_check_mark: |
-| Korea South | :heavy_check_mark: |
-| North Europe | :heavy_check_mark: |
-| North Central US | :heavy_check_mark: |
-| South Central US | :heavy_check_mark: |
-| Southeast Asia | :heavy_check_mark: |
-| Switzerland North | :heavy_check_mark: |
-| UK South | :heavy_check_mark: |
-| UK West | :heavy_check_mark: |
-| West Central US | :heavy_check_mark: |
-| West US | :heavy_check_mark: |
-| West US 2 | :heavy_check_mark: |
-| West Europe | :heavy_check_mark: |
-| Central India* | :heavy_check_mark: |
-| France Central* | :heavy_check_mark: |
-| UAE North* | :heavy_check_mark: |
-| South Africa North* | :heavy_check_mark: |
-
-> [!Note]
-> *Regions where Azure Database for MySQL has General purpose storage v2 in public preview. <br />
-> *In these Azure regions, you have the option to create servers on both General purpose storage v1 and v2. For servers created on General purpose storage v2 in public preview, you can create replica servers only in the Azure regions that support General purpose storage v2.
-
-### Paired regions
-
-In addition to the universal replica regions, you can create a read replica in the Azure paired region of your source server. If you don't know your region's pair, you can learn more from the [Azure Paired Regions article](../availability-zones/cross-region-replication-azure.md).
-
-If you're using cross-region replicas for disaster recovery planning, we recommend you create the replica in the paired region instead of one of the other regions. Paired regions avoid simultaneous updates and prioritize physical isolation and data residency.
-
-However, there are limitations to consider:
-
-* Regional availability: Azure Database for MySQL is available in France Central, UAE North, and Germany Central. However, their paired regions aren't available.
-
-* Uni-directional pairs: Some Azure regions are paired in one direction only. These regions include West India, Brazil South, and US Gov Virginia.
- This means that a source server in West India can create a replica in South India. However, a source server in South India can't create a replica in West India. This is because West India's secondary region is South India, but South India's secondary region isn't West India.
-
-## Create a replica
-
-> [!IMPORTANT]
-> * The read replica feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers.
-> * If your source server has no existing replica servers, it might need a restart to prepare itself for replication, depending on the storage used (v1/v2). Plan for this restart and perform the operation during off-peak hours. See [Source server restart](./concepts-read-replicas.md#source-server-restart) for more details.
--
-When you start the create replica workflow, a blank Azure Database for MySQL server is created. The new server is filled with the data that was on the source server. The creation time depends on the amount of data on the source and the time since the last weekly full backup. The time can range from a few minutes to several hours. The replica server is always created in the same resource group and same subscription as the source server. If you want to create a replica server to a different resource group or different subscription, you can [move the replica server](../azure-resource-manager/management/move-resource-group-and-subscription.md) after creation.
-
-Every replica is enabled for storage [auto-grow](concepts-pricing-tiers.md#storage-auto-grow). The auto-grow feature allows the replica to keep up with the data replicated to it, and prevent an interruption in replication caused by out-of-storage errors.
-
-Learn how to [create a read replica in the Azure portal](howto-read-replicas-portal.md).
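-
-You can also create a replica with the Azure CLI. A minimal sketch, assuming placeholder names for the resource group, source server, and replica:
-
-```bash
-# Create a read replica of mydemoserver in the same region and resource group
-az mysql server replica create --name mydemoreplica \
-    --resource-group myresourcegroup --source-server mydemoserver
-```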
-
-## Connect to a replica
-
-At creation, a replica inherits the firewall rules of the source server. Afterwards, these rules are independent from the source server.
-
-The replica inherits the admin account from the source server. All user accounts on the source server are replicated to the read replicas. You can only connect to a read replica by using the user accounts that are available on the source server.
-
-You can connect to the replica by using its hostname and a valid user account, as you would on a regular Azure Database for MySQL server. For a server named **myreplica** with the admin username **myadmin**, you can connect to the replica by using the mysql CLI:
-
-```bash
-mysql -h myreplica.mysql.database.azure.com -u myadmin@myreplica -p
-```
-
-At the prompt, enter the password for the user account.
-
-## Monitor replication
-
-Azure Database for MySQL provides the **Replication lag in seconds** metric in Azure Monitor. This metric is available for replicas only. This metric is calculated using the `seconds_behind_master` metric available in MySQL's `SHOW SLAVE STATUS` command. Set an alert to inform you when the replication lag reaches a value that isn't acceptable for your workload.
-
-If you see increased replication lag, refer to [troubleshooting replication latency](howto-troubleshoot-replication-latency.md) to troubleshoot and understand possible causes.
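-
-For example, here's a sketch that raises an alert when the average lag exceeds five minutes; the metric name (`seconds_behind_master`), resource group, and replica name are assumptions for illustration:
-
-```bash
-# Resolve the replica's resource ID, then alert when average lag exceeds 300 seconds
-replica_id=$(az mysql server show --resource-group myresourcegroup \
-    --name myreplica --query id --output tsv)
-az monitor metrics alert create --name replica-lag-alert \
-    --resource-group myresourcegroup --scopes "$replica_id" \
-    --condition "avg seconds_behind_master > 300" \
-    --description "Replication lag above 5 minutes"
-```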
-
-## Stop replication
-
-You can stop replication between a source and a replica. After replication is stopped between a source server and a read replica, the replica becomes a standalone server. The data in the standalone server is the data that was available on the replica at the time the stop replication command was started. The standalone server doesn't catch up with the source server.
-
-When you choose to stop replication to a replica, it loses all links to its previous source and other replicas. There's no automated failover between a source and its replica.
-
-> [!IMPORTANT]
-> The standalone server can't be made into a replica again.
-> Before you stop replication on a read replica, ensure the replica has all the data that you require.
-
-Learn how to [stop replication to a replica](howto-read-replicas-portal.md).
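-
-With the Azure CLI, stopping replication is a single command. A sketch, assuming placeholder names:
-
-```bash
-# Promote mydemoreplica to a standalone server (this can't be undone)
-az mysql server replica stop --name mydemoreplica --resource-group myresourcegroup
-```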
-
-## Failover
-
-There's no automated failover between source and replica servers.
-
-Since replication is asynchronous, there's lag between the source and the replica. The amount of lag can be influenced by many factors, like how heavy the workload running on the source server is and the latency between data centers. In most cases, replica lag ranges from a few seconds to a couple of minutes. You can track your actual replication lag using the metric *Replica Lag*, which is available for each replica. This metric shows the time since the last replayed transaction. We recommend that you identify what your average lag is by observing your replica lag over a period of time. You can set an alert on replica lag, so that if it goes outside your expected range, you can take action.
-
-> [!Tip]
-> If you failover to the replica, the lag at the time you delink the replica from the source will indicate how much data is lost.
-
-After you've decided you want to failover to a replica:
-
-1. Stop replication to the replica<br/>
- This step is necessary to make the replica server able to accept writes. As part of this process, the replica server will be delinked from the source. After you initiate stop replication, the backend process typically takes about 2 minutes to complete. See the [stop replication](#stop-replication) section of this article to understand the implications of this action.
-
-2. Point your application to the (former) replica<br/>
- Each server has a unique connection string. Update your application to point to the (former) replica instead of the source.
-
-After your application is successfully processing reads and writes, you've completed the failover. The amount of downtime your application experiences will depend on when you detect an issue and complete steps 1 and 2 listed previously.
-
-## Global transaction identifier (GTID)
-
-Global transaction identifier (GTID) is a unique identifier created with each committed transaction on a source server and is OFF by default in Azure Database for MySQL. GTID is supported on versions 5.7 and 8.0, and only on servers that support storage up to 16 TB (General purpose storage v2). To learn more about GTID and how it's used in replication, refer to MySQL's [replication with GTID](https://dev.mysql.com/doc/refman/5.7/en/replication-gtids.html) documentation.
-
-MySQL supports two types of transactions: GTID transactions (identified with a GTID) and anonymous transactions (which don't have a GTID allocated).
-
-The following server parameters are available for configuring GTID:
-
-|**Server parameter**|**Description**|**Default Value**|**Values**|
-|--|--|--|--|
-|`gtid_mode`|Indicates if GTIDs are used to identify transactions. Changes between modes can only be done one step at a time in ascending order (ex. `OFF` -> `OFF_PERMISSIVE` -> `ON_PERMISSIVE` -> `ON`)|`OFF`|`OFF`: Both new and replication transactions must be anonymous <br> `OFF_PERMISSIVE`: New transactions are anonymous. Replicated transactions can either be anonymous or GTID transactions. <br> `ON_PERMISSIVE`: New transactions are GTID transactions. Replicated transactions can either be anonymous or GTID transactions. <br> `ON`: Both new and replicated transactions must be GTID transactions.|
-|`enforce_gtid_consistency`|Enforces GTID consistency by allowing execution of only those statements that can be logged in a transactionally safe manner. This value must be set to `ON` before enabling GTID replication. |`OFF`|`OFF`: All transactions are allowed to violate GTID consistency. <br> `ON`: No transaction is allowed to violate GTID consistency. <br> `WARN`: All transactions are allowed to violate GTID consistency, but a warning is generated. |
-
-> [!NOTE]
-> * After GTID is enabled, you cannot turn it back off. If you need to turn GTID OFF, please contact support.
->
-> * Changing gtid_mode from one value to another can only be done one step at a time, in ascending order of modes. For example, if gtid_mode is currently set to OFF_PERMISSIVE, you can change it to ON_PERMISSIVE but not to ON.
->
-> * To keep replication consistent, you cannot update gtid_mode on a master/replica server.
->
-> * We recommend setting enforce_gtid_consistency to ON before you set gtid_mode=ON.
--
-To enable GTID and configure the consistency behavior, update the `gtid_mode` and `enforce_gtid_consistency` server parameters using the [Azure portal](howto-server-parameters.md), [Azure CLI](howto-configure-server-parameters-using-cli.md), or [PowerShell](howto-configure-server-parameters-using-powershell.md).
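-
-Because `gtid_mode` can only move one step at a time, a scripted rollout walks through the modes in ascending order. A minimal Azure CLI sketch, assuming placeholder names:
-
-```bash
-# enforce_gtid_consistency must be ON before gtid_mode can reach ON
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name enforce_gtid_consistency --value ON
-
-# Step gtid_mode up one mode at a time, in ascending order
-for mode in OFF_PERMISSIVE ON_PERMISSIVE ON; do
-    az mysql server configuration set --resource-group myresourcegroup \
-        --server-name mydemoserver --name gtid_mode --value "$mode"
-done
-```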
-
-If GTID is enabled on a source server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID replication. In order to make sure that the replication is consistent, `gtid_mode` cannot be changed once the master or replica server(s) is created with GTID enabled.
-
-## Considerations and limitations
-
-### Pricing tiers
-
-Read replicas are currently only available in the General Purpose and Memory Optimized pricing tiers.
-
-> [!NOTE]
-> The cost of running the replica server is based on the region where the replica server is running.
-
-### Source server restart
-
-On servers with General purpose storage v1, the `log_bin` parameter is OFF by default. The value is turned ON when you create the first read replica. If a source server has no existing read replicas, it will first restart to prepare itself for replication. Plan for this restart and perform the operation during off-peak hours.
-
-On source servers with General purpose storage v2, the `log_bin` parameter is ON by default, and adding a read replica doesn't require a restart.
-
-### New replicas
-
-A read replica is created as a new Azure Database for MySQL server. An existing server can't be made into a replica. You can't create a replica of another read replica.
-
-### Replica configuration
-
-A replica is created by using the same server configuration as the source. After a replica is created, several settings can be changed independently from the source server: compute generation, vCores, storage, and backup retention period. The pricing tier can also be changed independently, except to or from the Basic tier.
-
-> [!IMPORTANT]
-> Before a source server configuration is updated to new values, update the replica configuration to equal or greater values. This action ensures the replica can keep up with any changes made to the source.
-
-Firewall rules and parameter settings are inherited from the source server to the replica when the replica is created. Afterwards, the replica's rules are independent.
-
-### Stopped replicas
-
-If you stop replication between a source server and a read replica, the stopped replica becomes a standalone server that accepts both reads and writes. The standalone server can't be made into a replica again.
-
-### Deleted source and standalone servers
-
-When a source server is deleted, replication is stopped to all read replicas. These replicas automatically become standalone servers and can accept both reads and writes. The source server itself is deleted.
-
-### User accounts
-
-Users on the source server are replicated to the read replicas. You can only connect to a read replica using the user accounts available on the source server.
-
-### Server parameters
-
-To prevent data from becoming out of sync and to avoid potential data loss or corruption, some server parameters are locked from being updated when using read replicas.
-
-The following server parameters are locked on both the source and replica servers:
-
-* [`innodb_file_per_table`](https://dev.mysql.com/doc/refman/8.0/en/innodb-file-per-table-tablespaces.html)
-* [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators)
-
-The [`event_scheduler`](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_event_scheduler) parameter is locked on the replica servers.
-
-To update one of the above parameters on the source server, delete replica servers, update the parameter value on the source, and recreate replicas.
-
-### GTID
-
-GTID is supported on:
-
-* MySQL versions 5.7 and 8.0.
-* Servers that support storage up to 16 TB. Refer to the [pricing tier](concepts-pricing-tiers.md#storage) article for the full list of regions that support 16 TB storage.
-
-GTID is OFF by default. After GTID is enabled, you can't turn it back off. If you need to turn GTID OFF, contact support.
-
-If GTID is enabled on a source server, newly created replicas will also have GTID enabled and use GTID replication. To keep replication consistent, you can't update `gtid_mode` on the source or replica server(s).
-
-### Other
-
-* Creating a replica of a replica isn't supported.
-* In-memory tables may cause replicas to become out of sync. This is a limitation of the MySQL replication technology. See the [MySQL reference documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features-memory.html) for more information.
-* Ensure the source server tables have primary keys. Lack of primary keys may result in replication latency between the source and replicas.
-* Review the full list of MySQL replication limitations in the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html).
-
-## Next steps
-
-* Learn how to [create and manage read replicas using the Azure portal](howto-read-replicas-portal.md)
-* Learn how to [create and manage read replicas using the Azure CLI and REST API](howto-read-replicas-cli.md)
mysql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-security.md
- Title: Security - Azure Database for MySQL
-description: An overview of the security features in Azure Database for MySQL.
-Previously updated: 3/18/2020
-# Security in Azure Database for MySQL
--
-There are multiple layers of security that are available to protect the data on your Azure Database for MySQL server. This article outlines those security options.
-
-## Information protection and encryption
-
-### In-transit
-Azure Database for MySQL secures your data by encrypting data in-transit with Transport Layer Security. Encryption (SSL/TLS) is enforced by default.
-
-### At-rest
-The Azure Database for MySQL service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, are encrypted on disk, including the temporary files created while running queries. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. Storage encryption is always on and can't be disabled.
--
-## Network security
-Connections to an Azure Database for MySQL server are first routed through a regional gateway. The gateway has a publicly accessible IP, while the server IP addresses are protected. For more information about the gateway, visit the [connectivity architecture article](concepts-connectivity-architecture.md).
-
-A newly created Azure Database for MySQL server has a firewall that blocks all external connections. Though blocked connections reach the gateway, they aren't allowed to connect to the server.
-
-### IP firewall rules
-IP firewall rules grant access to servers based on the originating IP address of each request. See the [firewall rules overview](concepts-firewall-rules.md) for more information.
-
-### Virtual network firewall rules
-Virtual network service endpoints extend your virtual network connectivity over the Azure backbone. Using virtual network rules you can enable your Azure Database for MySQL server to allow connections from selected subnets in a virtual network. For more information, see the [virtual network service endpoint overview](concepts-data-access-and-security-vnet.md).
-
-### Private IP
-Private Link allows you to connect to your Azure Database for MySQL in Azure via a private endpoint. Azure Private Link essentially brings Azure services inside your private Virtual Network (VNet). The PaaS resources can be accessed using the private IP address just like any other resource in the VNet. For more information, see the [private link overview](concepts-data-access-security-private-link.md).
-
-## Access management
-
-While creating the Azure Database for MySQL server, you provide credentials for an administrator user. You can use this administrator account to create additional MySQL users.
--
-## Threat protection
-
-You can opt in to [Microsoft Defender for open-source relational databases](../security-center/defender-for-databases-introduction.md) which detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit servers.
-
-[Audit logging](concepts-audit-logs.md) is available to track activity in your databases.
--
-## Next steps
-- Enable firewall rules for [IPs](concepts-firewall-rules.md) or [virtual networks](concepts-data-access-and-security-vnet.md)
mysql Concepts Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-server-logs.md
- Title: Slow query logs - Azure Database for MySQL
-description: Describes the slow query logs available in Azure Database for MySQL, and the available parameters for enabling different logging levels.
-Previously updated: 11/6/2020
-# Slow query logs in Azure Database for MySQL
-
-In Azure Database for MySQL, the slow query log is available to users. Access to the transaction log is not supported. The slow query log can be used to identify performance bottlenecks for troubleshooting.
-
-For more information about the MySQL slow query log, see the MySQL reference manual's [slow query log section](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html).
-
-When [Query Store](concepts-query-store.md) is enabled on your server, you may see the queries like "`CALL mysql.az_procedure_collect_wait_stats (900, 30);`" logged in your slow query logs. This behavior is expected as the Query Store feature collects statistics about your queries.
-
-## Configure slow query logging
-By default, the slow query log is disabled. To enable it, set `slow_query_log` to ON. You can enable it by using the Azure portal or the Azure CLI.
-
-Other parameters you can adjust include:
-
-- **long_query_time**: If a query takes longer than long_query_time (in seconds), that query is logged. The default is 10 seconds.
-- **log_slow_admin_statements**: If ON, includes administrative statements like ALTER TABLE and ANALYZE TABLE in the statements written to the slow query log.
-- **log_queries_not_using_indexes**: Determines whether queries that don't use indexes are logged to the slow query log.
-- **log_throttle_queries_not_using_indexes**: Limits the number of non-indexed queries that can be written to the slow query log. This parameter takes effect when log_queries_not_using_indexes is set to ON.
-- **log_output**: If "File", allows the slow query log to be written to both the local server storage and to Azure Monitor Diagnostic Logs. If "None", the slow query log will only be written to Azure Monitor Diagnostic Logs.
-
-> [!IMPORTANT]
-> If your tables are not indexed, setting the `log_queries_not_using_indexes` and `log_throttle_queries_not_using_indexes` parameters to ON may affect MySQL performance since all queries running against these non-indexed tables will be written to the slow query log.<br><br>
-> If you plan on logging slow queries for an extended period of time, it is recommended to set `log_output` to "None". If set to "File", these logs are written to the local server storage and can affect MySQL performance.
-
-See the MySQL [slow query log documentation](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html) for full descriptions of the slow query log parameters.
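-
-As a concrete example, the parameters above can be set with the Azure CLI. A sketch, assuming placeholder resource group and server names:
-
-```bash
-# Turn on the slow query log and log queries that run longer than 15 seconds
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name slow_query_log --value ON
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name long_query_time --value 15
-```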
-
-## Access slow query logs
-There are two options for accessing slow query logs in Azure Database for MySQL: local server storage or Azure Monitor Diagnostic Logs. This is set using the `log_output` parameter.
-
-For local server storage, you can list and download slow query logs using the Azure portal or the Azure CLI. In the Azure portal, navigate to your server in the Azure portal. Under the **Monitoring** heading, select the **Server Logs** page. For more information on Azure CLI, see [Configure and access slow query logs using Azure CLI](howto-configure-server-logs-in-cli.md).
-
-Azure Monitor Diagnostic Logs allows you to pipe slow query logs to Azure Monitor Logs (Log Analytics), Azure Storage, or Event Hubs. See [below](concepts-server-logs.md#diagnostic-logs) for more information.
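-
-For local server storage, here's a CLI sketch that lists the log files and downloads one; the resource group, server, and log file names are placeholders:
-
-```bash
-# List the available slow query log files on the server
-az mysql server-logs list --resource-group myresourcegroup --server-name mydemoserver
-
-# Download a specific log file by name (hypothetical file name)
-az mysql server-logs download --resource-group myresourcegroup \
-    --server-name mydemoserver --name mysql-slow-mydemoserver-2021060112.log
-```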
-
-## Local server storage log retention
-When logging to the server's local storage, logs are available for up to seven days from their creation. If the total size of the available logs exceeds 7 GB, then the oldest files are deleted until space is available. The 7 GB storage limit for the server logs is available free of cost and cannot be extended.
-
-Logs are rotated every 24 hours or 7 GB, whichever comes first.
-
-> [!Note]
-> The above log retention does not apply to logs that are piped using Azure Monitor Diagnostic Logs. You can change the retention period for the data sinks being emitted to (ex. Azure Storage).
-
-## Diagnostic logs
-Azure Database for MySQL is integrated with Azure Monitor Diagnostic Logs. Once you have enabled slow query logs on your MySQL server, you can choose to have them emitted to Azure Monitor logs, Event Hubs, or Azure Storage. To learn more about how to enable diagnostic logs, see the how to section of the [diagnostic logs documentation](../azure-monitor/essentials/platform-logs-overview.md).
-
->[!Note]
->Premium Storage accounts are not supported if you are sending the logs to Azure Storage via diagnostic settings.
-
-The following table describes what's in each log. Depending on the output method, the fields included and the order in which they appear may vary.
-
-| **Property** | **Description** |
-|||
-| `TenantId` | Your tenant ID |
-| `SourceSystem` | `Azure` |
-| `TimeGenerated` [UTC] | Time stamp when the log was recorded in UTC |
-| `Type` | Type of the log. Always `AzureDiagnostics` |
-| `SubscriptionId` | GUID for the subscription that the server belongs to |
-| `ResourceGroup` | Name of the resource group the server belongs to |
-| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` |
-| `ResourceType` | `Servers` |
-| `ResourceId` | Resource URI |
-| `Resource` | Name of the server |
-| `Category` | `MySqlSlowLogs` |
-| `OperationName` | `LogEvent` |
-| `Logical_server_name_s` | Name of the server |
-| `start_time_t` [UTC] | Time the query began |
-| `query_time_s` | Total time in seconds the query took to execute |
-| `lock_time_s` | Total time in seconds the query was locked |
-| `user_host_s` | Username |
-| `rows_sent_d` | Number of rows sent |
-| `rows_examined_s` | Number of rows examined |
-| `last_insert_id_s` | [last_insert_id](https://dev.mysql.com/doc/refman/8.0/en/information-functions.html#function_last-insert-id) |
-| `insert_id_s` | Insert ID |
-| `sql_text_s` | Full query |
-| `server_id_s` | The server's ID |
-| `thread_id_s` | Thread ID |
-| `\_ResourceId` | Resource URI |
-
-> [!Note]
-> For `sql_text`, log will be truncated if it exceeds 2048 characters.
-
-## Analyze logs in Azure Monitor Logs
-
-Once your slow query logs are piped to Azure Monitor Logs through Diagnostic Logs, you can perform further analysis of your slow queries. Below are some sample queries to help you get started. Make sure to update the queries below with your server name.
-
-- Queries longer than 10 seconds on a particular server
-
- ```Kusto
- AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
- | where Category == 'MySqlSlowLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t , query_time_d, sql_text_s
- | where query_time_d > 10
- ```
-
-- List top 5 longest queries on a particular server
-
- ```Kusto
- AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
- | where Category == 'MySqlSlowLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t , query_time_d, sql_text_s
- | order by query_time_d desc
- | take 5
- ```
-
-- Summarize slow queries by minimum, maximum, average, and standard deviation query time on a particular server
-
- ```Kusto
- AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
- | where Category == 'MySqlSlowLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t , query_time_d, sql_text_s
- | summarize count(), min(query_time_d), max(query_time_d), avg(query_time_d), stdev(query_time_d), percentile(query_time_d, 95) by LogicalServerName_s
- ```
-
-- Graph the slow query distribution on a particular server
-
- ```Kusto
- AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
- | where Category == 'MySqlSlowLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t , query_time_d, sql_text_s
- | summarize count() by LogicalServerName_s, bin(TimeGenerated, 5m)
- | render timechart
- ```
-
-- Display queries longer than 10 seconds across all MySQL servers with Diagnostic Logs enabled
-
- ```Kusto
- AzureDiagnostics
- | where Category == 'MySqlSlowLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t , query_time_d, sql_text_s
- | where query_time_d > 10
- ```
-
-## Next Steps
-
-- [How to configure slow query logs from the Azure portal](howto-configure-server-logs-in-portal.md)
-- [How to configure slow query logs from the Azure CLI](howto-configure-server-logs-in-cli.md)
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-server-parameters.md
- Title: Server parameters - Azure Database for MySQL
-description: This topic provides guidelines for configuring server parameters in Azure Database for MySQL.
-Previously updated: 1/26/2021
-# Server parameters in Azure Database for MySQL
--
-This article provides considerations and guidelines for configuring server parameters in Azure Database for MySQL.
-
-## What are server parameters?
-
-The MySQL engine provides many different server variables and parameters that you use to configure and tune engine behavior. Some parameters can be set dynamically during runtime, while others are static and require a server restart to apply.
-
-Azure Database for MySQL exposes the ability to change the value of various MySQL server parameters by using the [Azure portal](./howto-server-parameters.md), the [Azure CLI](./howto-configure-server-parameters-using-cli.md), and [PowerShell](./howto-configure-server-parameters-using-powershell.md) to match your workload's needs.
-
-## Configurable server parameters
-
-The list of supported server parameters is constantly growing. In the Azure portal, use the server parameters tab to view the full list and configure server parameters values.
-
-Refer to the following sections to learn more about the limits of several commonly updated server parameters. The limits are determined by the pricing tier and vCores of the server.
-
-### Thread pools
-
-MySQL traditionally assigns a thread for every client connection. As the number of concurrent users grows, there is a corresponding drop in performance. Many active threads can affect the performance significantly, due to increased context switching, thread contention, and bad locality for CPU caches.
-
-*Thread pools*, a server-side feature and distinct from connection pooling, maximize performance by introducing a dynamic pool of worker threads. You use this feature to limit the number of active threads running on the server and minimize thread churn. This helps ensure that a burst of connections won't cause the server to run out of resources or memory. Thread pools are most efficient for short queries and CPU intensive workloads, such as OLTP workloads.
-
-For more information, see [Introducing thread pools in Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/introducing-thread-pools-in-azure-database-for-mysql-service/ba-p/1504173).
-
-> [!NOTE]
-> Thread pools aren't supported for MySQL 5.6.
-
-### Configure the thread pool
-
-To enable a thread pool, update the `thread_handling` server parameter to `pool-of-threads`. By default, this parameter is set to `one-thread-per-connection`, which means MySQL creates a new thread for each new connection. This is a static parameter, and requires a server restart to apply.
-
-You can also configure the maximum and minimum number of threads in the pool by setting the following server parameters:
-
-- `thread_pool_max_threads`: This value ensures that there won't be more than this number of threads in the pool.
-- `thread_pool_min_threads`: This value sets the number of threads that will be reserved even after connections are closed.
-
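-
-Here's a minimal Azure CLI sketch for the parameters above, assuming placeholder names; because `thread_handling` is static, the server must restart before it applies:
-
-```bash
-# Switch to the thread pool model; this static parameter requires a restart
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name thread_handling --value pool-of-threads
-
-# Cap the pool at 256 worker threads (illustrative value)
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name thread_pool_max_threads --value 256
-
-# Restart so the static thread_handling change takes effect
-az mysql server restart --resource-group myresourcegroup --name mydemoserver
-```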
-To improve the performance of short queries on the thread pool, you can enable *batch execution*. Instead of returning to the thread pool immediately after running a query, a thread stays active for a short time to wait for the next query on the same connection. The thread then runs the query rapidly and, when it completes, waits for the next one. This process continues until the overall time spent exceeds a threshold.
-
-You determine the behavior of batch execution by using the following server parameters:
-
-- `thread_pool_batch_wait_timeout`: This value specifies the time a thread waits for another query to process.
-- `thread_pool_batch_max_time`: This value determines the maximum time a thread will repeat the cycle of query execution and waiting for the next query.
-
-> [!IMPORTANT]
-> Don't turn on the thread pool in production until you've tested it.
-
-### log_bin_trust_function_creators
-
-In Azure Database for MySQL, binary logs are always enabled (the `log_bin` parameter is set to `ON`). If you want to use triggers, you get an error similar to the following: *You do not have the SUPER privilege and binary logging is enabled (you might want to use the less safe `log_bin_trust_function_creators` variable)*.
-
-The binary logging format is always **ROW**, and all connections to the server *always* use row-based binary logging. Row-based binary logging helps maintain security, and binary logging can't break, so you can safely set [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) to `TRUE`.
-
-### innodb_buffer_pool_size
-
-Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size) to learn more about this parameter.
-
-#### Servers on [general purpose storage v1 (supporting up to 4 TB)](concepts-pricing-tiers.md#general-purpose-storage-v1-supports-up-to-4-tb)
-
-|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**|
-||||||
-|Basic|1|872415232|134217728|872415232|
-|Basic|2|2684354560|134217728|2684354560|
-|General Purpose|2|3758096384|134217728|3758096384|
-|General Purpose|4|8053063680|134217728|8053063680|
-|General Purpose|8|16106127360|134217728|16106127360|
-|General Purpose|16|32749125632|134217728|32749125632|
-|General Purpose|32|66035122176|134217728|66035122176|
-|General Purpose|64|132070244352|134217728|132070244352|
-|Memory Optimized|2|7516192768|134217728|7516192768|
-|Memory Optimized|4|16106127360|134217728|16106127360|
-|Memory Optimized|8|32212254720|134217728|32212254720|
-|Memory Optimized|16|65498251264|134217728|65498251264|
-|Memory Optimized|32|132070244352|134217728|132070244352|
-
-#### Servers on [general purpose storage v2 (supporting up to 16 TB)](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage)
-
-|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**|
-||||||
-|Basic|1|872415232|134217728|872415232|
-|Basic|2|2684354560|134217728|2684354560|
-|General Purpose|2|7516192768|134217728|7516192768|
-|General Purpose|4|16106127360|134217728|16106127360|
-|General Purpose|8|32212254720|134217728|32212254720|
-|General Purpose|16|65498251264|134217728|65498251264|
-|General Purpose|32|132070244352|134217728|132070244352|
-|General Purpose|64|264140488704|134217728|264140488704|
-|Memory Optimized|2|15032385536|134217728|15032385536|
-|Memory Optimized|4|32212254720|134217728|32212254720|
-|Memory Optimized|8|64424509440|134217728|64424509440|
-|Memory Optimized|16|130996502528|134217728|130996502528|
-|Memory Optimized|32|264140488704|134217728|264140488704|
-
-### innodb_file_per_table
-
-MySQL stores the `InnoDB` table in different tablespaces, based on the configuration you provide during the table creation. The [system tablespace](https://dev.mysql.com/doc/refman/5.7/en/innodb-system-tablespace.html) is the storage area for the `InnoDB` data dictionary. A [file-per-table tablespace](https://dev.mysql.com/doc/refman/5.7/en/innodb-file-per-table-tablespaces.html) contains data and indexes for a single `InnoDB` table, and is stored in the file system in its own data file.
-
-You control this behavior by using the `innodb_file_per_table` server parameter. Setting `innodb_file_per_table` to `OFF` causes `InnoDB` to create tables in the system tablespace. Otherwise, `InnoDB` creates tables in file-per-table tablespaces.
-
-> [!NOTE]
-> You can only update `innodb_file_per_table` in the general purpose and memory optimized pricing tiers on [general purpose storage v2](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage) and [general purpose storage v1](concepts-pricing-tiers.md#general-purpose-storage-v1-supports-up-to-4-tb).
-
-Azure Database for MySQL supports 4 TB (at the largest) in a single data file on [general purpose storage v2](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage). If your database size is larger than 4 TB, you should create the table in the [innodb_file_per_table](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_file_per_table) tablespace. If a single table is larger than 4 TB, you should use a partitioned table.
-
-### join_buffer_size
-
-Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_join_buffer_size) to learn more about this parameter.
-
-|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**|
-||||||
-|Basic|1|Not configurable in Basic tier|N/A|N/A|
-|Basic|2|Not configurable in Basic tier|N/A|N/A|
-|General Purpose|2|262144|128|268435455|
-|General Purpose|4|262144|128|536870912|
-|General Purpose|8|262144|128|1073741824|
-|General Purpose|16|262144|128|2147483648|
-|General Purpose|32|262144|128|4294967295|
-|General Purpose|64|262144|128|4294967295|
-|Memory Optimized|2|262144|128|536870912|
-|Memory Optimized|4|262144|128|1073741824|
-|Memory Optimized|8|262144|128|2147483648|
-|Memory Optimized|16|262144|128|4294967295|
-|Memory Optimized|32|262144|128|4294967295|
-
-### max_connections
-
-|**Pricing tier**|**vCore(s)**|**Default value**|**Min value**|**Max value**|
-||||||
-|Basic|1|50|10|50|
-|Basic|2|100|10|100|
-|General Purpose|2|300|10|600|
-|General Purpose|4|625|10|1250|
-|General Purpose|8|1250|10|2500|
-|General Purpose|16|2500|10|5000|
-|General Purpose|32|5000|10|10000|
-|General Purpose|64|10000|10|20000|
-|Memory Optimized|2|625|10|1250|
-|Memory Optimized|4|1250|10|2500|
-|Memory Optimized|8|2500|10|5000|
-|Memory Optimized|16|5000|10|10000|
-|Memory Optimized|32|10000|10|20000|
-
-When the number of connections exceeds the limit, you might receive an error.
-
-> [!TIP]
-> To manage connections efficiently, it's a good idea to use a connection pooler, like ProxySQL. To learn about setting up ProxySQL, see the blog post [Load balance read replicas using ProxySQL in Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/load-balance-read-replicas-using-proxysql-in-azure-database-for/ba-p/880042). Note that ProxySQL is an open source community tool. It's supported by Microsoft on a best-effort basis.
-
-### max_heap_table_size
-
-Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_max_heap_table_size) to learn more about this parameter.
-
-|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**|
-||||||
-|Basic|1|Not configurable in Basic tier|N/A|N/A|
-|Basic|2|Not configurable in Basic tier|N/A|N/A|
-|General Purpose|2|16777216|16384|268435455|
-|General Purpose|4|16777216|16384|536870912|
-|General Purpose|8|16777216|16384|1073741824|
-|General Purpose|16|16777216|16384|2147483648|
-|General Purpose|32|16777216|16384|4294967295|
-|General Purpose|64|16777216|16384|4294967295|
-|Memory Optimized|2|16777216|16384|536870912|
-|Memory Optimized|4|16777216|16384|1073741824|
-|Memory Optimized|8|16777216|16384|2147483648|
-|Memory Optimized|16|16777216|16384|4294967295|
-|Memory Optimized|32|16777216|16384|4294967295|
-
-### query_cache_size
-
-The query cache is turned off by default. To enable the query cache, configure the `query_cache_type` parameter.
-
-Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_query_cache_size) to learn more about this parameter.
-
-> [!NOTE]
-> The query cache is deprecated as of MySQL 5.7.20 and has been removed in MySQL 8.0.
-
-|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value**|
-||||||
-|Basic|1|Not configurable in Basic tier|N/A|N/A|
-|Basic|2|Not configurable in Basic tier|N/A|N/A|
-|General Purpose|2|0|0|16777216|
-|General Purpose|4|0|0|33554432|
-|General Purpose|8|0|0|67108864|
-|General Purpose|16|0|0|134217728|
-|General Purpose|32|0|0|134217728|
-|General Purpose|64|0|0|134217728|
-|Memory Optimized|2|0|0|33554432|
-|Memory Optimized|4|0|0|67108864|
-|Memory Optimized|8|0|0|134217728|
-|Memory Optimized|16|0|0|134217728|
-|Memory Optimized|32|0|0|134217728|
-
-### lower_case_table_names
-
-The `lower_case_table_names` parameter is set to 1 by default, and you can update this parameter in MySQL 5.6 and MySQL 5.7.
-
-Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_lower_case_table_names) to learn more about this parameter.
-
-> [!NOTE]
-> In MySQL 8.0, `lower_case_table_names` is set to 1 by default, and you can't change it.
-
-### innodb_strict_mode
-
-If you receive an error similar to `Row size too large (> 8126)`, consider turning off the `innodb_strict_mode` parameter. You can't modify `innodb_strict_mode` globally at the server level. If row data size is larger than 8K, the data is truncated, without an error notification, leading to potential data loss. It's a good idea to modify the schema to fit the page size limit.
-
-You can set this parameter at a session level, by using `init_connect`. To set `innodb_strict_mode` at a session level, refer to [setting parameter not listed](./howto-server-parameters.md#setting-parameters-not-listed).
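-
-For instance, here's a sketch that applies the session-level setting through `init_connect`, with placeholder names; note that `init_connect` runs for each new non-admin connection:
-
-```bash
-# Turn off strict mode for new sessions via init_connect
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name init_connect --value "SET innodb_strict_mode=OFF"
-```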
-
-> [!NOTE]
-> If you have a read replica server, setting `innodb_strict_mode` to `OFF` at the session-level on a source server will break the replication. We suggest keeping the parameter set to `ON` if you have read replicas.
-
-### sort_buffer_size
-
-Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_sort_buffer_size) to learn more about this parameter.
-
-|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**|
-||||||
-|Basic|1|Not configurable in Basic tier|N/A|N/A|
-|Basic|2|Not configurable in Basic tier|N/A|N/A|
-|General Purpose|2|524288|32768|4194304|
-|General Purpose|4|524288|32768|8388608|
-|General Purpose|8|524288|32768|16777216|
-|General Purpose|16|524288|32768|33554432|
-|General Purpose|32|524288|32768|33554432|
-|General Purpose|64|524288|32768|33554432|
-|Memory Optimized|2|524288|32768|8388608|
-|Memory Optimized|4|524288|32768|16777216|
-|Memory Optimized|8|524288|32768|33554432|
-|Memory Optimized|16|524288|32768|33554432|
-|Memory Optimized|32|524288|32768|33554432|
-
-### tmp_table_size
-
-Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_tmp_table_size) to learn more about this parameter.
-
-|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**|
-||||||
-|Basic|1|Not configurable in Basic tier|N/A|N/A|
-|Basic|2|Not configurable in Basic tier|N/A|N/A|
-|General Purpose|2|16777216|1024|67108864|
-|General Purpose|4|16777216|1024|134217728|
-|General Purpose|8|16777216|1024|268435456|
-|General Purpose|16|16777216|1024|536870912|
-|General Purpose|32|16777216|1024|1073741824|
-|General Purpose|64|16777216|1024|1073741824|
-|Memory Optimized|2|16777216|1024|134217728|
-|Memory Optimized|4|16777216|1024|268435456|
-|Memory Optimized|8|16777216|1024|536870912|
-|Memory Optimized|16|16777216|1024|1073741824|
-|Memory Optimized|32|16777216|1024|1073741824|
-
-### InnoDB buffer pool warmup
-
-After you restart Azure Database for MySQL, the data pages that reside on disk are loaded as the tables are queried. This leads to increased latency and slower performance for the first run of the queries. For workloads that are sensitive to latency, you might find this slower performance unacceptable.
-
-You can use `InnoDB` buffer pool warmup to shorten the warmup period. This process reloads disk pages that were in the buffer pool *before* the restart, rather than waiting for DML or SELECT operations to access corresponding rows. For more information, see [InnoDB buffer pool server parameters](https://dev.mysql.com/doc/refman/8.0/en/innodb-preload-buffer-pool.html).
-
-Note that improved performance comes at the expense of longer start-up time for the server. When you enable this parameter, the server startup and restart time is expected to increase, depending on the IOPS provisioned on the server. It's a good idea to test and monitor the restart time, to ensure that the start-up or restart performance is acceptable, because the server is unavailable during that time. Don't use this parameter when the IOPS provisioned is less than 1000 IOPS (in other words, when the storage provisioned is less than 335 GB).
-
-To save the state of the buffer pool at server shutdown, set the server parameter `innodb_buffer_pool_dump_at_shutdown` to `ON`. Similarly, set the server parameter `innodb_buffer_pool_load_at_startup` to `ON` to restore the buffer pool state at server startup. You can control the impact on start-up or restart by lowering and fine-tuning the value of the server parameter `innodb_buffer_pool_dump_pct`. By default, this parameter is set to `25`.
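-
-A minimal CLI sketch that enables the dump-and-load pair described above, assuming placeholder names:
-
-```bash
-# Save the buffer pool state at shutdown and reload it at startup
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name innodb_buffer_pool_dump_at_shutdown --value ON
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name innodb_buffer_pool_load_at_startup --value ON
-```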
-
-> [!Note]
-> `InnoDB` buffer pool warmup parameters are only supported in general purpose storage servers with up to 16 TB storage. For more information, see [Azure Database for MySQL storage options](./concepts-pricing-tiers.md#storage).
-
-### time_zone
-
-Upon initial deployment, a server running Azure Database for MySQL includes systems tables for time zone information, but these tables aren't populated. You can populate the tables by calling the `mysql.az_load_timezone` stored procedure from tools like the MySQL command line or MySQL Workbench. For information about how to call the stored procedures and set the global or session-level time zones, see [Working with the time zone parameter (Azure portal)](howto-server-parameters.md#working-with-the-time-zone-parameter) or [Working with the time zone parameter (Azure CLI)](howto-configure-server-parameters-using-cli.md#working-with-the-time-zone-parameter).
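-
-Putting the two steps together, here's a sketch that populates the time zone tables from the mysql client and then sets the server-level time zone; the host, admin, and server names are placeholders, and `US/Pacific` is an example value:
-
-```bash
-# One-time: populate the time zone tables using the stored procedure
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
-    -e "CALL mysql.az_load_timezone();"
-
-# Set the global time_zone server parameter
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name time_zone --value "US/Pacific"
-```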
-
-### binlog_expire_logs_seconds
-
-In Azure Database for MySQL, this parameter specifies the number of seconds the service waits before purging the binary log file.
-
-The *binary log* contains events that describe database changes, such as table creation operations or changes to table data. It also contains events for statements that can potentially make changes. The binary log is used mainly for two purposes, replication and data recovery operations.
-
-Usually, binary logs are purged as soon as the handle to them is freed by the service, a backup, or a read replica. If there are multiple replicas, binary logs are purged only after the slowest replica has read the changes. If you want binary logs to persist longer, you can configure the `binlog_expire_logs_seconds` parameter. When `binlog_expire_logs_seconds` is set to `0`, the default value, a binary log is purged as soon as the handle to it is freed. When it's set to a value greater than zero, the binary log is purged only after that period of time.
-
-For Azure Database for MySQL, managed features like backup and read replica purging of binary files are handled internally. When you replicate data out from the Azure Database for MySQL service, you must set this parameter on the primary to avoid purging binary logs before the replica reads the changes from the primary. If you set `binlog_expire_logs_seconds` to a higher value, the binary logs won't be purged as quickly, which can increase your storage billing.
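-
-For example, to retain binary logs for 24 hours, a sketch with the Azure CLI and hypothetical resource names might look like this:
-
-```azurecli
-# Retain binary logs for 86400 seconds (24 hours) before they're purged.
-az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver --name binlog_expire_logs_seconds --value 86400
-```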
-
-## Non-configurable server parameters
-
-The following server parameters aren't configurable in the service:
-
-|**Parameter**|**Fixed value**|
-| :-- | :-- |
-|`innodb_file_per_table` in the basic tier|OFF|
-|`innodb_flush_log_at_trx_commit`|1|
-|`sync_binlog`|1|
-|`innodb_log_file_size`|256 MB|
-|`innodb_log_files_in_group`|2|
-
-Other variables not listed here are set to the default MySQL values. Refer to the MySQL docs for versions [8.0](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html), [5.7](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html), and [5.6](https://dev.mysql.com/doc/refman/5.6/en/server-system-variables.html).
-
-## Next steps
-
-- Learn how to [configure server parameters by using the Azure portal](./howto-server-parameters.md)
-- Learn how to [configure server parameters by using the Azure CLI](./howto-configure-server-parameters-using-cli.md)
-- Learn how to [configure server parameters by using PowerShell](./howto-configure-server-parameters-using-powershell.md)
mysql Concepts Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-servers.md
- Title: Server concepts - Azure Database for MySQL
-description: This topic provides considerations and guidelines for working with Azure Database for MySQL servers.
- Previously updated: 3/18/2020
-# Server concepts in Azure Database for MySQL
--
-This article provides considerations and guidelines for working with Azure Database for MySQL servers.
-
-## What is an Azure Database for MySQL server?
-
-An Azure Database for MySQL server is a central administrative point for multiple databases. It is the same MySQL server construct that you may be familiar with in the on-premises world. Specifically, the Azure Database for MySQL service is managed, provides performance guarantees, and exposes access and features at server-level.
-
-An Azure Database for MySQL server:
-
-- Is created within an Azure subscription.
-- Is the parent resource for databases.
-- Provides a namespace for databases.
-- Is a container with strong lifetime semantics - delete a server and it deletes the contained databases.
-- Collocates resources in a region.
-- Provides a connection endpoint for server and database access.
-- Provides the scope for management policies that apply to its databases: login, firewall, users, roles, configurations, etc.
-- Is available in multiple versions. For more information, see [Supported Azure Database for MySQL database versions](./concepts-supported-versions.md).
-
-Within an Azure Database for MySQL server, you can create one or multiple databases. You can opt to create a single database per server to use all the resources or to create multiple databases to share the resources. The pricing is structured per-server, based on the configuration of pricing tier, vCores, and storage (GB). For more information, see [Pricing tiers](./concepts-pricing-tiers.md).
-
-## How do I connect and authenticate to an Azure Database for MySQL server?
-
-The following elements help ensure safe access to your database.
-
-| Security concept | Description |
-| :-- | :-- |
-| **Authentication and authorization** | Azure Database for MySQL server supports native MySQL authentication. You can connect and authenticate to a server with the server's admin login. |
-| **Protocol** | The service supports a message-based protocol used by MySQL. |
-| **TCP/IP** | The protocol is supported over TCP/IP and over Unix-domain sockets. |
-| **Firewall** | To help protect your data, a firewall rule prevents all access to your database server, until you specify which computers have permission. See [Azure Database for MySQL Server firewall rules](./concepts-firewall-rules.md). |
-| **SSL** | The service supports enforcing SSL connections between your applications and your database server. See [Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](./howto-configure-ssl.md). |
-
-## Stop/Start an Azure Database for MySQL
-
-Azure Database for MySQL gives you the ability to **Stop** the server when not in use and **Start** it when you resume activity. This helps you save costs by paying for compute only while the server is in use, which is especially valuable for dev-test workloads or servers that are used only part of the day. When you stop the server, all active connections are dropped. Later, when you want to bring the server back online, you can use either the [Azure portal](how-to-stop-start-server.md) or the [CLI](how-to-stop-start-server.md).
-
-When the server is in the **Stopped** state, the server's compute is not billed. However, storage continues to be billed, because the server's storage remains in place to ensure that data files are available when the server is started again.
-
-> [!IMPORTANT]
-> When you **Stop** the server, it remains stopped for up to 7 days. If you do not manually **Start** it during this time, the server is automatically started at the end of the 7 days. You can choose to **Stop** it again if you are not using the server.
-
-While the server is stopped, no management operations can be performed on it. To change any configuration settings, you need to [start the server](how-to-stop-start-server.md), for example with the Azure CLI commands shown below.
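-
-For example, assuming a hypothetical server named mydemoserver in resource group myresourcegroup, the stop and start operations look like this with the Azure CLI:
-
-```azurecli
-# Stop the server; compute billing pauses while storage continues to be billed.
-az mysql server stop --resource-group myresourcegroup --name mydemoserver
-
-# Start the server again when you resume activity.
-az mysql server start --resource-group myresourcegroup --name mydemoserver
-```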
-
-### Limitations of Stop/start operation
-- Not supported with read replica configurations (both source and replicas).
-
-## How do I manage a server?
-
-You can manage the creation, deletion, server parameter configuration (my.cnf), scaling, networking, security, high availability, backup & restore, and monitoring of your Azure Database for MySQL servers by using the Azure portal or the Azure CLI. In addition, the following stored procedures are available in Azure Database for MySQL to perform certain database administration tasks that would otherwise require the SUPER privilege, which is not supported on the server. An example invocation follows the table.
-
-|**Stored Procedure Name**|**Input Parameters**|**Output Parameters**|**Usage Note**|
-|--|--|--|--|
-|*mysql.az_kill*|processlist_id|N/A|Equivalent to [`KILL CONNECTION`](https://dev.mysql.com/doc/refman/8.0/en/kill.html) command. Will terminate the connection associated with the provided processlist_id after terminating any statement the connection is executing.|
-|*mysql.az_kill_query*|processlist_id|N/A|Equivalent to [`KILL QUERY`](https://dev.mysql.com/doc/refman/8.0/en/kill.html) command. Will terminate the statement the connection is currently executing. Leaves the connection itself alive.|
-|*mysql.az_load_timezone*|N/A|N/A|Loads [time zone tables](howto-server-parameters.md#working-with-the-time-zone-parameter) to allow the `time_zone` parameter to be set to named values (ex. "US/Pacific").|
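-
-For example, a hedged sketch of terminating a long-running connection from the mysql command-line client; the server name and the process ID `23` are placeholders, and you'd take the real ID from the SHOW PROCESSLIST output:
-
-```bash
-# List current connections to find the ID of the one to terminate.
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p -e "SHOW PROCESSLIST;"
-
-# Terminate that connection with the managed stored procedure.
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p -e "CALL mysql.az_kill(23);"
-```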
-
-## Next steps
-
-- For an overview of the service, see [Azure Database for MySQL Overview](./overview.md)
-- For information about specific resource quotas and limitations based on your **pricing tier**, see [Pricing tiers](./concepts-pricing-tiers.md)
-- For information about connecting to the service, see [Connection libraries for Azure Database for MySQL](./concepts-connection-libraries.md).
mysql Concepts Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-ssl-connection-security.md
- Title: SSL/TLS connectivity - Azure Database for MySQL
-description: Information for configuring Azure Database for MySQL and associated applications to properly use SSL connections
- Previously updated: 07/09/2020
-# SSL/TLS connectivity in Azure Database for MySQL
--
-Azure Database for MySQL supports connecting your database server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against "man in the middle" attacks by encrypting the data stream between the server and your application.
-
-> [!NOTE]
-> Updating the `require_secure_transport` server parameter value does not affect the MySQL service's behavior. Use the SSL and TLS enforcement features outlined in this article to secure connections to your database.
-
->[!NOTE]
-> Based on customer feedback, we have extended the root certificate deprecation for our existing Baltimore Root CA until February 15, 2021 (02/15/2021).
-
-> [!IMPORTANT]
-> The SSL root certificate is set to expire starting February 15, 2021 (02/15/2021). Please update your application to use the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem). To learn more, see [planned certificate updates](concepts-certificate-rotation.md).
-
-## SSL Default settings
-
-By default, the database service is configured to require SSL connections when connecting to MySQL. We recommend that you avoid disabling the SSL option whenever possible.
-
-When provisioning a new Azure Database for MySQL server through the Azure portal and CLI, enforcement of SSL connections is enabled by default.
-
-Connection strings for various programming languages are shown in the Azure portal. Those connection strings include the required SSL parameters to connect to your database. In the Azure portal, select your server. Under the **Settings** heading, select **Connection strings**. The SSL parameter varies based on the connector; for example, "ssl=true", "sslmode=require", "sslmode=required", and other variations.
-
-In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file to connect securely. Currently customers can **only use** the predefined certificate to connect to an Azure Database for MySQL server which is located at https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem.
-
-Similarly, the following links point to the certificates for servers in sovereign clouds: [Azure Government](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem), [Azure China](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt).
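-
-For example, a minimal sketch connecting with the mysql command-line client and the downloaded CA file; the server name is a placeholder:
-
-```bash
-# Require an encrypted connection and verify the server certificate against the CA file.
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p --ssl-mode=VERIFY_CA --ssl-ca=BaltimoreCyberTrustRoot.crt.pem
-```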
-
-To learn how to enable or disable SSL connection when developing application, refer to [How to configure SSL](howto-configure-ssl.md).
-
-## TLS enforcement in Azure Database for MySQL
-
-Azure Database for MySQL supports encryption for clients connecting to your database server using Transport Layer Security (TLS). TLS is an industry standard protocol that ensures secure network connections between your database server and client applications, allowing you to adhere to compliance requirements.
-
-### TLS settings
-
-Azure Database for MySQL provides the ability to enforce the TLS version for the client connections. To enforce the TLS version, use the **Minimum TLS version** option setting. The following values are allowed for this option setting:
-
-| Minimum TLS setting | Client TLS version supported |
-|:--|--:|
-| TLSEnforcementDisabled (default) | No TLS required |
-| TLS1_0 | TLS 1.0, TLS 1.1, TLS 1.2 and higher |
-| TLS1_1 | TLS 1.1, TLS 1.2 and higher |
-| TLS1_2 | TLS version 1.2 and higher |
--
-For example, setting the minimum TLS version to TLS 1.0 means your server allows connections from clients using TLS 1.0, 1.1, and 1.2+. Alternatively, setting it to 1.2 means that you only allow connections from clients using TLS 1.2+, and all connections using TLS 1.0 or TLS 1.1 are rejected.
-
-> [!Note]
-> By default, Azure Database for MySQL does not enforce a minimum TLS version (the setting `TLSEnforcementDisabled`).
->
-> Once you enforce a minimum TLS version, you cannot later disable minimum version enforcement.
-
-Changing the minimum TLS version setting doesn't require a server restart and can be done while the server is online. To learn how to set the TLS setting for your Azure Database for MySQL, refer to [How to configure TLS setting](howto-tls-configurations.md).
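-
-As a sketch, and assuming the `--minimal-tls-version` option is available in your Azure CLI version (resource names are placeholders), you might enforce TLS 1.2 like this:
-
-```azurecli
-# Reject client connections that use TLS versions older than 1.2.
-az mysql server update --resource-group myresourcegroup --name mydemoserver --minimal-tls-version TLS1_2
-```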
-
-## Cipher support by Azure Database for MySQL Single server
-
-As part of SSL/TLS communication, cipher suites are validated, and only supported cipher suites are allowed to communicate with the database server. The cipher suite validation is controlled in the [gateway layer](concepts-connectivity-architecture.md#connectivity-architecture), not explicitly on the node itself. If a client's cipher suites don't match one of the suites listed below, incoming connections are rejected.
-
-### Cipher suite supported
-
-* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
-* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
-* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
-* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
-
-## Next steps
-
-- [Connection libraries for Azure Database for MySQL](concepts-connection-libraries.md)
-- Learn how to [configure SSL](howto-configure-ssl.md)
-- Learn how to [configure TLS](howto-tls-configurations.md)
mysql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-supported-versions.md
- Title: Supported versions - Azure Database for MySQL
-description: Learn which versions of the MySQL server are supported in the Azure Database for MySQL service.
- Previously updated: 11/4/2021
-# Supported Azure Database for MySQL server versions
--
-Azure Database for MySQL has been developed from [MySQL Community Edition](https://www.mysql.com/products/community/), using the InnoDB storage engine. The service supports all current major versions supported by the community, namely MySQL 5.7 and 8.0. MySQL uses the X.Y.Z naming scheme where X is the major version, Y is the minor version, and Z is the bug fix release. For more information about the scheme, see the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/which-version.html).
-
-## Connect to a gateway node that is running a specific MySQL version
-
-In the Single Server deployment option, a gateway is used to redirect the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the `SELECT VERSION();` command at the MySQL prompt. Review [Connectivity architecture](./concepts-connectivity-architecture.md#connectivity-architecture) to learn more about gateways in Azure Database for MySQL service architecture.
-
-Because Azure Database for MySQL supports major versions v5.7 and v8.0, the default port 3306 for connecting to Azure Database for MySQL runs MySQL client version 5.6 (the least common denominator) to support connections to servers of both supported major versions. However, if your application needs to connect to a specific major version, say v5.7 or v8.0, you can do so by changing the port in your server connection string.
-
-In the Azure Database for MySQL service, gateway nodes listen on port 3308 for v5.7 clients and port 3309 for v8.0 clients. In other words, to connect to the v5.7 gateway, use your fully qualified server name and port 3308; to connect to the v8.0 gateway, use your fully qualified server name and port 3309. Check the following example for further clarity.
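-
-For example, a sketch using the mysql command-line client and a hypothetical server named mydemoserver; the handshake banner reflects the gateway version, while `SELECT VERSION();` returns the actual server version:
-
-```bash
-# Connect through the v5.7 gateway.
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p --port 3308 -e "SELECT VERSION();"
-
-# Connect through the v8.0 gateway.
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p --port 3309 -e "SELECT VERSION();"
-```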
--
-> [!NOTE]
-> Connecting to Azure Database for MySQL via ports 3308 and 3309 are only supported for public connectivity, Private Link and VNet service endpoints can only be used with port 3306.
-
-## Azure Database for MySQL currently supports the following major and minor versions of MySQL:
-
-| Version | [Single Server](overview.md) <br/> Current minor version |[Flexible Server](./flexible-server/overview.md) <br/> Current minor version |
-|:-|:-|:|
-|MySQL Version 5.6 | [5.6.47](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-47.html) (Retired) | Not supported|
-|MySQL Version 5.7 | [5.7.32](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-32.html) | [5.7.37](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-37.html)|
-|MySQL Version 8.0 | [8.0.15](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-15.html) | [8.0.28](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-28.html)|
-
-Read the version support policy for retired versions in [version support policy documentation.](concepts-version-policy.md#retired-mysql-engine-versions-not-supported-in-azure-database-for-mysql)
-
-## Managing updates and upgrades
-
-The service automatically manages patching for bug fix version updates. For example, 5.7.20 to 5.7.21.
-
-Major version upgrade is currently supported by the service for upgrades from MySQL v5.6 to v5.7. For more details, refer to [how to perform major version upgrades](how-to-major-version-upgrade.md). If you'd like to upgrade from 5.7 to 8.0, we recommend that you perform a [dump and restore](./concepts-migrate-dump-restore.md) to a server that was created with the new engine version, as in the sketch below.
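-
-For example, a minimal sketch of the dump-and-restore path using the standard MySQL client tools; server and database names are placeholders:
-
-```bash
-# Dump the database from the existing 5.7 server.
-mysqldump -h oldserver.mysql.database.azure.com -u myadmin@oldserver -p mydatabase > mydatabase.sql
-
-# Restore it into a new server created with MySQL 8.0.
-mysql -h newserver.mysql.database.azure.com -u myadmin@newserver -p mydatabase < mydatabase.sql
-```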
-
-## Next steps
-
-- For details around Azure Database for MySQL versioning policy, see [this document](concepts-version-policy.md).
-- For information about specific resource quotas and limitations based on your **service tier**, see [Service tiers](./concepts-pricing-tiers.md)
mysql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-version-policy.md
- Title: Version support policy - Azure Database for MySQL - Single Server and Flexible Server (Preview)
-description: Describes the policy around MySQL major and minor versions in Azure Database for MySQL
- Previously updated: 11/03/2020
-# Azure Database for MySQL version support policy
--
-This page describes the Azure Database for MySQL versioning policy, and is applicable to Azure Database for MySQL - Single Server and Azure Database for MySQL - Flexible Server (Preview) deployment modes.
-
-## Supported MySQL versions
-
-Azure Database for MySQL has been developed from [MySQL Community Edition](https://www.mysql.com/products/community/), using the InnoDB storage engine. The service supports all current major versions supported by the community, namely MySQL 5.6, 5.7, and 8.0. MySQL uses the X.Y.Z naming scheme where X is the major version, Y is the minor version, and Z is the bug fix release. For more information about the scheme, see the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/which-version.html).
-
-Azure Database for MySQL currently supports the following major and minor versions of MySQL:
-
-| Version | [Single Server](overview.md) <br/> Current minor version |[Flexible Server](./flexible-server/overview.md) <br/> Current minor version |
-|:-|:-|:|
-|MySQL Version 5.6 | [5.6.47](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-47.html) (Retired) | Not supported|
-|MySQL Version 5.7 | [5.7.29](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-29.html) | [5.7.37](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-37.html)|
-|MySQL Version 8.0 | [8.0.15](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-15.html) | [8.0.28](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-28.html)|
-
-> [!NOTE]
-> In the Single Server deployment option, a gateway is used to redirect the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the `SELECT VERSION();` command at the MySQL prompt. If your application has a requirement to connect to a specific major version, say v5.7 or v8.0, you can do so by changing the port in your server connection string, as explained in our documentation [here.](concepts-supported-versions.md#connect-to-a-gateway-node-that-is-running-a-specific-mysql-version)
-
-> [!IMPORTANT]
-> MySQL v5.6 is retired on Single Server as of February 2021. Starting September 1, 2021, you will not be able to create new v5.6 servers on the Azure Database for MySQL - Single Server deployment option. However, you will be able to perform point-in-time recoveries and create read replicas for your existing servers.
-
-Read the version support policy for retired versions in [version support policy documentation.](concepts-version-policy.md#retired-mysql-engine-versions-not-supported-in-azure-database-for-mysql)
-
-## Major version support
-
-Each major version of MySQL will be supported by Azure Database for MySQL from the date on which Azure begins supporting the version until the version is retired by the MySQL community, as provided in the [versioning policy](https://www.mysql.com/support/eol-notice.html).
-
-## Minor version support
-
-Azure Database for MySQL automatically performs minor version upgrades to the Azure preferred MySQL version as part of periodic maintenance.
-
-## Major version retirement policy
-
-The table below provides the retirement details for MySQL major versions. The dates follow the [MySQL versioning policy](https://www.mysql.com/support/eol-notice.html).
-
-| Version | What's New | Azure support start date | Retirement date|
-| -- | -- | | -- |
-| [MySQL 5.6](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/)| [Features](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-49.html) | March 20, 2018 | February 2021
-| [MySQL 5.7](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/) | [Features](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-31.html) | March 20, 2018 | October 2023
-| [MySQL 8](https://mysqlserverteam.com/whats-new-in-mysql-8-0-generally-available/) | [Features](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-21.html) | December 11, 2019 | April 2026
-
-## Retired MySQL engine versions not supported in Azure Database for MySQL
-
-After the retirement date for each MySQL database version, if you continue running the retired version, note the following restrictions:
-
-- As the community will not be releasing any further bug fixes or security fixes, Azure Database for MySQL will not patch the retired database engine for any bugs or security issues, or otherwise take security measures with regard to the retired database engine. However, Azure will continue to perform periodic maintenance and patching for the host, OS, containers, and any other service-related components.
-- If any support issue you may experience relates to the MySQL database, we may not be able to provide you with support. In such cases, you will have to upgrade your database in order for us to provide you with any support.
-- You will not be able to create new database servers for the retired version. However, you will be able to perform point-in-time recoveries and create read replicas for your existing servers.
-- New service capabilities developed by Azure Database for MySQL may only be available to supported database server versions.
-- Uptime SLAs will apply solely to Azure Database for MySQL service-related issues and not to any downtime caused by database engine-related bugs.
-- In the extreme event of a serious threat to the service caused by a MySQL database engine vulnerability identified in the retired database version, Azure may choose to stop the compute node of your database server to secure the service first. You will be asked to upgrade the server before bringing it back online. During the upgrade process, your data will always be protected using automatic backups performed on the service, which can be used to restore back to the older version if desired.
-
-## Next steps
-
-- See Azure Database for MySQL - Single Server [supported versions](./concepts-supported-versions.md)
-- See Azure Database for MySQL - Flexible Server [supported versions](flexible-server/concepts-supported-versions.md)
-- See MySQL [dump and restore](./concepts-migrate-dump-restore.md) to perform upgrades.
mysql Connect Cpp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-cpp.md
- Title: 'Quickstart: Connect using C++ - Azure Database for MySQL'
-description: This quickstart provides a C++ code sample you can use to connect and query data from Azure Database for MySQL.
- Previously updated: 5/26/2020
-adobe-target: true
--
-# Quickstart: Use Connector/C++ to connect and query data in Azure Database for MySQL
--
-This quickstart demonstrates how to connect to an Azure Database for MySQL by using a C++ application. It shows how to use SQL statements to query, insert, update, and delete data in the database. This topic assumes you're familiar with developing using C++ and you're new to working with Azure Database for MySQL.
-
-## Prerequisites
-
-This quickstart uses the resources created in either of the following guides as a starting point:
-- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)
-- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)
-
-You also need to:
-- Install [.NET Framework](https://dotnet.microsoft.com/download/dotnet-framework)
-- Install [Visual Studio](https://www.visualstudio.com/downloads/)
-- Install [MySQL Connector/C++](https://dev.mysql.com/downloads/connector/cpp/)
-- Install [Boost](https://www.boost.org/)
-
-> [!IMPORTANT]
-> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./howto-manage-firewall-using-portal.md) or [Azure CLI](./howto-manage-firewall-using-cli.md), as in the sketch below.
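-
-For example, a minimal Azure CLI sketch that allows a single client IP address; all names and addresses are placeholders:
-
-```azurecli
-# Allow connections from one client IP address.
-az mysql server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver --name AllowMyIP --start-ip-address 203.0.113.5 --end-ip-address 203.0.113.5
-```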
-
-## Install Visual Studio and .NET
-The steps in this section assume that you're familiar with developing using .NET.
-
-### **Windows**
-- Install Visual Studio 2019 Community. Visual Studio 2019 Community is a full featured, extensible, free IDE. With this IDE, you can create modern applications for Android, iOS, Windows, web and database applications, and cloud services. You can install either the full .NET Framework or just .NET Core: the code snippets in the Quickstart work with either. If you already have Visual Studio installed on your computer, skip the next two steps.
- 1. Download the [Visual Studio 2019 installer](https://www.visualstudio.com/thank-you-downloading-visual-studio/?sku=Community&rel=15).
- 2. Run the installer and follow the installation prompts to complete the installation.
-
-### **Configure Visual Studio**
-1. From Visual Studio, Project > Properties > Linker > General > Additional Library Directories, add the "\lib\opt" directory of the C++ connector (for example: C:\Program Files (x86)\MySQL\MySQL Connector C++ 1.1.9\lib\opt).
-2. From Visual Studio, Project > Properties > C/C++ > General > Additional Include Directories:
- - Add the "\include" directory of the C++ connector (for example: C:\Program Files (x86)\MySQL\MySQL Connector C++ 1.1.9\include\).
- - Add the Boost library's root directory (for example: C:\boost_1_64_0\).
-3. From Visual Studio, Project > Properties > Linker > Input > Additional Dependencies, add **mysqlcppconn.lib** into the text field.
-4. Either copy **mysqlcppconn.dll** from the C++ connector's "\lib\opt" folder (see step 1) to the same directory as the application executable, or add its folder to the PATH environment variable so your application can find it.
-
-## Get connection information
-Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Click the server name.
-4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-cpp/1_server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
-
-## Connect, create table, and insert data
-Use the following code to connect and load the data by using **CREATE TABLE** and **INSERT INTO** SQL statements. The code uses the sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses the createStatement() and execute() methods to run the database commands.
-
-Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
-
-```c++
-#include <stdlib.h>
-#include <iostream>
-#include "stdafx.h"
-
-#include "mysql_connection.h"
-#include <cppconn/driver.h>
-#include <cppconn/exception.h>
-#include <cppconn/prepared_statement.h>
-using namespace std;
-
-//for demonstration only. never save your password in the code!
-const string server = "tcp://yourservername.mysql.database.azure.com:3306";
-const string username = "username@servername";
-const string password = "yourpassword";
-
-int main()
-{
- sql::Driver *driver;
- sql::Connection *con;
- sql::Statement *stmt;
- sql::PreparedStatement *pstmt;
-
- try
- {
- driver = get_driver_instance();
- con = driver->connect(server, username, password);
- }
- catch (sql::SQLException &e)
- {
- cout << "Could not connect to server. Error message: " << e.what() << endl;
- system("pause");
- exit(1);
- }
-
- //please create database "quickstartdb" ahead of time
- con->setSchema("quickstartdb");
-
- stmt = con->createStatement();
- stmt->execute("DROP TABLE IF EXISTS inventory");
- cout << "Finished dropping table (if existed)" << endl;
- stmt->execute("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);");
- cout << "Finished creating table" << endl;
- delete stmt;
-
- pstmt = con->prepareStatement("INSERT INTO inventory(name, quantity) VALUES(?,?)");
- pstmt->setString(1, "banana");
- pstmt->setInt(2, 150);
- pstmt->execute();
- cout << "One row inserted." << endl;
-
- pstmt->setString(1, "orange");
- pstmt->setInt(2, 154);
- pstmt->execute();
- cout << "One row inserted." << endl;
-
- pstmt->setString(1, "apple");
- pstmt->setInt(2, 100);
- pstmt->execute();
- cout << "One row inserted." << endl;
-
- delete pstmt;
- delete con;
- system("pause");
- return 0;
-}
-```
-
-## Read data
-
-Use the following code to connect and read the data by using a **SELECT** SQL statement. The code uses the sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses the prepareStatement() and executeQuery() methods to run the select command. Next, the code uses next() to advance to the records in the results. Finally, the code uses getInt() and getString() to parse the values in the record.
-
-Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
-
-```c++
-#include <stdlib.h>
-#include <iostream>
-#include "stdafx.h"
-
-#include "mysql_connection.h"
-#include <cppconn/driver.h>
-#include <cppconn/exception.h>
-#include <cppconn/resultset.h>
-#include <cppconn/prepared_statement.h>
-using namespace std;
-
-//for demonstration only. never save your password in the code!
-const string server = "tcp://yourservername.mysql.database.azure.com:3306";
-const string username = "username@servername";
-const string password = "yourpassword";
-
-int main()
-{
- sql::Driver *driver;
- sql::Connection *con;
- sql::PreparedStatement *pstmt;
- sql::ResultSet *result;
-
- try
- {
- driver = get_driver_instance();
- //for demonstration only. never save password in the code!
- con = driver->connect(server, username, password);
- }
- catch (sql::SQLException &e)
- {
- cout << "Could not connect to server. Error message: " << e.what() << endl;
- system("pause");
- exit(1);
- }
-
- con->setSchema("quickstartdb");
-
- //select
- pstmt = con->prepareStatement("SELECT * FROM inventory;");
- result = pstmt->executeQuery();
-
- while (result->next())
- printf("Reading from table=(%d, %s, %d)\n", result->getInt(1), result->getString(2).c_str(), result->getInt(3));
-
- delete result;
- delete pstmt;
- delete con;
- system("pause");
- return 0;
-}
-```
-
-## Update data
-Use the following code to connect and update the data by using an **UPDATE** SQL statement. The code uses the sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses the prepareStatement() and executeUpdate() methods to run the update command.
-
-Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
-
-```c++
-#include <stdlib.h>
-#include <iostream>
-#include "stdafx.h"
-
-#include "mysql_connection.h"
-#include <cppconn/driver.h>
-#include <cppconn/exception.h>
-#include <cppconn/resultset.h>
-#include <cppconn/prepared_statement.h>
-using namespace std;
-
-//for demonstration only. never save your password in the code!
-const string server = "tcp://yourservername.mysql.database.azure.com:3306";
-const string username = "username@servername";
-const string password = "yourpassword";
-
-int main()
-{
- sql::Driver *driver;
- sql::Connection *con;
- sql::PreparedStatement *pstmt;
-
- try
- {
- driver = get_driver_instance();
- //for demonstration only. never save password in the code!
- con = driver->connect(server, username, password);
- }
- catch (sql::SQLException &e)
- {
- cout << "Could not connect to server. Error message: " << e.what() << endl;
- system("pause");
- exit(1);
- }
-
- con->setSchema("quickstartdb");
-
- //update
- pstmt = con->prepareStatement("UPDATE inventory SET quantity = ? WHERE name = ?");
- pstmt->setInt(1, 200);
- pstmt->setString(2, "banana");
- pstmt->executeUpdate(); // use executeUpdate for statements that don't return a result set
- printf("Row updated\n");
-
- delete con;
- delete pstmt;
- system("pause");
- return 0;
-}
-```
--
-## Delete data
-Use the following code to connect and delete the data by using a **DELETE** SQL statement. The code uses the sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses the prepareStatement() and executeUpdate() methods to run the delete command.
-
-Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
-
-```c++
-#include <stdlib.h>
-#include <iostream>
-#include "stdafx.h"
-
-#include "mysql_connection.h"
-#include <cppconn/driver.h>
-#include <cppconn/exception.h>
-#include <cppconn/resultset.h>
-#include <cppconn/prepared_statement.h>
-using namespace std;
-
-//for demonstration only. never save your password in the code!
-const string server = "tcp://yourservername.mysql.database.azure.com:3306";
-const string username = "username@servername";
-const string password = "yourpassword";
-
-int main()
-{
- sql::Driver *driver;
- sql::Connection *con;
- sql::PreparedStatement *pstmt;
-
- try
- {
- driver = get_driver_instance();
- //for demonstration only. never save password in the code!
- con = driver->connect(server, username, password);
- }
- catch (sql::SQLException &e)
- {
- cout << "Could not connect to server. Error message: " << e.what() << endl;
- system("pause");
- exit(1);
- }
-
- con->setSchema("quickstartdb");
-
- //delete
- pstmt = con->prepareStatement("DELETE FROM inventory WHERE name = ?");
- pstmt->setString(1, "orange");
- pstmt->executeUpdate(); // use executeUpdate for statements that don't return a result set
- printf("Row deleted\n");
-
- delete pstmt;
- delete con;
- system("pause");
- return 0;
-}
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Migrate your MySQL database to Azure Database for MySQL using dump and restore](concepts-migrate-dump-restore.md)
mysql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-csharp.md
- Title: 'Quickstart: Connect using C# - Azure Database for MySQL'
-description: "This quickstart provides a C# (.NET) code sample you can use to connect and query data from Azure Database for MySQL."
- Previously updated: 10/18/2020
-# Quickstart: Use .NET (C#) to connect and query data in Azure Database for MySQL
--
-This quickstart demonstrates how to connect to an Azure Database for MySQL by using a C# application. It shows how to use SQL statements to query, insert, update, and delete data in the database.
-
-## Prerequisites
-For this quickstart you need:
-
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-- Create an Azure Database for MySQL single server using the [Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md) <br/> or the [Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md) if you do not have one.
-- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.
-- Install the [.NET SDK](https://dotnet.microsoft.com/download) for your platform (Windows, Ubuntu Linux, or macOS).
-
-|Action| Connectivity method|How-to guide|
-|: |: |: |
-| **Configure firewall rules** | Public | [Portal](./howto-manage-firewall-using-portal.md) <br/> [CLI](./howto-manage-firewall-using-cli.md)|
-| **Configure Service Endpoint** | Public | [Portal](./howto-manage-vnet-using-portal.md) <br/> [CLI](./howto-manage-vnet-using-cli.md)|
-| **Configure private link** | Private | [Portal](./howto-configure-privatelink-portal.md) <br/> [CLI](./howto-configure-privatelink-cli.md) |
-- [Create a database and non-admin user](./howto-create-users.md)
-
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
-
-## Create a C# project
-At a command prompt, run:
-
-```
-mkdir AzureMySqlExample
-cd AzureMySqlExample
-dotnet new console
-dotnet add package MySqlConnector
-```
-
-## Get connection information
-Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
-
-1. Log in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Click the server name.
-4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-csharp/1_server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
-
-## Step 1: Connect and insert data
-Use the following code to connect and load the data by using `CREATE TABLE` and `INSERT INTO` SQL statements. The code uses the methods of the `MySqlConnection` class:
-- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.
-- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.
-- [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands.
-
-Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database.
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using MySqlConnector;
-
-namespace AzureMySqlExample
-{
- class MySqlCreate
- {
- static async Task Main(string[] args)
- {
- var builder = new MySqlConnectionStringBuilder
- {
- Server = "YOUR-SERVER.mysql.database.azure.com",
- Database = "YOUR-DATABASE",
- UserID = "USER@YOUR-SERVER",
- Password = "PASSWORD",
- SslMode = MySqlSslMode.Required,
- };
-
- using (var conn = new MySqlConnection(builder.ConnectionString))
- {
- Console.WriteLine("Opening connection");
- await conn.OpenAsync();
-
- using (var command = conn.CreateCommand())
- {
- command.CommandText = "DROP TABLE IF EXISTS inventory;";
- await command.ExecuteNonQueryAsync();
- Console.WriteLine("Finished dropping table (if existed)");
-
- command.CommandText = "CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);";
- await command.ExecuteNonQueryAsync();
- Console.WriteLine("Finished creating table");
-
- command.CommandText = @"INSERT INTO inventory (name, quantity) VALUES (@name1, @quantity1),
- (@name2, @quantity2), (@name3, @quantity3);";
- command.Parameters.AddWithValue("@name1", "banana");
- command.Parameters.AddWithValue("@quantity1", 150);
- command.Parameters.AddWithValue("@name2", "orange");
- command.Parameters.AddWithValue("@quantity2", 154);
- command.Parameters.AddWithValue("@name3", "apple");
- command.Parameters.AddWithValue("@quantity3", 100);
-
- int rowCount = await command.ExecuteNonQueryAsync();
- Console.WriteLine(String.Format("Number of rows inserted={0}", rowCount));
- }
-
- // connection will be closed by the 'using' block
- Console.WriteLine("Closing connection");
- }
-
- Console.WriteLine("Press RETURN to exit");
- Console.ReadLine();
- }
- }
-}
-```
-
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
--
-## Step 2: Read data
-
-Use the following code to connect and read the data by using a `SELECT` SQL statement. The code uses the `MySqlConnection` class with methods:
-- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.
-- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.
-- [ExecuteReaderAsync()](/dotnet/api/system.data.common.dbcommand.executereaderasync) to run the database commands.
-- [ReadAsync()](/dotnet/api/system.data.common.dbdatareader.readasync#System_Data_Common_DbDataReader_ReadAsync) to advance to the records in the results. Then the code uses GetInt32 and GetString to parse the values in the record.
-
-Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database.
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using MySqlConnector;
-
-namespace AzureMySqlExample
-{
- class MySqlRead
- {
- static async Task Main(string[] args)
- {
- var builder = new MySqlConnectionStringBuilder
- {
- Server = "YOUR-SERVER.mysql.database.azure.com",
- Database = "YOUR-DATABASE",
- UserID = "USER@YOUR-SERVER",
- Password = "PASSWORD",
- SslMode = MySqlSslMode.Required,
- };
-
- using (var conn = new MySqlConnection(builder.ConnectionString))
- {
- Console.WriteLine("Opening connection");
- await conn.OpenAsync();
-
- using (var command = conn.CreateCommand())
- {
- command.CommandText = "SELECT * FROM inventory;";
-
- using (var reader = await command.ExecuteReaderAsync())
- {
- while (await reader.ReadAsync())
- {
- Console.WriteLine(string.Format(
- "Reading from table=({0}, {1}, {2})",
- reader.GetInt32(0),
- reader.GetString(1),
- reader.GetInt32(2)));
- }
- }
- }
-
- Console.WriteLine("Closing connection");
- }
-
- Console.WriteLine("Press RETURN to exit");
- Console.ReadLine();
- }
- }
-}
-```
-
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
-
-## Step 3: Update data
-Use the following code to connect and update the data by using an `UPDATE` SQL statement. The code uses the `MySqlConnection` class with these methods:
-- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.
-- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.
-- [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands.
-
-Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database.
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using MySqlConnector;
-
-namespace AzureMySqlExample
-{
- class MySqlUpdate
- {
- static async Task Main(string[] args)
- {
- var builder = new MySqlConnectionStringBuilder
- {
- Server = "YOUR-SERVER.mysql.database.azure.com",
- Database = "YOUR-DATABASE",
- UserID = "USER@YOUR-SERVER",
- Password = "PASSWORD",
- SslMode = MySqlSslMode.Required,
- };
-
- using (var conn = new MySqlConnection(builder.ConnectionString))
- {
- Console.WriteLine("Opening connection");
- await conn.OpenAsync();
-
- using (var command = conn.CreateCommand())
- {
- command.CommandText = "UPDATE inventory SET quantity = @quantity WHERE name = @name;";
- command.Parameters.AddWithValue("@quantity", 200);
- command.Parameters.AddWithValue("@name", "banana");
-
- int rowCount = await command.ExecuteNonQueryAsync();
- Console.WriteLine(String.Format("Number of rows updated={0}", rowCount));
- }
-
- Console.WriteLine("Closing connection");
- }
-
- Console.WriteLine("Press RETURN to exit");
- Console.ReadLine();
- }
- }
-}
-```
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
-
-## Step 4: Delete data
-Use the following code to connect and delete the data by using a `DELETE` SQL statement.
-
-The code uses the `MySqlConnection` class with these methods:
-- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.
-- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.
-- [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands.
-
-Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database.
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using MySqlConnector;
-
-namespace AzureMySqlExample
-{
- class MySqlDelete
- {
- static async Task Main(string[] args)
- {
- var builder = new MySqlConnectionStringBuilder
- {
- Server = "YOUR-SERVER.mysql.database.azure.com",
- Database = "YOUR-DATABASE",
- UserID = "USER@YOUR-SERVER",
- Password = "PASSWORD",
- SslMode = MySqlSslMode.Required,
- };
-
- using (var conn = new MySqlConnection(builder.ConnectionString))
- {
- Console.WriteLine("Opening connection");
- await conn.OpenAsync();
-
- using (var command = conn.CreateCommand())
- {
- command.CommandText = "DELETE FROM inventory WHERE name = @name;";
- command.Parameters.AddWithValue("@name", "orange");
-
- int rowCount = await command.ExecuteNonQueryAsync();
- Console.WriteLine(String.Format("Number of rows deleted={0}", rowCount));
- }
-
- Console.WriteLine("Closing connection");
- }
-
- Console.WriteLine("Press RETURN to exit");
- Console.ReadLine();
- }
- }
-}
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Manage Azure Database for MySQL server using Portal](./howto-create-manage-server-portal.md)<br/>
-
-> [!div class="nextstepaction"]
-> [Manage Azure Database for MySQL server using CLI](./how-to-manage-single-server-cli.md)
-
-[Cannot find what you are looking for? Let us know.](https://aka.ms/mysql-doc-feedback)
mysql Connect Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-go.md
- Title: 'Quickstart: Connect using Go - Azure Database for MySQL'
-description: This quickstart provides several Go code samples you can use to connect and query data from Azure Database for MySQL.
- Previously updated: 5/26/2020
-# Quickstart: Use Go language to connect and query data in Azure Database for MySQL
--
-This quickstart demonstrates how to connect to an Azure Database for MySQL from Windows, Ubuntu Linux, and Apple macOS platforms by using code written in the [Go](https://go.dev/) language. It shows how to use SQL statements to query, insert, update, and delete data in the database. This topic assumes that you are familiar with development using Go and that you are new to working with Azure Database for MySQL.
-
-## Prerequisites
-This quickstart uses the resources created in either of these guides as a starting point:
-- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)
-- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)
-
-> [!IMPORTANT]
-> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./howto-manage-firewall-using-portal.md) or [Azure CLI](./howto-manage-firewall-using-cli.md)
-
-## Install Go and MySQL connector
-Install [Go](https://go.dev/doc/install) and the [go-sql-driver for MySQL](https://github.com/go-sql-driver/mysql#installation) on your own computer. Depending on your platform, follow the steps in the appropriate section:
-
-### Windows
-1. [Download](https://go.dev/dl/) and install Go for Microsoft Windows according to the [installation instructions](https://go.dev/doc/install).
-2. Launch the command prompt from the start menu.
-3. Make a folder for your project, such as `mkdir %USERPROFILE%\go\src\mysqlgo`.
-4. Change directory into the project folder, such as `cd %USERPROFILE%\go\src\mysqlgo`.
-5. Set the environment variable for GOPATH to point to the source code directory. `set GOPATH=%USERPROFILE%\go`.
-6. Install the [go-sql-driver for mysql](https://github.com/go-sql-driver/mysql#installation) by running the `go get github.com/go-sql-driver/mysql` command.
-
- In summary, install Go, then run these commands in the command prompt:
- ```cmd
- mkdir %USERPROFILE%\go\src\mysqlgo
- cd %USERPROFILE%\go\src\mysqlgo
- set GOPATH=%USERPROFILE%\go
- go get github.com/go-sql-driver/mysql
- ```
-
-### Linux (Ubuntu)
-1. Launch the Bash shell.
-2. Install Go by running `sudo apt-get install golang-go`.
-3. Make a folder for your project in your home directory, such as `mkdir -p ~/go/src/mysqlgo/`.
-4. Change directory into the folder, such as `cd ~/go/src/mysqlgo/`.
-5. Set the GOPATH environment variable to point to a valid source directory, such as your current home directory's go folder. At the Bash shell, run `export GOPATH=~/go` to add the go directory as the GOPATH for the current shell session.
-6. Install the [go-sql-driver for mysql](https://github.com/go-sql-driver/mysql#installation) by running the `go get github.com/go-sql-driver/mysql` command.
-
- In summary, run these bash commands:
- ```bash
- sudo apt-get install golang-go
- mkdir -p ~/go/src/mysqlgo/
- cd ~/go/src/mysqlgo/
- export GOPATH=~/go/
- go get github.com/go-sql-driver/mysql
- ```
-
-### Apple macOS
-1. Download and install Go according to the [installation instructions](https://go.dev/doc/install) matching your platform.
-2. Launch the Bash shell.
-3. Make a folder for your project in your home directory, such as `mkdir -p ~/go/src/mysqlgo/`.
-4. Change directory into the folder, such as `cd ~/go/src/mysqlgo/`.
-5. Set the GOPATH environment variable to point to a valid source directory, such as your current home directory's go folder. At the Bash shell, run `export GOPATH=~/go` to add the go directory as the GOPATH for the current shell session.
-6. Install the [go-sql-driver for mysql](https://github.com/go-sql-driver/mysql#installation) by running the `go get github.com/go-sql-driver/mysql` command.
-
- In summary, install Go, then run these bash commands:
- ```bash
- mkdir -p ~/go/src/mysqlgo/
- cd ~/go/src/mysqlgo/
- export GOPATH=~/go/
- go get github.com/go-sql-driver/mysql
- ```
-
-## Get connection information
-Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
-
-1. Log in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Click the server name.
-4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-go/1_server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
-
-
-## Build and run Go code
-1. To write Golang code, you can use a simple text editor, such as Notepad in Microsoft Windows, [vi](https://manpages.ubuntu.com/manpages/xenial/man1/nvi.1.html#contenttoc5) or [Nano](https://www.nano-editor.org/) in Ubuntu, or TextEdit in macOS. If you prefer a richer Integrated Development Environment (IDE), try [GoLand](https://www.jetbrains.com/go/) by JetBrains, [Visual Studio Code](https://code.visualstudio.com/) by Microsoft, or [Atom](https://atom.io/).
-2. Paste the Go code from the sections below into text files, and then save them into your project folder with file extension \*.go (such as Windows path `%USERPROFILE%\go\src\mysqlgo\createtable.go` or Linux path `~/go/src/mysqlgo/createtable.go`).
-3. Locate the `HOST`, `DATABASE`, `USER`, and `PASSWORD` constants in the code, and then replace the example values with your own values.
-4. Launch the command prompt or Bash shell. Change directory into your project folder. For example, on Windows `cd %USERPROFILE%\go\src\mysqlgo\`. On Linux `cd ~/go/src/mysqlgo/`. Some of the IDE editors mentioned offer debug and runtime capabilities without requiring shell commands.
-5. Run the code by typing the command `go run createtable.go` to compile the application and run it.
-6. Alternatively, to build the code into a native application, `go build createtable.go`, then launch `createtable.exe` to run the application.
-
-## Connect, create table, and insert data
-Use the following code to connect to the server, create a table, and load the data by using an **INSERT** SQL statement.
-
-The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line.
-
-The code calls the [sql.Open()](http://go-database-sql.org/accessing.html) function to connect to Azure Database for MySQL, and it checks the connection by using the [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping) method. A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method several times to run several DDL commands. The code also uses [Prepare()](http://go-database-sql.org/prepared.html) and Exec() to run prepared statements with different parameters to insert three rows. Each time, a custom checkError() function is used to check whether an error occurred, and to panic and exit if one did.
-
-Replace the `host`, `database`, `user`, and `password` constants with your own values.
-
-```Go
-package main
-
-import (
- "database/sql"
- "fmt"
-
- _ "github.com/go-sql-driver/mysql"
-)
-
-const (
- host = "mydemoserver.mysql.database.azure.com"
- database = "quickstartdb"
- user = "myadmin@mydemoserver"
- password = "yourpassword"
-)
-
-func checkError(err error) {
- if err != nil {
- panic(err)
- }
-}
-
-func main() {
-
- // Initialize connection string.
- var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database)
-
- // Initialize connection object.
- db, err := sql.Open("mysql", connectionString)
- checkError(err)
- defer db.Close()
-
- err = db.Ping()
- checkError(err)
- fmt.Println("Successfully created connection to database.")
-
- // Drop previous table of same name if one exists.
- _, err = db.Exec("DROP TABLE IF EXISTS inventory;")
- checkError(err)
- fmt.Println("Finished dropping table (if existed).")
-
- // Create table.
- _, err = db.Exec("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);")
- checkError(err)
- fmt.Println("Finished creating table.")
-
- // Insert some data into table.
-	sqlStatement, err := db.Prepare("INSERT INTO inventory (name, quantity) VALUES (?, ?);")
-	// Check the error from Prepare before using the statement.
-	checkError(err)
-	defer sqlStatement.Close()
-	res, err := sqlStatement.Exec("banana", 150)
-	checkError(err)
-	rowCount, err := res.RowsAffected()
-	checkError(err)
-	fmt.Printf("Inserted %d row(s) of data.\n", rowCount)
-
- res, err = sqlStatement.Exec("orange", 154)
- checkError(err)
-	rowCount, err = res.RowsAffected()
-	checkError(err)
- fmt.Printf("Inserted %d row(s) of data.\n", rowCount)
-
- res, err = sqlStatement.Exec("apple", 100)
- checkError(err)
-	rowCount, err = res.RowsAffected()
-	checkError(err)
- fmt.Printf("Inserted %d row(s) of data.\n", rowCount)
- fmt.Println("Done.")
-}
-
-```
-
-## Read data
-Use the following code to connect and read the data by using a **SELECT** SQL statement.
-
-The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line.
-
-The code calls method [sql.Open()](http://go-database-sql.org/accessing.html) to connect to Azure Database for MySQL, and checks the connection using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Query()](https://go.dev/pkg/database/sql/#DB.Query) method to run the select command. Then it runs [Next()](https://go.dev/pkg/database/sql/#Rows.Next) to iterate through the result set and [Scan()](https://go.dev/pkg/database/sql/#Rows.Scan) to parse the column values, saving each value into variables. Each time, a custom checkError() method checks whether an error occurred and panics to exit if so.
-
-Replace the `host`, `database`, `user`, and `password` constants with your own values.
-
-```Go
-package main
-
-import (
- "database/sql"
- "fmt"
-
- _ "github.com/go-sql-driver/mysql"
-)
-
-const (
- host = "mydemoserver.mysql.database.azure.com"
- database = "quickstartdb"
- user = "myadmin@mydemoserver"
- password = "yourpassword"
-)
-
-func checkError(err error) {
- if err != nil {
- panic(err)
- }
-}
-
-func main() {
-
- // Initialize connection string.
- var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database)
-
- // Initialize connection object.
- db, err := sql.Open("mysql", connectionString)
- checkError(err)
- defer db.Close()
-
- err = db.Ping()
- checkError(err)
- fmt.Println("Successfully created connection to database.")
-
- // Variables for printing column data when scanned.
- var (
- id int
- name string
- quantity int
- )
-
- // Read some data from the table.
-	rows, err := db.Query("SELECT id, name, quantity FROM inventory;")
- checkError(err)
- defer rows.Close()
- fmt.Println("Reading data:")
- for rows.Next() {
- err := rows.Scan(&id, &name, &quantity)
- checkError(err)
- fmt.Printf("Data row = (%d, %s, %d)\n", id, name, quantity)
- }
- err = rows.Err()
- checkError(err)
- fmt.Println("Done.")
-}
-```
-
-## Update data
-Use the following code to connect and update the data using an **UPDATE** SQL statement.
-
-The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line.
-
-The code calls method [sql.Open()](http://go-database-sql.org/accessing.html) to connect to Azure Database for MySQL, and checks the connection using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method to run the update command. Each time, a custom checkError() method checks whether an error occurred and panics to exit if so.
-
-Replace the `host`, `database`, `user`, and `password` constants with your own values.
-
-```Go
-package main
-
-import (
- "database/sql"
- "fmt"
-
- _ "github.com/go-sql-driver/mysql"
-)
-
-const (
- host = "mydemoserver.mysql.database.azure.com"
- database = "quickstartdb"
- user = "myadmin@mydemoserver"
- password = "yourpassword"
-)
-
-func checkError(err error) {
- if err != nil {
- panic(err)
- }
-}
-
-func main() {
-
- // Initialize connection string.
- var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database)
-
- // Initialize connection object.
- db, err := sql.Open("mysql", connectionString)
- checkError(err)
- defer db.Close()
-
- err = db.Ping()
- checkError(err)
- fmt.Println("Successfully created connection to database.")
-
- // Modify some data in table.
-	res, err := db.Exec("UPDATE inventory SET quantity = ? WHERE name = ?", 200, "banana")
-	checkError(err)
-	rowCount, err := res.RowsAffected()
-	checkError(err)
- fmt.Printf("Updated %d row(s) of data.\n", rowCount)
- fmt.Println("Done.")
-}
-```
-
-## Delete data
-Use the following code to connect and remove data using a **DELETE** SQL statement.
-
-The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line.
-
-The code calls method [sql.Open()](http://go-database-sql.org/accessing.html) to connect to Azure Database for MySQL, and checks the connection using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method to run the delete command. Each time, a custom checkError() method checks whether an error occurred and panics to exit if so.
-
-Replace the `host`, `database`, `user`, and `password` constants with your own values.
-
-```Go
-package main
-
-import (
- "database/sql"
- "fmt"
- _ "github.com/go-sql-driver/mysql"
-)
-
-const (
- host = "mydemoserver.mysql.database.azure.com"
- database = "quickstartdb"
- user = "myadmin@mydemoserver"
- password = "yourpassword"
-)
-
-func checkError(err error) {
- if err != nil {
- panic(err)
- }
-}
-
-func main() {
-
- // Initialize connection string.
- var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database)
-
- // Initialize connection object.
- db, err := sql.Open("mysql", connectionString)
- checkError(err)
- defer db.Close()
-
- err = db.Ping()
- checkError(err)
- fmt.Println("Successfully created connection to database.")
-
-	// Delete some data from table.
-	res, err := db.Exec("DELETE FROM inventory WHERE name = ?", "orange")
-	checkError(err)
-	rowCount, err := res.RowsAffected()
-	checkError(err)
- fmt.Printf("Deleted %d row(s) of data.\n", rowCount)
- fmt.Println("Done.")
-}
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Migrate your database using Export and Import](./concepts-migrate-import-export.md)
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-java.md
- Title: 'Quickstart: Use Java and JDBC with Azure Database for MySQL'
-description: Learn how to use Java and JDBC with an Azure Database for MySQL database.
-Previously updated: 08/17/2020
-# Quickstart: Use Java and JDBC with Azure Database for MySQL
-
-This topic demonstrates creating a sample application that uses Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to store and retrieve information in [Azure Database for MySQL](./index.yml).
-
-JDBC is the standard Java API to connect to traditional relational databases.
-
-## Prerequisites
-
-- An Azure account. If you don't have one, [get a free trial](https://azure.microsoft.com/free/).
-- [Azure Cloud Shell](../cloud-shell/quickstart.md) or [Azure CLI](/cli/azure/install-azure-cli). We recommend Azure Cloud Shell so you'll be logged in automatically and have access to all the tools you'll need.
-- A supported [Java Development Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8 (included in Azure Cloud Shell).
-- The [Apache Maven](https://maven.apache.org/) build tool.
-
-## Prepare the working environment
-
-We are going to use environment variables to limit typing mistakes, and to make it easier for you to customize the following configuration for your specific needs.
-
-Set up those environment variables by using the following commands:
-
-```bash
-AZ_RESOURCE_GROUP=database-workshop
-AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
-AZ_LOCATION=<YOUR_AZURE_REGION>
-AZ_MYSQL_USERNAME=demo
-AZ_MYSQL_PASSWORD=<YOUR_MYSQL_PASSWORD>
-AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
-```
-
-Replace the placeholders with the following values, which are used throughout this article:
-
-- `<YOUR_DATABASE_NAME>`: The name of your MySQL server. It should be unique across Azure.
-- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can see the full list of available regions by running `az account list-locations`.
-- `<YOUR_MYSQL_PASSWORD>`: The password of your MySQL database server. The password must have a minimum of eight characters, using characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).
-- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Java application. One convenient way to find it is to point your browser to [whatismyip.akamai.com](http://whatismyip.akamai.com/).
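-
-For example, a filled-in version might look like the following sketch; `mysqldocdemo` is a hypothetical server name and must be unique across Azure, and the remaining placeholders must still be replaced with your own values:
-
-```bash
-AZ_RESOURCE_GROUP=database-workshop
-AZ_DATABASE_NAME=mysqldocdemo
-AZ_LOCATION=eastus
-AZ_MYSQL_USERNAME=demo
-AZ_MYSQL_PASSWORD=<YOUR_MYSQL_PASSWORD>
-AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
-```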
-Next, create a resource group:
-
-```azurecli
-az group create \
- --name $AZ_RESOURCE_GROUP \
- --location $AZ_LOCATION \
- | jq
-```
-
-> [!NOTE]
-> We use the `jq` utility, which is installed by default on [Azure Cloud Shell](https://shell.azure.com/) to display JSON data and make it more readable.
-> If you don't like that utility, you can safely remove the `| jq` part of all the commands we'll use.
-
-## Create an Azure Database for MySQL instance
-
-The first thing we'll create is a managed MySQL server.
-
-> [!NOTE]
-> You can read more detailed information about creating MySQL servers in [Create an Azure Database for MySQL server by using the Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md).
-
-In [Azure Cloud Shell](https://shell.azure.com/), run the following script:
-
-```azurecli
-az mysql server create \
- --resource-group $AZ_RESOURCE_GROUP \
- --name $AZ_DATABASE_NAME \
- --location $AZ_LOCATION \
- --sku-name B_Gen5_1 \
- --storage-size 5120 \
- --admin-user $AZ_MYSQL_USERNAME \
- --admin-password $AZ_MYSQL_PASSWORD \
- | jq
-```
-
-This command creates a small MySQL server.
-
-### Configure a firewall rule for your MySQL server
-
-Azure Database for MySQL instances are secured by default. They have a firewall that doesn't allow any incoming connection. To be able to use your database, you need to add a firewall rule that will allow the local IP address to access the database server.
-
-Because you configured your local IP address at the beginning of this article, you can open the server's firewall by running:
-
-```azurecli
-az mysql server firewall-rule create \
- --resource-group $AZ_RESOURCE_GROUP \
- --name $AZ_DATABASE_NAME-database-allow-local-ip \
- --server $AZ_DATABASE_NAME \
- --start-ip-address $AZ_LOCAL_IP_ADDRESS \
- --end-ip-address $AZ_LOCAL_IP_ADDRESS \
- | jq
-```
-
-### Configure a MySQL database
-
-The MySQL server that you created earlier is empty. It doesn't have any database that you can use with the Java application. Create a new database called `demo`:
-
-```azurecli
-az mysql db create \
- --resource-group $AZ_RESOURCE_GROUP \
- --name demo \
- --server-name $AZ_DATABASE_NAME \
- | jq
-```
-
-### Create a new Java project
-
-Using your favorite IDE, create a new Java project, and add a `pom.xml` file in its root directory:
-
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
- <modelVersion>4.0.0</modelVersion>
- <groupId>com.example</groupId>
- <artifactId>demo</artifactId>
- <version>0.0.1-SNAPSHOT</version>
- <name>demo</name>
-
- <properties>
- <java.version>1.8</java.version>
- <maven.compiler.source>1.8</maven.compiler.source>
- <maven.compiler.target>1.8</maven.compiler.target>
- </properties>
-
- <dependencies>
- <dependency>
- <groupId>mysql</groupId>
- <artifactId>mysql-connector-java</artifactId>
- <version>8.0.20</version>
- </dependency>
- </dependencies>
-</project>
-```
-
-This file is an [Apache Maven](https://maven.apache.org/) configuration file that sets up our project to use:
-
-- Java 8
-- A recent MySQL driver for Java
-
-### Prepare a configuration file to connect to Azure Database for MySQL
-
-Create a *src/main/resources/application.properties* file, and add:
-
-```properties
-url=jdbc:mysql://$AZ_DATABASE_NAME.mysql.database.azure.com:3306/demo?serverTimezone=UTC
-user=demo@$AZ_DATABASE_NAME
-password=$AZ_MYSQL_PASSWORD
-```
-
-- Replace the two `$AZ_DATABASE_NAME` variables with the value that you configured at the beginning of this article.
-- Replace the `$AZ_MYSQL_PASSWORD` variable with the value that you configured at the beginning of this article.
-
-> [!NOTE]
-> We append `?serverTimezone=UTC` to the configuration property `url`, to tell the JDBC driver to use the UTC date format (or Coordinated Universal Time) when connecting to the database. Otherwise, our Java server would not use the same date format as the database, which would result in an error.
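-
-For example, assuming a hypothetical server named `mysqldocdemo` (your own server name will differ), the finished file would look like this sketch:
-
-```properties
-url=jdbc:mysql://mysqldocdemo.mysql.database.azure.com:3306/demo?serverTimezone=UTC
-user=demo@mysqldocdemo
-password=<YOUR_MYSQL_PASSWORD>
-```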
-
-### Create an SQL file to generate the database schema
-
-We will use a *src/main/resources/schema.sql* file to create a database schema. Create that file with the following content:
-
-```sql
-DROP TABLE IF EXISTS todo;
-CREATE TABLE todo (id SERIAL PRIMARY KEY, description VARCHAR(255), details VARCHAR(4096), done BOOLEAN);
-```
-
-## Code the application
-
-### Connect to the database
-
-Next, add the Java code that will use JDBC to store and retrieve data from your MySQL server.
-
-Create a *src/main/java/DemoApplication.java* file, that contains:
-
-```java
-package com.example.demo;
-
-import com.mysql.cj.jdbc.AbandonedConnectionCleanupThread;
-
-import java.sql.*;
-import java.util.*;
-import java.util.logging.Logger;
-
-public class DemoApplication {
-
- private static final Logger log;
-
- static {
- System.setProperty("java.util.logging.SimpleFormatter.format", "[%4$-7s] %5$s %n");
- log =Logger.getLogger(DemoApplication.class.getName());
- }
-
- public static void main(String[] args) throws Exception {
- log.info("Loading application properties");
- Properties properties = new Properties();
- properties.load(DemoApplication.class.getClassLoader().getResourceAsStream("application.properties"));
-
- log.info("Connecting to the database");
- Connection connection = DriverManager.getConnection(properties.getProperty("url"), properties);
- log.info("Database connection test: " + connection.getCatalog());
-
- log.info("Create database schema");
- Scanner scanner = new Scanner(DemoApplication.class.getClassLoader().getResourceAsStream("schema.sql"));
- Statement statement = connection.createStatement();
- while (scanner.hasNextLine()) {
- statement.execute(scanner.nextLine());
- }
-
- /*
- Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
- insertData(todo, connection);
- todo = readData(connection);
- todo.setDetails("congratulations, you have updated data!");
- updateData(todo, connection);
- deleteData(todo, connection);
- */
-
- log.info("Closing database connection");
- connection.close();
- AbandonedConnectionCleanupThread.uncheckedShutdown();
- }
-}
-```
-
-This Java code will use the *application.properties* and the *schema.sql* files that we created earlier, in order to connect to the MySQL server and create a schema that will store our data.
-
-In this file, you can see that we commented out the method calls that insert, read, update, and delete data: we will code those methods in the rest of this article, and you will be able to uncomment them one after the other.
-
-> [!NOTE]
-> The database credentials are stored in the *user* and *password* properties of the *application.properties* file. Those credentials are used when executing `DriverManager.getConnection(properties.getProperty("url"), properties);`, as the properties file is passed as an argument.
-
-> [!NOTE]
-> The `AbandonedConnectionCleanupThread.uncheckedShutdown();` line at the end is a MySQL-driver-specific command to destroy an internal thread when shutting down the application.
-> It can be safely ignored.
-
-You can now execute this main class with your favorite tool:
-
-- Using your IDE, you should be able to right-click on the *DemoApplication* class and execute it.
-- Using Maven, you can run the application by executing `mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication"`, as shown in the sketch below.
-
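-A minimal sketch of the Maven sequence follows; depending on your setup, you may need to compile before invoking the exec plugin:
-
-```bash
-mvn compile
-mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication"
-```
-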
-The application should connect to the Azure Database for MySQL, create a database schema, and then close the connection, as you should see in the console logs:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Closing database connection
-```
-
-### Create a domain class
-
-Create a new `Todo` Java class, next to the `DemoApplication` class, and add the following code:
-
-```java
-package com.example.demo;
-
-public class Todo {
-
- private Long id;
- private String description;
- private String details;
- private boolean done;
-
- public Todo() {
- }
-
- public Todo(Long id, String description, String details, boolean done) {
- this.id = id;
- this.description = description;
- this.details = details;
- this.done = done;
- }
-
- public Long getId() {
- return id;
- }
-
- public void setId(Long id) {
- this.id = id;
- }
-
- public String getDescription() {
- return description;
- }
-
- public void setDescription(String description) {
- this.description = description;
- }
-
- public String getDetails() {
- return details;
- }
-
- public void setDetails(String details) {
- this.details = details;
- }
-
- public boolean isDone() {
- return done;
- }
-
- public void setDone(boolean done) {
- this.done = done;
- }
-
- @Override
- public String toString() {
- return "Todo{" +
- "id=" + id +
- ", description='" + description + '\'' +
- ", details='" + details + '\'' +
- ", done=" + done +
- '}';
- }
-}
-```
-
-This class is a domain model mapped on the `todo` table that you created when executing the *schema.sql* script.
-
-### Insert data into Azure Database for MySQL
-
-In the *src/main/java/DemoApplication.java* file, after the main method, add the following method to insert data into the database:
-
-```java
-private static void insertData(Todo todo, Connection connection) throws SQLException {
- log.info("Insert data");
- PreparedStatement insertStatement = connection
- .prepareStatement("INSERT INTO todo (id, description, details, done) VALUES (?, ?, ?, ?);");
-
- insertStatement.setLong(1, todo.getId());
- insertStatement.setString(2, todo.getDescription());
- insertStatement.setString(3, todo.getDetails());
- insertStatement.setBoolean(4, todo.isDone());
- insertStatement.executeUpdate();
-}
-```
-
-You can now uncomment the two following lines in the `main` method:
-
-```java
-Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
-insertData(todo, connection);
-```
-
-Executing the main class should now produce the following output:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Insert data
-[INFO ] Closing database connection
-```
-
-### Reading data from Azure Database for MySQL
-
-Let's read the data previously inserted, to validate that our code works correctly.
-
-In the *src/main/java/DemoApplication.java* file, after the `insertData` method, add the following method to read data from the database:
-
-```java
-private static Todo readData(Connection connection) throws SQLException {
- log.info("Read data");
- PreparedStatement readStatement = connection.prepareStatement("SELECT * FROM todo;");
- ResultSet resultSet = readStatement.executeQuery();
- if (!resultSet.next()) {
- log.info("There is no data in the database!");
- return null;
- }
- Todo todo = new Todo();
- todo.setId(resultSet.getLong("id"));
- todo.setDescription(resultSet.getString("description"));
- todo.setDetails(resultSet.getString("details"));
- todo.setDone(resultSet.getBoolean("done"));
- log.info("Data read from the database: " + todo.toString());
- return todo;
-}
-```
-
-You can now uncomment the following line in the `main` method:
-
-```java
-todo = readData(connection);
-```
-
-Executing the main class should now produce the following output:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Insert data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
-[INFO ] Closing database connection
-```
-
-### Updating data in Azure Database for MySQL
-
-Let's update the data we previously inserted.
-
-Still in the *src/main/java/DemoApplication.java* file, after the `readData` method, add the following method to update data inside the database:
-
-```java
-private static void updateData(Todo todo, Connection connection) throws SQLException {
- log.info("Update data");
- PreparedStatement updateStatement = connection
- .prepareStatement("UPDATE todo SET description = ?, details = ?, done = ? WHERE id = ?;");
-
- updateStatement.setString(1, todo.getDescription());
- updateStatement.setString(2, todo.getDetails());
- updateStatement.setBoolean(3, todo.isDone());
- updateStatement.setLong(4, todo.getId());
- updateStatement.executeUpdate();
- readData(connection);
-}
-```
-
-You can now uncomment the two following lines in the `main` method:
-
-```java
-todo.setDetails("congratulations, you have updated data!");
-updateData(todo, connection);
-```
-
-Executing the main class should now produce the following output:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Insert data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
-[INFO ] Update data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true}
-[INFO ] Closing database connection
-```
-
-### Deleting data in Azure Database for MySQL
-
-Finally, let's delete the data we previously inserted.
-
-Still in the *src/main/java/DemoApplication.java* file, after the `updateData` method, add the following method to delete data inside the database:
-
-```java
-private static void deleteData(Todo todo, Connection connection) throws SQLException {
- log.info("Delete data");
- PreparedStatement deleteStatement = connection.prepareStatement("DELETE FROM todo WHERE id = ?;");
- deleteStatement.setLong(1, todo.getId());
- deleteStatement.executeUpdate();
- readData(connection);
-}
-```
-
-You can now uncomment the following line in the `main` method:
-
-```java
-deleteData(todo, connection);
-```
-
-Executing the main class should now produce the following output:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Insert data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
-[INFO ] Update data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true}
-[INFO ] Delete data
-[INFO ] Read data
-[INFO ] There is no data in the database!
-[INFO ] Closing database connection
-```
-
-## Clean up resources
-
-Congratulations! You've created a Java application that uses JDBC to store and retrieve data from Azure Database for MySQL.
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Migrate your MySQL database to Azure Database for MySQL using dump and restore](concepts-migrate-dump-restore.md)
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-nodejs.md
- Title: 'Quickstart: Connect using Node.js - Azure Database for MySQL'
-description: This quickstart provides several Node.js code samples you can use to connect and query data from Azure Database for MySQL.
-Previously updated: 12/11/2020
-# Quickstart: Use Node.js to connect and query data in Azure Database for MySQL
-
-In this quickstart, you connect to an Azure Database for MySQL by using Node.js. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Ubuntu Linux, and Windows platforms.
-
-This topic assumes that you're familiar with developing using Node.js, but that you're new to working with Azure Database for MySQL.
-
-## Prerequisites
-
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-- An Azure Database for MySQL server. [Create an Azure Database for MySQL server using Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md) or [Create an Azure Database for MySQL server using Azure CLI](quickstart-create-mysql-server-database-using-azure-cli.md).
-
-> [!IMPORTANT]
-> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./howto-manage-firewall-using-portal.md) or [Azure CLI](./howto-manage-firewall-using-cli.md).
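-
-For example, with the Azure CLI, a rule that allows only your own IP address can be created along the following lines. This is a sketch: the rule name is arbitrary, and the server name and IP address placeholders must be replaced with your own values.
-
-```azurecli
-az mysql server firewall-rule create \
-    --resource-group <your-resource-group> \
-    --name allow-local-ip \
-    --server <your-server-name> \
-    --start-ip-address <your-local-ip> \
-    --end-ip-address <your-local-ip>
-```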
-
-## Install Node.js and the MySQL connector
-
-Depending on your platform, follow the instructions in the appropriate section to install [Node.js](https://nodejs.org). Use npm to install the [mysql](https://www.npmjs.com/package/mysql) package and its dependencies into your project folder.
-
-### Windows
-
-1. Visit the [Node.js downloads page](https://nodejs.org/en/download/), and then select your desired Windows installer option.
-2. Make a local project folder such as `nodejsmysql`.
-3. Open the command prompt, and then change directory into the project folder, such as `cd c:\nodejsmysql\`
-4. Run the NPM tool to install the mysql library into the project folder.
-
- ```cmd
- cd c:\nodejsmysql\
- "C:\Program Files\nodejs\npm" install mysql
- "C:\Program Files\nodejs\npm" list
- ```
-
-5. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released.
-
-### Linux (Ubuntu)
-
-1. Run the following commands to install **Node.js** and **npm**, the package manager for Node.js.
-
- ```bash
- # Using Ubuntu
- curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -
- sudo apt-get install -y nodejs
-
- # Using Debian, as root
- curl -sL https://deb.nodesource.com/setup_14.x | bash -
- apt-get install -y nodejs
- ```
-
-2. Run the following commands to create a project folder `nodejsmysql` and install the mysql package into that folder.
-
- ```bash
- mkdir nodejsmysql
- cd nodejsmysql
- npm install --save mysql
- npm list
- ```
-3. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released.
-
-### macOS
-
-1. Visit the [Node.js downloads page](https://nodejs.org/en/download/), and then select your macOS installer.
-
-2. Run the following commands to create a project folder `nodejsmysql` and install the mysql package into that folder.
-
- ```bash
- mkdir nodejsmysql
- cd nodejsmysql
- npm install --save mysql
- npm list
- ```
-
-3. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released.
-
-## Get connection information
-
-Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
-
-1. Log in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, select **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Select the server name.
-4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-nodejs/server-name-azure-database-mysql.png" alt-text="Azure Database for MySQL server name":::
-
-## Running the code samples
-
-1. Paste the JavaScript code into new text files, and then save them into a project folder with file extension .js (such as C:\nodejsmysql\createtable.js or /home/username/nodejsmysql/createtable.js).
-1. Replace `host`, `user`, `password` and `database` config options in the code with the values that you specified when you created the server and database.
-1. **Obtain SSL certificate**: Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) and save the certificate file to your local drive.
-
- **For Microsoft Internet Explorer and Microsoft Edge:** After the download has completed, rename the certificate to BaltimoreCyberTrustRoot.crt.pem.
-
- See the following links for certificates for servers in sovereign clouds: [Azure Government](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem), [Azure China](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt).
-1. In the `ssl` config option, replace the `ca-cert` filename with the path to this local file.
-1. Open the command prompt or bash shell, and then change directory into your project folder `cd nodejsmysql`.
-1. To run the application, enter the node command followed by the file name, such as `node createtable.js`.
-1. On Windows, if the node application is not in your environment variable path, you may need to use the full path to launch the node application, such as `"C:\Program Files\nodejs\node.exe" createtable.js`, as shown in the sketch below.
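-
-For example, on Windows the sequence from the last two steps might look like the following sketch (paths are illustrative):
-
-```cmd
-cd c:\nodejsmysql\
-node createtable.js
-
-REM If node is not on your PATH, call it by its full path instead:
-"C:\Program Files\nodejs\node.exe" createtable.js
-```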
-
-## Connect, create table, and insert data
-
-Use the following code to connect and load the data by using **CREATE TABLE** and **INSERT INTO** SQL statements.
-
-The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) function is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) function is used to execute the SQL query against MySQL database.
-
-```javascript
-const mysql = require('mysql');
-const fs = require('fs');
-
-var config =
-{
- host: 'mydemoserver.mysql.database.azure.com',
- user: 'myadmin@mydemoserver',
- password: 'your_password',
- database: 'quickstartdb',
- port: 3306,
- ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
-};
-
-const conn = mysql.createConnection(config);
-
-conn.connect(
- function (err) {
- if (err) {
- console.log("!!! Cannot connect !!! Error:");
- throw err;
- }
- else
- {
- console.log("Connection established.");
- queryDatabase();
- }
-});
-
-function queryDatabase(){
- conn.query('DROP TABLE IF EXISTS inventory;', function (err, results, fields) {
- if (err) throw err;
- console.log('Dropped inventory table if existed.');
- })
- conn.query('CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);',
- function (err, results, fields) {
- if (err) throw err;
- console.log('Created inventory table.');
- })
- conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['banana', 150],
- function (err, results, fields) {
- if (err) throw err;
- else console.log('Inserted ' + results.affectedRows + ' row(s).');
- })
- conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['orange', 154],
- function (err, results, fields) {
- if (err) throw err;
- console.log('Inserted ' + results.affectedRows + ' row(s).');
- })
- conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['apple', 100],
- function (err, results, fields) {
- if (err) throw err;
- console.log('Inserted ' + results.affectedRows + ' row(s).');
- })
- conn.end(function (err) {
- if (err) throw err;
- else console.log('Done.')
- });
-};
-```
-
-## Read data
-
-Use the following code to connect and read the data by using a **SELECT** SQL statement.
-
-The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against MySQL database. The results array is used to hold the results of the query.
-
-```javascript
-const mysql = require('mysql');
-const fs = require('fs');
-
-var config =
-{
- host: 'mydemoserver.mysql.database.azure.com',
- user: 'myadmin@mydemoserver',
- password: 'your_password',
- database: 'quickstartdb',
- port: 3306,
- ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
-};
-
-const conn = mysql.createConnection(config);
-
-conn.connect(
- function (err) {
- if (err) {
- console.log("!!! Cannot connect !!! Error:");
- throw err;
- }
- else {
- console.log("Connection established.");
- readData();
- }
- });
-
-function readData(){
- conn.query('SELECT * FROM inventory',
- function (err, results, fields) {
- if (err) throw err;
- else console.log('Selected ' + results.length + ' row(s).');
-        for (let i = 0; i < results.length; i++) {
- console.log('Row: ' + JSON.stringify(results[i]));
- }
- console.log('Done.');
- })
- conn.end(
- function (err) {
- if (err) throw err;
- else console.log('Closing connection.')
- });
-};
-```
-
-## Update data
-
-Use the following code to connect and update data by using an **UPDATE** SQL statement.
-
-The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against MySQL database.
-
-```javascript
-const mysql = require('mysql');
-const fs = require('fs');
-
-var config =
-{
- host: 'mydemoserver.mysql.database.azure.com',
- user: 'myadmin@mydemoserver',
- password: 'your_password',
- database: 'quickstartdb',
- port: 3306,
- ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
-};
-
-const conn = mysql.createConnection(config);
-
-conn.connect(
- function (err) {
- if (err) {
- console.log("!!! Cannot connect !!! Error:");
- throw err;
- }
- else {
- console.log("Connection established.");
- updateData();
- }
- });
-
-function updateData(){
- conn.query('UPDATE inventory SET quantity = ? WHERE name = ?', [200, 'banana'],
- function (err, results, fields) {
- if (err) throw err;
- else console.log('Updated ' + results.affectedRows + ' row(s).');
- })
- conn.end(
- function (err) {
- if (err) throw err;
- else console.log('Done.')
- });
-};
-```
-
-## Delete data
-
-Use the following code to connect and delete data by using a **DELETE** SQL statement.
-
-The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against MySQL database.
-
-```javascript
-const mysql = require('mysql');
-const fs = require('fs');
-
-var config =
-{
- host: 'mydemoserver.mysql.database.azure.com',
- user: 'myadmin@mydemoserver',
- password: 'your_password',
- database: 'quickstartdb',
- port: 3306,
- ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
-};
-
-const conn = mysql.createConnection(config);
-
-conn.connect(
- function (err) {
- if (err) {
- console.log("!!! Cannot connect !!! Error:");
- throw err;
- }
- else {
- console.log("Connection established.");
- deleteData();
- }
- });
-
-function deleteData(){
- conn.query('DELETE FROM inventory WHERE name = ?', ['orange'],
- function (err, results, fields) {
- if (err) throw err;
- else console.log('Deleted ' + results.affectedRows + ' row(s).');
- })
- conn.end(
- function (err) {
- if (err) throw err;
- else console.log('Done.')
- });
-};
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Migrate your database using Export and Import](./concepts-migrate-import-export.md)
mysql Connect Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-php.md
- Title: 'Quickstart: Connect using PHP - Azure Database for MySQL'
-description: This quickstart provides several PHP code samples you can use to connect and query data from Azure Database for MySQL.
-Previously updated: 10/28/2020
-# Quickstart: Use PHP to connect and query data in Azure Database for MySQL
-
-This quickstart demonstrates how to connect to an Azure Database for MySQL using a [PHP](https://secure.php.net/manual/intro-whatis.php) application. It shows how to use SQL statements to query, insert, update, and delete data in the database.
-
-## Prerequisites
-For this quickstart you need:
-
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-- An Azure Database for MySQL single server. Create one by using the [Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md) or the [Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md) if you do not have one.
-- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.
-
- |Action| Connectivity method|How-to guide|
- |: |: |: |
- | **Configure firewall rules** | Public | [Portal](./howto-manage-firewall-using-portal.md) <br/> [CLI](./howto-manage-firewall-using-cli.md)|
- | **Configure Service Endpoint** | Public | [Portal](./howto-manage-vnet-using-portal.md) <br/> [CLI](./howto-manage-vnet-using-cli.md)|
- | **Configure private link** | Private | [Portal](./howto-configure-privatelink-portal.md) <br/> [CLI](./howto-configure-privatelink-cli.md) |
-
-- [Create a database and non-admin user](./howto-create-users.md?tabs=single-server)
-- Install the latest PHP version for your operating system:
- - [PHP on macOS](https://secure.php.net/manual/install.macosx.php)
- - [PHP on Linux](https://secure.php.net/manual/install.unix.php)
- - [PHP on Windows](https://secure.php.net/manual/install.windows.php)
-
-> [!NOTE]
-> We are using the [MySQLi](https://www.php.net/manual/en/book.mysqli.php) library to connect to and query the server in this quickstart.
-
-## Get connection information
-You can get the database server connection information from the Azure portal by following these steps:
-
-1. Log in to the [Azure portal](https://portal.azure.com/).
-2. Navigate to the Azure Databases for MySQL page. You can search for and select **Azure Database for MySQL**.
-
-3. Select your MySQL server (such as **mydemoserver**).
-4. In the **Overview** page, copy the fully qualified server name next to **Server name** and the admin user name next to **Server admin login name**. To copy the server name or host name, hover over it and select the **Copy** icon.
-
-> [!IMPORTANT]
-> - If you forgot your password, you can [reset the password](./howto-create-manage-server-portal.md#update-admin-password).
-> - Replace the **host, username, password,** and **db_name** parameters with your own values.
-
-## Step 1: Connect to the server
-SSL is enabled by default. You may need to download the [DigiCertGlobalRootG2 SSL certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) to connect from your local environment. This code calls:
-- [mysqli_init](https://secure.php.net/manual/mysqli.init.php) to initialize MySQLi.
-- [mysqli_ssl_set](https://www.php.net/manual/en/mysqli.ssl-set.php) to point to the SSL certificate path. This is required for your local environment, but not for App Service Web Apps or Azure Virtual Machines.
-- [mysqli_real_connect](https://secure.php.net/manual/mysqli.real-connect.php) to connect to MySQL.
-- [mysqli_close](https://secure.php.net/manual/mysqli.close.php) to close the connection.
-
-```php
-$host = 'mydemoserver.mysql.database.azure.com';
-$username = 'myadmin@mydemoserver';
-$password = 'your_password';
-$db_name = 'your_database';
-
-//Initializes MySQLi
-$conn = mysqli_init();
-
-mysqli_ssl_set($conn,NULL,NULL, "/var/www/html/DigiCertGlobalRootG2.crt.pem", NULL, NULL);
-
-// Establish the connection
-mysqli_real_connect($conn, $host, $username, $password, $db_name, 3306, NULL, MYSQLI_CLIENT_SSL);
-
-//If connection failed, show the error
-if (mysqli_connect_errno())
-{
- die('Failed to connect to MySQL: '.mysqli_connect_error());
-}
-```
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
-
-## Step 2: Create a Table
-Use the following code to create a table. This code calls:
-
-- [mysqli_query](https://secure.php.net/manual/mysqli.query.php) to run the query.
-
-```php
-// Run the create table query
-if (mysqli_query($conn, '
-CREATE TABLE Products (
-`Id` INT NOT NULL AUTO_INCREMENT ,
-`ProductName` VARCHAR(200) NOT NULL ,
-`Color` VARCHAR(50) NOT NULL ,
-`Price` DOUBLE NOT NULL ,
-PRIMARY KEY (`Id`)
-);
-')) {
-    printf("Table created\n");
-}
-```
-
-## Step 3: Insert data
-Use the following code to insert data by using an **INSERT** SQL statement. This code uses the methods:
-
-- [mysqli_prepare](https://secure.php.net/manual/mysqli.prepare.php) to create a prepared insert statement.
-- [mysqli_stmt_bind_param](https://secure.php.net/manual/mysqli-stmt.bind-param.php) to bind the parameters for each inserted column value.
-- [mysqli_stmt_execute](https://secure.php.net/manual/mysqli-stmt.execute.php) to execute the prepared insert statement.
-- [mysqli_stmt_close](https://secure.php.net/manual/mysqli-stmt.close.php) to close the statement.
-
-```php
-//Create an Insert prepared statement and run it
-$product_name = 'BrandNewProduct';
-$product_color = 'Blue';
-$product_price = 15.5;
-if ($stmt = mysqli_prepare($conn, "INSERT INTO Products (ProductName, Color, Price) VALUES (?, ?, ?)"))
-{
- mysqli_stmt_bind_param($stmt, 'ssd', $product_name, $product_color, $product_price);
- mysqli_stmt_execute($stmt);
- printf("Insert: Affected %d rows\n", mysqli_stmt_affected_rows($stmt));
- mysqli_stmt_close($stmt);
-}
-
-```
-
-## Step 4: Read data
-Use the following code to read the data by using a **SELECT** SQL statement. The code uses the methods:
-
-- [mysqli_query](https://secure.php.net/manual/mysqli.query.php) to execute the **SELECT** query.
-- [mysqli_fetch_assoc](https://secure.php.net/manual/mysqli-result.fetch-assoc.php) to fetch the resulting rows.
-
-```php
-//Run the Select query
-printf("Reading data from table: \n");
-$res = mysqli_query($conn, 'SELECT * FROM Products');
-while ($row = mysqli_fetch_assoc($res)) {
-    var_dump($row);
-}
-
-```
-
-## Step 5: Delete data
-Use the following code to delete rows by using a **DELETE** SQL statement. The code uses the methods:
-
-- [mysqli_prepare](https://secure.php.net/manual/mysqli.prepare.php) to create a prepared delete statement.
-- [mysqli_stmt_bind_param](https://secure.php.net/manual/mysqli-stmt.bind-param.php) to bind the parameters.
-- [mysqli_stmt_execute](https://secure.php.net/manual/mysqli-stmt.execute.php) to execute the prepared delete statement.
-- [mysqli_stmt_close](https://secure.php.net/manual/mysqli-stmt.close.php) to close the statement.
-
-```php
-//Run the Delete statement
-$product_name = 'BrandNewProduct';
-if ($stmt = mysqli_prepare($conn, "DELETE FROM Products WHERE ProductName = ?")) {
-    mysqli_stmt_bind_param($stmt, 's', $product_name);
-    mysqli_stmt_execute($stmt);
-    printf("Delete: Affected %d rows\n", mysqli_stmt_affected_rows($stmt));
-    mysqli_stmt_close($stmt);
-}
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Manage Azure Database for MySQL server using Portal](./howto-create-manage-server-portal.md)<br/>
-
-> [!div class="nextstepaction"]
-> [Manage Azure Database for MySQL server using CLI](./how-to-manage-single-server-cli.md)
-
-[Cannot find what you are looking for? Let us know.](https://aka.ms/mysql-doc-feedback)
mysql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-python.md
- Title: 'Quickstart: Connect using Python - Azure Database for MySQL'
-description: This quickstart provides several Python code samples you can use to connect and query data from Azure Database for MySQL.
-Previously updated: 10/28/2020
-# Quickstart: Use Python to connect and query data in Azure Database for MySQL
-
-In this quickstart, you connect to an Azure Database for MySQL by using Python. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Ubuntu Linux, and Windows platforms.
-
-## Prerequisites
-For this quickstart you need:
-
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-- An Azure Database for MySQL single server. Create one by using the [Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md) or the [Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md) if you do not have one.
-- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.
-
- |Action| Connectivity method|How-to guide|
- |: |: |: |
- | **Configure firewall rules** | Public | [Portal](./howto-manage-firewall-using-portal.md) <br/> [CLI](./howto-manage-firewall-using-cli.md)|
- | **Configure Service Endpoint** | Public | [Portal](./howto-manage-vnet-using-portal.md) <br/> [CLI](./howto-manage-vnet-using-cli.md)|
- | **Configure private link** | Private | [Portal](./howto-configure-privatelink-portal.md) <br/> [CLI](./howto-configure-privatelink-cli.md) |
-
-- [Create a database and non-admin user](./howto-create-users.md)
-
-## Install Python and the MySQL connector
-
-Install Python and the MySQL connector for Python on your computer by using the following steps:
-
-> [!NOTE]
-> This quickstart uses MySQL Connector/Python. For details, see the [MySQL Connector/Python Developer Guide](https://dev.mysql.com/doc/connector-python/en/).
-
-1. Download and install [Python 3.7 or above](https://www.python.org/downloads/) for your OS. Make sure to add Python to your `PATH`, because the MySQL connector requires that.
-
-2. Open a command prompt or `bash` shell, and check your Python version by running `python -V` with the uppercase V switch.
-
-3. The `pip` package installer is included in the latest versions of Python. Update `pip` to the latest version by running `pip install -U pip`.
-
- If `pip` isn't installed, you can download and install it with `get-pip.py`. For more information, see [Installation](https://pip.pypa.io/en/stable/installing/).
-
-4. Use `pip` to install the MySQL connector for Python and its dependencies:
-
- ```bash
- pip install mysql-connector-python
- ```
-
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
-
-## Get connection information
-
-Get the connection information you need to connect to Azure Database for MySQL from the Azure portal. You need the server name, database name, and login credentials.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. In the portal search bar, search for and select the Azure Database for MySQL server you created, such as **mydemoserver**.
-
- :::image type="content" source="./media/connect-python/1_server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
-
-1. From the server's **Overview** page, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this page.
-
- :::image type="content" source="./media/connect-python/azure-database-for-mysql-server-overview-name-login.png" alt-text="Azure Database for MySQL server name 2":::
-
-## Running the Python code samples
-
-For each code example in this article:
-
-1. Create a new file in a text editor.
-2. Add the code example to the file. In the code, replace the `<mydemoserver>`, `<myadmin>`, `<mypassword>`, and `<mydatabase>` placeholders with the values for your MySQL server and database.
-1. SSL is enabled by default on Azure Database for MySQL servers. You may need to download the [DigiCertGlobalRootG2 SSL certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) to connect from your local environment. Replace the `ssl_ca` value in the code with the path to this file on your computer.
-1. Save the file in a project folder with a *.py* extension, such as *C:\pythonmysql\createtable.py* or */home/username/pythonmysql/createtable.py*.
-1. To run the code, open a command prompt or `bash` shell and change directory into your project folder, for example `cd pythonmysql`. Type the `python` command followed by the file name, for example `python createtable.py`, and press Enter, as shown in the sketch after this list.
-
- > [!NOTE]
-   > On Windows, if *python.exe* is not found, you may need to add the Python path into your PATH environment variable, or provide the full path to *python.exe*, for example `C:\Python37\python.exe createtable.py`.
-
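-For example, on Linux or macOS the run sequence from step 5 might look like the following sketch (the folder path is illustrative):
-
-```bash
-cd ~/pythonmysql
-python createtable.py
-```
-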
-## Step 1: Create a table and insert data
-
-Use the following code to connect to the server and database, create a table, and load data by using an **INSERT** SQL statement. The code imports the mysql.connector library and uses:
-
-- The [connect()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysql-connector-connect.html) function to connect to Azure Database for MySQL using the [arguments](https://dev.mysql.com/doc/connector-python/en/connector-python-connectargs.html) in the config collection.
-- The [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method to execute the SQL query against the MySQL database.
-- The [cursor.close()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-close.html) method to close the cursor when you are done using it.
-- The [conn.close()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlconnection-close.html) method to close the connection.
-
-```python
-import mysql.connector
-from mysql.connector import errorcode
-
-# Obtain connection string information from the portal
-
-config = {
- 'host':'<mydemoserver>.mysql.database.azure.com',
- 'user':'<myadmin>@<mydemoserver>',
- 'password':'<mypassword>',
- 'database':'<mydatabase>',
- 'client_flags': [mysql.connector.ClientFlag.SSL],
- 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem'
-}
-
-# Construct connection string
-
-try:
- conn = mysql.connector.connect(**config)
- print("Connection established")
-except mysql.connector.Error as err:
- if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
- print("Something is wrong with the user name or password")
- elif err.errno == errorcode.ER_BAD_DB_ERROR:
- print("Database does not exist")
- else:
- print(err)
-else:
- cursor = conn.cursor()
-
- # Drop previous table of same name if one exists
- cursor.execute("DROP TABLE IF EXISTS inventory;")
- print("Finished dropping table (if existed).")
-
- # Create table
- cursor.execute("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);")
- print("Finished creating table.")
-
- # Insert some data into table
- cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("banana", 150))
- print("Inserted",cursor.rowcount,"row(s) of data.")
- cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("orange", 154))
- print("Inserted",cursor.rowcount,"row(s) of data.")
- cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("apple", 100))
- print("Inserted",cursor.rowcount,"row(s) of data.")
-
- # Cleanup
- conn.commit()
- cursor.close()
- conn.close()
- print("Done.")
-```
-
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
-
-## Step 2: Read data
-
-Use the following code to connect and read the data by using a **SELECT** SQL statement. The code imports the mysql.connector library, and uses the [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method to execute the SQL query against the MySQL database.
-
-The code reads the data rows using the [fetchall()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-fetchall.html) method, keeps the result set in a collection called `rows`, and uses a `for` iterator to loop over the rows.
-
-```python
-import mysql.connector
-from mysql.connector import errorcode
-
-# Obtain connection string information from the portal
-
-config = {
- 'host':'<mydemoserver>.mysql.database.azure.com',
- 'user':'<myadmin>@<mydemoserver>',
- 'password':'<mypassword>',
- 'database':'<mydatabase>',
- 'client_flags': [mysql.connector.ClientFlag.SSL],
- 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem'
-}
-
-# Construct connection string
-
-try:
- conn = mysql.connector.connect(**config)
- print("Connection established")
-except mysql.connector.Error as err:
- if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
- print("Something is wrong with the user name or password")
- elif err.errno == errorcode.ER_BAD_DB_ERROR:
- print("Database does not exist")
- else:
- print(err)
-else:
- cursor = conn.cursor()
-
- # Read data
- cursor.execute("SELECT * FROM inventory;")
- rows = cursor.fetchall()
- print("Read",cursor.rowcount,"row(s) of data.")
-
- # Print all rows
- for row in rows:
- print("Data row = (%s, %s, %s)" %(str(row[0]), str(row[1]), str(row[2])))
-
- # Cleanup
- conn.commit()
- cursor.close()
- conn.close()
- print("Done.")
-```
-
-## Step 3: Update data
-
-Use the following code to connect and update the data by using an **UPDATE** SQL statement. The code imports the mysql.connector library, and uses the [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method to execute the SQL query against the MySQL database.
-
-```python
-import mysql.connector
-from mysql.connector import errorcode
-
-# Obtain connection string information from the portal
-
-config = {
- 'host':'<mydemoserver>.mysql.database.azure.com',
- 'user':'<myadmin>@<mydemoserver>',
- 'password':'<mypassword>',
- 'database':'<mydatabase>',
- 'client_flags': [mysql.connector.ClientFlag.SSL],
- 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem'
-}
-
-# Construct connection string
-
-try:
- conn = mysql.connector.connect(**config)
- print("Connection established")
-except mysql.connector.Error as err:
- if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
- print("Something is wrong with the user name or password")
- elif err.errno == errorcode.ER_BAD_DB_ERROR:
- print("Database does not exist")
- else:
- print(err)
-else:
- cursor = conn.cursor()
-
- # Update a data row in the table
- cursor.execute("UPDATE inventory SET quantity = %s WHERE name = %s;", (300, "apple"))
- print("Updated",cursor.rowcount,"row(s) of data.")
-
- # Cleanup
- conn.commit()
- cursor.close()
- conn.close()
- print("Done.")
-```
-
-## Step 4: Delete data
-
-Use the following code to connect and remove data by using a **DELETE** SQL statement. The code imports the mysql.connector library, and uses the [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method to execute the SQL query against the MySQL database.
-
-```python
-import mysql.connector
-from mysql.connector import errorcode
-
-# Obtain connection string information from the portal
-
-config = {
- 'host':'<mydemoserver>.mysql.database.azure.com',
- 'user':'<myadmin>@<mydemoserver>',
- 'password':'<mypassword>',
- 'database':'<mydatabase>',
- 'client_flags': [mysql.connector.ClientFlag.SSL],
- 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem'
-}
-
-# Construct connection string
-
-try:
- conn = mysql.connector.connect(**config)
- print("Connection established")
-except mysql.connector.Error as err:
- if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
- print("Something is wrong with the user name or password")
- elif err.errno == errorcode.ER_BAD_DB_ERROR:
- print("Database does not exist")
- else:
- print(err)
-else:
- cursor = conn.cursor()
-
- # Delete a data row in the table
- cursor.execute("DELETE FROM inventory WHERE name=%(param1)s;", {'param1':"orange"})
- print("Deleted",cursor.rowcount,"row(s) of data.")
-
- # Cleanup
- conn.commit()
- cursor.close()
- conn.close()
- print("Done.")
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Manage Azure Database for MySQL server using Portal](./howto-create-manage-server-portal.md)<br/>
-
-> [!div class="nextstepaction"]
-> [Manage Azure Database for MySQL server using CLI](./how-to-manage-single-server-cli.md)
-
-[Cannot find what you are looking for? Let us know.](https://aka.ms/mysql-doc-feedback)
mysql Connect Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-ruby.md
- Title: 'Quickstart: Connect using Ruby - Azure Database for MySQL'
-description: This quickstart provides several Ruby code samples you can use to connect and query data from Azure Database for MySQL.
- Previously updated: 5/26/2020
-# Quickstart: Use Ruby to connect and query data in Azure Database for MySQL
--
-This quickstart demonstrates how to connect to an Azure Database for MySQL using a [Ruby](https://www.ruby-lang.org) application and the [mysql2](https://rubygems.org/gems/mysql2) gem from Windows, Ubuntu Linux, and Mac platforms. It shows how to use SQL statements to query, insert, update, and delete data in the database. This topic assumes that you are familiar with development using Ruby and that you are new to working with Azure Database for MySQL.
-
-## Prerequisites
-
-This quickstart uses the resources created in either of these guides as a starting point:
-- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)
-- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)
-
-> [!IMPORTANT]
-> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./howto-manage-firewall-using-portal.md) or [Azure CLI](./howto-manage-firewall-using-cli.md).
-
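-If you manage the server with the Azure CLI, you can add a firewall rule for your client IP with a command along these lines (a minimal sketch; the rule name and IP address are placeholders to replace with your own values):
-
-```azurecli
-az mysql server firewall-rule create \
-    --resource-group myresourcegroup \
-    --server-name mydemoserver \
-    --name AllowMyClientIP \
-    --start-ip-address 203.0.113.5 \
-    --end-ip-address 203.0.113.5
-```
-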
-## Install Ruby
-
-Install Ruby, Gem, and the MySQL2 library on your own computer.
-
-### Windows
-
-1. Download and install the 2.3 version of [Ruby](https://rubyinstaller.org/downloads/).
-2. Launch a new command prompt (cmd) from the Start menu.
-3. Change directory into the Ruby directory for version 2.3. `cd c:\Ruby23-x64\bin`
-4. Test the Ruby installation by running the command `ruby -v` to see the version installed.
-5. Test the Gem installation by running the command `gem -v` to see the version installed.
-6. Build the Mysql2 module for Ruby using Gem by running the command `gem install mysql2`.
-
-### macOS
-
-1. Install Ruby using Homebrew by running the command `brew install ruby`. For more installation options, see the Ruby [installation documentation](https://www.ruby-lang.org/en/documentation/installation/#homebrew).
-2. Test the Ruby installation by running the command `ruby -v` to see the version installed.
-3. Test the Gem installation by running the command `gem -v` to see the version installed.
-4. Build the Mysql2 module for Ruby using Gem by running the command `gem install mysql2`.
-
-### Linux (Ubuntu)
-
-1. Install Ruby by running the command `sudo apt-get install ruby-full`. For more installation options, see the Ruby [installation documentation](https://www.ruby-lang.org/en/documentation/installation/).
-2. Test the Ruby installation by running the command `ruby -v` to see the version installed.
-3. Install the latest updates for Gem by running the command `sudo gem update --system`.
-4. Test the Gem installation by running the command `gem -v` to see the version installed.
-5. Install the gcc, make, and other build tools by running the command `sudo apt-get install build-essential`.
-6. Install the MySQL client developer libraries by running the command `sudo apt-get install libmysqlclient-dev`.
-7. Build the mysql2 module for Ruby using Gem by running the command `sudo gem install mysql2`.
-
-## Get connection information
-
-Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
-
-1. Log in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Click the server name.
-4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-ruby/1_server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
-
-## Run Ruby code
-
-1. Paste the Ruby code from the sections below into text files, and then save the files into a project folder with file extension .rb (such as `C:\rubymysql\createtable.rb` or `/home/username/rubymysql/createtable.rb`).
-2. To run the code, launch the command prompt or Bash shell. Change directory into your project folder `cd rubymysql`
-3. Then type the Ruby command followed by the file name, such as `ruby createtable.rb` to run the application.
-4. On the Windows OS, if the Ruby application is not in your path environment variable, you may need to use the full path to launch the Ruby application, such as `"c:\Ruby23-x64\bin\ruby.exe" createtable.rb`
-
-## Connect and create a table
-
-Use the following code to connect and create a table by using a **CREATE TABLE** SQL statement, followed by **INSERT INTO** SQL statements to add rows into the table.
-
-The code uses a ```Mysql2::Client``` class to connect to the MySQL server. Then it calls the ```query()``` method to run the DROP TABLE, CREATE TABLE, and INSERT INTO commands. Finally, it calls the ```close()``` method to close the connection before terminating.
-
-Replace the `host`, `database`, `username`, and `password` strings with your own values.
-
-```ruby
-require 'mysql2'
-
-begin
- # Initialize connection variables.
- host = String('mydemoserver.mysql.database.azure.com')
- database = String('quickstartdb')
- username = String('myadmin@mydemoserver')
- password = String('yourpassword')
-
- # Initialize connection object.
- client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password)
- puts 'Successfully created connection to database.'
-
- # Drop previous table of same name if one exists
- client.query('DROP TABLE IF EXISTS inventory;')
- puts 'Finished dropping table (if existed).'
-
-  # Create table.
- client.query('CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);')
- puts 'Finished creating table.'
-
- # Insert some data into table.
- client.query("INSERT INTO inventory VALUES(1, 'banana', 150)")
- client.query("INSERT INTO inventory VALUES(2, 'orange', 154)")
- client.query("INSERT INTO inventory VALUES(3, 'apple', 100)")
- puts 'Inserted 3 rows of data.'
-
-# Error handling
-
-rescue Exception => e
- puts e.message
-
-# Cleanup
-
-ensure
- client.close if client
- puts 'Done.'
-end
-```
-
-## Read data
-
-Use the following code to connect and read the data by using a **SELECT** SQL statement.
-
-The code uses a ```Mysql2::Client``` class to connect to Azure Database for MySQL with the ```new()``` method. Then it calls the ```query()``` method to run the SELECT commands, and then calls the ```close()``` method to close the connection before terminating.
-
-Replace the `host`, `database`, `username`, and `password` strings with your own values.
-
-```ruby
-require 'mysql2'
-
-begin
- # Initialize connection variables.
- host = String('mydemoserver.mysql.database.azure.com')
- database = String('quickstartdb')
- username = String('myadmin@mydemoserver')
- password = String('yourpassword')
-
- # Initialize connection object.
- client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password)
- puts 'Successfully created connection to database.'
-
- # Read data
- resultSet = client.query('SELECT * from inventory;')
- resultSet.each do |row|
- puts 'Data row = (%s, %s, %s)' % [row['id'], row['name'], row['quantity']]
- end
- puts 'Read ' + resultSet.count.to_s + ' row(s).'
-
-# Error handling
-
-rescue Exception => e
- puts e.message
-
-# Cleanup
-
-ensure
- client.close if client
- puts 'Done.'
-end
-```
-
-## Update data
-
-Use the following code to connect and update the data by using an **UPDATE** SQL statement.
-
-The code uses a [mysql2::client](https://rubygems.org/gems/mysql2-client-general_log) class ```new()``` method to connect to Azure Database for MySQL. Then it calls the ```query()``` method to run the UPDATE commands, and then calls the ```close()``` method to close the connection before terminating.
-
-Replace the `host`, `database`, `username`, and `password` strings with your own values.
-
-```ruby
-require 'mysql2'
-
-begin
- # Initialize connection variables.
- host = String('mydemoserver.mysql.database.azure.com')
- database = String('quickstartdb')
- username = String('myadmin@mydemoserver')
- password = String('yourpassword')
-
- # Initialize connection object.
- client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password)
- puts 'Successfully created connection to database.'
-
- # Update data
- client.query('UPDATE inventory SET quantity = %d WHERE name = %s;' % [200, '\'banana\''])
- puts 'Updated 1 row of data.'
-
-# Error handling
-
-rescue Exception => e
- puts e.message
-
-# Cleanup
-
-ensure
- client.close if client
- puts 'Done.'
-end
-```
-
-## Delete data
-
-Use the following code to connect and remove data by using a **DELETE** SQL statement.
-
-The code uses a [mysql2::client](https://rubygems.org/gems/mysql2/) class to connect to the MySQL server, run the DELETE command, and then close the connection to the server.
-
-Replace the `host`, `database`, `username`, and `password` strings with your own values.
-
-```ruby
-require 'mysql2'
-
-begin
- # Initialize connection variables.
- host = String('mydemoserver.mysql.database.azure.com')
- database = String('quickstartdb')
- username = String('myadmin@mydemoserver')
- password = String('yourpassword')
-
- # Initialize connection object.
- client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password)
- puts 'Successfully created connection to database.'
-
- # Delete data
- resultSet = client.query('DELETE FROM inventory WHERE name = %s;' % ['\'orange\''])
- puts 'Deleted 1 row.'
-
-# Error handling
-
-rescue Exception => e
- puts e.message
-
-# Cleanup
-
-ensure
- client.close if client
- puts 'Done.'
-end
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Migrate your database using Export and Import](./concepts-migrate-import-export.md)
-
-> [!div class="nextstepaction"]
-> [Learn more about MySQL2 client](https://rubygems.org/gems/mysql2-client-general_log)
mysql Connect Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-workbench.md
- Title: 'Quickstart: Connect - MySQL Workbench - Azure Database for MySQL'
-description: This Quickstart provides the steps to use MySQL Workbench to connect and query data from Azure Database for MySQL.
- Previously updated: 5/26/2020
-# Quickstart: Use MySQL Workbench to connect and query data in Azure Database for MySQL
--
-This quickstart demonstrates how to connect to an Azure Database for MySQL using the MySQL Workbench application.
-
-## Prerequisites
-
-This quickstart uses the resources created in either of these guides as a starting point:
-- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)
-- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)
-
-> [!IMPORTANT]
-> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./howto-manage-firewall-using-portal.md) or [Azure CLI](./howto-manage-firewall-using-cli.md).
-
-## Install MySQL Workbench
-Download and install MySQL Workbench on your computer from [the MySQL website](https://dev.mysql.com/downloads/workbench/).
-
-## Get connection information
-Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
-
-1. Log in to the [Azure portal](https://portal.azure.com/).
-
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
-
-3. Click the server name.
-
-4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-php/1_server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
-
-## Connect to the server by using MySQL Workbench
-To connect to Azure MySQL Server by using the GUI tool MySQL Workbench:
-
-1. Launch the MySQL Workbench application on your computer.
-
-2. In the **Setup New Connection** dialog box, enter the following information on the **Parameters** tab:
-
-| **Setting** | **Suggested value** | **Field description** |
-|---|---|---|
-| Connection Name | Demo Connection | Specify a label for this connection. |
-| Connection Method | Standard (TCP/IP) | Standard (TCP/IP) is sufficient. |
-| Hostname | *server name* | Specify the server name value that was used when you created the Azure Database for MySQL earlier. Our example server shown is mydemoserver.mysql.database.azure.com. Use the fully qualified domain name (\*.mysql.database.azure.com) as shown in the example. Follow the steps in the previous section to get the connection information if you do not remember your server name. |
-| Port | 3306 | Always use port 3306 when connecting to Azure Database for MySQL. |
-| Username | *server admin login name* | Type in the server admin login username supplied when you created the Azure Database for MySQL earlier. Our example username is myadmin@mydemoserver. Follow the steps in the previous section to get the connection information if you do not remember the username. The format is *username\@servername*. |
-| Password | your password | Click **Store in Vault...** button to save the password. |
-
-3. Click **Test Connection** to test if all parameters are correctly configured.
-
-4. Click **OK** to save the connection.
-
-5. In the listing of **MySQL Connections**, click the tile corresponding to your server, and then wait for the connection to be established.
-
- A new SQL tab opens with a blank editor where you can type your queries.
-
- > [!NOTE]
- > By default, SSL connection security is required and enforced on your Azure Database for MySQL server. Although typically no additional configuration with SSL certificates is required for MySQL Workbench to connect to your server, we recommend binding the SSL CA certificate with MySQL Workbench. For more information on how to download and bind the certificate, see [Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](./howto-configure-ssl.md). If you need to disable SSL, visit the Azure portal and click the Connection security page to disable the Enforce SSL connection toggle button.
-
-## Create a table, insert data, read data, update data, delete data
-1. Copy and paste the sample SQL code into a blank SQL tab to illustrate some sample data.
-
- This code creates an empty database named quickstartdb, and then creates a sample table named inventory. It inserts some rows, then reads the rows. It changes the data with an update statement, and reads the rows again. Finally it deletes a row, and then reads the rows again.
-
- ```sql
- -- Create a database
- -- DROP DATABASE IF EXISTS quickstartdb;
- CREATE DATABASE quickstartdb;
- USE quickstartdb;
-
- -- Create a table and insert rows
- DROP TABLE IF EXISTS inventory;
- CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);
- INSERT INTO inventory (name, quantity) VALUES ('banana', 150);
- INSERT INTO inventory (name, quantity) VALUES ('orange', 154);
- INSERT INTO inventory (name, quantity) VALUES ('apple', 100);
-
- -- Read
- SELECT * FROM inventory;
-
- -- Update
- UPDATE inventory SET quantity = 200 WHERE id = 1;
- SELECT * FROM inventory;
-
- -- Delete
- DELETE FROM inventory WHERE id = 2;
- SELECT * FROM inventory;
- ```
-
- The screenshot shows an example of the SQL code in MySQL Workbench and the output after it has been run.
-
- :::image type="content" source="media/connect-workbench/3-workbench-sql-tab.png" alt-text="MySQL Workbench SQL Tab to run sample SQL code":::
-
-2. To run the sample SQL code, click the lightning bolt icon in the toolbar of the **SQL File** tab.
-3. Notice the three tabbed results in the **Result Grid** section in the middle of the page.
-4. Notice the **Output** list at the bottom of the page. The status of each command is shown.
-
-Now, you have connected to Azure Database for MySQL by using MySQL Workbench, and you have queried data using the SQL language.
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Migrate your database using Export and Import](./concepts-migrate-import-export.md)
mysql How To Connect Overview Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-connect-overview-single-server.md
- Title: Connect and query - Single Server MySQL
-description: Links to quickstarts showing how to connect to your Azure Database for MySQL Single Server and run queries.
- Previously updated: 09/22/2020
-# Connect and query overview for Azure Database for MySQL - Single Server
--
-The following document includes links to examples showing how to connect and query with Azure Database for MySQL Single Server. This guide also includes TLS recommendations and a list of libraries that you can use to connect to the server in the supported languages below.
-
-## Quickstarts
-
-| Quickstart | Description |
-|---|---|
-|[MySQL workbench](connect-workbench.md)|This quickstart demonstrates how to use MySQL Workbench Client to connect to a database. You can then use MySQL statements to query, insert, update, and delete data in the database.|
-|[Azure Cloud Shell](./quickstart-create-mysql-server-database-using-azure-cli.md#connect-to-azure-database-for-mysql-server-using-mysql-command-line-client)|This article shows how to run **mysql.exe** in [Azure Cloud Shell](../cloud-shell/overview.md) to connect to your server and then run statements to query, insert, update, and delete data in the database.|
-|[MySQL with Visual Studio](https://www.mysql.com/why-mysql/windows/visualstudio)|You can use MySQL for Visual Studio for connecting to your MySQL server. MySQL for Visual Studio integrates directly into Server Explorer, making it easy to set up new connections and work with database objects.|
-|[PHP](connect-php.md)|This quickstart demonstrates how to use PHP to create a program to connect to a database and use MySQL statements to query data.|
-|[Java](connect-java.md)|This quickstart demonstrates how to use Java to connect to a database and then use MySQL statements to query data.|
-|[Node.js](connect-nodejs.md)|This quickstart demonstrates how to use Node.js to create a program to connect to a database and use MySQL statements to query data.|
-|[.NET(C#)](connect-csharp.md)|This quickstart demonstrates how to use .NET (C#) to create a C# program to connect to a database and use MySQL statements to query data.|
-|[Go](connect-go.md)|This quickstart demonstrates how to use Go to connect to a database. SQL statements to query and modify data are also demonstrated.|
-|[Python](connect-python.md)|This quickstart demonstrates how to use Python to connect to a database and use MySQL statements to query data. |
-|[Ruby](connect-ruby.md)|This quickstart demonstrates how to use Ruby to create a program to connect to a database and use MySQL statements to query data.|
-|[C++](connect-cpp.md)|This quickstart demonstrates how to use C++ to create a program to connect to a database and query data.|
-
-## TLS considerations for database connectivity
-
-Transport Layer Security (TLS) is used by all drivers that Microsoft supplies or supports for connecting to databases in Azure Database for MySQL. No special configuration is necessary, but do enforce TLS 1.2 for newly created servers. If you're using TLS 1.0 or 1.1, we recommend that you update the TLS version for your servers. See [How to configure TLS](howto-tls-configurations.md).
-
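-For example, with the Azure CLI you can enforce TLS 1.2 as the minimum TLS version on an existing Single Server (a minimal sketch; the resource group and server names are placeholders):
-
-```azurecli
-az mysql server update \
-    --resource-group myresourcegroup \
-    --name mydemoserver \
-    --minimal-tls-version TLS1_2
-```
-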
-## Libraries
-
-Azure Database for MySQL uses the world's most popular community edition of MySQL database. Hence, it is compatible with a wide variety of programming languages and drivers. The goal is to support the three most recent versions of MySQL drivers, and efforts with authors from the open source community to constantly improve the functionality and usability of MySQL drivers continue.
-
-See what [drivers](concepts-compatibility.md) are compatible with Azure Database for MySQL Single server.
-
-## Next steps
-- [Migrate data using dump and restore](concepts-migrate-dump-restore.md)
-- [Migrate data using import and export](concepts-migrate-import-export.md)
mysql How To Decide On Right Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-decide-on-right-migration-tools.md
- Title: "Select the right tools for migration to Azure Database for MySQL"
-description: "This topic provides a decision table that helps customers pick the right tools for migrating to Azure Database for MySQL"
- Previously updated: 10/12/2021
-# Select the right tools for migration to Azure Database for MySQL
--
-## Overview
-
-Migrations are multi-step projects that are tough to pull off. Migrating database servers across platforms involves more than data and schema migration. There are also several other components, such as server configuration parameters, networking, access control rules, etc., to move. These are required to ensure that the functionality of the database server in the new target platform mimics the source.
-
-For detailed information and use cases about migrating databases to Azure Database for MySQL, you can refer to the [Database Migration Guide](migrate/mysql-on-premises-azure-db/01-mysql-migration-guide-intro.md). This document provides pointers that will help you successfully plan and execute a MySQL migration to Azure.
-
-In general, migrations can be categorized as either offline or online.
-- With an offline migration, the source server is taken offline and a dump and restore of the databases is performed on the target server.
-- With an online migration (migration with minimal downtime), the source server allows updates, and the migration solution will take care of replicating the ongoing changes between the source and target server along with the initial dump and restore on the target.
-
-If your application can afford some downtime, offline migrations are always the preferred choice, as they are simple and easy to execute. However, if your application can only afford minimal downtime, an online migration is the best choice. Migrations of the majority of OLTP systems, such as payment processing and e-commerce, fall into this category.
-
-## Decision table
-
-To help you select the right tools for migrating to Azure Database for MySQL, consider the details in the following table.
-
-| Scenarios | Recommended Tools | Links |
-|---|---|---|
-| Offline Migrations to move databases >= 1 TB | Dump and Restore using **MyDumper/MyLoader** + High Compute VM | [Migrate large databases to Azure Database for MySQL using mydumper/myloader](concepts-migrate-mydumper-myloader.md) <br><br> [Best Practices for migrating large databases to Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699)|
-| Offline Migrations to move databases < 1 TB | If network bandwidth between source and target is good (for example, a high-speed ExpressRoute connection), use **Azure DMS** (database migration service) <br><br> **-OR-** <br><br> If you have low network bandwidth between the source and Azure, use **Mydumper/Myloader + High compute VM** to take advantage of compression settings to efficiently move data over low-speed networks <br><br> **-OR-** <br><br> Use the **mysqldump** and **MySQL Workbench Export/Import** utilities to perform offline migrations for smaller databases. | [Tutorial: Migrate MySQL to Azure Database for MySQL offline using DMS - Azure Database Migration Service](../dms/tutorial-mysql-azure-mysql-offline-portal.md)<br><br> [Migrate Amazon RDS for MySQL to Azure Database for MySQL using MySQL Workbench](how-to-migrate-rds-mysql-workbench.md)<br><br> [Import and export - Azure Database for MySQL](concepts-migrate-import-export.md)|
-| Online Migration | **Mydumper/Myloader with Data-in replication** <br><br> **Mysqldump with data-in replication** can be considered for small databases (less than 100 GB). These methods are applicable to both external and intra-platform migrations. | [Configure Data-in replication - Azure Database for MySQL Flexible Server](flexible-server/how-to-data-in-replication.md) <br><br> [Tutorial: Migrate Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server with minimal downtime](howto-migrate-single-flexible-minimum-downtime.md) |
-|Single to Flexible Server Migrations | **Offline**: Custom shell script hosted in [GitHub](https://github.com/Azure/azure-mysql/tree/master/azuremysqltomysqlmigrate). This script also moves other server components such as security settings and server parameter configurations. <br><br>**Online**: **Mydumper/Myloader with Data-in replication** | [Migrate from Azure Database for MySQL - Single Server to Flexible Server in 5 easy steps!](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/migrate-from-azure-database-for-mysql-single-server-to-flexible/ba-p/2674057)<br><br> [Tutorial: Migrate Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server with minimal downtime](howto-migrate-single-flexible-minimum-downtime.md)|
-
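-As a rough illustration of the mydumper/myloader approach referenced in the table above, the commands look something like the following (a minimal sketch; the host names, user names, and dump directory are placeholders, and flag spellings can vary between mydumper versions):
-
-```
-# Dump the source server in parallel, with compression
-mydumper --host=mysourceserver.mysql.database.azure.com \
-    --user=myadmin@mysourceserver --password=<password> \
-    --outputdir=/backup/db1 --threads=8 --compress
-
-# Restore into the target server in parallel
-myloader --host=mytargetserver.mysql.database.azure.com \
-    --user=myadmin@mytargetserver --password=<password> \
-    --directory=/backup/db1 --threads=8 --overwrite-tables
-```
-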
-## Next steps
-* [Migrate MySQL on-premises to Azure Database for MySQL](migrate/mysql-on-premises-azure-db/01-mysql-migration-guide-intro.md)
-
mysql How To Fix Corrupt Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-fix-corrupt-database.md
- Title: Resolve database corruption - Azure Database for MySQL
-description: In this article, you'll learn how to fix database corruption problems in Azure Database for MySQL.
- Previously updated: 09/21/2020
-# Troubleshoot database corruption in Azure Database for MySQL
--
-Database corruption can cause downtime for your application. It's also critical to resolve corruption problems in time to avoid data loss. When database corruption occurs, you'll see this error in your server logs: `InnoDB: Database page corruption on disk or a failed.`
-
-In this article, you'll learn how to resolve database or table corruption problems. Azure Database for MySQL uses the InnoDB engine. It features automated corruption checking and repair operations. InnoDB checks for corrupt pages by running checksums on every page it reads. If it finds a checksum discrepancy, it will automatically stop the MySQL server.
-
-Try the following options to quickly mitigate your database corruption problems.
-
-## Restart your MySQL server
-
-You typically notice a database or table is corrupt when your application accesses the table or database. InnoDB features a crash recovery mechanism that can resolve most problems when the server is restarted. So restarting the server can help the server recover from a crash that caused the database to be in a bad state.
-
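-For example, you can restart a Single Server with the Azure CLI (a minimal sketch; replace the resource group and server names with your own):
-
-```azurecli
-az mysql server restart --resource-group myresourcegroup --name mydemoserver
-```
-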
-## Use the dump and restore method
-
-We recommend that you resolve corruption problems by using a *dump and restore* method. This method involves:
-
-1. Accessing the corrupt table.
-2. Using the mysqldump utility to create a logical backup of the table. The backup will retain the table structure and the data within it.
-3. Reloading the table into the database.
-
-### Back up your database or tables
-
-> [!Important]
->
-> - Make sure you have configured a firewall rule to access the server from your client machine. For more information, see [configure a firewall rule on Single Server](howto-manage-firewall-using-portal.md) and [configure a firewall rule on Flexible Server](flexible-server/how-to-connect-tls-ssl.md).
-> - Use SSL option `--ssl-cert` for mysqldump if you have SSL enabled.
-
-Create a backup file from the command line by using mysqldump. Use this command:
-
-```
-$ mysqldump [--ssl-cert=/path/to/pem] -h [host] -u [uname] -p[pass] [dbname] > [backupfile.sql]
-```
-
-Parameter descriptions:
-- `[ssl-cert=/path/to/pem]`: The path to the SSL certificate. Download the SSL certificate to your client machine and set the path to it in the command. Don't use this parameter if SSL is disabled.
-- `[host]`: Your Azure Database for MySQL server.
-- `[uname]`: Your server admin user name.
-- `[pass]`: The password for your admin user.
-- `[dbname]`: The name of your database.
-- `[backupfile.sql]`: The file name of your database backup.
-
-> [!Important]
-> - For Single Server, use the format `admin-user@servername` to replace `myserveradmin` in the following commands.
-> - For Flexible Server, use the format `admin-user` to replace `myserveradmin` in the following commands.
-
-If a specific table is corrupt, select specific tables in your database to back up:
-```
-$ mysqldump --ssl-cert=</path/to/pem> -h mydemoserver.mysql.database.azure.com -u myserveradmin -p testdb table1 table2 > testdb_tables_backup.sql
-```
-
-To back up one or more databases, use the `--database` switch and list the database names, separated by spaces:
-
-```
-$ mysqldump --ssl-cert=</path/to/pem> -h mydemoserver.mysql.database.azure.com -u myserveradmin -p --databases testdb1 testdb3 testdb5 > testdb135_backup.sql
-```
-
-### Restore your database or tables
-
-The following steps show how to restore your database or tables. After you create the backup file, you can restore the tables or databases by using the mysql utility. Run this command:
-
-```
-mysql --ssl-cert=</path/to/pem> -h [hostname] -u [uname] -p[pass] [db_to_restore] < [backupfile.sql]
-```
-Here's an example that restores `testdb` from a backup file created with mysqldump:
-
-> [!Important]
-> - For Single Server, use the format `admin-user@servername` to replace `myserveradmin` in the following command.
-> - For Flexible Server, use the format ```admin-user``` to replace `myserveradmin` in the following command.
-
-```
-$ mysql --ssl-cert=</path/to/pem> -h mydemoserver.mysql.database.azure.com -u myserveradmin -p testdb < testdb_backup.sql
-```
-
-## Next steps
-If the preceding steps don't resolve the problem, you can always restore the entire server:
-- [Restore server in Azure Database for MySQL - Single Server](howto-restore-server-portal.md)
-- [Restore server in Azure Database for MySQL - Flexible Server](flexible-server/how-to-restore-server-portal.md)
-
mysql How To Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-major-version-upgrade.md
- Title: Major version upgrade in Azure Database for MySQL - Single Server
-description: This article describes how you can upgrade major version for Azure Database for MySQL - Single Server
- Previously updated: 1/28/2021
-# Major version upgrade in Azure Database for MySQL Single Server
--
-> [!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we will remove it from this article.
->
-
-> [!IMPORTANT]
-> Major version upgrade for Azure database for MySQL Single Server is in public preview.
-
-This article describes how you can upgrade your MySQL major version in-place in Azure Database for MySQL single server.
-
-This feature enables customers to perform in-place upgrades of their MySQL 5.6 servers to MySQL 5.7 with the click of a button, without any data movement or the need for any application connection string changes.
-
-> [!Note]
-> * Major version upgrade is only available for major version upgrade from MySQL 5.6 to MySQL 5.7.
-> * The server will be unavailable throughout the upgrade operation. It is therefore recommended to perform upgrades during your planned maintenance window. You can consider [performing minimal downtime major version upgrade from MySQL 5.6 to MySQL 5.7 using read replica.](#perform-minimal-downtime-major-version-upgrade-from-mysql-56-to-mysql-57-using-read-replicas)
-
-## Perform major version upgrade from MySQL 5.6 to MySQL 5.7 using Azure portal
-
-Follow these steps to perform a major version upgrade for your Azure Database for MySQL 5.6 server using the Azure portal:
-
-> [!IMPORTANT]
-> We recommend performing the upgrade first on a restored copy of the server rather than upgrading production directly. See [how to perform point-in-time restore](howto-restore-server-portal.md#point-in-time-restore).
-
-1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.6 server.
-
-2. From the **Overview** page, click the **Upgrade** button in the toolbar.
-
-3. In the **Upgrade** section, select **OK** to upgrade Azure database for MySQL 5.6 server to 5.7 server.
-
- :::image type="content" source="./media/how-to-major-version-upgrade-portal/upgrade.png" alt-text="Azure Database for MySQL - overview - upgrade":::
-
-4. A notification will confirm that the upgrade is successful.
--
-## Perform major version upgrade from MySQL 5.6 to MySQL 5.7 using Azure CLI
-
-Follow these steps to perform a major version upgrade for your Azure Database for MySQL 5.6 server using the Azure CLI:
-
-> [!IMPORTANT]
-> We recommend performing the upgrade first on a restored copy of the server rather than upgrading production directly. See [how to perform point-in-time restore](howto-restore-server-cli.md#server-point-in-time-restore).
-
-1. Install [Azure CLI for Windows](/cli/azure/install-azure-cli) or use Azure CLI in [Azure Cloud Shell](../cloud-shell/overview.md) to run the upgrade commands.
-
- This upgrade requires version 2.16.0 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed. Run `az version` to find the version and dependent libraries that are installed. To upgrade to the latest version, run `az upgrade`.
-
-2. After you sign in, run the [az mysql server upgrade](/cli/azure/mysql/server#az-mysql-server-upgrade) command:
-
- ```azurecli
- az mysql server upgrade --name testsvr --resource-group testgroup --subscription MySubscription --target-server-version 5.7
- ```
-
- The command prompt shows the "-Running" message. After this message is no longer displayed, the version upgrade is complete.
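-
- After the upgrade, you can optionally confirm the new engine version from the CLI (a minimal sketch using the same placeholder names as above):
-
- ```azurecli
- az mysql server show --name testsvr --resource-group testgroup --query version
- ```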
-
-## Perform major version upgrade from MySQL 5.6 to MySQL 5.7 on read replica using Azure portal
-
-1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.6 read replica server.
-
-2. From the **Overview** page, click the **Upgrade** button in the toolbar.
-
-3. In the **Upgrade** section, select **OK** to upgrade Azure database for MySQL 5.6 read replica server to 5.7 server.
-
- :::image type="content" source="./media/how-to-major-version-upgrade-portal/upgrade.png" alt-text="Azure Database for MySQL - overview - upgrade":::
-
-4. A notification will confirm that the upgrade is successful.
-
-5. From the **Overview** page, confirm that your Azure database for MySQL read replica server version is 5.7.
-
-6. Now go to your primary server and [Perform major version upgrade](#perform-major-version-upgrade-from-mysql-56-to-mysql-57-using-azure-portal) on it.
-
-## Perform minimal downtime major version upgrade from MySQL 5.6 to MySQL 5.7 using read replicas
-
-You can perform a minimal downtime major version upgrade from MySQL 5.6 to MySQL 5.7 by utilizing read replicas. The idea is to upgrade the read replica of your server to 5.7 first, and later fail over your application to point to the read replica and make it the new primary.
-
-1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.6 server.
-
-2. Create a [read replica](./concepts-read-replicas.md#create-a-replica) from your primary server.
-
-3. [Upgrade your read replica](#perform-major-version-upgrade-from-mysql-56-to-mysql-57-on-read-replica-using-azure-portal) to version 5.7.
-
-4. Once you confirm that the replica server is running on version 5.7, stop your application from connecting to your primary server.
-
-5. Check the replication status to make sure the replica has caught up with the primary and all the data is in sync, and ensure that no new operations are performed on the primary.
-
- Call the [`show slave status`](https://dev.mysql.com/doc/refman/5.7/en/show-slave-status.html) command on the replica server to view the replication status.
-
- ```sql
- SHOW SLAVE STATUS\G
- ```
-
- If the state of `Slave_IO_Running` and `Slave_SQL_Running` are "yes" and the value of `Seconds_Behind_Master` is "0", replication is working well. `Seconds_Behind_Master` indicates how late the replica is. If the value isn't "0", it means that the replica is processing updates. Once you confirm `Seconds_Behind_Master` is "0" it's safe to stop replication.
-
-6. Promote your read replica to primary by [stopping replication](./howto-read-replicas-portal.md#stop-replication-to-a-replica-server).
-
-7. Point your application to the new primary (former replica) which is running server 5.7. Each server has a unique connection string. Update your application to point to the (former) replica instead of the source.
-
-> [!Note]
-> This scenario will have downtime during steps 4, 5 and 6 only.
--
-## Frequently asked questions
-
-### When will this upgrade feature be GA as we have MySQL v5.6 in our production environment that we need to upgrade?
-
-The GA of this feature is planned before MySQL v5.6 retirement. However, the feature is production ready and fully supported by Azure, so you can run it with confidence in your environment. As a recommended best practice, we strongly suggest that you run and test it first on a restored copy of the server so you can estimate the downtime during the upgrade, and perform application compatibility testing before you run it on production. For more information, see [how to perform point-in-time restore](howto-restore-server-portal.md#point-in-time-restore) to create a point-in-time copy of your server.
-
-### Will this cause downtime of the server and if so, how long?
-
-Yes, the server will be unavailable during the upgrade process, so we recommend you perform this operation during your planned maintenance window. The estimated downtime depends on the database size, the storage size provisioned (IOPS provisioned), and the number of tables in the database. The upgrade time is directly proportional to the number of tables on the server. Upgrades of Basic SKU servers are expected to take more time because they run on the standard storage platform. To estimate the downtime for your server environment, we recommend that you first perform the upgrade on a restored copy of the server. Consider [performing minimal downtime major version upgrade from MySQL 5.6 to MySQL 5.7 using read replica.](#perform-minimal-downtime-major-version-upgrade-from-mysql-56-to-mysql-57-using-read-replicas)
-
-### What will happen if we do not choose to upgrade our MySQL v5.6 server before February 5, 2021?
-
-You can still continue running your MySQL v5.6 server as before. Azure **will never** perform a forced upgrade on your server. However, the restrictions documented in [Azure Database for MySQL versioning policy](concepts-version-policy.md) will apply.
-
-## Next steps
-
-Learn about [Azure Database for MySQL versioning policy](concepts-version-policy.md).
mysql How To Manage Single Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-manage-single-server-cli.md
- Title: Manage server - Azure CLI - Azure Database for MySQL
-description: Learn how to manage an Azure Database for MySQL server from the Azure CLI.
- Previously updated: 9/22/2020
-# Manage an Azure Database for MySQL Single server using the Azure CLI
--
-This article shows you how to manage your Single servers deployed in Azure. Management tasks include compute and storage scaling, admin password reset, and viewing server details.
-
-## Prerequisites
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. This article requires that you're running the Azure CLI version 2.0 or later locally. To see the version installed, run the `az --version` command. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-
-You'll need to log in to your account using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to the **Subscription ID** for your Azure account.
-
-```azurecli-interactive
-az login
-```
-
-Select the specific subscription under your account using the [az account set](/cli/azure/account) command. Make a note of the **id** value from the **az login** output to use as the value for the **subscription** argument in the command. If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. To get all your subscriptions, use [az account list](/cli/azure/account#az-account-list).
-
-```azurecli
-az account set --subscription <subscription id>
-```
-
-If you have not already created a server, refer to this [quickstart](quickstart-create-mysql-server-database-using-azure-cli.md) to create one.
-
-## Scale compute and storage
-You can easily scale your pricing tier, compute, and storage by using the following command. You can see all the server operations you can perform in the [az mysql server overview](/cli/azure/mysql/server).
-
-```azurecli-interactive
-az mysql server update --resource-group myresourcegroup --name mydemoserver --sku-name GP_Gen5_4 --storage-size 6144
-```
-
-Here are the details for the arguments above:
-
-**Setting** | **Sample value** | **Description**
-|---|---|---|
-name | mydemoserver | Enter a unique name for your Azure Database for MySQL server. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain from 3 to 63 characters.
-resource-group | myresourcegroup | Provide the name of the Azure resource group.
-sku-name|GP_Gen5_2|Enter the name of the pricing tier and compute configuration. Follows the convention {pricing tier}_{compute generation}_{vCores} in shorthand. See the [pricing tiers](./concepts-pricing-tiers.md) for more information.
-storage-size | 6144 | The storage capacity of the server (unit is megabytes). Minimum 5120 and increases in 1024 increments.
-
-> [!Important]
-> - Storage can be scaled up (however, you cannot scale storage down)
-> - Scaling up from Basic to General purpose or Memory optimized pricing tier is not supported. You can manually scale up with either [using a bash script](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/upgrade-from-basic-to-general-purpose-or-memory-optimized-tiers/ba-p/830404) or [using MySQL Workbench](https://techcommunity.microsoft.com/t5/azure-database-support-blog/how-to-scale-up-azure-database-for-mysql-from-basic-tier-to/ba-p/369134)
--
-## Manage MySQL databases on a server
-You can use any of these commands to create, delete, list, and view the properties of a database on your server.
-
-| Cmdlet | Usage | Description |
-|---|---|---|
-|[az mysql db create](/cli/azure/mysql/db#az-mysql-db-create)|```az mysql db create -g myresourcegroup -s mydemoserver -n mydatabasename``` |Creates a database|
-|[az mysql db delete](/cli/azure/mysql/db#az-mysql-db-delete)|```az mysql db delete -g myresourcegroup -s mydemoserver -n mydatabasename```|Deletes your database from your server. This command does not delete your server. |
-|[az mysql db list](/cli/azure/mysql/db#az-mysql-db-list)|```az mysql db list -g myresourcegroup -s mydemoserver```|Lists all the databases on the server|
-|[az mysql db show](/cli/azure/mysql/db#az-mysql-db-show)|```az mysql db show -g myresourcegroup -s mydemoserver -n mydatabasename```|Shows more details of the database|
-
-## Update admin password
-You can change the administrator role's password with this command:
-```azurecli-interactive
-az mysql server update --resource-group myresourcegroup --name mydemoserver --admin-password <new-password>
-```
-
-> [!Important]
-> Make sure the password is a minimum of 8 characters and a maximum of 128 characters.
-> Password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters.
-
-## Delete a server
-If you would just like to delete the MySQL Single server, you can run [az mysql server delete](/cli/azure/mysql/server#az-mysql-server-delete) command.
-
-```azurecli-interactive
-az mysql server delete --resource-group myresourcegroup --name mydemoserver
-```
-
-## Next steps
-- [Restart a server](howto-restart-server-cli.md)
-- [Restore a server in a bad state](howto-restore-server-cli.md)
-- [Monitor and tune the server](concepts-monitoring.md)
mysql How To Migrate Rds Mysql Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-migrate-rds-mysql-data-in-replication.md
- Title: Migrate Amazon RDS for MySQL to Azure Database for MySQL using Data-in Replication
-description: This article describes how to migrate Amazon RDS for MySQL to Azure Database for MySQL by using Data-in Replication.
- Previously updated: 09/24/2021
-# Migrate Amazon RDS for MySQL to Azure Database for MySQL using Data-in Replication
--
-> [!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-
-You can use methods such as MySQL dump and restore, MySQL Workbench Export and Import, or Azure Database Migration Service to migrate your MySQL databases to Azure Database for MySQL. By using a combination of open-source tools such as mysqldump or mydumper and myloader with Data-in Replication, you can migrate your workloads with minimum downtime.
-
-Data-in Replication is a technique that replicates data changes from the source server to the destination server based on the binary log file position method. In this scenario, the MySQL instance operating as the source (on which the database changes originate) writes updates and changes as *events* to the binary log. The information in the binary log is stored in different logging formats according to the database changes being recorded. Replicas are configured to read the binary log from the source and execute the events in the binary log on the replica's local database.
-
-If you set up [Data-in Replication](../mysql/flexible-server/concepts-data-in-replication.md) to synchronize data from a source MySQL server to a target MySQL server, you can do a selective cutover of your applications from the primary (or source database) to the replica (or target database).
-
-In this tutorial, you'll learn how to set up Data-in Replication between a source server that runs Amazon Relational Database Service (RDS) for MySQL and a target server that runs Azure Database for MySQL.
-
-## Performance considerations
-
-Before you begin this tutorial, consider the performance implications of the location and capacity of the client computer you'll use to perform the operation.
-
-### Client location
-
-Perform dump or restore operations from a client computer that's launched in the same location as the database server:
-- For Azure Database for MySQL servers, the client machine should be in the same virtual network and the same availability zone as the target database server.
-- For source Amazon RDS database instances, the client instance should exist in the same Amazon Virtual Private Cloud and availability zone as the source database server.
-
-In the preceding case, you can move dump files between client machines by using file transfer protocols like FTP or SFTP or upload them to Azure Blob Storage. To reduce the total migration time, compress files before you transfer them.
-
-### Client capacity
-
-No matter where the client computer is located, it requires adequate compute, I/O, and network capacity to perform the requested operations. The general recommendations are:
-- If the dump or restore involves real-time processing of data, for example, compression or decompression, choose an instance class with at least one CPU core per dump or restore thread.
-- Ensure there's enough network bandwidth available to the client instance. Use instance types that support the accelerated networking feature. For more information, see the "Accelerated Networking" section in the [Azure Virtual Machine Networking Guide](../virtual-network/create-vm-accelerated-networking-cli.md).
-- Ensure that the client machine's storage layer provides the expected read/write capacity. We recommend that you use an Azure virtual machine with Premium SSD storage.
-
-## Prerequisites
-
-To complete this tutorial, you need to:
-- Install the [mysqlclient](https://dev.mysql.com/downloads/) on your client computer to create a dump, and perform a restore operation on your target Azure Database for MySQL server.
-- For larger databases, install [mydumper and myloader](https://centminmod.com/mydumper.html) for parallel dumping and restoring of databases.
-
- > [!NOTE]
- > Mydumper can only run on Linux distributions. For more information, see [How to install mydumper](https://github.com/maxbube/mydumper#how-to-install-mydumpermyloader).
-- Create an instance of Azure Database for MySQL server that runs version 5.7 or 8.0.
-
- > [!IMPORTANT]
- > If your target is Azure Database for MySQL Flexible Server with zone-redundant high availability (HA), note that Data-in Replication isn't supported for this configuration. As a workaround, during server creation set up zone-redundant HA:
- >
- > 1. Create the server with zone-redundant HA enabled.
- > 1. Disable HA.
- > 1. Follow the article to set up Data-in Replication.
- > 1. Post-cutover, remove the Data-in Replication configuration.
- > 1. Enable HA.
-
-Ensure that several parameters and features are configured and set up properly, as described:
-- For compatibility reasons, have the source and target database servers on the same MySQL version.
-- Have a primary key in each table. A lack of primary keys on tables can slow the replication process.
-- Make sure the character set of the source and the target database are the same.
-- Set the `wait_timeout` parameter to a reasonable time. The time depends on the amount of data or workload you want to import or migrate.
-- Verify that all your tables use InnoDB. The Azure Database for MySQL server only supports the InnoDB storage engine. (A query sketch for checking the primary-key and storage-engine requirements follows this list.)
-- For tables with many secondary indexes or for tables that are large, the effects of performance overhead are visible during restore. Modify the dump files so that the `CREATE TABLE` statements don't include secondary key definitions. After you import the data, re-create secondary indexes to avoid the performance penalty during the restore process.
-
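-The following queries are one way to check the primary-key and storage-engine requirements above (a minimal sketch; run them against the source server and adjust the excluded schemas as needed):
-
-```sql
--- Tables that don't use the InnoDB storage engine
-SELECT table_schema, table_name, engine
-FROM information_schema.tables
-WHERE table_type = 'BASE TABLE'
-  AND engine <> 'InnoDB'
-  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');
-
--- Tables without a primary key
-SELECT t.table_schema, t.table_name
-FROM information_schema.tables t
-LEFT JOIN information_schema.table_constraints c
-  ON c.table_schema = t.table_schema
-  AND c.table_name = t.table_name
-  AND c.constraint_type = 'PRIMARY KEY'
-WHERE t.table_type = 'BASE TABLE'
-  AND c.constraint_name IS NULL
-  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');
-```
-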
-Finally, to prepare for Data-in Replication:
-- Verify that the target Azure Database for MySQL server can connect to the source Amazon RDS for MySQL server over port 3306.
-- Ensure that the source Amazon RDS for MySQL server allows both inbound and outbound traffic on port 3306.
-- Make sure you provide [site-to-site connectivity](../vpn-gateway/tutorial-site-to-site-portal.md) to your source server by using either [Azure ExpressRoute](../expressroute/expressroute-introduction.md) or [Azure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Azure Virtual Network documentation](../virtual-network/index.yml). Also see the quickstart articles with step-by-step details.
-- Configure your source database server's network security groups to allow the target Azure Database for MySQL's server IP address.
-
-> [!IMPORTANT]
-> If the source Amazon RDS for MySQL instance has GTID_mode set to ON, the target instance of Azure Database for MySQL Flexible Server must also have GTID_mode set to ON.
-
-## Configure the target instance of Azure Database for MySQL
-
-To configure the target instance of Azure Database for MySQL, which is the target for Data-in Replication:
-
-1. Set the `max_allowed_packet` parameter value to the maximum of **1073741824**, which is 1 GB. This value prevents any overflow issues related to long rows.
-1. Set the `slow_query_log`, `general_log`, `audit_log_enabled`, and `query_store_capture_mode` parameters to **OFF** during the migration to help eliminate any overhead related to query logging.
-1. Scale up the compute size of the target Azure Database for MySQL server to the maximum of 64 vCores. This size provides more compute resources when you restore the database dump from the source server.
-
- You can always scale back the compute to meet your application demands after the migration is complete.
-
-1. Scale up the storage size to get more IOPS during the migration or increase the maximum IOPS for the migration.
-
- > [!NOTE]
- > Available maximum IOPS are determined by compute size. For more information, see the IOPS section in [Compute and storage options in Azure Database for MySQL - Flexible Server](../mysql/flexible-server/concepts-compute-storage.md#iops).
-
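-If you prefer to script these parameter changes, a minimal sketch using the Azure CLI is shown below. The resource group and server names are placeholders, and the example assumes the target is Azure Database for MySQL - Flexible Server; for a single server, the equivalent command is `az mysql server configuration set`.
-
-```azurecli-interactive
-# Sketch only: placeholder names; repeat for each parameter you want to change.
-az mysql flexible-server parameter set --resource-group myresourcegroup --server-name mytargetserver --name max_allowed_packet --value 1073741824
-az mysql flexible-server parameter set --resource-group myresourcegroup --server-name mytargetserver --name slow_query_log --value OFF
-```
-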
-## Configure the source Amazon RDS for MySQL server
-
-To prepare and configure the MySQL server hosted in Amazon RDS, which is the *source* for Data-in Replication:
-
-1. Confirm that binary logging is enabled on the source Amazon RDS for MySQL server. Check that automated backups are enabled, or ensure that a read replica exists for the source Amazon RDS for MySQL server.
-
-1. Ensure that the binary log files on the source server are retained until after the changes are applied on the target instance of Azure Database for MySQL.
-
- With Data-in Replication, Azure Database for MySQL doesn't manage the replication process.
-
-1. To check how many hours the binary logs are retained on the source Amazon RDS server, call the `mysql.rds_show_configuration` stored procedure:
-
-   ```
-   mysql> call mysql.rds_show_configuration;
-   +------------------------+-------+------------------------------------------------------------------------------------------------------------+
-   | name                   | value | description                                                                                                |
-   +------------------------+-------+------------------------------------------------------------------------------------------------------------+
-   | binlog retention hours | 24    | binlog retention hours specifies the duration in hours before binary logs are automatically deleted.      |
-   | source delay           | 0     | source delay specifies replication delay in seconds between current instance and its master.               |
-   | target delay           | 0     | target delay specifies replication delay in seconds between current instance and its future read-replica.  |
-   +------------------------+-------+------------------------------------------------------------------------------------------------------------+
-   3 rows in set (0.00 sec)
-   ```
-1. To configure the binary log retention period, run the `rds_set_configuration` stored procedure to ensure that the binary logs are retained on the source server for the desired length of time. For example:
-
-   ```
-   mysql> call mysql.rds_set_configuration('binlog retention hours', 96);
-   ```
-
- If you're creating a dump and then restoring, the preceding command helps you to quickly catch up with the delta changes.
-
- > [!NOTE]
- > Ensure there's ample disk space to store the binary logs on the source server based on the retention period defined.
-
-There are two ways to capture a dump of data from the source Amazon RDS for MySQL server. One approach involves capturing a dump of data directly from the source server. The other approach involves capturing a dump from an Amazon RDS for MySQL read replica.
-
-- To capture a dump of data directly from the source server:
-
- 1. Ensure that you stop the writes from the application for a few minutes to get a transactionally consistent dump of data.
-
- You can also temporarily set the `read_only` parameter to a value of **1** so that writes aren't processed when you're capturing a dump of data.
-
-   1. After you stop the writes on the source server, collect the binary log file name and offset by running the command `SHOW MASTER STATUS;`. An illustrative example of the output appears at the end of this section.
- 1. Save these values to start replication from your Azure Database for MySQL server.
- 1. To create a dump of the data, execute `mysqldump` by running the following command:
-
-   ```
-   $ mysqldump -h hostname -u username -p --single-transaction --databases dbnames --order-by-primary > dumpname.sql
-   ```
-
-- If stopping writes on the source server isn't an option, or if the performance of dumping data isn't acceptable on the source server, capture a dump on a replica server:
-
- 1. Create an Amazon MySQL read replica with the same configuration as the source server. Then create the dump there.
- 1. Let the Amazon RDS for MySQL read replica catch up with the source Amazon RDS for MySQL server.
- 1. When the replica lag reaches **0** on the read replica, stop replication by calling the `mysql.rds_stop_replication` stored procedure.
-
-   ```
-   mysql> call mysql.rds_stop_replication;
-   ```
-
- 1. With replication stopped, connect to the replica. Then run the `SHOW SLAVE STATUS` command to retrieve the current binary log file name from the **Relay_Master_Log_File** field and the log file position from the **Exec_Master_Log_Pos** field.
- 1. Save these values to start replication from your Azure Database for MySQL server.
- 1. To create a dump of the data from the Amazon RDS for MySQL read replica, execute `mysqldump` by running the following command:
-
-   ```
-   $ mysqldump -h hostname -u username -p --single-transaction --databases dbnames --order-by-primary > dumpname.sql
-   ```
-
- > [!NOTE]
- > You can also use mydumper for capturing a parallelized dump of your data from your source Amazon RDS for MySQL database. For more information, see [Migrate large databases to Azure Database for MySQL using mydumper/myloader](../mysql/concepts-migrate-mydumper-myloader.md).
-
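-For reference, an illustrative example of the output you record from `SHOW MASTER STATUS` in either approach. The file name and position are placeholders, the output is abridged, and your values will differ:
-
-```
-mysql> SHOW MASTER STATUS;
-+----------------------------+----------+--------------+------------------+
-| File                       | Position | Binlog_Do_DB | Binlog_Ignore_DB |
-+----------------------------+----------+--------------+------------------+
-| mysql-bin-changelog.000031 |    73205 |              |                  |
-+----------------------------+----------+--------------+------------------+
-1 row in set (0.00 sec)
-```
-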
-## Link source and replica servers to start Data-in Replication
-
-1. To restore the database by using mysql native restore, run the following command:
-
- ```
- $ mysql -h <target_server> -u <targetuser> -p < dumpname.sql
- ```
-
- > [!NOTE]
- > If you're instead using myloader, see [Migrate large databases to Azure Database for MySQL using mydumper/myloader](../mysql/concepts-migrate-mydumper-myloader.md).
-
-1. Sign in to the source Amazon RDS for MySQL server, and set up a replication user. Then grant the necessary privileges to this user.
-
- - If you're using SSL, run the following commands:
-
-   ```
-   mysql> CREATE USER 'syncuser'@'%' IDENTIFIED BY 'userpassword';
-   mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT on *.* to 'syncuser'@'%' REQUIRE SSL;
-   mysql> SHOW GRANTS FOR syncuser@'%';
-   ```
-
- - If you're not using SSL, run the following commands:
-
-   ```
-   mysql> CREATE USER 'syncuser'@'%' IDENTIFIED BY 'userpassword';
-   mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT on *.* to 'syncuser'@'%';
-   mysql> SHOW GRANTS FOR syncuser@'%';
-   ```
-
- All Data-in Replication functions are done by stored procedures. For information about all procedures, see [Data-in Replication stored procedures](../mysql/reference-stored-procedures.md#data-in-replication-stored-procedures). You can run these stored procedures in the MySQL shell or MySQL Workbench.
-
-1. To link the Amazon RDS for MySQL source server and the Azure Database for MySQL target server, sign in to the target Azure Database for MySQL server. Set the Amazon RDS for MySQL server as the source server by running the following command:
-
- ```
- CALL mysql.az_replication_change_master('source_server','replication_user_name','replication_user_password',3306,'<master_bin_log_file>',master_bin_log_position,'<master_ssl_ca>');
- ```
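-
-   For illustration, a filled-in sketch of this call. Every value is a hypothetical placeholder: the RDS endpoint, the replication user created earlier, and the binary log coordinates you recorded. Pass an empty string for the last argument if you're not using SSL.
-
-   ```
-   CALL mysql.az_replication_change_master('mysourcerds.example.us-east-1.rds.amazonaws.com', 'syncuser', 'userpassword', 3306, 'mysql-bin-changelog.000031', 73205, '');
-   ```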
-
-1. To start replication between the source Amazon RDS for MySQL server and the target Azure Database for MySQL server, run the following command:
-
-   ```
-   mysql> CALL mysql.az_replication_start;
-   ```
-
-1. To check the status of the replication, on the replica server, run the following command:
-
-   ```
-   mysql> show slave status\G
-   ```
-
- If the state of the `Slave_IO_Running` and `Slave_SQL_Running` parameters is **Yes**, replication has started and is in a running state.
-
-1. Check the value of the `Seconds_Behind_Master` parameter to determine how delayed the target server is.
-
- If the value is **0**, the target has processed all updates from the source server. If the value is anything other than **0**, the target server is still processing updates.
-
-## Ensure a successful cutover
-
-To ensure a successful cutover:
-
-1. Configure the appropriate logins and database-level permissions in the target Azure Database for MySQL server.
-1. Stop writes to the source Amazon RDS for MySQL server.
-1. Ensure that the target Azure Database for MySQL server has caught up with the source server and that the `Seconds_Behind_Master` value is **0** from `show slave status`.
-1. Call the stored procedure `mysql.az_replication_stop` to stop the replication, because all changes have been replicated to the target Azure Database for MySQL server (see the sketch after this list).
-1. Call `mysql.az_replication_remove_master` to remove the Data-in Replication configuration.
-1. Redirect clients and client applications to the target Azure Database for MySQL server.
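-
-Steps 4 and 5 can be run from the MySQL shell connected to the target server; a minimal sketch:
-
-```
-mysql> CALL mysql.az_replication_stop;
-mysql> CALL mysql.az_replication_remove_master;
-```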
-
-At this point, the migration is complete. Your applications are connected to the server running Azure Database for MySQL.
-
-## Next steps
-
-- For more information about migrating databases to Azure Database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
-- View the video [Easily migrate MySQL/PostgreSQL apps to Azure managed service](https://medius.studios.ms/Embed/Video/THR2201?sid=THR2201). It contains a demo that shows how to migrate MySQL apps to Azure Database for MySQL.
mysql How To Migrate Rds Mysql Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-migrate-rds-mysql-workbench.md
- Title: Migrate Amazon RDS for MySQL to Azure Database for MySQL using MySQL Workbench
-description: This article describes how to migrate Amazon RDS for MySQL to Azure Database for MySQL by using the MySQL Workbench Migration Wizard.
- Previously updated: 05/21/2021
-# Migrate Amazon RDS for MySQL to Azure Database for MySQL using MySQL Workbench
--
-You can use various utilities, such as MySQL Workbench Export/Import, Azure Database Migration Service (DMS), and MySQL dump and restore, to migrate Amazon RDS for MySQL to Azure Database for MySQL. However, using the MySQL Workbench Migration Wizard provides an easy and convenient way to move your Amazon RDS for MySQL databases to Azure Database for MySQL.
-
-With the Migration Wizard, you can conveniently select which schemas and objects to migrate. It also allows you to view server logs to identify errors and bottlenecks in real time. As a result, you can edit and modify tables or database structures and objects during the migration process when an error is detected, and then resume migration without having to restart from scratch.
-
-> [!NOTE]
-> You can also use the Migration Wizard to migrate other sources, such as Microsoft SQL Server, Oracle, PostgreSQL, MariaDB, etc., which are outside the scope of this article.
-
-## Prerequisites
-
-Before you start the migration process, we recommend that you ensure that several parameters and features are configured and set up properly, as described below.
-
-- Make sure the character set of the source and target databases are the same.
-- Set the wait timeout to a reasonable time, depending on the amount of data or workload you want to import or migrate.
-- Set the `max_allowed_packet` parameter to a reasonable amount, depending on the size of the database you want to import or migrate.
-- Verify that all of your tables use InnoDB, as Azure Database for MySQL Server only supports the InnoDB storage engine.
-- Remove, replace, or modify all triggers, stored procedures, and other functions containing root user or super user definers (Azure Database for MySQL doesn't support the SUPER user privilege). To replace the definers with the name of the admin user that is running the import process, run the following command:
-
-   ```
-   /* Original dump content */
-   DELIMITER ;;
-   /*!50003 CREATE*/ /*!50017 DEFINER=`root`@`127.0.0.1`*/ /*!50003
-   DELIMITER ;
-
-   /* Modified to */
-
-   DELIMITER ;;
-   /*!50003 CREATE*/ /*!50017 DEFINER=`AdminUserName`@`ServerName`*/ /*!50003
-   DELIMITER ;
-   ```
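-
-   If you'd rather rewrite the definers across the entire dump file in one pass, a hedged shell sketch follows. The file name and definer values are placeholders; test on a copy of the dump first.
-
-   ```
-   # Sketch only: replace every root@127.0.0.1 definer with the target admin user.
-   sed -i 's/DEFINER=`root`@`127.0.0.1`/DEFINER=`AdminUserName`@`ServerName`/g' dumpname.sql
-   ```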
-
-- If user-defined functions (UDFs) are running on your database server, you need to delete the privilege for the mysql database. To determine whether any UDFs are running on your server, use the following query:
-
- ```
- SELECT * FROM mysql.func;
- ```
-
- If you discover that UDFs are running, you can drop the UDFs by using the following query:
-
- ```
- DROP FUNCTION your_UDFunction;
- ```
-
-- Make sure that the server on which the tool is running, and ultimately the export location, has ample disk space and compute power (vCores, CPU, and memory) to perform the export operation, especially when exporting a very large database.
-- Create a path between the on-premises or AWS instance and Azure Database for MySQL if the workload is behind firewalls or other network security layers.
-
-## Begin the migration process
-
-1. To start the migration process, sign in to MySQL Workbench, and then select the home icon.
-2. In the left-hand navigation bar, select the Migration Wizard icon, as shown in the screenshot below.
-
- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/begin-the-migration.png" alt-text="MySQL Workbench start screen":::
-
- The **Overview** page of the Migration Wizard is displayed, as shown below.
-
- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/migration-wizard-welcome.png" alt-text="MySQL Workbench Migration Wizard welcome page":::
-
-3. Determine if you have an ODBC driver for MySQL Server installed by selecting **Open ODBC Administrator**.
-
-   In our case, on the **Drivers** tab, you'll notice that there are already two MySQL Server ODBC drivers installed.
-
- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/obdc-administrator-page.png" alt-text="ODBC Data Source Administrator page":::
-
- If a MySQL ODBC driver isn't installed, use the MySQL Installer you used to install MySQL Workbench to install the driver. For more information about MySQL ODBC driver installation, see the following resources:
-
- - [MySQL :: MySQL Connector/ODBC Developer Guide :: 4.1 Installing Connector/ODBC on Windows](https://dev.mysql.com/doc/connector-odbc/en/connector-odbc-installation-binary-windows.html)
- - [ODBC Driver for MySQL: How to Install and Set up Connection (Step-by-step) ΓÇô {coding}Sight (codingsight.com)](https://codingsight.com/install-and-configure-odbc-drivers-for-mysql/)
-
-4. Close the **ODBC Data Source Administrator** dialog box, and then continue with the migration process.
-
-## Configure source database server connection parameters
-
-1. On the **Overview** page, select **Start Migration**.
-
- The **Source Selection** page appears. Use this page to provide information about the RDBMS you're migrating from and the parameters for the connection.
-
-2. In the **Database System** field, select **MySQL**.
-3. In the **Stored Connection** field, select one of the saved connection settings for that RDBMS.
-
- You can save connections by marking the checkbox at the bottom of the page and providing a name of your preference.
-
-4. In the **Connection Method** field, select **Standard TCP/IP**.
-5. In the **Hostname** field, specify the name of your source database server.
-6. In the **Port** field, specify **3306**, and then enter the username and password for connecting to the server.
-7. In the **Database** field, enter the name of the database you want to migrate if you know it; otherwise leave this field blank.
-8. Select **Test Connection** to check the connection to your MySQL Server instance.
-
-   If you've entered the correct parameters, a message appears indicating a successful connection attempt.
-
- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/source-connection-parameters.png" alt-text="Source database connection parameters page":::
-
-9. Select **Next**.
-
-## Configure target database server connection parameters
-
-1. On the **Target Selection** page, set the parameters to connect to your target MySQL Server instance using a process similar to that for setting up the connection to the source server.
-2. To verify a successful connection, select **Test Connection**.
-
- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/target-connection-parameters.png" alt-text="Target database connection parameters page":::
-
-3. Select **Next**.
-
-## Select the schemas to migrate
-
-The Migration Wizard communicates with your MySQL Server instance and fetches a list of schemas from the source server.
-
-1. Select **Show logs** to view this operation.
-
- The screenshot below shows how the schemas are being retrieved from the source database server.
-
- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/retrieve-schemas.png" alt-text="Fetch schemas list page":::
-
-2. Select **Next** to verify that all the schemas were successfully fetched.
-
- The screenshot below shows the list of fetched schemas.
-
- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/schemas-selection.png" alt-text="Schemas selection page":::
-
- You can only migrate schemas that appear in this list.
-
-3. Select the schemas that you want to migrate, and then select **Next**.
-
-## Object migration
-
-Next, specify the object(s) that you want to migrate.
-
-1. Select **Show Selection**, and then, under **Available Objects**, select and add the objects that you want to migrate.
-
- When you've added the objects, they'll appear under **Objects to Migrate**, as shown in the screenshot below.
-
- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/source-objects.png" alt-text="Source objects selection page":::
-
-   In this scenario, we've selected all table objects.
-
-2. Select **Next**.
-
-## Edit data
-
-In this section, you have the option of editing the objects that you want to migrate.
-
-1. On the **Manual Editing** page, notice the **View** drop-down menu in the top-right corner.
-
- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/manual-editing.png" alt-text="Manual Editing selection page":::
-
- The **View** drop-down box includes three items:
-
-   - **All Objects** – Displays all objects. With this option, you can manually edit the generated SQL before applying it to the target database server. To do this, select the object, and then select **Show Code and Messages**. You can see (and edit!) the generated MySQL code that corresponds to the selected object.
-   - **Migration problems** – Displays any problems that occurred during the migration, which you can review and verify.
-   - **Column Mapping** – Displays column mapping information. You can use this view to edit the names and mappings of columns in the target object.
-
-2. Select **Next**.
-
-## Create the target database
-
-1. Select the **Create schema in target RDBMS** check box.
-
- You can also choose to keep already existing schemas, so they won't be modified or updated.
-
- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/create-target-database.png" alt-text="Target Creation Options page":::
-
- In this article, we've chosen to create the schema in target RDBMS, but you can also select the **Create a SQL script file** check box to save the file on your local computer or for other purposes.
-
-2. Select **Next**.
-
-## Run the MySQL script to create the database objects
-
-Since we've elected to create the schema in the target RDBMS, the migrated SQL script will be executed in the target MySQL server. You can view its progress as shown in the screenshot below:
--
-1. After the creation of the schemas and their objects completes, select **Next**.
-
-   On the **Create Target Results** page, you're presented with a list of the objects created and notification of any errors that were encountered while creating them, as shown in the following screenshot.
-
- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/create-target-results.png" alt-text="Create Target Results page":::
-
-2. Review the detail on this page to verify that everything completed as intended.
-
-   For this article, we don't have any errors. If you need to address any error messages, you can edit the migration script:
-
-3. In the **Object** box, select the object that you want to edit.
-4. Under **SQL CREATE script for selected object**, modify your SQL script, and then select **Apply** to save the changes.
-5. Select **Recreate Objects** to run the script including your changes.
-
-   If the script fails, you may need to edit the generated script. You can then manually fix the SQL script and run everything again. In this article, we're not changing anything, so we'll leave the script as it is.
-
-6. Select **Next**.
-
-## Transfer data
-
-This part of the process moves data from the source MySQL Server database instance into your newly created target MySQL database instance. Use the **Data Transfer Setup** page to configure this process.
--
-This page provides options for setting up the data transfer. For the purposes of this article, we'll accept the default values.
-
-1. To begin the actual process of transferring data, select **Next**.
-
- The progress of the data transfer process appears as shown in the following screenshot.
-
- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/bulk-data-transfer.png" alt-text="Bulk Data Transfer page":::
-
- > [!NOTE]
- > The duration of the data transfer process is directly related to the size of the database you're migrating. The larger the source database, the longer the process will take, potentially up to a few hours for larger databases.
-
-2. After the transfer completes, select **Next**.
-
- The **Migration Report** page appears, providing a report summarizing the whole process, as shown on the screenshot below:
-
- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/migration-report.png" alt-text="Migration Progress Report page":::
-
-3. Select **Finish** to close the Migration Wizard.
-
- The migration is now successfully completed.
-
-## Verify consistency of the migrated schemas and tables
-
-1. Next, log into your MySQL target database instance to verify that the migrated schemas and tables are consistent with your MySQL source database.
-
- In our case, you can see that all schemas (sakila, moda, items, customer, clothes, world, and world_x) from the Amazon RDS for MySQL: **MyjolieDB** database have been successfully migrated to the Azure Database for MySQL: **azmysql** instance.
-
-2. To verify the table and rows counts, run the following query on both instances:
-
-   `SELECT COUNT(*) FROM sakila.actor;`
-
- You can see from the screenshot below that the row count for Amazon RDS MySQL is 200, which matches the Azure Database for MySQL instance.
-
- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/table-row-size-source.png" alt-text="Table and Row size source database":::
-
- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/table-row-size-target.png" alt-text="Table and Row size target database":::
-
-   While you can run the above query on every single schema and table, that will be quite a bit of work if you're dealing with hundreds of thousands or even millions of tables. You can use the queries below to verify the schema (database) and table size instead.
-
-3. To check the database size, run the following query:
-
- ```
- SELECT table_schema AS "Database",
- ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS "Size (MB)"
- FROM information_schema.TABLES
- GROUP BY table_schema;
- ```
-
-4. To check the table size, run the following query:
-
- ```
- SELECT table_name AS "Table",
- ROUND(((data_length + index_length) / 1024 / 1024), 2) AS "Size (MB)"
- FROM information_schema.TABLES
- WHERE table_schema = "database_name"
- ORDER BY (data_length + index_length) DESC;
- ```
-
-   You can see from the screenshots below that the schema (database) size in the source Amazon RDS MySQL instance is the same as that in the target Azure Database for MySQL instance.
-
- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/database-size-source.png" alt-text="Database size source database":::
-
- :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/database-size-target.png" alt-text="Database size target database":::
-
- Since the schema (database) sizes are the same in both instances, it's not really necessary to check individual table sizes. In any case, you can always use the above query to check your table sizes, as necessary.
-
-   You've now confirmed that your migration completed successfully.
-
-## Next steps
-
-- For more information about migrating databases to Azure Database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
-- View the video [Easily migrate MySQL/PostgreSQL apps to Azure managed service](https://medius.studios.ms/Embed/Video/THR2201?sid=THR2201), which contains a demo showing how to migrate MySQL apps to Azure Database for MySQL.
mysql How To Stop Start Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-stop-start-server.md
- Title: Stop/start - Azure portal - Azure Database for MySQL server
-description: This article describes how to stop/start operations in Azure Database for MySQL.
- Previously updated: 09/21/2020
-# Stop/Start an Azure Database for MySQL
--
-> [!IMPORTANT]
-> When you **Stop** the server, it remains stopped for up to 7 days. If you do not manually **Start** it during this time, the server is automatically started at the end of the 7 days. You can choose to **Stop** it again if you are not using the server.
-
-This article provides a step-by-step procedure for stopping and starting a single server.
-
-## Prerequisites
-
-To complete this how-to guide, you need:
-
-- An Azure Database for MySQL Single Server.
-
-> [!NOTE]
-> Refer to the limitations of the [stop/start](concepts-servers.md#limitations-of-stopstart-operation) operation.
-
-## How to stop/start the Azure Database for MySQL using the Azure portal
-
-### Stop a running server
-
-1. In the [Azure portal](https://portal.azure.com/), choose your MySQL server that you want to stop.
-
-2. From the **Overview** page, click the **Stop** button in the toolbar.
-
- :::image type="content" source="./media/howto-stop-start-server/mysql-stop-server.png" alt-text="Azure Database for MySQL Stop server":::
-
- > [!NOTE]
- > Once the server is stopped, the other management operations are not available for the single server.
-
-### Start a stopped server
-
-1. In the [Azure portal](https://portal.azure.com/), choose your single server that you want to start.
-
-2. From the **Overview** page, click the **Start** button in the toolbar.
-
- :::image type="content" source="./media/howto-stop-start-server/mysql-start-server.png" alt-text="Azure Database for MySQL start server":::
-
- > [!NOTE]
- > Once the server is started, all management operations are now available for the single server.
-
-## How to stop/start the Azure Database for MySQL using the Azure CLI
-
-### Stop a running server
-
-To stop a running server, run the following command:
-
- ```azurecli-interactive
- az mysql server stop --name <server-name> -g <resource-group-name>
- ```
- > [!NOTE]
- > Once the server is stopped, the other management operations are not available for the single server.
-
-### Start a stopped server
-
-To start a stopped server, run the following command:
-
- ```azurecli-interactive
- az mysql server start --name <server-name> -g <resource-group-name>
- ```
- > [!NOTE]
- > Once the server is started, all management operations are now available for the single server.
-
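-To confirm the server state after a stop or start operation, you can query it with the CLI. A hedged sketch, assuming the single server resource reports its state in the `userVisibleState` property (verify against the output of `az mysql server show` for your CLI version):
-
-```azurecli-interactive
-az mysql server show --name <server-name> -g <resource-group-name> --query userVisibleState -o tsv
-```
-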
-## Next steps
-Learn about [how to create alerts on metrics](howto-alert-on-metric.md).
mysql Howto Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-alert-on-metric.md
- Title: Configure metric alerts - Azure portal - Azure Database for MySQL
-description: This article describes how to configure and access metric alerts for Azure Database for MySQL from the Azure portal.
- Previously updated: 3/18/2020
-# Use the Azure portal to set up alerts on metrics for Azure Database for MySQL
--
-This article shows you how to set up Azure Database for MySQL alerts using the Azure portal. You can receive an alert based on monitoring metrics for your Azure services.
-
-The alert triggers when the value of a specified metric crosses a threshold you assign. The alert triggers both when the condition is first met and again when that condition is no longer being met.
-
-You can configure an alert to do the following actions when it triggers:
-* Send email notifications to the service administrator and co-administrators.
-* Send email to additional email addresses that you specify.
-* Call a webhook.
-
-You can configure and get information about alert rules using:
-* [Azure portal](../azure-monitor/alerts/alerts-metric.md#create-with-azure-portal)
-* [Azure CLI](../azure-monitor/alerts/alerts-metric.md#with-azure-cli)
-* [Azure Monitor REST API](/rest/api/monitor/metricalerts)
-
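-If you take the Azure CLI route listed above, a minimal sketch follows. The server, resource group, and action group names are hypothetical placeholders; the condition mirrors the "Storage percent greater than 85" example used in the portal steps below:
-
-```azurecli-interactive
-# Sketch only: fire when average storage percent exceeds 85 over a 30-minute window.
-serverId=$(az mysql server show --name mydemoserver --resource-group myresourcegroup --query id -o tsv)
-az monitor metrics alert create \
-  --name storage-percent-alert \
-  --resource-group myresourcegroup \
-  --scopes $serverId \
-  --condition "avg storage_percent > 85" \
-  --window-size 30m \
-  --evaluation-frequency 5m \
-  --action myactiongroup
-```
-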
-## Create an alert rule on a metric from the Azure portal
-1. In the [Azure portal](https://portal.azure.com/), select the Azure Database for MySQL server you want to monitor.
-
-2. Under the **Monitoring** section of the sidebar, select **Alerts** as shown:
-
- :::image type="content" source="./media/howto-alert-on-metric/2-alert-rules.png" alt-text="Select Alert Rules":::
-
-3. Select **Add metric alert** (+ icon).
-
-4. The **Create rule** page opens as shown below. Fill in the required information:
-
- :::image type="content" source="./media/howto-alert-on-metric/4-add-rule-form.png" alt-text="Add metric alert form":::
-
-5. Within the **Condition** section, select **Add condition**.
-
-6. Select a metric from the list of signals to be alerted on. In this example, select "Storage percent".
-
- :::image type="content" source="./media/howto-alert-on-metric/6-configure-signal-logic.png" alt-text="Select metric":::
-
-7. Configure the alert logic including the **Condition** (ex. "Greater than"), **Threshold** (ex. 85 percent), **Time Aggregation**, **Period** of time the metric rule must be satisfied before the alert triggers (ex. "Over the last 30 minutes"), and **Frequency**.
-
- Select **Done** when complete.
-
- :::image type="content" source="./media/howto-alert-on-metric/7-set-threshold-time.png" alt-text="Select metric 2":::
-
-8. Within the **Action Groups** section, select **Create New** to create a new group to receive notifications on the alert.
-
-9. Fill out the "Add action group" form with a name, short name, subscription, and resource group.
-
-10. Configure an **Email/SMS/Push/Voice** action type.
-
- Choose "Email Azure Resource Manager Role" to select subscription Owners, Contributors, and Readers to receive notifications.
-
- Optionally, provide a valid URI in the **Webhook** field if you want it called when the alert fires.
-
- Select **OK** when completed.
-
- :::image type="content" source="./media/howto-alert-on-metric/10-action-group-type.png" alt-text="Action group":::
-
-11. Specify an Alert rule name, Description, and Severity.
-
- :::image type="content" source="./media/howto-alert-on-metric/11-name-description-severity.png" alt-text="Action group 2":::
-
-12. Select **Create alert rule** to create the alert.
-
- Within a few minutes, the alert is active and triggers as previously described.
-
-## Manage your alerts
-Once you have created an alert, you can select it and do the following actions:
-
-* View a graph showing the metric threshold and the actual values from the previous day relevant to this alert.
-* **Edit** or **Delete** the alert rule.
-* **Disable** or **Enable** the alert, if you want to temporarily stop or resume receiving notifications.
--
-## Next steps
-* Learn more about [configuring webhooks in alerts](../azure-monitor/alerts/alerts-webhooks.md).
-* Get an [overview of metrics collection](../azure-monitor/data-platform.md) to make sure your service is available and responsive.
mysql Howto Auto Grow Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-auto-grow-storage-cli.md
- Title: Auto grow storage - Azure CLI - Azure Database for MySQL
-description: This article describes how you can enable auto grow storage using the Azure CLI in Azure Database for MySQL.
- Previously updated: 3/18/2020
-# Auto-grow Azure Database for MySQL storage using the Azure CLI
-
-This article describes how you can configure an Azure Database for MySQL server storage to grow without impacting the workload.
-
-When the server [reaches the storage limit](./concepts-pricing-tiers.md#reaching-the-storage-limit), it is set to read-only. If storage auto grow is enabled, then for servers with less than 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage is below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. Maximum storage limits as specified [here](./concepts-pricing-tiers.md#storage) apply.
-
-## Prerequisites
-
-To complete this how-to guide:
-
-- You need an [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-cli.md).
-- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-
-## Enable MySQL server storage auto-grow
-
-Enable server auto-grow storage on an existing server with the following command:
-
-```azurecli-interactive
-az mysql server update --name mydemoserver --resource-group myresourcegroup --auto-grow Enabled
-```
-
-Enable server auto-grow storage while creating a new server with the following command:
-
-```azurecli-interactive
-az mysql server create --resource-group myresourcegroup --name mydemoserver --auto-grow Enabled --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2 --version 5.7
-```
-
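-To verify the setting, a hedged sketch that reads the storage profile back (confirm the property path against your CLI version's output):
-
-```azurecli-interactive
-az mysql server show --name mydemoserver --resource-group myresourcegroup --query storageProfile.storageAutogrow -o tsv
-```
-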
-## Next steps
-
-Learn about [how to create alerts on metrics](howto-alert-on-metric.md).
mysql Howto Auto Grow Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-auto-grow-storage-portal.md
- Title: Auto grow storage - Azure portal - Azure Database for MySQL
-description: This article describes how you can enable auto grow storage for Azure Database for MySQL using Azure portal
- Previously updated: 3/18/2020
-# Auto grow storage in Azure Database for MySQL using the Azure portal
-
-This article describes how you can configure an Azure Database for MySQL server storage to grow without impacting the workload.
-
-When a server reaches the allocated storage limit, the server is marked as read-only. However, if you enable storage auto grow, the server storage increases to accommodate the growing data. For servers with less than 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage is below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. Maximum storage limits as specified [here](./concepts-pricing-tiers.md#storage) apply.
-
-## Prerequisites
-To complete this how-to guide, you need:
-- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md)
-
-## Enable storage auto grow
-
-Follow these steps to set MySQL server storage auto grow:
-
-1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL server.
-
-2. On the MySQL server page, under the **Settings** heading, click **Pricing tier** to open the Pricing tier page.
-
-3. In the Auto-growth section, select **Yes** to enable storage auto grow.
-
- :::image type="content" source="./media/howto-auto-grow-storage-portal/3-auto-grow.png" alt-text="Azure Database for MySQL - Settings_Pricing_tier - Auto-growth":::
-
-4. Click **OK** to save the changes.
-
-5. A notification will confirm that auto grow was successfully enabled.
-
- :::image type="content" source="./media/howto-auto-grow-storage-portal/5-auto-grow-success.png" alt-text="Azure Database for MySQL - auto-growth success":::
-
-## Next steps
-
-Learn about [how to create alerts on metrics](howto-alert-on-metric.md).
mysql Howto Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-auto-grow-storage-powershell.md
- Title: Auto grow storage - Azure PowerShell - Azure Database for MySQL
-description: This article describes how you can enable auto grow storage using PowerShell in Azure Database for MySQL.
- Previously updated: 4/28/2020
-# Auto grow storage in Azure Database for MySQL server using PowerShell
--
-This article describes how you can configure an Azure Database for MySQL server storage to grow
-without impacting the workload.
-
-Storage auto grow prevents your server from
-[reaching the storage limit](./concepts-pricing-tiers.md#reaching-the-storage-limit) and
-becoming read-only. For servers with 100 GB or less of provisioned storage, the size is increased by
-5 GB when the free space is below 10%. For servers with more than 100 GB of provisioned storage, the
-size is increased by 5% when the free space is below 10 GB. Maximum storage limits apply as
-specified in the storage section of the
-[Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md#storage).
-
-> [!IMPORTANT]
-> Remember that storage can only be scaled up, not down.
-
-## Prerequisites
-
-To complete this how-to guide, you need:
-
-- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
-  [Azure Cloud Shell](https://shell.azure.com/) in the browser
-- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)
-
-> [!IMPORTANT]
-> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az
-> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`.
-> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az
-> PowerShell module releases and available natively from within Azure Cloud Shell.
-
-If you choose to use PowerShell locally, connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet.
-
-## Enable MySQL server storage auto grow
-
-Enable server auto grow storage on an existing server with the following command:
-
-```azurepowershell-interactive
-Update-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -StorageAutogrow Enabled
-```
-
-Enable server auto grow storage while creating a new server with the following command:
-
-```azurepowershell-interactive
-$Password = Read-Host -Prompt 'Please enter your password' -AsSecureString
-New-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -Sku GP_Gen5_2 -StorageAutogrow Enabled -Location westus -AdministratorUsername myadmin -AdministratorLoginPassword $Password
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [How to create and manage read replicas in Azure Database for MySQL using PowerShell](howto-read-replicas-powershell.md).
mysql Howto Configure Audit Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-audit-logs-cli.md
- Title: Access audit logs - Azure CLI - Azure Database for MySQL
-description: This article describes how to configure and access the audit logs in Azure Database for MySQL from the Azure CLI.
- Previously updated: 6/24/2020
-# Configure and access audit logs in the Azure CLI
--
-You can configure the [Azure Database for MySQL audit logs](concepts-audit-logs.md) from the Azure CLI.
-
-## Prerequisites
-
-To step through this how-to guide:
-
-- You need an [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md).
-- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-
-## Configure audit logging
-
-> [!IMPORTANT]
-> It is recommended to only log the event types and users required for your auditing purposes to ensure your server's performance is not heavily impacted.
-
-Enable and configure audit logging using the following steps:
-
-1. Turn on audit logs by setting the **audit_log_enabled** parameter to "ON".
- ```azurecli-interactive
- az mysql server configuration set --name audit_log_enabled --resource-group myresourcegroup --server mydemoserver --value ON
- ```
-
-2. Select the [event types](concepts-audit-logs.md#configure-audit-logging) to be logged by updating the **audit_log_events** parameter.
- ```azurecli-interactive
- az mysql server configuration set --name audit_log_events --resource-group myresourcegroup --server mydemoserver --value "ADMIN,CONNECTION"
- ```
-
-3. Add any MySQL users to be excluded from logging by updating the **audit_log_exclude_users** parameter. Specify users by providing their MySQL user name.
- ```azurecli-interactive
- az mysql server configuration set --name audit_log_exclude_users --resource-group myresourcegroup --server mydemoserver --value "azure_superuser"
- ```
-
-4. Add any specific MySQL users to be included for logging by updating the **audit_log_include_users** parameter. Specify users by providing their MySQL user name.
-
- ```azurecli-interactive
- az mysql server configuration set --name audit_log_include_users --resource-group myresourcegroup --server mydemoserver --value "sampleuser"
- ```
-
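-To confirm a parameter value after setting it, you can use the matching `show` command; a minimal sketch:
-
-```azurecli-interactive
-az mysql server configuration show --name audit_log_enabled --resource-group myresourcegroup --server mydemoserver
-```
-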
-## Next steps
-
-- Learn more about [audit logs](concepts-audit-logs.md) in Azure Database for MySQL
-- Learn how to configure audit logs in the [Azure portal](howto-configure-audit-logs-portal.md)
mysql Howto Configure Audit Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-audit-logs-portal.md
- Title: Access audit logs - Azure portal - Azure Database for MySQL
-description: This article describes how to configure and access the audit logs in Azure Database for MySQL from the Azure portal.
- Previously updated: 9/29/2020
-# Configure and access audit logs for Azure Database for MySQL in the Azure portal
--
-You can configure the [Azure Database for MySQL audit logs](concepts-audit-logs.md) and diagnostic settings from the Azure portal.
-
-## Prerequisites
-
-To step through this how-to guide, you need:
-
-- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md)
-
-## Configure audit logging
-
->[!IMPORTANT]
-> It is recommended to only log the event types and users required for your auditing purposes to ensure your server's performance is not heavily impacted.
-
-Enable and configure audit logging.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. Select your Azure Database for MySQL server.
-
-1. Under the **Settings** section in the sidebar, select **Server parameters**.
- :::image type="content" source="./media/howto-configure-audit-logs-portal/server-parameters.png" alt-text="Server parameters":::
-
-1. Update the **audit_log_enabled** parameter to ON.
- :::image type="content" source="./media/howto-configure-audit-logs-portal/audit-log-enabled.png" alt-text="Enable audit logs":::
-
-1. Select the [event types](concepts-audit-logs.md#configure-audit-logging) to be logged by updating the **audit_log_events** parameter.
- :::image type="content" source="./media/howto-configure-audit-logs-portal/audit-log-events.png" alt-text="Audit log events":::
-
-1. Add any MySQL users to be included or excluded from logging by updating the **audit_log_exclude_users** and **audit_log_include_users** parameters. Specify users by providing their MySQL user name.
- :::image type="content" source="./media/howto-configure-audit-logs-portal/audit-log-exclude-users.png" alt-text="Audit log exclude users":::
-
-1. Once you have changed the parameters, you can click **Save**. Or you can **Discard** your changes.
- :::image type="content" source="./media/howto-configure-audit-logs-portal/save-parameters.png" alt-text="Save":::
-
-## Set up diagnostic logs
-
-1. Under the **Monitoring** section in the sidebar, select **Diagnostic settings**.
-
-1. Select **+ Add diagnostic setting**.
-
-1. Provide a diagnostic setting name.
-
-1. Specify which data sinks to send the audit logs to (storage account, event hub, and/or Log Analytics workspace).
-
-1. Select "MySqlAuditLogs" as the log type.
-
-1. Once you've configured the data sinks to pipe the audit logs to, you can click **Save**.
-
-1. Access the audit logs by exploring them in the data sinks you configured. It may take up to 10 minutes for the logs to appear.
-
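-If you'd rather script the diagnostic setting, a hedged Azure CLI sketch follows. The names are placeholders, and it assumes a Log Analytics workspace as the only sink:
-
-```azurecli-interactive
-# Sketch only: route MySqlAuditLogs to a Log Analytics workspace.
-az monitor diagnostic-settings create \
-  --name mysql-audit-diagnostics \
-  --resource $(az mysql server show --name mydemoserver --resource-group myresourcegroup --query id -o tsv) \
-  --workspace <log-analytics-workspace-resource-id> \
-  --logs '[{"category": "MySqlAuditLogs", "enabled": true}]'
-```
-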
-## Next steps
-
-- Learn more about [audit logs](concepts-audit-logs.md) in Azure Database for MySQL
-- Learn how to configure audit logs in the [Azure CLI](howto-configure-audit-logs-cli.md)
mysql Howto Configure Privatelink Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-privatelink-cli.md
- Title: Private Link - Azure CLI - Azure Database for MySQL
-description: Learn how to configure private link for Azure Database for MySQL from Azure CLI
- Previously updated: 01/09/2020
-# Create and manage Private Link for Azure Database for MySQL using CLI
--
-A Private Endpoint is the fundamental building block for private link in Azure. It enables Azure resources, like Virtual Machines (VMs), to communicate privately with private link resources. In this article, you will learn how to use the Azure CLI to create a VM in an Azure Virtual Network and an Azure Database for MySQL server with an Azure private endpoint.
-
-> [!NOTE]
-> The private link feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers.
-
-- This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-
-## Create a resource group
-
-Before you can create any resource, you have to create a resource group to host the Virtual Network. Create a resource group with [az group create](/cli/azure/group). This example creates a resource group named *myResourceGroup* in the *westeurope* location:
-
-```azurecli-interactive
-az group create --name myResourceGroup --location westeurope
-```
-
-## Create a Virtual Network
-
-Create a Virtual Network with [az network vnet create](/cli/azure/network/vnet). This example creates a default Virtual Network named *myVirtualNetwork* with one subnet named *mySubnet*:
-
-```azurecli-interactive
-az network vnet create \
- --name myVirtualNetwork \
- --resource-group myResourceGroup \
- --subnet-name mySubnet
-```
-
-## Disable subnet private endpoint policies
-
-Azure deploys resources to a subnet within a virtual network, so you need to create or update the subnet to disable private endpoint [network policies](../private-link/disable-private-endpoint-network-policy.md). Update a subnet configuration named *mySubnet* with [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update):
-
-```azurecli-interactive
-az network vnet subnet update \
- --name mySubnet \
- --resource-group myResourceGroup \
- --vnet-name myVirtualNetwork \
- --disable-private-endpoint-network-policies true
-```
-
-## Create the VM
-
-Create a VM with az vm create. When prompted, provide a password to be used as the sign-in credentials for the VM. This example creates a VM named *myVm*:
-
-```azurecli-interactive
-az vm create \
- --resource-group myResourceGroup \
- --name myVm \
- --image Win2019Datacenter
-```
-
-> [!Note]
-> Note the public IP address of the VM in the command output. You use this address to connect to the VM from the internet in the next step.
-
-## Create an Azure Database for MySQL server
-
-Create an Azure Database for MySQL server with the az mysql server create command. Remember that the name of your MySQL server must be unique across Azure, so replace the placeholder value in brackets with your own unique value:
-
-```azurecli-interactive
-# Create a server in the resource group
-
-az mysql server create \
-  --name mydemoserver \
-  --resource-group myResourcegroup \
-  --location westeurope \
-  --admin-user mylogin \
-  --admin-password <server_admin_password> \
-  --sku-name GP_Gen5_2
-```
-
-> [!NOTE]
-> In some cases the Azure Database for MySQL and the VNet-subnet are in different subscriptions. In these cases you must ensure the following configurations:
->
-> - Make sure that both subscriptions have the **Microsoft.DBforMySQL** resource provider registered. For more information, refer to [resource-manager-registration][resource-manager-portal]
-
-## Create the Private Endpoint
-
-Create a private endpoint for the MySQL server in your Virtual Network:
-
-```azurecli-interactive
-az network private-endpoint create \
- --name myPrivateEndpoint \
- --resource-group myResourceGroup \
- --vnet-name myVirtualNetwork \
- --subnet mySubnet \
- --private-connection-resource-id $(az resource show -g myResourcegroup -n mydemoserver --resource-type "Microsoft.DBforMySQL/servers" --query "id" -o tsv) \
- --group-id mysqlServer \
- --connection-name myConnection
- ```
-
-## Configure the Private DNS Zone
-
-Create a Private DNS Zone for MySQL server domain and create an association link with the Virtual Network.
-
-```azurecli-interactive
-az network private-dns zone create --resource-group myResourceGroup \
- --name "privatelink.mysql.database.azure.com"
-az network private-dns link vnet create --resource-group myResourceGroup \
-  --zone-name "privatelink.mysql.database.azure.com" \
- --name MyDNSLink \
- --virtual-network myVirtualNetwork \
- --registration-enabled false
-
-# Query for the network interface ID
-networkInterfaceId=$(az network private-endpoint show --name myPrivateEndpoint --resource-group myResourceGroup --query 'networkInterfaces[0].id' -o tsv)
-
-az resource show --ids $networkInterfaceId --api-version 2019-04-01 -o json
-# Copy the content for privateIPAddress and FQDN matching the Azure database for MySQL name
-
-# Create DNS records
-az network private-dns record-set a create --name myserver --zone-name privatelink.mysql.database.azure.com --resource-group myResourceGroup
-az network private-dns record-set a add-record --record-set-name myserver --zone-name privatelink.mysql.database.azure.com --resource-group myResourceGroup -a <Private IP Address>
-```
-
-> [!NOTE]
-> The FQDN in the customer DNS setting does not resolve to the private IP configured. You will have to set up a DNS zone for the configured FQDN as shown [here](../dns/dns-operations-recordsets-portal.md).
-
-## Connect to a VM from the internet
-
-Connect to the VM *myVm* from the internet as follows:
-
-1. In the portal's search bar, enter *myVm*.
-
-1. Select the **Connect** button. After selecting the **Connect** button, **Connect to virtual machine** opens.
-
-1. Select **Download RDP File**. Azure creates a Remote Desktop Protocol (*.rdp*) file and downloads it to your computer.
-
-1. Open the downloaded *.rdp* file.
-
- 1. If prompted, select **Connect**.
-
- 1. Enter the username and password you specified when creating the VM.
-
- > [!NOTE]
- > You may need to select **More choices** > **Use a different account**, to specify the credentials you entered when you created the VM.
-
-1. Select **OK**.
-
-1. You may receive a certificate warning during the sign-in process. If you receive a certificate warning, select **Yes** or **Continue**.
-
-1. Once the VM desktop appears, minimize it to go back to your local desktop.
-
-## Access the MySQL server privately from the VM
-
-1. In the Remote Desktop of *myVm*, open PowerShell.
-
-2. Enter `nslookup mydemomysqlserver.privatelink.mysql.database.azure.com`.
-
- You'll receive a message similar to this:
-
- ```azurepowershell
- Server: UnKnown
- Address: 168.63.129.16
- Non-authoritative answer:
- Name: mydemomysqlserver.privatelink.mysql.database.azure.com
- Address: 10.1.3.4
- ```
-
-3. Test the private link connection for the MySQL server using any available client. The following example uses [MySQL Workbench](https://dev.mysql.com/doc/workbench/en/wb-installing-windows.html) to do the operation.
-
-4. In **New connection**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
-    | Connection Name| Enter a connection name of your choice.|
-    | Hostname | Enter *mydemoserver.privatelink.mysql.database.azure.com* |
-    | Username | Enter the username as *username@servername*, which was provided during the MySQL server creation. |
-    | Password | Enter the password provided during the MySQL server creation. |
- ||
-
-5. Select Connect.
-
-6. Browse databases from the left menu.
-
-7. (Optionally) Create or query information from the MySQL database.
-
-8. Close the remote desktop connection to myVm.
-
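-As an alternative to MySQL Workbench, you can test the private connection from the VM with the mysql command-line client, if it's installed. A minimal sketch; the host and user names mirror the placeholders above:
-
-```
-mysql -h mydemoserver.privatelink.mysql.database.azure.com -u username@servername -p
-```
-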
-## Clean up resources
-
-When no longer needed, you can use az group delete to remove the resource group and all the resources it contains:
-
-```azurecli-interactive
-az group delete --name myResourceGroup --yes
-```
-
-## Next steps
-
-- Learn more about [What is Azure private endpoint](../private-link/private-endpoint-overview.md)
-
-<!-- Link references, to text, Within this same GitHub repo. -->
-[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
mysql Howto Configure Privatelink Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-privatelink-portal.md
- Title: Private Link - Azure portal - Azure Database for MySQL
-description: Learn how to configure private link for Azure Database for MySQL from Azure portal
- Previously updated: 01/09/2020
-# Create and manage Private Link for Azure Database for MySQL using Portal
--
-A Private Endpoint is the fundamental building block for private link in Azure. It enables Azure resources, like Virtual Machines (VMs), to communicate privately with private link resources. In this article, you will learn how to use the Azure portal to create a VM in an Azure Virtual Network and an Azure Database for MySQL server with an Azure private endpoint.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-> [!NOTE]
-> The private link feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers.
-
-## Sign in to Azure
-Sign in to the [Azure portal](https://portal.azure.com).
-
-## Create an Azure VM
-
-In this section, you will create the virtual network and the subnet to host the VM that is used to access your Private Link resource (a MySQL server in Azure).
-
-### Create the virtual network
-In this section, you will create a Virtual Network and the subnet to host the VM that is used to access your Private Link resource.
-
-1. On the upper-left side of the screen, select **Create a resource** > **Networking** > **Virtual network**.
-2. In **Create virtual network**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | Name | Enter *MyVirtualNetwork*. |
- | Address space | Enter *10.1.0.0/16*. |
- | Subscription | Select your subscription.|
- | Resource group | Select **Create new**, enter *myResourceGroup*, then select **OK**. |
- | Location | Select **West Europe**.|
- | Subnet - Name | Enter *mySubnet*. |
- | Subnet - Address range | Enter *10.1.0.0/24*. |
- |||
-3. Leave the rest as default and select **Create**.
-
-### Create Virtual Machine
-
-1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Compute** > **Virtual Machine**.
-
-2. In **Create a virtual machine - Basics**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | **PROJECT DETAILS** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. You created this in the previous section. |
- | **INSTANCE DETAILS** | |
- | Virtual machine name | Enter *myVm*. |
- | Region | Select **West Europe**. |
- | Availability options | Leave the default **No infrastructure redundancy required**. |
- | Image | Select **Windows Server 2019 Datacenter**. |
- | Size | Leave the default **Standard DS1 v2**. |
- | **ADMINISTRATOR ACCOUNT** | |
- | Username | Enter a username of your choosing. |
- | Password | Enter a password of your choosing. The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/windows/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).|
- | Confirm Password | Reenter password. |
- | **INBOUND PORT RULES** | |
- | Public inbound ports | Leave the default **None**. |
- | **SAVE MONEY** | |
- | Already have a Windows license? | Leave the default **No**. |
- |||
-
-1. Select **Next: Disks**.
-
-1. In **Create a virtual machine - Disks**, leave the defaults and select **Next: Networking**.
-
-1. In **Create a virtual machine - Networking**, select this information:
-
- | Setting | Value |
- | - | -- |
- | Virtual network | Leave the default **MyVirtualNetwork**. |
- | Address space | Leave the default **10.1.0.0/24**.|
- | Subnet | Leave the default **mySubnet (10.1.0.0/24)**.|
- | Public IP | Leave the default **(new) myVm-ip**. |
- | Public inbound ports | Select **Allow selected ports**. |
- | Select inbound ports | Select **HTTP** and **RDP**.|
- |||
--
-1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-
-1. When you see the **Validation passed** message, select **Create**.
-
-## Create an Azure Database for MySQL
-
-In this section, you will create an Azure Database for MySQL server in Azure.
-
-1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Databases** > **Azure Database for MySQL**.
-
-1. In **Azure Database for MySQL**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. You created this in the previous section.|
- | **Server details** | |
- |Server name | Enter *myServer*. If this name is taken, create a unique name.|
- | Admin username| Enter an administrator name of your choosing. |
- | Password | Enter a password of your choosing. The password must be at least 8 characters long and meet the defined requirements. |
- | Location | Select an Azure region where you want your MySQL server to reside. |
- |Version | Select the database version of the MySQL server that is required.|
- | Compute + Storage| Select the pricing tier that is needed for the server based on the workload. |
- |||
-
-7. Select **OK**.
-8. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-9. When you see the **Validation passed** message, select **Create**.
-
-> [!NOTE]
-> In some cases, the Azure Database for MySQL server and the VNet subnet are in different subscriptions. In these cases, you must ensure the following configuration:
-> - Make sure that both subscriptions have the **Microsoft.DBforMySQL** resource provider registered; a CLI sketch follows this note. For more information, see [resource-manager-registration][resource-manager-portal].
-
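-To register the **Microsoft.DBforMySQL** resource provider with the Azure CLI (a minimal sketch; run it once in each subscription involved):
-
-```azurecli-interactive
-az provider register --namespace Microsoft.DBforMySQL
-az provider show --namespace Microsoft.DBforMySQL --query registrationState --output tsv
-```
-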
-## Create a private endpoint
-
-In this section, you will add a private endpoint to the MySQL server you created earlier.
-
-1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Networking** > **Private Link**.
-
-2. In **Private Link Center - Overview**, on the option to **Build a private connection to a service**, select **Start**.
-
- :::image type="content" source="media/concepts-data-access-and-security-private-link/privatelink-overview.png" alt-text="Private Link overview":::
-
-1. In **Create a private endpoint - Basics**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. You created this in the previous section.|
- | **Instance Details** | |
- | Name | Enter *myPrivateEndpoint*. If this name is taken, create a unique name. |
- |Region|Select **West Europe**.|
- |||
-
-5. Select **Next: Resource**.
-6. In **Create a private endpoint - Resource**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- |Connection method | Select **Connect to an Azure resource in my directory**.|
- | Subscription| Select your subscription. |
- | Resource type | Select **Microsoft.DBforMySQL/servers**. |
- | Resource |Select *myServer*|
- |Target sub-resource |Select *mysqlServer*|
- |||
-7. Select **Next: Configuration**.
-8. In **Create a private endpoint - Configuration**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- |**NETWORKING**| |
- | Virtual network| Select *MyVirtualNetwork*. |
- | Subnet | Select *mySubnet*. |
- |**PRIVATE DNS INTEGRATION**||
- |Integrate with private DNS zone |Select **Yes**. |
- |Private DNS Zone |Select *(New)privatelink.mysql.database.azure.com* |
- |||
-
- > [!Note]
- > Use the predefined private DNS zone for your service or provide your preferred DNS zone name. Refer to the [Azure services DNS zone configuration](../private-link/private-endpoint-dns.md) for details.
-
-1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-2. When you see the **Validation passed** message, select **Create**.
-
- :::image type="content" source="media/concepts-data-access-and-security-private-link/show-mysql-private-link.png" alt-text="Private Link created":::
-
- > [!NOTE]
- > The FQDN in the customer DNS setting does not resolve to the private IP configured. You will have to set up a DNS zone for the configured FQDN as shown [here](../dns/dns-operations-recordsets-portal.md).
-
-## Connect to a VM using Remote Desktop (RDP)
--
-After you've created **myVm**, connect to it from the internet as follows:
-
-1. In the portal's search bar, enter *myVm*.
-
-1. Select the **Connect** button. After selecting the **Connect** button, **Connect to virtual machine** opens.
-
-1. Select **Download RDP File**. Azure creates a Remote Desktop Protocol (*.rdp*) file and downloads it to your computer.
-
-1. Open the downloaded *.rdp* file.
-
- 1. If prompted, select **Connect**.
-
- 1. Enter the username and password you specified when creating the VM.
-
- > [!NOTE]
- > You may need to select **More choices** > **Use a different account**, to specify the credentials you entered when you created the VM.
-
-1. Select **OK**.
-
-1. You may receive a certificate warning during the sign-in process. If you receive a certificate warning, select **Yes** or **Continue**.
-
-1. Once the VM desktop appears, minimize it to go back to your local desktop.
-
-## Access the MySQL server privately from the VM
-
-1. In the Remote Desktop of *myVM*, open PowerShell.
-
-2. Enter `nslookup myServer.privatelink.mysql.database.azure.com`.
-
- You'll receive a message similar to this:
- ```azurepowershell
- Server: UnKnown
- Address: 168.63.129.16
- Non-authoritative answer:
- Name: myServer.privatelink.mysql.database.azure.com
- Address: 10.1.3.4
- ```
- > [!NOTE]
- > Even if public access is disabled in the firewall settings of Azure Database for MySQL - Single Server, these ping and telnet tests will still succeed, because they verify network connectivity only and are not affected by the firewall settings.
-
-3. Test the private link connection for the MySQL server using any available client. The example below uses [MySQL Workbench](https://dev.mysql.com/doc/workbench/en/wb-installing-windows.html).
-
-4. In **New connection**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | Server type| Select **MySQL**.|
- | Server name| Select *myServer.privatelink.mysql.database.azure.com*. |
- | User name | Enter the username in the form *username@servername*, as provided during the MySQL server creation. |
- |Password |Enter the password provided during the MySQL server creation. |
- |SSL|Select **Required**.|
- |||
-
-5. Select **Connect**.
-
-6. Browse databases from left menu.
-
-7. (Optionally) Create or query information from the MySQL server.
-
-8. Close the remote desktop connection to myVm.
-
-## Clean up resources
-When you're done using the private endpoint, MySQL server, and the VM, delete the resource group and all of the resources it contains:
-
-1. Enter *myResourceGroup* in the **Search** box at the top of the portal and select *myResourceGroup* from the search results.
-2. Select **Delete resource group**.
-3. Enter myResourceGroup for **TYPE THE RESOURCE GROUP NAME** and select **Delete**.
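-
-Alternatively, the resource group and everything in it can be removed with a single Azure CLI command:
-
-```azurecli-interactive
-az group delete --name myResourceGroup --yes
-```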
-
-## Next steps
-
-In this how-to, you created a VM on a virtual network, an Azure Database for MySQL, and a private endpoint for private access. You connected to one VM from the internet and securely communicated to the MySQL server using Private Link. To learn more about private endpoints, see [What is Azure private endpoint](../private-link/private-endpoint-overview.md).
-
-<!-- Link references, to text, Within this same GitHub repo. -->
-[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
mysql Howto Configure Server Logs In Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-server-logs-in-cli.md
- Title: Access slow query logs - Azure CLI - Azure Database for MySQL
-description: This article describes how to access the slow query logs in Azure Database for MySQL by using the Azure CLI.
- Previously updated: 4/13/2020
-# Configure and access slow query logs by using Azure CLI
-
-You can download the Azure Database for MySQL slow query logs by using Azure CLI, the Azure command-line utility.
-
-## Prerequisites
-To step through this how-to guide, you need:
-- [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-cli.md)
-- The [Azure CLI](/cli/azure/install-azure-cli) or Azure Cloud Shell in the browser
-## Configure logging
-You can configure the server to access the MySQL slow query log by taking the following steps:
-1. Turn on slow query logging by setting the **slow\_query\_log** parameter to ON.
-2. Select where to output the logs to using **log\_output**. To send logs to both local storage and Azure Monitor Diagnostic Logs, select **File**. To send logs only to Azure Monitor Logs, select **None**.
-3. Adjust other parameters, such as **long\_query\_time** and **log\_slow\_admin\_statements**.
-
-To learn how to set the value of these parameters through Azure CLI, see [How to configure server parameters](howto-configure-server-parameters-using-cli.md).
-
-For example, the following CLI commands turn on the slow query log, set the long query time to 10 seconds, and turn off logging of slow admin statements. Finally, they list the configuration options for your review.
-```azurecli-interactive
-az mysql server configuration set --name slow_query_log --resource-group myresourcegroup --server mydemoserver --value ON
-az mysql server configuration set --name log_output --resource-group myresourcegroup --server mydemoserver --value FILE
-az mysql server configuration set --name long_query_time --resource-group myresourcegroup --server mydemoserver --value 10
-az mysql server configuration set --name log_slow_admin_statements --resource-group myresourcegroup --server mydemoserver --value OFF
-az mysql server configuration list --resource-group myresourcegroup --server mydemoserver
-```
-
-## List logs for Azure Database for MySQL server
-If **log_output** is configured to "File", you can access logs directly from the server's local storage. To list the available slow query log files for your server, run the [az mysql server-logs list](/cli/azure/mysql/server-logs#az-mysql-server-logs-list) command.
-
-You can list the log files for server **mydemoserver.mysql.database.azure.com** under the resource group **myresourcegroup**. Then direct the list of log files to a text file called **log\_files\_list.txt**.
-```azurecli-interactive
-az mysql server-logs list --resource-group myresourcegroup --server mydemoserver > log_files_list.txt
-```
-## Download logs from the server
-If **log_output** is configured to "File", you can download individual log files from your server with the [az mysql server-logs download](/cli/azure/mysql/server-logs#az-mysql-server-logs-download) command.
-
-Use the following example to download the specific log file for the server **mydemoserver.mysql.database.azure.com** under the resource group **myresourcegroup** to your local environment.
-```azurecli-interactive
-az mysql server-logs download --name 20170414-mydemoserver-mysql.log --resource-group myresourcegroup --server mydemoserver
-```
-
-## Next steps
-- Learn about [slow query logs in Azure Database for MySQL](concepts-server-logs.md).
mysql Howto Configure Server Logs In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-server-logs-in-portal.md
- Title: Access slow query logs - Azure portal - Azure Database for MySQL
-description: This article describes how to configure and access the slow logs in Azure Database for MySQL from the Azure portal.
- Previously updated: 3/15/2021
-# Configure and access slow query logs from the Azure portal
--
-You can configure, list, and download the [Azure Database for MySQL slow query logs](concepts-server-logs.md) from the Azure portal.
-
-## Prerequisites
-The steps in this article require that you have [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md).
-
-## Configure logging
-Configure access to the MySQL slow query log.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-2. Select your Azure Database for MySQL server.
-
-3. Under the **Monitoring** section in the sidebar, select **Server logs**.
- :::image type="content" source="./media/howto-configure-server-logs-in-portal/1-select-server-logs-configure.png" alt-text="Screenshot of Server logs options":::
-
-4. To see the server parameters, select **Click here to enable logs and configure log parameters**.
-
-5. Turn **slow_query_log** to **ON**.
-
-6. Select where to output the logs to using **log_output**. To send logs to both local storage and Azure Monitor Diagnostic Logs, select **File**.
-
-7. Consider setting **long_query_time**, which represents the query time threshold for queries to be collected in the slow query log file. The minimum and default values of long_query_time are 0 and 10, respectively.
-
-8. Adjust other parameters, such as log_slow_admin_statements to log administrative statements. By default, administrative statements are not logged, nor are queries that do not use indexes for lookups.
-
-9. Select **Save**.
-
- :::image type="content" source="./media/howto-configure-server-logs-in-portal/3-save-discard.png" alt-text="Screenshot of slow query log parameters and save.":::
-
-From the **Server Parameters** page, you can return to the list of logs by closing the page.
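-
-The same logging configuration can also be applied with the Azure CLI (a sketch mirroring the steps above; it assumes a server named mydemoserver in resource group myresourcegroup):
-
-```azurecli-interactive
-az mysql server configuration set --name slow_query_log --resource-group myresourcegroup --server mydemoserver --value ON
-az mysql server configuration set --name log_output --resource-group myresourcegroup --server mydemoserver --value FILE
-az mysql server configuration set --name long_query_time --resource-group myresourcegroup --server mydemoserver --value 10
-```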
-
-## View list and download logs
-After logging begins, you can view a list of available slow query logs, and download individual log files.
-
-1. Open the Azure portal.
-
-2. Select your Azure Database for MySQL server.
-
-3. Under the **Monitoring** section in the sidebar, select **Server logs**. The page shows a list of your log files.
-
- :::image type="content" source="./media/howto-configure-server-logs-in-portal/4-server-logs-list.png" alt-text="Screenshot of Server logs page, with list of logs highlighted":::
-
- > [!TIP]
- > The naming convention of the log is **mysql-slow-\<your server name>-yyyymmddhh.log**. The date and time in the file name indicate when the log was issued. Log files are rotated every 24 hours or 7.5 GB, whichever comes first.
-
-4. If needed, use the search box to quickly narrow down to a specific log, based on date and time. The search is on the name of the log.
-
-5. To download individual log files, select the down-arrow icon next to each log file in the table row.
-
- :::image type="content" source="./media/howto-configure-server-logs-in-portal/5-download.png" alt-text="Screenshot of Server logs page, with down-arrow icon highlighted":::
-
-## Set up diagnostic logs
-
-1. Under the **Monitoring** section in the sidebar, select **Diagnostic settings** > **Add diagnostic settings**.
-
- :::image type="content" source="./media/howto-configure-server-logs-in-portal/add-diagnostic-setting.png" alt-text="Screenshot of Diagnostic settings options":::
-
-2. Provide a diagnostic setting name.
-
-3. Specify which data sinks to send the slow query logs to (storage account, event hub, or Log Analytics workspace). A CLI sketch follows these steps.
-
-4. Select **MySqlSlowLogs** as the log type.
-
-5. After you've configured the data sinks to pipe the slow query logs to, select **Save**.
-
-6. Access the slow query logs by exploring them in the data sinks you configured. It can take up to 10 minutes for the logs to appear.
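-
-Diagnostic settings can also be created with the Azure CLI (a sketch under assumed names; substitute your server's resource ID and the ID of a real sink, such as a Log Analytics workspace):
-
-```azurecli-interactive
-az monitor diagnostic-settings create --name mysql-slow-logs \
-    --resource "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.DBforMySQL/servers/mydemoserver" \
-    --workspace "<log-analytics-workspace-resource-id>" \
-    --logs '[{"category": "MySqlSlowLogs", "enabled": true}]'
-```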
-
-## Next steps
-- See [Access slow query logs in CLI](howto-configure-server-logs-in-cli.md) to learn how to download slow query logs programmatically.
-- Learn more about [slow query logs](concepts-server-logs.md) in Azure Database for MySQL.
-- For more information about the parameter definitions and MySQL logging, see the MySQL documentation on [logs](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html).
mysql Howto Configure Server Parameters Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-server-parameters-using-cli.md
- Title: Configure server parameters - Azure CLI - Azure Database for MySQL
-description: This article describes how to configure the service parameters in Azure Database for MySQL using the Azure CLI command line utility.
- Previously updated: 10/1/2020
-# Configure server parameters in Azure Database for MySQL using the Azure CLI
-
-You can list, show, and update configuration parameters for an Azure Database for MySQL server by using Azure CLI, the Azure command-line utility. A subset of engine configurations is exposed at the server-level and can be modified.
-
->[!Note]
-> Server parameters can be updated globally at the server level by using the [Azure CLI](./howto-configure-server-parameters-using-cli.md), [PowerShell](./howto-configure-server-parameters-using-powershell.md), or [Azure portal](./howto-server-parameters.md).
-
-## Prerequisites
-To step through this how-to guide, you need:
-- [An Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-cli.md)
-- The [Azure CLI](/cli/azure/install-azure-cli) command-line utility, or the Azure Cloud Shell in the browser.
-## List server configuration parameters for Azure Database for MySQL server
-To list all modifiable parameters in a server and their values, run the [az mysql server configuration list](/cli/azure/mysql/server/configuration#az-mysql-server-configuration-list) command.
-
-You can list the server configuration parameters for the server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup**.
-```azurecli-interactive
-az mysql server configuration list --resource-group myresourcegroup --server mydemoserver
-```
-For the definition of each of the listed parameters, see the MySQL reference section on [Server System Variables](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html).
-
-## Show server configuration parameter details
-To show details about a particular configuration parameter for a server, run the [az mysql server configuration show](/cli/azure/mysql/server/configuration#az-mysql-server-configuration-show) command.
-
-This example shows details of the **slow\_query\_log** server configuration parameter for server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup**.
-```azurecli-interactive
-az mysql server configuration show --name slow_query_log --resource-group myresourcegroup --server mydemoserver
-```
-## Modify a server configuration parameter value
-You can also modify the value of a certain server configuration parameter, which updates the underlying configuration value for the MySQL server engine. To update the configuration, use the [az mysql server configuration set](/cli/azure/mysql/server/configuration#az-mysql-server-configuration-set) command.
-
-To update the **slow\_query\_log** server configuration parameter of server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup**, run:
-```azurecli-interactive
-az mysql server configuration set --name slow_query_log --resource-group myresourcegroup --server mydemoserver --value ON
-```
-If you want to reset the value of a configuration parameter, omit the optional `--value` parameter, and the service applies the default value. For the example above, it would look like:
-```azurecli-interactive
-az mysql server configuration set --name slow_query_log --resource-group myresourcegroup --server mydemoserver
-```
-This code resets the **slow\_query\_log** configuration to the default value **OFF**.
-
-## Setting parameters not listed
-If the server parameter you want to update is not listed in the Azure portal, you can optionally set the parameter at the connection level using `init_connect`. This sets the server parameters for each client connecting to the server.
-
-Update the **init\_connect** server configuration parameter of server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup** to set values such as character set.
-```azurecli-interactive
-az mysql server configuration set --name init_connect --resource-group myresourcegroup --server mydemoserver --value "SET character_set_client=utf8;SET character_set_database=utf8mb4;SET character_set_connection=latin1;SET character_set_results=latin1;"
-```
-
-## Working with the time zone parameter
-
-### Populating the time zone tables
-
-The time zone tables on your server can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench.
-
-> [!NOTE]
-> If you are running the `mysql.az_load_timezone` command from MySQL Workbench, you may need to turn off safe update mode first using `SET SQL_SAFE_UPDATES=0;`.
-
-```sql
-CALL mysql.az_load_timezone();
-```
-
-> [!IMPORTANT]
-> You should restart the server to ensure the time zone tables are properly populated. To restart the server, use the [Azure portal](howto-restart-server-portal.md) or [CLI](howto-restart-server-cli.md).
-
-To view available time zone values, run the following command:
-
-```sql
-SELECT name FROM mysql.time_zone_name;
-```
-
-### Setting the global level time zone
-
-The global level time zone can be set using the [az mysql server configuration set](/cli/azure/mysql/server/configuration#az-mysql-server-configuration-set) command.
-
-The following command updates the **time\_zone** server configuration parameter of server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup** to **US/Pacific**.
-
-```azurecli-interactive
-az mysql server configuration set --name time_zone --resource-group myresourcegroup --server mydemoserver --value "US/Pacific"
-```
-
-### Setting the session level time zone
-
-The session level time zone can be set by running the `SET time_zone` command from a tool like the MySQL command line or MySQL Workbench. The example below sets the time zone to the **US/Pacific** time zone.
-
-```sql
-SET time_zone = 'US/Pacific';
-```
-
-Refer to the MySQL documentation for [Date and Time Functions](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_convert-tz).
--
-## Next steps
-- How to configure [server parameters in Azure portal](howto-server-parameters.md)
mysql Howto Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-server-parameters-using-powershell.md
- Title: Configure server parameters - Azure PowerShell - Azure Database for MySQL
-description: This article describes how to configure the service parameters in Azure Database for MySQL using PowerShell.
- Previously updated: 10/1/2020
-# Configure server parameters in Azure Database for MySQL using PowerShell
--
-You can list, show, and update configuration parameters for an Azure Database for MySQL server using
-PowerShell. A subset of engine configurations is exposed at the server-level and can be modified.
-
->[!Note]
-> Server parameters can be updated globally at the server level by using the [Azure CLI](./howto-configure-server-parameters-using-cli.md), [PowerShell](./howto-configure-server-parameters-using-powershell.md), or [Azure portal](./howto-server-parameters.md).
-
-## Prerequisites
-
-To complete this how-to guide, you need:
--- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
- [Azure Cloud Shell](https://shell.azure.com/) in the browser
-- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)
-> [!IMPORTANT]
-> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az
-> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`.
-> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az
-> PowerShell module releases and available natively from within Azure Cloud Shell.
-
-If you choose to use PowerShell locally, connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet.
--
-## List server configuration parameters for Azure Database for MySQL server
-
-To list all modifiable parameters in a server and their values, run the `Get-AzMySqlConfiguration`
-cmdlet.
-
-The following example lists the server configuration parameters for the server **mydemoserver** in
-resource group **myresourcegroup**.
-
-```azurepowershell-interactive
-Get-AzMySqlConfiguration -ResourceGroupName myresourcegroup -ServerName mydemoserver
-```
-
-For the definition of each of the listed parameters, see the MySQL reference section on
-[Server System Variables](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html).
-
-## Show server configuration parameter details
-
-To show details about a particular configuration parameter for a server, run the
-`Get-AzMySqlConfiguration` cmdlet and specify the **Name** parameter.
-
-This example shows details of the **slow\_query\_log** server configuration parameter for server
-**mydemoserver** under resource group **myresourcegroup**.
-
-```azurepowershell-interactive
-Get-AzMySqlConfiguration -Name slow_query_log -ResourceGroupName myresourcegroup -ServerName mydemoserver
-```
-
-## Modify a server configuration parameter value
-
-You can also modify the value of a certain server configuration parameter, which updates the
-underlying configuration value for the MySQL server engine. To update the configuration, use the
-`Update-AzMySqlConfiguration` cmdlet.
-
-To update the **slow\_query\_log** server configuration parameter of server
-**mydemoserver** under resource group **myresourcegroup**.
-
-```azurepowershell-interactive
-Update-AzMySqlConfiguration -Name slow_query_log -ResourceGroupName myresourcegroup -ServerName mydemoserver -Value On
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Auto grow storage in Azure Database for MySQL server using PowerShell](howto-auto-grow-storage-powershell.md).
mysql Howto Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-sign-in-azure-ad-authentication.md
- Title: Use Azure Active Directory - Azure Database for MySQL
-description: Learn about how to set up Azure Active Directory (Azure AD) for authentication with Azure Database for MySQL
- Previously updated: 07/23/2020
-# Use Azure Active Directory for authentication with MySQL
--
-This article walks you through how to configure Azure Active Directory access with Azure Database for MySQL and how to connect using an Azure AD token.
-
-> [!IMPORTANT]
-> Azure Active Directory authentication is only available for MySQL 5.7 and newer.
-
-## Setting the Azure AD Admin user
-
-Only an Azure AD Admin user can create/enable users for Azure AD-based authentication. To create an Azure AD Admin user, follow these steps:
-
-1. In the Azure portal, select the instance of Azure Database for MySQL that you want to enable for Azure AD.
-2. Under **Settings**, select **Active Directory Admin**:
-
-![set azure ad administrator][2]
-
-3. Select a valid Azure AD user in the customer tenant to be Azure AD administrator.
-
-> [!IMPORTANT]
-> When setting the administrator, a new user is added to the Azure Database for MySQL server with full administrator permissions.
-
-Only one Azure AD admin can be created per MySQL server; selecting another one overwrites the existing Azure AD admin configured for the server.
-
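-You can also set the administrator from the Azure CLI (a sketch under assumed names; supply the display name and object ID of the Azure AD user you want as admin):
-
-```azurecli-interactive
-az mysql server ad-admin create --resource-group myresourcegroup --server-name mydemoserver \
-    --display-name user@tenant.onmicrosoft.com --object-id <azure-ad-user-object-id>
-```
-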
-After configuring the administrator, you can now sign in:
-
-## Connecting to Azure Database for MySQL using Azure AD
-
-The following high-level diagram summarizes the workflow of using Azure AD authentication with Azure Database for MySQL:
-
-![authentication flow][1]
-
-We've designed the Azure AD integration to work with common MySQL tools like the mysql CLI, which are not Azure AD aware and only support specifying the username and password when connecting to MySQL. We pass the Azure AD token as the password, as shown in the picture above.
-
-We currently have tested the following clients:
-- MySQL Workbench
-- MySQL CLI
-We have also tested most common application drivers, you can see details at the end of this page.
-
-The steps a user or application needs to take to authenticate with Azure AD are described below:
-
-### Prerequisites
-
-You can follow along in Azure Cloud Shell, an Azure VM, or on your local machine. Make sure you have the [Azure CLI installed](/cli/azure/install-azure-cli).
-
-### Step 1: Authenticate with Azure AD
-
-Start by authenticating with Azure AD using the Azure CLI tool. This step is not required in Azure Cloud Shell.
-
-```
-az login
-```
-
-The command launches a browser window to the Azure AD authentication page. It requires you to enter your Azure AD user ID and password.
-
-### Step 2: Retrieve Azure AD access token
-
-Invoke the Azure CLI tool to acquire an access token for the Azure AD authenticated user from step 1 to access Azure Database for MySQL.
-
-Example (for Public Cloud):
-
-```azurecli-interactive
-az account get-access-token --resource https://ossrdbms-aad.database.windows.net
-```
-The above resource value must be specified exactly as shown. For other clouds, the resource value can be looked up using:
-
-```azurecli-interactive
-az cloud show
-```
-
-For Azure CLI version 2.0.71 and later, the command can be specified in the following, more convenient form that works for all clouds:
-
-```azurecli-interactive
-az account get-access-token --resource-type oss-rdbms
-```
-Using PowerShell, you can use the following command to acquire an access token:
-
-```azurepowershell-interactive
-$accessToken = Get-AzAccessToken -ResourceUrl https://ossrdbms-aad.database.windows.net
-$accessToken.Token | out-file C:\temp\MySQLAccessToken.txt
-```
--
-After authentication is successful, Azure AD will return an access token:
-
-```json
-{
- "accessToken": "TOKEN",
- "expiresOn": "...",
- "subscription": "...",
- "tenant": "...",
- "tokenType": "Bearer"
-}
-```
-
-The token is a Base64 string that encodes all the information about the authenticated user and is targeted to the Azure Database for MySQL service.
-
-The access token is valid for anywhere between ***5 and 60 minutes***. We recommend you get the access token just before initiating the login to Azure Database for MySQL. You can use the following PowerShell command to see the token validity.
-
-```azurepowershell-interactive
-$accessToken.ExpiresOn.DateTime
-```
-
-### Step 3: Use token as password for logging in with MySQL
-
-When connecting, you need to use the access token as the MySQL user password. When using GUI clients such as MySQL Workbench, you can use the method described above to retrieve the token.
-
-#### Using MySQL CLI
-When using the CLI, you can use this shorthand to connect:
-
-**Example (Linux/macOS):**
-```
-mysql -h mydb.mysql.database.azure.com \
- --user user@tenant.onmicrosoft.com@mydb \
- --enable-cleartext-plugin \
- --password=`az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken`
-```
-#### Using MySQL Workbench
-* Launch MySQL Workbench, select the **Database** option, and then select **Connect to database**.
-* In the hostname field, enter the MySQL FQDN, for example, mydb.mysql.database.azure.com.
-* In the username field, enter the MySQL Azure Active Directory administrator name appended with the MySQL server name (not the FQDN), for example, user@tenant.onmicrosoft.com@mydb.
-* In the password field, select **Store in Vault** and paste in the access token from the file, for example, C:\temp\MySQLAccessToken.txt.
-* Select the **Advanced** tab and make sure that you check **Enable Cleartext Authentication Plugin**.
-* Select **OK** to connect to the database.
-
-#### Important considerations when connecting:
-
-* `user@tenant.onmicrosoft.com` is the name of the Azure AD user or group you are trying to connect as
-* Always append the server name after the Azure AD user/group name (e.g. `@mydb`)
-* Make sure to use the exact way the Azure AD user or group name is spelled
-* Azure AD user and group names are case sensitive
-* When connecting as a group, use only the group name (e.g. `GroupName@mydb`)
-* If the name contains spaces, use `\` before each space to escape it
-
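-For example, a member of a group named Prod_DB_Readonly (the group created later in this article) on server mydb would connect like this (a sketch reusing the token shorthand above):
-
-```
-mysql -h mydb.mysql.database.azure.com \
-    --user Prod_DB_Readonly@mydb \
-    --enable-cleartext-plugin \
-    --password=`az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken`
-```
-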
-Note the "enable-cleartext-plugin" setting; you need to use a similar configuration with other clients to make sure the token gets sent to the server without being hashed.
-
-You are now authenticated to your MySQL server using Azure AD authentication.
-
-## Creating Azure AD users in Azure Database for MySQL
-
-To add an Azure AD user to your Azure Database for MySQL database, perform the following steps after connecting as the Azure AD Admin user (see the earlier sections on how to connect):
-
-1. First ensure that the Azure AD user `<user>@yourtenant.onmicrosoft.com` is a valid user in Azure AD tenant.
-2. Sign in to your Azure Database for MySQL instance as the Azure AD Admin user.
-3. Create user `<user>@yourtenant.onmicrosoft.com` in Azure Database for MySQL.
-
-**Example:**
-
-```sql
-CREATE AADUSER 'user1@yourtenant.onmicrosoft.com';
-```
-
-For user names that exceed 32 characters, it is recommended that you create an alias to use when connecting:
-
-Example:
-
-```sql
-CREATE AADUSER 'userWithLongName@yourtenant.onmicrosoft.com' as 'userDefinedShortName';
-```
-> [!NOTE]
-> 1. MySQL ignores leading and trailing spaces, so the user name should not have any leading or trailing spaces.
-> 2. Authenticating a user through Azure AD does not give the user any permissions to access objects within the Azure Database for MySQL database. You must grant the user the required permissions manually.
-
-## Creating Azure AD groups in Azure Database for MySQL
-
-To enable an Azure AD group for access to your database, use the same mechanism as for users, but instead specify the group name:
-
-**Example:**
-
-```sql
-CREATE AADUSER 'Prod_DB_Readonly';
-```
-
-When logging in, members of the group will use their personal access tokens, but sign in with the group name specified as the username.
-
-## Token Validation
-
-Azure AD authentication in Azure Database for MySQL ensures that the user exists in the MySQL server, and it checks the validity of the token by validating the contents of the token. The following token validation steps are performed:
-- Token is signed by Azure AD and has not been tampered with
-- Token was issued by Azure AD for the tenant associated with the server
-- Token has not expired
-- Token is for the Azure Database for MySQL resource (and not another Azure resource)
-## Compatibility with application drivers
-
-Most drivers are supported; however, make sure to use the settings for sending the password in clear text, so the token gets sent without modification.
-
-* C/C++
- * libmysqlclient: Supported
- * mysql-connector-c++: Supported
-* Java
- * Connector/J (mysql-connector-java): Supported, must utilize `useSSL` setting
-* Python
- * Connector/Python: Supported
-* Ruby
- * mysql2: Supported
-* .NET
- * mysql-connector-net: Supported, need to add plugin for mysql_clear_password
- * mysql-net/MySqlConnector: Supported
-* Node.js
- * mysqljs: Not supported (does not send token in cleartext without patch)
- * node-mysql2: Supported
-* Perl
- * DBD::mysql: Supported
- * Net::MySQL: Not supported
-* Go
- * go-sql-driver: Supported, add `?tls=true&allowCleartextPasswords=true` to connection string
-
-## Next steps
-
-* Review the overall concepts for [Azure Active Directory authentication with Azure Database for MySQL](concepts-azure-ad-authentication.md)
-
-<!--Image references-->
-
-[1]: ./media/concepts-azure-ad-authentication/authentication-flow.png
-[2]: ./media/concepts-azure-ad-authentication/set-azure-ad-admin.png
mysql Howto Configure Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-ssl.md
- Title: Configure SSL - Azure Database for MySQL
-description: Instructions for how to properly configure Azure Database for MySQL and associated applications to correctly use SSL connections
- Previously updated: 07/08/2020
-# Configure SSL connectivity in your application to securely connect to Azure Database for MySQL
--
-Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against "man in the middle" attacks by encrypting the data stream between the server and your application.
-
-## Step 1: Obtain SSL certificate
-
-Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) and save the certificate file to your local drive (this tutorial uses c:\ssl, for example).
-
-**For Microsoft Internet Explorer and Microsoft Edge:** After the download has completed, rename the certificate to BaltimoreCyberTrustRoot.crt.pem.
-
-See the following links for certificates for servers in sovereign clouds: [Azure Government](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem), [Azure China](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt).
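-
-To optionally inspect the downloaded certificate's subject and validity dates, you can use openssl (a sketch, assuming the c:\ssl location above):
-
-```bash
-openssl x509 -in c:\ssl\BaltimoreCyberTrustRoot.crt.pem -noout -subject -dates
-```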
-
-## Step 2: Bind SSL
-
-For specific programming language connection strings, please refer to the [sample code](howto-configure-ssl.md#sample-code) below.
-
-### Connecting to server using MySQL Workbench over SSL
-
-Configure MySQL Workbench to connect securely over SSL.
-
-1. From the Setup New Connection dialogue, navigate to the **SSL** tab.
-
-1. Update the **Use SSL** field to "Require".
-
-1. In the **SSL CA File:** field, enter the file location of the **BaltimoreCyberTrustRoot.crt.pem**.
-
- :::image type="content" source="./media/howto-configure-ssl/mysql-workbench-ssl.png" alt-text="Save SSL configuration":::
-
-For existing connections, you can bind SSL by right-clicking the connection icon and choosing edit. Then navigate to the **SSL** tab and bind the cert file.
-
-### Connecting to server using the MySQL CLI over SSL
-
-Another way to bind the SSL certificate is to use the MySQL command-line interface by executing the following commands.
-
-```bash
-mysql.exe -h mydemoserver.mysql.database.azure.com -u Username@mydemoserver -p --ssl-mode=REQUIRED --ssl-ca=c:\ssl\BaltimoreCyberTrustRoot.crt.pem
-```
-
-> [!NOTE]
-> When using the MySQL command-line interface on Windows, you may receive an error `SSL connection error: Certificate signature check failed`. If this occurs, replace the `--ssl-mode=REQUIRED --ssl-ca={filepath}` parameters with `--ssl`.
-
-## Step 3: Enforcing SSL connections in Azure
-
-### Using the Azure portal
-
-Using the Azure portal, visit your Azure Database for MySQL server, and then select **Connection security**. Use the toggle button to enable or disable the **Enforce SSL connection** setting, and then select **Save**. Microsoft recommends always enabling the **Enforce SSL connection** setting for enhanced security.
--
-### Using Azure CLI
-
-You can enable or disable the **ssl-enforcement** parameter by using the `Enabled` or `Disabled` value, respectively, in the Azure CLI.
-
-```azurecli-interactive
-az mysql server update --resource-group myresource --name mydemoserver --ssl-enforcement Enabled
-```
-
-## Step 4: Verify the SSL connection
-
-Execute the mysql **status** command to verify that you have connected to your MySQL server using SSL:
-
-```dos
-mysql> status
-```
-
-Confirm the connection is encrypted by reviewing the output, which should show: **SSL: Cipher in use is AES256-SHA**
-
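-You can also script this check from a shell (a sketch reusing the connection flags from Step 2; `Ssl_cipher` comes back empty when the connection is not encrypted):
-
-```bash
-mysql -h mydemoserver.mysql.database.azure.com -u Username@mydemoserver -p --ssl-mode=REQUIRED --ssl-ca=c:\ssl\BaltimoreCyberTrustRoot.crt.pem -e "SHOW STATUS LIKE 'Ssl_cipher';"
-```
-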
-## Sample code
-
-To establish a secure connection to Azure Database for MySQL over SSL from your application, refer to the following code samples:
-
-Refer to the list of [compatible drivers](concepts-compatibility.md) supported by the Azure Database for MySQL service.
-
-### PHP
-
-```php
-$conn = mysqli_init();
-mysqli_ssl_set($conn,NULL,NULL, "/var/www/html/BaltimoreCyberTrustRoot.crt.pem", NULL, NULL);
-mysqli_real_connect($conn, 'mydemoserver.mysql.database.azure.com', 'myadmin@mydemoserver', 'yourpassword', 'quickstartdb', 3306, MYSQLI_CLIENT_SSL);
-if (mysqli_connect_errno()) {
-die('Failed to connect to MySQL: '.mysqli_connect_error());
-}
-```
-
-### PHP (Using PDO)
-
-```php
-$options = array(
- PDO::MYSQL_ATTR_SSL_CA => '/var/www/html/BaltimoreCyberTrustRoot.crt.pem'
-);
-$db = new PDO('mysql:host=mydemoserver.mysql.database.azure.com;port=3306;dbname=databasename', 'username@mydemoserver', 'yourpassword', $options);
-```
-
-### Python (MySQLConnector Python)
-
-```python
-import mysql.connector
-
-try:
- conn = mysql.connector.connect(user='myadmin@mydemoserver',
- password='yourpassword',
- database='quickstartdb',
- host='mydemoserver.mysql.database.azure.com',
- ssl_ca='/var/www/html/BaltimoreCyberTrustRoot.crt.pem')
-except mysql.connector.Error as err:
- print(err)
-```
-
-### Python (PyMySQL)
-
-```python
-import pymysql
-
-conn = pymysql.connect(user='myadmin@mydemoserver',
- password='yourpassword',
- database='quickstartdb',
- host='mydemoserver.mysql.database.azure.com',
- ssl={'ca': '/var/www/html/BaltimoreCyberTrustRoot.crt.pem'})
-```
-
-### Django (PyMySQL)
-
-```python
-DATABASES = {
- 'default': {
- 'ENGINE': 'django.db.backends.mysql',
- 'NAME': 'quickstartdb',
- 'USER': 'myadmin@mydemoserver',
- 'PASSWORD': 'yourpassword',
- 'HOST': 'mydemoserver.mysql.database.azure.com',
- 'PORT': '3306',
- 'OPTIONS': {
- 'ssl': {'ca': '/var/www/html/BaltimoreCyberTrustRoot.crt.pem'}
- }
- }
-}
-```
-
-### Ruby
-
-```ruby
-require 'mysql2'
-
-client = Mysql2::Client.new(
- :host => 'mydemoserver.mysql.database.azure.com',
- :username => 'myadmin@mydemoserver',
- :password => 'yourpassword',
- :database => 'quickstartdb',
- :sslca => '/var/www/html/BaltimoreCyberTrustRoot.crt.pem'
- )
-```
-
-### Golang
-
-```go
-rootCertPool := x509.NewCertPool()
-pem, _ := ioutil.ReadFile("/var/www/html/BaltimoreCyberTrustRoot.crt.pem")
-if ok := rootCertPool.AppendCertsFromPEM(pem); !ok {
- log.Fatal("Failed to append PEM.")
-}
-mysql.RegisterTLSConfig("custom", &tls.Config{RootCAs: rootCertPool})
-var connectionString string
-connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true&tls=custom", "myadmin@mydemoserver", "yourpassword", "mydemoserver.mysql.database.azure.com", "quickstartdb")
-db, _ := sql.Open("mysql", connectionString)
-```
-
-### Java (MySQL Connector for Java)
-
-```java
-// Generate the truststore and keystore in code.
-
-String importCert = " -import "+
- " -alias mysqlServerCACert "+
- " -file " + ssl_ca +
- " -keystore truststore "+
- " -trustcacerts " +
- " -storepass password -noprompt ";
-String genKey = " -genkey -keyalg rsa " +
- " -alias mysqlClientCertificate -keystore keystore " +
- " -storepass password123 -keypass password " +
- " -dname CN=MS ";
-sun.security.tools.keytool.Main.main(importCert.trim().split("\\s+"));
-sun.security.tools.keytool.Main.main(genKey.trim().split("\\s+"));
-
-// Use the generated keystore and truststore.
-
-System.setProperty("javax.net.ssl.keyStore","path_to_keystore_file");
-System.setProperty("javax.net.ssl.keyStorePassword","password");
-System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file");
-System.setProperty("javax.net.ssl.trustStorePassword","password");
-
-url = String.format("jdbc:mysql://%s/%s?serverTimezone=UTC&useSSL=true", 'mydemoserver.mysql.database.azure.com', 'quickstartdb');
-properties.setProperty("user", 'myadmin@mydemoserver');
-properties.setProperty("password", 'yourpassword');
-conn = DriverManager.getConnection(url, properties);
-```
-
-### Java (MariaDB Connector for Java)
-
-```java
-// Generate the truststore and keystore in code.
-
-String importCert = " -import "+
- " -alias mysqlServerCACert "+
- " -file " + ssl_ca +
- " -keystore truststore "+
- " -trustcacerts " +
- " -storepass password -noprompt ";
-String genKey = " -genkey -keyalg rsa " +
- " -alias mysqlClientCertificate -keystore keystore " +
- " -storepass password123 -keypass password " +
- " -dname CN=MS ";
-sun.security.tools.keytool.Main.main(importCert.trim().split("\\s+"));
-sun.security.tools.keytool.Main.main(genKey.trim().split("\\s+"));
-
-// Use the generated keystore and truststore.
-
-System.setProperty("javax.net.ssl.keyStore","path_to_keystore_file");
-System.setProperty("javax.net.ssl.keyStorePassword","password");
-System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file");
-System.setProperty("javax.net.ssl.trustStorePassword","password");
-
-url = String.format("jdbc:mariadb://%s/%s?useSSL=true&trustServerCertificate=true", 'mydemoserver.mysql.database.azure.com', 'quickstartdb');
-properties.setProperty("user", 'myadmin@mydemoserver');
-properties.setProperty("password", 'yourpassword');
-conn = DriverManager.getConnection(url, properties);
-```
-
-### .NET (MySqlConnector)
-
-```csharp
-var builder = new MySqlConnectionStringBuilder
-{
- Server = "mydemoserver.mysql.database.azure.com",
- UserID = "myadmin@mydemoserver",
- Password = "yourpassword",
- Database = "quickstartdb",
- SslMode = MySqlSslMode.VerifyCA,
- SslCa = "BaltimoreCyberTrustRoot.crt.pem",
-};
-using (var connection = new MySqlConnection(builder.ConnectionString))
-{
- connection.Open();
-}
-```
-
-### Node.js
-
-```node
-var fs = require('fs');
-var mysql = require('mysql');
-const serverCa = [fs.readFileSync("/var/www/html/BaltimoreCyberTrustRoot.crt.pem", "utf8")];
-var conn=mysql.createConnection({
- host:"mydemoserver.mysql.database.azure.com",
- user:"myadmin@mydemoserver",
- password:"yourpassword",
- database:"quickstartdb",
- port:3306,
- ssl: {
- rejectUnauthorized: true,
- ca: serverCa
- }
-});
-conn.connect(function(err) {
- if (err) throw err;
-});
-```
-
-## Next steps
-
-* To learn about certificate expiry and rotation, refer [certificate rotation documentation](concepts-certificate-rotation.md)
-* Review various application connectivity options following [Connection libraries for Azure Database for MySQL](concepts-connection-libraries.md)
mysql Howto Connect Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-connect-webapp.md
- Title: Connect to Azure App Service - Azure Database for MySQL
-description: Instructions for how to properly connect an existing Azure App Service to Azure Database for MySQL
- Previously updated: 3/18/2020
-# Connect an existing Azure App Service to Azure Database for MySQL server
-
-This topic explains how to connect an existing Azure App Service to your Azure Database for MySQL server.
-
-## Before you begin
-Sign in to the [Azure portal](https://portal.azure.com). Create an Azure Database for MySQL server. For details, refer to [How to create Azure Database for MySQL server from Portal](quickstart-create-mysql-server-database-using-azure-portal.md) or [How to create Azure Database for MySQL server using CLI](quickstart-create-mysql-server-database-using-azure-cli.md).
-
-Currently there are two solutions to enable access from an Azure App Service to an Azure Database for MySQL. Both solutions involve setting up server-level firewall rules.
-
-## Solution 1 - Allow Azure services
-Azure Database for MySQL provides access security using a firewall to protect your data. When connecting from an Azure App Service to Azure Database for MySQL server, keep in mind that the outbound IPs of App Service are dynamic in nature. Choosing the "Allow access to Azure services" option will allow the app service to connect to the MySQL server.
-
-1. On the MySQL server blade, under the Settings heading, click **Connection Security** to open the Connection Security blade for Azure Database for MySQL.
-
- :::image type="content" source="./media/howto-connect-webapp/1-connection-security.png" alt-text="Azure portal - click Connection Security":::
-
-2. Select **ON** in **Allow access to Azure services**, then **Save**.
- :::image type="content" source="./media/howto-connect-webapp/allow-azure.png" alt-text="Azure portal - Allow Azure access":::
-
-## Solution 2 - Create a firewall rule to explicitly allow outbound IPs
-You can explicitly add all the outbound IPs of your Azure App Service.
-
-1. On the App Service Properties blade, view your **OUTBOUND IP ADDRESS**.
-
- :::image type="content" source="./media/howto-connect-webapp/2_1-outbound-ip-address.png" alt-text="Azure portal - View outbound IPs":::
-
-2. On the MySQL Connection security blade, add outbound IPs one by one.
-
- :::image type="content" source="./media/howto-connect-webapp/2_2-add-explicit-ips.png" alt-text="Azure portal - Add explicit IPs":::
-
-3. Remember to **Save** your firewall rules.
-
-Though Azure App Service attempts to keep IP addresses constant over time, there are cases where the IP addresses may change. For example, this can occur when the app recycles, a scale operation occurs, or new computers are added in Azure regional data centers to increase capacity. When the IP addresses change, the app could experience downtime if it can no longer connect to the MySQL server. Keep this consideration in mind when choosing one of the preceding solutions.
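-
-You can list both the current and possible outbound IP addresses with the Azure CLI (a sketch; replace `<app-name>` with your App Service name):
-
-```azurecli-interactive
-az webapp show --resource-group myResourceGroup --name <app-name> --query "[outboundIpAddresses, possibleOutboundIpAddresses]" --output tsv
-```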
-
-## SSL configuration
-Azure Database for MySQL has SSL enabled by default. If your application is not using SSL to connect to the database, then you need to disable SSL on the MySQL server. For details on how to configure SSL, see [Using SSL with Azure Database for MySQL](howto-configure-ssl.md).
-
-### Django (PyMySQL)
-```python
-DATABASES = {
- 'default': {
- 'ENGINE': 'django.db.backends.mysql',
- 'NAME': 'quickstartdb',
- 'USER': 'myadmin@mydemoserver',
- 'PASSWORD': 'yourpassword',
- 'HOST': 'mydemoserver.mysql.database.azure.com',
- 'PORT': '3306',
- 'OPTIONS': {
-            'ssl': {'ca': '/var/www/html/BaltimoreCyberTrustRoot.crt.pem'}
- }
- }
-}
-```
-
-## Next steps
-For more information about connection strings, refer to [Connection Strings](howto-connection-string.md).
mysql Howto Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-connect-with-managed-identity.md
- Title: Connect with Managed Identity - Azure Database for MySQL
-description: Learn about how to connect and authenticate using Managed Identity for authentication with Azure Database for MySQL
- Previously updated: 05/19/2020
-# Connect with Managed Identity to Azure Database for MySQL
--
-This article shows you how to use a user-assigned identity for an Azure Virtual Machine (VM) to access an Azure Database for MySQL server. Managed Service Identities are automatically managed by Azure and enable you to authenticate to services that support Azure AD authentication, without needing to insert credentials into your code.
-
-You learn how to:
-- Grant your VM access to an Azure Database for MySQL server
-- Create a user in the database that represents the VM's user-assigned identity
-- Get an access token using the VM identity and use it to query an Azure Database for MySQL server
-- Implement the token retrieval in a C# example application
-> [!IMPORTANT]
-> Connecting with Managed Identity is only available for MySQL 5.7 and newer.
-
-## Prerequisites
-- If you're not familiar with the managed identities for Azure resources feature, see this [overview](../../articles/active-directory/managed-identities-azure-resources/overview.md). If you don't have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue.
-- To do the required resource creation and role management, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../articles/role-based-access-control/role-assignments-portal.md).
-- You need an Azure VM (for example, running Ubuntu Linux) that you'd like to use to access your database using Managed Identity.
-- You need an Azure Database for MySQL database server that has [Azure AD authentication](howto-configure-sign-in-azure-ad-authentication.md) configured.
-- To follow the C# example, first complete the guide [Connect using C#](connect-csharp.md).
-## Creating a user-assigned managed identity for your VM
-
-Create an identity in your subscription using the [az identity create](/cli/azure/identity#az-identity-create) command. You can use the same resource group that your virtual machine runs in, or a different one.
-
-```azurecli-interactive
-az identity create --resource-group myResourceGroup --name myManagedIdentity
-```
-
-To configure the identity in the following steps, use the [az identity show](/cli/azure/identity#az-identity-show) command to store the identity's resource ID and client ID in variables.
-
-```azurecli
-# Get resource ID of the user-assigned identity
-
-resourceID=$(az identity show --resource-group myResourceGroup --name myManagedIdentity --query id --output tsv)
-
-# Get client ID of the user-assigned identity
-
-clientID=$(az identity show --resource-group myResourceGroup --name myManagedIdentity --query clientId --output tsv)
-```
-
-We can now assign the user-assigned identity to the VM with the [az vm identity assign](/cli/azure/vm/identity#az-vm-identity-assign) command:
-
-```azurecli
-az vm identity assign --resource-group myResourceGroup --name myVM --identities $resourceID
-```
-
-To finish setup, show the value of the Client ID, which you'll need in the next few steps:
-
-```bash
-echo $clientID
-```
-
-## Creating a MySQL user for your Managed Identity
-
-Now, connect as the Azure AD administrator user to your MySQL database, and run the following SQL statements:
-
-```sql
-SET aad_auth_validate_oids_in_tenant = OFF;
-CREATE AADUSER 'myuser' IDENTIFIED BY 'CLIENT_ID';
-```
-
-The managed identity now has access when authenticating with the username `myuser` (replace with a name of your choice).
-
-## Retrieving the access token from Azure Instance Metadata service
-
-Your application can now retrieve an access token from the Azure Instance Metadata service and use it for authenticating with the database.
-
-This token retrieval is done by making an HTTP request to `http://169.254.169.254/metadata/identity/oauth2/token` and passing the following parameters:
-- `api-version` = `2018-02-01`
-- `resource` = `https://ossrdbms-aad.database.windows.net`
-- `client_id` = `CLIENT_ID` (that you retrieved earlier)
-You'll get back a JSON result that contains an `access_token` field; this long text value is the Managed Identity access token that you should use as the password when connecting to the database.
-
-For testing purposes, you can run the following commands in your shell. Note you need `curl`, `jq`, and the `mysql` client installed.
-
-```bash
-# Retrieve the access token
-
-accessToken=$(curl -s 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=CLIENT_ID' -H Metadata:true | jq -r .access_token)
-
-# Connect to the database
-
-mysql -h SERVER --user USER@SERVER --enable-cleartext-plugin --password=$accessToken
-```
-
-You are now connected to the database you've configured earlier.
-
-## Connecting using Managed Identity in C#
-
-This section shows how to get an access token using the VM's user-assigned managed identity and use it to call Azure Database for MySQL. Azure Database for MySQL natively supports Azure AD authentication, so it can directly accept access tokens obtained using managed identities for Azure resources. When creating a connection to MySQL, you pass the access token in the password field.
-
-Here's a .NET code example of opening a connection to MySQL using an access token. This code must run on the VM to access the VM's user-assigned managed identity's endpoint. .NET Framework 4.6 or higher or .NET Core 2.2 or higher is required to use the access token method. Replace the values of HOST, USER, DATABASE, and CLIENT_ID.
-
-```csharp
-using System;
-using System.Net;
-using System.IO;
-using System.Collections;
-using System.Collections.Generic;
-using System.Text.Json;
-using System.Text.Json.Serialization;
-using System.Threading.Tasks;
-using MySql.Data.MySqlClient;
-
-namespace Driver
-{
- class Script
- {
- // Obtain connection string information from the portal
- //
- private static string Host = "HOST";
- private static string User = "USER";
- private static string Database = "DATABASE";
- private static string ClientId = "CLIENT_ID";
-
- static async Task Main(string[] args)
- {
- //
- // Get an access token for MySQL.
- //
- Console.Out.WriteLine("Getting access token from Azure Instance Metadata service...");
-
- // Azure AD resource ID for Azure Database for MySQL is https://ossrdbms-aad.database.windows.net/
- HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=" + ClientId);
- request.Headers["Metadata"] = "true";
- request.Method = "GET";
- string accessToken = null;
-
- try
- {
- // Call managed identities for Azure resources endpoint.
- HttpWebResponse response = (HttpWebResponse)request.GetResponse();
-
- // Pipe response Stream to a StreamReader and extract access token.
- StreamReader streamResponse = new StreamReader(response.GetResponseStream());
- string stringResponse = streamResponse.ReadToEnd();
- var list = JsonSerializer.Deserialize<Dictionary<string, string>>(stringResponse);
- accessToken = list["access_token"];
- }
- catch (Exception e)
- {
- Console.Out.WriteLine("{0} \n\n{1}", e.Message, e.InnerException != null ? e.InnerException.Message : "Acquire token failed");
- System.Environment.Exit(1);
- }
-
- //
- // Open a connection to the MySQL server using the access token.
- //
- var builder = new MySqlConnectionStringBuilder
- {
- Server = Host,
- Database = Database,
- UserID = User,
- Password = accessToken,
- SslMode = MySqlSslMode.Required,
- };
-
- using (var conn = new MySqlConnection(builder.ConnectionString))
- {
- Console.Out.WriteLine("Opening connection using access token...");
- await conn.OpenAsync();
-
- using (var command = conn.CreateCommand())
- {
- command.CommandText = "SELECT VERSION()";
-
- using (var reader = await command.ExecuteReaderAsync())
- {
- while (await reader.ReadAsync())
- {
- Console.WriteLine("\nConnected!\n\nMySQL version: {0}", reader.GetString(0));
- }
- }
- }
- }
- }
- }
-}
-```
-
-When run, the application produces output like the following:
-
-```
-Getting access token from Azure Instance Metadata service...
-Opening connection using access token...
-
-Connected!
-
-MySQL version: 5.7.27
-```
-
-## Next steps
-
-- Review the overall concepts for [Azure Active Directory authentication with Azure Database for MySQL](concepts-azure-ad-authentication.md)
mysql Howto Connection String Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-connection-string-powershell.md
- Title: Generate a connection string with PowerShell - Azure Database for MySQL
-description: This article provides an Azure PowerShell example to generate a connection string for connecting to Azure Database for MySQL.
-
-# How to generate an Azure Database for MySQL connection string with PowerShell
-
-This article demonstrates how to generate a connection string for an Azure Database for MySQL
-server. You can use a connection string to connect to an Azure Database for MySQL from many
-different applications.
-
-## Requirements
-
-This article uses the resources created in the following guide as a starting point:
-
-* [Quickstart: Create an Azure Database for MySQL server using PowerShell](quickstart-create-mysql-server-database-using-azure-powershell.md)
-
-## Get the connection string
-
-The `Get-AzMySqlConnectionString` cmdlet is used to generate a connection string for connecting
-applications to Azure Database for MySQL. The following example returns the connection string for a
-PHP client from **mydemoserver**.
-
-```azurepowershell-interactive
-Get-AzMySqlConnectionString -Client PHP -Name mydemoserver -ResourceGroupName myresourcegroup
-```
-
-```Output
-$con=mysqli_init();mysqli_ssl_set($con, NULL, NULL, {ca-cert filename}, NULL, NULL); mysqli_real_connect($con, "mydemoserver.mysql.database.azure.com", "myadmin@mydemoserver", {your_password}, {your_database}, 3306);
-```
-
-Valid values for the `Client` parameter include:
-
-* ADO&#46;NET
-* JDBC
-* Node.js
-* PHP
-* Python
-* Ruby
-* WebApp
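-
-The same cmdlet works for each client in this list. For example, the following variation on the PHP example above returns a JDBC connection string for **mydemoserver**:
-
-```azurepowershell-interactive
-Get-AzMySqlConnectionString -Client JDBC -Name mydemoserver -ResourceGroupName myresourcegroup
-```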
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Customize Azure Database for MySQL server parameters using PowerShell](howto-configure-server-parameters-using-powershell.md)
mysql Howto Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-connection-string.md
- Title: Connection strings - Azure Database for MySQL
-description: This document lists the currently supported connection strings for applications to connect with Azure Database for MySQL, including ADO.NET (C#), JDBC, Node.js, ODBC, PHP, Python, and Ruby.
-
-# How to connect applications to Azure Database for MySQL
-
-This topic lists the connection string types that are supported by Azure Database for MySQL, together with templates and examples. You might have different parameters and settings in your connection string.
-
-- To obtain the certificate, see [How to configure SSL](./howto-configure-ssl.md).
-- {your_host} = \<servername>.mysql.database.azure.com
-- {your_user}@{servername} = the user ID format required for authentication. If you use only the user ID, authentication will fail.
-
-## ADO.NET
-```ado.net
-Server={your_host};Port={your_port};Database={your_database};Uid={username@servername};Pwd={your_password};[SslMode=Required;]
-```
-
-In this example, the server name is `mydemoserver`, the database name is `wpdb`, the user name is `WPAdmin`, and the password is `mypassword!2`. As a result, the connection string should be:
-
-```ado.net
-Server= "mydemoserver.mysql.database.azure.com"; Port=3306; Database= "wpdb"; Uid= "WPAdmin@mydemoserver"; Pwd="mypassword!2"; SslMode=Required;
-```
-
-## JDBC
-```jdbc
-String url = String.format("jdbc:mysql://%s:%s/%s[?verifyServerCertificate=true&useSSL=true&requireSSL=true]", {your_host}, {your_port}, {your_database});
-Connection myDbConn = DriverManager.getConnection(url, {username@servername}, {your_password});
-```
-
-## Node.js
-```node.js
-var conn = mysql.createConnection({host: {your_host}, user: {username@servername}, password: {your_password}, database: {your_database}, port: {your_port}[, ssl: {ca: fs.readFileSync({ca-cert filename})}]});
-```
-
-## ODBC
-```odbc
-DRIVER={MySQL ODBC 5.3 UNICODE Driver};Server={your_host};Port={your_port};Database={your_database};Uid={username@servername};Pwd={your_password}; [sslca={ca-cert filename}; sslverify=1; Option=3;]
-```
-
-## PHP
-```php
-$con=mysqli_init(); [mysqli_ssl_set($con, NULL, NULL, {ca-cert filename}, NULL, NULL);] mysqli_real_connect($con, {your_host}, {username@servername}, {your_password}, {your_database}, {your_port});
-```
-
-## Python
-```python
-cnx = mysql.connector.connect(user={username@servername}, password={your_password}, host={your_host}, port={your_port}, database={your_database}[, ssl_ca={ca-cert filename}, ssl_verify_cert=true])
-```
-
-## Ruby
-```ruby
-client = Mysql2::Client.new(username: {username@servername}, password: {your_password}, database: {your_database}, host: {your_host}, port: {your_port}[, sslca:{ca-cert filename}, sslverify:false, sslcipher:'AES256-SHA'])
-```
-
-## Get the connection string details from the Azure portal
-In the [Azure portal](https://portal.azure.com), go to your Azure Database for MySQL server, and then click **Connection strings** to get the string list for your instance:
-
-The string provides details such as the driver, server, and other database connection parameters. Modify these examples to use your own parameters, such as database name, password, and so on. You can then use this string to connect to the server from your code and applications.
-
-## Next steps
-- For more information about connection libraries, see [Concepts - Connection libraries](./concepts-connection-libraries.md).
mysql Howto Create Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-create-manage-server-portal.md
- Title: Manage server - Azure portal - Azure Database for MySQL
-description: Learn how to manage an Azure Database for MySQL server from the Azure portal.
-
-# Manage an Azure Database for MySQL server using the Azure portal
-
-This article shows you how to manage your Azure Database for MySQL servers. Management tasks include compute and storage scaling, admin password reset, and viewing server details.
-
-> [!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
->
-
-## Sign in
-
-Sign in to the [Azure portal](https://portal.azure.com).
-
-## Create a server
-
-Visit the [quickstart](quickstart-create-mysql-server-database-using-azure-portal.md) to learn how to create and get started with an Azure Database for MySQL server.
-
-## Scale compute and storage
-
-After server creation, you can scale between the General Purpose and Memory Optimized tiers as your needs change. You can also scale compute and memory by increasing or decreasing vCores. Storage can be scaled up (however, you can't scale storage down).
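-
-If you prefer scripting, the same compute scaling can be sketched with the Azure CLI; this is a minimal sketch that assumes the placeholder names **mydemoserver** and **myresourcegroup**, with `GP_Gen5_8` standing for the target tier, hardware generation, and vCore count:
-
-```azurecli-interactive
-# Scale the server to General Purpose, Gen 5, 8 vCores (placeholder names)
-az mysql server update --resource-group myresourcegroup --name mydemoserver --sku-name GP_Gen5_8
-```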
-
-### Scale between General Purpose and Memory Optimized tiers
-
-You can scale from General Purpose to Memory Optimized and vice-versa. Changing to and from the Basic tier after server creation is not supported.
-
-1. Select your server in the Azure portal. Select **Pricing tier**, located in the **Settings** section.
-
-2. Select **General Purpose** or **Memory Optimized**, depending on what you are scaling to.
-
- :::image type="content" source="./media/howto-create-manage-server-portal/change-pricing-tier.png" alt-text="Screenshot of Azure portal to choose Basic, General Purpose, or Memory Optimized tier in Azure Database for MySQL":::
-
- > [!NOTE]
- > Changing tiers causes a server restart.
-
-3. Select **OK** to save changes.
-
-### Scale vCores up or down
-
-1. Select your server in the Azure portal. Select **Pricing tier**, located in the **Settings** section.
-
-2. Change the **vCore** setting by moving the slider to your desired value.
-
- :::image type="content" source="./media/howto-create-manage-server-portal/scaling-compute.png" alt-text="Screenshot of Azure portal to choose vCore option in Azure Database for MySQL":::
-
- > [!NOTE]
- > Scaling vCores causes a server restart.
-
-3. Select **OK** to save changes.
-
-### Scale storage up
-
-1. Select your server in the Azure portal. Select **Pricing tier**, located in the **Settings** section.
-
-2. Change the **Storage** setting by moving the slider up to your desired value.
-
- :::image type="content" source="./media/howto-create-manage-server-portal/scaling-storage.png" alt-text="Screenshot of Azure portal to choose Storage scale in Azure Database for MySQL":::
-
- > [!NOTE]
- > Storage cannot be scaled down.
-
-3. Select **OK** to save changes.
-
-## Update admin password
-
-You can change the administrator role's password using the Azure portal.
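-
-If you'd rather script the reset, a minimal Azure CLI sketch follows; the server and resource group names are placeholders:
-
-```azurecli-interactive
-# Reset the admin password (replace the placeholder with a sufficiently complex password)
-az mysql server update --resource-group myresourcegroup --name mydemoserver --admin-password <new_password>
-```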
-
-1. Select your server in the Azure portal. In the **Overview** window select **Reset password**.
-
- :::image type="content" source="./media/howto-create-manage-server-portal/overview-reset-password.png" alt-text="Screenshot of Azure portal to reset the password in Azure Database for MySQL":::
-
-2. Enter a new password and confirm the password. The textbox will prompt you about password complexity requirements.
-
- :::image type="content" source="./media/howto-create-manage-server-portal/reset-password.png" alt-text="Screenshot of Azure portal to reset your password and save in Azure Database for MySQL":::
-
-3. Select **OK** to save the new password.
-
-
-> [!IMPORTANT]
-> Resetting the server admin password automatically resets the server admin privileges to the default. Consider resetting your server admin password if you accidentally revoked one or more of the server admin privileges.
-
-> [!NOTE]
-> Server admin user has the following privileges by default: SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER
-
-## Delete a server
-
-You can delete your server if you no longer need it.
-
-1. Select your server in the Azure portal. In the **Overview** window select **Delete**.
-
- :::image type="content" source="./media/howto-create-manage-server-portal/overview-delete.png" alt-text="Screenshot of Azure portal to Delete the server in Azure Database for MySQL":::
-
-2. Type the name of the server into the input box to confirm that this is the server you want to delete.
-
- :::image type="content" source="./media/howto-create-manage-server-portal/confirm-delete.png" alt-text="Screenshot of Azure portal to confirm the server delete in Azure Database for MySQL":::
-
- > [!NOTE]
- > Deleting a server is irreversible.
-
-3. Select **Delete**.
-
-## Next steps
-
-- Learn about [backups and server restore](howto-restore-server-portal.md)
-- Learn about [tuning and monitoring options in Azure Database for MySQL](concepts-monitoring.md)
mysql Howto Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-create-users.md
- Title: How to create users for Azure Database for MySQL
-description: This article describes how to create new user accounts to interact with an Azure Database for MySQL server.
-
-# Create users in Azure Database for MySQL
-
-This article describes how to create users for Azure Database for MySQL.
-
-> [!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-
-When you first created your Azure Database for MySQL server, you provided a server admin user name and password. For more information, see this [Quickstart](quickstart-create-mysql-server-database-using-azure-portal.md). You can determine your server admin user name in the Azure portal.
-
-The server admin user has these privileges:
-
- SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER
-
-After you create an Azure Database for MySQL server, you can use the first server admin account to create more users and grant admin access to them. You can also use the server admin account to create less privileged users that have access to individual database schemas.
-
-> [!NOTE]
-> The SUPER privilege and DBA role aren't supported. Review the [privileges](concepts-limits.md#privileges--data-manipulation-support) in the limitations article to understand what's not supported in the service.
->
-> Password plugins like `validate_password` and `caching_sha2_password` aren't supported by the service.
-
-## Create a database
-
-1. Get the connection information and admin user name.
- To connect to your database server, you need the full server name and admin sign-in credentials. You can easily find the server name and sign-in information on the server **Overview** page or on the **Properties** page in the Azure portal.
-
-2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as MySQL Workbench, mysql.exe, or HeidiSQL.
-
-> [!NOTE]
-> If you're not sure how to connect, see [connect and query data for Single Server](./connect-workbench.md) or [connect and query data for Flexible Server](./flexible-server/connect-workbench.md).
-
-3. Edit and run the following SQL code. Replace the placeholder value `testdb` with your database name. In the next sections, replace the placeholder value `db_user` with your intended new user name.
-
-   This SQL code creates a new database named testdb. The next section then creates a new user in the MySQL service and grants all privileges for the new database schema (testdb.\*) to that user.
-
- ```sql
- CREATE DATABASE testdb;
- ```
-
-## Create a non-admin user
- Now that the database is created, you can create a non-admin user with the ```CREATE USER``` MySQL statement.
- ``` sql
- CREATE USER 'db_user'@'%' IDENTIFIED BY 'StrongPassword!';
-
- GRANT ALL PRIVILEGES ON testdb.* TO 'db_user'@'%';
-
- FLUSH PRIVILEGES;
- ```
-
-## Verify the user permissions
-Run the ```SHOW GRANTS``` MySQL statement to view the privileges allowed for the user **db_user** on the **testdb** database.
-
- ```sql
- USE testdb;
-
- SHOW GRANTS FOR 'db_user'@'%';
- ```
-
-## Connect to the database with new user
-Sign in to the server, specifying the designated database and using the new user name and password. This example shows the mysql command line. When you use this command, you'll be prompted for the user's password. Use your own server name, database name, and user name. The following table shows how to connect for Single Server and Flexible Server.
-
-| Server type | Usage |
-| -- | -- |
-| Single Server | ```mysql --host mydemoserver.mysql.database.azure.com --database testdb --user db_user@mydemoserver -p``` |
-| Flexible Server | ``` mysql --host mydemoserver.mysql.database.azure.com --database testdb --user db_user -p```|
-
-## Limit privileges for user
-To restrict the type of operations a user can run on the database, you need to explicitly add the operations in the **GRANT** statement. See an example below:
-
- ```sql
- CREATE USER 'new_master_user'@'%' IDENTIFIED BY 'StrongPassword!';
-
- GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER ON *.* TO 'new_master_user'@'%' WITH GRANT OPTION;
-
- FLUSH PRIVILEGES;
- ```
-
-## About azure_superuser
-
-All Azure Database for MySQL servers are created with a user called "azure_superuser". This is a system account created by Microsoft to manage the server and conduct monitoring, backups, and other regular maintenance. On-call engineers may also use this account to access the server during an incident with certificate authentication and must request access using just-in-time (JIT) processes.
-
-## Next steps
-
-For more information about user account management, see the MySQL product documentation for [User account management](https://dev.mysql.com/doc/refman/5.7/en/access-control.html), [GRANT syntax](https://dev.mysql.com/doc/refman/5.7/en/grant.html), and [Privileges](https://dev.mysql.com/doc/refman/5.7/en/privileges-provided.html).
mysql Howto Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-data-encryption-cli.md
- Title: Data encryption - Azure CLI - Azure Database for MySQL
-description: Learn how to set up and manage data encryption for your Azure Database for MySQL by using the Azure CLI.
-
-# Data encryption for Azure Database for MySQL by using the Azure CLI
-
-Learn how to use the Azure CLI to set up and manage data encryption for your Azure Database for MySQL.
-
-## Prerequisites for Azure CLI
-
-* You must have an Azure subscription and be an administrator on that subscription.
-* Create a key vault and a key to use for a customer-managed key. Also enable purge protection and soft delete on the key vault.
-
- ```azurecli-interactive
- az keyvault create -g <resource_group> -n <vault_name> --enable-soft-delete true --enable-purge-protection true
- ```
-
-* In the created Azure Key Vault, create the key that will be used for the data encryption of the Azure Database for MySQL.
-
- ```azurecli-interactive
- az keyvault key create --name <key_name> -p software --vault-name <vault_name>
- ```
-
-* In order to use an existing key vault, it must have the following properties to use as a customer-managed key:
-
- * [Soft delete](../key-vault/general/soft-delete-overview.md)
-
- ```azurecli-interactive
-    az resource update --id $(az keyvault show --name <key_vault_name> -o tsv | awk '{print $1}') --set properties.enableSoftDelete=true
- ```
-
- * [Purge protected](../key-vault/general/soft-delete-overview.md#purge-protection)
-
- ```azurecli-interactive
- az keyvault update --name <key_vault_name> --resource-group <resource_group_name> --enable-purge-protection true
- ```
- * Retention days set to 90 days
- ```azurecli-interactive
- az keyvault update --name <key_vault_name> --resource-group <resource_group_name> --retention-days 90
- ```
-
-* The key must have the following attributes to use as a customer-managed key:
- * No expiration date
- * Not disabled
- * Perform **get**, **wrap**, **unwrap** operations
- * recoverylevel attribute set to **Recoverable** (this requires soft-delete enabled with retention period set to 90 days)
- * Purge protection enabled
-
-You can verify the above attributes of the key by using the following command:
-
-```azurecli-interactive
-az keyvault key show --vault-name <key_vault_name> -n <key_name>
-```
-* The Azure Database for MySQL - Single Server should be in the General Purpose or Memory Optimized pricing tier and on general purpose storage v2. Before you proceed further, refer to the limitations for [data encryption with customer-managed keys](concepts-data-encryption-mysql.md#limitations).
-
-## Set the right permissions for key operations
-
-1. There are two ways of getting the managed identity for your Azure Database for MySQL.
-
-   ### Create a new Azure Database for MySQL server with a managed identity
-
- ```azurecli-interactive
-    az mysql server create --name <server_name> -g <resource_group> --location <location> --storage-size <size> -u <user> -p <pwd> --backup-retention 7 --sku-name <sku_name> --geo-redundant-backup <Enabled/Disabled> --assign-identity
- ```
-
-   ### Update an existing Azure Database for MySQL server to get a managed identity
-
- ```azurecli-interactive
- az mysql server update --name <server name> -g <resource_group> --assign-identity
- ```
-
-2. Set the **Key permissions** (**Get**, **Wrap**, **Unwrap**) for the **Principal**, which is the name of the MySQL server.
-
- ```azurecli-interactive
-    az keyvault set-policy --name <key_vault_name> -g <resource_group> --key-permissions get unwrapKey wrapKey --object-id <principal id of the server>
- ```
-
-## Set data encryption for Azure Database for MySQL
-
-1. Enable Data encryption for the Azure Database for MySQL using the key created in the Azure Key Vault.
-
- ```azurecli-interactive
-    az mysql server key create --name <server name> -g <resource_group> --kid <key url>
- ```
-
-   Key url: `https://YourVaultName.vault.azure.net/keys/YourKeyName/01234567890123456789012345678901`
-
-## Using Data encryption for restore or replica servers
-
-After Azure Database for MySQL is encrypted with a customer's managed key stored in Key Vault, any newly created copy of the server is also encrypted. You can make this new copy either through a local or geo-restore operation, or through a replica (local/cross-region) operation. So for an encrypted MySQL server, you can use the following steps to create an encrypted restored server.
-
-### Creating a restored/replica server
-
-* [Create a restore server](howto-restore-server-cli.md)
-* [Create a read replica server](howto-read-replicas-cli.md)
-
-### Once the server is restored, revalidate data encryption on the restored server
-
-* Assign identity for the replica server
-```azurecli-interactive
-az mysql server update --name <server name> -g <resource_group> --assign-identity
-```
-
-* Get the existing key that has to be used for the restored/replica server
-
-```azurecli-interactive
-az mysql server key list --name '<server_name>' -g '<resource_group_name>'
-```
-
-* Set the policy for the new identity for the restored/replica server
-
-```azurecli-interactive
-az keyvault set-policy --name <keyvault> -g <resource_group> --key-permissions get unwrapKey wrapKey --object-id <principal id of the server returned by step 1>
-```
-
-* Re-validate the restored/replica server with the encryption key
-
-```azurecli-interactive
-az mysql server key create --name <server name> -g <resource_group> --kid <key url>
-```
-
-## Additional capabilities for the key used for the Azure Database for MySQL
-
-### Get the Key used
-
-```azurecli-interactive
-az mysql server key show --name <server name> -g <resource_group> --kid <key url>
-```
-
-Key url: `https://YourVaultName.vault.azure.net/keys/YourKeyName/01234567890123456789012345678901`
-
-### List the Key used
-
-```azurecli-interactive
-az mysql server key list --name <server name> -g <resource_group>
-```
-
-### Drop the key being used
-
-```azurecli-interactive
-az mysql server key delete -g <resource_group> --kid <key url>
-```
-
-## Using an Azure Resource Manager template to enable data encryption
-
-Apart from the Azure portal, you can also enable data encryption on your Azure Database for MySQL server using Azure Resource Manager templates for new and existing servers.
-
-### For a new server
-
-Use one of the pre-created Azure Resource Manager templates to provision the server with data encryption enabled:
-[Example with Data encryption](https://github.com/Azure/azure-mysql/tree/master/arm-templates/ExampleWithDataEncryption)
-
-This Azure Resource Manager template creates an Azure Database for MySQL server and uses the **KeyVault** and **Key** passed as parameters to enable data encryption on the server.
-
-### For an existing server
-
-Additionally, you can use Azure Resource Manager templates to enable data encryption on your existing Azure Database for MySQL servers.
-
-* Pass the Resource ID of the Azure Key Vault key that you copied earlier under the `Uri` property in the properties object.
-
-* Use *2020-01-01-preview* as the API version.
-
-```json
-{
- "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "location": {
- "type": "string"
- },
- "serverName": {
- "type": "string"
- },
- "keyVaultName": {
- "type": "string",
- "metadata": {
- "description": "Key vault name where the key to use is stored"
- }
- },
- "keyVaultResourceGroupName": {
- "type": "string",
- "metadata": {
- "description": "Key vault resource group name where it is stored"
- }
- },
- "keyName": {
- "type": "string",
- "metadata": {
- "description": "Key name in the key vault to use as encryption protector"
- }
- },
- "keyVersion": {
- "type": "string",
- "metadata": {
- "description": "Version of the key in the key vault to use as encryption protector"
- }
- }
- },
- "variables": {
- "serverKeyName": "[concat(parameters('keyVaultName'), '_', parameters('keyName'), '_', parameters('keyVersion'))]"
- },
- "resources": [
- {
- "type": "Microsoft.DBforMySQL/servers",
- "apiVersion": "2017-12-01",
- "kind": "",
- "location": "[parameters('location')]",
- "identity": {
- "type": "SystemAssigned"
- },
- "name": "[parameters('serverName')]",
- "properties": {
- }
- },
- {
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-05-01",
- "name": "addAccessPolicy",
- "resourceGroup": "[parameters('keyVaultResourceGroupName')]",
- "dependsOn": [
- "[resourceId('Microsoft.DBforMySQL/servers', parameters('serverName'))]"
- ],
- "properties": {
- "mode": "Incremental",
- "template": {
- "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "resources": [
- {
- "type": "Microsoft.KeyVault/vaults/accessPolicies",
- "name": "[concat(parameters('keyVaultName'), '/add')]",
- "apiVersion": "2018-02-14-preview",
- "properties": {
- "accessPolicies": [
- {
- "tenantId": "[subscription().tenantId]",
- "objectId": "[reference(resourceId('Microsoft.DBforMySQL/servers/', parameters('serverName')), '2017-12-01', 'Full').identity.principalId]",
- "permissions": {
- "keys": [
- "get",
- "wrapKey",
- "unwrapKey"
- ]
- }
- }
- ]
- }
- }
- ]
- }
- }
- },
- {
- "name": "[concat(parameters('serverName'), '/', variables('serverKeyName'))]",
- "type": "Microsoft.DBforMySQL/servers/keys",
- "apiVersion": "2020-01-01-preview",
- "dependsOn": [
- "addAccessPolicy",
- "[resourceId('Microsoft.DBforMySQL/servers', parameters('serverName'))]"
- ],
- "properties": {
- "serverKeyType": "AzureKeyVault",
- "uri": "[concat(reference(resourceId(parameters('keyVaultResourceGroupName'), 'Microsoft.KeyVault/vaults/', parameters('keyVaultName')), '2018-02-14-preview', 'Full').properties.vaultUri, 'keys/', parameters('keyName'), '/', parameters('keyVersion'))]"
- }
- }
- ]
-}
-
-```
-
-## Next steps
-
-* [Validating data encryption for Azure Database for MySQL](howto-data-encryption-validation.md)
-* [Troubleshoot data encryption in Azure Database for MySQL](howto-data-encryption-troubleshoot.md)
-* [Data encryption with customer-managed key concepts](concepts-data-encryption-mysql.md).
-
mysql Howto Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-data-encryption-portal.md
- Title: Data encryption - Azure portal - Azure Database for MySQL
-description: Learn how to set up and manage data encryption for your Azure Database for MySQL by using the Azure portal.
-
-# Data encryption for Azure Database for MySQL by using the Azure portal
-
-Learn how to use the Azure portal to set up and manage data encryption for your Azure Database for MySQL.
-
-## Prerequisites for Azure CLI
-
-* You must have an Azure subscription and be an administrator on that subscription.
-* In Azure Key Vault, create a key vault and a key to use for a customer-managed key.
-* The key vault must have the following properties to use as a customer-managed key:
- * [Soft delete](../key-vault/general/soft-delete-overview.md)
-
- ```azurecli-interactive
-    az resource update --id $(az keyvault show --name <key_vault_name> -o tsv | awk '{print $1}') --set properties.enableSoftDelete=true
- ```
-
- * [Purge protected](../key-vault/general/soft-delete-overview.md#purge-protection)
-
- ```azurecli-interactive
- az keyvault update --name <key_vault_name> --resource-group <resource_group_name> --enable-purge-protection true
- ```
- * Retention days set to 90 days
-
- ```azurecli-interactive
- az keyvault update --name <key_vault_name> --resource-group <resource_group_name> --retention-days 90
- ```
-
-* The key must have the following attributes to use as a customer-managed key:
- * No expiration date
- * Not disabled
- * Perform **get**, **wrap**, **unwrap** operations
- * recoverylevel attribute set to **Recoverable** (this requires soft-delete enabled with retention period set to 90 days)
- * Purge protection enabled
-
- You can verify the above attributes of the key by using the following command:
-
- ```azurecli-interactive
- az keyvault key show --vault-name <key_vault_name> -n <key_name>
- ```
-
-* The Azure Database for MySQL - Single Server should be in the General Purpose or Memory Optimized pricing tier and on general purpose storage v2. Before you proceed further, refer to the limitations for [data encryption with customer-managed keys](concepts-data-encryption-mysql.md#limitations).
-
-## Set the right permissions for key operations
-
-1. In Key Vault, select **Access policies** > **Add Access Policy**.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-access-policy-overview.png" alt-text="Screenshot of Key Vault, with Access policies and Add Access Policy highlighted":::
-
-2. Select **Key permissions**, and select **Get**, **Wrap**, **Unwrap**, and the **Principal**, which is the name of the MySQL server. If your server principal can't be found in the list of existing principals, you need to register it. You're prompted to register your server principal when you attempt to set up data encryption for the first time, and it fails.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/access-policy-wrap-unwrap.png" alt-text="Access policy overview":::
-
-3. Select **Save**.
-
-## Set data encryption for Azure Database for MySQL
-
-1. In Azure Database for MySQL, select **Data encryption** to set up the customer-managed key.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/data-encryption-overview.png" alt-text="Screenshot of Azure Database for MySQL, with Data encryption highlighted":::
-
-2. You can either select a key vault and key pair, or enter a key identifier.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/setting-data-encryption.png" alt-text="Screenshot of Azure Database for MySQL, with data encryption options highlighted":::
-
-3. Select **Save**.
-
-4. To ensure all files (including temp files) are fully encrypted, restart the server.
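-
-    If you'd rather restart from a shell, here's a minimal Azure CLI sketch; the server and resource group names are placeholders:
-
-    ```azurecli-interactive
-    az mysql server restart --resource-group <resource_group_name> --name <server_name>
-    ```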
-
-## Using Data encryption for restore or replica servers
-
-After Azure Database for MySQL is encrypted with a customer's managed key stored in Key Vault, any newly created copy of the server is also encrypted. You can make this new copy either through a local or geo-restore operation, or through a replica (local/cross-region) operation. So for an encrypted MySQL server, you can use the following steps to create an encrypted restored server.
-
-1. On your server, select **Overview** > **Restore**.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-restore.png" alt-text="Screenshot of Azure Database for MySQL, with Overview and Restore highlighted":::
-
- Or for a replication-enabled server, under the **Settings** heading, select **Replication**.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/mysql-replica.png" alt-text="Screenshot of Azure Database for MySQL, with Replication highlighted":::
-
-2. After the restore operation is complete, the new server created is encrypted with the primary server's key. However, the features and options on the server are disabled, and the server is inaccessible. This prevents any data manipulation, because the new server's identity hasn't yet been given permission to access the key vault.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-restore-data-encryption.png" alt-text="Screenshot of Azure Database for MySQL, with Inaccessible status highlighted":::
-
-3. To make the server accessible, revalidate the key on the restored server. Select **Data encryption** > **Revalidate key**.
-
- > [!NOTE]
- > The first attempt to revalidate will fail, because the new server's service principal needs to be given access to the key vault. To generate the service principal, select **Revalidate key**, which will show an error but generates the service principal. Thereafter, refer to [these steps](#set-the-right-permissions-for-key-operations) earlier in this article.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-revalidate-data-encryption.png" alt-text="Screenshot of Azure Database for MySQL, with revalidation step highlighted":::
-
- You will have to give the key vault access to the new server. For more information, see [Assign a Key Vault access policy](../key-vault/general/assign-access-policy.md?tabs=azure-portal).
-
-4. After registering the service principal, revalidate the key again, and the server resumes its normal functionality.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/restore-successful.png" alt-text="Screenshot of Azure Database for MySQL, showing restored functionality":::
-
-## Next steps
-
-* [Validating data encryption for Azure Database for MySQL](howto-data-encryption-validation.md)
-* [Troubleshoot data encryption in Azure Database for MySQL](howto-data-encryption-troubleshoot.md)
-* [Data encryption with customer-managed key concepts](concepts-data-encryption-mysql.md).
mysql Howto Data Encryption Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-data-encryption-troubleshoot.md
- Title: Troubleshoot data encryption - Azure Database for MySQL
-description: Learn how to troubleshoot data encryption in Azure Database for MySQL
-
-# Troubleshoot data encryption in Azure Database for MySQL
-
-This article describes how to identify and resolve common issues that can occur in Azure Database for MySQL when configured with data encryption using a customer-managed key.
-
-## Introduction
-
-When you configure data encryption to use a customer-managed key in Azure Key Vault, servers require continuous access to the key. If the server loses access to the customer-managed key in Azure Key Vault, it will deny all connections, return the appropriate error message, and change its state to ***Inaccessible*** in the Azure portal.
-
-If you no longer need an inaccessible Azure Database for MySQL server, you can delete it to stop incurring costs. No other actions on the server are permitted until access to the key vault has been restored and the server is available. It's also not possible to change the data encryption option from `Yes` (customer-managed) to `No` (service-managed) on an inaccessible server when it's encrypted with a customer-managed key. You'll have to revalidate the key manually before the server is accessible again. This action is necessary to protect the data from unauthorized access while permissions to the customer-managed key are revoked.
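-
-You can also check the state from a shell. The following sketch assumes the state is surfaced in the `userVisibleState` property returned by `az mysql server show`; the server and resource group names are placeholders:
-
-```azurecli-interactive
-# Shows "Inaccessible" while the server can't reach its customer-managed key
-az mysql server show --resource-group <resource_group_name> --name <server_name> --query userVisibleState
-```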
-
-## Common errors that cause the server to become inaccessible
-
-The following misconfigurations cause most issues with data encryption that use Azure Key Vault keys:
--- The key vault is unavailable or doesn't exist:
- - The key vault was accidentally deleted.
- - An intermittent network error causes the key vault to be unavailable.
--- You don't have permissions to access the key vault or the key doesn't exist:
- - The key expired or was accidentally deleted or disabled.
- - The managed identity of the Azure Database for MySQL instance was accidentally deleted.
- - The managed identity of the Azure Database for MySQL instance has insufficient key permissions. For example, the permissions don't include Get, Wrap, and Unwrap.
- - The managed identity permissions to the Azure Database for MySQL instance were revoked or deleted.
-
-## Identify and resolve common errors
-
-### Errors on the key vault
-
-#### Disabled key vault
-
-- `AzureKeyVaultKeyDisabledMessage`
-- **Explanation**: The operation couldn't be completed on the server because the Azure Key Vault key is disabled.
-
-#### Missing key vault permissions
-
-- `AzureKeyVaultMissingPermissionsMessage`
-- **Explanation**: The server doesn't have the required Get, Wrap, and Unwrap permissions to Azure Key Vault. Grant any missing permissions to the service principal with ID.
-
-### Mitigation
-
-- Confirm that the customer-managed key is present in the key vault.
-- Identify the key vault, then go to the key vault in the Azure portal.
-- Ensure that the key URI identifies a key that is present.
-
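-One concrete check, sketched here with the Azure CLI (the vault and key names are placeholders), is to confirm that the key exists and is enabled:
-
-```azurecli-interactive
-# Returns true if the key is enabled, false if it's disabled
-az keyvault key show --vault-name <key_vault_name> --name <key_name> --query attributes.enabled
-```
-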
-## Next steps
-
-[Use the Azure portal to set up data encryption with a customer-managed key on Azure Database for MySQL](howto-data-encryption-portal.md)
mysql Howto Data Encryption Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-data-encryption-validation.md
- Title: How to ensure validation of the Azure Database for MySQL - Data encryption
-description: Learn how to validate the encryption of the Azure Database for MySQL - Data encryption using the customers managed key.
----- Previously updated : 04/28/2020--
-# Validating data encryption for Azure Database for MySQL
-
-This article helps you validate that data encryption using customer managed key for Azure Database for MySQL is working as expected.
-
-## Check the encryption status
-
-### From portal
-
-1. If you want to verify that the customer's key is used for encryption, follow these steps:
-
- * In the Azure portal, navigate to the **Azure Key Vault** -> **Keys**
- * Select the key used for server encryption.
- * Set the status of the key **Enabled** to **No**.
-
-   After some time (**~15 min**), the Azure Database for MySQL server **Status** should be **Inaccessible**. Any I/O operation done against the server will fail, which validates that the server is indeed encrypted with the customer's key and that the key is currently not valid.
-
-   To make the server **Available** again, you can revalidate the key.
-
-    * Set the key's **Enabled** status in the key vault back to **Yes**.
- * On the server **Data Encryption**, select **Revalidate key**.
- * After the revalidation of the key is successful, the server **Status** changes to **Available**.
-
-2. In the Azure portal, if you can see that the encryption key is set, then data is encrypted using the customer-managed key shown in the Azure portal.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/byok-validate.png" alt-text="Access policy overview":::
-
-### From CLI
-
-1. You can use the Azure CLI to validate the key resources being used for the Azure Database for MySQL server.
-
- ```azurecli-interactive
- az mysql server key list --name '<server_name>' -g '<resource_group_name>'
- ```
-
-    For a server without data encryption configured, this command returns an empty set, `[]`.
-
-### Azure audit reports
-
-You can also review [audit reports](https://servicetrust.microsoft.com), which provide information about compliance with data protection standards and regulatory requirements.
-
-## Next steps
-
-To learn more about data encryption, see [Azure Database for MySQL data encryption with customer-managed key](concepts-data-encryption-mysql.md).
mysql Howto Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-data-in-replication.md
- Title: Configure Data-in Replication - Azure Database for MySQL
-description: This article describes how to set up Data-in Replication for Azure Database for MySQL.
-
-# How to configure Azure Database for MySQL Data-in Replication
-
-This article describes how to set up [Data-in Replication](concepts-data-in-replication.md) in Azure Database for MySQL by configuring the source and replica servers. This article assumes that you have some prior experience with MySQL servers and databases.
-
-> [!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
->
-
-To create a replica in the Azure Database for MySQL service, [Data-in Replication](concepts-data-in-replication.md) synchronizes data from a source MySQL server on-premises, in virtual machines (VMs), or in cloud database services. Data-in Replication is based on the binary log (binlog) file position-based or GTID-based replication native to MySQL. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
-
-Review the [limitations and requirements](concepts-data-in-replication.md#limitations-and-considerations) of Data-in replication before performing the steps in this article.
-
-## Create an Azure Database for MySQL Single Server instance to use as a replica
-
-1. Create a new instance of Azure Database for MySQL Single Server (for example, `replica.mysql.database.azure.com`). Refer to [Create an Azure Database for MySQL server by using the Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md) for server creation. This server is the "replica" server for Data-in Replication.
-
- > [!IMPORTANT]
- > The Azure Database for MySQL server must be created in the General Purpose or Memory Optimized pricing tiers as data-in replication is only supported in these tiers.
- > GTID is supported on versions 5.7 and 8.0 and only on servers that support storage up to 16 TB (General purpose storage v2).
-
-2. Create the same user accounts and corresponding privileges.
-
- User accounts aren't replicated from the source server to the replica server. If you plan on providing users with access to the replica server, you need to create all accounts and corresponding privileges manually on this newly created Azure Database for MySQL server.
-
-3. Add the source server's IP address to the replica's firewall rules.
-
- Update firewall rules using the [Azure portal](howto-manage-firewall-using-portal.md) or [Azure CLI](howto-manage-firewall-using-cli.md).
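-
-    For example, a single-address allow rule can be sketched with the Azure CLI; every value below is a placeholder:
-
-    ```azurecli-interactive
-    az mysql server firewall-rule create --resource-group <resource_group> --server-name <replica_server_name> --name AllowSourceServer --start-ip-address <source_ip> --end-ip-address <source_ip>
-    ```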
-
-4. **Optional** - If you wish to use [GTID-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html) from the source server to the Azure Database for MySQL replica server, you'll need to enable the following server parameters on the Azure Database for MySQL server as shown in the portal image below:
-
- :::image type="content" source="./media/howto-data-in-replication/enable-gtid.png" alt-text="Enable GTID on Azure Database for MySQL server":::
-
-## Configure the source MySQL server
-
-The following steps prepare and configure the MySQL server hosted on-premises, in a virtual machine, or in a database service hosted by another cloud provider for Data-in Replication. This server is the "source" for Data-in Replication.
-
-1. Review the [source server requirements](concepts-data-in-replication.md#requirements) before proceeding.
-
-2. Ensure that the source server allows both inbound and outbound traffic on port 3306, and that it has a **public IP address**, that its DNS is publicly resolvable, or that it has a fully qualified domain name (FQDN).
-
- Test connectivity to the source server by attempting to connect from a tool such as the MySQL command line hosted on another machine or from the [Azure Cloud Shell](../cloud-shell/overview.md) available in the Azure portal.
-
-   If your organization has strict security policies and won't allow all IP addresses on the source server to enable communication from Azure to your source server, you can use the command below to determine the IP address of your Azure Database for MySQL server to allow instead.
-
- 1. Sign in to your Azure Database for MySQL server using a tool such as the MySQL command line.
-
- 2. Execute the following query.
-
- ```bash
- mysql> SELECT @@global.redirect_server_host;
- ```
-
- Below is some sample output:
-
- ```bash
- +--+
- | @@global.redirect_server_host |
- +--+
- | e299ae56f000.tr1830.westus1-a.worker.database.windows.net |
- +--+
- ```
-
- 3. Exit from the MySQL command line.
- 4. To get the IP address, execute the following command in the ping utility:
-
- ```bash
- ping <output of step 2b>
- ```
-
- For example:
-
- ```bash
- C:\Users\testuser> ping e299ae56f000.tr1830.westus1-a.worker.database.windows.net
- Pinging tr1830.westus1-a.worker.database.windows.net (**11.11.111.111**) 56(84) bytes of data.
- ```
-
- 5. Configure your source server's firewall rules to include the previous step's outputted IP address on port 3306.
-
- > [!NOTE]
-    > This IP address may change due to maintenance / deployment operations. This method of connectivity is only for customers who can't afford to allow all IP addresses on port 3306.
-
-3. Turn on binary logging.
-
- Check to see if binary logging has been enabled on the source by running the following command:
-
- ```sql
- SHOW VARIABLES LIKE 'log_bin';
- ```
-
- If the variable [`log_bin`](https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_log_bin) is returned with the value "ON", binary logging is enabled on your server.
-
- If `log_bin` is returned with the value "OFF" and your source server is running on-premises or on virtual machines where you can access the configuration file (my.cnf), you can follow the steps below:
- 1. Locate your MySQL configuration file (my.cnf) in the source server. For example: /etc/my.cnf
-    2. Open the configuration file to edit it and locate the **mysqld** section in the file.
-    3. In the mysqld section, add the following line:
-
- ```bash
- log-bin=mysql-bin.log
- ```
-
- 4. Restart the MySQL source server for the changes to take effect.
- 5. After the server is restarted, verify that binary logging is enabled by running the same query as before:
-
- ```sql
- SHOW VARIABLES LIKE 'log_bin';
- ```
-
-4. Configure the source server settings.
-
- Data-in Replication requires the parameter `lower_case_table_names` to be consistent between the source and replica servers. This parameter is 1 by default in Azure Database for MySQL.
-
- ```sql
- SET GLOBAL lower_case_table_names = 1;
- ```
-
-   **Optional** - If you wish to use [GTID-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html), you'll need to check if GTID is enabled on the source server. You can execute the following command against your source MySQL server to see if gtid_mode is ON.
-
- ```sql
- show variables like 'gtid_mode';
- ```
-
- >[!IMPORTANT]
-    > All servers have gtid_mode set to the default value OFF. You don't need to enable GTID on the source MySQL server specifically to set up Data-in Replication. If GTID is already enabled on the source server, you can optionally use GTID-based replication to set up Data-in Replication with Azure Database for MySQL Single Server. You can use file-based replication to set up Data-in Replication for all servers regardless of the gtid_mode configuration on the source server.
-
-5. Create a new replication role and set up permission.
-
- Create a user account on the source server that is configured with replication privileges. This can be done through SQL commands or a tool such as MySQL Workbench. Consider whether you plan on replicating with SSL, as this will need to be specified when creating the user. Refer to the MySQL documentation to understand how to [add user accounts](https://dev.mysql.com/doc/refman/5.7/en/user-names.html) on your source server.
-
- In the following commands, the new replication role created can access the source from any machine, not just the machine that hosts the source itself. This is done by specifying "syncuser@'%'" in the create user command. See the MySQL documentation to learn more about [specifying account names](https://dev.mysql.com/doc/refman/5.7/en/account-names.html).
-
- **SQL Command**
-
- *Replication with SSL*
-
- To require SSL for all user connections, use the following command to create a user:
-
- ```sql
- CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';
-    GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%' REQUIRE SSL;
- ```
-
- *Replication without SSL*
-
- If SSL isn't required for all connections, use the following command to create a user:
-
- ```sql
- CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';
-    GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%';
- ```
-
- **MySQL Workbench**
-
- To create the replication role in MySQL Workbench, open the **Users and Privileges** panel from the **Management** panel, and then select **Add Account**.
-
- :::image type="content" source="./media/howto-data-in-replication/users_privileges.png" alt-text="Users and Privileges":::
-
- Type in the username into the **Login Name** field.
-
- :::image type="content" source="./media/howto-data-in-replication/syncuser.png" alt-text="Sync user":::
-
- Select the **Administrative Roles** panel and then select **Replication Slave** from the list of **Global Privileges**. Then select **Apply** to create the replication role.
-
- :::image type="content" source="./media/howto-data-in-replication/replicationslave.png" alt-text="Replication Slave":::
-
-6. Set the source server to read-only mode.
-
- Before starting to dump out the database, the server needs to be placed in read-only mode. While in read-only mode, the source will be unable to process any write transactions. Evaluate the impact to your business and schedule the read-only window in an off-peak time if necessary.
-
- ```sql
- FLUSH TABLES WITH READ LOCK;
- SET GLOBAL read_only = ON;
- ```
-
-7. Get binary log file name and offset.
-
- Run the [`show master status`](https://dev.mysql.com/doc/refman/5.7/en/show-master-status.html) command to determine the current binary log file name and offset.
-
- ```sql
- show master status;
- ```
- The results should appear similar to the following. Make sure to note the binary file name for use in later steps.
-
- :::image type="content" source="./media/howto-data-in-replication/masterstatus.png" alt-text="Master Status Results":::
-
-## Dump and restore the source server
-
-1. Determine which databases and tables you want to replicate into Azure Database for MySQL and perform the dump from the source server.
-
- You can use mysqldump to dump databases from your primary server. For details, refer to [Dump & Restore](concepts-migrate-dump-restore.md). It's unnecessary to dump the MySQL library and test library.
-
-2. **Optional** - If you wish to use [gtid-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html), you'll need to identify the GTID of the last transaction executed at the primary. You can use the following command to note the GTID of the last transaction executed on the master server.
-
- ```sql
- show global variables like 'gtid_executed';
- ```
-
-3. Set source server to read/write mode.
-
- After the database has been dumped, change the source MySQL server back to read/write mode.
-
- ```sql
- SET GLOBAL read_only = OFF;
- UNLOCK TABLES;
- ```
-
-4. Restore dump file to new server.
-
- Restore the dump file to the server created in the Azure Database for MySQL service. Refer to [Dump & Restore](concepts-migrate-dump-restore.md) for how to restore a dump file to a MySQL server. If the dump file is large, upload it to a virtual machine in Azure within the same region as your replica server. Restore it to the Azure Database for MySQL server from the virtual machine.
-
-5. **Optional** - Note the GTID of the restored server on Azure Database for MySQL to ensure it's the same as on the primary server. You can use the following command to note the gtid_purged value on the Azure Database for MySQL replica server. The value of gtid_purged should be the same as the gtid_executed value noted on the master in step 2 for GTID-based replication to work.
-
- ```sql
- show global variables like 'gtid_purged';
- ```
-
-## Link source and replica servers to start Data-in Replication
-
-1. Set the source server.
-
- All Data-in Replication functions are done by stored procedures. You can find all procedures at [Data-in Replication Stored Procedures](./reference-stored-procedures.md). The stored procedures can be run in the MySQL shell or MySQL Workbench.
-
-   To link two servers and start replication, log in to the target replica server in the Azure DB for MySQL service and set the external instance as the source server. This is done by using the `mysql.az_replication_change_master` stored procedure on the Azure DB for MySQL server.
-
- ```sql
- CALL mysql.az_replication_change_master('<master_host>', '<master_user>', '<master_password>', <master_port>, '<master_log_file>', <master_log_pos>, '<master_ssl_ca>');
- ```
-
- **Optional** - If you wish to use [gtid-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html), use the following command to link the two servers:
-
- ```sql
- call mysql.az_replication_change_master_with_gtid('<master_host>', '<master_user>', '<master_password>', <master_port>, '<master_ssl_ca>');
- ```
-
- - master_host: hostname of the source server
- - master_user: username for the source server
- - master_password: password for the source server
- - master_port: port number on which the source server is listening for connections (3306 is the default)
- - master_log_file: binary log file name from running `show master status`
- - master_log_pos: binary log position from running `show master status`
- - master_ssl_ca: the CA certificate's content. If you're not using SSL, pass in an empty string.
-
- It's recommended to pass this parameter in as a variable. For more information, see the following examples.
-
- > [!NOTE]
- > If the source server is hosted in an Azure VM, set "Allow access to Azure services" to "ON" to allow the source and replica servers to communicate with each other. This setting can be changed from the **Connection security** options. For more information, see [Manage firewall rules using the portal](howto-manage-firewall-using-portal.md).
-
- **Examples**
-
- *Replication with SSL*
-
- The variable `@cert` is created by running the following MySQL commands:
-
- ```sql
- SET @cert = '--BEGIN CERTIFICATE--
- PLACE YOUR PUBLIC KEY CERTIFICATE'S CONTENT HERE
- --END CERTIFICATE--'
- ```
-
- Replication with SSL is set up between a source server hosted in the domain "companya.com" and a replica server hosted in Azure Database for MySQL. This stored procedure is run on the replica.
-
- ```sql
- CALL mysql.az_replication_change_master('master.companya.com', 'syncuser', 'P@ssword!', 3306, 'mysql-bin.000002', 120, @cert);
- ```
-
- *Replication without SSL*
-
- Replication without SSL is set up between a source server hosted in the domain "companya.com" and a replica server hosted in Azure Database for MySQL. This stored procedure is run on the replica.
-
- ```sql
- CALL mysql.az_replication_change_master('master.companya.com', 'syncuser', 'P@ssword!', 3306, 'mysql-bin.000002', 120, '');
- ```
-
-2. Set up filtering.
-
- If you want to skip replicating some tables from your master, update the `replicate_wild_ignore_table` server parameter on your replica server. You can provide more than one table pattern using a comma-separated list.
-
- Review the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#option_mysqld_replicate-wild-ignore-table) to learn more about this parameter.
-
- To update the parameter, you can use the [Azure portal](howto-server-parameters.md) or [Azure CLI](howto-configure-server-parameters-using-cli.md).
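-
- For example, a hedged Azure CLI sketch (the server, resource group, and table patterns are placeholders):
-
- ```azurecli
- az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver --name replicate_wild_ignore_table --value "db1.%,db2.tbl%"
- ```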
-
-3. Start replication.
-
- Call the `mysql.az_replication_start` stored procedure to start replication.
-
- ```sql
- CALL mysql.az_replication_start;
- ```
-
-4. Check replication status.
-
- Call the [`show slave status`](https://dev.mysql.com/doc/refman/5.7/en/show-slave-status.html) command on the replica server to view the replication status.
-
- ```sql
- show slave status;
- ```
-
- If the states of `Slave_IO_Running` and `Slave_SQL_Running` are both "yes" and the value of `Seconds_Behind_Master` is "0", replication is working well. `Seconds_Behind_Master` indicates how far behind the replica is; a value other than "0" means the replica is still processing updates.
-
-## Other useful stored procedures for Data-in Replication operations
-
-### Stop replication
-
-To stop replication between the source and replica server, use the following stored procedure:
-
-```sql
-CALL mysql.az_replication_stop;
-```
-
-### Remove replication relationship
-
-To remove the relationship between source and replica server, use the following stored procedure:
-
-```sql
-CALL mysql.az_replication_remove_master;
-```
-
-### Skip replication error
-
-To skip a replication error and allow replication to continue, use the following stored procedure:
-
-```sql
-CALL mysql.az_replication_skip_counter;
-```
-
- **Optional** - If you wish to use [gtid-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html), use the following stored procedure to skip a transaction:
-
-```sql
-CALL mysql.az_replication_skip_gtid_transaction('<transaction_gtid>');
-```
-
-The procedure can skip the transaction for the given GTID. If the GTID format is invalid or the GTID transaction has already been executed, the procedure fails. The GTID for a transaction can be determined by parsing the binary log to check the transaction events. MySQL provides the [mysqlbinlog](https://dev.mysql.com/doc/refman/5.7/en/mysqlbinlog.html) utility to parse binary logs and display their contents in text format, which you can use to identify the GTID of a transaction.
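-
-For example, a hedged sketch that surfaces GTID events from a binary log (the host, user, and log file name are placeholders):
-
-```bash
-# Each transaction's GTID appears in a "SET @@SESSION.GTID_NEXT" event.
-mysqlbinlog --read-from-remote-server --host=master.companya.com --user=syncuser --password mysql-bin.000002 | grep GTID_NEXT
-```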
-
->[!Important]
->This procedure can only be used to skip one transaction. It can't be used to skip a GTID set or to set `gtid_purged`.
-
-To skip the next transaction after the current replication position, use the following command to identify the GTID of the next transaction.
-
-```sql
-SHOW BINLOG EVENTS [IN 'log_name'] [FROM pos][LIMIT [offset,] row_count]
-```
-
- :::image type="content" source="./media/howto-data-in-replication/show-binary-log.png" alt-text="Show binary log results":::
-
-## Next steps
-
-- Learn more about [Data-in Replication](concepts-data-in-replication.md) for Azure Database for MySQL.
mysql Howto Deny Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-deny-public-network-access.md
- Title: Deny Public Network Access - Azure portal - Azure Database for MySQL
-description: Learn how to configure Deny Public Network Access using Azure portal for your Azure Database for MySQL
----- Previously updated : 03/10/2020--
-# Deny Public Network Access in Azure Database for MySQL using Azure portal
--
-This article describes how you can configure an Azure Database for MySQL server to deny all public connections and allow only connections through private endpoints to further enhance network security.
-
-## Prerequisites
-
-To complete this how-to guide, you need:
-
-* An [Azure Database for MySQL](quickstart-create-mysql-server-database-using-azure-portal.md) with General Purpose or Memory Optimized pricing tier
-
-## Set Deny Public Network Access
-
-Follow these steps to set MySQL server Deny Public Network Access:
-
-1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL server.
-
-1. On the MySQL server page, under **Settings**, click **Connection security** to open the connection security configuration page.
-
-1. In **Deny Public Network Access**, select **Yes** to enable deny public access for your MySQL server.
-
- :::image type="content" source="./media/howto-deny-public-network-access/setting-deny-public-network-access.PNG" alt-text="Azure Database for MySQL Deny network access":::
-
-1. Click **Save** to save the changes.
-
-1. A notification will confirm that the connection security setting was successfully enabled.
-
- :::image type="content" source="./media/howto-deny-public-network-access/setting-deny-public-network-access-success.png" alt-text="Azure Database for MySQL Deny network access success":::
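-
-If you prefer scripting, a rough Azure CLI equivalent is sketched below. This is a hedged sketch: it assumes the `--public-network-access` parameter of `az mysql server update`, and the server and resource group names are placeholders.
-
-```azurecli
-az mysql server update --resource-group myresourcegroup --name mydemoserver --public-network-access Disabled
-```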
-
-## Next steps
-
-Learn about [how to create alerts on metrics](howto-alert-on-metric.md).
mysql Howto Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-double-encryption.md
- Title: Infrastructure double encryption - Azure portal - Azure Database for MySQL
-description: Learn how to set up and manage Infrastructure double encryption for your Azure Database for MySQL.
----- Previously updated : 06/30/2020--
-# Infrastructure double encryption for Azure Database for MySQL
--
-Learn how to set up and manage Infrastructure double encryption for your Azure Database for MySQL.
-
-## Prerequisites
-
-* You must have an Azure subscription and be an administrator on that subscription.
-* The Azure Database for MySQL - Single Server should be on the General Purpose or Memory Optimized pricing tier and on general purpose storage v2. Before you proceed further, review the limitations for [infrastructure double encryption](concepts-infrastructure-double-encryption.md#limitations).
-
-## Create an Azure Database for MySQL server with Infrastructure Double encryption - Portal
-
-Follow these steps to create an Azure Database for MySQL server with Infrastructure double encryption from Azure portal:
-
-1. Select **Create a resource** (+) in the upper-left corner of the portal.
-
-2. Select **Databases** > **Azure Database for MySQL**. You can also enter **MySQL** in the search box to find the service.
-
- :::image type="content" source="./media/quickstart-create-mysql-server-database-using-azure-portal/2_navigate-to-mysql.png" alt-text="Azure Database for MySQL option":::
-
-3. Provide the basic information of the server. Select **Additional settings** and enable the **Infrastructure double encryption** checkbox to set the parameter.
-
- :::image type="content" source="./media/howto-double-encryption/infrastructure-encryption-selected.png" alt-text="Azure Database for MySQL selections":::
-
-4. Select **Review + create** to provision the server.
-
- :::image type="content" source="./media/howto-double-encryption/infrastructure-encryption-summary.png" alt-text="Azure Database for MySQL summary":::
-
-5. Once the server is created, you can validate the infrastructure double encryption by checking the status in the **Data encryption** server blade.
-
- :::image type="content" source="./media/howto-double-encryption/infrastructure-encryption-validation.png" alt-text="Azure Database for MySQL validation":::
-
-## Create an Azure Database for MySQL server with Infrastructure Double encryption - CLI
-
-Follow these steps to create an Azure Database for MySQL server with Infrastructure double encryption from CLI:
-
-This example creates a resource group named `myresourcegroup` in the `westus` location.
-
-```azurecli-interactive
-az group create --name myresourcegroup --location westus
-```
-The following example creates a MySQL 5.7 server in West US named `mydemoserver` in your resource group `myresourcegroup` with server admin login `myadmin`. This is a **Gen 5** **General Purpose** server with **2 vCores**. The command also enables infrastructure double encryption for the server. Substitute the `<server_admin_password>` with your own value.
-
-```azurecli-interactive
-az mysql server create --resource-group myresourcegroup --name mydemoserver --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2 --version 5.7 --infrastructure-encryption <Enabled/Disabled>
-```
-
-## Next steps
-
- To learn more about data encryption, see [Azure Database for MySQL Infrastructure double encryption](concepts-Infrastructure-double-encryption.md).
mysql Howto Manage Firewall Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-manage-firewall-using-cli.md
- Title: Manage firewall rules - Azure CLI - Azure Database for MySQL
-description: This article describes how to create and manage Azure Database for MySQL firewall rules using Azure CLI command-line.
----- Previously updated : 3/18/2020 ---
-# Create and manage Azure Database for MySQL firewall rules by using the Azure CLI
-
-Server-level firewall rules can be used to manage access to an Azure Database for MySQL Server from a specific IP address or a range of IP addresses. Using convenient Azure CLI commands, you can create, update, delete, list, and show firewall rules to manage your server. For an overview of Azure Database for MySQL firewalls, see [Azure Database for MySQL server firewall rules](./concepts-firewall-rules.md).
-
-Virtual Network (VNet) rules can also be used to secure access to your server. Learn more about [creating and managing Virtual Network service endpoints and rules using the Azure CLI](howto-manage-vnet-using-cli.md).
-
-## Prerequisites
-* [Install Azure CLI](/cli/azure/install-azure-cli).
-* An [Azure Database for MySQL server and database](quickstart-create-mysql-server-database-using-azure-cli.md).
-
-## Firewall rule commands
-The **az mysql server firewall-rule** command is used from the Azure CLI to create, delete, list, show, and update firewall rules.
-
-Commands:
-- **create**: Create an Azure MySQL server firewall rule.
-- **delete**: Delete an Azure MySQL server firewall rule.
-- **list**: List the Azure MySQL server firewall rules.
-- **show**: Show the details of an Azure MySQL server firewall rule.
-- **update**: Update an Azure MySQL server firewall rule.
-
-## Sign in to Azure and list your Azure Database for MySQL Servers
-Securely connect Azure CLI with your Azure account by using the **az login** command.
-
-1. From the command-line, run the following command:
- ```azurecli
- az login
- ```
- This command outputs a code to use in the next step.
-
-2. Use a web browser to open the page [https://aka.ms/devicelogin](https://aka.ms/devicelogin), and then enter the code.
-
-3. At the prompt, sign in using your Azure credentials.
-
-4. After your login is authorized, a list of subscriptions is printed in the console. Copy the ID of the desired subscription to set the current subscription to use. Use the [az account set](/cli/azure/account#az-account-set) command.
- ```azurecli-interactive
- az account set --subscription <your subscription id>
- ```
-
-5. List the Azure Database for MySQL servers for your subscription and resource group if you are unsure of the names. Use the [az mysql server list](/cli/azure/mysql/server#az-mysql-server-list) command.
-
- ```azurecli-interactive
- az mysql server list --resource-group myresourcegroup
- ```
-
- Note the name attribute in the listing; you need it to specify which MySQL server to work on. If needed, confirm the details for that server by using the name attribute with the [az mysql server show](/cli/azure/mysql/server#az-mysql-server-show) command, to ensure it's correct.
-
- ```azurecli-interactive
- az mysql server show --resource-group myresourcegroup --name mydemoserver
- ```
-
-## List firewall rules on Azure Database for MySQL Server
-Using the server name and the resource group name, list the existing server firewall rules on the server. Use the [az mysql server firewall-rule list](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-list) command. Notice that the server name attribute is specified in the **--server-name** switch and not in the **--name** switch.
-```azurecli-interactive
-az mysql server firewall-rule list --resource-group myresourcegroup --server-name mydemoserver
-```
-The output lists the rules, if any, in JSON format (by default). You can use the **--output table** switch to output the results in a more readable table format.
-```azurecli-interactive
-az mysql server firewall-rule list --resource-group myresourcegroup --server-name mydemoserver --output table
-```
-## Create a firewall rule on Azure Database for MySQL Server
-Using the Azure MySQL server name and the resource group name, create a new firewall rule on the server. Use the [az mysql server firewall-rule create](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-create) command. Provide a name for the rule, as well as the start IP and end IP (to provide access to a range of IP addresses) for the rule.
-```azurecli-interactive
-az mysql server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1 --start-ip-address 13.83.152.0 --end-ip-address 13.83.152.15
-```
-
-To allow access for a single IP address, provide the same IP address as the Start IP and End IP, as in this example.
-```azurecli-interactive
-az mysql server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1 --start-ip-address 1.1.1.1 --end-ip-address 1.1.1.1
-```
-
-To allow applications from Azure IP addresses to connect to your Azure Database for MySQL server, provide the IP address 0.0.0.0 as the Start IP and End IP, as in this example.
-```azurecli-interactive
-az mysql server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver --name "AllowAllWindowsAzureIps" --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
-```
-
-> [!IMPORTANT]
-> This option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
->
-
-Upon success, each create command output lists the details of the firewall rule you have created, in JSON format (by default). If there is a failure, the output shows error message text instead.
-
-## Update a firewall rule on Azure Database for MySQL server
-Using the Azure MySQL server name and the resource group name, update an existing firewall rule on the server. Use the [az mysql server firewall-rule update](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-update) command. Provide the name of the existing firewall rule as input, as well as the start IP and end IP attributes to update.
-```azurecli-interactive
-az mysql server firewall-rule update --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1 --start-ip-address 13.83.152.0 --end-ip-address 13.83.152.1
-```
-Upon success, the command output lists the details of the firewall rule you have updated, in JSON format (by default). If there is a failure, the output shows error message text instead.
-
-> [!NOTE]
-> If the firewall rule does not exist, the rule is created by the update command.
-
-## Show firewall rule details on Azure Database for MySQL Server
-Using the Azure MySQL server name and the resource group name, show the existing firewall rule details from the server. Use the [az mysql server firewall-rule show](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-show) command. Provide the name of the existing firewall rule as input.
-```azurecli-interactive
-az mysql server firewall-rule show --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1
-```
-Upon success, the command output lists the details of the firewall rule you have specified, in JSON format (by default). If there is a failure, the output shows error message text instead.
-
-## Delete a firewall rule on Azure Database for MySQL Server
-Using the Azure MySQL server name and the resource group name, remove an existing firewall rule from the server. Use the [az mysql server firewall-rule delete](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-delete) command. Provide the name of the existing firewall rule.
-```azurecli-interactive
-az mysql server firewall-rule delete --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1
-```
-Upon success, there is no output. Upon failure, the error message text is displayed.
-
-## Next steps
-- Understand more about [Azure Database for MySQL Server firewall rules](./concepts-firewall-rules.md).
-- [Create and manage Azure Database for MySQL firewall rules using the Azure portal](./howto-manage-firewall-using-portal.md).
-- Further secure access to your server by [creating and managing Virtual Network service endpoints and rules using the Azure CLI](howto-manage-vnet-using-cli.md).
mysql Howto Manage Firewall Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-manage-firewall-using-portal.md
- Title: Manage firewall rules - Azure portal - Azure Database for MySQL
-description: Create and manage Azure Database for MySQL firewall rules using the Azure portal
----- Previously updated : 3/18/2020-
-# Create and manage Azure Database for MySQL firewall rules by using the Azure portal
-
-Server-level firewall rules can be used to manage access to an Azure Database for MySQL Server from a specified IP address or a range of IP addresses.
-
-Virtual Network (VNet) rules can also be used to secure access to your server. Learn more about [creating and managing Virtual Network service endpoints and rules using the Azure portal](howto-manage-vnet-using-portal.md).
-
-## Create a server-level firewall rule in the Azure portal
-
-1. On the MySQL server page, under the **Settings** heading, click **Connection Security** to open the Connection Security page for the Azure Database for MySQL.
-
- :::image type="content" source="./media/howto-manage-firewall-using-portal/1-connection-security.png" alt-text="Azure portal - click Connection security":::
-
-2. Click **Add My IP** on the toolbar. This automatically creates a firewall rule with the public IP address of your computer, as perceived by the Azure system.
-
- :::image type="content" source="./media/howto-manage-firewall-using-portal/2-add-my-ip.png" alt-text="Azure portal - click Add My IP":::
-
-3. Verify your IP address before saving the configuration. In some situations, the IP address observed by Azure portal differs from the IP address used when accessing the internet and Azure servers. Therefore, you may need to change the Start IP and End IP to make the rule function as expected.
-
- Use a search engine or other online tool to check your own IP address. For example, search "what is my IP address".
-
-4. Add additional address ranges. In the firewall rules for the Azure Database for MySQL, you can specify a single IP address or a range of addresses. If you want to limit the rule to a single IP address, type the same address in the Start IP and End IP fields. Opening the firewall enables administrators, users, and applications to access any database on the MySQL server to which they have valid credentials.
-
- :::image type="content" source="./media/howto-manage-firewall-using-portal/4-specify-addresses.png" alt-text="Azure portal - firewall rules":::
-
-5. Click **Save** on the toolbar to save this server-level firewall rule. Wait for the confirmation that the update to the firewall rules is successful.
-
- :::image type="content" source="./media/howto-manage-firewall-using-portal/5-save-firewall-rule.png" alt-text="Azure portal - click Save":::
-
-## Connecting from Azure
-To allow applications from Azure to connect to your Azure Database for MySQL server, Azure connections must be enabled, for example, to host an Azure Web Apps application, to run an application in an Azure VM, or to connect from an Azure Data Factory data management gateway. The resources don't need to be in the same Virtual Network (VNet) or resource group for the firewall rule to enable those connections. When an application from Azure attempts to connect to your database server, the firewall verifies that Azure connections are allowed. There are a couple of methods to enable these types of connections. A firewall setting with a starting and ending address equal to 0.0.0.0 indicates these connections are allowed. Alternatively, you can set the **Allow access to Azure services** option to **ON** in the portal from the **Connection security** pane and hit **Save**. If the connection attempt isn't allowed, the request doesn't reach the Azure Database for MySQL server.
-
-> [!IMPORTANT]
-> This option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
->
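-
-For scripted setups, the same rule can be created with the Azure CLI, mirroring the example in the CLI how-to article (the server and resource group names are placeholders):
-
-```azurecli
-az mysql server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver --name "AllowAllWindowsAzureIps" --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
-```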
-
-## Manage existing server-level firewall rules by using the Azure portal
-Repeat the steps to manage the firewall rules.
-* To add the current computer, click **+ Add My IP**. Click **Save** to save the changes.
-* To add additional IP addresses, type in the **RULE NAME**, **START IP**, and **END IP**. Click **Save** to save the changes.
-* To modify an existing rule, click any of the fields in the rule, and then modify. Click **Save** to save the changes.
-* To delete an existing rule, click the ellipsis […], and then click **Delete**. Click **Save** to save the changes.
--
-## Next steps
-- Similarly, you can script to [Create and manage Azure Database for MySQL firewall rules using Azure CLI](howto-manage-firewall-using-cli.md).
-- Further secure access to your server by [creating and managing Virtual Network service endpoints and rules using the Azure portal](howto-manage-vnet-using-portal.md).
-- For help in connecting to an Azure Database for MySQL server, see [Connection libraries for Azure Database for MySQL](./concepts-connection-libraries.md).
mysql Howto Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-manage-vnet-using-cli.md
- Title: Manage VNet endpoints - Azure CLI - Azure Database for MySQL
-description: This article describes how to create and manage Azure Database for MySQL VNet service endpoints and rules using Azure CLI command line.
----- Previously updated : 02/10/2022 --
-# Create and manage Azure Database for MySQL VNet service endpoints using Azure CLI
-
-Virtual Network (VNet) service endpoints and rules extend the private address space of a Virtual Network to your Azure Database for MySQL server. Using convenient Azure CLI commands, you can create, update, delete, list, and show VNet service endpoints and rules to manage your server. For an overview of Azure Database for MySQL VNet service endpoints, including limitations, see [Azure Database for MySQL Server VNet service endpoints](concepts-data-access-and-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for MySQL.
---
-> [!NOTE]
-> Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
-> In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, please create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for MySQL server.
-
-## Configure VNet service endpoints for Azure Database for MySQL
-
-The [az network vnet](/cli/azure/network/vnet) commands are used to configure Virtual Networks.
-
-If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. The account must have the necessary permissions to create a virtual network and service endpoint.
-Service endpoints can be configured on virtual networks independently, by a user with write access to the virtual network.
-
-To secure Azure service resources to a VNet, the user must have permission to "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/" for the subnets being added. This permission is included in the built-in service administrator roles by default and can be modified by creating custom roles.
-
-Learn more about [built-in roles](../role-based-access-control/built-in-roles.md) and assigning specific permissions to [custom roles](../role-based-access-control/custom-roles.md).
-
-VNets and Azure service resources can be in the same or different subscriptions. If the VNet and Azure service resources are in different subscriptions, the resources should be under the same Active Directory (AD) tenant. Ensure that both subscriptions have the **Microsoft.Sql** and **Microsoft.DBforMySQL** resource providers registered. For more information, see [resource provider registration][resource-manager-portal].
-
-> [!IMPORTANT]
-> It is highly recommended to read this article about service endpoint configurations and considerations before running the sample script below, or configuring service endpoints. **Virtual Network service endpoint:** A [Virtual Network service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) is a subnet whose property values include one or more formal Azure service type names. VNet service endpoints use the service type name **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure SQL Database, Azure Database for PostgreSQL, and Azure Database for MySQL services. It's important to note that applying the **Microsoft.Sql** service tag to a VNet service endpoint configures service endpoint traffic for all Azure Database services on the subnet, including Azure SQL Database, Azure Database for PostgreSQL, and Azure Database for MySQL servers.
-
-## Sample script
--
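-The original sample script isn't reproduced here. A hedged sketch of the typical sequence, with placeholder resource names, might look like the following:
-
-```azurecli
-# Create a VNet and a subnet with the Microsoft.Sql service endpoint enabled.
-az network vnet create --resource-group myresourcegroup --name myVNet --address-prefixes 10.0.0.0/16
-az network vnet subnet create --resource-group myresourcegroup --vnet-name myVNet --name mySubnet --address-prefixes 10.0.0.0/24 --service-endpoints Microsoft.Sql
-
-# Create a VNet rule on the MySQL server that references the subnet.
-az mysql server vnet-rule create --resource-group myresourcegroup --server-name mydemoserver --name myVNetRule --vnet-name myVNet --subnet mySubnet
-```
-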
-### Run the script
--
-## Clean up resources
--
-```azurecli
-az group delete --name $resourceGroup
-```
-
-<!-- Link references, to text, Within this same GitHub repo. -->
-[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
mysql Howto Manage Vnet Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-manage-vnet-using-portal.md
- Title: Manage VNet endpoints - Azure portal - Azure Database for MySQL
-description: Create and manage Azure Database for MySQL VNet service endpoints and rules using the Azure portal
----- Previously updated : 3/18/2020-
-# Create and manage Azure Database for MySQL VNet service endpoints and VNet rules by using the Azure portal
-
-Virtual Network (VNet) service endpoints and rules extend the private address space of a Virtual Network to your Azure Database for MySQL server. For an overview of Azure Database for MySQL VNet service endpoints, including limitations, see [Azure Database for MySQL Server VNet service endpoints](concepts-data-access-and-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for MySQL.
-
-> [!NOTE]
-> Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
-> In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, please create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for MySQL server.
--
-## Create a VNet rule and enable service endpoints in the Azure portal
-
-1. On the MySQL server page, under the Settings heading, click **Connection Security** to open the Connection Security pane for Azure Database for MySQL.
-
-2. Ensure that the Allow access to Azure services control is set to **OFF**.
-
-> [!Important]
-> If you leave the control set to ON, your Azure MySQL Database server accepts communication from any subnet. From a security point of view, leaving it set to ON might grant excessive access. The Microsoft Azure Virtual Network service endpoint feature, in coordination with the virtual network rule feature of Azure Database for MySQL, can together reduce your security surface area.
-
-3. Next, click **+ Add existing virtual network**. If you do not have an existing VNet, you can click **+ Create new virtual network** to create one. See [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
-
- :::image type="content" source="./media/howto-manage-vnet-using-portal/1-connection-security.png" alt-text="Azure portal - click Connection security":::
-
-4. Enter a VNet rule name, select the subscription, virtual network, and subnet name, and then click **Enable**. This automatically enables VNet service endpoints on the subnet using the **Microsoft.Sql** service tag.
-
- :::image type="content" source="./media/howto-manage-vnet-using-portal/2-configure-vnet.png" alt-text="Azure portal - configure VNet":::
-
- The account must have the necessary permissions to create a virtual network and service endpoint.
-
- Service endpoints can be configured on virtual networks independently, by a user with write access to the virtual network.
-
- To secure Azure service resources to a VNet, the user must have permission to "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/" for the subnets being added. This permission is included in the built-in service administrator roles, by default and can be modified by creating custom roles.
-
- Learn more about [built-in roles](../role-based-access-control/built-in-roles.md) and assigning specific permissions to [custom roles](../role-based-access-control/custom-roles.md).
-
- VNets and Azure service resources can be in the same or different subscriptions. If the VNet and Azure service resources are in different subscriptions, the resources should be under the same Active Directory (AD) tenant. Ensure that both subscriptions have the **Microsoft.Sql** and **Microsoft.DBforMySQL** resource providers registered. For more information, see [resource provider registration][resource-manager-portal].
-
- > [!IMPORTANT]
- > It is highly recommended to read this article about service endpoint configurations and considerations before configuring service endpoints. **Virtual Network service endpoint:** A [Virtual Network service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) is a subnet whose property values include one or more formal Azure service type names. VNet service endpoints use the service type name **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure SQL Database, Azure Database for PostgreSQL, and Azure Database for MySQL services. It's important to note that applying the **Microsoft.Sql** service tag to a VNet service endpoint configures service endpoint traffic for all Azure Database services on the subnet, including Azure SQL Database, Azure Database for PostgreSQL, and Azure Database for MySQL servers.
- >
-
-5. Once enabled, click **OK** and you will see that VNet service endpoints are enabled along with a VNet rule.
-
- :::image type="content" source="./media/howto-manage-vnet-using-portal/3-vnet-service-endpoints-enabled-vnet-rule-created.png" alt-text="VNet service endpoints enabled and VNet rule created":::
-
-## Next steps
-- Similarly, you can script to [Enable VNet service endpoints and create a VNET rule for Azure Database for MySQL using Azure CLI](howto-manage-vnet-using-cli.md).
-- For help in connecting to an Azure Database for MySQL server, see [Connection libraries for Azure Database for MySQL](./concepts-connection-libraries.md)
-
-<!-- Link references, to text, Within this same GitHub repo. -->
-[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
mysql Howto Migrate Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-migrate-online.md
- Title: Minimal-downtime migration - Azure Database for MySQL
-description: This article describes how to perform a minimal-downtime migration of a MySQL database to Azure Database for MySQL.
------ Previously updated : 6/19/2021--
-# Minimal-downtime migration to Azure Database for MySQL
--
-You can perform MySQL migrations to Azure Database for MySQL with minimal downtime by using Data-in replication, which limits the amount of downtime that is incurred by the application.
-
-You can also refer to the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide) for detailed information and use cases about migrating databases to Azure Database for MySQL. It provides guidance to help you successfully plan and execute a MySQL migration to Azure.
-
-## Overview
-
-Using Data-in replication, you can configure the source as your primary and the target as your replica, so that there's continuous syncing of any new transactions to Azure while the application remains running. After the data catches up on the target Azure side, you stop the application for a brief moment (minimum downtime), wait for the last batch of data (from the time you stop the application until the application is effectively unavailable to take any new traffic) to catch up in the target, and then update your connection string to point to Azure. When you're finished, your application will be live on Azure!
-
-## Next steps
-
-- For more information about migrating databases to Azure Database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
mysql Howto Migrate Single Flexible Minimum Downtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-migrate-single-flexible-minimum-downtime.md
- Title: "Tutorial: Migrate Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server with minimal downtime"
-description: This article describes how to perform a minimal-downtime migration of Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server.
----- Previously updated : 06/18/2021--
-# Tutorial: Migrate Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server with minimal downtime
-
-You can migrate an instance of Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server with minimum downtime to your applications by using a combination of open-source tools such as mydumper/myloader and Data-in replication.
-
-> [!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-
-Data-in replication is a technique that replicates data changes from the source server to the destination server based on the binary log file position method. In this scenario, the MySQL instance operating as the source (on which the database changes originate) writes updates and changes as "events" to the binary log. The information in the binary log is stored in different logging formats according to the database changes being recorded. Replicas are configured to read the binary log from the source and to execute the events in the binary log on the replica's local database.
-
-If you set up Data-in replication to synchronize data from one instance of Azure Database for MySQL to another, you can do a selective cutover of your applications from the primary (or source database) to the replica (or target database).
-
-In this tutorial, you'll use mydumper/myloader and Data-in replication to migrate a sample database ([classicmodels](https://www.mysqltutorial.org/mysql-sample-database.aspx)) from an instance of Azure Database for MySQL - Single Server to an instance of Azure Database for MySQL - Flexible Server, and then synchronize data.
-
-In this tutorial, you learn how to:
-
-* Configure Network Settings for Data-in replication for different scenarios.
-* Configure Data-in replication between the primary and replica.
-* Test the replication.
-* Cutover to complete the migration.
-
-## Prerequisites
-
-To complete this tutorial, you need:
-
-* An instance of Azure Database for MySQL Single Server running version 5.7 or 8.0.
- > [!Note]
- > If you're running Azure Database for MySQL Single Server version 5.6, upgrade your instance to 5.7 and then configure Data-in replication. To learn more, see [Major version upgrade in Azure Database for MySQL - Single Server](how-to-major-version-upgrade.md).
-* An instance of Azure Database for MySQL Flexible Server. For more information, see the article [Create an instance in Azure Database for MySQL Flexible Server](./flexible-server/quickstart-create-server-portal.md).
- > [!Note]
- > Configuring Data-in replication for zone redundant high availability servers is not supported. If you would like to have zone redundant HA for your target server, then perform these steps:
- >
- > 1. Create the server with Zone redundant HA enabled
- > 2. Disable HA
- > 3. Follow the article to setup data-in replication
- > 4. Post cutover remove the Data-in replication configuration
- > 5. Enable HA
- >
- > *Make sure that **[GTID_Mode](./concepts-read-replicas.md#global-transaction-identifier-gtid)** has the same setting on the source and target servers.*
-
-* To connect and create a database using MySQL Workbench. For more information, see the article [Use MySQL Workbench to connect and query data](./flexible-server/connect-workbench.md).
-* To ensure that you have an Azure VM running Linux in the same region (or on the same VNet, with private access) as your source and target databases.
-* To install mysql client or MySQL Workbench (the client tools) on your Azure VM. Ensure that you can connect to both the primary and replica server. For the purposes of this article, mysql client is installed.
-* To install mydumper/myloader on your Azure VM. For more information, see the article [mydumper/myloader](concepts-migrate-mydumper-myloader.md).
-* To download and run the sample database script for the [classicmodels](https://www.mysqltutorial.org/wp-content/uploads/2018/03/mysqlsampledatabase.zip) database on the source server.
-* Configure [binlog_expire_logs_seconds](./concepts-server-parameters.md#binlog_expire_logs_seconds) on the source server to ensure that binlogs aren't purged before the replica commits the changes. After a successful cutover, you can reset the value (see the sketch after this list).
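-
-  A hedged Azure CLI sketch of setting this parameter (the value and resource names are placeholders):
-
-  ```azurecli
-  az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver --name binlog_expire_logs_seconds --value 86400
-  ```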
-
-## Configure networking requirements
-
-To configure Data-in replication, you need to ensure that the target can connect to the source over port 3306. Based on the type of endpoint set up on the source, perform the appropriate steps below.
-
-* If a public endpoint is enabled on the source, then ensure that the target can connect to the source by enabling "Allow access to Azure services" in the firewall rule. To learn more, see [Firewall rules - Azure Database for MySQL](./concepts-firewall-rules.md#connecting-from-azure).
-* If a private endpoint and "[Deny public access](concepts-data-access-security-private-link.md#deny-public-access-for-azure-database-for-mysql)" are enabled on the source, then install the private link in the same VNet that hosts the target. To learn more, see [Private Link - Azure Database for MySQL](concepts-data-access-security-private-link.md).
-
-## Configure Data-in replication
-
-To configure Data-in replication, perform the following steps:
-
-1. Sign in to the Azure VM on which you installed the mysql client tool.
-
-2. Connect to the source and target using the mysql client tool.
-
-3. Use the mysql client tool to determine whether log_bin is enabled on the source by running the following command:
-
- ```sql
- SHOW VARIABLES LIKE 'log_bin';
- ```
-
- > [!Note]
- > With Azure Database for MySQL Single Server with large storage, which supports up to 16 TB, this parameter is enabled by default.
-
- > [!Tip]
- > With Azure Database for MySQL Single Server, which supports up to 4 TB, this parameter isn't enabled by default. However, if you promote a [read replica](howto-read-replicas-portal.md) for the source server and then delete the read replica, the parameter will be set to ON.
-
-4. Based on the SSL enforcement for the source server, create a user in the source server with the replication permission by running the appropriate command.
-
- If you're using SSL, run the following command:
-
- ```sql
- CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';
- GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%' REQUIRE SSL;
- ```
-
- If you're not using SSL, run the following command:
-
- ```sql
- CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';
- GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%';
- ```
-
-5. To back up the database using mydumper, run the following command on the Azure VM where you installed mydumper/myloader:
-
- ```bash
- $ mydumper --host=<primary_server>.mysql.database.azure.com --user=<username>@<primary_server> --password=<Password> --outputdir=./backup --rows=100 -G -E -R -z --trx-consistency-only --compress --build-empty-files --threads=16 --compress-protocol --ssl --regex '^(classicmodels\.)' -L mydumper-logs.txt
- ```
-
- > [!Tip]
- > The option **--trx-consistency-only** is required for transactional consistency while taking the backup.
- >
- > * It's the mydumper equivalent of mysqldump's --single-transaction.
- > * Useful if all your tables are InnoDB.
- > * The "main" thread only needs to hold the global lock until the "dump" threads can start a transaction.
- > * Offers the shortest duration of global locking.
-
-
- The variables in this command are explained below:
-
- * **--host:** Name of the primary server
- * **--user:** Name of a user (in the format username@servername, since the primary server is running Azure Database for MySQL - Single Server). You can use the server admin or a user with SELECT and RELOAD permissions.
- * **--password:** Password of the user above
-
- For more information about using mydumper, see [mydumper/myloader](concepts-migrate-mydumper-myloader.md)
-
-6. Read the metadata file to determine the binary log file name and offset by running the following command:
-
- ```bash
- $ cat ./backup/metadata
- ```
-
- In this command, **./backup** refers to the output directory used in the command in the previous step.
-
- The results should appear as shown in the following image:
-
- :::image type="content" source="./media/howto-migrate-single-flexible-minimum-downtime/metadata.png" alt-text="Contents of the metadata file, including the binary log file name and position":::
-
- Make sure to note the binary file name for use in later steps.
-
-7. Restore the database using myloader by running the following command:
-
- ```bash
- $ myloader --host=<servername>.mysql.database.azure.com --user=<username> --password=<Password> --directory=./backup --queries-per-transaction=100 --threads=16 --compress-protocol --ssl --verbose=3 -e 2>myloader-logs.txt
- ```
-
- The variables in this command are explained below:
-
- * **--host:** Name of the replica server
- * **--user:** Name of a user. You can use the server admin or a user with read/write permission capable of restoring the schemas and data to the database
- * **--password:** Password for the user above
-
-8. Depending on the SSL enforcement on the primary server, connect to the replica server using the mysql client tool and perform the following steps.
-
- * If SSL enforcement is enabled, then:
-
- i. Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem).
-
- ii. Open the file in Notepad and paste the contents into the section "PLACE YOUR PUBLIC KEY CERTIFICATE'S CONTENT HERE".
-
- ```sql
- SET @cert = '--BEGIN CERTIFICATE--
- PLACE YOUR PUBLIC KEY CERTIFICATE'S CONTENT HERE
- --END CERTIFICATE--'
- ```
-
- iii. To configure Data-in replication, run the following command:
-
- ```sql
- CALL mysql.az_replication_change_master('<Primary_server>.mysql.database.azure.com', '<username>@<primary_server>', '<Password>', 3306, '<File_Name>', <Position>, @cert);
- ```
-
- > [!Note]
- > Determine the position and file name from the information obtained in step 6.
-
- * If SSL enforcement isn't enabled, then run the following command:
-
- ```sql
- CALL mysql.az_replication_change_master('<Primary_server>.mysql.database.azure.com', '<username>@<primary_server>', '<Password>', 3306, '<File_Name>', <Position>, '');
- ```
-
-9. To start replication from the replica server, call the following stored procedure.
-
- ```sql
- call mysql.az_replication_start;
- ```
-
-10. To check the replication status, on the replica server, run the following command:
-
- ```sql
- show slave status \G
- ```
-
- > [!Note]
- > If you're using MySQL Workbench, the \G modifier is not required.
-
- If the states of *Slave_IO_Running* and *Slave_SQL_Running* are Yes and the value of *Seconds_Behind_Master* is 0, then replication is working well. *Seconds_Behind_Master* indicates how far behind the replica is; a value other than 0 means the replica is still processing updates.
-
-## Testing the replication (optional)
-
-To confirm that Data-in replication is working properly, you can verify that the changes to the tables in primary were replicated to the replica.
-
-1. Identify a table to use for testing, for example the Customers table, and then confirm that the number of entries it contains is the same on the primary and replica servers by running the following command on each:
-
- ```sql
- select count(*) from customers;
- ```
-
-2. Make a note of the entry count for later comparison.
-
- To test replication, try adding some data to the customers table on the primary server and then verify that the new data is replicated. In this case, you'll add two rows to a table on the primary server, and then confirm that they are replicated on the replica server.
-
-3. In the Customers table on the primary server, insert rows by running the following command:
-
- ```sql
- insert into `customers`(`customerNumber`,`customerName`,`contactLastName`,`contactFirstName`,`phone`,`addressLine1`,`addressLine2`,`city`,`state`,`postalCode`,`country`,`salesRepEmployeeNumber`,`creditLimit`) values
- (<ID>,'name1','name2','name3 ','11.22.5555','54, Add',NULL,'Add1',NULL,'44000','country',1370,'21000.00');
- ```
-
-4. To check the replication status, run *show slave status \G* to confirm that replication is working as expected.
-
-5. To confirm that the count is the same, on the replica server, run the following command:
-
- ```sql
- select count(*) from customers;
- ```
-
-## Ensure a successful cutover
-
-To ensure a successful cutover, perform the following tasks:
-
-1. Configure the appropriate server-level firewall and virtual network rules to connect to the target server. You can compare the firewall rules for the [source](howto-manage-firewall-using-portal.md) and [target](./flexible-server/how-to-manage-firewall-portal.md#create-a-firewall-rule-after-server-is-created) from the portal.
-2. Configure appropriate logins and database-level permissions on the target server. You can run `SELECT * FROM mysql.user;` on the source and target servers to compare.
-3. Make sure that all the incoming connections to Azure Database for MySQL Single Server are stopped.
- > [!Tip]
- > You can set the Azure Database for MySQL Single Server to read only.
-4. Ensure that the replica has caught up with the primary by running *show slave status \G* and confirming that the value for the *Seconds_Behind_Master* parameter is 0.
-5. Redirect clients and client applications to the target instance of Azure Database for MySQL Flexible Server.
-6. Perform the final cutover by running the `mysql.az_replication_stop` stored procedure, which will stop replication from the replica server.
-7. Call `mysql.az_replication_remove_master` to remove the Data-in replication configuration, as shown in the sketch after this list.
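-
-Taken together, the final two cutover steps run on the target (replica) server look like this sketch:
-
-```sql
--- Run on the replica once applications have been redirected to the target.
-CALL mysql.az_replication_stop;
-CALL mysql.az_replication_remove_master;
-```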
-
-At this point, your applications are connected to the new Azure Database for MySQL Flexible Server, and changes in the source will no longer replicate to the target.
-
-## Next steps
-
-* Learn more about Data-in replication: [Replicate data into Azure Database for MySQL Flexible Server](flexible-server/concepts-data-in-replication.md) and [Configure Azure Database for MySQL Flexible Server Data-in replication](./flexible-server/how-to-data-in-replication.md)
-* Learn more about [troubleshooting common errors in Azure Database for MySQL](howto-troubleshoot-common-errors.md).
-* Learn more about [migrating MySQL to Azure Database for MySQL offline using Azure Database Migration Service](../dms/tutorial-mysql-azure-mysql-offline-portal.md).
mysql Howto Move Regions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-move-regions-portal.md
- Title: Move Azure regions - Azure portal - Azure Database for MySQL
-description: Move an Azure Database for MySQL server from one Azure region to another using a read replica and the Azure portal.
------ Previously updated : 06/26/2020
-#Customer intent: As an Azure service administrator, I want to move my service resources to another Azure region.
--
-# Move an Azure Database for MySQL server to another region by using the Azure portal
--
-There are various scenarios for moving an existing Azure Database for MySQL server from one region to another. For example, you might want to move a production server to another region as part of your disaster recovery planning.
-
-You can use an Azure Database for MySQL [cross-region read replica](concepts-read-replicas.md#cross-region-replication) to complete the move to another region. To do so, first create a read replica in the target region. Next, stop replication to the read replica server to make it a standalone server that accepts both read and write traffic.
-
-> [!NOTE]
-> This article focuses on moving your server to a different region. If you want to move your server to a different resource group or subscription, refer to the [move](../azure-resource-manager/management/move-resource-group-and-subscription.md) article.
-
-## Prerequisites
-- The read replica feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers.
-
-- Make sure that your Azure Database for MySQL source server is in the Azure region that you want to move from.
-
-## Prepare to move
-
-To create a cross-region read replica server in the target region using the Azure portal, use the following steps:
-
-1. Sign into the [Azure portal](https://portal.azure.com/).
-1. Select the existing Azure Database for MySQL server that you want to use as the source server. This action opens the **Overview** page.
-1. Select **Replication** from the menu, under **SETTINGS**.
-1. Select **Add Replica**.
-1. Enter a name for the replica server.
-1. Select the location for the replica server. The default location is the same as the source server's. Verify that you've selected the target location where you want the replica to be deployed.
-1. Select **OK** to confirm creation of the replica. During replica creation, data is copied from the source server to the replica. Create time may last several minutes or more, in proportion to the size of the source server.
-
->[!NOTE]
-> When you create a replica, it doesn't inherit the VNet service endpoints of the source server. These rules must be set up independently for the replica.
-
-## Move
-
-> [!IMPORTANT]
-> The standalone server can't be made into a replica again.
-> Before you stop replication on a read replica, ensure the replica has all the data that you require.
-
-Stopping replication to the replica server causes it to become a standalone server. To stop replication to the replica from the Azure portal, use the following steps:
-
-1. Once the replica has been created, locate and select your Azure Database for MySQL source server.
-1. Select **Replication** from the menu, under **SETTINGS**.
-1. Select the replica server.
-1. Select **Stop replication**.
-1. Confirm you want to stop replication by clicking **OK**.
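-
-If you prefer scripting, the same operation is available through the Azure CLI (the server and resource group names are placeholders):
-
-```azurecli
-az mysql server replica stop --resource-group myresourcegroup --name mydemoreplicaserver
-```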
-
-## Clean up source server
-
-You may want to delete the source Azure Database for MySQL server. To do so, use the following steps:
-
-1. Once the replica has been created, locate and select your Azure Database for MySQL source server.
-1. In the **Overview** window, select **Delete**.
-1. Type in the name of the source server to confirm you want to delete.
-1. Select **Delete**.
-
-## Next steps
-
-In this tutorial, you moved an Azure Database for MySQL server from one region to another by using the Azure portal and then cleaned up the unneeded source resources.
-
-- Learn more about [read replicas](concepts-read-replicas.md)
-- Learn more about [managing read replicas in the Azure portal](howto-read-replicas-portal.md)
-- Learn more about [business continuity](concepts-business-continuity.md) options
mysql Howto Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-read-replicas-cli.md
- Title: Manage read replicas - Azure CLI, REST API - Azure Database for MySQL
-description: Learn how to set up and manage read replicas in Azure Database for MySQL using the Azure CLI or REST API.
----- Previously updated : 06/17/2020 ---
-# How to create and manage read replicas in Azure Database for MySQL using the Azure CLI and REST API
--
-In this article, you will learn how to create and manage read replicas in the Azure Database for MySQL service using the Azure CLI and REST API. To learn more about read replicas, see the [overview](concepts-read-replicas.md).
-
-## Azure CLI
-You can create and manage read replicas using the Azure CLI.
-
-### Prerequisites
-- [Install Azure CLI 2.0](/cli/azure/install-azure-cli)
-- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md) that will be used as the source server.
-
-> [!IMPORTANT]
-> The read replica feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers.
-
-### Create a read replica
-
-> [!IMPORTANT]
-> If your source server has no existing replica servers, the source server might need a restart to prepare itself for replication, depending on the storage used (v1/v2). Plan for the server restart and perform this operation during off-peak hours. See [Source Server restart](./concepts-read-replicas.md#source-server-restart) for more details.
->
-> If GTID is enabled on a primary server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID-based replication. To learn more, refer to [Global transaction identifier (GTID)](concepts-read-replicas.md#global-transaction-identifier-gtid).
-
-A read replica server can be created using the following command:
-
-```azurecli-interactive
-az mysql server replica create --name mydemoreplicaserver --source-server mydemoserver --resource-group myresourcegroup
-```
-
-The `az mysql server replica create` command requires the following parameters:
-
-| Setting | Example value | Description  |
-| --- | --- | --- |
-| resource-group | myresourcegroup | The resource group where the replica server is created. |
-| name | mydemoreplicaserver | The name of the new replica server that is created. |
-| source-server | mydemoserver | The name or ID of the existing source server to replicate from. |
-
-To create a cross-region read replica, use the `--location` parameter. The following CLI example creates the replica in West US.
-
-```azurecli-interactive
-az mysql server replica create --name mydemoreplicaserver --source-server mydemoserver --resource-group myresourcegroup --location westus
-```
-
-> [!NOTE]
-> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
-
-> [!NOTE]
-> * The `az mysql server replica create` command has a `--sku-name` argument that lets you specify the SKU (`{pricing_tier}_{compute generation}_{vCores}`) when you create a replica using the Azure CLI. A sketch follows this note. </br>
-> * The primary server and read replica must be in the same pricing tier (General Purpose or Memory Optimized). </br>
-> * The replica server's configuration can also be changed after it has been created. We recommend keeping the replica server's configuration at values equal to or greater than the source's to ensure the replica is able to keep up with the master.
--
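-For example, the following sketch specifies a hypothetical `GP_Gen5_4` SKU value when creating the replica (assuming that SKU is valid for your source server's tier and region):
-
-```azurecli-interactive
-az mysql server replica create --name mydemoreplicaserver --source-server mydemoserver --resource-group myresourcegroup --sku-name GP_Gen5_4
-```
-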
-### List replicas for a source server
-
-To view all replicas for a given source server, run the following command:
-
-```azurecli-interactive
-az mysql server replica list --server-name mydemoserver --resource-group myresourcegroup
-```
-
-The `az mysql server replica list` command requires the following parameters:
-
-| Setting | Example value | Description  |
-| --- | --- | --- |
-| resource-group | myresourcegroup | The resource group of the source server. |
-| server-name | mydemoserver | The name or ID of the source server. |
-
-### Stop replication to a replica server
-
-> [!IMPORTANT]
-> Stopping replication to a server is irreversible. Once replication has stopped between a source and replica, it cannot be undone. The replica server then becomes a standalone server that supports both reads and writes. This server cannot be made into a replica again.
-
-Replication to a read replica server can be stopped using the following command:
-
-```azurecli-interactive
-az mysql server replica stop --name mydemoreplicaserver --resource-group myresourcegroup
-```
-
-The `az mysql server replica stop` command requires the following parameters:
-
-| Setting | Example value | Description  |
-| --- | --- | --- |
-| resource-group |  myresourcegroup |  The resource group where the replica server exists.  |
-| name | mydemoreplicaserver | The name of the replica server to stop replication on. |
-
-### Delete a replica server
-
-To delete a read replica server, run the **[az mysql server delete](/cli/azure/mysql/server)** command.
-
-```azurecli-interactive
-az mysql server delete --resource-group myresourcegroup --name mydemoreplicaserver
-```
-
-### Delete a source server
-
-> [!IMPORTANT]
-> Deleting a source server stops replication to all replica servers and deletes the source server itself. Replica servers become standalone servers that support both reads and writes.
-
-To delete a source server, you can run the **[az mysql server delete](/cli/azure/mysql/server)** command.
-
-```azurecli-interactive
-az mysql server delete --resource-group myresourcegroup --name mydemoserver
-```
--
-## REST API
-You can create and manage read replicas using the [Azure REST API](/rest/api/azure/).
-
-### Create a read replica
-You can create a read replica by using the [create API](/rest/api/mysql/flexibleserver/servers/create):
-
-```http
-PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{replicaName}?api-version=2017-12-01
-```
-
-```json
-{
- "location": "southeastasia",
- "properties": {
- "createMode": "Replica",
- "sourceServerId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{masterServerName}"
- }
-}
-```
-
-> [!NOTE]
-> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
-
-A replica is created by using the same compute and storage settings as the master. After a replica is created, several settings can be changed independently from the source server: compute generation, vCores, storage, and backup retention period. The pricing tier can also be changed independently, except to or from the Basic tier.
--
-> [!IMPORTANT]
-> Before a source server setting is updated to a new value, update the replica setting to an equal or greater value. This action helps the replica keep up with any changes made to the master.
-
-### List replicas
-You can view the list of replicas of a source server using the [replica list API](/rest/api/mysql/flexibleserver/replicas/list-by-server):
-
-```http
-GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{masterServerName}/Replicas?api-version=2017-12-01
-```
-
-### Stop replication to a replica server
-You can stop replication between a source server and a read replica by using the [update API](/rest/api/mysql/flexibleserver/servers/update).
-
-After you stop replication between a source server and a read replica, it can't be undone. The read replica becomes a standalone server that supports both reads and writes. The standalone server can't be made into a replica again.
-
-```http
-PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{masterServerName}?api-version=2017-12-01
-```
-
-```json
-{
- "properties": {
- "replicationRole":"None"
- }
-}
-```
-
-### Delete a source or replica server
-To delete a source or replica server, you use the [delete API](/rest/api/mysql/flexibleserver/servers/delete):
-
-When you delete a source server, replication to all read replicas is stopped. The read replicas become standalone servers that now support both reads and writes.
-
-```http
-DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{serverName}?api-version=2017-12-01
-```
-
-### Known issue
-
-There are two generations of storage available to servers in the General Purpose and Memory Optimized tiers: general purpose storage v1 (supports up to 4 TB) and general purpose storage v2 (supports up to 16 TB).
-The source server and the replica server must have the same storage type. Because [general purpose storage v2](./concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage) is not available in all regions, make sure you choose a supported replica region when you specify the location with the CLI or REST API for read replica creation. To identify the storage type of your source server, see [How can I determine which storage type my server is running on](./concepts-pricing-tiers.md#how-can-i-determine-which-storage-type-my-server-is-running-on).
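-
-As a quick check, you can inspect the source server's storage profile with `az mysql server show` (a sketch; the `--query` path assumes the single-server output shape):
-
-```azurecli-interactive
-az mysql server show --resource-group myresourcegroup --name mydemoserver --query "storageProfile"
-```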
-
-If you choose a region where you cannot create a read replica for your source server, the deployment will keep running as shown in the figure below and then time out with the error *"The resource provision operation did not complete within the allowed timeout period."*
-
-[ :::image type="content" source="media/howto-read-replicas-cli/replcia-cli-known-issue.png" alt-text="Read replica cli error.":::](media/howto-read-replicas-cli/replcia-cli-known-issue.png#lightbox)
-
-## Next steps
-- Learn more about [read replicas](concepts-read-replicas.md)
mysql Howto Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-read-replicas-portal.md
- Title: Manage read replicas - Azure portal - Azure Database for MySQL
-description: Learn how to set up and manage read replicas in Azure Database for MySQL using the Azure portal.
- Previously updated: 06/17/2020
-# How to create and manage read replicas in Azure Database for MySQL using the Azure portal
--
-In this article, you will learn how to create and manage read replicas in the Azure Database for MySQL service using the Azure portal.
-
-## Prerequisites
-- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md) that will be used as the source server.
-> [!IMPORTANT]
-> The read replica feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers.
-
-## Create a read replica
-
-> [!IMPORTANT]
-> If your source server has no existing replica servers, the source server might need a restart to prepare itself for replication, depending on the storage used (v1/v2). Plan for the server restart and perform this operation during off-peak hours. See [Source Server restart](./concepts-read-replicas.md#source-server-restart) for more details.
->
-> If GTID is enabled on a primary server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID-based replication. To learn more, refer to [Global transaction identifier (GTID)](concepts-read-replicas.md#global-transaction-identifier-gtid).
-
-A read replica server can be created using the following steps:
-
-1. Sign into the [Azure portal](https://portal.azure.com/).
-
-2. Select the existing Azure Database for MySQL server that you want to use as the source server. This action opens the **Overview** page.
-
-3. Select **Replication** from the menu, under **SETTINGS**.
-
-4. Select **Add Replica**.
-
- :::image type="content" source="./media/howto-read-replica-portal/add-replica.png" alt-text="Azure Database for MySQL - Replication":::
-
-5. Enter a name for the replica server.
-
- :::image type="content" source="./media/howto-read-replica-portal/replica-name.png" alt-text="Azure Database for MySQL - Replica name":::
-
-6. Select the location for the replica server. The default location is the same as the source server's.
-
- :::image type="content" source="./media/howto-read-replica-portal/replica-location.png" alt-text="Azure Database for MySQL - Replica location":::
-
- > [!NOTE]
- > To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
-
-7. Select **OK** to confirm creation of the replica.
-
-> [!NOTE]
-> Read replicas are created with the same server configuration as the master. The replica server configuration can be changed after it has been created. The replica server is always created in the same resource group and same subscription as the source server. If you want to create a replica server in a different resource group or different subscription, you can [move the replica server](../azure-resource-manager/management/move-resource-group-and-subscription.md) after creation. We recommend keeping the replica server's configuration at values equal to or greater than the source's to ensure the replica is able to keep up with the master.
-
-Once the replica server has been created, it can be viewed from the **Replication** blade.
-
- :::image type="content" source="./media/howto-read-replica-portal/list-replica.png" alt-text="Azure Database for MySQL - List replicas":::
-
-## Stop replication to a replica server
-
-> [!IMPORTANT]
-> Stopping replication to a server is irreversible. Once replication has stopped between a source and replica, it cannot be undone. The replica server then becomes a standalone server that supports both reads and writes. This server cannot be made into a replica again.
-
-To stop replication between a source and a replica server from the Azure portal, use the following steps:
-
-1. In the Azure portal, select your source Azure Database for MySQL server.
-
-2. Select **Replication** from the menu, under **SETTINGS**.
-
-3. Select the replica server you wish to stop replication for.
-
- :::image type="content" source="./media/howto-read-replica-portal/stop-replication-select.png" alt-text="Azure Database for MySQL - Stop replication select server":::
-
-4. Select **Stop replication**.
-
- :::image type="content" source="./media/howto-read-replica-portal/stop-replication.png" alt-text="Azure Database for MySQL - Stop replication":::
-
-5. Confirm you want to stop replication by clicking **OK**.
-
- :::image type="content" source="./media/howto-read-replica-portal/stop-replication-confirm.png" alt-text="Azure Database for MySQL - Stop replication confirm":::
-
-## Delete a replica server
-
-To delete a read replica server from the Azure portal, use the following steps:
-
-1. In the Azure portal, select your source Azure Database for MySQL server.
-
-2. Select **Replication** from the menu, under **SETTINGS**.
-
-3. Select the replica server you wish to delete.
-
- :::image type="content" source="./media/howto-read-replica-portal/delete-replica-select.png" alt-text="Azure Database for MySQL - Delete replica select server":::
-
-4. Select **Delete replica**.
-
- :::image type="content" source="./media/howto-read-replica-portal/delete-replica.png" alt-text="Azure Database for MySQL - Delete replica":::
-
-5. Type the name of the replica and click **Delete** to confirm deletion of the replica.
-
- :::image type="content" source="./media/howto-read-replica-portal/delete-replica-confirm.png" alt-text="Azure Database for MySQL - Delete replica confirm":::
-
-## Delete a source server
-
-> [!IMPORTANT]
-> Deleting a source server stops replication to all replica servers and deletes the source server itself. Replica servers become standalone servers that support both reads and writes.
-
-To delete a source server from the Azure portal, use the following steps:
-
-1. In the Azure portal, select your source Azure Database for MySQL server.
-
-2. From the **Overview**, select **Delete**.
-
- :::image type="content" source="./media/howto-read-replica-portal/delete-master-overview.png" alt-text="Azure Database for MySQL - Delete master":::
-
-3. Type the name of the source server and click **Delete** to confirm deletion of the source server.
-
- :::image type="content" source="./media/howto-read-replica-portal/delete-master-confirm.png" alt-text="Azure Database for MySQL - Delete master confirm":::
-
-## Monitor replication
-
-1. In the [Azure portal](https://portal.azure.com/), select the replica Azure Database for MySQL server you want to monitor.
-
-2. Under the **Monitoring** section of the sidebar, select **Metrics**:
-
-3. Select **Replication lag in seconds** from the dropdown list of available metrics.
-
- :::image type="content" source="./media/howto-read-replica-portal/monitor-select-replication-lag.png" alt-text="Select Replication lag":::
-
-4. Select the time range you wish to view. The image below selects a 30-minute time range.
-
- :::image type="content" source="./media/howto-read-replica-portal/monitor-replication-lag-time-range.png" alt-text="Select time range":::
-
-5. View the replication lag for the selected time range. The image below displays the last 30 minutes.
-
- :::image type="content" source="./media/howto-read-replica-portal/monitor-replication-lag-time-range-thirty-mins.png" alt-text="Select time range 30 minutes":::
-
-## Next steps
-- Learn more about [read replicas](concepts-read-replicas.md)
mysql Howto Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-read-replicas-powershell.md
- Title: Manage read replicas - Azure PowerShell - Azure Database for MySQL
-description: Learn how to set up and manage read replicas in Azure Database for MySQL using PowerShell.
- Previously updated: 06/17/2020
-# How to create and manage read replicas in Azure Database for MySQL using PowerShell
--
-In this article, you learn how to create and manage read replicas in the Azure Database for MySQL
-service using PowerShell. To learn more about read replicas, see the
-[overview](concepts-read-replicas.md).
-
-## Azure PowerShell
-
-You can create and manage read replicas using PowerShell.
-
-## Prerequisites
-
-To complete this how-to guide, you need:
--- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
- [Azure Cloud Shell](https://shell.azure.com/) in the browser
-- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)
-> [!IMPORTANT]
-> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az
-> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`.
-> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az
-> PowerShell module releases and available natively from within Azure Cloud Shell.
-
-If you choose to use PowerShell locally, connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet.
--
-> [!IMPORTANT]
-> The read replica feature is only available for Azure Database for MySQL servers in the General
-> Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing
-> tiers.
->
-> If GTID is enabled on a primary server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID-based replication. To learn more, refer to [Global transaction identifier (GTID)](concepts-read-replicas.md#global-transaction-identifier-gtid).
-
-### Create a read replica
-
-> [!IMPORTANT]
-> If your source server has no existing replica servers, the source server might need a restart to prepare itself for replication, depending on the storage used (v1/v2). Plan for the server restart and perform this operation during off-peak hours. See [Source Server restart](./concepts-read-replicas.md#source-server-restart) for more details.
-
-A read replica server can be created using the following command:
-
-```azurepowershell-interactive
-Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
- New-AzMySqlReplica -Name mydemoreplicaserver -ResourceGroupName myresourcegroup
-```
-
-The `New-AzMySqlReplica` command requires the following parameters:
-
-| Setting | Example value | Description  |
-| --- | --- | --- |
-| ResourceGroupName |  myresourcegroup |  The resource group where the replica server is created.  |
-| Name | mydemoreplicaserver | The name of the new replica server that is created. |
-
-To create a cross-region read replica, use the **Location** parameter. The following example creates
-a replica in the **West US** region.
-
-```azurepowershell-interactive
-Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
- New-AzMySqlReplica -Name mydemoreplicaserver -ResourceGroupName myresourcegroup -Location westus
-```
-
-> [!NOTE]
-> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
-
-By default, read replicas are created with the same server configuration as the source unless the
-**Sku** parameter is specified.
-
-> [!NOTE]
-> We recommend keeping the replica server's configuration at values equal to or greater than the
-> source's to ensure the replica is able to keep up with the master. A sketch using the **Sku**
-> parameter follows.
-
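-For example, a sketch that pipes the source server into `New-AzMySqlReplica` with a hypothetical `GP_Gen5_4` **Sku** value:
-
-```azurepowershell-interactive
-Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
-  New-AzMySqlReplica -Name mydemoreplicaserver -ResourceGroupName myresourcegroup -Sku GP_Gen5_4
-```
-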
-### List replicas for a source server
-
-To view all replicas for a given source server, run the following command:
-
-```azurepowershell-interactive
-Get-AzMySqlReplica -ResourceGroupName myresourcegroup -ServerName mydemoserver
-```
-
-The `Get-AzMySqlReplica` command requires the following parameters:
-
-| Setting | Example value | Description  |
-| --- | --- | --- |
-| ResourceGroupName | myresourcegroup | The resource group of the source server. |
-| ServerName | mydemoserver | The name or ID of the source server. |
-
-### Delete a replica server
-
-To delete a read replica server, run the `Remove-AzMySqlServer` cmdlet.
-
-```azurepowershell-interactive
-Remove-AzMySqlServer -Name mydemoreplicaserver -ResourceGroupName myresourcegroup
-```
-
-### Delete a source server
-
-> [!IMPORTANT]
-> Deleting a source server stops replication to all replica servers and deletes the source server
-> itself. Replica servers become standalone servers that support both reads and writes.
-
-To delete a source server, you can run the `Remove-AzMySqlServer` cmdlet.
-
-```azurepowershell-interactive
-Remove-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup
-```
-
-### Known issue
-
-There are two generations of storage available to servers in the General Purpose and Memory Optimized tiers: general purpose storage v1 (supports up to 4 TB) and general purpose storage v2 (supports up to 16 TB).
-The source server and the replica server must have the same storage type. Because [general purpose storage v2](./concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage) is not available in all regions, make sure you choose a supported replica region when you specify the location with PowerShell for read replica creation. To identify the storage type of your source server, see [How can I determine which storage type my server is running on](./concepts-pricing-tiers.md#how-can-i-determine-which-storage-type-my-server-is-running-on).
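-
-As a quick check, you can inspect the source server's SKU and storage-related properties (a sketch; exact property names can vary across Az.MySql versions, so wildcards are used):
-
-```azurepowershell-interactive
-Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
-  Select-Object -Property Sku*, Storage*
-```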
-
-If you choose a region where you cannot create a read replica for your source server, the deployment will keep running as shown in the figure below and then time out with the error *"The resource provision operation did not complete within the allowed timeout period."*
-
-[ :::image type="content" source="media/howto-read-replicas-powershell/replcia-ps-known-issue.png" alt-text="Read replica cli error":::](media/howto-read-replicas-powershell/replcia-ps-known-issue.png#lightbox)
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Restart Azure Database for MySQL server using PowerShell](howto-restart-server-powershell.md)
mysql Howto Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-redirection.md
- Title: Connect with redirection - Azure Database for MySQL
-description: This article describes how you can configure your application to connect to Azure Database for MySQL with redirection.
- Previously updated: 6/8/2020
-# Connect to Azure Database for MySQL with redirection
--
-This topic explains how to connect an application to your Azure Database for MySQL server using redirection mode. Redirection aims to reduce network latency between client applications and MySQL servers by allowing applications to connect directly to backend server nodes.
-
-## Before you begin
-Sign in to the [Azure portal](https://portal.azure.com). Create an Azure Database for MySQL server with engine version 5.6, 5.7, or 8.0.
-
-For details, refer to how to create an Azure Database for MySQL server using the [Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md) or [Azure CLI](quickstart-create-mysql-server-database-using-azure-cli.md).
-
-> [!IMPORTANT]
-> Redirection is currently not supported with [Private Link for Azure Database for MySQL](concepts-data-access-security-private-link.md).
-
-## Enable redirection
-
-On your Azure Database for MySQL server, configure the `redirect_enabled` parameter to `ON` to allow connections with redirection mode. To update this server parameter, use the [Azure portal](howto-server-parameters.md) or [Azure CLI](howto-configure-server-parameters-using-cli.md).
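-
-For example, with the Azure CLI, setting the parameter might look like the following sketch (the server and resource group names are placeholders):
-
-```azurecli-interactive
-az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver --name redirect_enabled --value ON
-```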
-
-## PHP
-
-Support for redirection in PHP applications is available through the [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension, developed by Microsoft.
-
-The mysqlnd_azure extension is available for PHP applications through PECL, and we highly recommend installing and configuring the extension through the officially published [PECL package](https://pecl.php.net/package/mysqlnd_azure).
-
-> [!IMPORTANT]
-> Support for redirection in the PHP [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension is currently in preview.
-
-### Redirection logic
-
->[!IMPORTANT]
-> Redirection logic and behavior were updated beginning with version 1.1.0, and **we recommend using version 1.1.0+**.
-
-The redirection behavior is determined by the value of `mysqlnd_azure.enableRedirect`. The table below outlines the behavior of redirection based on the value of this parameter beginning in **version 1.1.0+**.
-
-If you are using an older version of the mysqlnd_azure extension (version 1.0.0-1.0.3), the redirection behavior is determined by the value of `mysqlnd_azure.enabled`. The valid values are `off` (acts similarly to the behavior outlined in the table below) and `on` (acts like `preferred` in the table below).
-
-|**mysqlnd_azure.enableRedirect value**| **Behavior**|
-|-|-|
-|`off` or `0`|Redirection will not be used. |
-|`on` or `1`|- If the connection does not use SSL on the driver side, no connection will be made. The following error will be returned: *"mysqlnd_azure.enableRedirect is on, but SSL option is not set in connection string. Redirection is only possible with SSL."*<br>- If SSL is used on the driver side, but redirection is not supported on the server, the first connection is aborted and the following error is returned: *"Connection aborted because redirection is not enabled on the MySQL server or the network package doesn't meet redirection protocol."*<br>- If the MySQL server supports redirection, but the redirected connection failed for any reason, also abort the first proxy connection. Return the error of the redirected connection.|
-|`preferred` or `2`<br> (default value)|- mysqlnd_azure will use redirection if possible.<br>- If the connection does not use SSL on the driver side, the server does not support redirection, or the redirected connection fails to connect for any non-fatal reason while the proxy connection is still a valid one, it will fall back to the first proxy connection.|
-
-The subsequent sections of the document will outline how to install the `mysqlnd_azure` extension using PECL and set the value of this parameter.
-
-### Ubuntu Linux
-
-#### Prerequisites
-- PHP versions 7.2.15+ and 7.3.2+
-- PHP PEAR
-- php-mysql
-- Azure Database for MySQL server
-1. Install [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) with [PECL](https://pecl.php.net/package/mysqlnd_azure). It is recommended to use version 1.1.0+.
-
- ```bash
- sudo pecl install mysqlnd_azure
- ```
-
-2. Locate the extension directory (`extension_dir`) by running the following command:
-
- ```bash
- php -i | grep "extension_dir"
- ```
-
-3. Change directories to the returned folder and ensure `mysqlnd_azure.so` is located in this folder.
-
-4. Locate the folder for .ini files by running the following command:
-
- ```bash
- php -i | grep "dir for additional .ini files"
- ```
-
-5. Change directories to this returned folder.
-
-6. Create a new .ini file for `mysqlnd_azure`. Make sure its name sorts alphabetically after that of the mysqlnd .ini file, since the modules are loaded in the name order of the .ini files. For example, if the `mysqlnd` .ini is named `10-mysqlnd.ini`, name the mysqlnd_azure .ini `20-mysqlnd-azure.ini`.
-
-7. Within the new .ini file, add the following lines to enable redirection.
-
- ```bash
- extension=mysqlnd_azure
- mysqlnd_azure.enableRedirect = on/off/preferred
- ```
-
-### Windows
-
-#### Prerequisites
-- PHP versions 7.2.15+ and 7.3.2+
-- php-mysql
-- Azure Database for MySQL server
-1. Determine whether you are running an x64 or x86 version of PHP by running the following command:
-
- ```cmd
- php -i | findstr "Thread"
- ```
-
-2. Download the corresponding x64 or x86 version of the [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) DLL from [PECL](https://pecl.php.net/package/mysqlnd_azure) that matches your version of PHP. It is recommended to use version 1.1.0+.
-
-3. Extract the zip file and find the DLL named `php_mysqlnd_azure.dll`.
-
-4. Locate the extension directory (`extension_dir`) by running the below command:
-
- ```cmd
- php -i | find "extension_dir"
- ```
-
-5. Copy the `php_mysqlnd_azure.dll` file into the directory returned in step 4.
-
-6. Locate the PHP folder containing the `php.ini` file using the following command:
-
- ```cmd
- php -i | find "Loaded Configuration File"
- ```
-
-7. Modify the `php.ini` file and add the following extra lines to enable redirection.
-
- Under the Dynamic Extensions section:
- ```cmd
- extension=mysqlnd_azure
- ```
-
- Under the Module Settings section:
- ```cmd
- [mysqlnd_azure]
- mysqlnd_azure.enableRedirect = on/off/preferred
- ```
-
-### Confirm redirection
-
-You can confirm that redirection is configured by using the following sample PHP code. Create a PHP file called `mysqlConnect.php` and paste the code below. Update the server name, username, and password with your own.
-
- ```php
-<?php
-$host = '<yourservername>.mysql.database.azure.com';
-$username = '<yourusername>@<yourservername>';
-$password = '<yourpassword>';
-$db_name = 'testdb';
- echo "mysqlnd_azure.enableRedirect: ", ini_get("mysqlnd_azure.enableRedirect"), "\n";
- $db = mysqli_init();
- //The connection must be configured with SSL for redirection test
- $link = mysqli_real_connect ($db, $host, $username, $password, $db_name, 3306, NULL, MYSQLI_CLIENT_SSL);
- if (!$link) {
- die ('Connect error (' . mysqli_connect_errno() . '): ' . mysqli_connect_error() . "\n");
- }
- else {
- echo $db->host_info, "\n"; //if redirection succeeds, the host_info will differ from the hostname you used to connect
- $res = $db->query('SHOW TABLES;'); //test query with the connection
- print_r ($res);
- $db->close();
- }
-?>
- ```
-
-## Next steps
-For more information about connection strings, see [Connection Strings](howto-connection-string.md).
mysql Howto Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-restart-server-cli.md
- Title: Restart server - Azure CLI - Azure Database for MySQL
-description: This article describes how you can restart an Azure Database for MySQL server using the Azure CLI.
- Previously updated: 3/18/2020
-# Restart Azure Database for MySQL server using the Azure CLI
-
-This topic describes how you can restart an Azure Database for MySQL server. You may need to restart your server for maintenance reasons, which causes a short outage as the server performs the operation.
-
-The server restart will be blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores.
-
-The time required to complete a restart depends on the MySQL recovery process. To decrease the restart time, we recommend you minimize the amount of activity occurring on the server prior to the restart.
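-
-Because the restart is blocked while the service is busy, you may want to check the server's state first (a sketch; the `userVisibleState` property assumes the single-server output shape):
-
-```azurecli-interactive
-az mysql server show --resource-group myresourcegroup --name mydemoserver --query "userVisibleState"
-```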
-
-## Prerequisites
-
-To complete this how-to guide:
-- You need an [Azure Database for MySQL server](quickstart-create-server-up-azure-cli.md).
-- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-## Restart the server
-
-Restart the server with the following command:
-
-```azurecli-interactive
-az mysql server restart --name mydemoserver --resource-group myresourcegroup
-```
-
-## Next steps
-
-Learn about [how to set parameters in Azure Database for MySQL](howto-configure-server-parameters-using-cli.md)
mysql Howto Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-restart-server-portal.md
- Title: Restart server - Azure portal - Azure Database for MySQL
-description: This article describes how you can restart an Azure Database for MySQL server using the Azure portal.
- Previously updated: 3/18/2020
-# Restart Azure Database for MySQL server using Azure portal
-
-This topic describes how you can restart an Azure Database for MySQL server. You may need to restart your server for maintenance reasons, which causes a short outage as the server performs the operation.
-
-The server restart will be blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores.
-
-The time required to complete a restart depends on the MySQL recovery process. To decrease the restart time, we recommend you minimize the amount of activity occurring on the server prior to the restart.
-
-## Prerequisites
-To complete this how-to guide, you need:
-- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md)
-## Perform server restart
-
-The following steps restart the MySQL server:
-
-1. In the Azure portal, select your Azure Database for MySQL server.
-
-2. In the toolbar of the server's **Overview** page, click **Restart**.
-
- :::image type="content" source="./media/howto-restart-server-portal/2-server.png" alt-text="Azure Database for MySQL - Overview - Restart button":::
-
-3. Click **Yes** to confirm restarting the server.
-
- :::image type="content" source="./media/howto-restart-server-portal/3-restart-confirm.png" alt-text="Azure Database for MySQL - Restart confirm":::
-
-4. Observe that the server status changes to "Restarting".
-
- :::image type="content" source="./media/howto-restart-server-portal/4-restarting-status.png" alt-text="Azure Database for MySQL - Restart status":::
-
-5. Confirm server restart is successful.
-
- :::image type="content" source="./media/howto-restart-server-portal/5-restart-success.png" alt-text="Azure Database for MySQL - Restart success":::
-
-## Next steps
-
-[Quickstart: Create Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)
mysql Howto Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-restart-server-powershell.md
- Title: Restart server - Azure PowerShell - Azure Database for MySQL
-description: This article describes how you can restart an Azure Database for MySQL server using PowerShell.
- Previously updated: 4/28/2020
-# Restart Azure Database for MySQL server using PowerShell
--
-This topic describes how you can restart an Azure Database for MySQL server. You may need to restart
-your server for maintenance reasons, which causes a short outage during the operation.
-
-The server restart is blocked if the service is busy. For example, the service may be processing a
-previously requested operation such as scaling vCores.
-
-The amount of time required to complete a restart depends on the MySQL recovery process. To reduce
-the restart time, we recommend you minimize the amount of activity occurring on the server before
-the restart.
-
-## Prerequisites
-
-To complete this how-to guide, you need:
--- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
- [Azure Cloud Shell](https://shell.azure.com/) in the browser
-- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)
-> [!IMPORTANT]
-> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az
-> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`.
-> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az
-> PowerShell module releases and available natively from within Azure Cloud Shell.
-
-If you choose to use PowerShell locally, connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet.
--
-## Restart the server
-
-Restart the server with the following command:
-
-```azurepowershell-interactive
-Restart-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Create an Azure Database for MySQL server using PowerShell](quickstart-create-mysql-server-database-using-azure-powershell.md)
mysql Howto Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-restore-dropped-server.md
- Title: Restore a deleted Azure Database for MySQL server
-description: This article describes how to restore a deleted server in Azure Database for MySQL using the Azure portal.
- Previously updated: 10/09/2020
-# Restore a deleted Azure Database for MySQL server
--
-When a server is deleted, the database server backup is retained for up to five days in the service. The database backup can be accessed and restored only from the Azure subscription where the server originally resided. Follow the recommended steps below to recover a deleted MySQL server resource within five days of the server's deletion. These steps work only if the backup for the server is still available and has not been deleted from the system.
-
-## Prerequisites
-To restore a deleted Azure Database for MySQL server, you need the following:
-- Azure subscription name hosting the original server
-- Location where the server was created
-## Steps to restore
-
-1. Go to the [Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_ActivityLog/ActivityLogBlade) from the Monitor blade in the Azure portal.
-
-2. In the Activity Log, click on **Add filter** as shown and set the following filters:
-
- - **Subscription** = Your Subscription hosting the deleted server
- - **Resource Type** = Azure Database for MySQL servers (Microsoft.DBforMySQL/servers)
- - **Operation** = Delete MySQL Server (Microsoft.DBforMySQL/servers/delete)
-
- [![Activity log filtered for delete MySQL server operation](./media/howto-restore-dropped-server/activity-log.png)](./media/howto-restore-dropped-server/activity-log.png#lightbox)
-
 3. Double-click the Delete MySQL Server event, select the **JSON** tab, and note the "resourceId" and "submissionTimestamp" attributes in the JSON output. The resourceId is in the following format: /subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/TargetResourceGroup/providers/Microsoft.DBforMySQL/servers/deletedserver.
-
 4. Go to the [Create Server REST API page](/rest/api/mysql/singleserver/servers(2017-12-01)/create), select the **Try It** tab highlighted in green, and sign in with your Azure account.
-
 5. Provide the resourceGroupName, serverName (the deleted server's name), and subscriptionId derived from the resourceId attribute captured in Step 3. The api-version is pre-populated, as shown in the image.
-
- [![Create server using REST API](./media/howto-restore-dropped-server/create-server-from-rest-api.png)](./media/howto-restore-dropped-server/create-server-from-rest-api.png#lightbox)
-
 6. Scroll down to the Request Body section and paste the following:
-
- ```json
- {
- "location": "Dropped Server Location",
- "properties":
- {
- "restorePointInTime": "submissionTimestamp - 15 minutes",
- "createMode": "PointInTimeRestore",
- "sourceServerId": "resourceId"
- }
- }
- ```
-7. Replace the following values in the above request body (a filled-in sketch follows these steps):
 * "Dropped Server Location" with the Azure region where the deleted server was originally created.
 * "submissionTimestamp" and "resourceId" with the values captured in Step 3.
 * For "restorePointInTime", specify a value of "submissionTimestamp" minus **15 minutes** to ensure the command does not error out.
-
-8. If you see Response Code 201 or 202, the restore request was submitted successfully.
-
-9. The server creation can take time depending on the database size and compute resources provisioned on the original server. The restore status can be monitored from the Activity Log by filtering for:
- - **Subscription** = Your Subscription
- - **Resource Type** = Azure Database for MySQL servers (Microsoft.DBforMySQL/servers)
- - **Operation** = Update MySQL Server Create
-
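-For reference, a filled-in request body might look like the following sketch (all values are hypothetical and must be replaced with your own):
-
-```json
-{
-  "location": "eastus",
-  "properties":
-  {
-    "restorePointInTime": "2020-10-09T11:45:00Z",
-    "createMode": "PointInTimeRestore",
-    "sourceServerId": "/subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/TargetResourceGroup/providers/Microsoft.DBforMySQL/servers/deletedserver"
-  }
-}
-```
-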
-## Next steps
-- If you are trying to restore a server within five days and still receive an error after accurately following the steps discussed earlier, open a support incident for assistance. If you are trying to restore a deleted server after five days, an error is expected since the backup file cannot be found. Do not open a support ticket in this scenario. The support team cannot provide any assistance if the backup is deleted from the system.
-- To prevent accidental deletion of servers, we highly recommend using [Resource Locks](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/preventing-the-disaster-of-accidental-deletion-for-your-mysql/ba-p/825222).
mysql Howto Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-restore-server-cli.md
- Title: Backup and restore - Azure CLI - Azure Database for MySQL
-description: Learn how to backup and restore a server in Azure Database for MySQL by using the Azure CLI.
- Previously updated: 3/27/2020
-# How to back up and restore a server in Azure Database for MySQL using the Azure CLI
--
-Azure Database for MySQL servers are backed up periodically to enable restore features. Using this feature, you can restore the server and all its databases to an earlier point in time, on a new server.
-
-## Prerequisites
-
-To complete this how-to guide:
-- You need an [Azure Database for MySQL server and database](quickstart-create-mysql-server-database-using-azure-cli.md).
-- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-## Set backup configuration
-
-At server creation, you choose between configuring your server for locally redundant or geographically redundant backups.
-
-> [!NOTE]
-> After a server is created, the kind of redundancy it has, geographically redundant vs locally redundant, can't be switched.
->
-
-While creating a server via the `az mysql server create` command, the `--geo-redundant-backup` parameter determines your backup redundancy option. If `Enabled`, geo-redundant backups are taken; if `Disabled`, locally redundant backups are taken.
-
-The backup retention period is set by the parameter `--backup-retention`.
-
-For more information about setting these values during create, see the [Azure Database for MySQL server CLI Quickstart](quickstart-create-mysql-server-database-using-azure-cli.md).
-
-The backup retention period of a server can be changed as follows:
-
-```azurecli-interactive
-az mysql server update --name mydemoserver --resource-group myresourcegroup --backup-retention 10
-```
-
-The preceding example changes the backup retention period of mydemoserver to 10 days.
-
-The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. Point-in-time restore is described further in the next section.
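-
-To check a server's current retention setting, a sketch (the `--query` path assumes the single-server output shape):
-
-```azurecli-interactive
-az mysql server show --resource-group myresourcegroup --name mydemoserver --query "storageProfile.backupRetentionDays"
-```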
-
-## Server point-in-time restore
-You can restore the server to a previous point in time. The restored data is copied to a new server, and the existing server is left as is. For example, if a table is accidentally dropped at noon today, you can restore to the time just before noon. Then, you can retrieve the missing table and data from the restored copy of the server.
-
-To restore the server, use the Azure CLI [az mysql server restore](/cli/azure/mysql/server#az-mysql-server-restore) command.
-
-### Run the restore command
-
-To restore the server, at the Azure CLI command prompt, enter the following command:
-
-```azurecli-interactive
-az mysql server restore --resource-group myresourcegroup --name mydemoserver-restored --restore-point-in-time 2018-03-13T13:59:00Z --source-server mydemoserver
-```
-
-The `az mysql server restore` command requires the following parameters:
-
-| Setting | Suggested value | Description  |
-| --- | --- | --- |
-| resource-group |  myresourcegroup |  The resource group where the source server exists.  |
-| name | mydemoserver-restored | The name of the new server that is created by the restore command. |
-| restore-point-in-time | 2018-03-13T13:59:00Z | Select a point in time to restore to. This date and time must be within the source server's backup retention period. Use the ISO8601 date and time format. For example, you can use your own local time zone, such as `2018-03-13T05:59:00-08:00`. You can also use the UTC Zulu format, for example, `2018-03-13T13:59:00Z`. |
-| source-server | mydemoserver | The name or ID of the source server to restore from. |
-
-When you restore a server to an earlier point in time, a new server is created. The original server and its databases from the specified point in time are copied to the new server.
-
-The location and pricing tier values for the restored server remain the same as the original server.
-
-After the restore process finishes, locate the new server and verify that the data is restored as expected. The new server has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's **Overview** page.
-
-Additionally, after the restore operation finishes, two server parameters are reset to default values (and are not copied over from the primary server):
-* time_zone - This value is set to the DEFAULT value **SYSTEM**
-* event_scheduler - The event_scheduler is set to **OFF** on the restored server
-
-You will need to copy over the values from the primary server and set them on the restored server by reconfiguring the [server parameters](howto-server-parameters.md), as in the sketch below.
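-
-For example, a sketch that re-applies `event_scheduler` on the restored server (the value shown is hypothetical; copy the actual value from the primary server):
-
-```azurecli-interactive
-az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver-restored --name event_scheduler --value ON
-```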
-
-The new server created during a restore does not have the VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server. Firewall rules from the original server are restored.
-
-## Geo restore
-If you configured your server for geographically redundant backups, a new server can be created from the backup of that existing server. This new server can be created in any region where Azure Database for MySQL is available.
-
-To create a server using a geo redundant backup, use the Azure CLI `az mysql server georestore` command.
-
-> [!NOTE]
-> When a server is first created it may not be immediately available for geo restore. It may take a few hours for the necessary metadata to be populated.
->
-
-To geo restore the server, at the Azure CLI command prompt, enter the following command:
-
-```azurecli-interactive
-az mysql server georestore --resource-group myresourcegroup --name mydemoserver-georestored --source-server mydemoserver --location eastus --sku-name GP_Gen5_8
-```
-This command creates a new server called *mydemoserver-georestored* in East US that will belong to *myresourcegroup*. It is a General Purpose, Gen 5 server with 8 vCores. The server is created from the geo-redundant backup of *mydemoserver*, which is also in the resource group *myresourcegroup*.
-
-If you want to create the new server in a different resource group from the existing server, then in the `--source-server` parameter you would qualify the server name as in the following example:
-
-```azurecli-interactive
-az mysql server georestore --resource-group newresourcegroup --name mydemoserver-georestored --source-server "/subscriptions/$<subscription ID>/resourceGroups/$<resource group ID>/providers/Microsoft.DBforMySQL/servers/mydemoserver" --location eastus --sku-name GP_Gen5_8
-
-```
-
-The `az mysql server georestore` command requires the following parameters:
-
-| Setting | Suggested value | Description  |
-| --- | --- | --- |
-|resource-group| myresourcegroup | The name of the resource group the new server will belong to.|
-|name | mydemoserver-georestored | The name of the new server. |
-|source-server | mydemoserver | The name of the existing server whose geo redundant backups are used. |
-|location | eastus | The location of the new server. |
-|sku-name| GP_Gen5_8 | This parameter sets the pricing tier, compute generation, and number of vCores of the new server. GP_Gen5_8 maps to a General Purpose, Gen 5 server with 8 vCores.|
-
-When you create a new server by geo restore, it inherits the same storage size and pricing tier as the source server. These values cannot be changed during creation. After the new server is created, its storage size can be scaled up, as in the sketch below.
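-
-For example, a sketch that scales up the restored server's storage afterward (the 102400-MB value is hypothetical):
-
-```azurecli-interactive
-az mysql server update --resource-group myresourcegroup --name mydemoserver-georestored --storage-size 102400
-```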
-
-After the restore process finishes, locate the new server and verify that the data is restored as expected. The new server has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's **Overview** page.
-
-The new server created during a restore does not have the VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server. Firewall rules from the original server are restored.
-
-## Next steps
-- Learn more about the service's [backups](concepts-backup.md)
-- Learn about [replicas](concepts-read-replicas.md)
-- Learn more about [business continuity](concepts-business-continuity.md) options
mysql Howto Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-restore-server-portal.md
- Title: Backup and restore - Azure portal - Azure Database for MySQL
-description: This article describes how to restore a server in Azure Database for MySQL using the Azure portal.
- Previously updated: 6/30/2020
-# How to back up and restore a server in Azure Database for MySQL using the Azure portal
--
-## Backup happens automatically
-Azure Database for MySQL servers are backed up periodically to enable restore features. Using this feature, you can restore the server and all its databases to an earlier point in time, on a new server.
-
-## Prerequisites
-To complete this how-to guide, you need:
-- An [Azure Database for MySQL server and database](quickstart-create-mysql-server-database-using-azure-portal.md)
-## Set backup configuration
-
-At server creation, in the **Pricing Tier** window, you choose between configuring your server for locally redundant or geographically redundant backups.
-
-> [!NOTE]
-> After a server is created, the kind of redundancy it has, geographically redundant vs locally redundant, can't be switched.
->
-
-While creating a server through the Azure portal, the **Pricing Tier** window is where you select either **Locally Redundant** or **Geographically Redundant** backups for your server. This window is also where you select the **Backup Retention Period** - how long (in days) the server backups are retained.
-
- :::image type="content" source="./media/howto-restore-server-portal/pricing-tier.png" alt-text="Pricing Tier - Choose Backup Redundancy":::
-
-For more information about setting these values during create, see the [Azure Database for MySQL server quickstart](quickstart-create-mysql-server-database-using-azure-portal.md).
-
-The backup retention period can be changed on a server through the following steps:
-1. Sign into the [Azure portal](https://portal.azure.com/).
-2. Select your Azure Database for MySQL server. This action opens the **Overview** page.
-3. Select **Pricing Tier** from the menu, under **SETTINGS**. Using the slider, you can change the **Backup Retention Period** to your preference, between 7 and 35 days.
-In the screenshot below, it has been increased to 34 days.
-
-4. Click **OK** to confirm the change.
-
-The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. Point-in-time restore is described further in the following section.
-
-## Point-in-time restore
-Azure Database for MySQL allows you to restore the server to a point in time, creating a new copy of the server. You can use this new server to recover your data, or have your client applications point to this new server.
-
-For example, if a table was accidentally dropped at noon today, you could restore to the time just before noon and retrieve the missing table and data from that new copy of the server. Point-in-time restore is at the server level, not at the database level.
-
-The following steps restore the sample server to a point-in-time:
-1. In the Azure portal, select your Azure Database for MySQL server.
-
-2. In the toolbar of the server's **Overview** page, select **Restore**.
-
- :::image type="content" source="./media/howto-restore-server-portal/2-server.png" alt-text="Azure Database for MySQL - Overview - Restore button":::
-
-3. Fill out the Restore form with the required information:
-
- :::image type="content" source="./media/howto-restore-server-portal/3-restore.png" alt-text="Azure Database for MySQL - Restore information":::
- - **Restore point**: Select the point-in-time you want to restore to.
- - **Target server**: Provide a name for the new server.
 - **Location**: You cannot select the region. By default, it is the same as the source server's.
 - **Pricing tier**: You cannot change these parameters when doing a point-in-time restore. They are the same as the source server's.
-
-4. Click **OK** to restore the server to the selected point in time.
-
-5. Once the restore finishes, locate the new server that is created to verify the data was restored as expected.
-
-The new server created by point-in-time restore has the same server admin login name and password that was valid for the existing server at the point in time chosen. You can change the password from the new server's **Overview** page.
-
-Additionally, after the restore operation finishes, two server parameters are reset to default values (and are not copied over from the primary server):
-* time_zone - This value is set to the DEFAULT value **SYSTEM**
-* event_scheduler - The event_scheduler is set to **OFF** on the restored server
-
-You will need to copy over the values from the primary server and set them on the restored server by reconfiguring the [server parameters](howto-server-parameters.md).
-
-The new server created during a restore does not have the VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server. Firewall rules from the original server are restored.
-
-## Geo restore
-If you configured your server for geographically redundant backups, a new server can be created from the backup of that existing server. This new server can be created in any region where Azure Database for MySQL is available.
-
-1. Select the **Create a resource** button (+) in the upper-left corner of the portal. Select **Databases** > **Azure Database for MySQL**.
-
- :::image type="content" source="./media/howto-restore-server-portal/1_navigate-to-mysql.png" alt-text="Navigate to Azure Database for MySQL.":::
-
-2. Provide the subscription, resource group, and name of the new server.
-
-3. Select **Backup** as the **Data source**. This action loads a dropdown that provides a list of servers that have geo redundant backups enabled.
-
- :::image type="content" source="./media/howto-restore-server-portal/3-geo-restore.png" alt-text="Select data source.":::
-
- > [!NOTE]
- > When a server is first created it may not be immediately available for geo restore. It may take a few hours for the necessary metadata to be populated.
- >
-
-4. Select the **Backup** dropdown.
-
- :::image type="content" source="./media/howto-restore-server-portal/4-geo-restore-backup.png" alt-text="Select backup dropdown.":::
-
-5. Select the source server to restore from.
-
- :::image type="content" source="./media/howto-restore-server-portal/5-select-backup.png" alt-text="Select backup.":::
-
-6. The server will default to values for number of **vCores**, **Backup Retention Period**, **Backup Redundancy Option**, **Engine version**, and **Admin credentials**. Select **Continue**.
-
- :::image type="content" source="./media/howto-restore-server-portal/6-accept-backup.png" alt-text="Continue with backup.":::
-
-7. Fill out the rest of the form with your preferences. You can select any **Location**.
-
- After selecting the location, you can select **Configure server** to update the **Compute Generation** (if available in the region you have chosen), number of **vCores**, **Backup Retention Period**, and **Backup Redundancy Option**. Changing **Pricing Tier** (Basic, General Purpose, or Memory Optimized) or **Storage** size during restore is not supported.
-
- :::image type="content" source="./media/howto-restore-server-portal/7-create.png" alt-text="Fill form.":::
-
-8. Select **Review + create** to review your selections.
-
-9. Select **Create** to provision the server. This operation may take a few minutes.
-
-The new server created by geo restore has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's **Overview** page.
-
-The new server created during a restore does not have the VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server. Firewall rules from the original server are restored.
-
-## Next steps
-- Learn more about the service's [backups](concepts-backup.md)
-- Learn about [replicas](concepts-read-replicas.md)
-- Learn more about [business continuity](concepts-business-continuity.md) options
mysql Howto Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-restore-server-powershell.md
- Title: Backup and restore - Azure PowerShell - Azure Database for MySQL
-description: Learn how to backup and restore a server in Azure Database for MySQL by using Azure PowerShell.
- Previously updated: 4/28/2020
-# How to back up and restore an Azure Database for MySQL server using PowerShell
--
-Azure Database for MySQL servers are backed up periodically to enable restore features. Using this
-feature, you may restore the server and all its databases to an earlier point-in-time, on a new
-server.
-
-## Prerequisites
-
-To complete this how-to guide, you need:
-
-- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
-  [Azure Cloud Shell](https://shell.azure.com/) in the browser
-- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)
-
-> [!IMPORTANT]
-> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az
-> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`.
-> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az
-> PowerShell module releases and available natively from within Azure Cloud Shell.
-
-If you choose to use PowerShell locally, connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet.
--
-## Set backup configuration
-
-At server creation, you make the choice between configuring your server for either locally redundant
-or geographically redundant backups.
-
-> [!NOTE]
-> After a server is created, the kind of redundancy it has, geographically redundant vs locally
-> redundant, can't be changed.
-
-While creating a server via the `New-AzMySqlServer` command, the **GeoRedundantBackup**
-parameter decides your backup redundancy option. If **Enabled**, geo-redundant backups are taken;
-if **Disabled**, locally redundant backups are taken.
-
-The backup retention period is set by the **BackupRetentionDay** parameter.
-
-For more information about setting these values during server creation, see
-[Create an Azure Database for MySQL server using PowerShell](quickstart-create-mysql-server-database-using-azure-powershell.md).
-
-The backup retention period of a server can be changed as follows:
-
-```azurepowershell-interactive
-Update-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -BackupRetentionDay 10
-```
-
-The preceding example changes the backup retention period of mydemoserver to 10 days.
-
-The backup retention period governs how far back a point-in-time restore can be retrieved, since
-it's based on available backups. Point-in-time restore is described further in the next section.
-
-## Server point-in-time restore
-
-You can restore the server to a previous point-in-time. The restored data is copied to a new server,
-and the existing server is left unchanged. For example, if a table is accidentally dropped, you can
-restore to the time just before the drop occurred. Then, you can retrieve the missing table and data from
-the restored copy of the server.
-
-To restore the server, use the `Restore-AzMySqlServer` PowerShell cmdlet.
-
-### Run the restore command
-
-To restore the server, run the following example from PowerShell.
-
-```azurepowershell-interactive
-$restorePointInTime = (Get-Date).AddMinutes(-10)
-Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
- Restore-AzMySqlServer -Name mydemoserver-restored -ResourceGroupName myresourcegroup -RestorePointInTime $restorePointInTime -UsePointInTimeRestore
-```
-
-The **PointInTimeRestore** parameter set of the `Restore-AzMySqlServer` cmdlet requires the
-following parameters:
-
-| Setting | Suggested value | Description  |
-|---|---|---|
-| ResourceGroupName |  myresourcegroup |  The resource group where the source server exists.  |
-| Name | mydemoserver-restored | The name of the new server that is created by the restore command. |
-| RestorePointInTime | 2020-03-13T13:59:00Z | Select a point in time to restore. This date and time must be within the source server's backup retention period. Use the ISO8601 date and time format. For example, you can use your own local time zone, such as **2020-03-13T05:59:00-08:00**. You can also use the UTC Zulu format, for example, **2018-03-13T13:59:00Z**. |
-| UsePointInTimeRestore | `<SwitchParameter>` | Use point-in-time mode to restore. |
-
-When you restore a server to an earlier point-in-time, a new server is created. The original server
-and its databases from the specified point-in-time are copied to the new server.
-
-The location and pricing tier values for the restored server remain the same as the original server.
-
-After the restore process finishes, locate the new server and verify that the data is restored as
-expected. The new server has the same server admin login name and password that was valid for the
-existing server at the time the restore was started. The password can be changed from the new
-server's **Overview** page.
-
-The new server created during a restore does not have the VNet service endpoints that existed on the
-original server. These rules must be set up separately for the new server. Firewall rules from the
-original server are restored.
-
-## Geo restore
-
-If you configured your server for geographically redundant backups, a new server can be created from
-the backup of the existing server. This new server can be created in any region in which Azure
-Database for MySQL is available.
-
-To create a server using a geo redundant backup, use the `Restore-AzMySqlServer` command with the
-**UseGeoRestore** parameter.
-
-> [!NOTE]
-> When a server is first created it may not be immediately available for geo restore. It may take a
-> few hours for the necessary metadata to be populated.
-
-To geo restore the server, run the following example from PowerShell:
-
-```azurepowershell-interactive
-Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
- Restore-AzMySqlServer -Name mydemoserver-georestored -ResourceGroupName myresourcegroup -Location eastus -Sku GP_Gen5_8 -UseGeoRestore
-```
-
-This example creates a new server called **mydemoserver-georestored** in the East US region that
-belongs to **myresourcegroup**. It is a General Purpose, Gen 5 server with 8 vCores. The server is
-created from the geo-redundant backup of **mydemoserver**, also in the resource group
-**myresourcegroup**.
-
-To create the new server in a different resource group from the existing server, specify the new
-resource group name using the **ResourceGroupName** parameter as shown in the following example:
-
-```azurepowershell-interactive
-Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
- Restore-AzMySqlServer -Name mydemoserver-georestored -ResourceGroupName newresourcegroup -Location eastus -Sku GP_Gen5_8 -UseGeoRestore
-```
-
-The **GeoRestore** parameter set of the `Restore-AzMySqlServer` cmdlet requires the following
-parameters:
-
-| Setting | Suggested value | Description  |
-|---|---|---|
-|ResourceGroupName | myresourcegroup | The name of the resource group the new server belongs to.|
-|Name | mydemoserver-georestored | The name of the new server. |
-|Location | eastus | The location of the new server. |
-|UseGeoRestore | `<SwitchParameter>` | Use geo mode to restore. |
-
-When creating a new server using geo restore, it inherits the same storage size and pricing tier as
-the source server unless the **Sku** parameter is specified.
-
-After the restore process finishes, locate the new server and verify that the data is restored as
-expected. The new server has the same server admin login name and password that was valid for the
-existing server at the time the restore was started. The password can be changed from the new
-server's **Overview** page.
-
-The new server created during a restore does not have the VNet service endpoints that existed on the
-original server. These rules must be set up separately for this new server. Firewall rules from the
-original server are restored.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [How to generate an Azure Database for MySQL connection string with PowerShell](howto-connection-string-powershell.md)
mysql Howto Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-server-parameters.md
- Title: Configure server parameters - Azure portal - Azure Database for MySQL
-description: This article describes how to configure MySQL server parameters in Azure Database for MySQL using the Azure portal.
- Previously updated: 10/1/2020
-# Configure server parameters in Azure Database for MySQL using the Azure portal
--
-Azure Database for MySQL supports configuration of some server parameters. This article describes how to configure these parameters by using the Azure portal. Not all server parameters can be adjusted.
-
->[!Note]
-> Server parameters can be updated globally at the server level by using the [Azure CLI](./howto-configure-server-parameters-using-cli.md), [PowerShell](./howto-configure-server-parameters-using-powershell.md), or the [Azure portal](./howto-server-parameters.md).
-
-## Configure server parameters
-
-1. Sign in to the [Azure portal](https://portal.azure.com), then locate your Azure Database for MySQL server.
-2. Under the **SETTINGS** section, click **Server parameters** to open the server parameters page for the Azure Database for MySQL server.
-3. Locate any settings you need to adjust. Review the **Description** column to understand the purpose and allowed values.
-4. Click **Save** to save your changes.
-5. If you have saved new values for the parameters, you can always revert everything back to the default values by selecting **Reset all to default**.
-
-## Setting parameters not listed
-
-If the server parameter you want to update is not listed in the Azure portal, you can optionally set the parameter at the connection level using `init_connect`. This sets the server parameters for each client connecting to the server.
-
-1. Under the **SETTINGS** section, click **Server parameters** to open the server parameters page for the Azure Database for MySQL server.
-2. Search for `init_connect`.
-3. Add the server parameters in the format `SET parameter_name=YOUR_DESIRED_VALUE` in the **Value** column.
-
-    For example, you can change the character set of your server by setting `init_connect` to `SET character_set_client=utf8;SET character_set_database=utf8mb4;SET character_set_connection=latin1;SET character_set_results=latin1;`
-4. Click **Save** to save your changes.
-
->[!Note]
-> `init_connect` can be used to change parameters that don't require SUPER privilege(s) at the session level. To verify whether you can set a parameter using `init_connect`, execute the `set session parameter_name=YOUR_DESIRED_VALUE;` command. If it fails with an **Access denied; you need SUPER privilege(s)** error, then you cannot set the parameter using `init_connect`, as shown in the example below.
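-
-For instance, the following sketch illustrates the check (the parameter names are illustrative; any parameter requiring SUPER fails the same way):
-
-```sql
--- Succeeds for a regular user, so it can be set through init_connect:
-SET SESSION wait_timeout = 600;
-
--- Fails with "Access denied; you need (at least one of) the SUPER privilege(s)",
--- so it cannot be set through init_connect:
-SET SESSION sql_log_bin = 0;
-```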
-
-## Working with the time zone parameter
-
-### Populating the time zone tables
-
-The time zone tables on your server can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench.
-
-> [!NOTE]
-> If you are running the `mysql.az_load_timezone` command from MySQL Workbench, you may need to turn off safe update mode first using `SET SQL_SAFE_UPDATES=0;`.
-
-```sql
-CALL mysql.az_load_timezone();
-```
-
-> [!IMPORTANT]
-> You should restart the server to ensure the time zone tables are properly populated. To restart the server, use the [Azure portal](howto-restart-server-portal.md) or [CLI](howto-restart-server-cli.md).
-
-To view available time zone values, run the following command:
-
-```sql
-SELECT name FROM mysql.time_zone_name;
-```
-
-### Setting the global level time zone
-
-The global level time zone can be set from the **Server parameters** page in the Azure portal. The following sets the global time zone to the value "US/Pacific".
--
-### Setting the session level time zone
-
-The session level time zone can be set by running the `SET time_zone` command from a tool like the MySQL command line or MySQL Workbench. The example below sets the time zone to the **US/Pacific** time zone.
-
-```sql
-SET time_zone = 'US/Pacific';
-```
-
-Refer to the MySQL documentation for [Date and Time Functions](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_convert-tz).
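-
-For example, once the time zone tables are populated, you can convert a timestamp between time zones with `CONVERT_TZ` (a small sketch):
-
-```sql
-SELECT CONVERT_TZ('2022-05-18 12:00:00', 'UTC', 'US/Pacific');
-```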
-
-## Next steps
-
-- [Connection libraries for Azure Database for MySQL](concepts-connection-libraries.md)
mysql Howto Tls Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-tls-configurations.md
- Title: TLS configuration - Azure portal - Azure Database for MySQL
-description: Learn how to set TLS configuration using Azure portal for your Azure Database for MySQL
- Previously updated: 06/02/2020
-# Configuring TLS settings in Azure Database for MySQL using Azure portal
--
-This article describes how to configure an Azure Database for MySQL server to enforce a minimum TLS version for connections and to deny all connections that use a lower TLS version, thereby enhancing network security.
-
-You can enforce the TLS version used to connect to your Azure Database for MySQL server. For example, setting the minimum TLS version to 1.0 means the server allows clients connecting using TLS 1.0, 1.1, and 1.2. Alternatively, setting it to 1.2 means that you only allow clients connecting using TLS 1.2 or later, and all incoming connections using TLS 1.0 or TLS 1.1 are rejected.
-
-## Prerequisites
-
-To complete this how-to guide, you need:
-
-* An [Azure Database for MySQL](quickstart-create-mysql-server-database-using-azure-portal.md)
-
-## Set TLS configurations for Azure Database for MySQL
-
-Follow these steps to set MySQL server minimum TLS version:
-
-1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL server.
-
-1. On the MySQL server page, under **Settings**, click **Connection security** to open the connection security configuration page.
-
-1. In **Minimum TLS version**, select **1.2** to deny connections with TLS version less than TLS 1.2 for your MySQL server.
-
- :::image type="content" source="./media/howto-tls-configurations/setting-tls-value.png" alt-text="Azure Database for MySQL TLS configuration":::
-
-1. Click **Save** to save the changes.
-
-1. A notification confirms that the connection security setting was successfully saved and is in effect immediately. There is **no restart** of the server required or performed. After the changes are saved, all new connections to the server are accepted only if the TLS version is greater than or equal to the minimum TLS version set in the portal.
-
- :::image type="content" source="./media/howto-tls-configurations/setting-tls-value-success.png" alt-text="Azure Database for MySQL TLS configuration success":::
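-
-To verify the TLS version negotiated by a client connection, you can check the session's `Ssl_version` status variable after connecting (a quick check; the value is empty for unencrypted connections):
-
-```sql
-SHOW SESSION STATUS LIKE 'Ssl_version';
-```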
-
-## Next steps
--- Learn about [how to create alerts on metrics](howto-alert-on-metric.md)
mysql Howto Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-common-connection-issues.md
- Title: Troubleshoot connection issues - Azure Database for MySQL
-description: Learn how to troubleshoot connection issues to Azure Database for MySQL, including transient errors requiring retries, firewall issues, and outages.
-keywords: mysql connection,connection string,connectivity issues,transient error,connection error
- Previously updated: 3/18/2020
-# Troubleshoot connection issues to Azure Database for MySQL
--
-Connection problems may be caused by a variety of things, including:
-
-* Firewall settings
-* Connection time-out
-* Incorrect login information
-* Maximum limit reached on some Azure Database for MySQL resources
-* Issues with the infrastructure of the service
-* Maintenance being performed in the service
-* The compute allocation of the server is changed by scaling the number of vCores or moving to a different service tier
-
-Generally, connection issues to Azure Database for MySQL can be classified as follows:
-
-* Transient errors (short-lived or intermittent)
-* Persistent or non-transient errors (errors that regularly recur)
-
-## Troubleshoot transient errors
-
-Transient errors occur when maintenance is performed, the system encounters an error with the hardware or software, or you change the vCores or service tier of your server. The Azure Database for MySQL service has built-in high availability and is designed to mitigate these types of problems automatically. However, your application loses its connection to the server for a short period of time, typically less than 60 seconds. Some events can occasionally take longer to mitigate, such as when a large transaction causes a long-running recovery.
-
-### Steps to resolve transient connectivity issues
-
-1. Check the [Microsoft Azure Service Dashboard](https://azure.microsoft.com/status) for any known outages that occurred during the time in which the errors were reported by the application.
-2. Applications that connect to a cloud service such as Azure Database for MySQL should expect transient errors and implement retry logic to handle these errors instead of surfacing these as application errors to users. Review [Handling of transient connectivity errors for Azure Database for MySQL](concepts-connectivity.md) for best practices and design guidelines for handling transient errors.
-3. As a server approaches its resource limits, errors can seem like a transient connectivity issue. See [Limitations in Azure Database for MySQL](concepts-limits.md).
-4. If connectivity problems continue, if the duration for which your application encounters the error exceeds 60 seconds, or if you see multiple occurrences of the error in a given day, file an Azure support request by selecting **Get Support** on the [Azure Support](https://azure.microsoft.com/support/options) site.
-
-## Troubleshoot persistent errors
-
-If the application persistently fails to connect to Azure Database for MySQL, it usually indicates an issue with one of the following:
-
-* Server firewall configuration: Make sure that the Azure Database for MySQL server firewall is configured to allow connections from your client, including proxy servers and gateways.
-* Client firewall configuration: The firewall on your client must allow connections to your database server. The IP address and port of the server that you connect to must be allowed, as well as application names such as MySQL in some firewalls.
-* User error: You might have mistyped connection parameters, such as the server name in the connection string or a missing *\@servername* suffix in the user name.
-
-### Steps to resolve persistent connectivity issues
-
-1. Set up [firewall rules](howto-manage-firewall-using-portal.md) to allow the client IP address. For temporary testing purposes only, set up a firewall rule using 0.0.0.0 as the starting IP address and using 255.255.255.255 as the ending IP address. This will open the server to all IP addresses. If this resolves your connectivity issue, remove this rule and create a firewall rule for an appropriately limited IP address or address range.
-2. On all firewalls between the client and the internet, make sure that port 3306 is open for outbound connections.
-3. Verify your connection string and other connection settings. Review [How to connect applications to Azure Database for MySQL](howto-connection-string.md).
-4. Check the service health in the dashboard. If you think there's a regional outage, see [Overview of business continuity with Azure Database for MySQL](concepts-business-continuity.md) for steps to recover to a new region.
-
-## Next steps
-
-* [Handling of transient connectivity errors for Azure Database for MySQL](concepts-connectivity.md)
mysql Howto Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-common-errors.md
- Title: Troubleshoot common errors - Azure Database for MySQL
-description: Learn how to troubleshoot common migration errors encountered by users new to the Azure Database for MySQL service
- Previously updated: 5/21/2021
-# Troubleshoot errors commonly encountered during or post migration to Azure Database for MySQL
--
-Azure Database for MySQL is a fully managed service powered by the community version of MySQL. The MySQL experience in a managed service environment may differ from running MySQL in your own environment. In this article, you'll see some of the common errors users may encounter while migrating to or developing on Azure Database for MySQL for the first time.
-
-## Common Connection Errors
-
-### ERROR 1184 (08S01): Aborted connection 22 to db: 'db-name' user: 'user' host: 'hostIP' (init_connect command failed)
-
-The above error occurs after a successful sign-in but before executing any command when the session is established. The message indicates that you have set an incorrect value for the `init_connect` server parameter, which is causing the session initialization to fail.
-
-There are some server parameters like `require_secure_transport` that aren't supported at the session level, and so trying to change the values of these parameters using `init_connect` can result in Error 1184 while connecting to the MySQL server as shown below:
-
-```
-mysql> show databases;
-ERROR 2006 (HY000): MySQL server has gone away
-No connection. Trying to reconnect...
-Connection id:    64897
-Current database: *** NONE ***
-ERROR 1184 (08S01): Aborted connection 22 to db: 'db-name' user: 'user' host: 'hostIP' (init_connect command failed)
-```
-
-**Resolution**: Reset the `init_connect` value on the **Server parameters** tab in the Azure portal, and set only supported server parameters using the `init_connect` parameter.
-
-## Errors due to lack of SUPER privilege and DBA role
-
-The SUPER privilege and DBA role aren't supported on the service. As a result, you may encounter some common errors listed below:
-
-### ERROR 1419: You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)
-
-The above error may occur while creating a function or trigger (as shown below) or while importing a schema. DDL statements like CREATE FUNCTION or CREATE TRIGGER are written to the binary log, so the secondary replica can execute them. The replica SQL thread has full privileges, which can be exploited to elevate privileges. To guard against this danger for servers that have binary logging enabled, the MySQL engine requires that stored function creators have the SUPER privilege, in addition to the usual CREATE ROUTINE privilege.
-
-```sql
-CREATE FUNCTION f1(i INT)
-RETURNS INT
-DETERMINISTIC
-READS SQL DATA
-BEGIN
- RETURN i;
-END;
-```
-
-**Resolution**: To resolve the error, set `log_bin_trust_function_creators` to 1 from the [server parameters](howto-server-parameters.md) blade in the portal, then execute the DDL statements or import the schema to create the desired objects. You can keep `log_bin_trust_function_creators` set to 1 for your server to avoid the error in the future. Our recommendation is to set `log_bin_trust_function_creators` to 1, because the security risk highlighted in the [MySQL community documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) is minimal in Azure Database for MySQL, as the bin log isn't exposed to any threats.
-
-#### ERROR 1227 (42000) at line 101: Access denied; you need (at least one of) the SUPER privilege(s) for this operation. Operation failed with exitcode 1
-
-The above error may occur while importing a dump file or creating procedure that contains [definers](https://dev.mysql.com/doc/refman/5.7/en/create-procedure.html).
-
-**Resolution**: To resolve this error, the admin user can grant privileges to create or execute procedures by running GRANT command as in the following examples:
-
-```sql
-GRANT CREATE ROUTINE ON mydb.* TO 'someuser'@'somehost';
-GRANT EXECUTE ON PROCEDURE mydb.myproc TO 'someuser'@'somehost';
-```
-
-Alternately, you can replace the definers with the name of the admin user that is running the import process as shown below.
-
-```sql
-DELIMITER;;
-/*!50003 CREATE*/ /*!50017 DEFINER=`root`@`127.0.0.1`*/ /*!50003
-DELIMITER;;
-
-/* Modified to */
-
-DELIMITER ;;
-/*!50003 CREATE*/ /*!50017 DEFINER=`AdminUserName`@`ServerName`*/ /*!50003
-DELIMITER ;
-```
-
-#### ERROR 1227 (42000) at line 295: Access denied; you need (at least one of) the SUPER or SET_USER_ID privilege(s) for this operation
-
-The above error may occur while executing CREATE VIEW with DEFINER statements as part of importing a dump file or running a script. Azure Database for MySQL doesn't allow SUPER privileges or the SET_USER_ID privilege to any user.
-
-**Resolution**:
-
-* Use the definer user to execute CREATE VIEW if possible. It's likely that there are many views with different definers having different permissions, so this may not be feasible. OR
-* Edit the dump file or CREATE VIEW script and remove the DEFINER= statement from the dump file. OR
-* Edit the dump file or CREATE VIEW script and replace the definer values with user with admin permissions who is performing the import or execute the script file.
-
-> [!Tip]
-> Use sed or perl to modify a dump file or SQL script to replace the DEFINER= statement
-
-#### ERROR 1227 (42000) at line 18: Access denied; you need (at least one of) the SUPER privilege(s) for this operation
-
-The above error may occur if you're trying to import a dump file from a MySQL server with GTID enabled to the target Azure Database for MySQL server. mysqldump adds the `SET @@SESSION.sql_log_bin=0` statement to a dump file from a server where GTIDs are in use, which disables binary logging while the dump file is being reloaded.
-
-**Resolution**:
-To resolve this error while importing, remove or comment out the below lines in your mysqldump file and run import again to ensure it's successful.
-
-```sql
-SET @MYSQLDUMP_TEMP_LOG_BIN = @@SESSION.SQL_LOG_BIN;
-SET @@SESSION.SQL_LOG_BIN= 0;
-SET @@GLOBAL.GTID_PURGED='';
-SET @@SESSION.SQL_LOG_BIN = @MYSQLDUMP_TEMP_LOG_BIN;
-```
-
-## Common connection errors for server admin sign-in
-
-When an Azure Database for MySQL server is created, a server admin sign-in is provided by the end user during server creation. The server admin sign-in allows you to create new databases, add new users, and grant permissions. If the server admin sign-in is deleted, its permissions are revoked, or its password is changed, you may start to see connection errors in your application. Following are some of the common errors.
-
-### ERROR 1045 (28000): Access denied for user 'username'@'IP address' (using password: YES)
-
-The above error occurs if:
-
-* The user 'username' doesn't exist.
-* The user 'username' was deleted.
-* The user's password was changed or reset.
-
-**Resolution**:
-
-* Validate whether "username" exists as a valid user on the server or was accidentally deleted. You can execute the following query after signing in to the Azure Database for MySQL server:
-
- ```sql
- select user from mysql.user;
- ```
-
-* If you can't sign in to MySQL to execute the above query itself, we recommend that you [reset the admin password using the Azure portal](howto-create-manage-server-portal.md). The reset password option in the Azure portal recreates the user, resets the password, and restores the admin permissions, which allows you to sign in using the server admin and perform further operations.
-
-## Next steps
-
-If you didn't find the answer you're looking for, consider the following options:
-
-* Post your question on [Microsoft Q&A question page](/answers/topics/azure-database-mysql.html) or [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
-* Send an email to the Azure Database for MySQL Team [@Ask Azure DB for MySQL](mailto:AskAzureDBforMySQL@service.microsoft.com). This email address isn't a technical support alias.
-* Contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). To fix an issue with your account, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
-* To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/d365community/forum/47b1e71d-ee24-ec11-b6e6-000d3a4f0da0).
mysql Howto Troubleshoot High Cpu Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-high-cpu-utilization.md
- Title: Troubleshoot high CPU utilization in Azure Database for MySQL
-description: Learn how to troubleshoot high CPU utilization in Azure Database for MySQL.
- Previously updated: 4/27/2022
-# Troubleshoot high CPU utilization in Azure Database for MySQL
--
-Azure Database for MySQL provides a range of metrics that you can use to identify resource bottlenecks and performance issues on the server. To determine whether your server is experiencing high CPU utilization, monitor metrics such as "Host CPU percent", "Total Connections", "Host Memory Percent", and "IO Percent". At times, viewing a combination of these metrics will provide insights into what might be causing the increased CPU utilization on your Azure Database for MySQL server.
-
-For example, consider a sudden surge in connections that initiates a surge of database queries, causing CPU utilization to shoot up.
-
-Besides capturing metrics, it's important to also trace the workload to understand whether one or more queries are causing the spike in CPU utilization.
-
-## Capturing details of the current workload
-
-The SHOW (FULL) PROCESSLIST command displays a list of all user sessions currently connected to the Azure Database for MySQL server. It also provides details about the current state and activity of each session.
-This command only produces a snapshot of the current session status and doesn't provide information about historical session activity.
-
-Let's take a look at sample output from running this command.
-
-```
-mysql> SHOW FULL PROCESSLIST;
-+-------+-----------------+-----------------+---------------+---------+------+-----------------------------+--------------------------------------------+
-| Id    | User            | Host            | db            | Command | Time | State                       | Info                                       |
-+-------+-----------------+-----------------+---------------+---------+------+-----------------------------+--------------------------------------------+
-|     1 | event_scheduler | localhost       | NULL          | Daemon  |   13 | Waiting for next activation | NULL                                       |
-|     6 | azure_superuser | 127.0.0.1:33571 | NULL          | Sleep   |  115 |                             | NULL                                       |
-| 24835 | adminuser       | 10.1.1.4:39296  | classicmodels | Query   |    7 | Sending data                | select * from classicmodels.orderdetails;  |
-| 24837 | adminuser       | 10.1.1.4:38208  | NULL          | Query   |    0 | starting                    | SHOW FULL PROCESSLIST                      |
-+-------+-----------------+-----------------+---------------+---------+------+-----------------------------+--------------------------------------------+
-5 rows in set (0.00 sec)
-```
-
-Notice that there are two sessions owned by the user "adminuser", both from the same IP address:
-
-* Session 24835 has been executing a SELECT statement for the last seven seconds.
-* Session 24837 is executing the SHOW FULL PROCESSLIST statement.
-
-When necessary, you may need to terminate a query, such as a reporting or HTAP query that has caused your production workload's CPU usage to spike. However, always consider the potential consequences of terminating a query before taking that action in an attempt to reduce CPU utilization. If long running queries are identified as the cause of CPU spikes, tune these queries so that resources are used optimally.
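-
-If you do decide to end a session or statement, Azure Database for MySQL exposes stored procedures for this purpose. The following is a sketch (assuming the `mysql.az_kill` and `mysql.az_kill_query` procedures provided by the service, and session ID 24835 from the earlier output):
-
-```sql
--- End only the running statement, keeping the connection open:
-CALL mysql.az_kill_query(24835);
-
--- Or terminate the entire session:
-CALL mysql.az_kill(24835);
-```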
-
-## Detailed current workload analysis
-
-You need to use at least two sources of information to obtain an accurate picture of the status of a session, transaction, and query:
-
-* The server's process list from the INFORMATION_SCHEMA.PROCESSLIST table, which you can also access by running the SHOW [FULL] PROCESSLIST command.
-* InnoDB's transaction metadata from the INFORMATION_SCHEMA.INNODB_TRX table.
-
-With information from only one of these sources, it's impossible to describe the connection and transaction state. For example, the process list doesn't inform you whether there's an open transaction associated with any of the sessions. On the other hand, the transaction metadata doesn't show session state and time spent in that state.
-
-An example query that combines process list information with some of the important pieces of InnoDB transaction metadata is shown below:
-
-```
-mysql> select p.id as session_id, p.user, p.host, p.db, p.command, p.time, p.state, substring(p.info, 1, 50) as info, t.trx_started, unix_timestamp(now()) - unix_timestamp(t.trx_started) as trx_age_seconds, t.trx_rows_modified, t.trx_isolation_level from information_schema.processlist p left join information_schema.innodb_trx t on p.id = t.trx_mysql_thread_id \G
-```
-
-An example of the output from this query is shown below:
-
-```
-*************************** 1. row ***************************
- session_id: 11
- user: adminuser
- host: 172.31.19.159:53624
- db: NULL
- command: Sleep
- time: 636
- state: cleaned up
- info: NULL
- trx_started: 2019-08-01 15:25:07
- trx_age_seconds: 2908
- trx_rows_modified: 17825792
-trx_isolation_level: REPEATABLE READ
-*************************** 2. row ***************************
- session_id: 12
- user: adminuser
- host: 172.31.19.159:53622
- db: NULL
- command: Query
- time: 15
- state: executing
- info: select * from classicmodels.orders
- trx_started: NULL
- trx_age_seconds: NULL
- trx_rows_modified: NULL
-trx_isolation_level: NULL
-```
-
-An analysis of this information, by session, is listed in the following table.
-
-| **Area** | **Analysis** |
-|-|-|
-| Session 11 | This session is currently idle (sleeping) with no queries running, and it has been for 636 seconds. Within the session, a transaction that's been open for 2908 seconds has modified 17,825,792 rows, and it uses REPEATABLE READ isolation. |
-| Session 12 | The session is currently executing a SELECT statement, which has been running for 15 seconds. There's no open transaction within the session, as indicated by the NULL values for trx_started and trx_age_seconds. The session will continue to hold the garbage collection boundary as long as it runs, unless it's using the more relaxed READ COMMITTED isolation. |
-
-Note that if a session is reported as idle, it's no longer executing any statements. At this point, the session has completed any prior work and is waiting for new statements from the client. However, idle sessions are still responsible for some CPU consumption and memory usage.
-
-## Understanding thread states
-
-Transactions that contribute to higher CPU utilization during execution can have threads in various states, as described in the following sections. Use this information to better understand the query lifecycle and various thread states.
-
-### Checking permissions/Opening tables
-
-This state usually means the open table operation is taking a long time. Usually, you can increase the table cache size to mitigate the issue. However, tables opening slowly can also indicate other issues, such as having too many tables under the same database.
-
-### Sending data
-
-While this state can mean that the thread is sending data through the network, it can also indicate that the query is reading data from the disk or memory. This state can be caused by a sequential table scan. You should check the values of the innodb_buffer_pool_reads and innodb_buffer_pool_read_requests to determine whether a large number of pages are being served from the disk into the memory. For more information, see [Troubleshoot low memory issues in Azure Database for MySQL](howto-troubleshoot-low-memory-issues.md).
-
-### Updating
-
-This state usually means that the thread is performing a write operation. Check the IO-related metric in the Performance Monitor to get a better understanding on what the current sessions are doing.
-
-### Waiting for <lock_type> lock
-
-This state indicates that the thread is waiting for a lock, in most cases a metadata lock. You should review all other threads to see which one is holding the lock, as shown in the sketch below.
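-
-A sketch for identifying metadata lock holders via performance_schema (this requires the metadata lock instrument to be enabled; available in MySQL 5.7 and later):
-
-```sql
-SELECT object_schema, object_name, lock_type, lock_status, owner_thread_id
-FROM performance_schema.metadata_locks
-WHERE object_type = 'TABLE';
-```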
-
-## Understanding and analyzing wait events
-
-It's important to understand the underlying wait events in the MySQL engine, because long waits or a large number of waits in a database can lead to increased CPU utilization. The following shows the appropriate command and sample output.
-
-```
-SELECT event_name AS wait_event,
-       count_star AS all_occurrences,
-       Concat(Round(sum_timer_wait / 1000000000000, 2), ' s') AS total_wait_time,
-       Concat(Round(avg_timer_wait / 1000000000, 2), ' ms') AS avg_wait_time
-FROM performance_schema.events_waits_summary_global_by_event_name
-WHERE count_star > 0 AND event_name <> 'idle'
-ORDER BY sum_timer_wait DESC LIMIT 10;
-+--------------------------------------+-----------------+-----------------+---------------+
-| wait_event                           | all_occurrences | total_wait_time | avg_wait_time |
-+--------------------------------------+-----------------+-----------------+---------------+
-| wait/io/file/sql/binlog              |            7090 | 255.54 s        | 36.04 ms      |
-| wait/io/file/innodb/innodb_log_file  |           17798 | 55.43 s         | 3.11 ms       |
-| wait/io/file/innodb/innodb_data_file |          260227 | 39.67 s         | 0.15 ms       |
-| wait/io/table/sql/handler            |         5548985 | 11.73 s         | 0.00 ms       |
-| wait/io/file/sql/FRM                 |            1237 | 7.61 s          | 6.15 ms       |
-| wait/io/file/sql/dbopt               |              28 | 1.89 s          | 67.38 ms      |
-| wait/io/file/myisam/kfile            |              92 | 0.76 s          | 8.30 ms       |
-| wait/io/file/myisam/dfile            |             271 | 0.53 s          | 1.95 ms       |
-| wait/io/file/sql/file_parser         |              18 | 0.32 s          | 17.75 ms      |
-| wait/io/file/sql/slow_log            |               2 | 0.05 s          | 25.79 ms      |
-+--------------------------------------+-----------------+-----------------+---------------+
-10 rows in set (0.00 sec)
-```
-
-## Restrict SELECT statement execution time
-
-If you don't know the execution cost and execution time of database operations involving SELECT queries, any long-running SELECTs can lead to unpredictability or volatility on the database server. The size of statements and transactions, and the associated resource utilization, continues to grow with the underlying data set. Because of this unbounded growth, end user statements and transactions take longer and longer, consuming increasingly more resources until they overwhelm the database server. When using unbounded SELECT queries, it's recommended to configure the max_execution_time parameter so that any queries exceeding this duration are aborted, as shown in the example below.
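-
-For example, you can cap SELECT execution time per statement or per session (a sketch; the value is in milliseconds, and the table name is taken from the earlier process list output):
-
-```sql
--- Cap a single statement at 30 seconds using an optimizer hint (MySQL 5.7 and later):
-SELECT /*+ MAX_EXECUTION_TIME(30000) */ * FROM classicmodels.orderdetails;
-
--- Or cap all SELECT statements in the current session:
-SET SESSION max_execution_time = 30000;
-```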
-
-## Recommendations
-
-* Ensure that your database has enough resources allocated to run your queries. At times, you may need to scale up the instance size to get more CPU cores to accommodate your workload.
-* Avoid large or long-running transactions by breaking them into smaller transactions.
-* Run SELECT statements on read replica servers when possible.
-* Use alerts on "Host CPU Percent" so that you get notifications if the system exceeds any of the specified thresholds.
-* Use Query Performance Insights or Azure Workbooks to identify any problematic or slowly running queries, and then optimize them.
-* For production database servers, collect diagnostics at regular intervals to ensure that everything is running smoothly. If not, troubleshoot and resolve any issues that you identify.
-
-## Next steps
-
-To find peer answers to your most important questions or to post or answer a question, visit [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
mysql Howto Troubleshoot Low Memory Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-low-memory-issues.md
- Title: Troubleshoot low memory issues in Azure Database for MySQL
-description: Learn how to troubleshoot low memory issues in Azure Database for MySQL.
- Previously updated: 4/22/2022
-# Troubleshoot low memory issues in Azure Database for MySQL
--
-To help ensure that a MySQL database server performs optimally, it's very important to have the appropriate memory allocation and utilization. By default, when you create an instance of Azure Database for MySQL, the available physical memory is dependent on the tier and size you select for your workload. In addition, memory is allocated for buffers and caches to improve database operations. For more information, see [How MySQL Uses Memory](https://dev.mysql.com/doc/refman/5.7/en/memory-use.html).
-
-Note that the Azure Database for MySQL service consumes memory to achieve as much cache hit as possible. As a result, memory utilization can often hover between 80% and 90% of the available physical memory of an instance. Unless there's an issue with the progress of the query workload, this isn't a concern. However, you may run into out-of-memory issues for reasons such as the following:
-
-* Buffers are configured too large.
-* Suboptimal queries are running.
-* Queries are performing joins and sorting large data sets.
-* The maximum number of connections on the database server is set too high.
-
-A majority of a server's memory is used by InnoDB's global buffers and caches, which include components such as **innodb_buffer_pool_size**, **innodb_log_buffer_size**, **key_buffer_size**, and **query_cache_size**.
-
-The value of the **innodb_buffer_pool_size** parameter specifies the area of memory in which InnoDB caches the database tables and index-related data. MySQL tries to accommodate as much table and index-related data in the buffer pool as possible. A larger buffer pool requires fewer I/O operations being diverted to the disk.
-
-## Monitoring memory usage
-
-Azure Database for MySQL provides a range of metrics to gauge the performance of your database instance. To better understand the memory utilization for your database server, view the **Host Memory Percent** or **Memory Percent** metrics.
-
-![Viewing memory utilization metrics](media/howto-troubleshoot-low-memory-issues/avg-host-memory-percentage.png)
-
-If you notice that memory utilization has suddenly increased and that available memory is dropping quickly, monitor other metrics, such as **Host CPU Percent**, **Total Connections**, and **IO Percent**, to determine if a sudden spike in the workload is the source of the issue.
-
-It's important to note that each connection established with the database server requires the allocation of some amount of memory. As a result, a surge in database connections can cause memory shortages.
-
-## Causes of high memory utilization
-
-Let's look at some more causes of high memory utilization in MySQL. These causes are dependent on the characteristics of the workload.
-
-### An increase in temporary tables
-
-MySQL uses "temporary tables", which are a special type of table designed to store a temporary result set. Temporary tables can be reused several times during a session. Since any temporary tables created are local to a session, different sessions can have different temporary tables. In production systems with many sessions performing compilations of large temporary result sets, you should regularly check the global status counter created_tmp_tables, which tracks the number of temporary tables being created during peak hours. A large number of in-memory temporary tables can quickly lead to low available memory in an instance of Azure Database for MySQL.
-
-With MySQL, temporary table size is determined by the values of two parameters, as described in the following table.
-
-| **Parameter** | **Description** |
-|-|-|
-| tmp_table_size | Specifies the maximum size of internal, in-memory temporary tables. |
-| max_heap_table_size | Specifies the maximum size to which user created MEMORY tables can grow. |
-
-> [!NOTE]
-> When determining the maximum size of an internal, in-memory temporary table, MySQL considers the lower of the values set for the tmp_table_size and max_heap_table_size parameters.
->
-
-#### Recommendations
-
-To troubleshoot low memory issues related to temporary tables, consider the following recommendations.
-
-* Before increasing the tmp_table_size value, verify that your database is indexed properly, especially for columns involved in joins and GROUP BY operations. Using the appropriate indexes on underlying tables limits the number of temporary tables that are created. Increasing the value of this parameter and the max_heap_table_size parameter without verifying your indexes can allow inefficient queries to run without indexes and create more temp tables than are necessary.
-* Tune the values of the max_heap_table_size and tmp_table_size parameters to address the needs of your workload.
-* If the values you set for the max_heap_table_size and tmp_table_size parameters are too low, temporary tables may regularly spill to storage, adding latency to your queries. You can track temporary tables spilling to disk using the global status counter created_tmp_disk_tables. By comparing the values of the created_tmp_disk_tables and created_tmp_tables counters, as shown below, you can view the number of internal, on-disk temporary tables that have been created relative to the total number of internal temporary tables created.
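-
-A minimal sketch for comparing these counters:
-
-```sql
--- Both Created_tmp_disk_tables and Created_tmp_tables match this pattern:
-SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
-```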
-
-### Table cache
-
-As a multi-threaded system, MySQL maintains a cache of table file descriptors so that the tables can be concurrently opened independently by multiple sessions. MySQL uses some amount of memory and OS file descriptors to maintain this table cache. The variable table_open_cache defines the size of the table cache.
-
-#### Recommendations
-
-To troubleshoot low memory issues related to the table cache, consider the following recommendations.
-
-* The parameter table_open_cache specifies the number of open tables for all threads. Increasing this value increases the number of file descriptors that mysqld requires. You can check whether you need to increase the table cache by checking the Opened_tables status variable via SHOW GLOBAL STATUS, as shown in the sketch after this list. Increase the value of this parameter in increments to accommodate your workload.
-* Setting table_open_cache too low may cause MySQL to spend more time opening and closing tables needed for query processing.
-* Setting this value too high may cause usage of more memory and the operating system running out of file descriptors, leading to refused connections or failures to process queries.
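-
-A sketch for comparing the counter with the configured cache size:
-
-```sql
-SHOW GLOBAL STATUS LIKE 'Opened_tables';
-SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
-```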
-
-### Other buffers and the query cache
-
-When troubleshooting issues related to low memory, you can work with a few more buffers and a cache to help with the resolution.
-
-#### Net buffer (net_buffer_length)
-
-The net buffer sets the initial size of the connection and thread buffers for each client thread and can grow to the value specified by max_allowed_packet. If query statements are large, for example, inserts or updates with very large values, then increasing the value of the net_buffer_length parameter will help to improve performance.
-
-#### Join buffer (join_buffer_size)
-
-The join buffer is allocated to cache table rows when a join can't use an index. If your database has many joins performed without indexes, consider adding indexes for faster joins. If you can't add indexes, then consider increasing the value of the join_buffer_size parameter, which specifies the amount of memory allocated per connection.
-
-#### Sort buffer (sort_buffer_size)
-
-The sort buffer is used for performing sorts for some ORDER BY and GROUP BY queries. If you see many Sort_merge_passes per second in the SHOW GLOBAL STATUS output (see the sketch below), consider increasing the sort_buffer_size value to speed up ORDER BY or GROUP BY operations that can't be improved using query optimization or better indexing.
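-
-A sketch for checking the counter mentioned above:
-
-```sql
-SHOW GLOBAL STATUS LIKE 'Sort_merge_passes';
-```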
-
-Avoid arbitrarily increasing the sort_buffer_size value unless you have related information that indicates otherwise. Memory for this buffer is assigned per connection. In the MySQL documentation, the Server System Variables article calls out that on Linux, there are two thresholds, 256 KB and 2 MB, and that using larger values can significantly slow down memory allocation. As a result, avoid increasing the sort_buffer_size value beyond 2 MB, as the performance penalty will outweigh any benefits.
-
-#### Query cache (query_cache_size)
-
-The query cache is an area of memory that is used for caching query result sets. The query_cache_size parameter determines the amount of memory that is allocated for caching query results. By default, the query cache is disabled. In addition, the query cache is deprecated in MySQL version 5.7.20 and removed in MySQL version 8.0. If the query cache is currently enabled in your solution, before disabling it, verify that there aren't any queries relying on it.
-
-### Calculating buffer cache hit ratio
-
-The buffer cache hit ratio is important in a MySQL environment for understanding whether the buffer pool can accommodate the workload requests. As a general rule of thumb, it's good practice to keep the buffer pool cache hit ratio above 99%.
-
-To compute the InnoDB buffer pool hit ratio for read requests, run SHOW GLOBAL STATUS to retrieve the counters "Innodb_buffer_pool_read_requests" and "Innodb_buffer_pool_reads", and then compute the value using the formula shown below.
-
-```
-InnoDB Buffer pool hit ratio = Innodb_buffer_pool_read_requests / (Innodb_buffer_pool_read_requests + Innodb_buffer_pool_reads) * 100
-```
-
-Consider the following example.
-
-```
-mysql> show global status like "innodb_buffer_pool_reads";
-+--------------------------+-------+
-| Variable_name            | Value |
-+--------------------------+-------+
-| Innodb_buffer_pool_reads | 197   |
-+--------------------------+-------+
-1 row in set (0.00 sec)
-
-mysql> show global status like "innodb_buffer_pool_read_requests";
-+----------------------------------+----------+
-| Variable_name                    | Value    |
-+----------------------------------+----------+
-| Innodb_buffer_pool_read_requests | 22479167 |
-+----------------------------------+----------+
-1 row in set (0.00 sec)
-```
-
-Using the above values, computing the InnoDB buffer pool hit ratio for read requests yields the following result:
-
-```
-InnoDB Buffer pool hit ratio = 22479167/(22479167+197) * 100
-
-Buffer hit ratio = 99.99%
-```
-
-The buffer cache hit ratio applies to SELECT statements. For DML statements, writes to the InnoDB buffer pool happen in the background. However, if it's necessary to read or create a page and no clean pages are available, the operation must wait for pages to be flushed first.
-
-The Innodb_buffer_pool_wait_free counter counts how many times this has happened. A value of Innodb_buffer_pool_wait_free greater than 0 is a strong indicator that the InnoDB buffer pool is too small, and that an increase in the buffer pool size or instance size is required to accommodate the writes coming into the database.
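-
-A sketch for checking this counter:
-
-```sql
-SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_wait_free';
-```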
-
-## Recommendations
-
-* Ensure that your database has enough resources allocated to run your queries. At times, you may need to scale up the instance size to get more physical memory so that the buffers and caches can accommodate your workload.
-* Avoid large or long-running transactions by breaking them into smaller transactions.
-* Use alerts on "Host Memory Percent" so that you get notifications if the system exceeds any of the specified thresholds.
-* Use Query Performance Insights or Azure Workbooks to identify any problematic or slowly running queries, and then optimize them.
-* For production database servers, collect diagnostics at regular intervals to ensure that everything is running smoothly. If not, troubleshoot and resolve any issues that you identify.
-
-## Next steps
-
-To find peer answers to your most important questions or to post or answer a question, visit [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
mysql Howto Troubleshoot Query Performance New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-query-performance-new.md
- Title: Troubleshoot query performance in Azure Database for MySQL
-description: Learn how to troubleshoot query performance in Azure Database for MySQL.
- Previously updated: 4/22/2022
-# Troubleshoot query performance in Azure Database for MySQL
--
-Query performance can be impacted by multiple factors, so it's first important to look at the scope of the symptoms you're experiencing in your Azure Database for MySQL server. For example, is query performance slow for:
-
-* All queries running on the Azure Database for MySQL server?
-* A specific set of queries?
-* A specific query?
-
-Also keep in mind that any recent changes to the structure or underlying data of the tables you're querying can affect performance.
-
-## Enabling logging functionality
-
-Before analyzing individual queries, you need to define query benchmarks. With this information, you can implement logging functionality on the database server to trace queries that exceed a threshold you specify based on the needs of the application.
-
-With Azure Database for MySQL, it's recommended to use the slow query log feature to identify queries that take longer than *N* seconds to run. After you've identified the queries from the slow query log, you can use MySQL diagnostics to troubleshoot them.
-
-Before you can begin to trace long running queries, you need to enable the `slow_query_log` parameter by using the Azure portal or Azure CLI. With this parameter enabled, you should also configure the value of the `long_query_time` parameter to specify the number of seconds that queries can run before being identified as "slow running" queries. The default value of the parameter is 10 seconds, but you can adjust the value to address the needs of your application's SLA. You can verify the values in effect as shown in the sketch below.
-
-[ ![Flexible Server slow query log interface.](media/howto-troubleshoot-query-performance-new/slow-query-log.png) ](media/howto-troubleshoot-query-performance-new/slow-query-log.png#lightbox)
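-
-After configuring the parameters in the portal or CLI, you can verify the values in effect from any client session (a sketch):
-
-```sql
-SHOW GLOBAL VARIABLES LIKE 'slow_query_log';
-SHOW GLOBAL VARIABLES LIKE 'long_query_time';
-```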
-
-While the slow query log is a great tool for tracing long running queries, there are certain scenarios in which it might not be effective. For example, the slow query log:
-
-* Negatively impacts performance if the number of queries is very high or if the query statement is very large. Adjust the value of the `long_query_time` parameter accordingly.
-* May not be helpful if you've also enabled the `log_queries_not_using_index` parameter, which specifies logging queries expected to retrieve all rows. Queries performing a full index scan take advantage of an index, but they'd be logged because the index doesn't limit the number of rows returned.
-
-## Retrieving information from the logs
-
-Logs are available for up to seven days from their creation. You can list and download slow query logs via the Azure portal or Azure CLI. In the Azure portal, navigate to your server, under **Monitoring**, select **Server logs**, and then select the downward arrow next to an entry to download the logs associated with the date and time you're investigating.
-
-[ ![Flexible Server retrieving data from the logs.](media/howto-troubleshoot-query-performance-new/retrieving-information-logs.png) ](media/howto-troubleshoot-query-performance-new/retrieving-information-logs.png#lightbox)
-
-In addition, if your slow query logs are integrated with Azure Monitor logs through Diagnostic logs, you can run queries in an editor to analyze them further:
-
-```kusto
-AzureDiagnostics
-| where Resource == '<your server name>'
-| where Category == 'MySqlSlowLogs'
-| project TimeGenerated, Resource, event_class_s, start_time_t, query_time_d, sql_text_s
-| where query_time_d > 10
-```
-
-> [!NOTE]
-> For more examples to get you started with diagnosing slow query logs via Diagnostic logs, see [Analyze logs in Azure Monitor Logs](./concepts-server-logs.md#analyze-logs-in-azure-monitor-logs).
->
-
-The following snippet shows a sample slow query log entry.
-
-```
-# Time: 2021-11-13T10:07:52.610719Z
-# User@Host: root[root] @ [172.30.209.6] Id: 735026
-# Query_time: 25.314811 Lock_time: 0.000000 Rows_sent: 126 Rows_examined: 443308
-use employees;
-SET timestamp=1596448847;
-select * from titles where DATE(from_date) > DATE('1994-04-05') AND title like '%senior%';;
-```
-
-Notice that the query ran for about 25 seconds, examined over 443,000 rows, and returned 126 rows of results.
-
-Usually, you should focus on queries with high values for Query_time and Rows_examined. However, a query with a high Query_time but only a few Rows_examined often indicates a resource bottleneck. In these cases, check whether there's IO throttling or high CPU usage.
-
-## Profiling a query
-
-After you've identified a specific slow running query, you can use the EXPLAIN command and profiling to gather additional detail.
-
-To check the query plan, run the following command:
-
-```
-EXPLAIN <QUERY>
-```
-
-> [!NOTE]
-> For more information about using EXPLAIN statements, see [How to use EXPLAIN to profile query performance in Azure Database for MySQL](./howto-troubleshoot-query-performance.md).
->
-
-In addition to creating an EXPLAIN plan for a query, you can use the SHOW PROFILE command, which allows you to diagnose the execution of statements that have been run within the current session.
-
-To enable profiling and profile a specific query in a session, run the following set of commands:
-
-```
-SET profiling = 1;
-<QUERY>;
-SHOW PROFILES;
-SHOW PROFILE FOR QUERY <X>;
-```
-
-> [!NOTE]
-> Profiling is available only for statements run in the current session; historical statements can't be profiled.
->
-
-Let's take a closer look at using these commands to profile a query. First, to enable profiling for the current session, run the `SET PROFILING = 1` command:
-
-```
-mysql> SET PROFILING = 1;
-Query OK, 0 rows affected, 1 warning (0.00 sec)
-```
-
-Next, execute a suboptimal query that performs a full table scan:
-
-```
-mysql> select * from sbtest8 where c like '%99098187165%';
-+----+---------+-------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+
-| id | k       | c                                                                                                                       | pad                                                         |
-+----+---------+-------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+
-| 10 | 5035785 | 81674956652-89815953173-84507133182-62502329576-99098187165-62672357237-37910808188-52047270287-89115790749-78840418590 | 91637025586-81807791530-84338237594-90990131533-07427691758 |
-+----+---------+-------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+
-1 row in set (27.60 sec)
-```
-
-Then, display a list of all available query profiles by running the `SHOW PROFILES` command:
-
-```
-mysql> SHOW PROFILES;
-+----------+-------------+-----------------------------------------------------+
-| Query_ID | Duration    | Query                                               |
-+----------+-------------+-----------------------------------------------------+
-|        1 | 27.59450000 | select * from sbtest8 where c like '%99098187165%'  |
-+----------+-------------+-----------------------------------------------------+
-1 row in set, 1 warning (0.00 sec)
-```
-
-Finally, to display the profile for query 1, run the `SHOW PROFILE FOR QUERY 1` command.
-
-```
-mysql> SHOW PROFILE FOR QUERY 1;
-+----------------------+-----------+
-| Status               | Duration  |
-+----------------------+-----------+
-| starting             |  0.000102 |
-| checking permissions |  0.000028 |
-| Opening tables       |  0.000033 |
-| init                 |  0.000035 |
-| System lock          |  0.000018 |
-| optimizing           |  0.000017 |
-| statistics           |  0.000025 |
-| preparing            |  0.000019 |
-| executing            |  0.000011 |
-| Sending data         | 27.594038 |
-| end                  |  0.000041 |
-| query end            |  0.000014 |
-| closing tables       |  0.000013 |
-| freeing items        |  0.000088 |
-| cleaning up          |  0.000020 |
-+----------------------+-----------+
-15 rows in set, 1 warning (0.00 sec)
-```
-
-## Listing the most used queries on the database server
-
-Whenever you're troubleshooting query performance, it's helpful to understand which queries are most often run on your MySQL server. You can use this information to gauge whether any of the top queries are taking longer than usual to run. In addition, a developer or DBA can use this information to identify whether any query has had a sudden increase in execution count and duration.
-
-To list the top 10 most executed queries against your Azure Database for MySQL server, run the following query:
-
-```
-SELECT digest_text AS normalized_query,
- count_star AS all_occurrences,
- Concat(Round(sum_timer_wait / 1000000000000, 3), ' s') AS total_time,
- Concat(Round(min_timer_wait / 1000000000000, 3), ' s') AS min_time,
- Concat(Round(max_timer_wait / 1000000000000, 3), ' s') AS max_time,
- Concat(Round(avg_timer_wait / 1000000000000, 3), ' s') AS avg_time,
- Concat(Round(sum_lock_time / 1000000000000, 3), ' s') AS total_locktime,
- sum_rows_affected AS sum_rows_changed,
- sum_rows_sent AS sum_rows_selected,
- sum_rows_examined AS sum_rows_scanned,
- sum_created_tmp_tables,
- sum_select_scan,
- sum_no_index_used,
- sum_no_good_index_used
-FROM performance_schema.events_statements_summary_by_digest
-ORDER BY sum_timer_wait DESC LIMIT 10;
-```
-
-> [!NOTE]
-> Use this query to benchmark the top executed queries in your database server and determine if there's been a change in the top queries or if any existing queries in the initial benchmark have increased in run duration.
->
-
-## Monitoring InnoDB garbage collection
-
-When InnoDB garbage collection is blocked or delayed, the database can develop a substantial purge lag that can negatively affect storage utilization and query performance.
-
-The InnoDB rollback segment history list length (HLL) measures the number of change records stored in the undo log. A growing HLL value indicates that InnoDB's garbage collection threads (purge threads) aren't keeping up with the write workload, or that purging is blocked by a long running query or transaction.
-
-Excessive delays in garbage collection can have severe, negative consequences:
-
-* The InnoDB system tablespace will expand, thus accelerating the growth of the underlying storage volume. At times, the system tablespace can swell by several terabytes as a result of a blocked purge.
-* Delete-marked records won't be removed in a timely fashion. This can cause InnoDB tablespaces to grow and prevents the engine from reusing the storage occupied by these records.
-* The performance of all queries might degrade, and CPU utilization might increase because of the growth of InnoDB storage structures.
-
-As a result, it's important to monitor HLL values, patterns, and trends.
-
-### Finding HLL values
-
-You can find the HLL value by running the `SHOW ENGINE INNODB STATUS` command. The value is listed in the output, under the TRANSACTIONS heading:
-
-```
-mysql> show engine innodb status\G
-*************************** 1. row ***************************
-
-(...)
-
-
-TRANSACTIONS
-
-Trx id counter 52685768
-Purge done for trx's n:o < 52680802 undo n:o < 0 state: running but idle
-History list length 2964300
-
-(...)
-```
-
-You can also determine the HLL value by querying the `information_schema.innodb_metrics` table:
-
-```
-mysql> select count from information_schema.innodb_metrics
- -> where name = 'trx_rseg_history_len';
-+---------+
-| count   |
-+---------+
-| 2964300 |
-+---------+
-1 row in set (0.00 sec)
-```
-
-### Interpreting HLL values
-
-When interpreting HLL values, consider the guidelines listed in the following table:
-
-| **Value** | **Notes** |
-|---|---|
-| Less than ~10,000 | Normal values, indicating that garbage collection isn't falling behind. |
-| Between ~10,000 and ~1,000,000 | These values indicate a minor lag in garbage collection. Such values may be acceptable if they remain steady and don't increase. |
-| Greater than ~1,000,000 | These values should be investigated and may require corrective actions. |
-
-### Addressing excessive HLL values
-
-If the HLL shows large spikes or exhibits a pattern of periodic growth, investigate the queries and transactions running on your Azure Database for MySQL instance immediately. Then you can resolve any workload issues that might be preventing the progress of the garbage collection process. While the database isn't expected to be entirely free of purge lag, you must not let the lag grow uncontrollably.
-
-To obtain transaction information from the `information_schema.innodb_trx` table, for example, run the following commands:
-
-```
-select * from information_schema.innodb_trx
-order by trx_started asc\G
-```
-
-The detail in the `trx_started` column will help you calculate the transaction age.
-
-```
-mysql> select * from information_schema.innodb_trx
- -> order by trx_started asc\G
-*************************** 1. row ***************************
- trx_id: 8150550
- trx_state: RUNNING
- trx_started: 2021-11-13 20:50:11
- trx_requested_lock_id: NULL
- trx_wait_started: NULL
- trx_weight: 0
- trx_mysql_thread_id: 19
- trx_query: select * from employees where DATE(hire_date) > DATE('1998-04-05') AND first_name like '%geo%';
-(…)
-```
-
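-To have the server compute each transaction's age directly instead of reading `trx_started` by eye, you can use standard MySQL date functions. This is a minimal sketch:
-
-```sql
--- List open transactions with their age in seconds, oldest first
-SELECT trx_id,
-       trx_started,
-       TIMESTAMPDIFF(SECOND, trx_started, NOW()) AS trx_age_seconds
-FROM information_schema.innodb_trx
-ORDER BY trx_started ASC;
-```
-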
-For information about current database sessions, including the time spent in the session's current state, check the `information_schema.processlist` table. The following output, for example, shows a session that's been actively executing a query for the last 1462 seconds:
-
-```
-mysql> select user, host, db, command, time, info
- -> from information_schema.processlist
- -> order by time desc\G
-*************************** 1. row ***************************
- user: test
- host: 172.31.19.159:38004
- db: employees
-command: Query
- time: 1462
- info: select * from employees where DATE(hire_date) > DATE('1998-04-05') AND first_name like '%geo%';
-
-(...)
-```
-
-## Recommendations
-
-* Ensure that your database has enough resources allocated to run your queries. At times, you may need to scale up the instance size to get more CPU cores and additional memory to accommodate your workload.
-* Avoid large or long-running transactions by breaking them into smaller transactions.
-* Configure `innodb_purge_threads` based on your workload to improve the efficiency of background purge operations (see the check after this list).
- > [!NOTE]
- > Test any changes to this server variable for each environment to gauge the change in engine behavior.
- >
-
-* Use alerts on "Host CPU Percent", "Host Memory Percent", and "Total Connections" so that you get notifications if the system exceeds any of the specified thresholds.
-* Use Query Performance Insights or Azure Workbooks to identify any problematic or slow-running queries, and then optimize them.
-* For production database servers, collect diagnostics at regular intervals to ensure that everything is running smoothly. If not, troubleshoot and resolve any issues that you identify.
-
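-For the `innodb_purge_threads` recommendation in the list above, a reasonable first step is to compare the configured value with the current purge backlog. The following read-only sketch only inspects state; changing the parameter itself is done through the server parameters settings in the Azure portal or the Azure CLI:
-
-```sql
--- Check the configured number of purge threads and the current purge backlog
-SHOW GLOBAL VARIABLES LIKE 'innodb_purge_threads';
-
-SELECT count
-FROM information_schema.innodb_metrics
-WHERE name = 'trx_rseg_history_len';
-```
-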
-## Next steps
-
-To find peer answers to your most important questions or to post or answer a question, visit [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
mysql Howto Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-query-performance.md
- Title: Profile query performance - Azure Database for MySQL
-description: Learn how to profile query performance in Azure Database for MySQL by using EXPLAIN.
- Previously updated: 3/30/2022
-# Profile query performance in Azure Database for MySQL using EXPLAIN
--
-**EXPLAIN** is a handy tool that can help you optimize queries. You can use an EXPLAIN statement to get information about how SQL statements are executed. The following example shows the output from running an EXPLAIN statement.
-
-```sql
-mysql> EXPLAIN SELECT * FROM tb1 WHERE id=100\G
-*************************** 1. row ***************************
- id: 1
- select_type: SIMPLE
- table: tb1
- partitions: NULL
- type: ALL
-possible_keys: NULL
- key: NULL
- key_len: NULL
- ref: NULL
- rows: 995789
- filtered: 10.00
- Extra: Using where
-```
-
-In this example, the value of *key* is NULL, which means that MySQL can't locate any indexes optimized for the query. As a result, it performs a full table scan. Let's optimize this query by adding an index on the **ID** column, and then run the EXPLAIN statement again.
-
-```sql
-mysql> ALTER TABLE tb1 ADD KEY (id);
-mysql> EXPLAIN SELECT * FROM tb1 WHERE id=100\G
-*************************** 1. row ***************************
- id: 1
- select_type: SIMPLE
- table: tb1
- partitions: NULL
- type: ref
-possible_keys: id
- key: id
- key_len: 4
- ref: const
- rows: 1
- filtered: 100.00
- Extra: NULL
-```
-
-Now, the output shows that MySQL uses an index to limit the number of rows to 1, which dramatically shortens the search time.
-
-## Covering index
-
-A covering index includes all columns of a query, which reduces value retrieval from data tables. The following **GROUP BY** statement and related output illustrate this.
-
-```sql
-mysql> EXPLAIN SELECT MAX(c1), c2 FROM tb1 WHERE c2 LIKE '%100' GROUP BY c1\G
-*************************** 1. row ***************************
- id: 1
- select_type: SIMPLE
- table: tb1
- partitions: NULL
- type: ALL
-possible_keys: NULL
- key: NULL
- key_len: NULL
- ref: NULL
- rows: 995789
- filtered: 11.11
- Extra: Using where; Using temporary; Using filesort
-```
-
-The output shows that MySQL doesn't use any indexes, because proper indexes are unavailable. The output also shows *Using temporary; Using filesort*, which indicates that MySQL creates a temporary table to satisfy the **GROUP BY** clause.
-
-Creating an index only on column **c2** makes no difference, and MySQL still needs to create a temporary table:
-
-```sql
-mysql> ALTER TABLE tb1 ADD KEY (c2);
-mysql> EXPLAIN SELECT MAX(c1), c2 FROM tb1 WHERE c2 LIKE '%100' GROUP BY c1\G
-*************************** 1. row ***************************
- id: 1
- select_type: SIMPLE
- table: tb1
- partitions: NULL
- type: ALL
-possible_keys: NULL
- key: NULL
- key_len: NULL
- ref: NULL
- rows: 995789
- filtered: 11.11
- Extra: Using where; Using temporary; Using filesort
-```
-
-In this case, you can create a **covered index** on both **c1** and **c2** by adding the value of **c2** directly in the index, which eliminates further data lookup.
-
-```sql
-mysql> ALTER TABLE tb1 ADD KEY covered(c1,c2);
-mysql> EXPLAIN SELECT MAX(c1), c2 FROM tb1 WHERE c2 LIKE '%100' GROUP BY c1\G
-*************************** 1. row ***************************
- id: 1
- select_type: SIMPLE
- table: tb1
- partitions: NULL
- type: index
-possible_keys: covered
- key: covered
- key_len: 108
- ref: NULL
- rows: 995789
- filtered: 11.11
- Extra: Using where; Using index
-```
-
-As the output of the EXPLAIN statement above shows, MySQL now uses the covered index and avoids having to create a temporary table.
-
-## Combined index
-
-A combined index consists of values from multiple columns and can be considered an array of rows that are sorted by concatenating the values of the indexed columns. This method can be useful in a **GROUP BY** statement.
-
-```sql
-mysql> EXPLAIN SELECT c1, c2 from tb1 WHERE c2 LIKE '%100' ORDER BY c1 DESC LIMIT 10\G
-*************************** 1. row ***************************
- id: 1
- select_type: SIMPLE
- table: tb1
- partitions: NULL
- type: ALL
-possible_keys: NULL
- key: NULL
- key_len: NULL
- ref: NULL
- rows: 995789
- filtered: 11.11
- Extra: Using where; Using filesort
-```
-
-MySQL performs a *file sort* operation that is fairly slow, especially when it has to sort many rows. To optimize this query, create a combined index on both of the columns that are being sorted.
-
-```sql
-mysql> ALTER TABLE tb1 ADD KEY my_sort2 (c1, c2);
-mysql> EXPLAIN SELECT c1, c2 from tb1 WHERE c2 LIKE '%100' ORDER BY c1 DESC LIMIT 10\G
-*************************** 1. row ***************************
- id: 1
- select_type: SIMPLE
- table: tb1
- partitions: NULL
- type: index
-possible_keys: NULL
- key: my_sort2
- key_len: 108
- ref: NULL
- rows: 10
- filtered: 11.11
- Extra: Using where; Using index
-```
-
-The output of the EXPLAIN statement now shows that MySQL uses a combined index to avoid additional sorting as the index is already sorted.
-
-## Conclusion
-
-You can increase performance significantly by using EXPLAIN together with different types of indexes. Having an index on a table doesn't necessarily mean that MySQL can use it for your queries. Always validate your assumptions by using EXPLAIN and optimize your queries using indexes.
-
-## Next steps
--- To find peer answers to your most important questions or to post or answer a question, visit [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
mysql Howto Troubleshoot Replication Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-replication-latency.md
- Title: Troubleshoot replication latency - Azure Database for MySQL
-description: Learn how to troubleshoot replication latency by using Azure Database for MySQL read replicas.
-keywords: mysql, troubleshoot, replication latency in seconds
- Previously updated: 01/13/2021
-# Troubleshoot replication latency in Azure Database for MySQL
--
-The [read replica](concepts-read-replicas.md) feature allows you to replicate data from an Azure Database for MySQL server to a read-only replica server. You can scale out workloads by routing read and reporting queries from the application to replica servers. This setup reduces the pressure on the source server. It also improves overall performance and latency of the application as it scales.
-
-Replicas are updated asynchronously by using the MySQL engine's native binary log (binlog) file position-based replication technology. For more information, see [MySQL binlog file position-based replication configuration overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
-
-The replication lag on the secondary read replicas depends on several factors. These factors include but aren't limited to:
-
-- Network latency.
-- Transaction volume on the source server.
-- Compute tier of the source server and secondary read replica server.
-- Queries running on the source server and secondary server.
-
-In this article, you'll learn how to troubleshoot replication latency in Azure Database for MySQL. You'll also understand some common causes of increased replication latency on replica servers.
-
-> [!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
->
-
-## Replication concepts
-
-When a binary log is enabled, the source server writes committed transactions into the binary log, which is used for replication. The binary log is turned on by default for all newly provisioned servers that support up to 16 TB of storage. Two threads run on each replica server: one is the *IO thread*, and the other is the *SQL thread*:
-- The IO thread connects to the source server and requests updated binary logs. This thread receives the binary log updates. Those updates are saved on a replica server, in a local log called the *relay log*.
-- The SQL thread reads the relay log and then applies the data changes on replica servers.
-
-## Monitoring replication latency
-
-Azure Database for MySQL provides the replication lag metric, in seconds, in [Azure Monitor](concepts-monitoring.md). This metric is available only on read replica servers. It's calculated from the `Seconds_Behind_Master` value that's available in MySQL.
-
-To understand the cause of increased replication latency, connect to the replica server by using [MySQL Workbench](connect-workbench.md) or [Azure Cloud Shell](https://shell.azure.com). Then run the following command.
-
->[!NOTE]
-> In your code, replace the example values with your replica server name and admin username. The admin username requires `@<servername>` for Azure Database for MySQL.
-
-```azurecli-interactive
-mysql --host=myreplicademoserver.mysql.database.azure.com --user=myadmin@mydemoserver -p
-```
-
-Here's how the experience looks in the Cloud Shell terminal:
-
-```
-Requesting a Cloud Shell.Succeeded.
-Connecting terminal...
-
-Welcome to Azure Cloud Shell
-
-Type "az" to use Azure CLI
-Type "help" to learn about Cloud Shell
-
-user@Azure:~$mysql -h myreplicademoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
-Enter password:
-Welcome to the MySQL monitor. Commands end with ; or \g.
-Your MySQL connection id is 64796
-Server version: 5.6.42.0 Source distribution
-
-Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
-
-Oracle is a registered trademark of Oracle Corporation and/or its
-affiliates. Other names may be trademarks of their respective
-owners.
-
-Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
-mysql>
-```
-
-In the same Cloud Shell terminal, run the following command:
-
-```
-mysql> SHOW SLAVE STATUS;
-```
-
-Here's a typical output:
-
->[!div class="mx-imgBorder"]
-> :::image type="content" source="./media/howto-troubleshoot-replication-latency/show-status.png" alt-text="Monitoring replication latency":::
-
-The output contains a lot of information. Normally, you need to focus on only the rows that the following table describes.
-
-|Metric|Description|
-|---|---|
-|Slave_IO_State| Represents the current status of the IO thread. Normally, the status is "Waiting for master to send event" if the source (master) server is synchronizing. A status such as "Connecting to master" indicates that the replica lost the connection to the source server. Make sure the source server is running, or check to see whether a firewall is blocking the connection.|
-|Master_Log_File| Represents the binary log file to which the source server is writing.|
-|Read_Master_Log_Pos| Indicates where the source server is writing in the binary log file.|
-|Relay_Master_Log_File| Represents the binary log file that the replica server is reading from the source server.|
-|Slave_IO_Running| Indicates whether the IO thread is running. The value should be `Yes`. If the value is `No`, the replication is likely broken.|
-|Slave_SQL_Running| Indicates whether the SQL thread is running. The value should be `Yes`. If the value is `No`, the replication is likely broken.|
-|Exec_Master_Log_Pos| Indicates the position of the Relay_Master_Log_File that the replica is applying. If there's latency, then this position sequence should be smaller than Read_Master_Log_Pos.|
-|Relay_Log_Space|Indicates the total combined size of all existing relay log files. You can check the upper limit by running `SHOW GLOBAL VARIABLES LIKE 'relay_log_space_limit';`.|
-|Seconds_Behind_Master| Displays replication latency in seconds.|
-|Last_IO_Errno|Displays the IO thread error code, if any. For more information about these codes, see the [MySQL server error message reference](https://dev.mysql.com/doc/mysql-errors/5.7/en/server-error-reference.html).|
-|Last_IO_Error| Displays the IO thread error message, if any.|
-|Last_SQL_Errno|Displays the SQL thread error code, if any. For more information about these codes, see the [MySQL server error message reference](https://dev.mysql.com/doc/mysql-errors/5.7/en/server-error-reference.html).|
-|Last_SQL_Error|Displays the SQL thread error message, if any.|
-|Slave_SQL_Running_State| Indicates the current SQL thread status. In this state, `System lock` is normal. It's also normal to see a status of `Waiting for dependent transaction to commit`. This status indicates that the replica is waiting for the source server to update committed transactions.|
-
-If Slave_IO_Running is `Yes` and Slave_SQL_Running is `Yes`, then the replication is running fine.
-
-Next, check Last_IO_Errno, Last_IO_Error, Last_SQL_Errno, and Last_SQL_Error. These fields display the error number and error message of the most recent error that caused the SQL thread to stop. An error number of `0` and an empty message means there's no error. Investigate any nonzero error value by checking the error code in the [MySQL server error message reference](https://dev.mysql.com/doc/mysql-errors/5.7/en/server-error-reference.html).
-
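-On MySQL 5.7 and later, the same error details are also exposed through the Performance Schema replication tables, which can be easier to consume programmatically than `SHOW SLAVE STATUS`. A minimal sketch:
-
-```sql
--- Check the state and last error of the replication applier (SQL) thread(s)
-SELECT channel_name,
-       service_state,
-       last_error_number,
-       last_error_message,
-       last_error_timestamp
-FROM performance_schema.replication_applier_status_by_worker;
-```
-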
-## Common scenarios for high replication latency
-
-The following sections address scenarios in which high replication latency is common.
-
-### Network latency or high CPU consumption on the source server
-
-If you see the following values, then replication latency is likely caused by high network latency or high CPU consumption on the source server.
-
-```
-Slave_IO_State: Waiting for master to send event
-Master_Log_File: the binary log file sequence is larger than Relay_Master_Log_File, e.g. mysql-bin.00020
-Relay_Master_Log_File: the file sequence is smaller than Master_Log_File, e.g. mysql-bin.00010
-```
-
-In this case, the IO thread is running and is waiting on the source server. The source server has already written to binary log file number 20. The replica has received only up to file number 10. The primary factors for high replication latency in this scenario are network speed or high CPU utilization on the source server.
-
-In Azure, network latency within a region can typically be measured in milliseconds. Across regions, latency ranges from milliseconds to seconds.
-
-In most cases, the connection delay between IO threads and the source server is caused by high CPU utilization on the source server. The IO threads are processed slowly. You can detect this problem by using Azure Monitor to check CPU utilization and the number of concurrent connections on the source server.
-
-If you don't see high CPU utilization on the source server, the problem might be network latency. If network latency is suddenly abnormally high, check the [Azure status page](https://status.azure.com/status) for known issues or outages.
-
-### Heavy bursts of transactions on the source server
-
-If you see the following values, then a heavy burst of transactions on the source server is likely causing the replication latency.
-
-```
-Slave_IO_State: Waiting for the slave SQL thread to free enough relay log space
-Master_Log_File: the binary log file sequence is larger than Relay_Master_Log_File, e.g. mysql-bin.00020
-Relay_Master_Log_File: the file sequence is smaller than Master_Log_File, e.g. mysql-bin.00010
-```
-
-The output shows that the replica is retrieving the binary log behind the source server, but the replica IO thread indicates that the relay log space is already full.
-
-Network speed isn't causing the delay. The replica is trying to catch up. But the updated binary log size exceeds the upper limit of the relay log space.
-
-To troubleshoot this issue, enable the [slow query log](concepts-server-logs.md) on the source server. Use slow query logs to identify long-running transactions on the source server. Then tune the identified queries to reduce the latency on the server.
-
-Replication latency of this sort is commonly caused by the data load on the source server. When source servers have weekly or monthly data loads, replication latency is unfortunately unavoidable. The replica servers eventually catch up after the data load on the source server finishes.
-
-### Slowness on the replica server
-
-If you observe the following values, then the problem might be on the replica server.
-
-```
-Slave_IO_State: Waiting for master to send event
-Master_Log_File: The binary log file sequence equals Relay_Master_Log_File, e.g. mysql-bin.000191
-Read_Master_Log_Pos: The position the source server has written to the above file is larger than Relay_Log_Pos, e.g. 103978138
-Relay_Master_Log_File: mysql-bin.000191
-Slave_IO_Running: Yes
-Slave_SQL_Running: Yes
-Exec_Master_Log_Pos: The position the replica has executed from the master binary log file is smaller than Read_Master_Log_Pos, e.g. 13468882
-Seconds_Behind_Master: There is latency, and the value here is greater than 0
-```
-
-In this scenario, the output shows that both the IO thread and the SQL thread are running well. The replica reads the same binary log file that the source server writes. However, the replica server lags in applying the same transactions from the source server.
-
-The following sections describe common causes of this kind of latency.
-
-#### No primary key or unique key on a table
-
-Azure Database for MySQL uses row-based replication. The source server writes events to the binary log, recording changes in individual table rows. The SQL thread then replicates those changes to the corresponding table rows on the replica server. When a table lacks a primary key or unique key, the SQL thread scans all rows in the target table to apply the changes. This scan can cause replication latency.
-
-In MySQL, the primary key is an associated index that ensures fast query performance because it can't include NULL values. If you use the InnoDB storage engine, the table data is physically organized to do ultra-fast lookups and sorts based on the primary key.
-
-We recommend that you add a primary key on tables in the source server before you create the replica server. Add primary keys on the source server and then re-create read replicas to help improve replication latency.
-
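-If a table has no natural candidate for a primary key, one common approach is to add an auto-increment surrogate key. The following sketch uses a hypothetical table named `mytable`; validate the change on a non-production copy first, because rebuilding a large table can take time:
-
-```sql
--- Add an auto-increment surrogate primary key to a table that lacks one
-ALTER TABLE mytable
-  ADD COLUMN id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
-```
-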
-Use the following query to find out which tables are missing a primary key on the source server:
-
-```sql
-select tab.table_schema as database_name, tab.table_name
-from information_schema.tables tab left join
-information_schema.table_constraints tco
-on tab.table_schema = tco.table_schema
-and tab.table_name = tco.table_name
-and tco.constraint_type = 'PRIMARY KEY'
-where tco.constraint_type is null
-and tab.table_schema not in('mysql', 'information_schema', 'performance_schema', 'sys')
-and tab.table_type = 'BASE TABLE'
-order by tab.table_schema, tab.table_name;
-
-```
-
-#### Long-running queries on the replica server
-
-The workload on the replica server can make the SQL thread lag behind the IO thread. Long-running queries on the replica server are one of the common causes of high replication latency. To troubleshoot this problem, enable the [slow query log](concepts-server-logs.md) on the replica server.
-
-Slow queries can increase resource consumption or slow down the server so that the replica can't catch up with the source server. In this scenario, tune the slow queries. Faster queries prevent blockage of the SQL thread and improve replication latency significantly.
-
-#### DDL queries on the source server
-
-On the source server, a data definition language (DDL) command like [`ALTER TABLE`](https://dev.mysql.com/doc/refman/5.7/en/alter-table.html) can take a long time. While the DDL command is running, thousands of other queries might be running in parallel on the source server.
-
-When the DDL is replicated, to ensure database consistency, the MySQL engine runs the DDL in a single replication thread. During this task, all other replicated queries are blocked and must wait until the DDL operation finishes on the replica server. Even online DDL operations cause this delay. DDL operations increase replication latency.
-
-If you enabled the [slow query log](concepts-server-logs.md) on the source server, you can detect this latency problem by checking for a DDL command that ran on the source server. DDL operations such as dropping, renaming, and creating indexes can use the INPLACE algorithm for ALTER TABLE; other operations might need to copy the table data and rebuild the table.
-
-Typically, concurrent DML is supported for the INPLACE algorithm, but the operation might briefly take an exclusive metadata lock on the table during its preparation and execution phases. For the CREATE INDEX statement, you can use the ALGORITHM and LOCK clauses to influence the method of table copying and the level of concurrency for reading and writing. Note that adding a FULLTEXT or SPATIAL index still prevents DML operations.
-
-The following example creates an index by using ALGORITHM and LOCK clauses.
-
-```sql
-ALTER TABLE table_name ADD INDEX index_name (column), ALGORITHM=INPLACE, LOCK=NONE;
-```
-
-Unfortunately, for a DDL statement that requires a lock, you can't avoid replication latency. To reduce the potential effects, do these types of DDL operations during off-peak hours, for instance during the night.
-
-#### Downgraded replica server
-
-In Azure Database for MySQL, read replicas use the same server configuration as the source server. You can change the replica server configuration after it has been created.
-
-If the replica server is downgraded, the workload can consume more resources, which in turn can lead to replication latency. To detect this problem, use Azure Monitor to check the CPU and memory consumption of the replica server.
-
-In this scenario, we recommend that you keep the replica server's configuration at values equal to or greater than the values of the source server. This configuration allows the replica to keep up with the source server.
-
-#### Improving replication latency by tuning the source server parameters
-
-In Azure Database for MySQL, by default, replication is optimized to run with parallel threads on replicas. When high-concurrency workloads on the source server cause the replica server to fall behind, you can improve the replication latency by configuring the `binlog_group_commit_sync_delay` parameter on the source server.
-
-The `binlog_group_commit_sync_delay` parameter controls how many microseconds the binary log commit waits before synchronizing the binary log file. The benefit of this parameter is that instead of immediately applying every committed transaction, the source server sends the binary log updates in bulk. This delay reduces IO on the replica and helps improve performance.
-
-It might be useful to set the `binlog_group_commit_sync_delay` parameter to 1000 or so. Then monitor the replication latency. Set this parameter cautiously, and use it only for high-concurrency workloads.
-
-> [!IMPORTANT]
-> On the replica server, we recommend setting the `binlog_group_commit_sync_delay` parameter to 0. Unlike the source server, the replica server doesn't have high-concurrency writes, and increasing the value of `binlog_group_commit_sync_delay` on the replica server could inadvertently cause replication lag to increase.
-
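-Before and after changing the parameter, you can confirm the value that's in effect on the server. A minimal sketch:
-
-```sql
--- Confirm the current group commit delay (in microseconds)
-SHOW GLOBAL VARIABLES LIKE 'binlog_group_commit_sync_delay';
-```
-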
-For low-concurrency workloads that include many singleton transactions, the `binlog_group_commit_sync_delay` setting can increase latency. Latency can increase because the IO thread waits for bulk binary log updates even if only a few transactions are committed.
-
-## Next steps
-
-Check out the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
mysql Howto Troubleshoot Sys Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-sys-schema.md
- Title: Use the sys_schema - Azure Database for MySQL
-description: Learn how to use the sys_schema to find performance issues and maintain databases in Azure Database for MySQL.
- Previously updated: 3/10/2022
-# Tune performance and maintain databases in Azure Database for MySQL using the sys_schema
--
-The MySQL performance_schema, first available in MySQL 5.5, provides instrumentation for many vital server resources, such as memory allocation, stored programs, and metadata locking. However, the performance_schema contains more than 80 tables, and getting the necessary information often requires joining tables within the performance_schema and tables from the information_schema. Building on both performance_schema and information_schema, the sys_schema provides a powerful collection of [user-friendly views](https://dev.mysql.com/doc/refman/5.7/en/sys-schema-views.html) in a read-only database and is fully enabled in Azure Database for MySQL version 5.7.
--
-There are 52 views in the sys_schema, and each view has one of the following prefixes:
-- Host_summary or IO: I/O related latencies.
-- InnoDB: InnoDB buffer status and locks.
-- Memory: Memory usage by the host and users.
-- Schema: Schema-related information, such as auto increment, indexes, etc.
-- Statement: Information on SQL statements, such as statements that resulted in a full table scan or a long query time.
-- User: Resources consumed and grouped by users. Examples are file I/Os, connections, and memory.
-- Wait: Wait events grouped by host or user.
-
-Now let's look at some common usage patterns of the sys_schema. To begin with, we'll group the usage patterns into two categories: **Performance tuning** and **Database maintenance**.
-
-## Performance tuning
-
-### *sys.user_summary_by_file_io*
-
-IO is the most expensive operation in the database. We can find out the average IO latency by querying the *sys.user_summary_by_file_io* view. With the default 125 GB of provisioned storage, my IO latency is about 15 seconds.
--
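-A sketch of the query (the view returns one row per user, with the IO count and total IO latency):
-
-```sql
--- Summarize file IO and IO latency per user
-SELECT * FROM sys.user_summary_by_file_io;
-```
-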
-Because Azure Database for MySQL scales IO with respect to storage, after increasing my provisioned storage to 1 TB, my IO latency reduces to 571 ms.
--
-### *sys.schema_tables_with_full_table_scans*
-
-Despite careful planning, many queries can still result in full table scans. For additional information about the types of indexes and how to optimize them, see [How to troubleshoot query performance](./howto-troubleshoot-query-performance.md). Full table scans are resource-intensive and degrade your database performance. The quickest way to find tables with full table scans is to query the *sys.schema_tables_with_full_table_scans* view.
--
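-For example, the following sketch lists the affected tables; the column names follow the standard sys schema definition of the view:
-
-```sql
--- Find tables that are read with full table scans, busiest first
-SELECT object_schema, object_name, rows_full_scanned, latency
-FROM sys.schema_tables_with_full_table_scans
-ORDER BY rows_full_scanned DESC;
-```
-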
-### *sys.user_summary_by_statement_type*
-
-To troubleshoot database performance issues, it may be beneficial to identify the events happening inside your database, and using the *sys.user_summary_by_statement_type* view may just do the trick.
--
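-For example, the following sketch surfaces the statement types that each user runs most often; the column names follow the standard sys schema definition of the view:
-
-```sql
--- Summarize statement activity by user and statement type, most frequent first
-SELECT user, statement, total, total_latency
-FROM sys.user_summary_by_statement_type
-ORDER BY total DESC
-LIMIT 10;
-```
-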
-In this example, Azure Database for MySQL spent 53 minutes flushing the slow query log 44579 times. That's a long time and many IOs. You can reduce this activity by disabling the slow query log or by decreasing the frequency of slow query logging in the Azure portal.
-
-## Database maintenance
-
-### *sys.innodb_buffer_stats_by_table*
-
-> [!IMPORTANT]
-> Querying this view can impact performance. It is recommended to perform this troubleshooting during off-peak business hours.
-
-The InnoDB buffer pool resides in memory and is the main cache mechanism between the DBMS and storage. The size of the InnoDB buffer pool is tied to the performance tier and can't be changed unless a different product SKU is chosen. As with memory in your operating system, old pages are swapped out to make room for fresher data. To find out which tables consume most of the InnoDB buffer pool memory, you can query the *sys.innodb_buffer_stats_by_table* view.
--
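-A sketch of the query (the view reports formatted sizes, such as `16.00 KiB`, in the `allocated` and `data` columns):
-
-```sql
--- Show which tables occupy the most space in the InnoDB buffer pool
-SELECT object_schema, object_name, allocated, data, pages
-FROM sys.innodb_buffer_stats_by_table
-LIMIT 10;
-```
-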
-In one sample environment, other than system tables and views, each table in the mysqldatabase033 database, which hosts one of my WordPress sites, occupies 16 KB, or 1 page, of data in memory.
-
-### *sys.schema_unused_indexes* & *sys.schema_redundant_indexes*
-
-Indexes are great tools to improve read performance, but they do incur additional costs for inserts and storage. *sys.schema_unused_indexes* and *sys.schema_redundant_indexes* provide insights into unused or duplicate indexes.
---
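-Both views can be queried directly, as the following sketch shows. Before dropping an index reported as unused, confirm that the server has been up long enough to have observed your full workload, because these statistics reset at restart:
-
-```sql
--- List indexes that haven't been used since the server started
-SELECT * FROM sys.schema_unused_indexes;
-
--- List indexes that duplicate, or are made redundant by, another index
-SELECT * FROM sys.schema_redundant_indexes;
-```
-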
-## Conclusion
-
-In summary, the sys_schema is a great tool for both performance tuning and database maintenance. Make sure to take advantage of this feature in your Azure Database for MySQL.
-
-## Next steps
-- To find peer answers to your most important questions or to post or answer a question, visit [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/overview.md
- Title: Overview - Azure Database for MySQL
-description: Learn about the Azure Database for MySQL service, a relational database service in the Microsoft cloud based on the MySQL Community Edition.
- Previously updated: 3/18/2020
-# What is Azure Database for MySQL?
--
-Azure Database for MySQL is a relational database service in the Microsoft cloud based on the [MySQL Community Edition](https://www.mysql.com/products/community/) (available under the GPLv2 license) database engine, versions 5.6 (retired), 5.7, and 8.0. Azure Database for MySQL delivers:
-- Zone redundant and same zone high availability.
-- Maximum control with the ability to select your scheduled maintenance window.
-- Data protection using automatic backups and point-in-time-restore for up to 35 days.
-- Automated patching and maintenance for underlying hardware, operating system, and database engine to keep the service secure and up to date.
-- Predictable performance, using inclusive pay-as-you-go pricing.
-- Elastic scaling within seconds.
-- Cost optimization controls with low cost burstable SKU and ability to stop/start server.
-- Enterprise grade security, industry-leading compliance, and privacy to protect sensitive data at-rest and in-motion.
-- Monitoring and automation to simplify management and monitoring for large-scale deployments.
-- Industry-leading support experience.
-
-These capabilities require almost no administration and all are provided at no additional cost. They allow you to focus on rapid app development and accelerating your time to market rather than allocating precious time and resources to managing virtual machines and infrastructure. In addition, you can continue to develop your application with the open-source tools and platform of your choice to deliver with the speed and efficiency your business demands, all without having to learn new skills.
--
-## Deployment models
-
-Azure Database for MySQL powered by the MySQL community edition is available in two deployment modes:
-- Flexible Server
-- Single Server
-
-### Azure Database for MySQL - Flexible Server
-
-Azure Database for MySQL Flexible Server is a fully managed, production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. The flexible server architecture allows users to opt for high availability within a single availability zone or across multiple availability zones. Flexible Server provides better cost optimization controls with the ability to stop/start the server and a burstable compute tier, ideal for workloads that don't need full compute capacity continuously. Flexible Server also supports reserved instances, allowing you to save up to 63% on cost, ideal for production workloads with predictable compute capacity requirements. The service supports the community versions of MySQL 5.7 and 8.0, and is generally available today in a wide variety of [Azure regions](flexible-server/overview.md#azure-regions).
-
-The Flexible Server deployment option offers three compute tiers: Burstable, General Purpose, and Memory Optimized. Each tier offers different compute and memory capacity to support your database workloads. You can build your first app on a burstable tier for a few dollars a month, and then adjust the scale to meet the needs of your solution. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you need, and only when you need them. See [Compute and Storage](flexible-server/concepts-compute-storage.md) for details.
-
-Flexible servers are best suited for:
-- Ease of deployments, simplified scaling, and low database management overhead for functions like backups, high availability, security, and monitoring
-- Application developments requiring the community version of MySQL with better control and customizations
-- Production workloads with same-zone, zone redundant high availability and managed maintenance windows
-- Simplified development experience
-- Enterprise grade security
-
-For a detailed overview of the Flexible Server deployment mode, see the [flexible server overview](flexible-server/overview.md). For the latest updates on Flexible Server, see [What's new in Azure Database for MySQL - Flexible Server](flexible-server/whats-new.md).
-
-### Azure Database for MySQL - Single Server
-
-Azure Database for MySQL Single Server is a fully managed database service designed for minimal customization. The single server platform is designed to handle most of the database management functions, such as patching, backups, high availability, and security, with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability in a single availability zone. It supports the community versions of MySQL 5.6 (retired), 5.7, and 8.0. The service is generally available today in a wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
-
-Single servers are best suited **only for existing applications already leveraging single server**. For all new developments or migrations, Flexible Server is the recommended deployment option. To learn about the differences between the Flexible Server and Single Server deployment options, see [select the right deployment option for you](select-right-deployment-type.md).
-
-For a detailed overview of the Single Server deployment mode, see the [single server overview](single-server-overview.md). For the latest updates on Single Server, see [What's new in Azure Database for MySQL - Single Server](single-server-whats-new.md).
-
-## Contacts
-For any questions or suggestions you might have about working with Azure Database for MySQL, send an email to the Azure Database for MySQL Team ([@Ask Azure DB for MySQL](mailto:AskAzureDBforMySQL@service.microsoft.com)). This email address is not a technical support alias.
-
-In addition, consider the following points of contact as appropriate:
-
-- To contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
-- To fix an issue with your account, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
-- To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/d365community/forum/47b1e71d-ee24-ec11-b6e6-000d3a4f0da0).
-
-## Next steps
-
-Learn more about the two deployment modes for Azure Database for MySQL and choose the right options based on your needs.
-- [Single Server](single-server/index.yml)
-- [Flexible Server](flexible-server/index.yml)
-- [Choose the right MySQL deployment option for your workload](select-right-deployment-type.md)
mysql Partners Migration Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/partners-migration-mysql.md
- Title: Azure Database for MySQL Migration Partners | Microsoft Docs
-description: Lists of third-party migration partners with solutions that support Azure Database for MySQL.
- Previously updated: 08/18/2021
-# Azure Database for MySQL migration partners
--
-To broadly support your Azure Database for MySQL solution, you can choose from a wide variety of industry-leading partners and tools. This article highlights Microsoft partners with migration solutions that support Azure Database for MySQL.
-
-## Migration partners
-
-| Partner | Description | Links | Videos |
-||-|-|--|
-| ![Devart][1] |**Devart**<br>Founded in 1997, Devart is one of the leading developers of database management software, ALM solutions, and data providers for most popular database servers. dbForge Studio for MySQL provides functionality to transfer the data to a testing server or to completely migrate the entire database to a new production server.|[Website][devart_website]<br>[Twitter][devart_twitter]<br>[YouTube][devart_youtube]<br>[Contact][devart_contact] | |
-| ![SNP Technologies][2] |**SNP Technologies**<br>SNP Technologies is a cloud-only service provider, building secure and reliable solutions for businesses of the future. The company believes in generating real value for your business. From thought to execution, SNP Technologies shares a common purpose with clients, to turn their investment into an advantage.|[Website][snp_website]<br>[Twitter][snp_twitter]<br>[Contact][snp_contact] | |
-| ![Pragmatic Works][3] |**Pragmatic Works**<br>Pragmatic Works is a training and consulting company with deep expertise in data management and performance, Business Intelligence, Big Data, Power BI, and Azure. They focus on data optimization and improving the efficiency of SQL Server and cloud management.|[Website][pragmatic-works_website]<br>[Twitter][pragmatic-works_twitter]<br>[YouTube][pragmatic-works_youtube]<br>[Contact][pragmatic-works_contact] | |
-| ![Infosys][4] |**Infosys**<br>Infosys is a global leader in the latest digital services and consulting. With over three decades of experience managing the systems of global enterprises, Infosys expertly steers clients through their digital journey by enabling organizations with an AI-powered core. Doing so helps prioritize the execution of change. Infosys also provides businesses with agile digital at scale to deliver unprecedented levels of performance and customer delight.|[Website][infosys_website]<br>[Twitter][infosys_twitter]<br>[YouTube][infosys_youtube]<br>[Contact][infosys_contact] | |
-| ![credativ][5] |**credativ**<br>credativ is an independent consulting and services company. Since 1999, they have offered comprehensive services and technical support for the implementation and operation of Open Source software in business applications. Their comprehensive range of services includes strategic consulting, sound technical advice, qualified training, and personalized support up to 24 hours per day for all your IT needs.|[Marketplace][credativ_marketplace]<br>[Website][credativ_website]<br>[Twitter][credative_twitter]<br>[YouTube][credativ_youtube]<br>[Contact][credativ_contact] | |
-| ![Pactera][6] |**Pactera**<br>Pactera is a global company offering consulting, digital, technology, and operations services to the world's leading enterprises. From their roots in engineering to the latest in digital transformation, they give customers a competitive edge. Their proven methodologies and tools ensure your data is secure, authentic, and accurate.|[Website][pactera_website]<br>[Twitter][pactera_twitter]<br>[Contact][pactera_contact] | |
-
-## Next steps
-
-To learn more about some of Microsoft's other partners, see the [Microsoft Partner site](https://partner.microsoft.com/).
-
-<!--Image references-->
-[1]: ./media/partner-migration-mysql/devart-logo.png
-[2]: ./media/partner-migration-mysql/SNP_Logo.png
-[3]: ./media/partner-migration-mysql/PW-logo-text-CMYK1000.png
-[4]: ./media/partner-migration-mysql/InfosysLogo.png
-[5]: ./media/partner-migration-mysql/credativ_round_logo2.png
-[6]: ./media/partner-migration-mysql/Pactera_logo_small2.png
-
-<!--Website links -->
-[devart_website]:https://www.devart.com//
-[snp_website]:https://www.snp.com//
-[pragmatic-works_website]:https://pragmaticworks.com//
-[infosys_website]:https://www.infosys.com/
-[credativ_website]:https://www.credativ.com/postgresql-competence-center/microsoft-azure
-[pactera_website]:https://en.pactera.com/
-
-<!--Get Started Links-->
-<!--Datasheet Links-->
-<!--Marketplace Links -->
-[credativ_marketplace]:https://azuremarketplace.microsoft.com/de-de/marketplace/apps?search=credativ&page=1
-
-<!--Press links-->
-
-<!--YouTube links-->
-[devart_youtube]:https://www.youtube.com/user/DevartSoftware
-[pragmatic-works_youtube]:https://www.youtube.com/user/PragmaticWorks
-[infosys_youtube]:https://www.youtube.com/user/Infosys
-[credativ_youtube]:https://www.youtube.com/channel/UCnSnr6_TcILUQQvAwlYFc8A
-
-<!--Twitter links-->
-[devart_twitter]:https://twitter.com/DevartSoftware
-[snp_twitter]:https://twitter.com/snptechnologies
-[pragmatic-works_twitter]:https://twitter.com/PragmaticWorks
-[infosys_twitter]:https://twitter.com/infosys
-[credative_twitter]:https://twitter.com/credativ
-[pactera_twitter]:https://twitter.com/Pactera?s=17
-
-<!--Contact links-->
-[devart_contact]:https://www.devart.com/company/contact.html
-[snp_contact]:mailto:sachin@snp.com
-[pragmatic-works_contact]:mailto:marketing@pragmaticworks.com
-[infosys_contact]:https://www.infosys.com/contact/
-[credativ_contact]:mailto:info@credativ.com
-[pactera_contact]:mailto:shushi.gaur@pactera.com
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/policy-reference.md
- Title: Built-in policy definitions for Azure Database for MySQL
-description: Lists Azure Policy built-in policy definitions for Azure Database for MySQL. These built-in policy definitions provide common approaches to managing your Azure resources.
- Previously updated: 05/12/2022
-# Azure Policy built-in definitions for Azure Database for MySQL
--
-This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy
-definitions for Azure Database for MySQL. For additional Azure Policy built-ins for other services,
-see [Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
-
-The name of each built-in policy definition links to the policy definition in the Azure portal. Use
-the link in the **Version** column to view the source on the
-[Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
-
-## Azure Database for MySQL
--
-## Next steps
-
-- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
-- Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).
-- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
mysql Quickstart Create Mysql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/quickstart-create-mysql-server-database-using-arm-template.md
- Title: 'Quickstart: Create an Azure DB for MySQL - ARM template'
-description: In this Quickstart, learn how to create an Azure Database for MySQL server with virtual network integration, by using an Azure Resource Manager template.
- Previously updated: 05/19/2020
-# Quickstart: Use an ARM template to create an Azure Database for MySQL server
--
-Azure Database for MySQL is a managed service that you use to run, manage, and scale highly available MySQL databases in the cloud. In this quickstart, you use an Azure Resource Manager template (ARM template) to create an Azure Database for MySQL server with virtual network integration. You can create the server in the Azure portal, Azure CLI, or Azure PowerShell.
--
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-
-[:::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.dbformysql%2Fmanaged-mysql-with-vnet%2Fazuredeploy.json)
-
-## Prerequisites
-
-# [Portal](#tab/azure-portal)
-
-An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
-
-# [PowerShell](#tab/PowerShell)
-
-* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
-* If you want to run the code locally, [Azure PowerShell](/powershell/azure/).
-
-# [CLI](#tab/CLI)
-
-* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
-* If you want to run the code locally, [Azure CLI](/cli/azure/).
---
-## Review the template
-
-You create an Azure Database for MySQL server with a defined set of compute and storage resources. To learn more, see [Azure Database for MySQL pricing tiers](concepts-pricing-tiers.md). You create the server within an [Azure resource group](../azure-resource-manager/management/overview.md).
-
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/managed-mysql-with-vnet/).
--
-The template defines five Azure resources:
-
-* [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
-* [**Microsoft.Network/virtualNetworks/subnets**](/azure/templates/microsoft.network/virtualnetworks/subnets)
-* [**Microsoft.DBforMySQL/servers**](/azure/templates/microsoft.dbformysql/servers)
-* [**Microsoft.DBforMySQL/servers/virtualNetworkRules**](/azure/templates/microsoft.dbformysql/servers/virtualnetworkrules)
-* [**Microsoft.DBforMySQL/servers/firewallRules**](/azure/templates/microsoft.dbformysql/servers/firewallrules)
-
-More Azure Database for MySQL template samples can be found in the [quickstart template gallery](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Dbformysql&pageNumber=1&sort=Popular).
-
-## Deploy the template
-
-# [Portal](#tab/azure-portal)
-
-Select the following link to deploy the Azure Database for MySQL server template in the Azure portal:
-
-[:::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.dbformysql%2Fmanaged-mysql-with-vnet%2Fazuredeploy.json)
-
-On the **Deploy Azure Database for MySQL with VNet** page:
-
-1. For **Resource group**, select **Create new**, enter a name for the new resource group, and select **OK**.
-
-2. If you created a new resource group, select a **Location** for the resource group and the new server.
-
-3. Enter a **Server Name**, **Administrator Login**, and **Administrator Login Password**.
-
- :::image type="content" source="./media/quickstart-create-mysql-server-database-using-arm-template/deploy-azure-database-for-mysql-with-vnet.png" alt-text="Deploy Azure Database for MySQL with VNet window, Azure quickstart template, Azure portal":::
-
-4. Change the other default settings if you want:
-
- * **Subscription**: the Azure subscription you want to use for the server.
- * **Sku Capacity**: the vCore capacity, which can be *2* (the default), *4*, *8*, *16*, *32*, or *64*.
- * **Sku Name**: the SKU tier prefix, SKU family, and SKU capacity, joined by underscores, such as *B_Gen5_1*, *GP_Gen5_2* (the default), or *MO_Gen5_32*.
- * **Sku Size MB**: the storage size, in megabytes, of the Azure Database for MySQL server (default *5120*).
- * **Sku Tier**: the deployment tier, such as *Basic*, *GeneralPurpose* (the default), or *MemoryOptimized*.
- * **Sku Family**: *Gen4* or *Gen5* (the default), which indicates hardware generation for server deployment.
- * **Mysql Version**: the version of MySQL server to deploy, such as *5.6* or *5.7* (the default).
- * **Backup Retention Days**: the desired period for geo-redundant backup retention, in days (default *7*).
- * **Geo Redundant Backup**: *Enabled* or *Disabled* (the default), depending on geo-disaster recovery (Geo-DR) requirements.
- * **Virtual Network Name**: the name of the virtual network (default *azure_mysql_vnet*).
- * **Subnet Name**: the name of the subnet (default *azure_mysql_subnet*).
- * **Virtual Network Rule Name**: the name of the virtual network rule allowing the subnet (default *AllowSubnet*).
- * **Vnet Address Prefix**: the address prefix for the virtual network (default *10.0.0.0/16*).
- * **Subnet Prefix**: the address prefix for the subnet (default *10.0.0.0/16*).
-
-5. Read the terms and conditions, and then select **I agree to the terms and conditions stated above**.
-
-6. Select **Purchase**.
-
-# [PowerShell](#tab/PowerShell)
-
-Use the following interactive code to create a new Azure Database for MySQL server using the template. The code prompts you for the new server name, the name and location of a new resource group, and an administrator account name and password.
-
-To run the code in Azure Cloud Shell, select **Try it** in the upper-right corner of any code block.
-
-```azurepowershell-interactive
-$serverName = Read-Host -Prompt "Enter a name for the new Azure Database for MySQL server"
-$resourceGroupName = Read-Host -Prompt "Enter a name for the new resource group where the server will exist"
-$location = Read-Host -Prompt "Enter an Azure region (for example, centralus) for the resource group"
-$adminUser = Read-Host -Prompt "Enter the Azure Database for MySQL server's administrator account name"
-$adminPassword = Read-Host -Prompt "Enter the administrator password" -AsSecureString
-
-New-AzResourceGroup -Name $resourceGroupName -Location $location # Use this command when you need to create a new resource group for your deployment
-New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName `
- -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.dbformysql/managed-mysql-with-vnet/azuredeploy.json `
- -serverName $serverName `
- -administratorLogin $adminUser `
- -administratorLoginPassword $adminPassword
-
-Read-Host -Prompt "Press [ENTER] to continue ..."
-```
-
-# [CLI](#tab/CLI)
-
-Use the following interactive code to create a new Azure Database for MySQL server using the template. The code prompts you for the new server name, the name and location of a new resource group, and an administrator account name and password.
-
-To run the code in Azure Cloud Shell, select **Try it** in the upper-right corner of any code block.
-
-```azurecli-interactive
-echo "Enter a name for the new Azure Database for MySQL server:" &&
-read serverName &&
-echo "Enter a name for the new resource group where the server will exist:" &&
-read resourceGroupName &&
-echo "Enter an Azure region (for example, centralus) for the resource group:" &&
-read location &&
-echo "Enter the Azure Database for MySQL server's administrator account name:" &&
-read adminUser &&
-echo "Enter the administrator password:" &&
-read adminPassword &&
-params='serverName='$serverName' administratorLogin='$adminUser' administratorLoginPassword='$adminPassword &&
-az group create --name $resourceGroupName --location $location &&
-az deployment group create --resource-group $resourceGroupName --parameters $params --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.dbformysql/managed-mysql-with-vnet/azuredeploy.json &&
-echo "Press [ENTER] to continue ..."
-```
---
-## Review deployed resources
-
-# [Portal](#tab/azure-portal)
-
-Follow these steps to see an overview of your new Azure Database for MySQL server:
-
-1. In the [Azure portal](https://portal.azure.com), search for and select **Azure Database for MySQL servers**.
-
-2. In the database list, select your new server. The **Overview** page for your new Azure Database for MySQL server appears.
-
-# [PowerShell](#tab/PowerShell)
-
-Run the following interactive code to view details about your Azure Database for MySQL server. You'll have to enter the name of the new server.
-
-```azurepowershell-interactive
-$serverName = Read-Host -Prompt "Enter the name of your Azure Database for MySQL server"
-Get-AzResource -ResourceType "Microsoft.DBforMySQL/servers" -Name $serverName | ft
-Write-Host "Press [ENTER] to continue..."
-```
-
-# [CLI](#tab/CLI)
-
-Run the following interactive code to view details about your Azure Database for MySQL server. You'll have to enter the name and the resource group of the new server.
-
-```azurecli-interactive
-echo "Enter your Azure Database for MySQL server name:" &&
-read serverName &&
-echo "Enter the resource group where the Azure Database for MySQL server exists:" &&
-read resourcegroupName &&
-az resource show --resource-group $resourcegroupName --name $serverName --resource-type "Microsoft.DbForMySQL/servers"
-```
---
-## Export an ARM template from the portal
-You can [export an ARM template](../azure-resource-manager/templates/export-template-portal.md) from the Azure portal. There are two ways to export a template:
-
-- [Export from resource group or resource](../azure-resource-manager/templates/export-template-portal.md#export-template-from-a-resource). This option generates a new template from existing resources. The exported template is a "snapshot" of the current state of the resource group. You can export an entire resource group or specific resources within that resource group.
-- [Export before deployment or from history](../azure-resource-manager/templates/export-template-portal.md#download-template-before-deployment). This option retrieves an exact copy of a template used for deployment.
-
-When exporting the template, in the `"properties": { }` section of the MySQL server resource, you'll notice that `administratorLogin` and `administratorLoginPassword` aren't included, for security reasons. You **must** add these parameters to your template before deploying it, or the deployment will fail.
-
-```json
-"resources": [
- {
- "type": "Microsoft.DBforMySQL/servers",
- "apiVersion": "2017-12-01",
- "name": "[parameters('servers_name')]",
- "location": "southcentralus",
- "sku": {
- "name": "B_Gen5_1",
- "tier": "Basic",
- "family": "Gen5",
- "capacity": 1
- },
- "properties": {
- "administratorLogin": "[parameters('administratorLogin')]",
- "administratorLoginPassword": "[parameters('administratorLoginPassword')]",
-```
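-
-As a sketch, the matching entries you'd add to the template's `parameters` section might look like the following (the `securestring` type keeps the password out of deployment logs; the names are assumed to match the properties above):
-
-```json
-"parameters": {
-    "administratorLogin": {
-        "type": "string"
-    },
-    "administratorLoginPassword": {
-        "type": "securestring"
-    }
-}
-```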
-
-## Clean up resources
-
-When it's no longer needed, delete the resource group, which deletes the resources in the resource group.
-
-# [Portal](#tab/azure-portal)
-
-1. In the [Azure portal](https://portal.azure.com), search for and select **Resource groups**.
-
-2. In the resource group list, choose the name of your resource group.
-
-3. In the **Overview** page of your resource group, select **Delete resource group**.
-
-4. In the confirmation dialog box, type the name of your resource group, and then select **Delete**.
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
-Remove-AzResourceGroup -Name $resourceGroupName
-Write-Host "Press [ENTER] to continue..."
-```
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-echo "Enter the Resource Group name:" &&
-read resourceGroupName &&
-az group delete --name $resourceGroupName &&
-echo "Press [ENTER] to continue ..."
-```
---
-## Next steps
-
-For a step-by-step tutorial that guides you through the process of creating an ARM template, see:
-
-> [!div class="nextstepaction"]
-> [ Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
mysql Quickstart Create Mysql Server Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/quickstart-create-mysql-server-database-using-azure-cli.md
- Title: 'Quickstart: Create a server - Azure CLI - Azure Database for MySQL'
-description: This quickstart describes how to use the Azure CLI to create an Azure Database for MySQL server in an Azure resource group.
- Previously updated: 07/15/2020
-# Quickstart: Create an Azure Database for MySQL server using Azure CLI
--
-> [!TIP]
-> Consider using the simpler [az mysql up](/cli/azure/mysql#az-mysql-up) Azure CLI command (currently in preview). Try out the [quickstart](./quickstart-create-server-up-azure-cli.md).
-
-This quickstart shows how to use the [Azure CLI](/cli/azure/get-started-with-azure-cli) commands in [Azure Cloud Shell](https://shell.azure.com) to create an Azure Database for MySQL server in five minutes.
-----
- ```azurecli
- az account set --subscription <subscription id>
- ```
-
-## Create an Azure Database for MySQL server
-Create an [Azure resource group](../azure-resource-manager/management/overview.md) using the [az group create](/cli/azure/group) command, and then create your MySQL server inside this resource group. Provide a unique name for the server. The following example creates a resource group named `myresourcegroup` in the `westus` location.
-
-```azurecli-interactive
-az group create --name myresourcegroup --location westus
-```
-
-Create an Azure Database for MySQL server with the [az mysql server create](/cli/azure/mysql/server#az-mysql-server-create) command. A server can contain multiple databases.
-
-```azurecli
-az mysql server create --resource-group myresourcegroup --name mydemoserver --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2
-```
-
-Here are the details for the arguments above:
-
-**Setting** | **Sample value** | **Description**
----|---|---
-name | mydemoserver | Enter a unique name for your Azure Database for MySQL server. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain from 3 to 63 characters.
-resource-group | myresourcegroup | Provide the name of the Azure resource group.
-location | westus | The Azure location for the server.
-admin-user | myadmin | The username for the administrator login. It cannot be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**.
-admin-password | *secure password* | The password of the administrator user. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters.
-sku-name|GP_Gen5_2|Enter the name of the pricing tier and compute configuration. Follows the convention {pricing tier}_{compute generation}_{vCores} in shorthand. See the [pricing tiers](./concepts-pricing-tiers.md) for more information.
-
->[!IMPORTANT]
->- The default MySQL version on your server is 5.7. Versions 5.6 and 8.0 are also available.
->- To view all the arguments for the **az mysql server create** command, see this [reference document](/cli/azure/mysql/server#az-mysql-server-create).
->- SSL is enabled by default on your server. For more information on SSL, see [Configure SSL connectivity](howto-configure-ssl.md).
-
-## Configure a server-level firewall rule
-By default, the new server is protected with firewall rules and isn't publicly accessible. You can configure a firewall rule on your server using the [az mysql server firewall-rule create](/cli/azure/mysql/server/firewall-rule) command. This allows you to connect to the server from your local machine.
-
-The following example creates a firewall rule called `AllowMyIP` that allows connections from a specific IP address, 192.168.0.1. Replace the IP address with the one you'll be connecting from. You can use a range of IP addresses if needed. If you don't know your IP address, go to [https://whatismyipaddress.com/](https://whatismyipaddress.com/) to find it.
-
-```azurecli-interactive
-az mysql server firewall-rule create --resource-group myresourcegroup --server mydemoserver --name AllowMyIP --start-ip-address 192.168.0.1 --end-ip-address 192.168.0.1
-```
-
-> [!NOTE]
-> Connections to Azure Database for MySQL communicate over port 3306. If you try to connect from within a corporate network, outbound traffic over port 3306 might not be allowed. If this is the case, you can't connect to your server unless your IT department opens port 3306.
-
-## Get the connection information
-
-To connect to your server, you need to provide host information and access credentials.
-
-```azurecli-interactive
-az mysql server show --resource-group myresourcegroup --name mydemoserver
-```
-
-The result is in JSON format. Make a note of the **fullyQualifiedDomainName** and **administratorLogin**.
-```json
-{
- "administratorLogin": "myadmin",
- "earliestRestoreDate": null,
- "fullyQualifiedDomainName": "mydemoserver.mysql.database.azure.com",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.DBforMySQL/servers/mydemoserver",
- "location": "westus",
- "name": "mydemoserver",
- "resourceGroup": "myresourcegroup",
- "sku": {
- "capacity": 2,
- "family": "Gen5",
- "name": "GP_Gen5_2",
- "size": null,
- "tier": "GeneralPurpose"
- },
- "sslEnforcement": "Enabled",
- "storageProfile": {
- "backupRetentionDays": 7,
- "geoRedundantBackup": "Disabled",
- "storageMb": 5120
- },
- "tags": null,
- "type": "Microsoft.DBforMySQL/servers",
- "userVisibleState": "Ready",
- "version": "5.7"
-}
-```
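-
-If you only need those two values, the Azure CLI's global `--query` argument (JMESPath) can extract them directly; a minimal sketch:
-
-```azurecli-interactive
-az mysql server show --resource-group myresourcegroup --name mydemoserver --query "{host:fullyQualifiedDomainName, admin:administratorLogin}" --output table
-```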
-
-## Connect to Azure Database for MySQL server using mysql command-line client
-You can connect to your server by using the popular **[mysql.exe](https://dev.mysql.com/downloads/)** command-line tool in [Azure Cloud Shell](../cloud-shell/overview.md). Alternatively, you can use the mysql command line in your local environment.
-```bash
- mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
-```
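-
-Once connected, you can run a quick query to verify the connection; for example, list the server's databases:
-
-```sql
-SHOW DATABASES;
-```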
-
-## Clean up resources
-If you don't need these resources for another quickstart or tutorial, you can delete them by running the following command:
-
-```azurecli-interactive
-az group delete --name myresourcegroup
-```
-
-If you just want to delete the newly created server, you can run the [az mysql server delete](/cli/azure/mysql/server#az-mysql-server-delete) command.
-
-```azurecli-interactive
-az mysql server delete --resource-group myresourcegroup --name mydemoserver
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
->[Build a PHP app on Windows with MySQL](../app-service/tutorial-php-mysql-app.md)
mysql Quickstart Create Mysql Server Database Using Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/quickstart-create-mysql-server-database-using-azure-portal.md
- Title: 'Quickstart: Create a server - Azure portal - Azure Database for MySQL'
-description: This article walks you through using the Azure portal to create a sample Azure Database for MySQL server in about five minutes.
- Previously updated: 11/04/2020
-# Quickstart: Create an Azure Database for MySQL server by using the Azure portal
--
-Azure Database for MySQL is a managed service that you use to run, manage, and scale highly available MySQL databases in the cloud. This quickstart shows you how to use the Azure portal to create an Azure Database for MySQL single server. It also shows you how to connect to the server.
-
-## Prerequisites
-An Azure subscription is required. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
-
-## Create an Azure Database for MySQL single server
-1. Go to the [Azure portal](https://portal.azure.com/) to create a MySQL Single Server database. Search for and select **Azure Database for MySQL**:
-
- >[!div class="mx-imgBorder"]
- > :::image type="content" source="./media/quickstart-create-mysql-server-database-using-azure-portal/find-azure-mysql-in-portal.png" alt-text="Find Azure Database for MySQL":::
-
-1. Select **Add**.
-
-2. On the **Select Azure Database for MySQL deployment option** page, select **Single server**:
- >[!div class="mx-imgBorder"]
- > :::image type="content" source="./media/quickstart-create-mysql-server-database-using-azure-portal/choose-singleserver.png" alt-text="Screenshot that shows the Single server option.":::
-
-3. Enter the basic settings for a new single server:
-
- >[!div class="mx-imgBorder"]
- > :::image type="content" source="./media/quickstart-create-mysql-server-database-using-azure-portal/4-create-form.png" alt-text="Screenshot that shows the Create MySQL server page.":::
-
- **Setting** | **Suggested value** | **Description**
-    ---|---|---
- Subscription | Your subscription | Select the desired Azure subscription.
- Resource group | **myresourcegroup** | Enter a new resource group or an existing one from your subscription.
- Server name | **mydemoserver** | Enter a unique name. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain 3 to 63 characters.
- Data source |**None** | Select **None** to create a new server from scratch. Select **Backup** only if you're restoring from a geo-backup of an existing server.
- Location |Your desired location | Select a location from the list.
- Version | The latest major version| Use the latest major version. See [all supported versions](concepts-supported-versions.md).
- Compute + storage | Use the defaults| The default pricing tier is **General Purpose** with **4 vCores** and **100 GB** storage. Backup retention is set to **7 days**, with the **Geographically Redundant** backup option.<br/>Review the [pricing](https://azure.microsoft.com/pricing/details/mysql/) page, and update the defaults if you need to.
- Admin username | **mydemoadmin** | Enter your server admin user name. You can't use **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public** for the admin user name.
- Password | A password | A new password for the server admin user. The password must be 8 to 128 characters long and contain a combination of uppercase or lowercase letters, numbers, and non-alphanumeric characters (!, $, #, %, and so on).
-
-
- > [!NOTE]
- > Consider using the Basic pricing tier if light compute and I/O are adequate for your workload. Note that servers created in the Basic pricing tier can't later be scaled to General Purpose or Memory Optimized.
-
-4. Select **Review + create** to provision the server.
-
-5. Wait for the portal page to display **Your deployment is complete**. Select **Go to resource** to go to the newly created server page:
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/quickstart-create-mysql-server-database-using-azure-portal/deployment-complete.png" alt-text="Screenshot that shows the Your deployment is complete message.":::
-
-[Having problems? Let us know.](https://aka.ms/mysql-doc-feedback)
-
-## Configure a server-level firewall rule
-
-By default, the new server is protected with a firewall. To connect, you must provide access to your IP by completing these steps:
-
-1. Go to **Connection security** from the left pane for your server resource. If you don't know how to find your resource, see [How to open a resource](../azure-resource-manager/management/manage-resources-portal.md#open-resources).
-
- >[!div class="mx-imgBorder"]
- > :::image type="content" source="./media/quickstart-create-mysql-server-database-using-azure-portal/add-current-ip-firewall.png" alt-text="Screenshot that shows the Connection security > Firewall rules page.":::
-
-2. Select **Add current client IP address**, and then select **Save**.
-
- > [!NOTE]
- > To avoid connectivity problems, check if your network allows outbound traffic over port 3306, which is used by Azure Database for MySQL.
-
-You can add more IPs or provide an IP range to connect to your server from those IPs. For more information, see [How to manage firewall rules on an Azure Database for MySQL server](./concepts-firewall-rules.md).
--
-[Having problems? Let us know](https://aka.ms/mysql-doc-feedback)
-
-## Connect to the server by using mysql.exe
-You can use either [mysql.exe](https://dev.mysql.com/doc/refman/8.0/en/mysql.html) or [MySQL Workbench](./connect-workbench.md) to connect to the server from your local environment. In this quickstart, we'll use mysql.exe in [Azure Cloud Shell](../cloud-shell/overview.md) to connect to the server.
--
-1. Open Azure Cloud Shell in the portal by selecting the first button on the toolbar, as shown in the following screenshot. Note the server name, server admin name, and subscription for your new server in the **Overview** section, as shown in the screenshot.
-
- > [!NOTE]
- > If you're opening Cloud Shell for the first time, you'll be prompted to create a resource group and storage account. This is a one-time step. It will be automatically attached for all sessions.
-
- >[!div class="mx-imgBorder"]
- > :::image type="content" source="./media/quickstart-create-mysql-server-database-using-azure-portal/use-in-cloud-shell.png" alt-text="Screenshot that shows Cloud Shell in the Azure portal.":::
-2. Run the following command in the Azure Cloud Shell terminal. Replace the values shown here with your actual server name and admin user name. For Azure Database for MySQL, you need to add `@<servername>` to the admin user name, as shown here:
-
- ```azurecli-interactive
- mysql --host=mydemoserver.mysql.database.azure.com --user=myadmin@mydemoserver -p
- ```
-
- Here's what it looks like in the Cloud Shell terminal:
-
- ```
- Requesting a Cloud Shell.Succeeded.
- Connecting terminal...
-
- Welcome to Azure Cloud Shell
-
- Type "az" to use Azure CLI
- Type "help" to learn about Cloud Shell
-
- user@Azure:~$mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
- Enter password:
- Welcome to the MySQL monitor. Commands end with ; or \g.
- Your MySQL connection id is 64796
- Server version: 5.6.42.0 Source distribution
-
- Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
- Oracle is a registered trademark of Oracle Corporation and/or its
- affiliates. Other names may be trademarks of their respective
- owners.
-
- Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
- mysql>
- ```
-3. In the same Azure Cloud Shell terminal, create a database named `guest`:
- ```
- mysql> CREATE DATABASE guest;
- Query OK, 1 row affected (0.27 sec)
- ```
-4. Switch to the `guest` database:
- ```
- mysql> USE guest;
- Database changed
- ```
-5. Enter `quit`, and then select **Enter** to quit mysql.
-
-[Having problems? Let us know.](https://aka.ms/mysql-doc-feedback)
-
-## Clean up resources
-You have now created an Azure Database for MySQL server in a resource group. If you don't expect to need these resources in the future, you can delete them by deleting the resource group, or you can just delete the MySQL server. To delete the resource group, complete these steps:
-1. In the Azure portal, search for and select **Resource groups**.
-2. In the list of resource groups, select the name of your resource group.
-3. On the **Overview** page for your resource group, select **Delete resource group**.
-4. In the confirmation dialog box, type the name of your resource group, and then select **Delete**.
-
-To delete the server, you can select **Delete** on the **Overview** page for your server, as shown here:
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="media/quickstart-create-mysql-server-database-using-azure-portal/delete-server.png" alt-text="Screenshot that shows the Delete button on the server overview page.":::
-
-## Next steps
-> [!div class="nextstepaction"]
->[Build a PHP app on Windows with MySQL](../app-service/tutorial-php-mysql-app.md) <br/>
-
-> [!div class="nextstepaction"]
->[Build PHP app on Linux with MySQL](../app-service/tutorial-php-mysql-app.md?pivots=platform-linux)<br/><br/>
-
-[Can't find what you're looking for? Let us know.](https://aka.ms/mysql-doc-feedback)
mysql Quickstart Create Mysql Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/quickstart-create-mysql-server-database-using-azure-powershell.md
- Title: 'Quickstart: Create a server - Azure PowerShell - Azure Database for MySQL'
-description: This quickstart describes how to use PowerShell to create an Azure Database for MySQL server in an Azure resource group.
- Previously updated: 04/28/2020
-# Quickstart: Create an Azure Database for MySQL server using PowerShell
--
-This quickstart describes how to use PowerShell to create an Azure Database for MySQL server in an
-Azure resource group. You can use PowerShell to create and manage Azure resources interactively or
-in scripts.
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
-
-If you choose to use PowerShell locally, this article requires that you install the Az PowerShell
-module and connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information
-about installing the Az PowerShell module, see
-[Install Azure PowerShell](/powershell/azure/install-az-ps).
-
-> [!IMPORTANT]
-> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az
-> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`.
-> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az
-> PowerShell module releases and available natively from within Azure Cloud Shell.
-
-If this is your first time using the Azure Database for MySQL service, you must register the
-**Microsoft.DBforMySQL** resource provider.
-
-```azurepowershell-interactive
-Register-AzResourceProvider -ProviderNamespace Microsoft.DBforMySQL
-```
--
-If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources
-should be billed. Select a specific subscription ID using the
-[Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
-
-```azurepowershell-interactive
-Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
-```
-
-## Create a resource group
-
-Create an [Azure resource group](../azure-resource-manager/management/overview.md)
-using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet. A
-resource group is a logical container in which Azure resources are deployed and managed as a group.
-
-The following example creates a resource group named **myresourcegroup** in the **West US** region.
-
-```azurepowershell-interactive
-New-AzResourceGroup -Name myresourcegroup -Location westus
-```
-
-## Create an Azure Database for MySQL server
-
-Create an Azure Database for MySQL server with the `New-AzMySqlServer` cmdlet. A server can manage
-multiple databases. Typically, a separate database is used for each project or for each user.
-
-The following table contains a list of commonly used parameters and sample values for the
-`New-AzMySqlServer` cmdlet.
-
-| **Setting** | **Sample value** | **Description** |
-| -- | - | - |
-| Name | mydemoserver | Choose a globally unique name in Azure that identifies your Azure Database for MySQL server. The server name can only contain letters, numbers, and the hyphen (-) character. Any uppercase characters that are specified are automatically converted to lowercase during the creation process. It must contain from 3 to 63 characters. |
-| ResourceGroupName | myresourcegroup | Provide the name of the Azure resource group. |
-| Sku | GP_Gen5_2 | The name of the SKU. Follows the convention **pricing-tier\_compute-generation\_vCores** in shorthand. For more information about the Sku parameter, see the information following this table. |
-| BackupRetentionDay | 7 | How long a backup should be retained. Unit is days. Range is 7-35. |
-| GeoRedundantBackup | Enabled | Whether geo-redundant backups should be enabled for this server or not. This value cannot be enabled for servers in the basic pricing tier and it cannot be changed after the server is created. Allowed values: Enabled, Disabled. |
-| Location | westus | The Azure region for the server. |
-| SslEnforcement | Enabled | Whether SSL should be enabled or not for this server. Allowed values: Enabled, Disabled. |
-| StorageInMb | 51200 | The storage capacity of the server (unit is megabytes). Valid StorageInMb is a minimum of 5120 MB and increases in 1024 MB increments. For more information about storage size limits, see [Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md). |
-| Version | 5.7 | The MySQL major version. |
-| AdministratorUserName | myadmin | The username for the administrator login. It cannot be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**. |
-| AdministratorLoginPassword | `<securestring>` | The password of the administrator user in the form of a secure string. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters. |
-
-The **Sku** parameter value follows the convention **pricing-tier\_compute-generation\_vCores** as
-shown in the following examples.
-
-- `-Sku B_Gen5_1` maps to Basic, Gen 5, and 1 vCore. This option is the smallest SKU available.
-- `-Sku GP_Gen5_32` maps to General Purpose, Gen 5, and 32 vCores.
-- `-Sku MO_Gen5_2` maps to Memory Optimized, Gen 5, and 2 vCores.
-
-For information about valid **Sku** values by region and for tiers, see
-[Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md).
-
-The following example creates a MySQL server in the **West US** region named **mydemoserver** in the
-**myresourcegroup** resource group with a server admin login of **myadmin**. It is a Gen 5 server in
-the general-purpose pricing tier with 2 vCores and geo-redundant backups enabled. Document the
-password used in the first line of the example as this is the password for the MySQL server admin
-account.
-
-> [!TIP]
-> A server name maps to a DNS name and must be globally unique in Azure.
-
-```azurepowershell-interactive
-$Password = Read-Host -Prompt 'Please enter your password' -AsSecureString
-New-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -Sku GP_Gen5_2 -GeoRedundantBackup Enabled -Location westus -AdministratorUsername myadmin -AdministratorLoginPassword $Password
-```
-
-Consider using the basic pricing tier if light compute and I/O are adequate for your workload.
-
-> [!IMPORTANT]
-> Servers created in the basic pricing tier cannot later be scaled to general-purpose or
-> memory-optimized, and cannot be geo-replicated.
-
-## Configure a firewall rule
-
-Create an Azure Database for MySQL server-level firewall rule using the `New-AzMySqlFirewallRule`
-cmdlet. A server-level firewall rule allows an external application, such as the `mysql`
-command-line tool or MySQL Workbench to connect to your server through the Azure Database for MySQL
-service firewall.
-
-The following example creates a firewall rule named **AllowMyIP** that allows connections from a
-specific IP address, 192.168.0.1. Substitute an IP address or range of IP addresses that correspond
-to the location that you are connecting from.
-
-```azurepowershell-interactive
-New-AzMySqlFirewallRule -Name AllowMyIP -ResourceGroupName myresourcegroup -ServerName mydemoserver -StartIPAddress 192.168.0.1 -EndIPAddress 192.168.0.1
-```
-
-> [!NOTE]
-> Connections to Azure Database for MySQL communicate over port 3306. If you try to connect from
-> within a corporate network, outbound traffic over port 3306 might not be allowed. In this
-> scenario, you can only connect to the server if your IT department opens port 3306.
-
-## Configure SSL settings
-
-By default, SSL connections between your server and client applications are enforced. This default
-ensures the security of _in-motion_ data by encrypting the data stream over the Internet. For this
-quickstart, disable SSL connections for your server. For more information, see
-[Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](./howto-configure-ssl.md).
-
-> [!WARNING]
-> Disabling SSL is not recommended for production servers.
-
-The following example disables SSL on your Azure Database for MySQL server.
-
-```azurepowershell-interactive
-Update-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -SslEnforcement Disabled
-```
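-
-If you later want to re-enable SSL enforcement (recommended for production servers), the same cmdlet applies; for example:
-
-```azurepowershell-interactive
-Update-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -SslEnforcement Enabled
-```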
-
-## Get the connection information
-
-To connect to your server, you need to provide host information and access credentials. Use the
-following example to determine the connection information. Make a note of the values for
-**FullyQualifiedDomainName** and **AdministratorLogin**.
-
-```azurepowershell-interactive
-Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
- Select-Object -Property FullyQualifiedDomainName, AdministratorLogin
-```
-
-```Output
-FullyQualifiedDomainName AdministratorLogin
------------------------ ------------------
-mydemoserver.mysql.database.azure.com myadmin
-```
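-
-As a convenience, you can compose the `mysql` connection command from the object returned by `Get-AzMySqlServer`; a minimal sketch using the property names shown above:
-
-```azurepowershell-interactive
-$server = Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup
-"mysql -h $($server.FullyQualifiedDomainName) -u $($server.AdministratorLogin)@mydemoserver -p"
-```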
-
-## Connect to the server using the mysql command-line tool
-
-Connect to your server using the `mysql` command-line tool. To download and install the command-line
-tool, see [MySQL Community Downloads](https://dev.mysql.com/downloads/shell/). You can also access a
-pre-installed version of the `mysql` command-line tool in Azure Cloud Shell by selecting the **Try
-It** button on a code sample in this article. Other ways to access Azure Cloud Shell are to select
-the **>_** button on the upper-right toolbar in the Azure portal or to visit
-[shell.azure.com](https://shell.azure.com/).
-
-1. Connect to the server using the `mysql` command-line tool.
-
- ```azurepowershell-interactive
- mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
- ```
-
-1. View server status.
-
- ```sql
- mysql> status
- ```
-
- ```Output
- C:\Users\>mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
- Enter password: *************
- Welcome to the MySQL monitor. Commands end with ; or \g.
- Your MySQL connection id is 65512
- Server version: 5.6.42.0 MySQL Community Server (GPL)
-
- Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
-
- Oracle is a registered trademark of Oracle Corporation and/or its
- affiliates. Other names may be trademarks of their respective
- owners.
-
- Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
-
- mysql> status
-    --------------
- mysql Ver 14.14 Distrib 5.7.29, for Win64 (x86_64)
-
- Connection id: 65512
- Current database:
- Current user: myadmin@myipaddress
- SSL: Not in use
- Using delimiter: ;
- Server version: 5.6.42.0 MySQL Community Server (GPL)
- Protocol version: 10
- Connection: mydemoserver.mysql.database.azure.com via TCP/IP
- Server characterset: latin1
- Db characterset: latin1
- Client characterset: utf8
- Conn. characterset: utf8
- TCP port: 3306
- Uptime: 1 hour 2 min 12 sec
-
- Threads: 7 Questions: 952 Slow queries: 0 Opens: 66 Flush tables: 3 Open tables: 16 Queries per second avg: 0.255
-    --------------
-
- mysql>
- ```
-
-For additional commands, see [MySQL 5.7 Reference Manual - Chapter 4.5.1](https://dev.mysql.com/doc/refman/5.7/en/mysql.html).
-
-## Connect to the server using MySQL Workbench
-
-1. Launch the MySQL Workbench application on your client computer. To download and install MySQL
- Workbench, see [Download MySQL Workbench](https://dev.mysql.com/downloads/workbench/).
-
-1. In the **Setup New Connection** dialog box, enter the following information on the **Parameters**
- tab:
-
- :::image type="content" source="./media/quickstart-create-mysql-server-database-using-azure-powershell/setup-new-connection.png" alt-text="setup new connection":::
-
- | **Setting** | **Suggested Value** | **Description** |
- | -- | | - |
- | Connection Name | My Connection | Specify a label for this connection |
- | Connection Method | Standard (TCP/IP) | Use TCP/IP protocol to connect to Azure Database for MySQL |
- | Hostname | `mydemoserver.mysql.database.azure.com` | Server name you previously noted |
- | Port | 3306 | The default port for MySQL |
- | Username | myadmin@mydemoserver | The server admin login you previously noted |
- | Password | ************* | Use the admin account password you configured earlier |
-
-1. To test if the parameters are configured correctly, click the **Test Connection** button.
-
-1. Select the connection to connect to the server.
-
-## Clean up resources
-
-If the resources created in this quickstart aren't needed for another quickstart or tutorial, you
-can delete them by running the following example.
-
-> [!CAUTION]
-> The following example deletes the specified resource group and all resources contained within it.
-> If resources outside the scope of this quickstart exist in the specified resource group, they will
-> also be deleted.
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myresourcegroup
-```
-
-To delete only the server created in this quickstart without deleting the resource group, use the
-`Remove-AzMySqlServer` cmdlet.
-
-```azurepowershell-interactive
-Remove-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Design an Azure Database for MySQL using PowerShell](tutorial-design-database-using-powershell.md)
mysql Quickstart Create Server Up Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/quickstart-create-server-up-azure-cli.md
- Title: 'Quickstart: Create Azure Database for MySQL using az mysql up'
-description: Quickstart guide to create Azure Database for MySQL server using Azure CLI (command line interface) up command.
- Previously updated: 3/18/2020
-# Quickstart: Create an Azure Database for MySQL using a simple Azure CLI command - az mysql up (preview)
--
-> [!IMPORTANT]
-> The [az mysql up](/cli/azure/mysql#az-mysql-up) Azure CLI command is in preview.
-
-Azure Database for MySQL is a managed service that enables you to run, manage, and scale highly available MySQL databases in the cloud. The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart shows you how to use the [az mysql up](/cli/azure/mysql#az-mysql-up) command to create an Azure Database for MySQL server using the Azure CLI. In addition to creating the server, the `az mysql up` command creates a sample database and a root user in the database, opens the firewall for Azure services, and creates default firewall rules for the client computer. This helps to expedite the development process.
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
-
-This article requires that you're running the Azure CLI version 2.0 or later locally. To see the version installed, run the `az --version` command. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-
-You'll need to sign in to your account using the [az login](/cli/azure/authenticate-azure-cli) command. Note the **id** property from the command output for the corresponding subscription name.
-
-```azurecli
-az login
-```
-
-If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. Select the specific subscription ID under your account using the [az account set](/cli/azure/account) command, substituting the **id** value from the **az login** output for the subscription ID placeholder.
-
-```azurecli
-az account set --subscription <subscription id>
-```
-
-## Create an Azure Database for MySQL server
-
-To use the commands, install the [db-up](/cli/azure/ext/db-up/mysql) extension. If an error is returned, ensure you have installed the latest version of the Azure CLI. See [Install Azure CLI](/cli/azure/install-azure-cli).
-
-```azurecli
-az extension add --name db-up
-```
-
-Create an Azure Database for MySQL server using the following command:
-
-```azurecli
-az mysql up
-```
-
-The server is created with the following default values (unless you manually override them):
-
-**Setting** | **Default value** | **Description**
----|---|---
-server-name | System generated | A unique name that identifies your Azure Database for MySQL server.
-resource-group | System generated | A new Azure resource group.
-sku-name | GP_Gen5_2 | The name of the SKU. Follows the convention {pricing tier}\_{compute generation}\_{vCores} in shorthand. The default is a General Purpose Gen5 server with 2 vCores. See our [pricing page](https://azure.microsoft.com/pricing/details/mysql/) for more information about the tiers.
-backup-retention | 7 | How long a backup should be retained. Unit is days.
-geo-redundant-backup | Disabled | Whether geo-redundant backups should be enabled for this server or not.
-location | westus2 | The Azure location for the server.
-ssl-enforcement | Enabled | Whether SSL should be enabled or not for this server.
-storage-size | 5120 | The storage capacity of the server (unit is megabytes).
-version | 5.7 | The MySQL major version.
-admin-user | System generated | The username for the administrator login.
-admin-password | System generated | The password of the administrator user.
-
-> [!NOTE]
-> For more information about the `az mysql up` command and its additional parameters, see the [Azure CLI documentation](/cli/azure/mysql#az-mysql-up).
-
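-You can override any of these defaults by passing parameters to `az mysql up`. As a sketch, the following creates a Basic tier server in a named resource group (the parameter names are assumed to mirror the settings in the table above):
-
-```azurecli
-az mysql up --resource-group myresourcegroup --server-name mydemoserver --admin-user myadmin --admin-password <secure_password> --sku-name B_Gen5_1 --location westus2
-```
-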
-Once your server is created, it comes with the following settings:
-
-- A firewall rule called "devbox" is created. The Azure CLI attempts to detect the IP address of the machine the `az mysql up` command is run from and allows that IP address.
-- "Allow access to Azure services" is set to ON. This setting configures the server's firewall to accept connections from all Azure resources, including resources not in your subscription.
-- The `wait_timeout` parameter is set to 8 hours.
-- An empty database named "sampledb" is created.
-- A new user named "root" with privileges to "sampledb" is created.
-
-> [!NOTE]
-> Azure Database for MySQL communicates over port 3306. When connecting from within a corporate network, outbound traffic over port 3306 may not be allowed by your network's firewall. Have your IT department open port 3306 to connect to your server.
-
-## Get the connection information
-
-After the `az mysql up` command is completed, a list of connection strings for popular programming languages is returned to you. These connection strings are pre-configured with the specific attributes of your newly created Azure Database for MySQL server.
-
-You can use the [az mysql show-connection-string](/cli/azure/mysql#az-mysql-show-connection-string) command to list these connection strings again.
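-
-A hedged sketch of listing the connection strings for the default `sampledb` database (the parameter names are assumed; check the command's reference documentation):
-
-```azurecli
-az mysql show-connection-string --server-name mydemoserver --database-name sampledb --admin-user root --admin-password <server_admin_password>
-```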
-
-## Clean up resources
-
-Clean up all resources you created in the quickstart using the following command. This command deletes the Azure Database for MySQL server and the resource group.
-
-```azurecli
-az mysql down --delete-group
-```
-
-If you would just like to delete the newly created server, you can run [az mysql down](/cli/azure/mysql#az-mysql-down) command.
-
-```azurecli
-az mysql down
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Design a MySQL Database with Azure CLI](./tutorial-design-database-using-cli.md)
mysql Quickstart Mysql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/quickstart-mysql-github-actions.md
- Title: 'Quickstart: Connect to Azure MySQL with GitHub Actions'
-description: Use Azure MySQL from a GitHub Actions workflow
- Previously updated: 05/09/2022
-# Quickstart: Use GitHub Actions to connect to Azure MySQL
--
-Get started with [GitHub Actions](https://docs.github.com/en/actions) by using a workflow to deploy database updates to [Azure Database for MySQL](https://azure.microsoft.com/services/mysql/).
-
-## Prerequisites
-
-You'll need:
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- A GitHub account. If you don't have a GitHub account, [sign up for free](https://github.com/join).-- A GitHub repository with sample data (`data.sql`).-
- > [!IMPORTANT]
- > This quickstart assumes that you have cloned a GitHub repository to your computer so that you can add the associated IP address to a firewall rule, if necessary.
-- An Azure Database for MySQL server.
- - [Quickstart: Create an Azure Database for MySQL server in the Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md)
-
-## Workflow file overview
-
-A GitHub Actions workflow is defined by a YAML (.yml) file in the `/.github/workflows/` path in your repository. This definition contains the various steps and parameters that make up the workflow.
-
-The file has two sections:
-
-|Section |Tasks |
-|||
-|**Authentication** | 1. Generate deployment credentials. |
-|**Deploy** | 1. Deploy the database. |
-
-## Generate deployment credentials
-# [Service principal](#tab/userlevel)
-
-You can create a [service principal](../active-directory/develop/app-objects-and-service-principals.md) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
-
-Replace the placeholder `server-name` with the name of your MySQL server hosted on Azure. Replace `subscription-id` and `resource-group` with the subscription ID and resource group connected to your MySQL server.
-
-```azurecli-interactive
- az ad sp create-for-rbac --name {server-name} --role contributor \
- --scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group} \
- --sdk-auth
-```
-
-The output is a JSON object with the role assignment credentials that provide access to your database, similar to the following. Copy this JSON output to use later.
-
-```output
- {
- "clientId": "<GUID>",
- "clientSecret": "<GUID>",
- "subscriptionId": "<GUID>",
- "tenantId": "<GUID>",
- (...)
- }
-```
-
-> [!IMPORTANT]
-> It's always a good practice to grant minimum access. The scope in the previous example is limited to the specific server and not the entire resource group.
-
-# [OpenID Connect](#tab/openid)
-
-You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
-
-1. Open your GitHub repository and go to **Settings**.
-
-1. Select **Settings > Secrets > New secret**.
-
-1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
-
- |GitHub Secret | Active Directory Application |
- |||
- |AZURE_CLIENT_ID | Application (client) ID |
- |AZURE_TENANT_ID | Directory (tenant) ID |
- |AZURE_SUBSCRIPTION_ID | Subscription ID |
-
-1. Save each secret by selecting **Add secret**.
---
-## Copy the MySQL connection string
-
-In the Azure portal, go to your Azure Database for MySQL server and open **Settings** > **Connection strings**. Copy the **ADO.NET** connection string. Replace the placeholder values for `your_database` and `your_password`. The connection string will look similar to the following.
-
-> [!IMPORTANT]
->
-> - For Single Server, use **Uid=adminusername@servername**. Note that the **@servername** is required.
-> - For Flexible Server, use **Uid=adminusername** without the **@servername**.
-
-```output
- Server=my-mysql-server.mysql.database.azure.com; Port=3306; Database={your_database}; Uid=adminname@my-mysql-server; Pwd={your_password}; SslMode=Preferred;
-```
-
-You'll use the connection string as a GitHub secret.
-
-## Configure GitHub secrets
-# [Service principal](#tab/userlevel)
-
-1. In [GitHub](https://github.com/), browse your repository.
-
-2. Select **Settings > Secrets > New secret**.
-
-3. Paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret the name `AZURE_CREDENTIALS`.
-
- When you configure the workflow file later, you use the secret for the input `creds` of the Azure Login action. For example:
-
- ```yaml
- - uses: azure/login@v1
- with:
- creds: ${{ secrets.AZURE_CREDENTIALS }}
- ```
-
-4. Select **New secret** again.
-
-5. Paste the connection string value into the secret's value field. Give the secret the name `AZURE_MYSQL_CONNECTION_STRING`.
-
-# [OpenID Connect](#tab/openid)
-
-You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
-
-1. Open your GitHub repository and go to **Settings**.
-
-1. Select **Settings > Secrets > New secret**.
-
-1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
-
- |GitHub Secret | Active Directory Application |
- |||
- |AZURE_CLIENT_ID | Application (client) ID |
- |AZURE_TENANT_ID | Directory (tenant) ID |
- |AZURE_SUBSCRIPTION_ID | Subscription ID |
-
-1. Save each secret by selecting **Add secret**.
---
-## Add your workflow
-
-1. Go to **Actions** for your GitHub repository.
-
-2. Select **Set up your workflow yourself**.
-
-3. Delete everything after the `on:` section of your workflow file. For example, your remaining workflow may look like this.
-
- ```yaml
- name: CI
-
- on:
- push:
- branches: [ main ]
- pull_request:
- branches: [ main ]
- ```
-
-4. Rename your workflow to `MySQL for GitHub Actions` and add the checkout and login actions. These actions will check out your site code and authenticate with Azure using the `AZURE_CREDENTIALS` GitHub secret you created earlier.
-
- # [Service principal](#tab/userlevel)
-
- ```yaml
- name: MySQL for GitHub Actions
-
- on:
- push:
- branches: [ main ]
- pull_request:
- branches: [ main ]
-
- jobs:
- build:
- runs-on: windows-latest
- steps:
- - uses: actions/checkout@v1
- - uses: azure/login@v1
- with:
- creds: ${{ secrets.AZURE_CREDENTIALS }}
- ```
-
- # [OpenID Connect](#tab/openid)
-
- ```yaml
- name: MySQL for GitHub Actions
-
- on:
- push:
- branches: [ main ]
- pull_request:
- branches: [ main ]
-
- jobs:
- build:
- runs-on: windows-latest
- steps:
- - uses: actions/checkout@v1
- - uses: azure/login@v1
- with:
- client-id: ${{ secrets.AZURE_CLIENT_ID }}
- tenant-id: ${{ secrets.AZURE_TENANT_ID }}
- subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
- ```
-
- ___
-
-5. Use the Azure MySQL Deploy action to connect to your MySQL instance. Replace `MYSQL_SERVER_NAME` with the name of your server. You should have a MySQL data file named `data.sql` at the root level of your repository.
-
- ```yaml
- - uses: azure/mysql@v1
- with:
- server-name: MYSQL_SERVER_NAME
- connection-string: ${{ secrets.AZURE_MYSQL_CONNECTION_STRING }}
- sql-file: './data.sql'
- ```
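-
-   For reference, a minimal `data.sql` might look like the following (the table and rows are purely illustrative):
-
-   ```sql
-   CREATE TABLE IF NOT EXISTS inventory (id INT PRIMARY KEY, name VARCHAR(50));
-   INSERT INTO inventory VALUES (1, 'banana'), (2, 'orange');
-   ```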
-
-6. Complete your workflow by adding an action to sign out of Azure. Here's the completed workflow. The file will appear in the `.github/workflows` folder of your repository.
-
- # [Service principal](#tab/userlevel)
-
- ```yaml
- name: MySQL for GitHub Actions
-
- on:
- push:
- branches: [ main ]
- pull_request:
- branches: [ main ]
- jobs:
- build:
- runs-on: windows-latest
- steps:
- - uses: actions/checkout@v1
- - uses: azure/login@v1
- with:
- creds: ${{ secrets.AZURE_CREDENTIALS }}
-
- - uses: azure/mysql@v1
- with:
- server-name: MYSQL_SERVER_NAME
- connection-string: ${{ secrets.AZURE_MYSQL_CONNECTION_STRING }}
- sql-file: './data.sql'
-
- # Azure logout
- - name: logout
- run: |
- az logout
- ```
- # [OpenID Connect](#tab/openid)
-
- ```yaml
- name: MySQL for GitHub Actions
-
- on:
- push:
- branches: [ main ]
- pull_request:
- branches: [ main ]
- jobs:
- build:
- runs-on: windows-latest
- steps:
- - uses: actions/checkout@v1
- - uses: azure/login@v1
- with:
- client-id: ${{ secrets.AZURE_CLIENT_ID }}
- tenant-id: ${{ secrets.AZURE_TENANT_ID }}
- subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
- - uses: azure/mysql@v1
- with:
- server-name: MYSQL_SERVER_NAME
- connection-string: ${{ secrets.AZURE_MYSQL_CONNECTION_STRING }}
- sql-file: './data.sql'
-
- # Azure logout
- - name: logout
- run: |
- az logout
- ```
- ___
-
-## Review your deployment
-
-1. Go to **Actions** for your GitHub repository.
-
-2. Open the first result to see detailed logs of your workflow's run.
-
- :::image type="content" source="media/quickstart-mysql-github-actions/github-actions-run-mysql.png" alt-text="Log of GitHub actions run":::
-
-## Clean up resources
-
-When your Azure MySQL database and repository are no longer needed, clean up the resources you deployed by deleting the resource group and your GitHub repository.
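-
-For example, deleting the resource group with the Azure CLI removes the server and everything else it contains (substitute your resource group name):
-
-```azurecli
-az group delete --name <resource-group-name> --yes
-```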
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn about Azure and GitHub integration](/azure/developer/github/)
mysql Reference Stored Procedures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/reference-stored-procedures.md
- Title: Management stored procedures - Azure Database for MySQL
-description: Learn which stored procedures in Azure Database for MySQL are useful to help you configure data-in replication, set the timezone, and kill queries.
- Previously updated: 3/18/2020
-# Azure Database for MySQL management stored procedures
--
-Stored procedures are available on Azure Database for MySQL servers to help manage your MySQL server. This includes managing your server's connections, queries, and setting up Data-in Replication.
-
-## Data-in Replication stored procedures
-
-Data-in Replication allows you to synchronize data from a MySQL server running on-premises, in virtual machines, or database services hosted by other cloud providers into the Azure Database for MySQL service.
-
-The following stored procedures are used to set up or remove Data-in Replication between a source and replica.
-
-|**Stored Procedure Name**|**Input Parameters**|**Output Parameters**|**Usage Note**|
-|--|--|--|--|
-|*mysql.az_replication_change_master*|master_host<br/>master_user<br/>master_password<br/>master_port<br/>master_log_file<br/>master_log_pos<br/>master_ssl_ca|N/A|To transfer data with SSL mode, pass in the CA certificate's content in the master_ssl_ca parameter. </br><br>To transfer data without SSL, pass in an empty string in the master_ssl_ca parameter.|
-|*mysql.az_replication_start*|N/A|N/A|Starts replication.|
-|*mysql.az_replication_stop*|N/A|N/A|Stops replication.|
-|*mysql.az_replication_remove_master*|N/A|N/A|Removes the replication relationship between the source and replica.|
-|*mysql.az_replication_skip_counter*|N/A|N/A|Skips one replication error.|
-
-To set up Data-in Replication between a source and a replica in Azure Database for MySQL, refer to [how to configure Data-in Replication](howto-data-in-replication.md).
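-
-As an illustration, here's a hedged sketch of calling these procedures from a MySQL client. The host, user, password, and binary log coordinates below are hypothetical placeholders, not values from this article:
-
-```sql
--- Point the replica at the source; pass '' as the last argument to replicate
--- without SSL, or the CA certificate's content to replicate over SSL.
-CALL mysql.az_replication_change_master('source.example.com', 'syncuser', '<password>', 3306, 'mysql-bin.000002', 120, '');
-CALL mysql.az_replication_start;   -- begin replicating
--- If replication halts on a single error, skip it and continue:
--- CALL mysql.az_replication_skip_counter;
-```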
-
-## Other stored procedures
-
-The following stored procedures are available in Azure Database for MySQL to manage your server.
-
-|**Stored Procedure Name**|**Input Parameters**|**Output Parameters**|**Usage Note**|
-|--|--|--|--|
-|*mysql.az_kill*|processlist_id|N/A|Equivalent to the [`KILL CONNECTION`](https://dev.mysql.com/doc/refman/8.0/en/kill.html) command. Terminates the connection associated with the provided processlist_id after terminating any statement the connection is executing.|
-|*mysql.az_kill_query*|processlist_id|N/A|Equivalent to the [`KILL QUERY`](https://dev.mysql.com/doc/refman/8.0/en/kill.html) command. Terminates the statement the connection is currently executing but leaves the connection itself alive.|
-|*mysql.az_load_timezone*|N/A|N/A|Loads the time zone tables so that the `time_zone` parameter can be set to named values (for example, "US/Pacific").|
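-
-For example (a sketch; the process list IDs below are hypothetical):
-
-```sql
-SHOW PROCESSLIST;                 -- find the Id of the offending connection
-CALL mysql.az_kill(4243);         -- terminate connection 4243 and its running statement
-CALL mysql.az_kill_query(4244);   -- cancel only the statement running on connection 4244
-CALL mysql.az_load_timezone;      -- populate the time zone tables once, after which
--- the time_zone parameter can be set to a named value such as 'US/Pacific'
-```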
-
-## Next steps
-- Learn how to set up [Data-in Replication](howto-data-in-replication.md)
-- Learn how to use the [time zone tables](howto-server-parameters.md#working-with-the-time-zone-parameter)
mysql Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/sample-scripts-azure-cli.md
- Title: Azure CLI samples - Azure Database for MySQL | Microsoft Docs
-description: This article lists the Azure CLI code samples available for interacting with Azure Database for MySQL.
- Previously updated: 09/17/2021
-keywords: azure cli samples, azure cli code samples, azure cli script samples
-
-# Azure CLI samples for Azure Database for MySQL
-
-The following table includes links to sample Azure CLI scripts for Azure Database for MySQL.
-
-| Sample link | Description |
-|||
-|**Create a server**||
-| [Create a server and firewall rule](./scripts/sample-create-server-and-firewall-rule.md) | Azure CLI script that creates a single Azure Database for MySQL server and configures a server-level firewall rule. |
-|**Scale a server**||
-| [Scale a server](./scripts/sample-scale-server.md) | Azure CLI script that scales a single Azure Database for MySQL server up or down to allow for changing performance needs. |
-|**Change server configurations**||
-| [Change server configurations](./scripts/sample-change-server-configuration.md) | Azure CLI script that changes the configuration of a single Azure Database for MySQL server. |
-|**Restore a server**||
-| [Restore a server](./scripts/sample-point-in-time-restore.md) | Azure CLI script that restores a single Azure Database for MySQL server to a previous point in time. |
-|**Manipulate with server logs**||
-| [Enable server logs](./scripts/sample-server-logs.md) | Azure CLI script that enables server logs of a single Azure Database for MySQL server. |
-|||
mysql Sample Scripts Java Connection Pooling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/sample-scripts-java-connection-pooling.md
- Title: Java samples to illustrate connection pooling
-description: This article lists java samples to illustrate connection pooling.
- Previously updated: 02/28/2018
-# Java sample to illustrate connection pooling
--
-The following sample code illustrates connection pooling in Java.
-
-```java
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-import java.sql.Statement;
-import java.util.HashSet;
-import java.util.Set;
-import java.util.Stack;
-
-public class MySQLConnectionPool {
- private String databaseUrl;
- private String userName;
- private String password;
- private int maxPoolSize = 10;
- private int connNum = 0;
-
- private static final String SQL_VERIFYCONN = "select 1";
-
- Stack<Connection> freePool = new Stack<>();
- Set<Connection> occupiedPool = new HashSet<>();
-
- /**
- * Constructor
- *
- * @param databaseUrl
- * The connection url
- * @param userName
- * user name
- * @param password
- * password
- * @param maxSize
- * max size of the connection pool
- */
- public MySQLConnectionPool(String databaseUrl, String userName,
- String password, int maxSize) {
- this.databaseUrl = databaseUrl;
- this.userName = userName;
- this.password = password;
- this.maxPoolSize = maxSize;
- }
-
- /**
- * Get an available connection
- *
- * @return An available connection
- * @throws SQLException
- * Fail to get an available connection
- */
- public synchronized Connection getConnection() throws SQLException {
- Connection conn = null;
-
- if (isFull()) {
- throw new SQLException("The connection pool is full.");
- }
-
- conn = getConnectionFromPool();
-
- // If there is no free connection, create a new one.
- if (conn == null) {
- conn = createNewConnectionForPool();
- }
-
- // For Azure Database for MySQL, if there is no action on one connection for some
- // time, the connection is lost. By this, make sure the connection is
- // active. Otherwise reconnect it.
- conn = makeAvailable(conn);
- return conn;
- }
-
- /**
- * Return a connection to the pool
- *
- * @param conn
- * The connection
- * @throws SQLException
- * When the connection has already been returned or wasn't
- * obtained from this pool.
- */
- public synchronized void returnConnection(Connection conn)
- throws SQLException {
- if (conn == null) {
- throw new NullPointerException();
- }
- if (!occupiedPool.remove(conn)) {
- throw new SQLException(
- "The connection is returned already or it isn't for this pool");
- }
- freePool.push(conn);
- }
-
- /**
- * Check whether the pool is full.
- *
- * @return true if the pool is full
- */
- private synchronized boolean isFull() {
- return ((freePool.size() == 0) && (connNum >= maxPoolSize));
- }
-
- /**
- * Create a connection for the pool
- *
- * @return the newly created connection
- * @throws SQLException
- * Thrown when a new connection can't be created.
- */
- private Connection createNewConnectionForPool() throws SQLException {
- Connection conn = createNewConnection();
- connNum++;
- occupiedPool.add(conn);
- return conn;
- }
-
- /**
- * Create a new connection
- *
- * @return the newly created connection
- * @throws SQLException
- * Thrown when a new connection can't be created.
- */
- private Connection createNewConnection() throws SQLException {
- Connection conn = null;
- conn = DriverManager.getConnection(databaseUrl, userName, password);
- return conn;
- }
-
- /**
- * Get a connection from the pool. If there is no free connection, return
- * null
- *
- * @return the connection.
- */
- private Connection getConnectionFromPool() {
- Connection conn = null;
- if (freePool.size() > 0) {
- conn = freePool.pop();
- occupiedPool.add(conn);
- }
- return conn;
- }
-
- /**
- * Make sure the connection is available now. Otherwise, reconnect it.
- *
- * @param conn
- * The connection for verification.
- * @return the available connection.
- * @throws SQLException
- * Fail to get an available connection
- */
- private Connection makeAvailable(Connection conn) throws SQLException {
- if (isConnectionAvailable(conn)) {
- return conn;
- }
-
- // If the connection isn't available, reconnect it.
- occupiedPool.remove(conn);
- connNum--;
- conn.close();
-
- conn = createNewConnection();
- occupiedPool.add(conn);
- connNum++;
- return conn;
- }
-
- /**
- * Run a simple SQL statement to verify whether the connection is available.
- *
- * @param conn
- * The connection for verification
- * @return true if the connection is currently available
- */
- private boolean isConnectionAvailable(Connection conn) {
- try (Statement st = conn.createStatement()) {
- st.executeQuery(SQL_VERIFYCONN);
- return true;
- } catch (SQLException e) {
- return false;
- }
- }
-
- // Just an Example
- public static void main(String[] args) throws SQLException {
- Connection conn = null;
- MySQLConnectionPool pool = new MySQLConnectionPool(
- "jdbc:mysql://mysqlaasdevintic-sha.cloudapp.net:3306/<Your DB name>",
- "<Your user>", "<Your Password>", 2);
- try {
- conn = pool.getConnection();
- try (Statement statement = conn.createStatement())
- {
- ResultSet res = statement.executeQuery("show tables");
- System.out.println("There are below tables:");
- while (res.next()) {
- String tblName = res.getString(1);
- System.out.println(tblName);
- }
- }
- }
- finally {
- if (conn != null) {
- pool.returnConnection(conn);
- }
- }
- }
-
-}
-
-```
mysql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/security-controls-policy.md
- Title: Azure Policy Regulatory Compliance controls for Azure Database for MySQL
-description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MySQL. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
- Previously updated: 05/10/2022
-# Azure Policy Regulatory Compliance controls for Azure Database for MySQL
--
-[Regulatory Compliance in Azure Policy](../governance/policy/concepts/regulatory-compliance.md)
-provides Microsoft created and managed initiative definitions, known as _built-ins_, for the
-**compliance domains** and **security controls** related to different compliance standards. This
-page lists the **compliance domains** and **security controls** for Azure Database for MySQL. You
-can assign the built-ins for a **security control** individually to help make your Azure resources
-compliant with the specific standard.
---
-## Next steps
-
-- Learn more about [Azure Policy Regulatory Compliance](../governance/policy/concepts/regulatory-compliance.md).
-- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
mysql Select Right Deployment Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/select-right-deployment-type.md
- Title: Selecting the right deployment type - Azure Database for MySQL
-description: This article describes what factors to consider before you deploy Azure Database for MySQL as either infrastructure as a service (IaaS) or platform as a service (PaaS).
- Previously updated: 08/26/2020
-# Choose the right MySQL Server option in Azure
--
-With Azure, your MySQL server workloads can run in a hosted virtual machine infrastructure as a service (IaaS) or as a hosted platform as a service (PaaS). PaaS has two deployment options, and there are service tiers within each deployment option. When you choose between IaaS and PaaS, you must decide whether you want to manage your database yourself (applying patches and managing backups, security, monitoring, and scaling) or delegate those operations to Azure.
-
-When making your decision, consider the following two options:
-
-- **Azure Database for MySQL**. This option is a fully managed MySQL database engine based on the stable version of MySQL Community Edition. This relational database as a service (DBaaS), hosted on the Azure cloud platform, falls into the industry category of PaaS. With a managed instance of MySQL on Azure, you can use built-in features such as automated patching, high availability, automated backups, elastic scaling, enterprise-grade security, compliance and governance, and monitoring and alerting that otherwise require extensive configuration when MySQL Server is either on-premises or in an Azure VM. When you use MySQL as a service, you pay as you go, with options to scale up or scale out for greater control with no interruption. [Azure Database for MySQL](overview.md), powered by the MySQL Community Edition, is available in two deployment modes:
-
- - [Flexible Server](flexible-server/overview.md) - Azure Database for MySQL Flexible Server is a fully managed, production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. The flexible server architecture allows users to opt for high availability within a single availability zone or across multiple availability zones. Flexible Server provides better cost optimization controls with the ability to stop/start the server, and a burstable compute tier that's ideal for workloads that don't need full compute capacity continuously. Flexible Server also supports reserved instances, allowing you to save up to 63% of the cost, which is ideal for production workloads with predictable compute capacity requirements. The service supports the community versions of MySQL 5.7 and 8.0 and is generally available in a wide variety of [Azure regions](flexible-server/overview.md#azure-regions). Flexible Server is best suited for all new developments and for migrating production workloads to the Azure Database for MySQL service.
-
- - [Single Server](single-server-overview.md) is a fully managed database service designed for minimal customization. The single server platform handles most database management functions, such as patching, backups, high availability, and security, with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability within a single availability zone. It supports the community versions of MySQL 5.6 (retired), 5.7, and 8.0 and is generally available in a wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/). Single Server is best suited **only for existing applications that already use it**. For all new developments or migrations, Flexible Server is the recommended deployment option.
-
-- **MySQL on Azure VMs**. This option falls into the industry category of IaaS. With this service, you can run MySQL Server inside a managed virtual machine on the Azure cloud platform. All recent versions and editions of MySQL can be installed in the virtual machine.
-
-## Comparing the MySQL deployment options in Azure
-
-The main differences between these options are listed in the following table:
-
-| Attribute | Azure Database for MySQL<br/>Single Server |Azure Database for MySQL<br/>Flexible Server |MySQL on Azure VMs |
-|:-|:-|:|:|
-| [**General**](flexible-server/overview.md) | | | |
-| General availability | Generally Available | Generally Available | Generally Available |
-| Service-level agreement (SLA) | 99.99% availability SLA |99.99% using Availability Zones| 99.99% using Availability Zones|
-| Underlying O/S | Windows | Linux | User Managed |
-| MySQL Edition | Community Edition | Community Edition | Community or Enterprise Edition |
-| MySQL Version Support | 5.6(Retired), 5.7 & 8.0| 5.7 & 8.0 | Any version|
-| Availability zone selection for application colocation | No | Yes | Yes |
-| Username in connection string | `<user_name>@server_name`. For example, `mysqlusr@mydemoserver` | Just username. For example, `mysqlusr` | Just username. For example, `mysqlusr` |
-| [**Compute & Storage Scaling**](flexible-server/concepts-compute-storage.md) | | | |
-| Compute tiers | Basic, General Purpose, Memory Optimized | Burstable, General Purpose, Memory Optimized | Burstable, General Purpose, Memory Optimized |
-| Compute scaling | Supported (Scaling from and to Basic tier is **not supported**)| Supported | Supported|
-| Storage size | 5 GiB to 16 TiB| 20 GiB to 16 TiB | 32 GiB to 32,767 GiB|
-| Online Storage scaling | Supported| Supported| Not Supported|
-| Auto storage scaling | Supported| Supported| Not Supported|
-| IOPS scaling | Not Supported| Supported| Not Supported|
-| [**Cost Optimization**](https://azure.microsoft.com/pricing/details/mysql/flexible-server/) | | | |
-| Reserved Instance Pricing | Supported | Supported | Supported |
-| Stop/Start Server for development | Server can be stopped up to 7 days | Server can be stopped up to 30 days | Supported |
-| Low cost Burstable SKU | Not Supported | Supported | Supported |
-| [**Networking/Security**](concepts-security.md) | | | |
-| Network Connectivity | - Public endpoints with server firewall.<br/> - Private access with Private Link support.|- Public endpoints with server firewall.<br/> - Private access with Virtual Network integration.| - Public endpoints with server firewall.<br/> - Private access with Private Link support.|
-| SSL/TLS | Enabled by default with support for TLS v1.2, 1.1 and 1.0 | Enabled by default with support for TLS v1.2, 1.1 and 1.0| Supported with TLS v1.2, 1.1 and 1.0 |
-| Data Encryption at rest | Supported with customer managed keys (BYOK) | Supported with service managed keys | Not Supported|
-| Azure AD Authentication | Supported | Not Supported | Not Supported|
-| Microsoft Defender for Cloud support | Yes | No | No |
-| Server Audit | Supported | Supported | User Managed |
-| [**Patching & Maintenance**](flexible-server/concepts-maintenance.md) | | |
-| Operating system patching| Automatic | Automatic | User managed |
-| MySQL minor version upgrade | Automatic | Automatic | User managed |
-| MySQL in-place major version upgrade | Supported from 5.6 to 5.7 | Not Supported | User Managed |
-| Maintenance control | System managed | Customer managed | User managed |
-| Maintenance window | Anytime within a 15-hour window | 1-hour window | User managed |
-| Planned maintenance notification | 3 days | 5 days | User managed |
-| [**High Availability**](flexible-server/concepts-high-availability.md) | | | |
-| High availability | Built-in HA (without hot standby)| Built-in HA (without hot standby), Same-zone and zone-redundant HA with hot standby | User managed |
-| Zone redundancy | Not supported | Supported | Supported|
-| Standby zone placement | Not supported | Supported | Supported|
-| Automatic failover | Yes (spins another server)| Yes | User Managed|
-| User initiated Forced failover | No | Yes | User Managed |
-| Transparent Application failover | Yes | Yes | User Managed|
-| [**Replication**](flexible-server/concepts-read-replicas.md) | | | |
-| Support for read replicas | Yes | Yes | User Managed |
-| Number of read replicas supported | 5 | 10 | User Managed |
-| Mode of replication | Asynchronous | Asynchronous | User Managed |
-| GTID support for read replicas | Supported | Supported | User Managed |
-| Cross-region support (geo-replication) | Yes | Not supported | User Managed |
-| Hybrid scenarios | Supported with [Data-in Replication](./concepts-data-in-replication.md)| Supported with [Data-in Replication](./flexible-server/concepts-data-in-replication.md) | User Managed |
-| GTID support for data-in replication | Supported | Supported | User Managed |
-| Data-out replication | Not Supported | In preview | Supported |
-| [**Backup and Recovery**](flexible-server/concepts-backup-restore.md) | | | |
-| Automated backups | Yes | Yes | No |
-| Backup retention | 7-35 days | 1-35 days | User Managed |
-| Long term retention of backups | User Managed | User Managed | User Managed |
-| Exporting backups | Supported using logical backups | Supported using logical backups | Supported |
-| Point in time restore capability to any time within the retention period | Yes | Yes | User Managed |
-| Fast restore point | No | Yes | No |
-| Ability to restore on a different zone | Not supported | Yes | Yes |
-| Ability to restore to a different VNET | No | Yes | Yes |
-| Ability to restore to a different region | Yes (Geo-redundant) | Yes (Geo-redundant) | User Managed |
-| Ability to restore a deleted server | Yes | Yes | No |
-| [**Disaster Recovery**](flexible-server/concepts-business-continuity.md) | | | |
-| DR across Azure regions | Using cross region read replicas, geo-redundant backup | Using geo-redundant backup | User Managed |
-| Automatic failover | No | Not Supported | No |
-| Can use the same r/w endpoint | No | Not Supported | No |
-| [**Monitoring**](flexible-server/concepts-monitoring.md) | | | |
-| Azure Monitor integration & alerting | Supported | Supported | User Managed |
-| Monitoring database operations | Supported | Supported | User Managed |
-| Query Performance Insights | Supported | Supported (using Workbooks)| User Managed |
-| Server Logs | Supported | Supported (using Diagnostics logs) | User Managed |
-| Audit Logs | Supported | Supported | Supported |
-| Error Logs | Not Supported | Supported | Supported |
-| Azure advisor support | Supported | Not Supported | Not Supported |
-| **Plugins** | | | |
-| validate_password | Not Supported | In preview | Supported |
-| caching_sha2_password | Not Supported | In preview | Supported |
-| [**Developer Productivity**](flexible-server/quickstart-create-server-cli.md) | | | |
-| Fleet Management | Supported with Azure CLI, PowerShell, REST, and Azure Resource Manager | Supported with Azure CLI, PowerShell, REST, and Azure Resource Manager | Supported for VMs with Azure CLI, PowerShell, REST, and Azure Resource Manager |
-| Terraform Support | Supported | Supported | Supported |
-| GitHub Actions | Supported | Supported | User Managed |
-
-## Business motivations for choosing PaaS or IaaS
-
-There are several factors that can influence your decision to choose PaaS or IaaS to host your MySQL databases.
-
-### Cost
-
-Cost reduction is often the primary consideration that determines the best solution for hosting your databases. This is true whether you're a startup with little cash or a team in an established company that operates under tight budget constraints. This section describes billing and licensing basics in Azure as they apply to Azure Database for MySQL and MySQL on Azure VMs.
-
-#### Billing
-
-Azure Database for MySQL is currently available as a service in several tiers with different prices for resources. All resources are billed hourly at a fixed rate. For the latest information on the currently supported service tiers, compute sizes, and storage amounts, see [pricing page](https://azure.microsoft.com/pricing/details/mysql/). You can dynamically adjust service tiers and compute sizes to match your application's varied throughput needs. You're billed for outgoing Internet traffic at regular [data transfer rates](https://azure.microsoft.com/pricing/details/data-transfers/).
-
-With Azure Database for MySQL, Microsoft automatically configures, patches, and upgrades the database software. These automated actions reduce your administration costs. Azure Database for MySQL also has [automated backups](./concepts-backup.md) capabilities. These capabilities help you achieve significant cost savings, especially when you have a large number of databases. In contrast, with MySQL on Azure VMs you can choose and run any MySQL version. No matter which MySQL version you use, you pay for the provisioned VM, the storage costs associated with data, backups, monitoring data, and logs, and the costs of the specific MySQL license type in use (if any).
-
-Azure Database for MySQL provides built-in high availability for any kind of node-level interruption while still maintaining the 99.99% SLA guarantee for the service. However, for database high availability within VMs, you use the high availability options like [MySQL replication](https://dev.mysql.com/doc/refman/8.0/en/replication.html) that are available on a MySQL database. Using a supported high availability option doesn't provide an additional SLA. But it does let you achieve greater than 99.99% database availability at additional cost and administrative overhead.
-
-For more information on pricing, see the following articles:
-
-- [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/)
-- [Virtual machine pricing](https://azure.microsoft.com/pricing/details/virtual-machines/)
-- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/)
-
-### Administration
-
-For many businesses, the decision to transition to a cloud service is as much about offloading complexity of administration as it is about cost.
-
-With IaaS, Microsoft:
-
-- Administers the underlying infrastructure.
-- Provides automated patching for underlying hardware and OS.
-
-With PaaS, Microsoft:
-
-- Administers the underlying infrastructure.
-- Provides automated patching for underlying hardware, OS, and database engine.
-- Manages high availability of the database.
-- Automatically performs backups and replicates all data to provide disaster recovery.
-- Encrypts the data at rest and in motion by default.
-- Monitors your server and provides features for query performance insights and performance recommendations.
-
-The following list describes administrative considerations for each option:
-
-- With Azure Database for MySQL, you can continue to administer your database. But you no longer need to manage the database engine, the operating system, or the hardware. Examples of items you can continue to administer include:
-
- - Databases
- - Sign-in
- - Index tuning
- - Query tuning
- - Auditing
- - Security
-
- Additionally, configuring high availability to another data center requires minimal to no configuration or administration.
-
-- With MySQL on Azure VMs, you have full control over the operating system and the MySQL server instance configuration. With a VM, you decide when to update or upgrade the operating system and database software and what patches to apply. You also decide when to install any additional software such as an antivirus application. Some automated features are provided to greatly simplify patching, backup, and high availability. You can control the size of the VM, the number of disks, and their storage configurations. For more information, see [Virtual machine and cloud service sizes for Azure](../virtual-machines/sizes.md).
-
-### Time to move to Azure
-
-- Azure Database for MySQL is the right solution for cloud-designed applications when developer productivity and fast time to market for new solutions are critical. With programmatic, DBA-like functionality, the service is suitable for cloud architects and developers because it lowers the need to manage the underlying operating system and database.
-
-- When you want to avoid the time and expense of acquiring new on-premises hardware, MySQL on Azure VMs is the right solution for applications that require granular control and customization of the MySQL engine beyond what the service supports, or that require access to the underlying OS. This solution is also suitable for migrating existing on-premises applications and databases to Azure intact, for cases where Azure Database for MySQL is a poor fit.
-
-Because there's no need to change the presentation, application, and data layers, you save time and budget on rearchitecting your existing solution. Instead, you can focus on migrating all your solutions to Azure and addressing some performance optimizations that the Azure platform might require.
-
-## Next steps
-
-- See [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/MySQL/).
-- Get started by [creating your first server](./quickstart-create-mysql-server-database-using-azure-portal.md).
mysql Single Server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server-overview.md
- Title: Overview - Azure Database for MySQL Single Server
-description: Learn about the Azure Database for MySQL Single server, a relational database service in the Microsoft cloud based on the MySQL Community Edition.
- Previously updated: 6/19/2021
-# Azure Database for MySQL Single Server
--
-[Azure Database for MySQL](overview.md) powered by the MySQL community edition is available in two deployment modes:
-
-- Flexible Server
-- Single Server
-
-In this article, we'll provide an overview and introduction to core concepts of the Single Server deployment model. To learn about the Flexible Server deployment mode, refer to the [flexible server overview](flexible-server/index.yml). For information on how to decide which deployment option is appropriate for your workload, see [choosing the right MySQL server option in Azure](select-right-deployment-type.md).
-
-## Overview
-Azure Database for MySQL Single Server is a fully managed database service designed for minimal customization. The single server platform handles most database management functions, such as patching, backups, high availability, and security, with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability within a single availability zone. It supports the community versions of MySQL 5.6 (retired), 5.7, and 8.0. The service is generally available in a wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
-
-Single Server is best suited **only for existing applications that already use it**. For all new developments or migrations, Flexible Server is the recommended deployment option. To learn about the differences between the Flexible Server and Single Server deployment options, refer to the [select the right deployment option for you](select-right-deployment-type.md) documentation.
-
-## High availability
-
-The Single Server deployment model is optimized for built-in high availability and elasticity at reduced cost. The architecture separates compute and storage. The database engine runs on a proprietary compute container, while data files reside on Azure storage. The storage maintains three locally redundant, synchronous copies of the database files, ensuring data durability.
-
-During planned or unplanned failover events, if the server goes down, the service maintains high availability of the servers by using the following automated procedure:
-
-1. A new compute container is provisioned
-2. The storage with data files is mapped to the new container
-3. The MySQL database engine is brought online on the new compute container
-4. The gateway service provides transparent failover, so no application-side changes are required.
-
-The typical failover time ranges from 60 to 120 seconds. The cloud-native design of Single Server allows it to support 99.99% availability while eliminating the cost of a passive hot standby.
-
-Azure's industry leading 99.99% availability service level agreement (SLA), powered by a global network of Microsoft-managed datacenters, helps keep your applications running 24/7.
--
-## Automated Patching
-
-The service performs automated patching of the underlying hardware, OS, and database engine. The patching includes security and software updates. For the MySQL engine, minor version upgrades are automatic and included as part of the patching cycle. No user action or configuration is required for patching. The patching frequency is service managed, based on the criticality of the payload. In general, the service follows a monthly release schedule as part of continuous integration and release. Users can subscribe to the [planned maintenance notification](concepts-monitoring.md) to receive notification of upcoming maintenance 72 hours before the event.
-
-## Automatic Backups
-
-Single Server automatically creates server backups and stores them in user-configured locally redundant or geo-redundant storage. Backups can be used to restore your server to any point in time within the backup retention period. The default backup retention period is seven days, and retention can optionally be configured for up to 35 days. All backups are encrypted using AES 256-bit encryption. Refer to [Backups](concepts-backup.md) for details.
-
-## Adjust performance and scale within seconds
-
-Single Server is available in three SKU tiers: Basic, General Purpose, and Memory Optimized. The Basic tier is best suited for low-cost development and low-concurrency workloads. The General Purpose and Memory Optimized tiers are better suited for production workloads that require high concurrency, scale, and predictable performance. You can build your first app on a small database for a few dollars a month, and then adjust the scale to meet the needs of your solution. Storage scaling is online and supports storage autogrowth. Dynamic scalability enables your database to respond transparently to rapidly changing resource requirements. You only pay for the resources you consume. See [Pricing tiers](./concepts-pricing-tiers.md) for details.
-
-## Enterprise grade Security, Compliance, and Governance
-
-Single Server uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, and temporary files created while running queries are encrypted. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys can be system managed (default) or [customer managed](concepts-data-encryption-mysql.md). The service encrypts data in-motion with transport layer security (SSL/TLS) enforced by default. The service supports TLS versions 1.2, 1.1 and 1.0 with an ability to enforce [minimum TLS version](concepts-ssl-connection-security.md).
-
-The service allows private access to the servers using [private link](concepts-data-access-security-private-link.md) and offers threat protection through the optional [Microsoft Defender for open-source relational databases](../security-center/defender-for-databases-introduction.md) plan. Microsoft Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases.
-
-In addition to native authentication, Single Server supports [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) authentication. Azure AD authentication is a mechanism of connecting to the MySQL servers using identities defined and managed in Azure AD. With Azure AD authentication, you can manage database user identities and other Azure services in a central location, which simplifies and centralizes access control.
-
-[Audit logging](concepts-audit-logs.md) is available to track all database level activity.
-
-Single Server is compliant with industry-leading certifications such as FedRAMP, HIPAA, and PCI DSS. Visit the [Azure Trust Center](https://www.microsoft.com/trustcenter/security) for information about Azure's platform security.
-
-For more information about Azure Database for MySQL security features, see the [security overview](concepts-security.md).
-
-## Monitoring and alerting
-
-Single Server is equipped with built-in performance monitoring and alerting features. All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics. The service allows configuring slow query logs and comes with a differentiated [Query store](concepts-query-store.md) feature. Query Store simplifies performance troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Using these tools, you can quickly optimize your workloads, and configure your server for best performance. See [Monitoring](concepts-monitoring.md) for details.
-
-## Migration
-
-The service runs the community version of MySQL, which allows full application compatibility and requires minimal refactoring to migrate an existing application developed on the MySQL engine to Single Server. Migration to Single Server can be performed using one of the following options:
-
-- **Dump and Restore** – For offline migrations, where users can afford some downtime, dump and restore using community tools like mysqldump/mydumper provides the fastest way to migrate. See [Migrate using dump and restore](concepts-migrate-dump-restore.md) for details.
-- **Azure Database Migration Service** – For seamless, simplified offline migrations to Single Server with high-speed data migration, use [Azure Database Migration Service](../dms/tutorial-mysql-azure-mysql-offline-portal.md).
-- **Data-in replication** – For minimal-downtime migrations, data-in replication, which relies on binlog-based replication, can also be used. Data-in replication is preferred for minimal-downtime migrations by hands-on experts who want more control over the migration. See [data-in replication](concepts-data-in-replication.md) for details.
-
-## Contacts
-
-For any questions or suggestions you might have about working with Azure Database for MySQL, send an email to the Azure Database for MySQL Team ([@Ask Azure DB for MySQL](mailto:AskAzureDBforMySQL@service.microsoft.com)). This email address isn't a technical support alias.
-
-In addition, consider the following points of contact as appropriate:
-
-- To contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
-- To fix an issue with your account, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
-- To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/d365community/forum/47b1e71d-ee24-ec11-b6e6-000d3a4f0da0).
-
-## Next steps
-
-Now that you've read an introduction to Azure Database for MySQL - Single Server deployment mode, you're ready to:
--- Create your first server.
- - [Create an Azure Database for MySQL server using Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md)
- - [Create an Azure Database for MySQL server using Azure CLI](quickstart-create-mysql-server-database-using-azure-cli.md)
- - [Azure CLI samples for Azure Database for MySQL](sample-scripts-azure-cli.md)
--- Build your first app using your preferred language:
- - [Python](./connect-python.md)
- - [Node.JS](./connect-nodejs.md)
- - [Java](./connect-java.md)
- - [Ruby](./connect-ruby.md)
- - [PHP](./connect-php.md)
- - [.NET (C#)](./connect-csharp.md)
- - [Go](./connect-go.md)
mysql Single Server Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server-whats-new.md
- Title: What's new in Azure Database for MySQL Single Server
-description: Learn about recent updates to Azure Database for MySQL - Single server, a relational database service in the Microsoft cloud based on the MySQL Community Edition.
- Previously updated: 06/17/2021
-# What's new in Azure Database for MySQL - Single Server?
--
-Azure Database for MySQL is a relational database service in the Microsoft cloud. The service is based on the [MySQL Community Edition](https://www.mysql.com/products/community/) (available under the GPLv2 license) database engine and supports versions 5.6 (retired), 5.7, and 8.0. [Azure Database for MySQL - Single Server](./overview.md#azure-database-for-mysqlsingle-server) is a deployment mode that provides a fully managed database service with minimal customization requirements. The Single Server platform is designed to handle most database management functions, such as patching, backups, high availability, and security, all with minimal user configuration and control.
-
-This article summarizes new releases and features in Azure Database for MySQL - Single Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first.
-
-## May 2022
-
-**Enabled the ability to change the server parameter `innodb_ft_server_stopword_table` from the portal/CLI**
-
-Users can now change the value of the `innodb_ft_server_stopword_table` parameter by using the Azure portal and CLI. This parameter lets you configure your own InnoDB FULLTEXT index stopword list for all InnoDB tables. For more information, see [innodb_ft_server_stopword_table](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_ft_server_stopword_table).
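-
-For context, a custom stopword table must be an InnoDB table with a single `VARCHAR` column named `value`. A minimal sketch (the database and table names here are hypothetical):
-
-```sql
--- Create and populate a custom stopword table.
-CREATE TABLE mydb.my_stopwords (value VARCHAR(30)) ENGINE = INNODB;
-INSERT INTO mydb.my_stopwords (value) VALUES ('the'), ('and'), ('or');
--- Then set innodb_ft_server_stopword_table to 'mydb/my_stopwords' from the
--- Azure portal or CLI, and rebuild your FULLTEXT indexes to apply the list.
-```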
-
-## March 2022
-
-This release of Azure Database for MySQL - Single Server includes the following updates.
-
-**Bug Fixes**
-
-The MySQL 8.0.27 client and newer versions are now compatible with Azure Database for MySQL - Single Server.
-
-## February 2022
-
-This release of Azure Database for MySQL - Single Server includes the following updates.
-
-**Known Issues**
-
-Customers in the Japan and East US regions received two maintenance notification emails this month. The email notification sent for *05-Feb-2022* was sent by mistake, and no changes will be made to the service on that date. You can safely ignore it. We apologize for the inconvenience.
-
-## December 2021
-
-This release of Azure Database for MySQL - Single Server includes the following updates:
-- **Query Text removed in Query Performance Insight to avoid unauthorized access**
-
-Starting December 2021, you won't be able to see the query text of queries in the Query Performance Insight blade in the Azure portal. The query text is removed to avoid unauthorized access to the query text or the underlying schema, which can pose a security risk. The recommended steps to view the query text are:
-
-- Identify the query_id of the top queries from the Query Performance Insight blade in the Azure portal
-- Sign in to your Azure Database for MySQL server from MySQL Workbench, the mysql.exe client, or your preferred query tool, and execute the following queries:
-
- ```sql
- SELECT * FROM mysql.query_store WHERE query_id = '<query_id from the Query Performance Insight blade>'; -- for queries in Query Store
- SELECT * FROM mysql.query_store_wait_stats WHERE query_id = '<query_id from the Query Performance Insight blade>'; -- for wait statistics
- ```
-
-- Browse the `query_digest_text` column to identify the query text for the corresponding `query_id`
-
-These steps ensure that only authenticated and authorized users have secure access to the query text.
-
-## October 2021
-
-- **Known Issues**
-
-The MySQL 8.0.27 client is incompatible with Azure Database for MySQL - Single Server. All connections from the MySQL 8.0.27 client, created via either mysql.exe or MySQL Workbench, will fail. As a workaround, consider using an earlier version of the client (prior to MySQL 8.0.27) or creating an instance of [Azure Database for MySQL - Flexible Server](./flexible-server/overview.md) instead.
-
-## June 2021
-
-This release of Azure Database for MySQL - Single Server includes the following updates.
-
-- **Enabled the ability to change the server parameter `activate_all_roles_on_login` from the portal/CLI for MySQL 8.0**
-
- Users can now change the value of the activate_all_roles_on_login parameter using the Azure portal and CLI. This parameter helps to configure whether to enable automatic activation of all granted roles when users sign in to the server. For more information, see [Server System Variables](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html).
-
-- **Addressed MySQL Community Bugs #29596969 and #94668**
-
- This release addresses an issue with the default expression being ignored in a CREATE TABLE query if the field was marked as PRIMARY KEY for MySQL 8.0. (MySQL Community Bug #29596969, Bug #94668). For more information, see [MySQL Bugs: #94668: Expression Default is made NULL during CREATE TABLE query, if field is made PK](https://bugs.mysql.com/bug.php?id=94668)
-
-- **Addressed an issue with duplicate table names in "SHOW TABLE" query**
-
- We've introduced a new function to give fine-grained control of the table cache during table operations. Because of a code defect in the new feature, an entry in the directory cache might be misconfigured or incorrectly added, causing unexpected behavior such as returning two tables with the same name. The directory cache applies only to "SHOW TABLE"-related queries; it doesn't impact any DML or DDL queries. This issue is completely resolved in this release.
-
-- **Increased the default value for the server parameter `max_heap_table_size` to help reduce temp table spills to disk**
-
- With this release, the max allowed value for the parameter `max_heap_table_size` has been changed to 8589934592 for General Purpose 64 vCore and Memory Optimize 32 vCore.
-
-- **Addressed an issue with setting the value of the parameter `sql_require_primary_key` from the portal**
-
- Users can now modify the value of the parameter `sql_require_primary_key` directly from the Azure portal.
-
-- **General Availability of planned maintenance notification**
-
- This release provides General Availability of planned maintenance notifications in Azure Database for MySQL - Single Server. For more information, see the article [Planned maintenance notification](concepts-planned-maintenance-notification.md).
-
-- **Enabled the parameter `redirect_enabled` by default**
-
- With this release, the parameter `redirect_enabled` will be enabled by default. Redirection aims to reduce network latency between client applications and MySQL servers by allowing applications to connect directly to backend server nodes. Support for redirection in PHP applications is available through the [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension, developed by Microsoft. For more information, see the article [Connect to Azure Database for MySQL with redirection](howto-redirection.md).
-
->[!Note]
-> * Redirection does not work with Private Link setups. If you're using Private Link for Azure Database for MySQL, you might encounter connection issues. To resolve the issue, make sure the parameter redirect_enabled is set to "OFF" and the client application is restarted.<br/>
-> * If you have a PHP application that uses the mysqlnd_azure redirection driver to connect to Azure Database for MySQL (with redirection enabled by default), you might face a data encoding issue that impacts your insert transactions.<br/>
-> To resolve this issue, either:
-> - In the Azure portal, disable redirection by setting the redirect_enabled parameter to "OFF", and restart the PHP application to clear the driver cache after the change.
-> - Explicitly set the charset-related parameters at the session level, based on your settings after the connection is established (for example, `SET NAMES utf8mb4`).
-
-## February 2021
-
-This release of Azure Database for MySQL - Single Server includes the following updates.
-
-- Added new stored procedures to support the global transaction identifier (GTID) for data-in replication on version 5.7 and 8.0 Large Storage servers.
-- Updated the supported MySQL versions to 5.6.50 and 5.7.32.
-
-## January 2021
-
-This release of Azure Database for MySQL - Single Server includes the following updates.
-
-- Enabled "reset password" to automatically fix the first admin permission.
-- Exposed the `auto_increment_increment`/`auto_increment_offset` server parameters and `session_track_gtids`.
-- Added new stored procedures to control InnoDB buffer pool dump/restore.
-- Exposed the InnoDB warm-up-related server parameters for Large Storage servers.
-
-## Contacts
-
-If you have questions about or suggestions for working with Azure Database for MySQL, contact the Azure Database for MySQL Team ([@Ask Azure DB for MySQL](mailto:AskAzureDBforMySQL@service.microsoft.com)). This email address isn't a technical support alias.
-
-In addition, consider the following points of contact as appropriate:
-
-- To contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
-- To fix an issue with your account, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
-- To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/d365community/forum/47b1e71d-ee24-ec11-b6e6-000d3a4f0da0).
-
-## Next steps
-
-- Learn more about [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/server/).
-- Browse the [public documentation](./single-server/index.yml) for Azure Database for MySQL – Single Server.
-- Review details on [troubleshooting common errors](./howto-troubleshoot-common-errors.md).
mysql App Development Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/app-development-best-practices.md
+
+ Title: App development best practices - Azure Database for MySQL
+description: Learn about best practices for building an app by using Azure Database for MySQL.
+ Last updated: 08/11/2020
+# Best practices for building an application with Azure Database for MySQL
++
+Here are some best practices to help you build a cloud-ready application by using Azure Database for MySQL. These best practices can reduce development time for your app.
+
+## Configuration of application and database resources
+
+### Keep the application and database in the same region
+
+Make sure all your dependencies are in the same region when deploying your application in Azure. Spreading instances across regions or availability zones creates network latency, which might affect the overall performance of your application.
+
+### Keep your MySQL server secure
+
+Configure your MySQL server to be [secure](./concepts-security.md) and not accessible publicly. Use one of these options to secure your server:
+
+- [Firewall rules](./concepts-firewall-rules.md)
+- [Virtual networks](./concepts-data-access-and-security-vnet.md)
+- [Azure Private Link](./concepts-data-access-security-private-link.md)
+
+For security, you must always connect to your MySQL server over SSL and configure your MySQL server and your application to use TLS 1.2. See [How to configure SSL/TLS](./concepts-ssl-connection-security.md).
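+
+As a quick check from an active session, you can confirm the connection is encrypted. (A hedged sketch; the exact cipher and version reported vary by client and configuration.)
+
+```sql
+-- Both return non-empty values when the current connection uses SSL/TLS.
+SHOW STATUS LIKE 'Ssl_cipher';
+SHOW STATUS LIKE 'Ssl_version';
+```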
+
+### Use advanced networking with AKS
+
+When accelerated networking is enabled on a VM, there is lower latency, reduced jitter, and decreased CPU utilization on the VM. To learn more, see [Best practices for Azure Kubernetes Service and Azure Database for MySQL](concepts-aks.md).
+
+### Tune your server parameters
+
+For read-heavy workloads, tuning the server parameters `tmp_table_size` and `max_heap_table_size` can help optimize for better performance. To calculate the values required for these variables, look at the total per-connection memory values and the base memory. The sum of the per-connection memory parameters, excluding `tmp_table_size`, combined with the base memory, accounts for the total memory of the server.
+
+To calculate the largest possible size of `tmp_table_size` and `max_heap_table_size`, use the following formula:
+
+`(total memory - (base memory + (sum of per-connection memory * # of connections))) / # of connections`
+
+> [!NOTE]
+> Total memory indicates the total amount of memory that the server has across the provisioned vCores. For example, in a General Purpose two-vCore Azure Database for MySQL server, the total memory will be 5 GB * 2. You can find more details about memory for each tier in the [pricing tier](./concepts-pricing-tiers.md) documentation.
+>
+> Base memory indicates the memory variables, like `query_cache_size` and `innodb_buffer_pool_size`, that MySQL will initialize and allocate at server start. Per-connection memory, like `sort_buffer_size` and `join_buffer_size`, is memory that's allocated only when a query needs it.
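+
+To make the formula concrete, here's a worked example with assumed numbers (the memory figures are illustrative, not measured values):
+
+```sql
+-- Assume a General Purpose 2-vCore server: 5 GB * 2 = 10 GB total memory.
+-- Assume ~2 GB base memory, ~20 MB per-connection memory, and 200 connections:
+--   (10 GB - (2 GB + 20 MB * 200)) / 200 = 4 GB / 200 = ~20 MB per connection.
+-- Inspect the current values before changing the server parameters:
+SHOW VARIABLES LIKE 'tmp_table_size';
+SHOW VARIABLES LIKE 'max_heap_table_size';
+```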
+
+### Create non-admin users
+
+[Create non-admin users](./how-to-create-users.md) for each database. Typically, the user names are identified as the database names.
+
+### Reset your password
+
+You can [reset your password](./how-to-create-manage-server-portal.md#update-admin-password) for your MySQL server by using the Azure portal.
+
+Resetting your server password for a production database can bring down your application. It's a good practice to reset the password for any production workloads at off-peak hours to minimize the impact on your application's users.
+
+## Performance and resiliency
+
+Here are a few tools and practices that you can use to help debug performance issues with your application.
+
+### Enable slow query logs to identify performance issues
+
+You can enable [slow query logs](./concepts-server-logs.md) and [audit logs](./concepts-audit-logs.md) on your server. Analyzing slow query logs can help identify performance bottlenecks for troubleshooting.
+
+Audit logs are also available through Azure Diagnostics logs in Azure Monitor logs, Azure Event Hubs, and storage accounts. See [How to troubleshoot query performance issues](./how-to-troubleshoot-query-performance.md).
+
+### Use connection pooling
+
+Managing database connections can have a significant impact on the performance of the application as a whole. To optimize performance, you must reduce the number of times that connections are established and the time for establishing connections in key code paths. Use [connection pooling](./concepts-connectivity.md#access-databases-by-using-connection-pooling-recommended) to connect to Azure Database for MySQL to improve resiliency and performance.
+
+You can use the [ProxySQL](https://proxysql.com/) connection pooler to efficiently manage connections. Using a connection pooler can decrease idle connections and reuse existing connections, which helps you avoid exhausting the server's connection limit. See [How to set up ProxySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/connecting-efficiently-to-azure-database-for-mysql-with-proxysql/ba-p/1279842) to learn more.
+
+### Retry logic to handle transient errors
+
+Your application might experience [transient errors](./concepts-connectivity.md#handling-transient-errors) where connections to the database are dropped or lost intermittently. In such situations, the server is typically up and running again within 5 to 10 seconds, so the connection can be re-established after one or two retries.
+
+A good practice is to wait 5 seconds before your first retry and then increase the wait gradually with each subsequent retry, up to 60 seconds. Limit the maximum number of retries; when the limit is reached, your application should consider the operation failed so that you can investigate further. See [How to troubleshoot connection errors](./how-to-troubleshoot-common-connection-issues.md) to learn more.
+
+### Enable read replication to mitigate failovers
+
+You can use [Data-in Replication](./how-to-data-in-replication.md) for failover scenarios. When you're using read replicas, no automated failover between source and replica servers occurs.
+
+You'll notice a lag between the source and the replica because the replication is asynchronous. Network lag can be influenced by many factors, like the size of the workload running on the source server and the latency between datacenters. In most cases, replica lag ranges from a few seconds to a couple of minutes.
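+
+Because the lag varies with the workload, it's worth measuring it rather than guessing. One option (a sketch, assuming you can connect to the replica directly) is to check replication status on the replica:
+
+```sql
+-- Run on the replica; the Seconds_Behind_Master column reports the current lag.
+SHOW SLAVE STATUS;
+```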
+
+## Database deployment
+
+### Configure an Azure database for MySQL task in your CI/CD deployment pipeline
+
+Occasionally, you need to deploy changes to your database. In such cases, you can use continuous integration (CI) and continuous delivery (CD) through [Azure Pipelines](https://azure.microsoft.com/services/devops/pipelines/) and use a task for [your MySQL server](/azure/devops/pipelines/tasks/deploy/azure-mysql-deployment) to update the database by running a custom script against it.
+
+### Use an effective process for manual database deployment
+
+During manual database deployment, follow these steps to minimize downtime or reduce the risk of failed deployment:
+
+1. Create a copy of a production database on a new database by using [mysqldump](https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html) or [MySQL Workbench](https://dev.mysql.com/doc/workbench/en/wb-admin-export-import-management.html).
+2. Update the new database with your new schema changes or updates needed for your database.
+3. Put the production database in a read-only state. You should not have write operations on the production database until deployment is completed.
+4. Test your application with the newly updated database from step 1.
+5. Deploy your application changes and make sure the application is now using the new database that has the latest updates.
+6. Keep the old production database so that you can roll back the changes. You can then evaluate to either delete the old production database or export it on Azure Storage if needed.
+
+> [!NOTE]
+> If the application is like an e-commerce app and you can't put it in a read-only state, deploy the changes directly on the production database after making a backup. These changes should occur during off-peak hours with low traffic to the app to minimize the impact, because some users might experience failed requests.
+>
+> Make sure your application code also handles any failed requests.
+
+### Use MySQL native metrics to see if your workload is exceeding in-memory temporary table sizes
+
+With a read-heavy workload, queries running against your MySQL server might exceed the in-memory temporary table sizes. A read-heavy workload can cause your server to switch to writing temporary tables to disk, which affects the performance of your application. To determine if your server is writing to disk as a result of exceeding temporary table size, look at the following metrics:
+
+```sql
+show global status like 'created_tmp_disk_tables';
+show global status like 'created_tmp_tables';
+```
+
+The `created_tmp_disk_tables` metric indicates how many temporary tables were created on disk. The `created_tmp_tables` metric tells you how many temporary tables were created in memory, given your workload. To determine whether a specific query will use temporary tables, run the [EXPLAIN](https://dev.mysql.com/doc/refman/8.0/en/explain.html) statement on the query. The `Extra` column shows `Using temporary` if the query will run using temporary tables.
+
+To calculate the percentage of your workload with queries spilling to disks, use your metric values in the following formula:
+
+`(created_tmp_disk_tables / (created_tmp_disk_tables + created_tmp_tables)) * 100`
+
+Ideally, this percentage should be less than 25%. If you see that the percentage is 25% or greater, we suggest modifying two server parameters, `tmp_table_size` and `max_heap_table_size`.
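+
+As a hedged sketch, the same percentage can be computed in a single query, assuming MySQL 5.7 or later, where these counters are exposed through `performance_schema.global_status`:
+
+```sql
+-- Percentage of temporary tables that spilled to disk, per the formula above.
+SELECT (disk.VARIABLE_VALUE /
+        (disk.VARIABLE_VALUE + mem.VARIABLE_VALUE)) * 100 AS tmp_disk_spill_pct
+FROM performance_schema.global_status AS disk
+JOIN performance_schema.global_status AS mem
+    ON mem.VARIABLE_NAME = 'Created_tmp_tables'
+WHERE disk.VARIABLE_NAME = 'Created_tmp_disk_tables';
+```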
+
+## Database schema and queries
+
+Here are a few tips to keep in mind when you build your database schema and your queries.
+
+### Use the right datatype for your table columns
+
+Using the right datatype based on the type of data you want to store can optimize storage and reduce errors that can occur because of incorrect datatypes.
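+
+For example, here's a hypothetical `payments` table that picks compact, purpose-fit types (the table and column names are illustrative):
+
+```sql
+CREATE TABLE payments (
+    id       BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
+    amount   DECIMAL(10,2) NOT NULL,   -- exact arithmetic for money; avoid FLOAT/DOUBLE
+    currency CHAR(3) NOT NULL,         -- fixed-length ISO code instead of VARCHAR(255)
+    paid_at  DATETIME NOT NULL,        -- a real temporal type instead of a string
+    status   TINYINT UNSIGNED NOT NULL -- small enumerated code instead of INT
+);
+```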
+
+### Use indexes
+
+To avoid slow queries, use indexes. Indexes help MySQL find rows with specific column values quickly. See [How to use indexes in MySQL](https://dev.mysql.com/doc/refman/8.0/en/mysql-indexes.html).
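+
+As a minimal sketch, assuming a hypothetical `orders` table that's frequently filtered by customer:
+
+```sql
+-- A secondary index lets lookups by customer avoid a full table scan.
+CREATE INDEX idx_orders_customer_id ON orders (customer_id);
+```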
+
+### Use EXPLAIN for your SELECT queries
+
+Use the `EXPLAIN` statement to get insight into what MySQL is doing to run your query. It can help you detect bottlenecks or issues with your query. See [How to use EXPLAIN to profile query performance](how-to-troubleshoot-query-performance.md).
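+
+For example (the table and query are illustrative):
+
+```sql
+-- Inspect the execution plan without running the query for real.
+EXPLAIN SELECT id, total
+FROM orders
+WHERE customer_id = 1042
+ORDER BY created_at DESC;
+-- In the output, check 'type' (ALL means a full table scan), 'key' (the index
+-- chosen, if any), 'rows' (estimated rows examined), and 'Extra' (for example,
+-- 'Using temporary' or 'Using filesort').
+```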
mysql Concept Monitoring Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-monitoring-best-practices.md
+
+ Title: Monitoring best practices - Azure Database for MySQL
+description: This article describes the best practices to monitor your Azure Database for MySQL.
+Last updated: 11/23/2020
+# Best practices for monitoring Azure Database for MySQL - Single server
++
+Learn about the best practices that can be used to monitor your database operations and ensure that the performance is not compromised as data size grows. As we add new capabilities to the platform, we will continue to refine the best practices detailed in this section.
+
+## Layout of the current monitoring toolkit
+
+Azure Database for MySQL provides tools and methods you can use to easily monitor usage, add or remove resources (such as CPU, memory, or I/O), troubleshoot potential problems, and improve the performance of a database. You can [monitor performance metrics](concepts-monitoring.md#metrics) on a regular basis to see the average, maximum, and minimum values for a variety of time ranges.
+
+You can [set up alerts](how-to-alert-on-metric.md#create-an-alert-rule-on-a-metric-from-the-azure-portal) for a metric threshold, so that you're informed when the server reaches those limits and can take appropriate action.
+
+Monitor the database server to make sure that the resources assigned to the database can handle the application workload. If the database is hitting resource limits, consider:
+
+* Identifying and optimizing the top resource-consuming queries.
+* Adding more resources by upgrading the service tier.
+
+### CPU utilization
+
+Monitor CPU usage to see whether the database is exhausting CPU resources. If CPU usage is 90% or more, scale up your compute by increasing the number of vCores or moving to the next pricing tier. Make sure that throughput or concurrency is as expected as you scale the CPU up or down.
+
+### Memory
+
+The amount of memory available for the database server is proportional to the [number of vCores](concepts-pricing-tiers.md). Make sure the memory is enough for the workload. Load test your application to verify the memory is sufficient for read and write operations. If database memory consumption frequently grows beyond a defined threshold, you should upgrade your instance by increasing vCores or moving to a higher performance tier. Use [Query Store](concepts-query-store.md) and [Query Performance Recommendations](concepts-performance-recommendations.md) to identify the longest-running and most frequently executed queries, and explore opportunities to optimize them.
+
+### Storage
+
+The [amount of storage](how-to-create-manage-server-portal.md#scale-compute-and-storage) provisioned for the MySQL server determines the IOPS for your server. The storage used by the service includes the database files, transaction logs, the server logs, and backup snapshots. Ensure that the consumed disk space does not consistently exceed 85 percent of the total provisioned disk space. If it does, delete or archive data from the database server to free up space.
+
+### Network traffic
+
+**Network Receive Throughput, Network Transmit Throughput**: The rate of network traffic to and from the MySQL instance, in megabytes per second. Evaluate the throughput requirements for your server, and constantly monitor the traffic if throughput is lower than expected.
+
+### Database connections
+
+**Database Connections**: The number of client sessions connected to the Azure Database for MySQL should be aligned with the [connection limits for the selected SKU](concepts-server-parameters.md#max_connections) size.
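+
+To compare current usage against the configured limit, the standard MySQL counters can be queried (a quick sketch):
+
+```sql
+SHOW VARIABLES LIKE 'max_connections';   -- the configured limit for your SKU
+SHOW STATUS LIKE 'Threads_connected';    -- connections open right now
+SHOW STATUS LIKE 'Max_used_connections'; -- high-water mark since the last restart
+```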
+
+## Next steps
+
+* [Best practice for performance of Azure Database for MySQL](concept-performance-best-practices.md)
+* [Best practice for server operations using Azure Database for MySQL](concept-operation-excellence-best-practices.md)
mysql Concept Operation Excellence Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-operation-excellence-best-practices.md
+
+ Title: MySQL server operational best practices - Azure Database for MySQL
+description: This article describes the best practices to operate your MySQL database on Azure.
+Last updated: 11/23/2020
+# Best practices for server operations on Azure Database for MySQL - Single server
++
+Learn about the best practices for working with Azure Database for MySQL. As we add new capabilities to the platform, we will continue to focus on refining the best practices detailed in this section.
+
+## Azure Database for MySQL Operational Guidelines
+
+The following are operational guidelines that should be followed when working with your Azure Database for MySQL to improve the performance of your database:
+
+* **Co-location**: To reduce network latency, place the client and the database server in the same Azure region.
+
+* **Monitor your memory, CPU, and storage usage**: You can [set up alerts](how-to-alert-on-metric.md) to notify you when usage patterns change or when you approach the capacity of your deployment, so that you can maintain system performance and availability.
+
+* **Scale up your DB instance**: You can [scale up](how-to-create-manage-server-portal.md) when you are approaching storage capacity limits. Keep some buffer in storage and memory to accommodate unforeseen increases in demand from your applications. You can also turn on the [storage autogrow](how-to-auto-grow-storage-portal.md) feature to ensure that the service automatically scales storage as it nears the storage limits.
+
+* **Configure backups**: Enable [local or geo-redundant backups](how-to-restore-server-portal.md#set-backup-configuration) based on the requirements of the business. You can also modify the retention period to control how long backups are available for business continuity.
+
+* **Increase I/O capacity**: If your database workload requires more I/O than you have provisioned, recovery or other transactional operations for your database will be slow. To increase the I/O capacity of a server instance, do any or all of the following:
+
+ * Azure Database for MySQL provides IOPS scaling at the rate of three IOPS per GB of storage provisioned. [Increase the provisioned storage](how-to-create-manage-server-portal.md#scale-storage-up) to scale the IOPS for better performance.
+
+ * If you are already using Provisioned IOPS storage, provision [additional throughput capacity](how-to-create-manage-server-portal.md#scale-storage-up).
+
+* **Scale compute**: A database workload can also be limited by CPU or memory, and this can have a serious impact on transaction processing. Note that compute (pricing tier) can be scaled up or down only between the [General Purpose and Memory Optimized](concepts-pricing-tiers.md) tiers.
+
+* **Test for failover**: Manually test failover for your server instance to understand how long the process takes for your use case and to ensure that the application that accesses your server instance can automatically connect to the new server instance after failover.
+
+* **Use a primary key**: Make sure your tables have a primary or unique key as you operate on Azure Database for MySQL. This helps considerably with operations such as taking backups and creating replicas, and it improves performance.
+
+* **Configure TTL value**: If your client application is caching the Domain Name Service (DNS) data of your server instances, set a time-to-live (TTL) value of less than 30 seconds. Because the underlying IP address of a server instance can change after a failover, caching the DNS data for an extended time can lead to connection failures if your application tries to connect to an IP address that no longer is in service.
+
+* Use connection pooling to avoid hitting the [maximum connection limits](concepts-server-parameters.md#max_connections), and use retry logic to avoid intermittent connection issues.
+
+* If you are using a replica, use [ProxySQL to load balance](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/scaling-an-azure-database-for-mysql-workload-running-on/ba-p/1105847) between the primary server and the readable secondary replica server. The linked article includes the setup steps.
+
+* When provisioning the resource, make sure you [enable autogrow](how-to-auto-grow-storage-portal.md) for your Azure Database for MySQL. Autogrow doesn't add any additional cost, and it protects the database from storage bottlenecks that you might run into.
++
+### Using InnoDB with Azure Database for MySQL
+
+* The system tablespace data file (`ibdata1`) can't shrink and can't be purged by dropping data from a table or by moving the table to file-per-table tablespaces.
+
+* For a database greater than 1 TB in size, you should create tables in the **innodb_file_per_table** tablespace. For a single table that is larger than 1 TB, you should [partition](https://dev.mysql.com/doc/refman/5.7/en/partitioning.html) the table.
+
+* For a server that has a large number of tablespaces, engine startup will be very slow because of the sequential tablespace scan during MySQL startup or failover.
+
+* Set `innodb_file_per_table = ON` before you create a table if the total number of tables is less than 500.
+
+* If you have more than 500 tables in a database, review the size of each individual table. For a large table, you should still consider using the file-per-table tablespace to prevent the system tablespace file from hitting the maximum storage limit.
+
+> [!NOTE]
+> For tables smaller than 5 GB in size, consider using the system tablespace:
+> ```sql
+> CREATE TABLE tbl_name ... TABLESPACE = innodb_system;
+> ```
+
+* [Partition](https://dev.mysql.com/doc/refman/5.7/en/partitioning.html) your table at creation time if you have a very large table that might grow beyond 1 TB, as shown in the example after this list.
+
+* If you have around 10,000 tables or more, use multiple MySQL servers and spread the tables across those servers; avoid putting too many tables on a single server.
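+
+Here's a hedged sketch of partitioning at creation time (the schema is illustrative; the partitioning key must be part of every unique key, which is why `created_at` is included in the primary key):
+
+```sql
+CREATE TABLE events (
+    id         BIGINT UNSIGNED NOT NULL,
+    created_at DATE NOT NULL,
+    payload    TEXT,
+    PRIMARY KEY (id, created_at)
+)
+PARTITION BY RANGE (YEAR(created_at)) (
+    PARTITION p2020 VALUES LESS THAN (2021),
+    PARTITION p2021 VALUES LESS THAN (2022),
+    PARTITION pmax  VALUES LESS THAN MAXVALUE
+);
+```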
+
+## Next steps
+- [Best practice for performance of Azure Database for MySQL](concept-performance-best-practices.md)
+- [Best practice for monitoring your Azure Database for MySQL](concept-monitoring-best-practices.md)
mysql Concept Performance Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-performance-best-practices.md
+
+ Title: Performance best practices - Azure Database for MySQL
+description: This article describes some recommendations to monitor and tune performance for your Azure Database for MySQL.
+Last updated: 1/28/2021
+# Best practices for optimal performance of your Azure Database for MySQL - Single server
++
+Learn how to get best performance while working with your Azure Database for MySQL - Single server. As we add new capabilities to the platform, we will continue refining our recommendations in this section.
+
+## Physical Proximity
+
+Make sure you deploy the application and the database in the same region. A quick check before starting any performance benchmarking run is to determine the network latency between the client and the database by using a simple `SELECT 1` query.
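+
+For instance, run the following from the application host; the elapsed time the client reports for this trivial query is dominated by the network round trip:
+
+```sql
+SELECT 1;
+```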
+
+## Accelerated Networking
+
+Use accelerated networking for the application server if you are using an Azure virtual machine, Azure Kubernetes Service, or App Service. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the datapath, which reduces latency, jitter, and CPU utilization for the most demanding network workloads on supported VM types.
+
+## Connection Efficiency
+
+Establishing a new connection is always an expensive and time-consuming task. When an application requests a database connection, it should prioritize reusing existing idle connections rather than creating new ones. Here are some options for good connection practices:
+
+- **ProxySQL**: Use [ProxySQL](https://proxysql.com/), which provides built-in connection pooling and [load balances your workload](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/load-balance-read-replicas-using-proxysql-in-azure-database-for/ba-p/880042) to multiple read replicas on demand, without any changes in application code.
+
+- **Heimdall Data Proxy**: Alternatively, you can leverage Heimdall Data Proxy, a vendor-neutral proprietary proxy solution. It supports query caching and read/write split with replication lag detection. See also how to [accelerate MySQL performance with the Heimdall proxy](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/accelerate-mysql-performance-with-the-heimdall-proxy/ba-p/1063349).
+
+- **Persistent or long-lived connections**: If your application has short transactions or queries, typically with execution times of less than 5-10 ms, replace short connections with persistent connections. This requires only minor changes to the code, but it has a major effect in terms of improving performance in many typical application scenarios. Make sure to set the timeout or close the connection when the transaction is complete; see the timeout check after this list.
+
+- **Replica**: If you are using a replica, use [ProxySQL](https://proxysql.com/) to load balance between the primary server and the readable secondary replica server. Learn [how to set up ProxySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/scaling-an-azure-database-for-mysql-workload-running-on/ba-p/1105847).
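+
+As a quick check for the timeouts mentioned above, the server-side session timeouts can be inspected (a sketch; keep your pool's idle timeout below these values so the server doesn't drop connections the pool still considers live):
+
+```sql
+SHOW VARIABLES LIKE 'wait_timeout';        -- idle timeout for non-interactive sessions
+SHOW VARIABLES LIKE 'interactive_timeout'; -- idle timeout for interactive sessions
+```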
+
+## Data Import configurations
+
+- You can temporarily scale your instance to a higher SKU before starting a data import operation and then scale it back down when the import has completed successfully.
+- You can import your data with minimal downtime by using [Azure Database Migration Service (DMS)](https://datamigration.microsoft.com/) for online or offline migrations.
+
+## Azure Database for MySQL Memory Recommendations
+
+An Azure Database for MySQL performance best practice is to allocate enough RAM so that your working set resides almost completely in memory.
+
+- Check whether the percentage of memory being used is reaching the [limits](./concepts-pricing-tiers.md) by using the [metrics for the MySQL server](./concepts-monitoring.md).
+- Set up alerts on these numbers so that you can take prompt action as the server reaches its limits. Based on the limits defined, check whether scaling up the database SKU, either to a higher compute size or to a better pricing tier, results in a dramatic increase in performance.
+- Scale up until your performance numbers no longer improve dramatically after a scaling operation. For information on monitoring a DB instance's metrics, see [MySQL DB Metrics](./concepts-monitoring.md#metrics).
+
+## Use InnoDB Buffer Pool Warmup
+
+After an Azure Database for MySQL server restarts, the data pages residing in storage are loaded as the tables are queried, which leads to increased latency and slower performance for the first execution of each query. This may not be acceptable for latency-sensitive workloads.
+
+Utilizing InnoDB buffer pool warmup shortens the warmup period by reloading disk pages that were in the buffer pool before the restart rather than waiting for DML or SELECT operations to access corresponding rows.
+
+You can reduce the warmup period after restarting your Azure Database for MySQL server, which represents a performance advantage, by configuring [InnoDB buffer pool server parameters](https://dev.mysql.com/doc/refman/8.0/en/innodb-preload-buffer-pool.html). InnoDB saves a percentage of the most recently used pages for each buffer pool at server shutdown and restores these pages at server startup.
+
+It is also important to note that the improved performance comes at the expense of longer start-up time for the server. When this parameter is enabled, server startup and restart time is expected to increase, depending on the IOPS provisioned on the server.
+
+We recommend testing and monitoring the restart time to ensure that start-up/restart performance is acceptable, because the server is unavailable during that time. We don't recommend using this parameter with fewer than 1,000 provisioned IOPS (in other words, when the storage provisioned is less than 335 GB).
+
+To save the state of the buffer pool at server shutdown, set server parameter `innodb_buffer_pool_dump_at_shutdown` to `ON`. Similarly, set server parameter `innodb_buffer_pool_load_at_startup` to `ON` to restore the buffer pool state at server startup. You can control the impact on start-up/restart time by lowering and fine-tuning the value of server parameter `innodb_buffer_pool_dump_pct`. By default, this parameter is set to `25`.
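+
+To verify the configuration and check the outcome of the most recent dump and load, the standard InnoDB variables can be queried (a sketch):
+
+```sql
+SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_dump%';
+SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_load%';
+SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_dump_status';
+SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_load_status';
+```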
+
+> [!Note]
+> InnoDB buffer pool warmup parameters are only supported in general purpose storage servers with up to 16-TB storage. Learn more about [Azure Database for MySQL storage options here](./concepts-pricing-tiers.md#storage).
+
+## Next steps
+
+- [Best practice for server operations using Azure Database for MySQL](concept-operation-excellence-best-practices.md) <br/>
+- [Best practice for monitoring your Azure Database for MySQL](concept-monitoring-best-practices.md)<br/>
+- [Get started with Azure Database for MySQL](quickstart-create-mysql-server-database-using-azure-portal.md)<br/>
mysql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-reserved-pricing.md
+
+ Title: Prepay for compute with reserved capacity - Azure Database for MySQL
+description: Prepay for Azure Database for MySQL compute resources with reserved capacity
+Last updated: 10/06/2021
+# Prepay for Azure Database for MySQL compute resources with reserved instances
++
+Azure Database for MySQL now helps you save money by letting you prepay for compute resources, compared to pay-as-you-go prices. With Azure Database for MySQL reserved instances, you make an upfront commitment on a MySQL server for a one- or three-year period to get a significant discount on compute costs. To purchase Azure Database for MySQL reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term.
+
+## How does the instance reservation work?
+You do not need to assign the reservation to specific Azure Database for MySQL servers. Already running Azure Database for MySQL servers, and those that are newly deployed, automatically get the benefit of reserved pricing. By purchasing a reservation, you are prepaying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure Database for MySQL compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. A reservation does not cover software, networking, or storage charges associated with the MySQL database server. At the end of the reservation term, the billing benefit expires, and the Azure Database for MySQL servers are billed at the pay-as-you-go price. Reservations do not auto-renew. For pricing information, see the [Azure Database for MySQL reserved capacity offering](https://azure.microsoft.com/pricing/details/mysql/).
+
+You can buy Azure Database for MySQL reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](../../cost-management-billing/reservations/prepare-buy-reservation.md). To buy the reserved capacity:
+
+* You must be in the owner role for at least one Enterprise or individual subscription with pay-as-you-go rates.
+* For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin on the subscription.
+* For the Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Azure Database for MySQL reserved capacity.
+
+For details on how enterprise customers and pay-as-you-go customers are charged for reservation purchases, see [understand Azure reservation usage for your Enterprise enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md) and [understand Azure reservation usage for your Pay-As-You-Go subscription](../../cost-management-billing/reservations/understand-reserved-instance-usage.md).
+
+## Reservation exchanges and refunds
+
+You can exchange a reservation for another reservation of the same type. You can also exchange a reservation for Azure Database for MySQL - Single Server for one for Flexible Server. It's also possible to refund a reservation if you no longer need it. The Azure portal can be used to exchange or refund a reservation. For more information, see [Self-service exchanges and refunds for Azure Reservations](../../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md).
+
+## Reservation discount
+
+You may save up to 67% on compute costs with reserved instances. In order to find the discount for your case, please visit the [Reservation blade on the Azure portal](https://aka.ms/reservations) and check the savings per pricing tier and per region. Reserved instances help you manage your workloads, budget, and forecast better with an upfront payment for a one-year or three-year term. You can also exchange or cancel reservations as business needs change.
++
+## Determine the right database size before purchase
+
+The size of the reservation should be based on the total amount of compute used by the existing or soon-to-be-deployed servers within a specific region, using the same performance tier and hardware generation.
+
+For example, let's suppose that you are running one general purpose Gen5 32-vCore MySQL database and two memory optimized Gen5 16-vCore MySQL databases. Further, let's suppose that you plan to deploy an additional general purpose Gen5 32-vCore database server and one memory optimized Gen5 16-vCore database server within the next month, and that you know you will need these resources for at least one year. In this case, you should purchase a 64 (2x32) vCore, one-year reservation for single database general purpose Gen5 and a 48 (2x16 + 16) vCore, one-year reservation for single database memory optimized Gen5.
++
+## Buy Azure Database for MySQL reserved capacity
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Select **All services** > **Reservations**.
+3. Select **Add** and then in the Purchase reservations pane, select **Azure Database for MySQL** to purchase a new reservation for your MySQL databases.
+4. Fill in the required fields. Existing or new databases that match the attributes you select qualify for the reserved capacity discount. The actual number of your Azure Database for MySQL servers that get the discount depends on the scope and quantity selected.
++++
+The following table describes required fields.
+
+| Field | Description |
+| : | :- |
+| Subscription | The subscription used to pay for the Azure Database for MySQL reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for MySQL reserved capacity reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.
+| Scope | The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select: </br></br> **Shared**, the vCore reservation discount is applied to Azure Database for MySQL servers running in any subscriptions within your billing context. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For Pay-As-You-Go customers, the shared scope is all Pay-As-You-Go subscriptions created by the account administrator.</br></br> **Single subscription**, the vCore reservation discount is applied to Azure Database for MySQL servers in this subscription. </br></br> **Single resource group**, the reservation discount is applied to Azure Database for MySQL servers in the selected subscription and the selected resource group within that subscription.
+| Region | The Azure region that's covered by the Azure Database for MySQL reserved capacity reservation.
+| Deployment Type | The Azure Database for MySQL resource type that you want to buy the reservation for.
+| Performance Tier | The service tier for the Azure Database for MySQL servers.
+| Term | One year
+| Quantity | The amount of compute resources being purchased within the Azure Database for MySQL reserved capacity reservation. The quantity is a number of vCores in the selected Azure region and Performance tier that are being reserved and will get the billing discount. For example, if you are running or planning to run an Azure Database for MySQL servers with the total compute capacity of Gen5 16 vCores in the East US region, then you would specify quantity as 16 to maximize the benefit for all servers.
+
+## Reserved instances API support
+
+Use Azure APIs to programmatically get information for your organization about Azure service or software reservations. For example, use the APIs to:
+
+- Find reservations to buy
+- Buy a reservation
+- View purchased reservations
+- View and manage reservation access
+- Split or merge reservations
+- Change the scope of reservations
+
+For more information, see [APIs for Azure reservation automation](../../cost-management-billing/reservations/reservation-apis.md).
+
+## vCore size flexibility
+
+vCore size flexibility helps you scale up or down within a performance tier and region, without losing the reserved capacity benefit.
+
+## How to view reserved instance purchase details
+
+You can view your reserved instance purchase details via the [Reservations menu on the left side of the Azure portal](https://aka.ms/reservations). For more information, see [How a reservation discount is applied to Azure Database for MySQL](../../cost-management-billing/reservations/understand-reservation-charges-mysql.md).
+
+## Reserved instance expiration
+
+You'll receive email notifications, the first one 30 days prior to reservation expiry and another one at expiration. Once the reservation expires, deployed servers will continue to run and be billed at the pay-as-you-go rate. For more information, see [Reserved Instances for Azure Database for MySQL](../../cost-management-billing/reservations/understand-reservation-charges-mysql.md).
+
+## Need help? Contact us
+
+If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Next steps
+
+The vCore reservation discount is applied automatically to the number of Azure Database for MySQL servers that match the Azure Database for MySQL reserved capacity reservation scope and attributes. You can update the scope of the Azure Database for MySQL reserved capacity reservation through the Azure portal, PowerShell, the CLI, or the API.
+To learn how to manage the Azure Database for MySQL reserved capacity, see manage Azure Database for MySQL reserved capacity.
+
+To learn more about Azure Reservations, see the following articles:
+
+* [What are Azure Reservations](../../cost-management-billing/reservations/save-compute-costs-reservations.md)?
+* [Manage Azure Reservations](../../cost-management-billing/reservations/manage-reserved-vm-instance.md)
+* [Understand Azure Reservations discount](../../cost-management-billing/reservations/understand-reservation-charges.md)
+* [Understand reservation usage for your Pay-As-You-Go subscription](../../cost-management-billing/reservations/understand-reservation-charges-mysql.md)
+* [Understand reservation usage for your Enterprise enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
+* [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
mysql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-aks.md
+
+ Title: Connect to Azure Kubernetes Service - Azure Database for MySQL
+description: Learn about connecting Azure Kubernetes Service with Azure Database for MySQL
+Last updated: 07/14/2020
+# Best practices for Azure Kubernetes Service and Azure Database for MySQL
++
+Azure Kubernetes Service (AKS) provides a managed Kubernetes cluster you can use in Azure. Below are some options to consider when using AKS and Azure Database for MySQL together to create an application.
+
+## Create Database before creating the AKS cluster
+
+Azure Database for MySQL has two deployment options:
+
+- Single Server
+- Flexible Server
+
+Single Server supports a single availability zone, while Flexible Server supports multiple availability zones. AKS also supports enabling single or multiple availability zones. Create the database server first to see which availability zone the server is in, and then create the AKS cluster in the same availability zone. This can improve performance for the application by reducing networking latency.
+
+## Use Accelerated networking
+
+Use accelerated networking-enabled underlying VMs in your AKS cluster. When accelerated networking is enabled on a VM, there is lower latency, reduced jitter, and decreased CPU utilization on the VM. Learn more about how accelerated networking works, the supported OS versions, and supported VM instances for [Linux](../../virtual-network/create-vm-accelerated-networking-cli.md).
+
+From November 2018, AKS supports accelerated networking on those supported VM instances. Accelerated networking is enabled by default on new AKS clusters that use those VMs.
+
+You can confirm whether your AKS cluster has accelerated networking:
+
+1. Go to the Azure portal and select your AKS cluster.
+2. Select the Properties tab.
+3. Copy the name of the **Infrastructure Resource Group**.
+4. Use the portal search bar to locate and open the infrastructure resource group.
+5. Select a VM in that resource group.
+6. Go to the VM's **Networking** tab.
+7. Confirm whether **Accelerated networking** is 'Enabled.'
+
+Or through the Azure CLI using the following two commands:
+
+```azurecli
+az aks show --resource-group myResourceGroup --name myAKSCluster --query "nodeResourceGroup"
+```
+
+The output is the name of the generated resource group that AKS creates, which contains the network interface. Take the "nodeResourceGroup" name and use it in the next command. **EnableAcceleratedNetworking** will be either true or false:
+
+```azurecli
+# Replace <nodeResourceGroup> with the value returned by the previous command.
+az network nic list --resource-group <nodeResourceGroup> -o table
+```
+
+## Use Azure premium fileshare
+
+Use an [Azure premium file share](../../storage/files/storage-how-to-create-file-share.md?tabs=azure-portal) for persistent storage that can be used by one or many pods and can be dynamically or statically provisioned. Azure premium file shares give you the best performance for your application if you expect a large number of I/O operations on the file storage. To learn more, see [how to enable Azure Files](../../aks/azure-files-dynamic-pv.md).
+
+## Next steps
+
+Create an AKS cluster [using the Azure CLI](/azure/aks/learn/quick-kubernetes-deploy-cli), [using Azure PowerShell](/azure/aks/learn/quick-kubernetes-deploy-powershell), or [using the Azure portal](/azure/aks/learn/quick-kubernetes-deploy-portal).
mysql Concepts Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-audit-logs.md
+
+ Title: Audit logs - Azure Database for MySQL
+description: Describes the audit logs available in Azure Database for MySQL, and the available parameters for enabling logging levels.
+Last updated: 6/24/2020
+# Audit Logs in Azure Database for MySQL
++
+In Azure Database for MySQL, the audit log is available to users. The audit log can be used to track database-level activity and is commonly used for compliance.
+
+## Configure audit logging
+
+>[!IMPORTANT]
+> It is recommended to log only the event types and users required for your auditing purposes, to ensure that your server's performance is not heavily impacted and that the minimum amount of data is collected.
+
+By default, the audit log is disabled. To enable it, set `audit_log_enabled` to `ON`.
+
+Other parameters you can adjust include:
+
+- `audit_log_events`: controls the events to be logged. See the table below for specific audit events.
+- `audit_log_include_users`: MySQL users to be included for logging. The default value for this parameter is empty, which includes all users for logging. This parameter has higher priority than `audit_log_exclude_users`. The maximum length of the parameter is 512 characters.
+- `audit_log_exclude_users`: MySQL users to be excluded from logging. The maximum length of the parameter is 512 characters.
+
+> [!NOTE]
+> `audit_log_include_users` has higher priority over `audit_log_exclude_users`. For example, if `audit_log_include_users` = `demouser` and `audit_log_exclude_users` = `demouser`, the user will be included in the audit logs because `audit_log_include_users` has higher priority.
+
+| **Event** | **Description** |
+|||
+| `CONNECTION` | - Connection initiation (successful or unsuccessful) <br> - User reauthentication with different user/password during session <br> - Connection termination |
+| `DML_SELECT`| SELECT queries |
+| `DML_NONSELECT` | INSERT/DELETE/UPDATE queries |
+| `DML` | DML = DML_SELECT + DML_NONSELECT |
+| `DDL` | Queries like "DROP DATABASE" |
+| `DCL` | Queries like "GRANT PERMISSION" |
+| `ADMIN` | Queries like "SHOW STATUS" |
+| `GENERAL` | All in DML_SELECT, DML_NONSELECT, DML, DDL, DCL, and ADMIN |
+| `TABLE_ACCESS` | - Available for MySQL 5.7 and MySQL 8.0 <br> - Table read statements, such as SELECT or INSERT INTO ... SELECT <br> - Table delete statements, such as DELETE or TRUNCATE TABLE <br> - Table insert statements, such as INSERT or REPLACE <br> - Table update statements, such as UPDATE |
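+
+To confirm how these parameters are currently set on your server, they can be queried from a client session (a sketch; whether every audit parameter is visible this way may vary):
+
+```sql
+SHOW GLOBAL VARIABLES LIKE 'audit_log%';
+```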
+
+## Access audit logs
+
+Audit logs are integrated with Azure Monitor Diagnostic Logs. Once you've enabled audit logs on your MySQL server, you can emit them to Azure Monitor logs, Event Hubs, or Azure Storage. To learn more about how to enable diagnostic logs in the Azure portal, see the [audit log portal article](how-to-configure-audit-logs-portal.md#set-up-diagnostic-logs).
+
+>[!Note]
+>Premium Storage accounts are not supported if you are sending the logs to Azure Storage via diagnostic settings.
+
+## Diagnostic Logs Schemas
+
+The following sections describe what's output by MySQL audit logs based on the event type. Depending on the output method, the fields included and the order in which they appear may vary.
+
+### Connection
+
+| **Property** | **Description** |
+|||
+| `TenantId` | Your tenant ID |
+| `SourceSystem` | `Azure` |
+| `TimeGenerated [UTC]` | Time stamp when the log was recorded in UTC |
+| `Type` | Type of the log. Always `AzureDiagnostics` |
+| `SubscriptionId` | GUID for the subscription that the server belongs to |
+| `ResourceGroup` | Name of the resource group the server belongs to |
+| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` |
+| `ResourceType` | `Servers` |
+| `ResourceId` | Resource URI |
+| `Resource` | Name of the server |
+| `Category` | `MySqlAuditLogs` |
+| `OperationName` | `LogEvent` |
+| `LogicalServerName_s` | Name of the server |
+| `event_class_s` | `connection_log` |
+| `event_subclass_s` | `CONNECT`, `DISCONNECT`, `CHANGE USER` (only available for MySQL 5.7) |
+| `connection_id_d` | Unique connection ID generated by MySQL |
+| `host_s` | Blank |
+| `ip_s` | IP address of client connecting to MySQL |
+| `user_s` | Name of user executing the query |
+| `db_s` | Name of database connected to |
+| `\_ResourceId` | Resource URI |
+
+### General
+
+The schema below applies to the GENERAL, DML_SELECT, DML_NONSELECT, DML, DDL, DCL, and ADMIN event types.
+
+> [!NOTE]
+> For `sql_text`, the log is truncated if it exceeds 2048 characters.
+
+| **Property** | **Description** |
+|||
+| `TenantId` | Your tenant ID |
+| `SourceSystem` | `Azure` |
+| `TimeGenerated [UTC]` | Time stamp when the log was recorded in UTC |
+| `Type` | Type of the log. Always `AzureDiagnostics` |
+| `SubscriptionId` | GUID for the subscription that the server belongs to |
+| `ResourceGroup` | Name of the resource group the server belongs to |
+| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` |
+| `ResourceType` | `Servers` |
+| `ResourceId` | Resource URI |
+| `Resource` | Name of the server |
+| `Category` | `MySqlAuditLogs` |
+| `OperationName` | `LogEvent` |
+| `LogicalServerName_s` | Name of the server |
+| `event_class_s` | `general_log` |
+| `event_subclass_s` | `LOG`, `ERROR`, `RESULT` (only available for MySQL 5.6) |
+| `event_time` | Query start time in UTC timestamp |
+| `error_code_d` | Error code if query failed. `0` means no error |
+| `thread_id_d` | ID of thread that executed the query |
+| `host_s` | Blank |
+| `ip_s` | IP address of client connecting to MySQL |
+| `user_s` | Name of user executing the query |
+| `sql_text_s` | Full query text |
+| `\_ResourceId` | Resource URI |
+
+### Table access
+
+> [!NOTE]
+> Table access logs are only output for MySQL 5.7.<br>For `sql_text`, the log is truncated if it exceeds 2048 characters.
+
+| **Property** | **Description** |
+|||
+| `TenantId` | Your tenant ID |
+| `SourceSystem` | `Azure` |
+| `TimeGenerated [UTC]` | Time stamp when the log was recorded in UTC |
+| `Type` | Type of the log. Always `AzureDiagnostics` |
+| `SubscriptionId` | GUID for the subscription that the server belongs to |
+| `ResourceGroup` | Name of the resource group the server belongs to |
+| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` |
+| `ResourceType` | `Servers` |
+| `ResourceId` | Resource URI |
+| `Resource` | Name of the server |
+| `Category` | `MySqlAuditLogs` |
+| `OperationName` | `LogEvent` |
+| `LogicalServerName_s` | Name of the server |
+| `event_class_s` | `table_access_log` |
+| `event_subclass_s` | `READ`, `INSERT`, `UPDATE`, or `DELETE` |
+| `connection_id_d` | Unique connection ID generated by MySQL |
+| `db_s` | Name of database accessed |
+| `table_s` | Name of table accessed |
+| `sql_text_s` | Full query text |
+| `\_ResourceId` | Resource URI |
+
+## Analyze logs in Azure Monitor Logs
+
+Once your audit logs are piped to Azure Monitor Logs through Diagnostic Logs, you can perform further analysis of your audited events. Below are some sample queries to help you get started. Make sure to update the queries below with your server name.
+
+- List GENERAL events on a particular server
+
+ ```kusto
+ AzureDiagnostics
+ | where LogicalServerName_s == '<your server name>'
+ | where Category == 'MySqlAuditLogs' and event_class_s == "general_log"
+ | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
+ | order by TimeGenerated asc nulls last
+ ```
+
+- List CONNECTION events on a particular server
+
+ ```kusto
+ AzureDiagnostics
+ | where LogicalServerName_s == '<your server name>'
+ | where Category == 'MySqlAuditLogs' and event_class_s == "connection_log"
+ | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
+ | order by TimeGenerated asc nulls last
+ ```
+
+- Summarize audited events on a particular server
+
+ ```kusto
+ AzureDiagnostics
+ | where LogicalServerName_s == '<your server name>'
+ | where Category == 'MySqlAuditLogs'
+ | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
+ | summarize count() by event_class_s, event_subclass_s, user_s, ip_s
+ ```
+
+- Graph the audit event type distribution on a particular server
+
+ ```kusto
+ AzureDiagnostics
+ | where LogicalServerName_s == '<your server name>'
+ | where Category == 'MySqlAuditLogs'
+ | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
+ | summarize count() by LogicalServerName_s, bin(TimeGenerated, 5m)
+ | render timechart
+ ```
+
+- List audited events across all MySQL servers with Diagnostic Logs enabled for audit logs
+
+ ```kusto
+ AzureDiagnostics
+ | where Category == 'MySqlAuditLogs'
+ | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
+ | order by TimeGenerated asc nulls last
+ ```
+
+## Next steps
+
+- [How to configure audit logs in the Azure portal](how-to-configure-audit-logs-portal.md)
mysql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-azure-ad-authentication.md
+
+ Title: Active Directory authentication - Azure Database for MySQL
+description: Learn about the concepts of Azure Active Directory for authentication with Azure Database for MySQL
+Last updated: 07/23/2020
+# Use Azure Active Directory for authenticating with MySQL
++
+Microsoft Azure Active Directory (Azure AD) authentication is a mechanism of connecting to Azure Database for MySQL using identities defined in Azure AD.
+With Azure AD authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management.
+
+Benefits of using Azure AD include:
+
+- Authentication of users across Azure Services in a uniform way
+- Management of password policies and password rotation in a single place
+- Multiple forms of authentication supported by Azure Active Directory, which can eliminate the need to store passwords
+- Customers can manage database permissions using external (Azure AD) groups.
+- Azure AD authentication uses MySQL database users to authenticate identities at the database level
+- Support of token-based authentication for applications connecting to Azure Database for MySQL
+
+To configure and use Azure Active Directory authentication, use the following process:
+
+1. Create and populate Azure Active Directory with user identities as needed.
+2. Optionally associate or change the Active Directory currently associated with your Azure subscription.
+3. Create an Azure AD administrator for the Azure Database for MySQL server.
+4. Create database users in your database mapped to Azure AD identities, as sketched after this list.
+5. Connect to your database by retrieving a token for an Azure AD identity and logging in.
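+
+As a sketch of step 4, run while connected as the Azure AD administrator (the `CREATE AADUSER` syntax and the user principal name below are assumptions; verify the exact syntax in the linked configuration article):
+
+```sql
+-- Hypothetical identity; replace with a real Azure AD user or group.
+CREATE AADUSER 'user1@yourtenant.onmicrosoft.com';
+```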
+
+> [!NOTE]
+> To learn how to create and populate Azure AD, and then configure Azure AD with Azure Database for MySQL, see [Configure and sign in with Azure AD for Azure Database for MySQL](how-to-configure-sign-in-azure-ad-authentication.md).
+
+## Architecture
+
+The following high-level diagram summarizes how authentication works using Azure AD authentication with Azure Database for MySQL. The arrows indicate communication pathways.
+
+![authentication flow][1]
+
+## Administrator structure
+
+When using Azure AD authentication, there are two Administrator accounts for the MySQL server; the original MySQL administrator and the Azure AD administrator. Only the administrator based on an Azure AD account can create the first Azure AD contained database user in a user database. The Azure AD administrator login can be an Azure AD user or an Azure AD group. When the administrator is a group account, it can be used by any group member, enabling multiple Azure AD administrators for the MySQL server. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the MySQL server. Only one Azure AD administrator (a user or group) can be configured at any time.
+
+![admin structure][2]
+
+## Permissions
+
+To create new users that can authenticate with Azure AD, you must be the designated Azure AD administrator. This user is assigned by configuring the Azure AD Administrator account for a specific Azure Database for MySQL server.
+
+To create a new Azure AD database user, you must connect as the Azure AD administrator. This is demonstrated in [Configure and Login with Azure AD for Azure Database for MySQL](how-to-configure-sign-in-azure-ad-authentication.md).
+
+Any Azure AD authentication is only possible if the Azure AD admin was created for Azure Database for MySQL. If the Azure Active Directory admin was removed from the server, existing Azure Active Directory users created previously can no longer connect to the database using their Azure Active Directory credentials.
+
+## Connecting using Azure AD identities
+
+Azure Active Directory authentication supports the following methods of connecting to a database using Azure AD identities:
+
+- Azure Active Directory Password
+- Azure Active Directory Integrated
+- Azure Active Directory Universal with MFA
+- Using Active Directory Application certificates or client secrets
+- [Managed Identity](how-to-connect-with-managed-identity.md)
+
+Once you have authenticated against the Active Directory, you then retrieve a token. This token is your password for logging in.
+
+Please note that management operations, such as adding new users, are only supported for Azure AD user roles at this point.
+
+> [!NOTE]
+> For more details on how to connect with an Active Directory token, see [Configure and sign in with Azure AD for Azure Database for MySQL](how-to-configure-sign-in-azure-ad-authentication.md).
+
+## Additional considerations
+
+- Azure Active Directory authentication is only available for MySQL 5.7 and newer.
+- Only one Azure AD administrator can be configured for an Azure Database for MySQL server at any time.
+- Only an Azure AD administrator for MySQL can initially connect to the Azure Database for MySQL using an Azure Active Directory account. The Active Directory administrator can configure subsequent Azure AD database users.
+- If a user is deleted from Azure AD, that user will no longer be able to authenticate with Azure AD, and therefore it will no longer be possible to acquire an access token for that user. In this case, although the matching user will still be in the database, it will not be possible to connect to the server with that user.
+> [!NOTE]
+> Sign-in with the deleted Azure AD user can still succeed until the token expires (up to 60 minutes from token issuance). If you also remove the user from Azure Database for MySQL, this access is revoked immediately.
+- If the Azure AD admin is removed from the server, the server will no longer be associated with an Azure AD tenant, and therefore all Azure AD logins will be disabled for the server. Adding a new Azure AD admin from the same tenant will re-enable Azure AD logins.
+- Azure Database for MySQL matches access tokens to Azure Database for MySQL users by using the user's unique Azure AD user ID, as opposed to using the username. This means that if an Azure AD user is deleted in Azure AD and a new user is created with the same name, Azure Database for MySQL considers that a different user. Therefore, if a user is deleted from Azure AD and a new user with the same name is added, the new user will not be able to connect as the existing database user.
+
+## Next steps
+
+- To learn how to create and populate Azure AD, and then configure Azure AD with Azure Database for MySQL, see [Configure and sign in with Azure AD for Azure Database for MySQL](how-to-configure-sign-in-azure-ad-authentication.md).
+- For an overview of logins, and database users for Azure Database for MySQL, see [Create users in Azure Database for MySQL](how-to-create-users.md).
+
+<!--Image references-->
+
+[1]: ./media/concepts-azure-ad-authentication/authentication-flow.png
+[2]: ./media/concepts-azure-ad-authentication/admin-structure.png
mysql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-azure-advisor-recommendations.md
+
+ Title: Azure Advisor for MySQL
+description: Learn about Azure Advisor recommendations for MySQL.
+Last updated: 04/08/2021
+# Azure Advisor for MySQL
++
+Learn about how Azure Advisor is applied to Azure Database for MySQL and get answers to common questions.
+## What is Azure Advisor for MySQL?
+The Azure Advisor system uses telemetry to issue performance and reliability recommendations for your MySQL database.
+Advisor recommendations are split among our MySQL database offerings:
+* Azure Database for MySQL - Single Server
+* Azure Database for MySQL - Flexible Server
+
+Some recommendations are common to multiple product offerings, while other recommendations are based on product-specific optimizations.
+## Where can I view my recommendations?
+Recommendations are available from the **Overview** navigation sidebar in the Azure portal. A preview will appear as a banner notification, and details can be viewed in the **Notifications** section located just below the resource usage graphs.
++
+## Recommendation types
+Azure Database for MySQL prioritizes the following types of recommendations:
+* **Performance**: To improve the speed of your MySQL server. This includes CPU usage, memory pressure, connection pooling, disk utilization, and product-specific server parameters. For more information, see [Advisor Performance recommendations](../../advisor/advisor-performance-recommendations.md).
+* **Reliability**: To ensure and improve the continuity of your business-critical databases. This includes storage limit and connection limit recommendations. For more information, see [Advisor Reliability recommendations](../../advisor/advisor-high-availability-recommendations.md).
+* **Cost**: To optimize and reduce your overall Azure spending. This includes server right-sizing recommendations. For more information, see [Advisor Cost recommendations](../../advisor/advisor-cost-recommendations.md).
+
+## Understanding your recommendations
+* **Daily schedule**: For Azure MySQL databases, we check server telemetry and issue recommendations on a daily schedule. If you make a change to your server configuration, existing recommendations will remain visible until we re-examine telemetry on the following day.
+* **Performance history**: Some of our recommendations are based on performance history. These recommendations will only appear after a server has been operating with the same configuration for 7 days. This allows us to detect patterns of heavy usage (e.g. high CPU activity or high connection volume) over a sustained time period. If you provision a new server or change to a new vCore configuration, these recommendations will be paused temporarily. This prevents legacy telemetry from triggering recommendations on a newly reconfigured server. However, this also means that performance history-based recommendations may not be identified immediately.
+
+## Next steps
+For more information, see [Azure Advisor Overview](../../advisor/advisor-overview.md).
mysql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-backup.md
+
+ Title: Backup and restore - Azure Database for MySQL
+description: Learn about automatic backups and restoring your Azure Database for MySQL server.
+Last updated: 3/27/2020
+# Backup and restore in Azure Database for MySQL
++
+Azure Database for MySQL automatically creates server backups and stores them in user-configured locally redundant or geo-redundant storage. Backups can be used to restore your server to a point in time. Backup and restore are an essential part of any business continuity strategy because they protect your data from accidental corruption or deletion.
+
+## Backups
+
+Azure Database for MySQL takes backups of the data files and the transaction log. These backups allow you to restore a server to any point-in-time within your configured backup retention period. The default backup retention period is seven days. You can [optionally configure it](how-to-restore-server-portal.md#set-backup-configuration) up to 35 days. All backups are encrypted using AES 256-bit encryption.
+
+These backup files are not user-exposed and cannot be exported. These backups can only be used for restore operations in Azure Database for MySQL. You can use [mysqldump](concepts-migrate-dump-restore.md) to copy a database.
+
+The backup type and frequency depend on the backend storage for the server.
+
+### Backup type and frequency
+
+#### Basic storage servers
+
+The Basic storage is the backend storage supporting [Basic tier servers](concepts-pricing-tiers.md). Backups on Basic storage servers are snapshot-based. A full database snapshot is performed daily. There are no differential backups performed for basic storage servers and all snapshot backups are full database backups only.
+
+Transaction log backups occur every five minutes.
+
+#### General purpose storage v1 servers (supports up to 4-TB storage)
+
+General purpose storage is the backend storage supporting [General Purpose](concepts-pricing-tiers.md) and [Memory Optimized tier](concepts-pricing-tiers.md) servers. For servers with general purpose storage up to 4 TB, full backups occur once every week. Differential backups occur twice a day. Transaction log backups occur every five minutes. The backups on general purpose storage up to 4-TB storage are not snapshot-based and consume I/O bandwidth at the time of backup. For large databases (> 1 TB) on 4-TB storage, we recommend that you consider one of the following options:
+
+- Provisioning more IOPS to account for backup I/O, or
+- Alternatively, migrate to general purpose storage that supports up to 16-TB storage if the underlying storage infrastructure is available in your preferred [Azure regions](./concepts-pricing-tiers.md#storage). There is no additional cost for general purpose storage that supports up to 16-TB storage. For assistance with migration to 16-TB storage, please open a support ticket from Azure portal.
+
+#### General purpose storage v2 servers (supports up to 16-TB storage)
+
+In a subset of [Azure regions](./concepts-pricing-tiers.md#storage), all newly provisioned servers can support general purpose storage up to 16 TB. In other words, general purpose storage up to 16 TB is the default in all the [regions](concepts-pricing-tiers.md#storage) where it is supported. Backups on these 16-TB storage servers are snapshot-based. The first snapshot backup is scheduled immediately after a server is created. Snapshot backups are taken once daily. Transaction log backups occur every five minutes.
+
+For more information about Basic and General purpose storage, see the [storage documentation](./concepts-pricing-tiers.md#storage).
+
+### Backup retention
+
+Backups are retained based on the backup retention period setting on the server. You can select a retention period of 7 to 35 days. The default retention period is 7 days. You can set the retention period during server creation or later by updating the backup configuration using [Azure portal](./how-to-restore-server-portal.md#set-backup-configuration) or [Azure CLI](./how-to-restore-server-cli.md#set-backup-configuration).
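+
+For example, a minimal Azure CLI sketch to lengthen the retention period on an existing server looks like the following; the server and resource group names are placeholders:
+
+```azurecli-interactive
+# Hypothetical names: mydemoserver and myresourcegroup are placeholders.
+az mysql server update \
+  --resource-group myresourcegroup \
+  --name mydemoserver \
+  --backup-retention 14
+```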
+
+The backup retention period governs how far back in time a point-in-time restore can go, since it's based on the backups available. The backup retention period can also be treated as a recovery window from a restore perspective. All backups required to perform a point-in-time restore within the backup retention period are retained in backup storage. For example, if the backup retention period is set to 7 days, the recovery window is the last 7 days. In this scenario, all the backups required to restore the server in the last 7 days are retained. With a backup retention window of seven days:
+
+- General purpose storage v1 servers (supporting up to 4-TB storage) will retain up to 2 full database backups, all the differential backups, and transaction log backups performed since the earliest full database backup.
+- General purpose storage v2 servers (supporting up to 16-TB storage) will retain the full database snapshots and transaction log backups of the last 8 days.
+
+#### Long-term retention
+
+Long-term retention of backups beyond 35 days isn't natively supported by the service. You have the option to use mysqldump to take backups and store them for long-term retention. Our support team has published a [step-by-step article](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/automate-backups-of-your-azure-database-for-mysql-server-to/ba-p/1791157) that shows how you can achieve it.
+
+### Backup redundancy options
+
+Azure Database for MySQL provides the flexibility to choose between locally redundant or geo-redundant backup storage in the General Purpose and Memory Optimized tiers. When the backups are stored in geo-redundant backup storage, they are not only stored within the region in which your server is hosted, but are also replicated to a [paired data center](../../availability-zones/cross-region-replication-azure.md). This geo-redundancy provides better protection and the ability to restore your server in a different region in the event of a disaster. The Basic tier only offers locally redundant backup storage.
+
+> [!NOTE]
+> For the following regions - Central India, France Central, UAE North, and South Africa North - General purpose storage v2 is in Public Preview. If you create a source server on General purpose storage v2 (supporting up to 16-TB storage) in one of these regions, enabling geo-redundant backup is not supported.
+
+#### Moving from locally redundant to geo-redundant backup storage
+
+Configuring locally redundant or geo-redundant storage for backup is only allowed during server creation. Once the server is provisioned, you cannot change the backup storage redundancy option. To move your backup storage from locally redundant storage to geo-redundant storage, the only supported option is to create a new server and migrate the data using [dump and restore](concepts-migrate-dump-restore.md).
+
+### Backup storage cost
+
+Azure Database for MySQL provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any additional backup storage used is charged in GB per month. For example, if you have provisioned a server with 250 GB of storage, you have 250 GB of additional storage available for server backups at no additional charge. Storage consumed for backups more than 250 GB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/mysql/).
+
+You can use the [Backup Storage used](concepts-monitoring.md) metric in Azure Monitor available via the Azure portal to monitor the backup storage consumed by a server. The Backup Storage used metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained earlier. Heavy transactional activity on the server can cause backup storage usage to increase irrespective of the total database size. For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.
+
+The primary means of controlling the backup storage cost is by setting the appropriate backup retention period and choosing the right backup redundancy options to meet your desired recovery goals. You can select a retention period from a range of 7 to 35 days. General Purpose and Memory Optimized servers can choose to have geo-redundant storage for backups.
+
+## Restore
+
+In Azure Database for MySQL, performing a restore creates a new server from the original server's backups and restores all databases contained in the server.
+
+There are two types of restore available:
+
+- **Point-in-time restore** is available with either backup redundancy option and creates a new server in the same region as your original server utilizing the combination of full and transaction log backups.
+- **Geo-restore** is available only if you configured your server for geo-redundant storage and it allows you to restore your server to a different region utilizing the most recent backup taken.
+
+The estimated time for the recovery of the server depends on several factors:
+* The size of the databases
+* The number of transaction logs involved
+* The amount of activity that needs to be replayed to recover to the restore point
+* The network bandwidth if the restore is to a different region
+* The number of concurrent restore requests being processed in the target region
+* The presence of primary keys in the tables in the database. For faster recovery, consider adding a primary key to all the tables in your database. To check whether your tables have a primary key, you can use the following query:
+```sql
+select tab.table_schema as database_name, tab.table_name
+from information_schema.tables tab
+left join information_schema.table_constraints tco
+    on tab.table_schema = tco.table_schema
+    and tab.table_name = tco.table_name
+    and tco.constraint_type = 'PRIMARY KEY'
+where tco.constraint_type is null
+    and tab.table_schema not in ('mysql', 'information_schema', 'performance_schema', 'sys')
+    and tab.table_type = 'BASE TABLE'
+order by tab.table_schema, tab.table_name;
+```
+For a large or very active database, the restore might take several hours. If there is a prolonged outage in a region, it's possible that a high number of geo-restore requests will be initiated for disaster recovery. When there are many requests, the recovery time for individual databases can increase. Most database restores finish in less than 12 hours.
+
+> [!IMPORTANT]
+> Deleted servers can be restored only within **five days** of deletion, after which the backups are deleted. The database backup can be accessed and restored only from the Azure subscription hosting the server. To restore a dropped server, refer to the [documented steps](how-to-restore-dropped-server.md). To protect server resources from accidental deletion or unexpected changes after deployment, administrators can leverage [management locks](../../azure-resource-manager/management/lock-resources.md).
+
+### Point-in-time restore
+
+Independent of your backup redundancy option, you can perform a restore to any point in time within your backup retention period. A new server is created in the same Azure region as the original server. It is created with the original server's configuration for the pricing tier, compute generation, number of vCores, storage size, backup retention period, and backup redundancy option.
+
+> [!NOTE]
+> There are two server parameters that are reset to default values (and are not copied over from the primary server) after the restore operation:
+>
+> - time_zone - This value is set to the DEFAULT value **SYSTEM**
+> - event_scheduler - The event_scheduler is set to **OFF** on the restored server
+>
+> You will need to set these values again by reconfiguring the [server parameters](how-to-server-parameters.md)
+
+Point-in-time restore is useful in multiple scenarios. For example, when a user accidentally deletes data, drops an important table or database, or if an application accidentally overwrites good data with bad data due to an application defect.
+
+You may need to wait for the next transaction log backup to be taken before you can restore to a point in time within the last five minutes.
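+
+As an illustrative sketch, a point-in-time restore with the Azure CLI looks like the following; the server names, resource group, and timestamp are placeholders:
+
+```azurecli-interactive
+# Hypothetical names and timestamp; the restore target must be a new server name.
+az mysql server restore \
+  --resource-group myresourcegroup \
+  --name mydemoserver-restored \
+  --source-server mydemoserver \
+  --restore-point-in-time "2020-03-20T13:10:00Z"
+```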
+
+### Geo-restore
+
+You can restore a server to another Azure region where the service is available if you have configured your server for geo-redundant backups.
+- General purpose storage v1 servers (supporting up to 4-TB storage) can be restored to the geo-paired region, or to any Azure region that supports the Azure Database for MySQL Single Server service.
+- General purpose storage v2 servers (supporting up to 16-TB storage) can only be restored to Azure regions that support the General purpose storage v2 server infrastructure.
+
+Review [Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md#storage) for the list of supported regions.
+
+Geo-restore is the default recovery option when your server is unavailable because of an incident in the region where the server is hosted. If a large-scale incident in a region results in unavailability of your database application, you can restore a server from the geo-redundant backups to a server in any other region. Geo-restore utilizes the most recent backup of the server. There is a delay between when a backup is taken and when it is replicated to a different region. This delay can be up to an hour, so if a disaster occurs, there can be up to one hour of data loss.
+
+> [!IMPORTANT]
+>If a geo-restore is performed for a newly created server, the initial backup synchronization may take more than 24 hours depending on the data size, because the initial full snapshot backup copy takes much longer. Subsequent snapshot backups are incremental copies, so restores are faster 24 hours after server creation. If you're evaluating geo-restores to define your RTO, we recommend that you wait **at least 24 hours** after server creation before evaluating geo-restore, for better estimates.
+
+During geo-restore, the server configurations that can be changed include compute generation, vCore, backup retention period, and backup redundancy options. Changing pricing tier (Basic, General Purpose, or Memory Optimized) or storage size during geo-restore is not supported.
+
+The estimated time of recovery depends on several factors including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time. The recovery time is usually less than 12 hours.
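+
+As an illustrative sketch (server names, resource group, and target region are placeholders), a geo-restore can be initiated with the Azure CLI:
+
+```azurecli-interactive
+# Hypothetical names: restores mydemoserver's geo-redundant backup into West US 2.
+az mysql server georestore \
+  --resource-group myresourcegroup \
+  --name mydemoserver-georestored \
+  --source-server mydemoserver \
+  --location westus2
+```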
+
+### Perform post-restore tasks
+
+After a restore from either recovery mechanism, you should perform the following tasks to get your users and applications back up and running:
+
+- If the new server is meant to replace the original server, redirect clients and client applications to the new server
+- Ensure appropriate VNet rules are in place for users to connect. These rules are not copied over from the original server.
+- Ensure appropriate logins and database level permissions are in place
+- Configure alerts, as appropriate
+
+## Next steps
+
+- To learn more about business continuity, see the [business continuity overview](concepts-business-continuity.md).
+- To restore to a point-in-time using the Azure portal, see [restore server to a point-in-time using the Azure portal](how-to-restore-server-portal.md).
+- To restore to a point-in-time using Azure CLI, see [restore server to a point-in-time using CLI](how-to-restore-server-cli.md).
mysql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-business-continuity.md
+
+ Title: Business continuity - Azure Database for MySQL
+description: Learn about business continuity (point-in-time restore, data center outage, geo-restore) when using Azure Database for MySQL service.
+++++ Last updated : 7/7/2020++
+# Overview of business continuity with Azure Database for MySQL - Single Server
++
+This article describes the capabilities that Azure Database for MySQL provides for business continuity and disaster recovery. Learn about options for recovering from disruptive events that could cause data loss or cause your database and application to become unavailable. Learn what to do when a user or application error affects data integrity, an Azure region has an outage, or your application requires maintenance.
+
+## Features that you can use to provide business continuity
+
+As you develop your business continuity plan, you need to understand the maximum acceptable time before the application fully recovers after the disruptive event - this is your Recovery Time Objective (RTO). You also need to understand the maximum amount of recent data updates (time interval) the application can tolerate losing when recovering after the disruptive event - this is your Recovery Point Objective (RPO).
+
+Azure Database for MySQL Single Server provides business continuity and disaster recovery features that include geo-redundant backups with the ability to initiate geo-restore, and deploying read replicas in a different region. Each has different characteristics for the recovery time and the potential data loss. With the [Geo-restore](concepts-backup.md) feature, a new server is created using the backup data that is replicated from another region. The overall time it takes to restore and recover depends on the size of the database and the amount of logs to recover. The overall time to establish the server varies from a few minutes to a few hours. With [read replicas](concepts-read-replicas.md), transaction logs from the primary are asynchronously streamed to the replica. In the event of a primary database outage due to a zone-level or a region-level fault, failing over to the replica provides a shorter RTO and reduced data loss.
+
+> [!NOTE]
+> The lag between the primary and the replica depends on the latency between the sites, the amount of data to be transmitted and most importantly on the write workload of the primary server. Heavy write workloads can generate significant lag.
+>
+> Because of the asynchronous nature of replication used for read replicas, they **should not** be considered a High Availability (HA) solution, since higher lags can mean higher RTO and RPO. Read replicas can act as an HA alternative only for workloads where the lag remains small through the peak and off-peak times of the workload. Otherwise, read replicas are intended for true read scale for read-heavy workloads and for Disaster Recovery (DR) scenarios.
+
+The following table compares RTO and RPO in a **typical workload** scenario:
+
+| **Capability** | **Basic** | **General Purpose** | **Memory optimized** |
+| :: | :-: | :--: | :: |
+| Point in Time Restore from backup | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min| Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min |
+| Geo-restore from geo-replicated backups | Not supported | RTO - Varies <br/>RPO < 1 h | RTO - Varies <br/>RPO < 1 h |
+| Read replicas | RTO - Minutes* <br/>RPO < 5 min* | RTO - Minutes* <br/>RPO < 5 min*| RTO - Minutes* <br/>RPO < 5 min*|
+
+ \* RTO and RPO **can be much higher** in some cases depending on various factors including latency between sites, the amount of data to be transmitted, and importantly primary database write workload.
+
+## Recover a server after a user or application error
+
+You can use the service's backups to recover a server from various disruptive events. A user may accidentally delete some data, inadvertently drop an important table, or even drop an entire database. An application might accidentally overwrite good data with bad data due to an application defect, and so on.
+
+You can perform a point-in-time-restore to create a copy of your server to a known good point in time. This point in time must be within the backup retention period you have configured for your server. After the data is restored to the new server, you can either replace the original server with the newly restored server or copy the needed data from the restored server into the original server.
+
+> [!IMPORTANT]
+> Deleted servers can be restored only within **five days** of deletion, after which the backups are deleted. The database backup can be accessed and restored only from the Azure subscription hosting the server. To restore a dropped server, refer to the [documented steps](how-to-restore-dropped-server.md). To protect server resources from accidental deletion or unexpected changes after deployment, administrators can leverage [management locks](../../azure-resource-manager/management/lock-resources.md).
+
+## Recover from an Azure regional data center outage
+
+Although rare, an Azure data center can have an outage. When an outage occurs, it causes a business disruption that might only last a few minutes, but could last for hours.
+
+One option is to wait for your server to come back online when the data center outage is over. This works for applications that can afford to have the server offline for some period of time, for example a development environment. When a data center has an outage, you don't know how long the outage might last, so this option only works if you don't need your server for a while.
+
+## Geo-restore
+
+The geo-restore feature restores the server using geo-redundant backups. The backups are hosted in your server's [paired region](../../availability-zones/cross-region-replication-azure.md). These backups are accessible even when the region your server is hosted in is offline. You can restore from these backups to any other region and bring your server back online. Learn more about geo-restore from the [backup and restore concepts article](concepts-backup.md).
+
+> [!IMPORTANT]
+> Geo-restore is only possible if you provisioned the server with geo-redundant backup storage. If you wish to switch from locally redundant to geo-redundant backups for an existing server, you must take a dump using mysqldump of your existing server and restore it to a newly created server configured with geo-redundant backups.
+
+## Cross-region read replicas
+
+You can use cross-region read replicas to enhance your business continuity and disaster recovery planning. Read replicas are updated asynchronously using MySQL's binary log replication technology. Learn more about read replicas, available regions, and how to fail over from the [read replicas concepts article](concepts-read-replicas.md).
+
+## FAQ
+
+### Where does Azure Database for MySQL store customer data?
+By default, Azure Database for MySQL doesn't move or store customer data out of the region it is deployed in. However, customers can optionally choose to enable [geo-redundant backups](concepts-backup.md#backup-redundancy-options) or create a [cross-region read replica](concepts-read-replicas.md#cross-region-replication) for storing data in another region.
+
+## Next steps
+
+- Learn more about the [automated backups in Azure Database for MySQL](concepts-backup.md).
+- Learn how to restore using [the Azure portal](how-to-restore-server-portal.md) or [the Azure CLI](how-to-restore-server-cli.md).
+- Learn about [read replicas in Azure Database for MySQL](concepts-read-replicas.md).
mysql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-certificate-rotation.md
+
+ Title: Certificate rotation for Azure Database for MySQL
+description: Learn about the upcoming root certificate changes that will affect Azure Database for MySQL
+++++ Last updated : 04/08/2021++
+# Understanding the root CA change for Azure Database for MySQL Single Server
++
+Azure Database for MySQL Single Server successfully completed the root certificate change on **February 15, 2021 (02/15/2021)** as part of standard maintenance and security best practices. This article gives you more details about the changes, the resources affected, and the steps needed to ensure that your application maintains connectivity to your database server.
+
+> [!NOTE]
+> This article applies to [Azure Database for MySQL - Single Server](single-server-overview.md) ONLY. For [Azure Database for MySQL - Flexible Server](../flexible-server/overview.md), the certificate needed to communicate over SSL is [DigiCert Global Root CA](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem)
+>
+> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+>
+
+#### Why is a root certificate update required?
+
+Azure Database for MySQL users can only use the predefined certificate to connect to their MySQL server, which is located [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem). However, the [Certificate Authority (CA) Browser Forum](https://cabforum.org/) recently published reports that multiple certificates issued by CA vendors are non-compliant.
+
+Per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs, and signed by CA certificates from those compliant CAs. Since Azure Database for MySQL used one of these non-compliant certificates, we needed to rotate the certificate to the compliant version to minimize the potential threat to your MySQL servers.
+
+The new certificate is rolled out and in effect as of February 15, 2021 (02/15/2021).
+
+#### What change was performed on February 15, 2021 (02/15/2021)?
+
+On February 15, 2021, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) was replaced with a **compliant version** of the same [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) to ensure existing customers don't need to change anything and there's no impact to their connections to the server. During this change, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) was **not replaced** with [DigiCertGlobalRootG2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and that change is deferred to allow more time for customers to make the change.
+
+#### Do I need to make any changes on my client to maintain connectivity?
+
+No change is required on client side. If you followed our previous recommendation below, you can continue to connect as long as **BaltimoreCyberTrustRoot certificate is not removed** from the combined CA certificate. **To maintain connectivity, we recommend that you retain the BaltimoreCyberTrustRoot in your combined CA certificate until further notice.**
+
+###### Previous recommendation
+
+To avoid interruption of your application's availability as a result of certificates being unexpectedly revoked, or to update a certificate that has been revoked, use the following steps. The idea is to create a new *.pem* file that combines the current certificate and the new one, so that during SSL certificate validation, either of the allowed values can be used. Refer to the following steps:
+
+1. Download BaltimoreCyberTrustRoot & DigiCertGlobalRootG2 Root CA from the following links:
+
+ * [https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem)
+ * [https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem)
+
+2. Generate a combined CA certificate store in which both the **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** certificates are included.
+
+ * For Java (MySQL Connector/J) users, execute:
+
+ ```console
+ keytool -importcert -alias MySQLServerCACert -file D:\BaltimoreCyberTrustRoot.crt.pem -keystore truststore -storepass password -noprompt
+ ```
+
+ ```console
+ keytool -importcert -alias MySQLServerCACert2 -file D:\DigiCertGlobalRootG2.crt.pem -keystore truststore -storepass password -noprompt
+ ```
+
+ Then replace the original keystore file with the newly generated one:
+
+ * System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file");
+ * System.setProperty("javax.net.ssl.trustStorePassword","password");
+
+ * For .NET (MySQL Connector/NET, MySQLConnector) users, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in Windows Certificate Store, Trusted Root Certification Authorities. If any certificates don't exist, import the missing certificate.
+
+ :::image type="content" source="media/overview/netconnecter-cert.png" alt-text="Azure Database for MySQL .NET cert diagram":::
+
+ * For .NET users on Linux using SSL_CERT_DIR, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in the directory indicated by SSL_CERT_DIR. If any certificates don't exist, create the missing certificate file.
+
+ * For other (MySQL Client/MySQL Workbench/C/C++/Go/Python/Ruby/PHP/NodeJS/Perl/Swift) users, you can merge two CA certificate files into the following format (see the example after these steps):
+
+ ```
+ -----BEGIN CERTIFICATE-----
+ (Root CA1: BaltimoreCyberTrustRoot.crt.pem)
+ -----END CERTIFICATE-----
+ -----BEGIN CERTIFICATE-----
+ (Root CA2: DigiCertGlobalRootG2.crt.pem)
+ -----END CERTIFICATE-----
+ ```
+
+3. Replace the original root CA pem file with the combined root CA file and restart your application/client.
+
+ In the future, after the new certificate is deployed on the server side, you can change your CA pem file to DigiCertGlobalRootG2.crt.pem.
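+
+As a concrete example of the merge in step 2, on Linux or macOS you can concatenate the two downloaded PEM files with `cat`; the combined file name is just a placeholder:
+
+```console
+cat BaltimoreCyberTrustRoot.crt.pem DigiCertGlobalRootG2.crt.pem > combined-ca.crt.pem
+```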
+
+> [!NOTE]
+> Please don't drop or alter **Baltimore certificate** until the cert change is made. We'll send a communication after the change is done, and then it will be safe to drop the **Baltimore certificate**.
+
+#### Why wasn't the BaltimoreCyberTrustRoot certificate replaced with DigiCertGlobalRootG2 during the change on February 15, 2021?
+
+We evaluated the customer readiness for this change and realized that many customers were looking for extra lead time to manage this change. To provide more lead time to customers for readiness, we decided to defer the certificate change to DigiCertGlobalRootG2 for at least a year, providing sufficient lead time to the customers and end users.
+
+Our recommendation to users is to follow the aforementioned steps to create a combined certificate and connect to your server, but don't remove the BaltimoreCyberTrustRoot certificate until we send a communication to remove it.
+
+#### What if we removed the BaltimoreCyberTrustRoot certificate?
+
+You'll start to encounter connectivity errors while connecting to your Azure Database for MySQL server. You'll need to [configure SSL](how-to-configure-ssl.md) with the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate again to maintain connectivity.
+
+## Frequently asked questions
+
+#### If I'm not using SSL/TLS, do I still need to update the root CA?
+
+ No actions are required if you aren't using SSL/TLS.
+
+#### If I'm using SSL/TLS, do I need to restart my database server to update the root CA?
+
+No, you don't need to restart the database server to start using the new certificate. The root certificate update is a client-side change, and the incoming client connections need to use the new certificate to ensure that they can connect to the database server.
+
+#### How do I know if I'm using SSL/TLS with root certificate verification?
+
+You can identify whether your connections verify the root certificate by reviewing your connection string.
+
+* If your connection string specifies `ssl-mode=VERIFY_CA` or `ssl-mode=VERIFY_IDENTITY`, you need to update the certificate.
+* If your connection string specifies `ssl-mode=DISABLED`, `ssl-mode=PREFERRED`, or `ssl-mode=REQUIRED`, you don't need to update certificates.
+* If your connection string doesn't specify an SSL mode, you don't need to update certificates.
+
+If you're using a client that abstracts the connection string away, review the client's documentation to understand whether it verifies certificates.
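+
+For example, with the `mysql` command-line client, a connection that verifies the server certificate against a combined CA file looks like the following sketch; the server name, user name, and file name are placeholders:
+
+```console
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
+  --ssl-mode=VERIFY_CA --ssl-ca=combined-ca.crt.pem
+```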
+
+#### What is the impact of using App Service with Azure Database for MySQL?
+
+For Azure App Services connecting to Azure Database for MySQL, there are two possible scenarios depending on how you're using SSL with your application.
+
+* This new certificate has been added to App Service at the platform level. If you're using the SSL certificates included on the App Service platform in your application, then no action is needed. This is the most common scenario.
+* If you're explicitly including the path to the SSL cert file in your code, you need to download the new cert and produce a combined certificate as mentioned above, then use that certificate file. A good example of this scenario is when you use custom containers in App Service, as shared in the [App Service documentation](../../app-service/tutorial-multi-container-app.md#configure-database-variables-in-wordpress). This is an uncommon scenario, but we have seen some users doing this.
+
+#### What is the impact of using Azure Kubernetes Services (AKS) with Azure Database for MySQL?
+
+If you're trying to connect to Azure Database for MySQL from Azure Kubernetes Service (AKS), it's similar to accessing it from a dedicated customer host environment. Refer to the steps [here](../../aks/ingress-own-tls.md).
+
+#### What is the impact of using Azure Data Factory to connect to Azure Database for MySQL?
+
+For a connector using Azure Integration Runtime, the connector uses certificates in the Windows Certificate Store in the Azure-hosted environment. These certificates are already compatible with the newly applied certificates, so no action is needed.
+
+For a connector using Self-hosted Integration Runtime where you explicitly include the path to SSL cert file in your connection string, you'll need to download the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and update the connection string to use it.
+
+#### Do I need to plan a database server maintenance downtime for this change?
+
+No. Since the change is only on the client side to connect to the database server, there's no maintenance downtime needed for the database server for this change.
+
+#### If I create a new server after February 15, 2021 (02/15/2021), will I be impacted?
+
+For servers created after February 15, 2021 (02/15/2021), you will continue to use the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) for your applications to connect using SSL.
+
+#### How often does Microsoft update their certificates or what is the expiry policy?
+
+The certificates used by Azure Database for MySQL are provided by trusted Certificate Authorities (CAs), so support for these certificates is tied to their support by the CA. The [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate is scheduled to expire in 2025, so Microsoft will need to perform a certificate change before the expiry. Also, if there are unforeseen bugs in these predefined certificates, Microsoft will need to rotate the certificate as soon as possible, similar to the change performed on February 15, 2021, to ensure the service is secure and compliant at all times.
+
+#### If I'm using read replicas, do I need to perform this update only on source server or the read replicas?
+
+Since this update is a client-side change, if clients are used to read data from the replica server, you'll need to apply the changes to those clients as well.
+
+#### If I'm using Data-in replication, do I need to perform any action?
+
+If you're using [Data-in replication](concepts-data-in-replication.md) to connect to Azure Database for MySQL, there are two things to consider:
+
+* If the data replication is from a virtual machine (on-premises or an Azure virtual machine) to Azure Database for MySQL, you need to check whether SSL is being used to create the replica. Run **SHOW SLAVE STATUS** and check the following settings:
+
+ ```console
+ Master_SSL_Allowed : Yes
+ Master_SSL_CA_File : ~\azure_mysqlservice.pem
+ Master_SSL_CA_Path :
+ Master_SSL_Cert : ~\azure_mysqlclient_cert.pem
+ Master_SSL_Cipher :
+ Master_SSL_Key : ~\azure_mysqlclient_key.pem
+ ```
+
+ If you see that a certificate is provided for the CA_file, SSL_Cert, and SSL_Key, you'll need to update the file by adding the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and creating a combined certificate file.
+
+* If the data-replication is between two Azure Database for MySQL servers, then you'll need to reset the replica by executing **CALL mysql.az_replication_change_master** and provide the new dual root certificate as the last parameter [master_ssl_ca](how-to-data-in-replication.md#link-source-and-replica-servers-to-start-data-in-replication).
+
+#### Is there a server-side query to determine whether SSL is being used?
+
+To verify whether you're using an SSL connection to connect to the server, see [SSL verification](how-to-configure-ssl.md#step-4-verify-the-ssl-connection).
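+
+As a quick client-side sketch, you can also run the following from a shell; a non-empty cipher value indicates the session is using SSL (the server and user names are placeholders):
+
+```console
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
+  -e "SHOW STATUS LIKE 'Ssl_cipher';"
+```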
+
+#### Is there an action needed if I already have the DigiCertGlobalRootG2 in my certificate file?
+
+No. There's no action needed if your certificate file already has the **DigiCertGlobalRootG2**.
+
+#### What if I have further questions?
+
+For questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforMySQL@service.microsoft.com). If you have a support plan and you need technical help, [contact us](mailto:AzureDatabaseforMySQL@service.microsoft.com).
mysql Concepts Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-compatibility.md
+
+ Title: Driver and tools compatibility - Azure Database for MySQL
+description: This article describes the MySQL drivers and management tools that are compatible with Azure Database for MySQL.
+++++ Last updated : 11/4/2021+
+# MySQL drivers and management tools compatible with Azure Database for MySQL
++
+This article describes the drivers and management tools that are compatible with Azure Database for MySQL Single Server.
+
+> [!NOTE]
+> This article is only applicable to Azure Database for MySQL Single Server to ensure drivers are compatible with [connectivity architecture](concepts-connectivity-architecture.md) of Single Server service. [Azure Database for MySQL Flexible Server](../flexible-server/overview.md) is compatible with all the drivers and tools supported and compatible with MySQL community edition.
+
+## MySQL Drivers
+Azure Database for MySQL uses the world's most popular community edition of the MySQL database. As such, it's compatible with a wide variety of programming languages and drivers. The goal is to support the three most recent versions of MySQL drivers, and efforts with authors from the open-source community to constantly improve the functionality and usability of MySQL drivers continue. A list of drivers that have been tested and found to be compatible with Azure Database for MySQL 5.6 and 5.7 is provided in the following table:
+
+| **Programming Language** | **Driver** | **Links** | **Compatible Versions** | **Incompatible Versions** | **Notes** |
+| :-- | : | :-- | :- | : | :-- |
+| PHP | mysqli, pdo_mysql, mysqlnd | https://secure.php.net/downloads.php | 5.5, 5.6, 7.x | 5.3 | For PHP 7.0 connection with SSL MySQLi, add MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT in the connection string. <br> ```mysqli_real_connect($conn, $host, $username, $password, $db_name, 3306, NULL, MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT);```<br> PDO set: ```PDO::MYSQL_ATTR_SSL_VERIFY_SERVER_CERT``` option to false.|
+| .NET | Async MySQL Connector for .NET | https://github.com/mysql-net/MySqlConnector <br> [Installation package from NuGet](https://www.nuget.org/packages/MySqlConnector/) | 0.27 and after | 0.26.5 and before | |
+| .NET | MySQL Connector/NET | https://github.com/mysql/mysql-connector-net | 6.6.3, 7.0, 8.0 | | An encoding bug may cause connections to fail on some non-UTF8 Windows systems. |
+| Node.js | mysqljs | https://github.com/mysqljs/mysql/ <br> Installation package from NPM:<br> Run `npm install mysql` from NPM | 2.15 | 2.14.1 and before | |
+| Node.js | node-mysql2 | https://github.com/sidorares/node-mysql2 | 1.3.4+ | | |
+| Go | Go MySQL Driver | https://github.com/go-sql-driver/mysql/releases | 1.3, 1.4 | 1.2 and before | Use `allowNativePasswords=true` in the connection string for version 1.3. Version 1.4 contains a fix and `allowNativePasswords=true` is no longer required. |
+| Python | MySQL Connector/Python | https://pypi.python.org/pypi/mysql-connector-python | 1.2.3, 2.0, 2.1, 2.2, use 8.0.16+ with MySQL 8.0 | 1.2.2 and before | |
+| Python | PyMySQL | https://pypi.org/project/PyMySQL/ | 0.7.11, 0.8.0, 0.8.1, 0.9.3+ | 0.9.0 - 0.9.2 (regression in web2py) | |
+| Java | MariaDB Connector/J | https://downloads.mariadb.org/connector-java/ | 2.1, 2.0, 1.6 | 1.5.5 and before | |
+| Java | MySQL Connector/J | https://github.com/mysql/mysql-connector-j | 5.1.21+, use 8.0.17+ with MySQL 8.0 | 5.1.20 and below | |
+| C | MySQL Connector/C (libmysqlclient) | https://dev.mysql.com/doc/c-api/5.7/en/c-api-implementations.html | 6.0.2+ | | |
+| C | MySQL Connector/ODBC (myodbc) | https://github.com/mysql/mysql-connector-odbc | 3.51.29+ | | |
+| C++ | MySQL Connector/C++ | https://github.com/mysql/mysql-connector-cpp | 1.1.9+ | 1.1.3 and below | |
+| C++ | MySQL++| https://github.com/tangentsoft/mysqlpp | 3.2.3+ | | |
+| Ruby | mysql2 | https://github.com/brianmario/mysql2 | 0.4.10+ | | |
+| R | RMySQL | https://github.com/rstats-db/RMySQL | 0.10.16+ | | |
+| Swift | mysql-swift | https://github.com/novi/mysql-swift | 0.7.2+ | | |
+| Swift | vapor/mysql | https://github.com/vapor/mysql-kit | 2.0.1+ | | |
+
+## Management Tools
+The compatibility advantage extends to database management tools as well. Your existing tools should continue to work with Azure Database for MySQL, as long as the database manipulation operates within the confines of user permissions. Three common database management tools that have been tested and found to be compatible with Azure Database for MySQL 5.6 and 5.7 are listed in the following table:
+
+| | **MySQL Workbench 6.x and up** | **Navicat 12** | **PHPMyAdmin 4.x and up** | **dbForge Studio for MySQL 9.0** |
+| :- | :-- | :- | :-| :- |
+| **Create, Update, Read, Write, Delete** | X | X | X | X |
+| **SSL Connection** | X | X | X | X |
+| **SQL Query Auto Completion** | X | X | | X |
+| **Import and Export Data** | X | X | X | X |
+| **Export to Multiple Formats** | X | X | X | X |
+| **Backup and Restore** | | X | | X |
+| **Display Server Parameters** | X | X | X | X |
+| **Display Client Connections** | X | X | X | X |
+
+## Next steps
+
+- [Troubleshoot connection issues to Azure Database for MySQL](how-to-troubleshoot-common-connection-issues.md)
mysql Concepts Connection Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connection-libraries.md
+
+ Title: Connection libraries - Azure Database for MySQL
+description: This article lists each library or driver that client programs can use when connecting to Azure Database for MySQL.
+++++ Last updated : 8/3/2020++
+# Connection libraries for Azure Database for MySQL
++
+This article lists each library or driver that client programs can use when connecting to Azure Database for MySQL.
+
+## Client interfaces
+MySQL offers standard database driver connectivity for using MySQL with applications and tools that are compatible with industry standards ODBC and JDBC. Any system that works with ODBC or JDBC can use MySQL.
+
+| **Language** | **Platform** | **Additional Resource** | **Download** |
+| :-- | :| :--| :|
+| PHP | Windows, Linux | [MySQL native driver for PHP - mysqlnd](https://dev.mysql.com/downloads/connector/php-mysqlnd/) | [Download](https://secure.php.net/downloads.php) |
+| ODBC | Windows, Linux, macOS X, and Unix platforms | [MySQL Connector/ODBC Developer Guide](https://dev.mysql.com/doc/connector-odbc/en/) | [Download](https://dev.mysql.com/downloads/connector/odbc/) |
+| ADO.NET | Windows | [MySQL Connector/Net Developer Guide](https://dev.mysql.com/doc/connector-net/en/) | [Download](https://dev.mysql.com/downloads/connector/net/) |
+| JDBC | Platform independent | [MySQL Connector/J 5.1 Developer Guide](https://dev.mysql.com/doc/connector-j/5.1/en/) | [Download](https://dev.mysql.com/downloads/connector/j/) |
+| Node.js | Windows, Linux, macOS X | [sidorares/node-mysql2](https://github.com/sidorares/node-mysql2/tree/master/documentation) | [Download](https://github.com/sidorares/node-mysql2) |
+| Python | Windows, Linux, macOS X | [MySQL Connector/Python Developer Guide](https://dev.mysql.com/doc/connector-python/en/) | [Download](https://dev.mysql.com/downloads/connector/python/) |
| C++ | Windows, Linux, macOS X | [MySQL Connector/C++ Developer Guide](https://dev.mysql.com/doc/connector-cpp/en/) | [Download](https://dev.mysql.com/downloads/connector/cpp/) |
| C | Windows, Linux, macOS X | [MySQL Connector/C Developer Guide](https://dev.mysql.com/doc/c-api/8.0/en/) | [Download](https://dev.mysql.com/downloads/connector/c/) |
+| Perl | Windows, Linux, macOS X, and Unix platforms | [DBD::MySQL](https://metacpan.org/pod/DBD::mysql) | [Download](https://metacpan.org/pod/DBD::mysql) |
++
+## Next steps
+Read these quickstarts on how to connect to and query Azure Database for MySQL by using your language of choice:
+
+- [PHP](./connect-php.md)
+- [Java](./connect-java.md)
+- [.NET (C#)](./connect-csharp.md)
+- [Python](./connect-python.md)
+- [Node.JS](./connect-nodejs.md)
+- [Ruby](./connect-ruby.md)
+- [C++](connect-cpp.md)
+- [Go](./connect-go.md)
mysql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity-architecture.md
+
+ Title: Connectivity architecture - Azure Database for MySQL
+description: Describes the connectivity architecture for your Azure Database for MySQL server.
+++++ Last updated : 10/15/2021++
+# Connectivity architecture in Azure Database for MySQL
++
+This article explains the Azure Database for MySQL connectivity architecture and how the traffic is directed to your Azure Database for MySQL instance from clients both within and outside Azure.
+
+## Connectivity architecture
+Connection to your Azure Database for MySQL is established through a gateway that is responsible for routing incoming connections to the physical location of your server in our clusters. The following diagram illustrates the traffic flow.
++
+As a client connects to the database, the connection string to the server resolves to the gateway IP address. The gateway listens on the IP address on port 3306. Inside the database cluster, traffic is forwarded to the appropriate Azure Database for MySQL server. Therefore, in order to connect to your server, such as from corporate networks, it's necessary to open up the **client-side firewall to allow outbound traffic to be able to reach our gateways**. Below you can find a complete list of the IP addresses used by our gateways per region.
+
+## Azure Database for MySQL gateway IP addresses
+
+The gateway service is hosted on a group of stateless compute nodes sitting behind an IP address, which your client would reach first when trying to connect to an Azure Database for MySQL server.
+
+As part of ongoing service maintenance, we'll periodically refresh the compute hardware hosting the gateways to ensure we provide the most secure and performant experience. When the gateway hardware is refreshed, a new ring of compute nodes is built out first. This new ring serves the traffic for all the newly created Azure Database for MySQL servers, and it has a different IP address from older gateway rings in the same region to differentiate the traffic. Once the new ring is fully functional, the older gateway hardware serving existing servers is planned for decommissioning. Before decommissioning gateway hardware, customers running their servers and connecting to older gateway rings are notified via email and in the Azure portal three months in advance. The decommissioning of gateways can impact the connectivity to your servers if:
+
+* You hard-code the gateway IP addresses in the connection string of your application. This is **not recommended**. You should use the fully qualified domain name (FQDN) of your server, in the format `<servername>.mysql.database.azure.com`, in the connection string for your application.
+* You don't update the newer gateway IP addresses in the client-side firewall to allow outbound traffic to reach our new gateway rings.
+
+The following table lists the gateway IP addresses of the Azure Database for MySQL gateway for all data regions and is kept up to date. The columns represent the following:
+
+* **Gateway IP addresses:** This column lists the current IP addresses of the gateways hosted on the latest generation of hardware. If you're provisioning a new server, we recommend that you open the client-side firewall to allow outbound traffic for the IP addresses listed in this column.
+* **Gateway IP addresses (decommissioning):** This column lists the IP addresses of the gateways hosted on an older generation of hardware that is currently being decommissioned. If you're provisioning a new server, you can ignore these IP addresses. If you have an existing server, continue to retain the outbound firewall rule for these IP addresses, as we haven't decommissioned them yet. If you drop the firewall rules for these IP addresses, you may get connectivity errors. Instead, proactively add the new IP addresses listed in the Gateway IP addresses column to the outbound firewall rule as soon as you receive the notification for decommissioning. This ensures that when your server is migrated to the latest gateway hardware, there are no interruptions in connectivity to your server.
+* **Gateway IP addresses (decommissioned):** This column lists the IP addresses of the gateway rings, which are decommissioned and are no longer in operations. You can safely remove these IP addresses from your outbound firewall rule.
+
+| **Region name** | **Gateway IP addresses** | **Gateway IP addresses (decommissioning)** | **Gateway IP addresses (decommissioned)** |
+|||--|--|
+| Australia Central | 20.36.105.0 | | |
+| Australia Central2 | 20.36.113.0 | | |
+| Australia East | 13.75.149.87, 40.79.161.1 | | |
+| Australia South East | 13.73.109.251, 13.77.49.32, 13.77.48.10 | | |
+| Brazil South | 191.233.201.8, 191.233.200.16 | | 104.41.11.5 |
+| Canada Central | 13.71.168.32|| 40.85.224.249, 52.228.35.221 |
+| Canada East | 40.86.226.166, 52.242.30.154 | | |
+| Central US | 23.99.160.139, 52.182.136.37, 52.182.136.38 | 13.67.215.62 | |
+| China East | 139.219.130.35 | | |
| China East 2 | 40.73.82.1, 52.130.120.89 | | |
| China East 3 | 52.131.155.192 | | |
| China North | 139.219.15.17 | | |
| China North 2 | 40.73.50.0 | | |
| China North 3 | 52.131.27.192 | | |
| East Asia | 13.75.33.20, 52.175.33.150, 13.75.33.21 | | |
+| East US | 40.71.8.203, 40.71.83.113 | 40.121.158.30 | 191.238.6.43 |
+| East US 2 | 40.70.144.38, 52.167.105.38 | 52.177.185.181 | |
+| France Central | 40.79.137.0, 40.79.129.1 | | |
+| France South | 40.79.177.0 | | |
+| Germany Central | 51.4.144.100 | | |
+| Germany North | 51.116.56.0 | | |
+| Germany North East | 51.5.144.179 | | |
+| Germany West Central | 51.116.152.0 | | |
+| India Central | 104.211.96.159 | | |
+| India South | 104.211.224.146 | | |
+| India West | 104.211.160.80 | | |
+| Japan East | 40.79.192.23, 40.79.184.8 | 13.78.61.196 | |
+| Japan West | 191.238.68.11, 40.74.96.6, 40.74.96.7 | 104.214.148.156 | |
+| Korea Central | 52.231.17.13 | 52.231.32.42 | |
+| Korea South | 52.231.145.3, 52.231.151.97 | 52.231.200.86 | |
+| North Central US | 52.162.104.35, 52.162.104.36 | 23.96.178.199 | |
+| North Europe | 52.138.224.6, 52.138.224.7 | 40.113.93.91 | 191.235.193.75 |
+| South Africa North | 102.133.152.0 | | |
+| South Africa West | 102.133.24.0 | | |
+| South Central US | 104.214.16.39, 20.45.120.0 | 13.66.62.124 | 23.98.162.75 |
+| South East Asia | 40.78.233.2, 23.98.80.12 | 104.43.15.0 | |
+| Switzerland North | 51.107.56.0 | | |
+| Switzerland West | 51.107.152.0 | | |
+| UAE Central | 20.37.72.64 | | |
+| UAE North | 65.52.248.0 | | |
+| UK South | 51.140.144.32, 51.105.64.0 | 51.140.184.11 | |
+| UK West | 51.141.8.11 | | |
+| West Central US | 13.78.145.25, 52.161.100.158 | | |
+| West Europe | 13.69.105.208, 104.40.169.187 | 40.68.37.158 | 191.237.232.75 |
+| West US | 13.86.216.212, 13.86.217.212 | 104.42.238.205 | 23.99.34.75 |
+| West US2 | 13.66.136.195, 13.66.136.192, 13.66.226.202 | | |
+| West US3 | 20.150.184.2 | | |
+
+## Connection redirection
+
+Azure Database for MySQL supports an additional connection policy, **redirection**, that helps to reduce network latency between client applications and MySQL servers. With redirection, after the initial TCP session is established to the Azure Database for MySQL server, the server returns the backend address of the node hosting the MySQL server to the client. Thereafter, all subsequent packets flow directly to the server, bypassing the gateway. As packets flow directly to the server, latency is reduced and throughput is improved.
+
+This feature is supported in Azure Database for MySQL servers with engine versions 5.6, 5.7, and 8.0.
+
+Support for redirection is available in the PHP [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension, developed by Microsoft, and is available on [PECL](https://pecl.php.net/package/mysqlnd_azure). See the [configuring redirection](./how-to-redirection.md) article for more information on how to use redirection in your applications.
++
+> [!IMPORTANT]
+> Support for redirection in the PHP [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension is currently in preview.
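+
+As a minimal sketch of enabling redirection with this extension (the PECL package and the `mysqlnd_azure.enableRedirect` setting come from the extension's documentation; the ini file path is an assumption that varies by distribution and PHP version):
+
+```console
+# Install the preview mysqlnd_azure extension from PECL.
+sudo pecl install mysqlnd_azure
+
+# Hypothetical ini path: load the extension and turn redirection on.
+echo "extension=mysqlnd_azure"           | sudo tee    /etc/php/7.4/mods-available/mysqlnd_azure.ini
+echo "mysqlnd_azure.enableRedirect = on" | sudo tee -a /etc/php/7.4/mods-available/mysqlnd_azure.ini
+```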
+
+## Frequently asked questions
+
+### What do you need to know about this planned maintenance?
+This is a DNS change only, which makes it transparent to clients. While the IP address for the FQDN is changed in the DNS server, the local DNS cache is refreshed within 5 minutes, and this is done automatically by the operating system. After the local DNS refresh, all new connections connect to the new IP address, and all existing connections remain connected to the old IP address with no interruption until the old IP addresses are fully decommissioned. The old IP address takes roughly three to four weeks before getting decommissioned; therefore, this change should have no effect on client applications.
+
+### What are we decommissioning?
+Only gateway nodes will be decommissioned. When users connect to their servers, the first stop of the connection is the gateway node, before the connection is forwarded to the server. We're decommissioning old gateway rings (not tenant rings where the server is running). Refer to the [connectivity architecture](#connectivity-architecture) section for more clarification.
+
+### How can you validate if your connections are going to old gateway nodes or new gateway nodes?
+Ping your server's FQDN, for example ``ping xxx.mysql.database.azure.com``. If the returned IP address is one of the IPs listed under Gateway IP addresses (decommissioning) in the table above, it means your connection is going through the old gateway. Conversely, if the returned IP address is one of the IPs listed under Gateway IP addresses, it means your connection is going through the new gateway.
+
+You may also test by [PsPing](/sysinternals/downloads/psping) or TCPPing the database server from your client application with port 3306 and ensure that the returned IP address isn't one of the decommissioning IP addresses.
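+
+For example (the server name below is a placeholder; compare the returned address against the table above):
+
+```console
+# Resolve the gateway IP address your connections currently use.
+ping mydemoserver.mysql.database.azure.com
+
+# Or verify TCP connectivity to port 3306 with PsPing (Windows Sysinternals).
+psping mydemoserver.mysql.database.azure.com:3306
+```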
+
+### How do I know when the maintenance is over and will I get another notification when old IP addresses are decommissioned?
+You'll receive an email to inform you when we start the maintenance work. The maintenance can take up to one month depending on the number of servers we need to migrate in all regions. Prepare your client to connect to the database server using the FQDN or using the new IP address from the table above.
+
+### What do I do if my client applications are still connecting to old gateway server?
+This indicates that your applications connect to the server using a static IP address instead of the FQDN. Review your connection strings, connection pooling settings, AKS settings, and even the source code.
+
+### Is there any impact for my application connections?
+This maintenance is just a DNS change, so it's transparent to the client. Once the DNS cache is refreshed in the client (done automatically by the operating system), all new connections connect to the new IP address, and all existing connections keep working until the old IP address is fully decommissioned several weeks later. Retry logic isn't required for this case, but it's good to see that the application has retry logic configured. Either use the FQDN to connect to the database server, or allow-list the new 'Gateway IP addresses' in your application connection string.
+This maintenance operation won't drop the existing connections. It only makes the new connection requests go to the new gateway ring.
+
+### Can I request for a specific time window for the maintenance?
+As the migration should be transparent with no impact on customers' connectivity, we expect there will be no issues for most users. Review your application proactively and ensure that you either use the FQDN to connect to the database server or allow-list the new 'Gateway IP addresses' in your application connection string.
+
+### I'm using Private Link. Will my connections be affected?
+No. This is a gateway hardware decommission and has no relation to Private Link or private IP addresses. It only affects the public IP addresses mentioned under the decommissioning IP addresses.
+++
+## Next steps
+* [Create and manage Azure Database for MySQL firewall rules using the Azure portal](./how-to-manage-firewall-using-portal.md)
+* [Create and manage Azure Database for MySQL firewall rules using Azure CLI](./how-to-manage-firewall-using-cli.md)
+* [Configure redirection with Azure Database for MySQL](./how-to-redirection.md)
mysql Concepts Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity.md
+
+ Title: Transient connectivity errors - Azure Database for MySQL
+description: Learn how to handle transient connectivity errors and connect efficiently to Azure Database for MySQL.
+keywords: mysql connection,connection string,connectivity issues,transient error,connection error,connect efficiently
+++++ Last updated : 3/18/2020++
+# Handle transient errors and connect efficiently to Azure Database for MySQL
++
+This article describes how to handle transient errors and connect efficiently to Azure Database for MySQL.
+
+## Transient errors
+
+A transient error, also known as a transient fault, is an error that will resolve itself. Most typically, these errors manifest as a connection to the database server being dropped, or new connections to a server failing to open. Transient errors can occur, for example, when a hardware or network failure happens. Another reason could be a new version of a PaaS service that is being rolled out. Most of these events are automatically mitigated by the system in less than 60 seconds. A best practice for designing and developing applications in the cloud is to expect transient errors. Assume they can happen in any component at any time, and have the appropriate logic in place to handle these situations.
+
+## Handling transient errors
+
+Transient errors should be handled using retry logic. Situations that must be considered:
+
+* An error occurs when you try to open a connection.
+* An idle connection is dropped on the server side; when you try to issue a command, it can't be executed.
+* An active connection that is currently executing a command is dropped.
+
+The first and second cases are fairly straightforward to handle: try to open the connection again. When you succeed, the transient error has been mitigated by the system, and you can use your Azure Database for MySQL again. We recommend waiting before retrying the connection, and backing off if the initial retries fail, so that the system can use all available resources to overcome the error situation. A good pattern to follow is:
+
+* Wait for 5 seconds before your first retry.
+* For each following retry, increase the wait exponentially, up to 60 seconds.
+* Set a maximum number of retries, at which point your application considers the operation failed. A minimal sketch of this pattern is shown below.
+
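+As a minimal sketch of this backoff pattern (assuming a JDBC driver such as MySQL Connector/J on the classpath; the server URL and names are placeholders), the retry loop might look like this in Java. Production code should also distinguish transient errors from permanent ones before retrying:
+
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+
+public class ConnectWithRetry {
+    // Placeholder connection details - replace with your server's FQDN and database.
+    private static final String URL =
+        "jdbc:mysql://myserver.mysql.database.azure.com:3306/mydb?useSSL=true";
+
+    public static Connection openConnection(String user, String password, int maxRetries)
+            throws SQLException, InterruptedException {
+        long waitMillis = 5000; // wait 5 seconds before the first retry
+        for (int attempt = 0; ; attempt++) {
+            try {
+                return DriverManager.getConnection(URL, user, password);
+            } catch (SQLException e) {
+                if (attempt >= maxRetries) {
+                    throw e; // give up: the application considers the operation failed
+                }
+                Thread.sleep(waitMillis);
+                waitMillis = Math.min(waitMillis * 2, 60000); // back off exponentially, capped at 60 seconds
+            }
+        }
+    }
+}
+```
+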
+When a connection with an active transaction fails, it is more difficult to handle the recovery correctly. There are two cases: if the transaction was read-only in nature, it is safe to reopen the connection and retry the transaction. If, however, the transaction was also writing to the database, you must determine whether the transaction was rolled back or whether it succeeded before the transient error happened. In the latter case, you might just not have received the commit acknowledgment from the database server.
+
+One way of doing this is to generate a unique ID on the client that is used for all retries. You pass this unique ID as part of the transaction to the server and store it in a column with a unique constraint. This way you can safely retry the transaction: it will succeed if the previous transaction was rolled back and the client-generated unique ID does not yet exist in the system, and it will fail with a duplicate key violation if the unique ID was previously stored because the previous transaction completed successfully.
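+
+A hedged sketch of this idempotent retry follows; the table, column names, and connection details are hypothetical. The key point is that the client-generated ID is created once and reused on every retry:
+
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.sql.SQLIntegrityConstraintViolationException;
+import java.util.UUID;
+
+public class IdempotentWrite {
+    // Hypothetical table: CREATE TABLE orders (client_request_id CHAR(36) PRIMARY KEY, amount INT);
+    static final String URL = "jdbc:mysql://myserver.mysql.database.azure.com:3306/mydb"; // placeholder
+
+    public static void insertWithRetry(String user, String password, int amount, int maxRetries)
+            throws SQLException, InterruptedException {
+        String requestId = UUID.randomUUID().toString(); // generated once, reused for all retries
+        String sql = "INSERT INTO orders (client_request_id, amount) VALUES (?, ?)";
+        for (int attempt = 0; ; attempt++) {
+            try (Connection con = DriverManager.getConnection(URL, user, password);
+                 PreparedStatement ps = con.prepareStatement(sql)) {
+                ps.setString(1, requestId);
+                ps.setInt(2, amount);
+                ps.executeUpdate();
+                return; // committed on this attempt
+            } catch (SQLIntegrityConstraintViolationException dup) {
+                return; // duplicate key: an earlier attempt already committed, so the work is done
+            } catch (SQLException transientError) {
+                if (attempt >= maxRetries) throw transientError;
+                Thread.sleep(5000); // back off before retrying, as described earlier
+            }
+        }
+    }
+}
+```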
+
+When your program communicates with Azure Database for MySQL through third-party middleware, ask the vendor whether the middleware contains retry logic for transient errors.
+
+Make sure to test your retry logic. For example, try to execute your code while scaling the compute resources of your Azure Database for MySQL server up or down. Your application should handle the brief downtime that is encountered during this operation without any problems.
+
+## Connect efficiently to Azure Database for MySQL
+
+Database connections are a limited resource, so making effective use of connection pooling to access Azure Database for MySQL optimizes performance. The following sections explain how to use connection pooling or persistent connections to access Azure Database for MySQL more effectively.
+
+## Access databases by using connection pooling (recommended)
+
+Managing database connections can have a significant impact on the performance of the application as a whole. To optimize the performance of your application, the goal should be to reduce the number of times connections are established and the time spent establishing connections in key code paths. We strongly recommend using database connection pooling or persistent connections to connect to Azure Database for MySQL. Database connection pooling handles the creation, management, and allocation of database connections. When a program requests a database connection, it prioritizes the allocation of existing idle database connections rather than the creation of a new connection. After the program has finished using the database connection, the connection is recovered in preparation for further use, rather than simply being closed down.
+
+For better illustration, this article provides [a piece of sample code](./sample-scripts-java-connection-pooling.md) that uses Java as an example. For more information, see [Apache Commons DBCP](https://commons.apache.org/proper/commons-dbcp/).
+
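+As a brief hedged sketch (assuming the Apache Commons DBCP 2 library is on the classpath, with placeholder connection details), a pool can be configured and used like this:
+
+```java
+import java.sql.Connection;
+
+import org.apache.commons.dbcp2.BasicDataSource;
+
+public class PoolingExample {
+    public static void main(String[] args) throws Exception {
+        BasicDataSource pool = new BasicDataSource();
+        pool.setUrl("jdbc:mysql://myserver.mysql.database.azure.com:3306/mydb"); // placeholder
+        pool.setUsername("myadmin@myserver");
+        pool.setPassword("password");
+        pool.setInitialSize(5); // connections created when the pool starts
+        pool.setMaxTotal(20);   // upper bound on concurrently borrowed connections
+
+        // close() on a pooled connection returns it to the pool instead of tearing it down.
+        try (Connection con = pool.getConnection()) {
+            // execute your query here
+        }
+    }
+}
+```
+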
+> [!NOTE]
+> The server configures a timeout mechanism to close a connection that has been in an idle state for some time to free up resources. Be sure to set up the verification system to ensure the effectiveness of persistent connections when you are using them. For more information, see [Configure verification systems on the client side to ensure the effectiveness of persistent connections](concepts-connectivity.md#configure-verification-mechanisms-in-clients-to-confirm-the-effectiveness-of-persistent-connections).
+
+## Access databases by using persistent connections (recommended)
+
+The concept of persistent connections is similar to that of connection pooling. Replacing short connections with persistent connections requires only minor changes to the code, but it has a major effect in terms of improving performance in many typical application scenarios.
+
+## Access databases by using wait and retry mechanism with short connections
+
+If you have resource limitations, we strongly recommend that you use database pooling or persistent connections to access databases. If your application uses short connections and experiences connection failures when you approach the upper limit on the number of concurrent connections, you can try a wait-and-retry mechanism. Set an appropriate wait time, with a shorter wait time after the first attempt, and then retry several times, similar to the retry pattern shown earlier.
+
+## Configure verification mechanisms in clients to confirm the effectiveness of persistent connections
+
+The server configures a timeout mechanism to close a connection that's been in an idle state for some time to free up resources. When the client accesses the database again, it's equivalent to creating a new connection request between the client and the server. To ensure the effectiveness of connections during the process of using them, configure a verification mechanism on the client. As shown in the following example, you can use Tomcat JDBC connection pooling to configure this verification mechanism.
+
+By setting the TestOnBorrow parameter, when there's a new request, the connection pool automatically verifies the effectiveness of any available idle connection. If the connection is effective, it's returned directly; otherwise, the connection pool withdraws it, creates a new effective connection, and returns that instead. This process ensures that the database is accessed efficiently.
+
+For information on the specific settings, see the [JDBC connection pool official introduction document](https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#Common_Attributes). You mainly need to set the following three parameters: TestOnBorrow (set to true), ValidationQuery (set to SELECT 1), and ValidationQueryTimeout (set to 1). The specific sample code is shown below:
+
+```java
+import java.sql.Connection;
+
+import org.apache.tomcat.jdbc.pool.DataSource;
+import org.apache.tomcat.jdbc.pool.PoolProperties;
+
+public class SimpleTestOnBorrowExample {
+ public static void main(String[] args) throws Exception {
+ PoolProperties p = new PoolProperties();
+ p.setUrl("jdbc:mysql://localhost:3306/mysql");
+ p.setDriverClassName("com.mysql.jdbc.Driver");
+ p.setUsername("root");
+ p.setPassword("password");
+ // The indication of whether objects will be validated by the idle object evictor (if any).
+ // If an object fails to validate, it will be dropped from the pool.
+ // NOTE - for a true value to have any effect, the validationQuery or validatorClassName parameter must be set to a non-null string.
+ p.setTestOnBorrow(true);
+
+ // The SQL query that will be used to validate connections from this pool before returning them to the caller.
+ // If specified, this query does not have to return any data, it just can't throw a SQLException.
+ p.setValidationQuery("SELECT 1");
+
+      // The timeout in seconds before a connection validation query fails.
+ // This works by calling java.sql.Statement.setQueryTimeout(seconds) on the statement that executes the validationQuery.
+ // The pool itself doesn't timeout the query, it is still up to the JDBC driver to enforce query timeouts.
+ // A value less than or equal to zero will disable this feature.
+ p.setValidationQueryTimeout(1);
+ // set other useful pool properties.
+ DataSource datasource = new DataSource();
+ datasource.setPoolProperties(p);
+
+ Connection con = null;
+ try {
+ con = datasource.getConnection();
+ // execute your query here
+ } finally {
+ if (con!=null) try {con.close();}catch (Exception ignore) {}
+ }
+ }
+ }
+```
+
+## Next steps
+
+* [Troubleshoot connection issues to Azure Database for MySQL](how-to-troubleshoot-common-connection-issues.md)
mysql Concepts Data Access And Security Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-access-and-security-vnet.md
+
+ Title: VNet service endpoints - Azure Database for MySQL
+description: 'Describes how VNet service endpoints work for your Azure Database for MySQL server.'
+++++ Last updated : 7/17/2020+
+# Use Virtual Network service endpoints and rules for Azure Database for MySQL
++
+*Virtual network rules* are one firewall security feature that controls whether your Azure Database for MySQL server accepts communications that are sent from particular subnets in virtual networks. This article explains why the virtual network rule feature is sometimes your best option for securely allowing communication to your Azure Database for MySQL server.
+
+To create a virtual network rule, there must first be a [virtual network][vm-virtual-network-overview] (VNet) and a [virtual network service endpoint][vm-virtual-network-service-endpoints-overview-649d] for the rule to reference. The following picture illustrates how a Virtual Network service endpoint works with Azure Database for MySQL:
++
+> [!NOTE]
+> This feature is available in all regions of Azure where Azure Database for MySQL is deployed for General Purpose and Memory Optimized servers.
+> In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, please create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for MySQL server.
+
+You can also consider using [Private Link](concepts-data-access-security-private-link.md) for connections. Private Link provides a private IP address in your VNet for the Azure Database for MySQL server.
+
+<a name="anch-terminology-and-description-82f"></a>
+
+## Terminology and description
+
+**Virtual network:** You can have virtual networks associated with your Azure subscription.
+
+**Subnet:** A virtual network contains **subnets**. Any Azure virtual machines (VMs) that you have are assigned to subnets. One subnet can contain multiple VMs or other compute nodes. Compute nodes that are outside of your virtual network cannot access your virtual network unless you configure your security to allow access.
+
+**Virtual Network service endpoint:** A [Virtual Network service endpoint][vm-virtual-network-service-endpoints-overview-649d] is a subnet whose property values include one or more formal Azure service type names. In this article, we are interested in the type name of **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure Database for MySQL and PostgreSQL services. It is important to note that applying the **Microsoft.Sql** service tag to a VNet service endpoint configures service endpoint traffic for all Azure SQL Database, Azure Database for MySQL, and Azure Database for PostgreSQL servers on the subnet.
+
+**Virtual network rule:** A virtual network rule for your Azure Database for MySQL server is a subnet that is listed in the access control list (ACL) of your Azure Database for MySQL server. To be in the ACL for your Azure Database for MySQL server, the subnet must contain the **Microsoft.Sql** type name.
+
+A virtual network rule tells your Azure Database for MySQL server to accept communications from every node that is on the subnet.
+++++++
+<a name="anch-benefits-of-a-vnet-rule-68b"></a>
+
+## Benefits of a virtual network rule
+
+Until you take action, the VMs on your subnets cannot communicate with your Azure Database for MySQL server. One action that establishes the communication is the creation of a virtual network rule. The rationale for choosing the VNet rule approach requires a compare-and-contrast discussion involving the competing security options offered by the firewall.
+
+### A. Allow access to Azure services
+
+The Connection security pane has an **ON/OFF** button that is labeled **Allow access to Azure services**. The **ON** setting allows communications from all Azure IP addresses and all Azure subnets. These Azure IPs or subnets might not be owned by you. This **ON** setting is probably more open than you want your Azure Database for MySQL server to be. The virtual network rule feature offers much finer granular control.
+
+### B. IP rules
+
+The Azure Database for MySQL firewall allows you to specify IP address ranges from which communications are accepted into the Azure Database for MySQL Database. This approach is fine for stable IP addresses that are outside the Azure private network. But many nodes inside the Azure private network are configured with *dynamic* IP addresses. Dynamic IP addresses might change, such as when your VM is restarted. It would be folly to specify a dynamic IP address in a firewall rule in a production environment.
+
+You can salvage the IP option by obtaining a *static* IP address for your VM. For details, see [Configure private IP addresses for a virtual machine by using the Azure portal][vm-configure-private-ip-addresses-for-a-virtual-machine-using-the-azure-portal-321w].
+
+However, the static IP approach can become difficult to manage, and it is costly when done at scale. Virtual network rules are easier to establish and to manage.
+
+<a name="anch-details-about-vnet-rules-38q"></a>
+
+## Details about virtual network rules
+
+This section describes several details about virtual network rules.
+
+### Only one geographic region
+
+Each Virtual Network service endpoint applies to only one Azure region. The endpoint does not enable other regions to accept communication from the subnet.
+
+Any virtual network rule is limited to the region that its underlying endpoint applies to.
+
+### Server-level, not database-level
+
+Each virtual network rule applies to your whole Azure Database for MySQL server, not just to one particular database on the server. In other words, a virtual network rule applies at the server level, not at the database level.
+
+### Security administration roles
+
+There is a separation of security roles in the administration of Virtual Network service endpoints. Action is required from each of the following roles:
+
+- **Network Admin:** &nbsp; Turn on the endpoint.
+- **Database Admin:** &nbsp; Update the access control list (ACL) to add the given subnet to the Azure Database for MySQL server.
+
+*Azure RBAC alternative:*
+
+The roles of Network Admin and Database Admin have more capabilities than are needed to manage virtual network rules. Only a subset of their capabilities is needed.
+
+You have the option of using [Azure role-based access control (Azure RBAC)][rbac-what-is-813s] in Azure to create a single custom role that has only the necessary subset of capabilities. The custom role could be used instead of involving either the Network Admin or the Database Admin. The surface area of your security exposure is lower if you add a user to a custom role, versus adding the user to the other two major administrator roles.
+
+> [!NOTE]
+> In some cases the Azure Database for MySQL and the VNet-subnet are in different subscriptions. In these cases you must ensure the following configurations:
+> - Both subscriptions must be in the same Azure Active Directory tenant.
+> - The user has the required permissions to initiate operations, such as enabling service endpoints and adding a VNet-subnet to the given Server.
+> - Make sure that both subscriptions have the **Microsoft.Sql** and **Microsoft.DBforMySQL** resource providers registered. For more information, refer to [resource-manager-registration][resource-manager-portal]
+
+## Limitations
+
+For Azure Database for MySQL, the virtual network rules feature has the following limitations:
+
+- A Web App can be mapped to a private IP in a VNet/subnet. Even if service endpoints are turned ON from the given VNet/subnet, connections from the Web App to the server will have an Azure public IP source, not a VNet/subnet source. To enable connectivity from a Web App to a server that has VNet firewall rules, you must Allow Azure services to access server on the server.
+
+- In the firewall for your Azure Database for MySQL, each virtual network rule references a subnet. All these referenced subnets must be hosted in the same geographic region that hosts the Azure Database for MySQL.
+
+- Each Azure Database for MySQL server can have up to 128 ACL entries for any given virtual network.
+
+- Virtual network rules apply only to Azure Resource Manager virtual networks; and not to [classic deployment model][arm-deployment-model-568f] networks.
+
+- Turning ON virtual network service endpoints to Azure Database for MySQL using the **Microsoft.Sql** service tag also enables the endpoints for all Azure SQL Database, Azure Database for MySQL, and Azure Database for PostgreSQL servers on the subnet.
+
+- Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
+
+- If **Microsoft.Sql** is enabled in a subnet, it indicates that you only want to use VNet rules to connect. [Non-VNet firewall rules](concepts-firewall-rules.md) of resources in that subnet will not work.
+
+- On the firewall, IP address ranges do apply to the following networking items, but virtual network rules do not:
+ - [Site-to-Site (S2S) virtual private network (VPN)][vpn-gateway-indexmd-608y]
+ - On-premises via [ExpressRoute][expressroute-indexmd-744v]
+
+## ExpressRoute
+
+If your network is connected to the Azure network through use of [ExpressRoute][expressroute-indexmd-744v], each circuit is configured with two public IP addresses at the Microsoft Edge. The two IP addresses are used to connect to Microsoft Services, such as to Azure Storage, by using Azure Public Peering.
+
+To allow communication from your circuit to Azure Database for MySQL, you must create IP network rules for the public IP addresses of your circuits. In order to find the public IP addresses of your ExpressRoute circuit, open a support ticket with ExpressRoute by using the Azure portal.
+
+## Adding a VNET Firewall rule to your server without turning on VNET Service Endpoints
+
+Merely setting a VNet firewall rule does not help secure the server to the VNet. You must also turn VNet service endpoints **On** for the security to take effect. When you turn service endpoints **On**, your VNet-subnet experiences downtime until it completes the transition from **Off** to **On**. This is especially true in the context of large VNets. You can use the **IgnoreMissingServiceEndpoint** flag to reduce or eliminate the downtime during transition.
+
+You can set the **IgnoreMissingServiceEndpoint** flag by using the Azure CLI or portal.
+
+## Related articles
+- [Azure virtual networks][vm-virtual-network-overview]
+- [Azure virtual network service endpoints][vm-virtual-network-service-endpoints-overview-649d]
+
+## Next steps
+For articles on creating VNet rules, see:
+- [Create and manage Azure Database for MySQL VNet rules using the Azure portal](how-to-manage-vnet-using-portal.md)
+- [Create and manage Azure Database for MySQL VNet rules using Azure CLI](how-to-manage-vnet-using-cli.md)
+
+<!-- Link references, to text, Within this same GitHub repo. -->
+[arm-deployment-model-568f]: ../../azure-resource-manager/management/deployment-models.md
+
+[vm-virtual-network-overview]: ../../virtual-network/virtual-networks-overview.md
+
+[vm-virtual-network-service-endpoints-overview-649d]: ../../virtual-network/virtual-network-service-endpoints-overview.md
+
+[vm-configure-private-ip-addresses-for-a-virtual-machine-using-the-azure-portal-321w]: ../../virtual-network/ip-services/virtual-networks-static-private-ip-arm-pportal.md
+
+[rbac-what-is-813s]: ../../role-based-access-control/overview.md
+
+[vpn-gateway-indexmd-608y]: ../../vpn-gateway/index.yml
+
+[expressroute-indexmd-744v]: ../../expressroute/index.yml
+
+[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
mysql Concepts Data Access Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-access-security-private-link.md
+
+ Title: Private Link - Azure Database for MySQL
+description: Learn how Private link works for Azure Database for MySQL.
+++++ Last updated : 03/10/2020++
+# Private Link for Azure Database for MySQL
++
+Private Link allows you to connect to various PaaS services in Azure via a private endpoint. Azure Private Link essentially brings Azure services inside your private Virtual Network (VNet). The PaaS resources can be accessed using the private IP address just like any other resource in the VNet.
+
+For a list of PaaS services that support Private Link functionality, review the Private Link [documentation](../../private-link/index.yml). A private endpoint is a private IP address within a specific [VNet](../../virtual-network/virtual-networks-overview.md) and Subnet.
+
+> [!NOTE]
+> The private link feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers.
+
+## Data exfiltration prevention
+
+Data exfiltration in Azure Database for MySQL is when an authorized user, such as a database admin, is able to extract data from one system and move it to another location or system outside the organization. For example, the user moves the data to a storage account owned by a third party.
+
+Consider a scenario with a user running MySQL Workbench inside an Azure Virtual Machine (VM) that is connecting to an Azure Database for MySQL server provisioned in West US. The example below shows how to limit access with public endpoints on Azure Database for MySQL using network access controls.
+
+* Disable all Azure service traffic to Azure Database for MySQL via the public endpoint by setting *Allow Azure Services* to OFF. Ensure no IP addresses or ranges are allowed to access the server either via [firewall rules](./concepts-firewall-rules.md) or [virtual network service endpoints](./concepts-data-access-and-security-vnet.md).
+
+* Only allow traffic to the Azure Database for MySQL using the Private IP address of the VM. For more information, see the articles on [Service Endpoint](concepts-data-access-and-security-vnet.md) and [VNet firewall rules](how-to-manage-vnet-using-portal.md).
+
+* On the Azure VM, narrow down the scope of outgoing connections by using Network Security Groups (NSGs) and service tags as follows:
+
+   * Specify an NSG rule to allow traffic for *Service Tag = SQL.WestUS*, only allowing connections to Azure Database for MySQL in West US.
+   * Specify an NSG rule (with a higher priority number, so that it's evaluated after the allow rule) to deny traffic for *Service Tag = SQL*, denying connections to Azure Database for MySQL in all other regions. A sketch of these rules using the Azure CLI is shown below.
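+
+The following is a minimal Azure CLI sketch of these two NSG rules; the resource group, NSG name, rule names, and priority values are placeholders to adapt to your environment.
+
+```azurecli
+# Allow outbound traffic to the SQL service tag in West US only.
+az network nsg rule create --resource-group myResourceGroup --nsg-name myNsg \
+    --name AllowSqlWestUS --priority 200 --direction Outbound --access Allow \
+    --protocol Tcp --destination-port-ranges 3306 --destination-address-prefixes Sql.WestUS
+
+# Deny outbound traffic to the SQL service tag in all regions. The higher priority
+# number (300) means this rule is evaluated after the allow rule above.
+az network nsg rule create --resource-group myResourceGroup --nsg-name myNsg \
+    --name DenySqlAllRegions --priority 300 --direction Outbound --access Deny \
+    --protocol Tcp --destination-port-ranges 3306 --destination-address-prefixes Sql
+```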
+
+At the end of this setup, the Azure VM can connect only to Azure Database for MySQL in the West US region. However, the connectivity isn't restricted to a single Azure Database for MySQL server. The VM can still connect to any Azure Database for MySQL in the West US region, including databases that aren't part of the subscription. While we've reduced the scope of data exfiltration in the above scenario to a specific region, we haven't eliminated it altogether.
+
+With Private Link, you can now set up network access controls like NSGs to restrict access to the private endpoint. Individual Azure PaaS resources are then mapped to specific private endpoints. A malicious insider can only access the mapped PaaS resource (for example an Azure Database for MySQL) and no other resource.
+
+## On-premises connectivity over private peering
+
+When you connect to the public endpoint from on-premises machines, your IP address needs to be added to the IP-based firewall using a server-level firewall rule. While this model works well for allowing access to individual machines for dev or test workloads, it's difficult to manage in a production environment.
+
+With Private Link, you can enable cross-premises access to the private endpoint using [Express Route](https://azure.microsoft.com/services/expressroute/) (ER), private peering or [VPN tunnel](../../vpn-gateway/index.yml). You can subsequently disable all access via the public endpoint and not use the IP-based firewall.
+
+> [!NOTE]
+> In some cases the Azure Database for MySQL and the VNet-subnet are in different subscriptions. In these cases you must ensure the following configurations:
+> - Make sure that both subscriptions have the **Microsoft.DBforMySQL** resource provider registered. For more information refer [resource-manager-registration][resource-manager-portal]
+
+## Configure Private Link for Azure Database for MySQL
+
+### Creation Process
+
+Private endpoints are required to enable Private Link. This can be done using the following how-to guides.
+
+* [Azure portal](./how-to-configure-private-link-portal.md)
+* [CLI](./how-to-configure-private-link-cli.md)
+
+### Approval Process
+Once the network admin creates the private endpoint (PE), the MySQL admin can manage the private endpoint connection (PEC) to Azure Database for MySQL. This separation of duties between the network admin and the DBA is helpful for management of Azure Database for MySQL connectivity.
+
+* Navigate to the Azure Database for MySQL server resource in the Azure portal.
+* Select **Private endpoint connections** in the left pane. A list of all private endpoint connections (PECs) and the corresponding private endpoints (PEs) is shown.
+* Select an individual PEC from the list.
+* The MySQL server admin can choose to approve or reject a PEC, and can optionally add a short text response.
+* After approval or rejection, the list reflects the appropriate state along with the response text.
++
+## Use cases of Private Link for Azure Database for MySQL
+
+Clients can connect to the private endpoint from the same VNet, [peered VNet](../../virtual-network/virtual-network-peering-overview.md) in same region or across regions, or via [VNet-to-VNet connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) across regions. Additionally, clients can connect from on-premises using ExpressRoute, private peering, or VPN tunneling. Below is a simplified diagram showing the common use cases.
++
+### Connecting from an Azure VM in Peered Virtual Network (VNet)
+Configure [VNet peering](../../virtual-network/tutorial-connect-virtual-networks-powershell.md) to establish connectivity to the Azure Database for MySQL from an Azure VM in a peered VNet.
+
+### Connecting from an Azure VM in VNet-to-VNet environment
+Configure a [VNet-to-VNet VPN gateway connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) to establish connectivity to an Azure Database for MySQL from an Azure VM in a different region or subscription.
+
+### Connecting from an on-premises environment over VPN
+To establish connectivity from an on-premises environment to the Azure Database for MySQL, choose and implement one of the options:
+
+* [Point-to-Site connection](../../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md)
+* [Site-to-Site VPN connection](../../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md)
+* [ExpressRoute circuit](../../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md)
+
+## Private Link combined with firewall rules
+
+The following situations and outcomes are possible when you use Private Link in combination with firewall rules:
+
+* If you don't configure any firewall rules, then by default, no traffic will be able to access the Azure Database for MySQL.
+
+* If you configure public traffic or a service endpoint and you create private endpoints, then different types of incoming traffic are authorized by the corresponding type of firewall rule.
+
+* If you don't configure any public traffic or service endpoint and you create private endpoints, then the Azure Database for MySQL is accessible only through the private endpoints. If you don't configure public traffic or a service endpoint, after all approved private endpoints are rejected or deleted, no traffic will be able to access the Azure Database for MySQL.
+
+## Deny public access for Azure Database for MySQL
+
+If you want to rely only on private endpoints for accessing your Azure Database for MySQL, you can disable all public endpoints (that is, [firewall rules](concepts-firewall-rules.md) and [VNet service endpoints](concepts-data-access-and-security-vnet.md)) by setting the **Deny Public Network Access** configuration on the database server.
+
+When this setting is set to *YES*, only connections via private endpoints are allowed to your Azure Database for MySQL. When this setting is set to *NO*, clients can connect to your Azure Database for MySQL based on your firewall or VNet service endpoint settings. Additionally, once **Deny Public Network Access** is set, customers cannot add or update existing firewall rules and VNet service endpoint rules.
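+
+As a hedged sketch (assuming a recent Azure CLI version that supports the `--public-network-access` parameter; the resource group and server name are placeholders), the setting can be applied like this:
+
+```azurecli
+# Disable all public network access; only private endpoint connections are then allowed.
+az mysql server update --resource-group myResourceGroup \
+    --name myServer --public-network-access Disabled
+```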
+
+> [!Note]
+> This feature is available in all Azure regions where Azure Database for MySQL - Single server supports General Purpose and Memory Optimized pricing tiers.
+>
+> This setting does not have any impact on the SSL and TLS configurations for your Azure Database for MySQL.
+
+To learn how to set the **Deny Public Network Access** for your Azure Database for MySQL from Azure portal, refer to [How to configure Deny Public Network Access](how-to-deny-public-network-access.md).
+
+## Next steps
+
+To learn more about Azure Database for MySQL security features, see the following articles:
+
+* To configure a firewall for Azure Database for MySQL, see [Firewall support](./concepts-firewall-rules.md).
+
+* To learn how to configure a virtual network service endpoint for your Azure Database for MySQL, see [Configure access from virtual networks](./concepts-data-access-and-security-vnet.md).
+
+* For an overview of Azure Database for MySQL connectivity, see [Azure Database for MySQL Connectivity Architecture](./concepts-connectivity-architecture.md)
+
+<!-- Link references, to text, Within this same GitHub repo. -->
+[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
mysql Concepts Data Encryption Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-encryption-mysql.md
+
+ Title: Data encryption with customer-managed key - Azure Database for MySQL
+description: Azure Database for MySQL data encryption with a customer-managed key enables you to Bring Your Own Key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data.
+++++ Last updated : 01/13/2020++
+# Azure Database for MySQL data encryption with a customer-managed key
++
+Data encryption with customer-managed keys for Azure Database for MySQL enables you to bring your own key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you're responsible for, and in full control of, a key's lifecycle, key usage permissions, and auditing of operations on keys.
+
+Data encryption with customer-managed keys for Azure Database for MySQL is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault](../../key-vault/general/security-features.md) instance. The key encryption key (KEK) and data encryption key (DEK) are described in more detail later in this article.
+
+Key Vault is a cloud-based, external key management system. It's highly available and provides scalable, secure storage for RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs). It doesn't allow direct access to a stored key, but does provide services of encryption and decryption to authorized entities. Key Vault can generate the key, import it, or [have it transferred from an on-premises HSM device](../../key-vault/keys/hsm-protected-keys.md).
+
+> [!NOTE]
+> This feature is supported only on "General Purpose storage v2 (up to 16TB)" storage, available in the General Purpose and Memory Optimized pricing tiers. Refer to [Storage concepts](concepts-pricing-tiers.md#storage) for more details. For other limitations, refer to the [limitations](concepts-data-encryption-mysql.md#limitations) section.
+
+## Benefits
+
+Data encryption with customer-managed keys for Azure Database for MySQL provides the following benefits:
+
+* You fully control data access, with the ability to remove the key and make the database inaccessible
+* Full control over the key-lifecycle, including rotation of the key to align with corporate policies
+* Central management and organization of keys in Azure Key Vault
+* Ability to implement separation of duties between security officers, DBAs, and system administrators
++
+## Terminology and description
+
+**Data encryption key (DEK)**: A symmetric AES256 key used to encrypt a partition or block of data. Encrypting each block of data with a different key makes crypto analysis attacks more difficult. Access to DEKs is needed by the resource provider or application instance that is encrypting and decrypting a specific block. When you replace a DEK with a new key, only the data in its associated block must be re-encrypted with the new key.
+
+**Key encryption key (KEK)**: An encryption key used to encrypt the DEKs. A KEK that never leaves Key Vault allows the DEKs themselves to be encrypted and controlled. The entity that has access to the KEK might be different than the entity that requires the DEK. Since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which DEKs can be effectively deleted by deletion of the KEK.
+
+The DEKs, encrypted with the KEKs, are stored separately. Only an entity with access to the KEK can decrypt these DEKs. For more information, see [Security in encryption at rest](../../security/fundamentals/encryption-atrest.md).
+
+## How data encryption with a customer-managed key works
++
+For a MySQL server to use customer-managed keys stored in Key Vault for encryption of the DEK, a Key Vault administrator gives the following access rights to the server (a CLI sketch follows the list):
+
+* **get**: For retrieving the public part and properties of the key in the key vault.
+* **wrapKey**: To be able to encrypt the DEK. The encrypted DEK is stored in the Azure Database for MySQL.
+* **unwrapKey**: To be able to decrypt the DEK. Azure Database for MySQL needs the decrypted DEK to encrypt and decrypt the data.
+
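+A minimal Azure CLI sketch for granting these permissions follows; the vault name is a placeholder, and the object ID is the server's managed identity, which you can obtain from the portal or CLI.
+
+```azurecli
+# Grant the server's managed identity the get, wrapKey, and unwrapKey key permissions.
+az keyvault set-policy --name myKeyVault \
+    --object-id <server-managed-identity-object-id> \
+    --key-permissions get wrapKey unwrapKey
+```
+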
+The key vault administrator can also [enable logging of Key Vault audit events](../../azure-monitor/insights/key-vault-insights-overview.md), so they can be audited later.
+
+When the server is configured to use the customer-managed key stored in the key vault, the server sends the DEK to the key vault for encryption. Key Vault returns the encrypted DEK, which is stored in the user database. Similarly, when needed, the server sends the protected DEK to the key vault for decryption. Auditors can use Azure Monitor to review Key Vault audit event logs, if logging is enabled.
+
+## Requirements for configuring data encryption for Azure Database for MySQL
+
+The following are requirements for configuring Key Vault:
+
+* Key Vault and Azure Database for MySQL must belong to the same Azure Active Directory (Azure AD) tenant. Cross-tenant Key Vault and server interactions aren't supported. Moving the Key Vault resource afterwards requires you to reconfigure data encryption.
+* Enable the [soft-delete](../../key-vault/general/soft-delete-overview.md) feature on the key vault with retention period set to **90 days**, to protect from data loss if an accidental key (or Key Vault) deletion happens. Soft-deleted resources are retained for 90 days by default, unless the retention period is explicitly set to <=90 days. The recover and purge actions have their own permissions associated in a Key Vault access policy. The soft-delete feature is off by default, but you can enable it through PowerShell or the Azure CLI (note that you can't enable it through the Azure portal).
+* Enable the [Purge Protection](../../key-vault/general/soft-delete-overview.md#purge-protection) feature on the key vault with retention period set to **90 days**. Purge protection can only be enabled once soft-delete is enabled. It can be turned on via Azure CLI or PowerShell. When purge protection is on, a vault or an object in the deleted state cannot be purged until the retention period has passed. Soft-deleted vaults and objects can still be recovered, ensuring that the retention policy will be followed.
+* Grant the Azure Database for MySQL access to the key vault with the get, wrapKey, and unwrapKey permissions by using its unique managed identity. In the Azure portal, the unique 'Service' identity is automatically created when data encryption is enabled on the MySQL. See [Configure data encryption for MySQL](how-to-data-encryption-portal.md) for detailed, step-by-step instructions when you're using the Azure portal.
+
+The following are requirements for configuring the customer-managed key:
+
+* The customer-managed key to be used for encrypting the DEK can be only asymmetric, RSA 2048.
+* The key activation date (if set) must be a date and time in the past. The expiration date must not be set.
+* The key must be in the *Enabled* state.
+* The key must have [soft delete](../../key-vault/general/soft-delete-overview.md) enabled, with the retention period set to **90 days**. This implicitly sets the required key attribute recoveryLevel: "Recoverable". If the retention is set to fewer than 90 days, the recoveryLevel is "CustomizedRecoverable", which doesn't satisfy the requirement, so make sure to set the retention period to **90 days**.
+* The key must have [purge protection enabled](../../key-vault/general/soft-delete-overview.md#purge-protection).
+* If you're [importing an existing key](/rest/api/keyvault/keys/import-key/import-key) into the key vault, make sure to provide it in the supported file formats (`.pfx`, `.byok`, `.backup`).
+
+## Recommendations
+
+When you're using data encryption by using a customer-managed key, here are recommendations for configuring Key Vault:
+
+* Set a resource lock on Key Vault to control who can delete this critical resource and prevent accidental or unauthorized deletion.
+* Enable auditing and reporting on all encryption keys. Key Vault provides logs that are easy to inject into other security information and event management tools. Azure Monitor Log Analytics is one example of a service that's already integrated.
+* Ensure that Key Vault and Azure Database for MySQL reside in the same region, to ensure faster access for DEK wrap and unwrap operations.
+* Lock down Azure Key Vault to **private endpoint and selected networks** only, and allow only *trusted Microsoft services* access, to secure the resources.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/keyvault-trusted-service.png" alt-text="trusted-service-with-AKV":::
+
+Here are recommendations for configuring a customer-managed key:
+
+* Keep a copy of the customer-managed key in a secure place, or escrow it to the escrow service.
+
+* If Key Vault generates the key, create a key backup before using the key for the first time. You can only restore the backup to Key Vault. For more information about the backup command, see [Backup-AzKeyVaultKey](/powershell/module/az.keyVault/backup-azkeyVaultkey).
+
+## Inaccessible customer-managed key condition
+
+When you configure data encryption with a customer-managed key in Key Vault, continuous access to this key is required for the server to stay online. If the server loses access to the customer-managed key in Key Vault, the server begins denying all connections within 10 minutes. The server issues a corresponding error message and changes the server state to *Inaccessible*. Some of the reasons why the server can reach this state are:
+
+* If we create a Point In Time Restore server for your Azure Database for MySQL, which has data encryption enabled, the newly created server will be in *Inaccessible* state. You can fix this through [Azure portal](how-to-data-encryption-portal.md#using-data-encryption-for-restore-or-replica-servers) or [CLI](how-to-data-encryption-cli.md#using-data-encryption-for-restore-or-replica-servers).
+* If we create a read replica for your Azure Database for MySQL, which has data encryption enabled, the replica server will be in *Inaccessible* state. You can fix this through [Azure portal](how-to-data-encryption-portal.md#using-data-encryption-for-restore-or-replica-servers) or [CLI](how-to-data-encryption-cli.md#using-data-encryption-for-restore-or-replica-servers).
+* If you delete the Key Vault, the Azure Database for MySQL will be unable to access the key and will move to the *Inaccessible* state. Recover the [Key Vault](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the server *Available*.
+* If you delete the key from the Key Vault, the Azure Database for MySQL will be unable to access the key and will move to the *Inaccessible* state. Recover the [Key](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the server *Available*.
+* If the key stored in Azure Key Vault expires, the key becomes invalid and the Azure Database for MySQL transitions into the *Inaccessible* state. Extend the key expiry date using the [CLI](/cli/azure/keyvault/key#az-keyvault-key-set-attributes), as sketched below, and then revalidate the data encryption to make the server *Available*.
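+
+A minimal CLI sketch for extending a key's expiry date follows; the vault name, key name, and date are placeholders.
+
+```azurecli
+# Push the key's expiration date out, then revalidate data encryption on the server.
+az keyvault key set-attributes --vault-name myKeyVault --name myKey \
+    --expires "2031-01-01T00:00:00Z"
+```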
+
+### Accidental key access revocation from Key Vault
+
+It might happen that someone with sufficient access rights to Key Vault accidentally disables server access to the key by:
+
+* Revoking the key vault's `get`, `wrapKey`, and `unwrapKey` permissions from the server.
+* Deleting the key.
+* Deleting the key vault.
+* Changing the key vault's firewall rules.
+* Deleting the managed identity of the server in Azure AD.
+
+## Monitor the customer-managed key in Key Vault
+
+To monitor the database state, and to enable alerting for the loss of transparent data encryption protector access, configure the following Azure features:
+
+* [Azure Resource Health](../../service-health/resource-health-overview.md): An inaccessible database that has lost access to the customer key shows as "Inaccessible" after the first connection to the database has been denied.
+* [Activity log](../../service-health/alerts-activity-log-service-notifications-portal.md): When access to the customer key in the customer-managed Key Vault fails, entries are added to the activity log. You can reinstate access as soon as possible, if you create alerts for these events.
+
+* [Action groups](../../azure-monitor/alerts/action-groups.md): Define these groups to send you notifications and alerts based on your preferences.
+
+## Restore and replicate with a customer's managed key in Key Vault
+
+After Azure Database for MySQL is encrypted with a customer's managed key stored in Key Vault, any newly created copy of the server is also encrypted. You can make this new copy either through a local or geo-restore operation, or through read replicas. However, the copy can be changed to reflect a new customer's managed key for encryption. When the customer-managed key is changed, old backups of the server start using the latest key.
+
+To avoid issues while setting up customer-managed data encryption during restore or read replica creation, it's important to follow these steps on the source and restored/replica servers:
+
+* Initiate the restore or read replica creation process from the source Azure Database for MySQL.
+* Keep the newly created server (restored/replica) in an inaccessible state, because its unique identity hasn't yet been given permissions to Key Vault.
+* On the restored/replica server, revalidate the customer-managed key in the data encryption settings to ensure that the newly created server is given wrap and unwrap permissions to the key stored in Key Vault.
+
+## Limitations
+
+For Azure Database for MySQL, support for encryption of data at rest using a customer-managed key (CMK) has a few limitations:
+
+* Support for this functionality is limited to **General Purpose** and **Memory Optimized** pricing tiers.
+* This feature is only supported in regions and on servers that support general purpose storage v2 (up to 16 TB). For the list of Azure regions supporting storage up to 16 TB, refer to the storage section in the documentation [here](concepts-pricing-tiers.md#storage)
+
+ > [!NOTE]
+   > - For all new MySQL servers created in the [Azure regions](concepts-pricing-tiers.md#storage) supporting general purpose storage v2, support for encryption with customer-managed keys is **available**. Point-in-time restored (PITR) servers or read replicas will not qualify, though in theory they are 'new'.
+   > - To validate whether your provisioned server is on general purpose storage v2, go to the pricing tier blade in the portal and check the maximum storage size supported by your provisioned server. If you can move the slider up to 4TB, your server is on general purpose storage v1 and will not support encryption with customer-managed keys. However, the data is encrypted using service-managed keys at all times. Please reach out to AskAzureDBforMySQL@service.microsoft.com if you have any questions.
+
+* Encryption is only supported with an RSA 2048 cryptographic key.
+
+## Next steps
+
+* Learn how to set up data encryption with a customer-managed key for your Azure database for MySQL by using the [Azure portal](how-to-data-encryption-portal.md) and [Azure CLI](how-to-data-encryption-cli.md).
+* Learn about the storage type support for [Azure Database for MySQL - Single Server](concepts-pricing-tiers.md#storage)
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-in-replication.md
+
+ Title: Data-in Replication - Azure Database for MySQL
+description: Learn about using Data-in Replication to synchronize from an external server into the Azure Database for MySQL service.
+++++ Last updated : 04/08/2021++
+# Replicate data into Azure Database for MySQL
++
+Data-in Replication allows you to synchronize data from an external MySQL server into the Azure Database for MySQL service. The external server can be on-premises, in virtual machines, or a database service hosted by other cloud providers. Data-in Replication is based on the binary log (binlog) file position-based or GTID-based replication native to MySQL. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
+
+## When to use Data-in Replication
+
+The main scenarios to consider for using Data-in Replication are:
+
+- **Hybrid Data Synchronization:** With Data-in Replication, you can keep data synchronized between your on-premises servers and Azure Database for MySQL. This synchronization is useful for creating hybrid applications. This method is appealing when you have an existing local database server but want to move the data to a region closer to end users.
+- **Multi-Cloud Synchronization:** For complex cloud solutions, use Data-in Replication to synchronize data between Azure Database for MySQL and different cloud providers, including virtual machines and database services hosted in those clouds.
+
+For migration scenarios, use the [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) (DMS).
+
+## Limitations and considerations
+
+### Data not replicated
+
+The [*mysql system database*](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) on the source server isn't replicated. In addition, changes to accounts and permissions on the source server aren't replicated. If you create an account on the source server and this account needs to access the replica server, manually create the same account on the replica server. To understand what tables are contained in the system database, see the [MySQL manual](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html).
+
+### Filtering
+
+To skip replicating tables from your source server (hosted on-premises, in virtual machines, or a database service hosted by other cloud providers), the `replicate_wild_ignore_table` parameter is supported. Optionally, update this parameter on the replica server hosted in Azure using the [Azure portal](how-to-server-parameters.md) or [Azure CLI](how-to-configure-server-parameters-using-cli.md).
+
+To learn more about this parameter, review the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#option_mysqld_replicate-wild-ignore-table).
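+
+As a hedged example, the following Azure CLI sketch sets this parameter on the replica server; the resource group, server name, and table pattern are placeholder values.
+
+```azurecli
+# Skip replicating any table in db1 whose name starts with "log".
+az mysql server configuration set --resource-group myResourceGroup \
+    --server-name myReplicaServer --name replicate_wild_ignore_table --value "db1.log%"
+```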
+
+## Supported in General Purpose or Memory Optimized tier only
+
+Data-in Replication is only supported in General Purpose and Memory Optimized pricing tiers.
+
+>[!Note]
+>GTID is supported on versions 5.7 and 8.0 and only on servers that support storage up to 16 TB (General purpose storage v2).
+
+### Requirements
+
+- The source server version must be at least MySQL version 5.6.
+- The source and replica server versions must be the same. For example, both must be MySQL version 5.6 or both must be MySQL version 5.7.
+- Each table must have a primary key.
+- The source server should use the MySQL InnoDB engine.
+- The user must have permissions to configure binary logging and create new users on the source server.
+- If the source server has SSL enabled, ensure the SSL CA certificate provided for the domain has been included in the `mysql.az_replication_change_master` or `mysql.az_replication_change_master_with_gtid` stored procedure. Refer to the following [examples](./how-to-data-in-replication.md#link-source-and-replica-servers-to-start-data-in-replication) and the `master_ssl_ca` parameter.
+- Ensure that the source server's IP address has been added to the Azure Database for MySQL replica server's firewall rules. Update firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md).
+- Ensure that the machine hosting the source server allows both inbound and outbound traffic on port 3306.
+- Ensure that the source server has a **public IP address**, that DNS is publicly accessible, or that the source server has a fully qualified domain name (FQDN).
+
+## Next steps
+
+- Learn how to [set up data-in replication](how-to-data-in-replication.md)
+- Learn about [replicating in Azure with read replicas](concepts-read-replicas.md)
+- Learn about how to [migrate data with minimal downtime using DMS](how-to-migrate-online.md)
mysql Concepts Database Application Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-database-application-development.md
+
+ Title: Application development - Azure Database for MySQL
+description: Introduces design considerations that a developer should follow when writing application code to connect to Azure Database for MySQL
+++++ Last updated : 3/18/2020++
+# Application development overview for Azure Database for MySQL
++
+This article discusses design considerations that a developer should follow when writing application code to connect to Azure Database for MySQL.
+
+> [!TIP]
+> For a tutorial showing you how to create a server, create a server-based firewall, view server properties, create a database, and connect and query by using MySQL Workbench and mysql.exe, see [Design your first Azure Database for MySQL database](tutorial-design-database-using-portal.md)
+
+## Language and platform
+There are code samples available for various programming languages and platforms. You can find links to the code samples at:
+[Connectivity libraries used to connect to Azure Database for MySQL](concepts-connection-libraries.md)
+
+## Tools
+Azure Database for MySQL uses the MySQL community version, compatible with MySQL common management tools such as Workbench or MySQL utilities such as mysql.exe, [phpMyAdmin](https://www.phpmyadmin.net/), [Navicat](https://www.navicat.com/products/navicat-for-mysql), [dbForge Studio for MySQL](https://www.devart.com/dbforge/mysql/studio/) and others. You can also use the Azure portal, Azure CLI, and REST APIs to interact with the database service.
+
+## Resource limitations
+Azure Database for MySQL manages the resources available to a server by using two different mechanisms:
+- Resource governance.
+- Enforcement of limits.
+
+## Security
+Azure Database for MySQL provides resources for limiting access, protecting data, configuring users and roles, and monitoring activities on a MySQL database.
+
+## Authentication
+Azure Database for MySQL supports server authentication of users and logins.
+
+## Resiliency
+When a transient error occurs while connecting to a MySQL database, your code should retry the call. We recommend that the retry logic use backoff logic so that it does not overwhelm the database with multiple clients retrying simultaneously.
+
+- Code samples: For code samples that illustrate retry logic, see samples for the language of your choice at: [Connectivity libraries used to connect to Azure Database for MySQL](concepts-connection-libraries.md)
+
+## Managing connections
+Database connections are a limited resource, so we recommend sensible use of connections when accessing your MySQL database to achieve better performance.
+- Access the database by using connection pooling or persistent connections.
+- Access the database by using short connection life span.
+- Use retry logic in your application at the point of the connection attempt to catch failures that result when the number of concurrent connections has reached the maximum allowed. In the retry logic, set a short delay, and then wait a random amount of time before the additional connection attempts.
mysql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-firewall-rules.md
+
+ Title: Firewall rules - Azure Database for MySQL
+description: Learn about using firewall rules to enable connections to your Azure Database for MySQL server.
+++++ Last updated : 07/17/2020++
+# Azure Database for MySQL server firewall rules
++
+Firewalls prevent all access to your database server until you specify which computers have permission. The firewall grants access to the server based on the originating IP address of each request.
+
+To configure a firewall, create firewall rules that specify ranges of acceptable IP addresses. You can create firewall rules at the server level.
+
+**Firewall rules:** These rules enable clients to access your entire Azure Database for MySQL server, that is, all the databases within the same logical server. Server-level firewall rules can be configured by using the Azure portal or Azure CLI commands. To create server-level firewall rules, you must be the subscription owner or a subscription contributor.
+
+## Firewall overview
+All database access to your Azure Database for MySQL server is by default blocked by the firewall. To begin using your server from another computer, you need to specify one or more server-level firewall rules to enable access to your server. Use the firewall rules to specify which IP address ranges from the Internet to allow. Access to the Azure portal website itself is not impacted by the firewall rules.
+
+Connection attempts from the Internet and Azure must first pass through the firewall before they can reach your Azure Database for MySQL database, as shown in the following diagram:
++
+## Connecting from the Internet
+Server-level firewall rules apply to all databases on the Azure Database for MySQL server.
+
+If the IP address of the request is within one of the ranges specified in the server-level firewall rules, then the connection is granted.
+
+If the IP address of the request is outside the ranges specified in any of the server-level firewall rules, then the connection request fails.
+
+## Connecting from Azure
+It is recommended that you find the outgoing IP address of any application or service and explicitly allow access to those individual IP addresses or ranges. For example, you can find the outgoing IP address of an Azure App Service or use a public IP tied to a virtual machine or other resource (see below for info on connecting with a virtual machine's private IP over service endpoints).
+
+If a fixed outgoing IP address isn't available for your Azure service, you can consider enabling connections from all Azure datacenter IP addresses. This setting can be enabled from the Azure portal by setting the **Allow access to Azure services** option to **ON** from the **Connection security** pane and hitting **Save**. From the Azure CLI, a firewall rule setting with starting and ending address equal to 0.0.0.0 does the equivalent. If the connection attempt is not allowed, the request does not reach the Azure Database for MySQL server.
+
+> [!IMPORTANT]
+> The **Allow access to Azure services** option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
+>
++
+### Connecting from a VNet
+To connect securely to your Azure Database for MySQL server from a VNet, consider using [VNet service endpoints](./concepts-data-access-and-security-vnet.md).
+
+## Programmatically managing firewall rules
+In addition to the Azure portal, firewall rules can be managed programmatically by using the Azure CLI. See also [Create and manage Azure Database for MySQL firewall rules using Azure CLI](./how-to-manage-firewall-using-cli.md).
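+
+For example, the following Azure CLI sketch (the resource group, server, and rule names are placeholders) creates a rule for a client IP range, plus the special 0.0.0.0 rule that's equivalent to **Allow access to Azure services**:
+
+```bash
+# Allow a specific client IP address range.
+az mysql server firewall-rule create --resource-group myresourcegroup \
+  --server-name mydemoserver --name AllowMyClientRange \
+  --start-ip-address 203.0.113.0 --end-ip-address 203.0.113.255
+
+# Start and end address 0.0.0.0 allows connections from all Azure services.
+az mysql server firewall-rule create --resource-group myresourcegroup \
+  --server-name mydemoserver --name AllowAllAzureIPs \
+  --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
+```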
+
+## Troubleshooting firewall issues
+Consider the following points when access to Azure Database for MySQL doesn't behave as you expect:
+
+* **Changes to the allow list have not taken effect yet:** There may be as much as a five-minute delay for changes to the Azure Database for MySQL Server firewall configuration to take effect.
+
+* **The login is not authorized or an incorrect password was used:** If a login does not have permissions on the Azure Database for MySQL server or the password used is incorrect, the connection to the Azure Database for MySQL server is denied. Creating a firewall setting only provides clients with an opportunity to attempt connecting to your server; each client must provide the necessary security credentials.
+
+* **Dynamic IP address:** If you have an Internet connection with dynamic IP addressing and you are having trouble getting through the firewall, you can try one of the following solutions:
+
+ * Ask your Internet Service Provider (ISP) for the IP address range assigned to your client computers that access the Azure Database for MySQL server, and then add the IP address range as a firewall rule.
+
+ * Get static IP addressing instead for your client computers, and then add the IP addresses as firewall rules.
+
+* **Server's IP appears to be public:** Connections to the Azure Database for MySQL server are routed through a publicly accessible Azure gateway. However, the actual server IP is protected by the firewall. For more information, visit the [connectivity architecture article](concepts-connectivity-architecture.md).
+
+* **Cannot connect from Azure resource with allowed IP:** Check whether the **Microsoft.Sql** service endpoint is enabled for the subnet you are connecting from. If **Microsoft.Sql** is enabled, it indicates that you only want to use [VNet service endpoint rules](concepts-data-access-and-security-vnet.md) on that subnet.
+
+ For example, you may see the following error if you are connecting from an Azure VM in a subnet that has **Microsoft.Sql** enabled but has no corresponding VNet rule:
+ `FATAL: Client from Azure Virtual Networks is not allowed to access the server`
+
+* **Firewall rule is not available for IPv6 format:** The firewall rules must be in IPv4 format. If you specify firewall rules in IPv6 format, you'll see a validation error.
+
+## Next steps
+
+* [Create and manage Azure Database for MySQL firewall rules using the Azure portal](./how-to-manage-firewall-using-portal.md)
+* [Create and manage Azure Database for MySQL firewall rules using Azure CLI](./how-to-manage-firewall-using-cli.md)
+* [VNet service endpoints in Azure Database for MySQL](./concepts-data-access-and-security-vnet.md)
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-high-availability.md
+
+ Title: High availability - Azure Database for MySQL
+description: This article provides information on high availability in Azure Database for MySQL
+++++ Last updated : 7/7/2020++
+# High availability in Azure Database for MySQL
++
+The Azure Database for MySQL service provides a guaranteed high level of availability with the financially backed service level agreement (SLA) of [99.99%](https://azure.microsoft.com/support/legal/sla/mysql) uptime. Azure Database for MySQL provides high availability during planned events, such as user-initiated compute scaling operations, and during unplanned events, such as underlying hardware, software, or network failures. Azure Database for MySQL can quickly recover from most critical circumstances, ensuring virtually no application downtime when you use this service.
+
+Azure Database for MySQL is suitable for running mission critical databases that require high uptime. Built on Azure architecture, the service has inherent high availability, redundancy, and resiliency capabilities to mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components.
+
+## Components in Azure Database for MySQL
+
+| **Component** | **Description**|
+| | -- |
+| <b>MySQL Database Server | Azure Database for MySQL provides security, isolation, resource safeguards, and fast restart capability for database servers. These capabilities enable operations such as scaling and database server recovery after an outage to complete in 60-120 seconds, depending on the transactional activity on the database. <br/> Data modifications in the database server typically occur in the context of a database transaction. All database changes are recorded synchronously in the form of write ahead logs (ib_log) on Azure Storage, which is attached to the database server. During the database [checkpoint](https://dev.mysql.com/doc/refman/5.7/en/innodb-checkpoints.html) process, data pages from the database server memory are also flushed to the storage. |
+| <b>Remote Storage | All MySQL physical data files and log files are stored on Azure Storage, which is architected to store three copies of data within a region to ensure data redundancy, availability, and reliability. The storage layer is also independent of the database server. It can be detached from a failed database server and reattached to a new database server within 60 seconds. Also, Azure Storage continuously monitors for any storage faults. If a block corruption is detected, it is automatically fixed by instantiating a new storage copy. |
+| <b>Gateway | The Gateway acts as a database proxy, routing all client connections to the database server. |
+
+## Planned downtime mitigation
+Azure Database for MySQL is architected to provide high availability during planned downtime operations.
++
+Here are some planned maintenance scenarios:
+
+| **Scenario** | **Description**|
+| | -- |
+| <b>Compute scale up/down | When the user performs a compute scale-up or scale-down operation, a new database server is provisioned using the scaled compute configuration. On the old database server, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, and then it is shut down. The storage is then detached from the old database server and attached to the new database server. When the client application retries the connection, or tries to make a new connection, the Gateway directs the connection request to the new database server.|
+| <b>Scaling Up Storage | Scaling up the storage is an online operation and does not interrupt the database server.|
+| <b>New Software Deployment (Azure) | New feature rollouts or bug fixes automatically happen as part of the service's planned maintenance. For more information, refer to the [documentation](concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
+| <b>Minor version upgrades | Azure Database for MySQL automatically patches database servers to the minor version determined by Azure. Patching happens as part of the service's planned maintenance. During planned maintenance, there can be database server restarts or failovers, which might lead to brief unavailability of the database servers for end users. Azure Database for MySQL servers run in containers, so database server restarts are typically quick and expected to complete in 60-120 seconds. The entire planned maintenance event, including each server restart, is carefully monitored by the engineering team. Failover time depends on database recovery time, which can cause the database to take longer to come online if there is heavy transactional activity on the server at the time of failover. To avoid longer restart times, we recommend avoiding long-running transactions (bulk loads) during planned maintenance events. For more information, refer to the [documentation](concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
++
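+As a sketch, a compute scale-up that triggers the sequence described in the table can be initiated from the Azure CLI (the server and resource group names are placeholders):
+
+```bash
+# Scale the server to 8 vCores on the General Purpose tier. The Gateway
+# redirects client connections to the newly provisioned server when it's ready.
+az mysql server update --resource-group myresourcegroup \
+  --name mydemoserver --sku-name GP_Gen5_8
+```
+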
+## Unplanned downtime mitigation
+
+Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware faults, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in 60-120 seconds. The remote storage is automatically attached to the new database server. The MySQL engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While unplanned downtime can't be avoided, Azure Database for MySQL mitigates it by automatically performing recovery operations at both the database server and storage layers without requiring human intervention.
+++
+### Unplanned downtime: failure scenarios and service recovery
+Here are some failure scenarios and how Azure Database for MySQL automatically recovers:
+
+| **Scenario** | **Automatic recovery** |
+| - | - |
+| <B>Database server failure | If the database server is down because of some underlying hardware fault, active connections are dropped, and any inflight transactions are aborted. A new database server is automatically deployed, and the remote data storage is attached to the new database server. After the database recovery is complete, clients can connect to the new database server through the Gateway. <br /> <br /> Applications using the MySQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. When the application retries, the Gateway transparently redirects the connection to the newly created database server. |
+| <B>Storage failure | Applications don't see any impact from storage-related issues such as a disk failure or a physical block corruption. Because the data is stored in three copies, a surviving copy serves the data. Block corruptions are automatically corrected. If a copy of data is lost, a new copy of the data is automatically created. |
+
+Here are some failure scenarios that require user action to recover:
+
+| **Scenario** | **Recovery plan** |
+| - | - |
+| <b> Region failure | Failure of a region is a rare event. However, if you need protection from a region failure, you can configure one or more read replicas in other regions for disaster recovery (DR). (See [this article](how-to-read-replicas-portal.md) about creating and managing read replicas for details). In the event of a region-level failure, you can manually promote the read replica configured on the other region to be your production database server. |
+| <b> Logical/user errors | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](concepts-backup.md) (PITR) by restoring and recovering the data to the time just before the error occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server to a new instance, export the table(s) via [mysqldump](concepts-migrate-dump-restore.md), and then use [restore](concepts-migrate-dump-restore.md#restore-your-mysql-database-using-command-line-or-mysql-workbench) to restore those tables into your database. |
+++
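+As a sketch of these user-driven recovery paths (the names and the timestamp are placeholders), the Azure CLI supports both cross-region read replicas and point-in-time restore:
+
+```bash
+# Create a read replica in another region for disaster recovery.
+az mysql server replica create --name mydemoserver-replica \
+  --resource-group myresourcegroup --source-server mydemoserver \
+  --location westus2
+
+# Restore the server to a new instance at a point in time before a user error.
+az mysql server restore --resource-group myresourcegroup \
+  --name mydemoserver-restored --source-server mydemoserver \
+  --restore-point-in-time "2020-07-07T13:10:00Z"
+```
+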
+## Summary
+
+Azure Database for MySQL provides fast restart capability of database servers, redundant storage, and efficient routing from the Gateway. For additional data protection, you can configure backups to be geo-replicated, and also deploy one or more read replicas in other regions. With inherent high availability capabilities, Azure Database for MySQL protects your databases from most common outages, and offers an industry-leading, financially backed [99.99% uptime SLA](https://azure.microsoft.com/support/legal/sla/mysql). All these availability and reliability capabilities make Azure an ideal platform to run your mission-critical applications.
+
+## Next steps
+- Learn about [Azure regions](../../availability-zones/az-overview.md)
+- Learn about [handling transient connectivity errors](concepts-connectivity.md)
+- Learn how to [replicate your data with read replicas](how-to-read-replicas-portal.md)
mysql Concepts Infrastructure Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-infrastructure-double-encryption.md
+
+ Title: Infrastructure double encryption - Azure Database for MySQL
+description: Learn about using infrastructure double encryption to add a second layer of encryption with service-managed keys.
+++++ Last updated : 6/30/2020++
+# Azure Database for MySQL Infrastructure double encryption
++
+Azure Database for MySQL uses storage [encryption of data at-rest](concepts-security.md#at-rest) with Microsoft's managed keys. Data, including backups, is encrypted on disk, and this encryption is always on and can't be disabled. The encryption uses a FIPS 140-2 validated cryptographic module and an AES 256-bit cipher for Azure Storage encryption.
+
+Infrastructure double encryption adds a second layer of encryption using service-managed keys. It uses a FIPS 140-2 validated cryptographic module, but with a different encryption algorithm, which provides an additional layer of protection for your data at rest. The key used in infrastructure double encryption is also managed by the Azure Database for MySQL service. Infrastructure double encryption isn't enabled by default, because the additional layer of encryption can have a performance impact.
+
+> [!NOTE]
+> Like data encryption at rest, this feature is supported only on "General Purpose storage v2 (support up to 16TB)" storage available in the General Purpose and Memory Optimized pricing tiers. Refer to [Storage concepts](concepts-pricing-tiers.md#storage) for more details. For other limitations, refer to the [limitations](concepts-infrastructure-double-encryption.md#limitations) section.
+
+Infrastructure layer encryption has the benefit of being implemented at the layer closest to the storage device or network wires. Azure Database for MySQL implements the two layers of encryption using service-managed keys. Although still technically in the service layer, this encryption is very close to the hardware that stores the data at rest. You can still optionally enable data encryption at rest using a [customer-managed key](concepts-data-encryption-mysql.md) for the provisioned MySQL server.
+
+Implementation at the infrastructure layer also supports a diversity of keys. Infrastructure must be aware of different clusters of machines and networks. As such, different keys are used to minimize the blast radius of infrastructure attacks and of a variety of hardware and network failures.
+
+> [!NOTE]
+> Using Infrastructure double encryption will have 5-10% impact on the throughput of your Azure Database for MySQL server due to the additional encryption process.
+
+## Benefits
+
+Infrastructure double encryption for Azure Database for MySQL provides the following benefits:
+
+1. **Additional diversity of crypto implementation** - The planned move to hardware-based encryption will further diversify the implementations by adding a hardware-based implementation to the existing software-based implementation.
+2. **Implementation errors** - Two layers of encryption at the infrastructure layer protect against errors in caching or memory management in higher layers that expose plaintext data. The two layers also protect against errors in the implementation of the encryption in general.
+
+The combination of these provides strong protection against common threats and weaknesses used to attack cryptography.
+
+## Supported scenarios with infrastructure double encryption
+
+The encryption capabilities that are provided by Azure Database for MySQL can be used together. Below is a summary of the various scenarios that you can use:
+
+| **Scenario** | **Default encryption** | **Infrastructure double encryption** | **Data encryption using customer-managed keys** |
+|:--|:--:|:--:|:--:|
+| 1 | *Yes* | *No* | *No* |
+| 2 | *Yes* | *Yes* | *No* |
+| 3 | *Yes* | *No* | *Yes* |
+| 4 | *Yes* | *Yes* | *Yes* |
+
+> [!Important]
+> - Scenarios 2 and 4 can introduce a 5-10 percent drop in throughput, based on the workload type, for the Azure Database for MySQL server due to the additional layer of infrastructure encryption.
+> - Configuring infrastructure double encryption for Azure Database for MySQL is only allowed during server creation. Once the server is provisioned, you can't change the storage encryption. However, you can still enable data encryption using customer-managed keys for a server created with or without infrastructure double encryption.
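+
+As a sketch, a new server with infrastructure double encryption can be provisioned with the Azure CLI; all names and the password below are placeholders:
+
+```bash
+# Provision a new server with infrastructure double encryption enabled.
+az mysql server create --resource-group myresourcegroup --name mydemoserver \
+  --location westus2 --admin-user myadmin --admin-password '<secure-password>' \
+  --sku-name GP_Gen5_2 --infrastructure-encryption Enabled
+```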
+
+## Limitations
+
+For Azure Database for MySQL, support for infrastructure double encryption has a few limitations:
+
+* Support for this functionality is limited to **General Purpose** and **Memory Optimized** pricing tiers.
+* This feature is only supported in regions and on servers that support general purpose storage v2 (up to 16 TB). For the list of Azure regions supporting storage up to 16 TB, refer to the storage section in the documentation [here](concepts-pricing-tiers.md#storage)
+
+ > [!NOTE]
+ > - For all new MySQL servers created in the [Azure regions](concepts-pricing-tiers.md#storage) supporting general purpose storage v2, encryption with customer-managed keys is **available**. A Point-In-Time Restored (PITR) server or read replica won't qualify, though in theory they're 'new'.
+ > - To validate whether your provisioned server is on general purpose storage v2, go to the pricing tier blade in the portal and check the maximum storage size supported by your provisioned server. If you can move the slider up to 4TB, your server is on general purpose storage v1 and won't support encryption with customer-managed keys. However, the data is encrypted using service-managed keys at all times. Reach out to AskAzureDBforMySQL@service.microsoft.com if you have any questions.
+++
+## Next steps
+
+Learn how to [set up Infrastructure double encryption for Azure database for MySQL](how-to-double-encryption.md).
mysql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-limits.md
+
+ Title: Limitations - Azure Database for MySQL
+description: This article describes limitations in Azure Database for MySQL, such as number of connection and storage engine options.
+++++ Last updated : 10/1/2020+
+# Limitations in Azure Database for MySQL
++
+The following sections describe capacity, storage engine support, privilege support, data manipulation statement support, and functional limits in the database service. Also see [general limitations](https://dev.mysql.com/doc/mysql-reslimits-excerpt/5.6/en/limits.html) applicable to the MySQL database engine.
+
+## Server parameters
+
+> [!NOTE]
+> If you are looking for min/max values for server parameters like `max_connections` and `innodb_buffer_pool_size`, this information has moved to the **[server parameters](./concepts-server-parameters.md)** article.
+
+Azure Database for MySQL supports tuning the values of server parameters. The min and max values of some parameters (for example, `max_connections`, `join_buffer_size`, `query_cache_size`) are determined by the pricing tier and vCores of the server. Refer to [server parameters](./concepts-server-parameters.md) for more information about these limits.
+
+Upon initial deployment, an Azure Database for MySQL server includes system tables for time zone information, but these tables aren't populated. The time zone tables can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench. Refer to the [Azure portal](how-to-server-parameters.md#working-with-the-time-zone-parameter) or [Azure CLI](how-to-configure-server-parameters-using-cli.md#working-with-the-time-zone-parameter) articles for how to call the stored procedure and set the global or session-level time zones.
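+
+For example, after connecting with the mysql client (the connection values are placeholders), you can populate the tables and set a global time zone in one pass; this is a sketch of the approach described in the linked articles:
+
+```bash
+# Populate the time zone tables, then set the global time zone.
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
+  -e "CALL mysql.az_load_timezone(); SET GLOBAL time_zone = 'US/Pacific';"
+```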
+
+Password plugins such as "validate_password" and "caching_sha2_password" aren't supported by the service.
+
+## Storage engines
+
+MySQL supports many storage engines. The following lists show which storage engines Azure Database for MySQL supports and which it doesn't:
+
+### Supported
+- [InnoDB](https://dev.mysql.com/doc/refman/5.7/en/innodb-introduction.html)
+- [MEMORY](https://dev.mysql.com/doc/refman/5.7/en/memory-storage-engine.html)
+
+### Unsupported
+- [MyISAM](https://dev.mysql.com/doc/refman/5.7/en/myisam-storage-engine.html)
+- [BLACKHOLE](https://dev.mysql.com/doc/refman/5.7/en/blackhole-storage-engine.html)
+- [ARCHIVE](https://dev.mysql.com/doc/refman/5.7/en/archive-storage-engine.html)
+- [FEDERATED](https://dev.mysql.com/doc/refman/5.7/en/federated-storage-engine.html)
+
+## Privileges & data manipulation support
+
+Many server parameters and settings can inadvertently degrade server performance or negate ACID properties of the MySQL server. To maintain the service integrity and SLA at a product level, this service doesn't expose multiple roles.
+
+The MySQL service doesn't allow direct access to the underlying file system. Some data manipulation commands aren't supported.
+
+### Unsupported
+
+The following are unsupported:
+- DBA role: Restricted. Alternatively, you can use the administrator user (created during new server creation), which allows you to perform most DDL and DML statements.
+- SUPER privilege: Similarly, [SUPER privilege](https://dev.mysql.com/doc/refman/5.7/en/privileges-provided.html#priv_super) is restricted.
+- DEFINER: Requires super privileges to create and is restricted. If importing data using a backup, remove the `CREATE DEFINER` commands manually or by using the `--skip-definer` option when performing a [mysqlpump](https://dev.mysql.com/doc/refman/5.7/en/mysqlpump.html) dump.
+- System databases: The [mysql system database](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) is read-only and used to support various PaaS functionality. You can't make changes to the `mysql` system database.
+- `SELECT ... INTO OUTFILE`: Not supported in the service.
+- `LOAD_FILE(file_name)`: Not supported in the service.
+- [BACKUP_ADMIN](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_backup-admin) privilege: Granting BACKUP_ADMIN privilege is not supported for taking backups using any [utility tools](./how-to-decide-on-right-migration-tools.md).
+
+### Supported
+- `LOAD DATA INFILE` is supported, but the `[LOCAL]` parameter must be specified and directed to a UNC path (Azure storage mounted through SMB). Additionally, if you're using MySQL client version 8.0 or later, you need to include the `--local-infile=1` parameter in your connection string.
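+
+  A minimal sketch of such a load (the server, table, and file names are placeholders):
+
+  ```bash
+  # --local-infile=1 is required on MySQL client version 8.0 and later.
+  mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
+    --local-infile=1 \
+    -e "LOAD DATA LOCAL INFILE 'data.csv' INTO TABLE testdb.mytable FIELDS TERMINATED BY ',';"
+  ```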
++
+## Functional limitations
+
+### Scale operations
+- Dynamic scaling to and from the Basic pricing tiers is currently not supported.
+- Decreasing server storage size is not supported.
+
+### Major version upgrades
+- [Major version upgrade is supported for v5.6 to v5.7 upgrades only](how-to-major-version-upgrade.md). Upgrades to v8.0 aren't supported yet.
+
+### Point-in-time-restore
+- When using the PITR feature, the new server is created with the same configurations as the server it is based on.
+- Restoring a deleted server is not supported.
+
+### VNet service endpoints
+- Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
+
+### Storage size
+- Refer to [pricing tiers](concepts-pricing-tiers.md#storage) for the storage size limits per pricing tier.
+
+## Current known issues
+- The MySQL server instance displays the wrong server version after a connection is established. To get the correct server instance engine version, use the `SELECT VERSION();` command.
+
+## Next steps
+- [What's available in each service tier](concepts-pricing-tiers.md)
+- [Supported MySQL database versions](concepts-supported-versions.md)
mysql Concepts Migrate Dbforge Studio For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-dbforge-studio-for-mysql.md
+
+ Title: Use dbForge Studio for MySQL to migrate a MySQL database to Azure Database for MySQL
+description: The article demonstrates how to migrate to Azure Database for MySQL by using dbForge Studio for MySQL.
+++++ Last updated : 03/03/2021+
+# Migrate data to Azure Database for MySQL with dbForge Studio for MySQL
++
+Looking to move your MySQL databases to Azure Database for MySQL? Consider using the migration tools in dbForge Studio for MySQL. With it, database transfer can be configured, saved, edited, automated, and scheduled.
+
+To complete the examples in this article, you'll need to download and install [dbForge Studio for MySQL](https://www.devart.com/dbforge/mysql/studio/).
+
+## Connect to Azure Database for MySQL
+
+1. In dbForge Studio for MySQL, select **New Connection** from the **Database** menu.
+
+1. Provide a host name and sign-in credentials.
+
+1. Select **Test Connection** to check the configuration.
++
+## Migrate with the Backup and Restore functionality
+
+You can choose from many options when using dbForge Studio for MySQL to migrate databases to Azure. If you need to move the entire database, it's best to use the **Backup and Restore** functionality.
+
+In this example, we migrate the *sakila* database from MySQL server to Azure Database for MySQL. The logic behind using the **Backup and Restore** functionality is to create a backup of the MySQL database and then restore it in Azure Database for MySQL.
+
+### Back up the database
+
+1. In dbForge Studio for MySQL, select **Backup Database** from the **Backup and Restore** menu. The **Database Backup Wizard** appears.
+
+1. On the **Backup content** tab of the **Database Backup Wizard**, select database objects you want to back up.
+
+1. On the **Options** tab, configure the backup process to fit your requirements.
+
+ :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/back-up-wizard-options.png" alt-text="Screenshot showing the options pane of the Backup wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/back-up-wizard-options.png":::
+
+1. Select **Next**, and then specify error processing behavior and logging options.
+
+1. Select **Backup**.
+
+### Restore the database
+
+1. In dbForge Studio for MySQL, connect to Azure Database for MySQL. [Refer to the instructions](#connect-to-azure-database-for-mysql).
+
+1. Select **Restore Database** from the **Backup and Restore** menu. The **Database Restore Wizard** appears.
+
+1. In the **Database Restore Wizard**, select a file with a database backup.
+
+ :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/restore-step-1.png" alt-text="Screenshot showing the Restore step of the Database Restore wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/restore-step-1.png":::
+
+1. Select **Restore**.
+
+1. Check the result.
+
+## Migrate with the Copy Databases functionality
+
+The **Copy Databases** functionality in dbForge Studio for MySQL is similar to **Backup and Restore**, except that it doesn't require two steps to migrate a database. It also lets you transfer two or more databases at once.
+
+>[!NOTE]
+> The **Copy Databases** functionality is only available in the Enterprise edition of dbForge Studio for MySQL.
+
+In this example, we migrate the *world_x* database from MySQL server to Azure Database for MySQL.
+
+To migrate a database using the Copy Databases functionality:
+
+1. In dbForge Studio for MySQL, select **Copy Databases** from the **Database** menu.
+
+1. On the **Copy Databases** tab, specify the source and target connection. Also select the databases to be migrated.
+
+ We enter the Azure MySQL connection and select the *world_x* database. Select the green arrow to start the process.
+
+1. Check the result.
+
+You'll see that the *world_x* database has successfully appeared in Azure MySQL.
++
+## Migrate a database with schema and data comparison
+
+You can choose from many options when using dbForge Studio for MySQL to migrate databases, schemas, and/or data to Azure. If you need to move selective tables from a MySQL database to Azure, it's best to use the **Schema Comparison** and the **Data Comparison** functionality.
+
+In this example, we migrate the *world* database from MySQL server to Azure Database for MySQL.
+
+The logic behind this approach is to create an empty database in Azure Database for MySQL and synchronize it with the source MySQL database. We first use the **Schema Comparison** tool, and next we use the **Data Comparison** functionality. These steps ensure that the MySQL schemas and data are accurately moved to Azure.
+
+To complete this exercise, you'll first need to [connect to Azure Database for MySQL](#connect-to-azure-database-for-mysql) and create an empty database.
+
+### Schema synchronization
+
+1. On the **Comparison** menu, select **New Schema Comparison**. The **New Schema Comparison Wizard** appears.
+
+1. Choose your source and target, and then specify the schema comparison options. Select **Compare**.
+
+1. In the comparison results grid that appears, select objects for synchronization. Select the green arrow button to open the **Schema Synchronization Wizard**.
+
+1. Walk through the steps of the wizard to configure synchronization. Select **Synchronize** to deploy the changes.
+
+ :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/schema-sync-wizard.png" alt-text="Screenshot showing the schema synchronization wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/schema-sync-wizard.png":::
+
+### Data Comparison
+
+1. On the **Comparison** menu, select **New Data Comparison**. The **New Data Comparison Wizard** appears.
+
+1. Choose your source and target, and then specify the data comparison options. Change mappings if necessary, and then select **Compare**.
+
+1. In the comparison results grid that appears, select objects for synchronization. Select the green arrow button to open the **Data Synchronization Wizard**.
+
+ :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/data-comp-result.png" alt-text="Screenshot showing the results of the data comparison." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/data-comp-result.png":::
+
+1. Walk through the steps of the wizard to configure synchronization. Select **Synchronize** to deploy the changes.
+
+1. Check the result.
+
+ :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/data-sync-result.png" alt-text="Screenshot showing the results of the Data Synchronization wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/data-sync-result.png":::
+
+## Next steps
+- [MySQL overview](overview.md)
mysql Concepts Migrate Dump Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-dump-restore.md
+
+ Title: Migrate using dump and restore - Azure Database for MySQL
+description: This article explains two common ways to back up and restore databases in your Azure Database for MySQL, using tools such as mysqldump, MySQL Workbench, and PHPMyAdmin.
+++++ Last updated : 10/30/2020++
+# Migrate your MySQL database to Azure Database for MySQL using dump and restore
++
+This article explains two common ways to back up and restore databases in your Azure Database for MySQL:
+- Dump and restore from the command-line (using mysqldump)
+- Dump and restore using PHPMyAdmin
+
+You can also refer to the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide) for detailed information and use cases about migrating databases to Azure Database for MySQL. The guide will help you successfully plan and execute a MySQL migration to Azure.
+
+## Before you begin
+To step through this how-to guide, you need:
+- An Azure Database for MySQL server, created by following [Create Azure Database for MySQL server - Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md)
+- The [mysqldump](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) command-line utility installed on a machine
+- [MySQL Workbench](https://dev.mysql.com/downloads/workbench/) or another third-party MySQL tool to run dump and restore commands
+
+> [!TIP]
+> If you're looking to migrate databases larger than 1 TB, consider using community tools like **mydumper/myloader**, which support parallel export and import. Learn [How to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
++
+## Common use-cases for dump and restore
+
+Most common use-cases are:
+
+- **Moving from another managed service provider** - Most managed service providers may not provide access to the physical storage file for security reasons, so logical backup and restore is the only option to migrate.
+- **Migrating from an on-premises environment or virtual machine** - Azure Database for MySQL doesn't support restore of physical backups, which makes logical backup and restore the ONLY approach.
+- **Moving your backup storage from locally redundant to geo-redundant storage** - In Azure Database for MySQL, configuring locally redundant or geo-redundant storage for backup is only allowed during server creation. Once the server is provisioned, you can't change the backup storage redundancy option. To move your backup storage from locally redundant storage to geo-redundant storage, dump and restore is the ONLY option.
+- **Migrating from alternative storage engines to InnoDB** - Azure Database for MySQL supports only the InnoDB storage engine, and therefore doesn't support alternative storage engines. If your tables are configured with other storage engines, convert them into the InnoDB engine format before migration to Azure Database for MySQL.
+
+ For example, if you have a WordPress or WebApp using the MyISAM tables, first convert those tables by migrating into InnoDB format before restoring to Azure Database for MySQL. Use the clause `ENGINE=InnoDB` to set the engine used when creating a new table, then transfer the data into the compatible table before the restore.
+
+ ```sql
+ INSERT INTO innodb_table SELECT * FROM myisam_table ORDER BY primary_key_columns
+ ```
+> [!Important]
+> - To avoid any compatibility issues, ensure the same version of MySQL is used on the source and destination systems when dumping databases. For example, if your existing MySQL server is version 5.7, then you should migrate to Azure Database for MySQL configured to run version 5.7. The `mysql_upgrade` command does not function in an Azure Database for MySQL server, and is not supported.
+> - If you need to upgrade across MySQL versions, first dump or export your lower version database into a higher version of MySQL in your own environment. Then run `mysql_upgrade`, before attempting migration into an Azure Database for MySQL.
+
+## Performance considerations
+To optimize performance, take notice of these considerations when dumping large databases:
+- Use the `skip-triggers` option in mysqldump when dumping databases, to exclude triggers from dump files and avoid the trigger commands firing during the data restore.
+- Use the `single-transaction` option to set the transaction isolation mode to REPEATABLE READ and send a START TRANSACTION SQL statement to the server before dumping data. Dumping many tables within a single transaction causes some extra storage to be consumed during restore. The `single-transaction` option and the `lock-tables` option are mutually exclusive, because LOCK TABLES causes any pending transactions to be committed implicitly. To dump large tables, combine the `single-transaction` option with the `quick` option.
+- Use the `extended-insert` multiple-row syntax that includes several VALUE lists. This results in a smaller dump file and speeds up inserts when the file is reloaded.
+- Use the `order-by-primary` option in mysqldump when dumping databases, so that the data is scripted in primary key order.
+- Use the `disable-keys` option in mysqldump when dumping data, to disable foreign key constraints before load. Disabling foreign key checks provides performance gains. Enable the constraints and verify the data after the load to ensure referential integrity.
+- Use partitioned tables when appropriate.
+- Load data in parallel. Avoid too much parallelism that would cause you to hit a resource limit, and monitor resources using the metrics available in the Azure portal.
+- Use the `defer-table-indexes` option in mysqlpump when dumping databases, so that index creation happens after the table data is loaded.
+- Use the `skip-definer` option in mysqlpump to omit DEFINER and SQL SECURITY clauses from the CREATE statements for views and stored procedures. When you reload the dump file, it creates objects that use the default DEFINER and SQL SECURITY values.
+- Copy the backup files to an Azure blob/store and perform the restore from there, which should be a lot faster than performing the restore across the Internet.
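+
+As a sketch combining several of the mysqldump options above (the user and database names are placeholders):
+
+```bash
+# Dump testdb with a consistent snapshot, multi-row inserts, primary-key
+# ordering, disabled keys, and without triggers (re-create triggers after restore).
+mysqldump --single-transaction --quick --skip-triggers --extended-insert \
+  --order-by-primary --disable-keys -u myadmin -p testdb > testdb_backup.sql
+```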
+
+## Create a database on the target Azure Database for MySQL server
+Create an empty database on the target Azure Database for MySQL server where you want to migrate the data. Use a tool such as MySQL Workbench or mysql.exe to create the database. The database can have the same name as the database that contained the dumped data, or you can create a database with a different name.
+
+To get connected, locate the connection information in the **Overview** of your Azure Database for MySQL.
++
+Add the connection information into your MySQL Workbench.
++
+## Preparing the target Azure Database for MySQL server for fast data loads
+To prepare the target Azure Database for MySQL server for faster data loads, change the following server parameters and configuration:
+- max_allowed_packet - Set to 1073741824 (that is, 1 GB) to prevent any overflow issues due to long rows.
+- slow_query_log - Set to OFF to turn off the slow query log. This eliminates the overhead caused by slow query logging during data loads.
+- query_store_capture_mode - Set to NONE to turn off the Query Store. This eliminates the overhead caused by sampling activities by Query Store.
+- innodb_buffer_pool_size - Scale up the server to the 32 vCore Memory Optimized SKU from the Pricing tier of the portal during migration to increase the innodb_buffer_pool_size. Innodb_buffer_pool_size can only be increased by scaling up compute for the Azure Database for MySQL server.
+- innodb_io_capacity & innodb_io_capacity_max - Change to 9000 from the Server parameters in the Azure portal to improve I/O utilization and optimize for migration speed.
+- innodb_read_io_threads & innodb_write_io_threads - Change to 4 from the Server parameters in the Azure portal to improve the speed of migration.
+- Scale up the storage tier - The IOPS for an Azure Database for MySQL server increase progressively with the storage tier. For faster loads, you may want to increase the storage tier to increase the IOPS provisioned. Remember that storage can only be scaled up, not down.
+
+Once the migration is complete, you can revert the server parameters and compute tier configuration to their previous values.
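+
+For example, the parameter changes above can be applied with the Azure CLI before the load and reverted afterward; the resource names are placeholders:
+
+```bash
+# Raise max_allowed_packet and turn off the slow query log for the load.
+az mysql server configuration set --resource-group myresourcegroup \
+  --server-name mydemoserver --name max_allowed_packet --value 1073741824
+az mysql server configuration set --resource-group myresourcegroup \
+  --server-name mydemoserver --name slow_query_log --value OFF
+```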
+
+## Dump and restore using mysqldump utility
+
+### Create a backup file from the command-line using mysqldump
+To back up an existing MySQL database on the local on-premises server or in a virtual machine, run the following command:
+```bash
+$ mysqldump --opt -u [uname] -p[pass] [dbname] > [backupfile.sql]
+```
+
+The parameters to provide are:
+- [uname] Your database username
+- [pass] The password for your database (note there is no space between -p and the password)
+- [dbname] The name of your database
+- [backupfile.sql] The filename for your database backup
+- [--opt] A mysqldump option that enables a group of settings for a faster, more compact dump (it's on by default)
+
+For example, back up a database named 'testdb' on your MySQL server with the username 'testuser' and no password to a file testdb_backup.sql. The dump file contains all the SQL statements needed to re-create the database. First, make sure that the username 'testuser' has at least the SELECT privilege for dumped tables, SHOW VIEW for dumped views, TRIGGER for dumped triggers, and LOCK TABLES if the --single-transaction option is not used:
+
+```sql
+GRANT SELECT, LOCK TABLES, SHOW VIEW ON *.* TO 'testuser'@'hostname' IDENTIFIED BY 'password';
+```
+Now run mysqldump to create the backup of the `testdb` database:
+
+```bash
+$ mysqldump -u root -p testdb > testdb_backup.sql
+```
+To back up specific tables in your database, list the table names separated by spaces. For example, to back up only the table1 and table2 tables from 'testdb', follow this example:
+
+```bash
+$ mysqldump -u root -p testdb table1 table2 > testdb_tables_backup.sql
+```
+To back up more than one database at once, use the --databases switch and list the database names separated by spaces.
+```bash
+$ mysqldump -u root -p --databases testdb1 testdb3 testdb5 > testdb135_backup.sql
+```
+
+### Restore your MySQL database using command-line or MySQL Workbench
+Once you have created the target database, you can use the mysql command or MySQL Workbench to restore the data into the newly created database from the dump file.
+```bash
+mysql -h [hostname] -u [uname] -p[pass] [db_to_restore] < [backupfile.sql]
+```
+In this example, restore the data into the newly created database on the target Azure Database for MySQL server.
+
+Here's an example of how to use **mysql** for **Single Server**:
+
+```bash
+$ mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p testdb < testdb_backup.sql
+```
+Here's an example of how to use **mysql** for **Flexible Server**:
+
+```bash
+$ mysql -h mydemoserver.mysql.database.azure.com -u myadmin -p testdb < testdb_backup.sql
+```
++
+## Dump and restore using PHPMyAdmin
+Follow these steps to dump and restore a database using phpMyAdmin.
+
+> [!NOTE]
+> For Single Server, the username must be in the format 'username@servername'. For Flexible Server, you can just use 'username'. If you use 'username@servername' for Flexible Server, the connection will fail.
+
+### Export with phpMyAdmin
+To export, you can use the common tool phpMyAdmin, which you may already have installed locally in your environment. To export your MySQL database using phpMyAdmin:
+1. Open phpMyAdmin.
+2. Select your database. Click the database name in the list on the left.
+3. Click the **Export** link. A new page appears to view the dump of database.
+4. In the Export area, click the **Select All** link to choose the tables in your database.
+5. In the SQL options area, click the appropriate options.
+6. Click the **Save as file** option and the corresponding compression option and then click the **Go** button. A dialog box should appear prompting you to save the file locally.
+
+### Import using PHPMyAdmin
+Importing your database is similar to exporting. Do the following actions:
+1. Open phpMyAdmin.
+2. In the phpMyAdmin setup page, click **Add** to add your Azure Database for MySQL server. Provide the connection details and login information.
+3. Create an appropriately named database and select it on the left of the screen. To overwrite the existing database, click the database name, select all the check boxes beside the table names, and select **Drop** to delete the existing tables.
+4. Click the **SQL** link to show the page where you can type in SQL commands, or upload your SQL file.
+5. Use the **browse** button to find the database file.
+6. Click the **Go** button to execute the SQL commands and re-create your database.
+
+## Known Issues
+For known issues, tips, and tricks, we recommend that you check out our [techcommunity blog](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/tips-and-tricks-in-using-mysqldump-and-mysql-restore-to-azure/ba-p/916912).
+
+## Next steps
+- [Connect applications to Azure Database for MySQL](./how-to-connection-string.md).
+- For more information about migrating databases to Azure Database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
+- If you're looking to migrate databases larger than 1 TB, consider using community tools like **mydumper/myloader**, which support parallel export and import. Learn [How to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
mysql Concepts Migrate Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-import-export.md
+
+ Title: Import and export - Azure Database for MySQL
+description: This article explains common ways to import and export databases in Azure Database for MySQL, by using tools such as MySQL Workbench.
+++++ Last updated : 10/30/2020++
+# Migrate your MySQL database by using import and export
++
+This article explains two common approaches to importing and exporting data to an Azure Database for MySQL server by using MySQL Workbench.
+
+For detailed and comprehensive migration guidance, see the [migration guide resources](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
+
+For other migration scenarios, see the [Database Migration Guide](https://datamigration.microsoft.com/).
+
+## Prerequisites
+
+Before you begin migrating your MySQL database, you need to:
+
+- Create an [Azure Database for MySQL server by using the Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md).
+- Download and install [MySQL Workbench](https://dev.mysql.com/downloads/workbench/) or another third-party MySQL tool for importing and exporting.
+
+## Create a database on the Azure Database for MySQL server
+
+Create an empty database on the Azure Database for MySQL server by using MySQL Workbench, Toad, or Navicat. The database can have the same name as the database that contains the dumped data, or you can create a database with a different name.
+
+To get connected, do the following:
+
+1. In the Azure portal, look for the connection information on the **Overview** pane of your Azure Database for MySQL.
+
+ :::image type="content" source="./media/concepts-migrate-import-export/1-server-overview-name-login.png" alt-text="Screenshot of the Azure Database for MySQL server connection information in the Azure portal.":::
+
+1. Add the connection information to MySQL Workbench.
+
+ :::image type="content" source="./media/concepts-migrate-import-export/2-setup-new-connection.png" alt-text="Screenshot of the MySQL Workbench connection string.":::
+
+## Determine when to use import and export techniques
+
+> [!TIP]
+> For scenarios where you want to dump and restore the entire database, use the [dump and restore](concepts-migrate-dump-restore.md) approach instead.
+
+In the following scenarios, use MySQL tools to import and export databases into your MySQL database. For other tools, go to the "Migration Methods" section (page 22) of the [MySQL to Azure Database migration guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
+
+- When you need to selectively choose a few tables to import from an existing MySQL database into your Azure MySQL database, it's best to use the import and export technique. By doing so, you can omit any unneeded tables from the migration to save time and resources. For example, use the `--include-tables` or `--exclude-tables` switch with [mysqlpump](https://dev.mysql.com/doc/refman/5.7/en/mysqlpump.html#option_mysqlpump_include-tables), and the `--tables` switch with [mysqldump](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_tables).
+- When you're moving database objects other than tables, explicitly create those objects. Include constraints (primary key, foreign key, and indexes), views, functions, procedures, triggers, and any other database objects that you want to migrate.
+- When you're migrating data from external data sources other than a MySQL database, create flat files and import them by using [mysqlimport](https://dev.mysql.com/doc/refman/5.7/en/mysqlimport.html) (see the sketch after this list).
+
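+A minimal mysqlimport sketch (the host, user, database, and file names are placeholders; mysqlimport derives the table name from the file name, so *data.csv* loads into the table named *data*):
+
+```bash
+mysqlimport --local --host=mydemoserver.mysql.database.azure.com \
+  --user=myadmin@mydemoserver -p --fields-terminated-by=',' testdb data.csv
+```
+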
+> [!Important]
+> Both Single Server and Flexible Server support only the InnoDB storage engine. Make sure that all tables in the database use the InnoDB storage engine when you're loading data into your Azure database for MySQL.
+>
+> If your source database uses another storage engine, convert to the InnoDB engine before you migrate the database. For example, if you have a WordPress or web app that uses the MyISAM engine, first convert the tables by migrating the data into InnoDB tables. Use the clause `ENGINE=INNODB` to set the engine for creating a table, and then transfer the data into the compatible table before the migration.
+
+ ```sql
+ INSERT INTO innodb_table SELECT * FROM myisam_table ORDER BY primary_key_columns
+ ```
+
+## Performance recommendations for import and export
+
+For optimal data import and export performance, we recommend that you do the following:
+
+- Create clustered indexes and primary keys before you load data. Load the data in primary key order.
+- Delay the creation of secondary indexes until after the data is loaded.
+- Disable foreign key constraints before you load the data. Disabling foreign key checks provides significant performance gains. Enable the constraints and verify the data after the load to ensure referential integrity.
+- Load data in parallel. Avoid too much parallelism that would cause you to hit a resource limit, and monitor resources by using the metrics available in the Azure portal.
+- Use partitioned tables when appropriate.
+
+## Import and export data by using MySQL Workbench
+
+There are two ways to export and import data in MySQL Workbench: from the object browser context menu or from the Navigator pane. Each method serves a different purpose.
+
+> [!NOTE]
+> If you're adding a connection to MySQL Single Server or Flexible Server on MySQL Workbench, do the following:
+>
+> - For MySQL Single Server, make sure that the user name is in the format *\<username@servername>*.
+> - For MySQL Flexible Server, use *\<username>* only. If you use *\<username@servername>* to connect, the connection will fail.
+
+### Run the table data export and import wizards from the object browser context menu
++
+The table data wizards support import and export operations by using CSV and JSON files. The wizards include several configuration options, such as separators, column selection, and encoding selection. You can run each wizard against local or remotely connected MySQL servers. The import action includes table, column, and type mapping.
+
+To access these wizards from the object browser context menu, right-click a table, and then select **Table Data Export Wizard** or **Table Data Import Wizard**.
+
+#### The table data export wizard
+
+To export a table to a CSV file:
+
+1. Right-click the table of the database to be exported.
+1. Select **Table Data Export Wizard**. Select the columns to be exported, row offset (if any), and count (if any).
+1. On the **Select data for export** pane, select **Next**. Select the file path, CSV, or JSON file type. Also select the line separator, method of enclosing strings, and field separator.
+1. On the **Select output file location** pane, select **Next**.
+1. On the **Export data** pane, select **Next**.
+
+#### The table data import wizard
+
+To import a table from a CSV file:
+
+1. Right-click the table of the database to be imported.
+1. Look for and select the CSV file to be imported, and then select **Next**.
+1. Select the destination table (new or existing), select or clear the **Truncate table before import** check box, and then select **Next**.
+1. Select the encoding and the columns to be imported, and then select **Next**.
+1. On the **Import data** pane, select **Next**. The wizard imports the data.
+
+### Run the SQL data export and import wizards from the Navigator pane
+
+Use a wizard to export or import SQL data that's generated from MySQL Workbench or from the mysqldump command. You can access the wizards from the **Navigator** pane or you can select **Server** from the main menu.
+
+#### Export data
++
+You can use the **Data Export** pane to export your MySQL data.
+
+1. In MySQL Workbench, on the **Navigator** pane, select **Data Export**.
+
+1. On the **Data Export** pane, select each schema that you want to export.
+
+ For each schema, you can select specific schema objects or tables to export. Configuration options include export to a project folder or a self-contained SQL file, dump stored routines and events, or skip table data.
+
+ Alternatively, use **Export a Result Set** to export a specific result set in the SQL editor to another format, such as CSV, JSON, HTML, and XML.
+
+1. Select the database objects to export, and configure the related options.
+1. Select **Refresh** to load the current objects.
+1. Optionally, select **Advanced Options** at the upper right to refine the export operation. For example, add table locks, use `replace` instead of `insert` statements, and quote identifiers with backtick characters.
+1. Select **Start Export** to begin the export process.
++
+#### Import data
++
+You can use the **Data Import** pane to import or restore exported data from the data export operation or from the mysqldump command.
+
+1. In MySQL Workbench, on the **Navigator** pane, select **Data Import/Restore**.
+1. Select the project folder or self-contained SQL file, select the schema to import into, or select the **New** button to define a new schema.
+1. Select **Start Import** to begin the import process.
+
+## Next steps
+
+- For another migration approach, see [Migrate your MySQL database to an Azure database for MySQL by using dump and restore](concepts-migrate-dump-restore.md).
+- For more information about migrating databases to an Azure database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
mysql Concepts Migrate Mydumper Myloader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-mydumper-myloader.md
+
+ Title: Migrate large databases to Azure Database for MySQL using mydumper/myloader
+description: This article explains two common ways to back up and restore databases in your Azure Database for MySQL using the mydumper/myloader tools
+++++ Last updated : 06/18/2021++
+# Migrate large databases to Azure Database for MySQL using mydumper/myloader
++
+Azure Database for MySQL is a managed service that you use to run, manage, and scale highly available MySQL databases in the cloud. To migrate MySQL databases larger than 1 TB to Azure Database for MySQL, consider using community tools such as [mydumper/myloader](https://centminmod.com/mydumper.html), which provide the following benefits:
+
+* Parallelism, to help reduce the migration time.
+* Better performance, by avoiding expensive character set conversion routines.
+* An output format with separate files for tables, metadata, and so on, which makes it easy to view and parse the data.
+* Consistency, by maintaining the snapshot across all threads.
+* Accurate primary and replica log positions.
+* Easy management, with support for Perl Compatible Regular Expressions (PCRE) to specify database and table inclusions and exclusions.
+* Migration of schema and data together, with no need to handle them separately as with other logical migration tools.
+
+This quickstart shows you how to install mydumper/myloader and use the tools to back up and restore a MySQL database.
+
+## Prerequisites
+
+Before you begin migrating your MySQL database, you need to:
+
+1. Create an Azure Database for MySQL server by using the [Azure portal](../flexible-server/quickstart-create-server-portal.md).
+
+2. Create an Azure VM running Linux by using the [Azure portal](../../virtual-machines/linux/quick-create-portal.md) (preferably Ubuntu).
+ > [!Note]
+ > Prior to installing the tools, consider the following points:
+ >
+   > * If your source is on-premises and has a high bandwidth connection to Azure (using ExpressRoute), consider installing the tool on an Azure VM.
+   > * If you have limited bandwidth between the source and target, consider installing mydumper near the source and myloader near the target server. You can use a tool such as **[AzCopy](../../storage/common/storage-use-azcopy-v10.md)** to move the data from on-premises or other cloud solutions to Azure.
+
+3. Install the MySQL client by doing the following steps:
+
+ * Update the package index on the Azure VM running Linux by running the following command:
+ ```bash
+ $ sudo apt update
+ ```
+ * Install the mysql client package by running the following command:
+ ```bash
+ $ sudo apt install mysql-client
+ ```
+
+## Install mydumper/myloader
+
+To install mydumper/myloader, do the following steps.
+
+1. Depending on your OS distribution, download the appropriate mydumper/myloader package by running the following command:
+ ```bash
+ $ wget https://github.com/maxbube/mydumper/releases/download/v0.10.1/mydumper_0.10.1-2.$(lsb_release -cs)_amd64.deb
+ ```
+
+ > [!Note]
+   > `$(lsb_release -cs)` helps to identify your distribution.
+
+2. To install the .deb package for mydumper, run the following command:
+
+   ```bash
+   $ sudo dpkg -i mydumper_0.10.1-2.$(lsb_release -cs)_amd64.deb
+   ```
+
+   > [!Tip]
+   > The command you use to install the package differs based on your Linux distribution, because the installers differ. mydumper/myloader is available for the following distributions: Fedora, Red Hat, Ubuntu, Debian, CentOS, openSUSE, and macOS. For more information, see **[How to install mydumper](https://github.com/maxbube/mydumper#how-to-install-mydumpermyloader)**.
+
+## Create a backup using mydumper
+
+* To create a backup using mydumper, run the following command:
+
+ ```bash
+ $ mydumper --host=<servername> --user=<username> --password=<Password> --outputdir=./backup --rows=100000 --compress --build-empty-files --threads=16 --compress-protocol --trx-consistency-only --ssl --regex '^(<Db_name>\.)' -L mydumper-logs.txt
+ ```
+
+This command uses the following variables:
+
+* **--host:** The host to connect to.
+* **--user:** Username with the necessary privileges.
+* **--password:** User password.
+* **--rows:** Try to split tables into chunks of this many rows.
+* **--outputdir:** Directory to dump output files to.
+* **--regex:** Regular expression for database matching.
+* **--trx-consistency-only:** Transactional consistency only.
+* **--threads:** Number of threads to use. The default is 4. We recommend a value equal to twice the number of vCores of the computer.
+
+   >[!Note]
+   >For more information on other options you can use with mydumper, run the command **mydumper --help**. For more details, see the [mydumper/myloader documentation](https://centminmod.com/mydumper.html).<br>
+   >To dump multiple databases in parallel, you can modify the regex variable as shown in this example: **--regex '^(DbName1\.|DbName2\.)'**
+
+## Restore your database using myloader
+
+* To restore the database that you backed up using mydumper, run the following command:
+
+ ```bash
+ $ myloader --host=<servername> --user=<username> --password=<Password> --directory=./backup --queries-per-transaction=500 --threads=16 --compress-protocol --ssl --verbose=3 -e 2>myloader-logs.txt
+ ```
+
+This command uses the following variables:
+
+* **--host:** The host to connect to.
+* **--user:** Username with the necessary privileges.
+* **--password:** User password.
+* **--directory:** Location where the backup is stored.
+* **--queries-per-transaction:** Number of queries per transaction. We recommend a value of no more than 500.
+* **--threads:** Number of threads to use. The default is 4. We recommend a value equal to twice the number of vCores of the computer.
+
+> [!Tip]
+> For more information on other options you can use with myloader, run the command **myloader --help**.
+
+After the database is restored, we recommend that you validate data consistency between the source and target databases.
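+
+For example, a minimal spot check compares per-table row counts and checksums; run the same statements against both the source and target servers and compare the results (the database and table names are placeholders):
+
+```sql
+-- Run on both the source and the target server, then compare the output.
+SELECT COUNT(*) FROM <db_name>.<table_name>;
+CHECKSUM TABLE <db_name>.<table_name>;
+```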
+
+> [!Note]
+> Submit any issues or feedback regarding the mydumper/myloader tools **[here](https://github.com/maxbube/mydumper/issues)**.
+
+## Next steps
+
+* Learn more about the [mydumper/myloader project in GitHub](https://github.com/maxbube/mydumper).
+* Learn [How to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
+* [Tutorial: Minimal Downtime Migration of Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server](how-to-migrate-single-flexible-minimum-downtime.md)
+* Learn more about Data-in replication [Replicate data into Azure Database for MySQL Flexible Server](../flexible-server/concepts-data-in-replication.md) and [Configure Azure Database for MySQL Flexible Server Data-in replication](../flexible-server/how-to-data-in-replication.md)
+* Commonly encountered [migration errors](./how-to-troubleshoot-common-errors.md)
mysql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-monitoring.md
+
+ Title: Monitoring - Azure Database for MySQL
+description: This article describes the metrics for monitoring and alerting for Azure Database for MySQL, including CPU, storage, and connection statistics.
++++++ Last updated : 10/21/2020+
+# Monitoring in Azure Database for MySQL
+
+Monitoring data about your servers helps you troubleshoot and optimize for your workload. Azure Database for MySQL provides various metrics that give insight into the behavior of your server.
+
+## Metrics
+
+All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics. For step-by-step guidance, see [How to set up alerts](how-to-alert-on-metric.md). Other tasks include setting up automated actions, performing advanced analytics, and archiving history. For more information, see the [Azure Metrics Overview](../../azure-monitor/data-platform.md).
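+
+As an illustration, the following Azure CLI sketch creates an alert that fires when average CPU utilization exceeds 80 percent; the resource group and server names are placeholders:
+
+```bash
+# Look up the server's resource ID, then create a metric alert on cpu_percent.
+SERVER_ID=$(az mysql server show --resource-group <resource_group> --name <server_name> --query id --output tsv)
+az monitor metrics alert create \
+    --name "mysql-high-cpu" \
+    --resource-group <resource_group> \
+    --scopes "$SERVER_ID" \
+    --condition "avg cpu_percent > 80" \
+    --description "Average CPU usage is over 80%"
+```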
+
+### List of metrics
+
+These metrics are available for Azure Database for MySQL:
+
+|Metric|Metric Display Name|Unit|Description|
+|||||
+|cpu_percent|CPU percent|Percent|The percentage of CPU in use.|
+|memory_percent|Memory percent|Percent|The percentage of memory in use.|
+|io_consumption_percent|IO percent|Percent|The percentage of IO in use. (Not applicable for Basic tier servers)|
+|storage_percent|Storage percentage|Percent|The percentage of storage used out of the server's maximum.|
+|storage_used|Storage used|Bytes|The amount of storage in use. The storage used by the service may include the database files, transaction logs, and the server logs.|
+|serverlog_storage_percent|Server Log storage percent|Percent|The percentage of server log storage used out of the server's maximum server log storage.|
+|serverlog_storage_usage|Server Log storage used|Bytes|The amount of server log storage in use.|
+|serverlog_storage_limit|Server Log storage limit|Bytes|The maximum server log storage for this server.|
+|storage_limit|Storage limit|Bytes|The maximum storage for this server.|
+|active_connections|Active Connections|Count|The number of active connections to the server.|
+|connections_failed|Failed Connections|Count|The number of failed connections to the server.|
+|seconds_behind_master|Replication lag in seconds|Count|The number of seconds the replica server is lagging against the source server. (Not applicable for Basic tier servers)|
+|network_bytes_egress|Network Out|Bytes|Network Out across active connections.|
+|network_bytes_ingress|Network In|Bytes|Network In across active connections.|
+|backup_storage_used|Backup Storage Used|Bytes|The amount of backup storage used. This metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained in the [concepts article](concepts-backup.md). For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.|
+
+## Server logs
+
+You can enable slow query and audit logging on your server. These logs are also available through Azure Diagnostic Logs in Azure Monitor logs, Event Hubs, and Storage Account. To learn more about logging, visit the [audit logs](concepts-audit-logs.md) and [slow query logs](concepts-server-logs.md) articles.
+
+## Query Store
+
+[Query Store](concepts-query-store.md) is a feature that keeps track of query performance over time including query runtime statistics and wait events. The feature persists query runtime performance information in the **mysql** schema. You can control the collection and storage of data via various configuration knobs.
+
+## Query Performance Insight
+
+[Query Performance Insight](concepts-query-performance-insight.md) works in conjunction with Query Store to provide visualizations accessible from the Azure portal. These charts enable you to identify key queries that impact performance. Query Performance Insight is accessible in the **Intelligent Performance** section of your Azure Database for MySQL server's portal page.
+
+## Performance Recommendations
+
+The [Performance Recommendations](concepts-performance-recommendations.md) feature identifies opportunities to improve workload performance. Performance Recommendations provides you with recommendations for creating new indexes that have the potential to improve the performance of your workloads. To produce index recommendations, the feature takes into consideration various database characteristics, including its schema and the workload as reported by Query Store. After implementing any performance recommendation, customers should test performance to evaluate the impact of those changes.
+
+## Planned maintenance notification
+
+[Planned maintenance notifications](./concepts-planned-maintenance-notification.md) allow you to receive alerts for upcoming planned maintenance to your Azure Database for MySQL. These notifications are integrated with [Service Health's](../../service-health/overview.md) planned maintenance and allow you to view all scheduled maintenance for your subscriptions in one place. It also helps to scale the notification to the right audiences for different resource groups, as you may have different contacts responsible for different resources. You will receive the notification about the upcoming maintenance 72 hours before the event.
+
+Learn more about how to set up notifications in the [planned maintenance notifications](./concepts-planned-maintenance-notification.md) document.
+
+## Next steps
+
+- See [How to set up alerts](how-to-alert-on-metric.md) for guidance on creating an alert on a metric.
+- For more information on how to access and export metrics using the Azure portal, REST API, or CLI, see the [Azure Metrics Overview](../../azure-monitor/data-platform.md).
+- Read our blog on [best practices for monitoring your server](https://azure.microsoft.com/blog/best-practices-for-alerting-on-metrics-with-azure-database-for-mysql-monitoring/).
+- Learn more about [planned maintenance notifications](./concepts-planned-maintenance-notification.md) in Azure Database for MySQL - Single Server
mysql Concepts Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-performance-recommendations.md
+
+ Title: Performance recommendations - Azure Database for MySQL
+description: This article describes the Performance Recommendation feature in Azure Database for MySQL
+++++ Last updated : 6/3/2020+
+# Performance Recommendations in Azure Database for MySQL
++
+**Applies to:** Azure Database for MySQL 5.7, 8.0
+
+The Performance Recommendations feature analyzes your databases to create customized suggestions for improved performance. To produce the recommendations, the analysis looks at various database characteristics including schema. Enable [Query Store](concepts-query-store.md) on your server to fully utilize the Performance Recommendations feature. If performance schema is OFF, turning on Query Store enables performance_schema and a subset of performance schema instruments required for the feature. After implementing any performance recommendation, you should test performance to evaluate the impact of those changes.
+
+## Permissions
+
+**Owner** or **Contributor** permissions are required to run an analysis by using the Performance Recommendations feature.
+
+## Performance recommendations
+
+The [Performance Recommendations](concepts-performance-recommendations.md) feature analyzes workloads across your server to identify indexes with the potential to improve performance.
+
+Open **Performance Recommendations** from the **Intelligent Performance** section of the menu bar on the Azure portal page for your MySQL server.
++
+Select **Analyze** and choose a database to begin the analysis. Depending on your workload, the analysis may take several minutes to complete. Once the analysis is done, a notification appears in the portal. The analysis performs a deep examination of your database, so we recommend that you perform it during off-peak periods.
+
+The **Recommendations** window shows a list of any recommendations that were found, along with the related query ID that generated each recommendation. With the query ID, you can use the [mysql.query_store](concepts-query-store.md#mysqlquery_store) view to learn more about the query.
++
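+For example, a minimal sketch of looking up the query behind a recommendation (replace the placeholder with the query ID shown in the portal):
+
+```sql
+SELECT * FROM mysql.query_store WHERE query_id = '<query_id>';
+```
+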
+Recommendations are not automatically applied. To apply the recommendation, copy the query text and run it from your client of choice. Remember to test and monitor to evaluate the recommendation.
+
+## Recommendation types
+
+### Index recommendations
+
+*Create Index* recommendations suggest new indexes to speed up the most frequently run or time-consuming queries in the workload. This recommendation type requires [Query Store](concepts-query-store.md) to be enabled. Query Store collects query information and provides the detailed query runtime and frequency statistics that the analysis uses to make the recommendation.
+
+### Query recommendations
+
+Query recommendations suggest optimizations and rewrites for queries in the workload. By identifying MySQL query anti-patterns and fixing them syntactically, the performance of time-consuming queries can be improved. This recommendation type requires Query Store to be enabled. Query Store collects query information and provides the detailed query runtime and frequency statistics that the analysis uses to make the recommendation.
+
+## Next steps
+- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for MySQL.
mysql Concepts Planned Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-planned-maintenance-notification.md
+
+ Title: Planned maintenance notification - Azure Database for MySQL - Single Server
+description: This article describes the Planned maintenance notification feature in Azure Database for MySQL - Single Server
+++++ Last updated : 10/21/2020+
+# Planned maintenance notification in Azure Database for MySQL - Single Server
++
+Learn how to prepare for planned maintenance events on your Azure Database for MySQL.
+
+## What is a planned maintenance?
+
+The Azure Database for MySQL service performs automated patching of the underlying hardware, OS, and database engine. The patching includes new service features, security updates, and software updates. For the MySQL engine, minor version upgrades are automatic and included as part of the patching cycle. No user action or configuration settings are required for patching. Patches are tested extensively and rolled out by using safe deployment practices.
+
+A planned maintenance is a maintenance window when these service updates are deployed to servers in a given Azure region. During planned maintenance, a notification event is created to inform customers when the service update is deployed in the Azure region hosting their servers. The minimum duration between two planned maintenance windows is 30 days. You receive a notification of the next maintenance window 72 hours in advance.
+
+## Planned maintenance - duration and customer impact
+
+A planned maintenance for a given Azure region is typically expected to run for 15 hours. The window also includes buffer time to execute a rollback plan if necessary. During planned maintenance, there can be database server restarts or failovers, which might lead to brief unavailability of the database servers for end users. Azure Database for MySQL servers run in containers, so database server restarts are typically quick and expected to complete in 60-120 seconds. The entire planned maintenance event, including each server restart, is carefully monitored by the engineering team. The server failover time depends on the database recovery time, which can cause the database to take longer to come online if you have heavy transactional activity on the server at the time of failover. To avoid a longer restart time, we recommend that you avoid any long-running transactions (bulk loads) during planned maintenance events.
+
+In summary, while the planned maintenance event runs for 15 hours, the individual server impact generally lasts about 60 seconds, depending on the transactional activity on the server. A notification is sent 72 calendar hours before planned maintenance starts, and another one is sent while maintenance is in progress for a given region.
+
+## How can I get notified of planned maintenance?
+
+You can utilize the planned maintenance notifications feature to receive alerts for an upcoming planned maintenance event. You will receive a notification about the upcoming maintenance 72 calendar hours before the event and another one while maintenance is in progress for a given region.
+
+### Planned maintenance notification
+
+> [!IMPORTANT]
+> Planned maintenance notifications are currently available in preview in all regions **except** West Central US.
+
+**Planned maintenance notifications** allow you to receive alerts for upcoming planned maintenance events for your Azure Database for MySQL. These notifications are integrated with [Service Health's](../../service-health/overview.md) planned maintenance and allow you to view all scheduled maintenance for your subscriptions in one place. They also help to scale the notification to the right audiences for different resource groups, as you may have different contacts responsible for different resources. You will receive the notification about the upcoming maintenance 72 calendar hours before the event.
+
+We make every attempt to provide 72 hours' notice for all **planned maintenance notification** events. However, in cases of critical or security patches, notifications might be sent closer to the event or be omitted.
+
+You can either check the planned maintenance notifications in the Azure portal or configure alerts to receive notifications.
+
+### Check planned maintenance notification from Azure portal
+
+1. In the [Azure portal](https://portal.azure.com), select **Service Health**.
+2. Select the **Planned Maintenance** tab.
+3. Select the **Subscription**, **Region**, and **Service** for which you want to check the planned maintenance notification.
+
+### To receive planned maintenance notification
+
+1. In the [portal](https://portal.azure.com), select **Service Health**.
+2. In the **Alerts** section, select **Health alerts**.
+3. Select **+ Add service health alert**.
+4. Fill out the required fields.
+5. For **Event type**, select **Planned maintenance** or **Select all**.
+6. In **Action groups**, define how you want to receive the alert (get an email, trigger a logic app, and so on).
+7. Ensure that **Enable rule upon creation** is set to **Yes**.
+8. Select **Create alert rule** to complete the alert.
+
+For detailed steps on how to create **service health alerts**, refer to [Create activity log alerts on service notifications](../../service-health/alerts-activity-log-service-notifications-portal.md).
+
+## Can I cancel or postpone planned maintenance?
+
+Maintenance is needed to keep your server secure, stable, and up to date. The planned maintenance event can't be canceled or postponed. After the notification is sent to a given Azure region, the patching schedule can't be changed for any individual server in that region. The patch is rolled out for the entire region at once. The Azure Database for MySQL - Single Server service is designed for cloud native applications that don't require granular control or customization of the service. If you want the ability to schedule maintenance for your servers, consider [Flexible servers](../flexible-server/overview.md).
+
+## Are all the Azure regions patched at the same time?
+
+No. Azure regions are patched during region-specific deployment windows, which generally stretch from 5 PM to 8 AM (local time the next day) in a given Azure region. Geo-paired Azure regions are patched on different days. For high availability and business continuity of database servers, we recommend using [cross region read replicas](./concepts-read-replicas.md#cross-region-replication).
+
+## Retry logic
+
+A transient error, also known as a transient fault, is an error that will resolve itself. [Transient errors](./concepts-connectivity.md#transient-errors) can occur during maintenance. Most of these events are automatically mitigated by the system in less than 60 seconds. Transient errors should be handled using [retry logic](./concepts-connectivity.md#handling-transient-errors).
++
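+As an illustration only, the following bash sketch retries a simple connectivity check with a linear backoff; production applications should use the retry support built into their database driver or framework, and all connection values here are placeholders:
+
+```bash
+# Retry a connectivity check up to 5 times, waiting a little longer after each failure.
+for attempt in 1 2 3 4 5; do
+    if mysql -h <servername> -u <username> -p'<password>' -e "SELECT 1;"; then
+        echo "Connected on attempt $attempt"
+        break
+    fi
+    sleep $((attempt * 5))
+done
+```
+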
+## Next steps
+
+- For any questions or suggestions you might have about working with Azure Database for MySQL, send an email to the Azure Database for MySQL Team at AskAzureDBforMySQL@service.microsoft.com
+- See [How to set up alerts](how-to-alert-on-metric.md) for guidance on creating an alert on a metric.
+- [Troubleshoot connection issues to Azure Database for MySQL - Single Server](how-to-troubleshoot-common-connection-issues.md)
+- [Handle transient errors and connect efficiently to Azure Database for MySQL - Single Server](concepts-connectivity.md)
mysql Concepts Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-pricing-tiers.md
+
+ Title: Pricing tiers - Azure Database for MySQL
+description: Learn about the various pricing tiers for Azure Database for MySQL including compute generations, storage types, storage size, vCores, memory, and backup retention periods.
+++++ Last updated : 02/07/2022++
+# Azure Database for MySQL pricing tiers
++
+You can create an Azure Database for MySQL server in one of three different pricing tiers: Basic, General Purpose, and Memory Optimized. The pricing tiers are differentiated by the amount of compute in vCores that can be provisioned, memory per vCore, and the storage technology used to store the data. All resources are provisioned at the MySQL server level. A server can have one or many databases.
+
+| Attribute | **Basic** | **General Purpose** | **Memory Optimized** |
+|:|:-|:--|:|
+| Compute generation | Gen 4, Gen 5 | Gen 4, Gen 5 | Gen 5 |
+| vCores | 1, 2 | 2, 4, 8, 16, 32, 64 |2, 4, 8, 16, 32 |
+| Memory per vCore | 2 GB | 5 GB | 10 GB |
+| Storage size | 5 GB to 1 TB | 5 GB to 16 TB | 5 GB to 16 TB |
+| Database backup retention period | 7 to 35 days | 7 to 35 days | 7 to 35 days |
+
+To choose a pricing tier, use the following table as a starting point.
+
+| Pricing tier | Target workloads |
+|:-|:--|
+| Basic | Workloads that require light compute and I/O performance. Examples include servers used for development or testing or small-scale infrequently used applications. |
+| General Purpose | Most business workloads that require balanced compute and memory with scalable I/O throughput. Examples include servers for hosting web and mobile apps and other enterprise applications.|
+| Memory Optimized | High-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency. Examples include servers for processing real-time data and high-performance transactional or analytical apps.|
+
+> [!NOTE]
+> Dynamic scaling to and from the Basic pricing tier is currently not supported. Basic tier servers can't be scaled up to the General Purpose or Memory Optimized tiers.
+
+After you create a General Purpose or Memory Optimized server, the number of vCores, hardware generation, and pricing tier can be changed up or down within seconds. You also can independently adjust the amount of storage up and the backup retention period up or down with no application downtime. You can't change the backup storage type after a server is created. For more information, see the [Scale resources](#scale-resources) section.
+
+## Compute generations and vCores
+
+Compute resources are provided as vCores, which represent the logical CPU of the underlying hardware. China East 1, China North 1, US DoD Central, and US DoD East utilize Gen 4 logical CPUs that are based on Intel E5-2673 v3 (Haswell) 2.4-GHz processors. All other regions utilize Gen 5 logical CPUs that are based on Intel E5-2673 v4 (Broadwell) 2.3-GHz processors.
+
+## Storage
+
+The storage you provision is the amount of storage capacity available to your Azure Database for MySQL server. The storage is used for the database files, temporary files, transaction logs, and the MySQL server logs. The total amount of storage you provision also defines the I/O capacity available to your server.
+
+Azure Database for MySQL - Single Server supports the following backend storage options for servers.
+
+| Storage type | Basic | General purpose v1 | General purpose v2 |
+|:|:-|:--|:|
+| Storage size | 5 GB to 1 TB | 5 GB to 4 TB | 5 GB to 16 TB |
+| Storage increment size | 1 GB | 1 GB | 1 GB |
+| IOPS | Variable | 3 IOPS/GB<br/>Min 100 IOPS<br/>Max 6,000 IOPS | 3 IOPS/GB<br/>Min 100 IOPS<br/>Max 20,000 IOPS |
+
+>[!NOTE]
+> Basic storage does not provide an IOPS guarantee. In General Purpose storage, the IOPS scale with the provisioned storage size in a 3:1 ratio.
+
+### Basic storage
+Basic storage is the backend storage supporting Basic pricing tier servers. Basic storage leverages Azure standard storage in the backend, where the provisioned IOPS are not guaranteed and latency is variable. The Basic tier is best suited for workloads that require light compute and I/O performance at a low cost, such as development servers or small-scale, infrequently used applications.
+
+### General purpose storage
+General purpose storage is the backend storage supporting General Purpose and Memory Optimized tier servers. In General Purpose storage, the IOPS scale with the provisioned storage size in a 3:1 ratio. There are two generations of general purpose storage, as described below:
+
+#### General purpose storage v1 (supports up to 4 TB)
+General purpose storage v1 is based on legacy storage technology that can support up to 4 TB of storage and 6,000 IOPS per server. General purpose storage v1 is optimized to use memory from the compute nodes running the MySQL engine for local caching and backups. The backup process on general purpose storage v1 reads the data and log files in the memory of the compute nodes and copies them to the target backup storage, for retention up to 35 days. As a result, the memory and I/O consumption of storage during backups is relatively higher.
+
+All Azure regions support general purpose storage v1.
+
+For a General Purpose or Memory Optimized server on general purpose storage v1, we recommend that you:
+
+* Plan the compute SKU tier to account for 10-30% of excess memory for storage caching and backup buffers.
+* Provision 10% higher IOPS than required by the database workload to account for backup I/Os.
+* Alternatively, migrate to general purpose storage v2 (described below), which supports up to 16 TB of storage, if the underlying storage infrastructure is available in your preferred Azure region (see the list below).
+
+#### General purpose storage v2 (supports up to 16 TB of storage)
+General purpose storage v2 is based on the latest storage infrastructure, which can support up to 16 TB of storage and 20,000 IOPS. In a subset of Azure regions where the infrastructure is available, all newly provisioned servers land on general purpose storage v2 by default. General purpose storage v2 does not consume any memory from the compute node of MySQL and provides more predictable I/O latencies compared to general purpose storage v1. Backups on general purpose storage v2 servers are snapshot-based, with no additional I/O overhead. On general purpose storage v2, MySQL server performance is expected to be higher than on general purpose storage v1 for the same provisioned storage and IOPS. There is no additional cost for general purpose storage that supports up to 16 TB of storage. For assistance with migration to 16-TB storage, open a support ticket from the Azure portal.
+
+General purpose storage v2 is supported in the following Azure regions:
+
+| Region | General purpose storage v2 availability |
+| | |
+| Australia East | :heavy_check_mark: |
+| Australia South East | :heavy_check_mark: |
+| Brazil South | :heavy_check_mark: |
+| Canada Central | :heavy_check_mark: |
+| Canada East | :heavy_check_mark: |
+| Central US | :heavy_check_mark: |
+| East US | :heavy_check_mark: |
+| East US 2 | :heavy_check_mark: |
+| East Asia | :heavy_check_mark: |
+| Japan East | :heavy_check_mark: |
+| Japan West | :heavy_check_mark: |
+| Korea Central | :heavy_check_mark: |
+| Korea South | :heavy_check_mark: |
+| North Europe | :heavy_check_mark: |
+| North Central US | :heavy_check_mark: |
+| South Central US | :heavy_check_mark: |
+| Southeast Asia | :heavy_check_mark: |
+| UK South | :heavy_check_mark: |
+| UK West | :heavy_check_mark: |
+| West Central US | :heavy_check_mark: |
+| West US | :heavy_check_mark: |
+| West US 2 | :heavy_check_mark: |
+| West Europe | :heavy_check_mark: |
+| Central India* | :heavy_check_mark: |
+| France Central* | :heavy_check_mark: |
+| UAE North* | :heavy_check_mark: |
+| South Africa North* | :heavy_check_mark: |
+
+> [!Note]
+> \*Regions where Azure Database for MySQL has general purpose storage v2 in public preview. <br />
+> For these Azure regions, you have the option to create servers on both general purpose storage v1 and v2. For servers created with general purpose storage v2 in public preview, the following limitations apply: <br />
+> * Geo-redundant backup isn't supported.<br />
+> * The replica server must be in a region that supports general purpose storage v2. <br />
+
+
+### How can I determine which storage type my server is running on?
+
+You can find the storage type of your server by going to the **Pricing tier** blade in the Azure portal.
+* If the server is provisioned using the Basic SKU, the storage type is Basic storage.
+* If the server is provisioned using a General Purpose or Memory Optimized SKU, the storage type is General Purpose storage.
+ * If the maximum storage that can be provisioned on your server is up to 4-TB, the storage type is General Purpose storage v1.
+ * If the maximum storage that can be provisioned on your server is up to 16-TB, the storage type is General Purpose storage v2.
+
+### Can I move from general purpose storage v1 to general purpose storage v2? If yes, how, and is there any additional cost?
+Yes, migration from general purpose storage v1 to v2 is supported if the underlying storage infrastructure is available in the Azure region of the source server. The migration and v2 storage are available at no additional cost.
+
+### Can I grow storage size after server is provisioned?
+You can add additional storage capacity during and after the creation of the server, and allow the system to grow storage automatically based on the storage consumption of your workload.
+
+>[!IMPORTANT]
+> Storage can only be scaled up, not down.
+
+### Monitoring IO consumption
+You can monitor your I/O consumption in the Azure portal or by using Azure CLI commands. The relevant metrics to monitor are [storage limit, storage percentage, storage used, and IO percent](concepts-monitoring.md). The monitoring metrics for a MySQL server with general purpose storage v1 report the memory and I/O consumed by the MySQL engine, but they may not capture the memory and I/O consumption of the storage layer, which is a limitation.
+
+### Reaching the storage limit
+
+Servers with less than or equal to 100 GB provisioned storage are marked read-only if the free storage is less than 5% of the provisioned storage size. Servers with more than 100 GB provisioned storage are marked read only when the free storage is less than 5 GB.
+
+For example, if you have provisioned 110 GB of storage, and the actual utilization goes over 105 GB, the server is marked read-only. Alternatively, if you have provisioned 5 GB of storage, the server is marked read-only when the free storage reaches less than 256 MB.
+
+While the service attempts to make the server read-only, all new write transaction requests are blocked and existing active transactions will continue to execute. When the server is set to read-only, all subsequent write operations and transaction commits fail. Read queries will continue to work uninterrupted. After you increase the provisioned storage, the server will be ready to accept write transactions again.
+
+We recommend that you turn on storage auto-grow or to set up an alert to notify you when your server storage is approaching the threshold so you can avoid getting into the read-only state. For more information, see the documentation on [how to set up an alert](how-to-alert-on-metric.md).
+
+### Storage auto-grow
+
+Storage auto-grow prevents your server from running out of storage and becoming read-only. If storage auto grow is enabled, the storage automatically grows without impacting the workload. For servers with less than or equal to 100 GB provisioned storage, the provisioned storage size is increased by 5 GB when the free storage is below 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. Maximum storage limits as specified above apply.
+
+For example, if you have provisioned 1000 GB of storage, and the actual utilization goes over 990 GB, the server storage size is increased to 1050 GB. Alternatively, if you have provisioned 10 GB of storage, the storage size is increased to 15 GB when less than 1 GB of storage is free.
+
+Remember that storage can only be scaled up, not down.
+
+## Backup storage
+
+Azure Database for MySQL provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any backup storage you use in excess of this amount is charged in GB per month. For example, if you provision a server with 250 GB of storage, you'll have 250 GB of additional storage available for server backups at no charge. Storage for backups in excess of the 250 GB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/mysql/). To understand the factors influencing backup storage usage and how to monitor and control backup storage cost, refer to the [backup documentation](concepts-backup.md).
+
+## Scale resources
+
+After you create your server, you can independently change the vCores, the hardware generation, the pricing tier (except to and from Basic), the amount of storage, and the backup retention period. You can't change the backup storage type after a server is created. The number of vCores can be scaled up or down. The backup retention period can be scaled up or down from 7 to 35 days. The storage size can only be increased. Scaling of the resources can be done either through the portal or Azure CLI. For an example of scaling by using Azure CLI, see [Monitor and scale an Azure Database for MySQL server by using Azure CLI](../scripts/sample-scale-server.md).
+
+When you change the number of vCores, the hardware generation, or the pricing tier, a copy of the original server is created with the new compute allocation. After the new server is up and running, connections are switched over to the new server. During the moment when the system switches over to the new server, no new connections can be established, and all uncommitted transactions are rolled back. This downtime during scaling can be around 60-120 seconds. The downtime depends on the database recovery time, which can cause the database to take longer to come online if you have heavy transactional activity on the server at the time of the scaling operation. To avoid a longer restart time, we recommend that you perform scaling operations during periods of low transactional activity on the server.
+
+Scaling storage and changing the backup retention period are true online operations. There is no downtime, and your application isn't affected. As IOPS scale with the size of the provisioned storage, you can increase the IOPS available to your server by scaling up storage.
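+
+For example, a minimal Azure CLI sketch of both operations (the resource group, server name, SKU, and storage size are placeholders):
+
+```bash
+# Scale compute to a General Purpose Gen 5 SKU with 8 vCores.
+az mysql server update --resource-group <resource_group> --name <server_name> --sku-name GP_Gen5_8
+
+# Scale up storage (specified in megabytes; storage can only be increased).
+az mysql server update --resource-group <resource_group> --name <server_name> --storage-size 512000
+```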
+
+## Pricing
+
+For the most up-to-date pricing information, see the service [pricing page](https://azure.microsoft.com/pricing/details/mysql/). To see the cost for the configuration you want, the [Azure portal](https://portal.azure.com/#create/Microsoft.MySQLServer) shows the monthly cost on the **Pricing tier** tab based on the options you select. If you don't have an Azure subscription, you can use the Azure pricing calculator to get an estimated price. On the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) website, select **Add items**, expand the **Databases** category, and choose **Azure Database for MySQL** to customize the options.
+
+## Next steps
+
+- Learn how to [create a MySQL server in the portal](how-to-create-manage-server-portal.md).
+- Learn about [service limits](concepts-limits.md).
+- Learn how to [scale out with read replicas](how-to-read-replicas-portal.md).
mysql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-query-performance-insight.md
+
+ Title: Query Performance Insight - Azure Database for MySQL
+description: This article describes the Query Performance Insight feature in Azure Database for MySQL
+++++ Last updated : 01/12/2022+
+# Query Performance Insight in Azure Database for MySQL
++
+**Applies to:** Azure Database for MySQL 5.7, 8.0
+
+Query Performance Insight helps you to quickly identify what your longest running queries are, how they change over time, and what waits are affecting them.
+
+## Common scenarios
+
+### Long running queries
+
+- Identifying longest running queries in the past X hours
+- Identifying top N queries that are waiting on resources
+
+### Wait statistics
+
+- Understanding wait nature for a query
+- Understanding trends for resource waits and where resource contention exists
+
+## Prerequisites
+
+For Query Performance Insight to function, data must exist in the [Query Store](concepts-query-store.md).
+
+## Viewing performance insights
+
+The [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal will surface visualizations on key information from Query Store.
+
+In the portal page of your Azure Database for MySQL server, select **Query Performance Insight** under the **Intelligent Performance** section of the menu bar.
+
+### Long running queries
+
+The **Long running queries** tab shows the top 5 query IDs by average duration per execution, aggregated in 15-minute intervals. You can view more query IDs by selecting from the **Number of Queries** drop-down list. The chart colors may change for a specific query ID when you do this.
+
+> [!Note]
+> Displaying the Query Text is no longer supported and will show as empty. The query text is removed to avoid unauthorized access to the query text or underlying schema which can pose a security risk.
+
+To view the query text, follow these steps:
+
+1. Identify the query_id of the top queries from the Query Performance Insight blade in the Azure portal.
+1. Log in to your Azure Database for MySQL server from MySQL Workbench, the mysql.exe client, or your preferred query tool, and execute the following queries.
+
+```sql
+    SELECT * FROM mysql.query_store WHERE query_id = '<query_id from the Query Performance Insight blade in the Azure portal>'; -- for queries in Query Store
+    SELECT * FROM mysql.query_store_wait_stats WHERE query_id = '<query_id from the Query Performance Insight blade in the Azure portal>'; -- for wait statistics
+```
+
+You can click and drag in the chart to narrow down to a specific time window. Alternatively, use the zoom in and out icons to view a smaller or larger time period respectively.
++
+### Wait statistics
+
+> [!NOTE]
+> Wait statistics are meant for troubleshooting query performance issues. We recommend turning them on only for troubleshooting purposes. <br>If you receive the error message in the Azure portal "*The issue encountered for 'Microsoft.DBforMySQL'; cannot fulfill the request. If this issue continues or is unexpected, please contact support with this information.*" while viewing wait statistics, use a smaller time period.
+
+Wait statistics provides a view of the wait events that occur during the execution of a specific query. Learn more about the wait event types in the [MySQL engine documentation](https://go.microsoft.com/fwlink/?linkid=2098206).
+
+Select the **Wait Statistics** tab to view the corresponding visualizations on waits in the server.
+
+Queries displayed in the wait statistics view are grouped by the queries that exhibit the largest waits during the specified time interval.
+
+> [!Note]
+> Displaying the Query Text is no longer supported and will show as empty. The query text is removed to avoid unauthorized access to the query text or underlying schema which can pose a security risk.
+
+To view the query text, follow these steps:
+
+1. Identify the query_id of the top queries from the Query Performance Insight blade in the Azure portal.
+1. Log in to your Azure Database for MySQL server from MySQL Workbench, the mysql.exe client, or your preferred query tool, and execute the following queries.
+
+```sql
+    SELECT * FROM mysql.query_store WHERE query_id = '<query_id from the Query Performance Insight blade in the Azure portal>'; -- for queries in Query Store
+    SELECT * FROM mysql.query_store_wait_stats WHERE query_id = '<query_id from the Query Performance Insight blade in the Azure portal>'; -- for wait statistics
+```
++
+## Next steps
+
+- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for MySQL.
mysql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-query-store.md
+
+ Title: Query Store - Azure Database for MySQL
+description: Learn about the Query Store feature in Azure Database for MySQL to help you track performance over time.
+++++ Last updated : 5/12/2020+
+# Monitor Azure Database for MySQL performance with Query Store
++
+**Applies to:** Azure Database for MySQL 5.7, 8.0
+
+The Query Store feature in Azure Database for MySQL provides a way to track query performance over time. Query Store simplifies performance troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Query Store automatically captures a history of queries and runtime statistics, and it retains them for your review. It separates data by time windows so that you can see database usage patterns. Data for all users, databases, and queries is stored in the **mysql** schema database in the Azure Database for MySQL instance.
+
+## Common scenarios for using Query Store
+
+Query Store can be used in a number of scenarios, including the following:
+
+- Detecting regressed queries
+- Determining the number of times a query was executed in a given time window
+- Comparing the average execution time of a query across time windows to see large deltas
+
+## Enabling Query Store
+
+Query Store is an opt-in feature, so it isn't active by default on a server. Query Store is enabled or disabled globally for all the databases on a given server and can't be turned on or off per database.
+
+### Enable Query Store using the Azure portal
+
+1. Sign in to the Azure portal and select your Azure Database for MySQL server.
+1. Select **Server Parameters** in the **Settings** section of the menu.
+1. Search for the query_store_capture_mode parameter.
+1. Set the value to ALL and **Save**.
+
+To enable wait statistics in your Query Store:
+
+1. Search for the query_store_wait_sampling_capture_mode parameter.
+1. Set the value to ALL and **Save**.
+
+Allow up to 20 minutes for the first batch of data to persist in the mysql database.
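+
+Alternatively, a minimal sketch of enabling the same parameters with the Azure CLI (the resource group and server names are placeholders):
+
+```bash
+# Turn on Query Store capture and, optionally, wait-statistics sampling.
+az mysql server configuration set --resource-group <resource_group> --server-name <server_name> \
+    --name query_store_capture_mode --value ALL
+az mysql server configuration set --resource-group <resource_group> --server-name <server_name> \
+    --name query_store_wait_sampling_capture_mode --value ALL
+```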
+
+## Information in Query Store
+
+Query Store has two stores:
+
+- A runtime statistics store for persisting the query execution statistics information.
+- A wait statistics store for persisting wait statistics information.
+
+To minimize space usage, the runtime execution statistics in the runtime statistics store are aggregated over a fixed, configurable time window. The information in these stores is visible by querying the query store views.
+
+The following query returns information about queries in Query Store:
+
+```sql
+SELECT * FROM mysql.query_store;
+```
+
+Or this query for wait statistics:
+
+```sql
+SELECT * FROM mysql.query_store_wait_stats;
+```
+
+## Finding wait queries
+
+> [!NOTE]
+> Wait statistics should not be enabled during peak workload hours or be turned on indefinitely for sensitive workloads. <br>For workloads running with high CPU utilization or on servers configured with fewer vCores, use caution when enabling wait statistics.
+
+Wait event types combine different wait events into buckets by similarity. Query Store provides the wait event type, specific wait event name, and the query in question. Being able to correlate this wait information with the query runtime statistics means you can gain a deeper understanding of what contributes to query performance characteristics.
+
+Here are some examples of how you can gain more insights into your workload using the wait statistics in Query Store:
+
+| **Observation** | **Action** |
+|||
+|High Lock waits | Check the query texts for the affected queries and identify the target entities. Look in Query Store for other queries modifying the same entity that are executed frequently and/or have a high duration. After identifying these queries, consider changing the application logic to improve concurrency, or use a less restrictive isolation level. |
+|High Buffer IO waits | Find the queries with a high number of physical reads in Query Store. If they match the queries with high IO waits, consider introducing an index on the underlying entity, to do seeks instead of scans. This would minimize the IO overhead of the queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations for this server that would optimize the queries. |
+|High Memory waits | Find the top memory consuming queries in Query Store. These queries are probably delaying further progress of the affected queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations that would optimize these queries. |
+
+## Configuration options
+
+When Query Store is enabled it saves data in 15-minute aggregation windows, up to 500 distinct queries per window.
+
+The following options are available for configuring Query Store parameters.
+
+| **Parameter** | **Description** | **Default** | **Range** |
+|||||
+| query_store_capture_mode | Turn the query store feature ON/OFF based on the value. Note: If performance_schema is OFF, turning on query_store_capture_mode will turn on performance_schema and a subset of performance schema instruments required for this feature. | ALL | NONE, ALL |
+| query_store_capture_interval | The query store capture interval in minutes. Allows specifying the interval in which the query metrics are aggregated. | 15 | 5 - 60 |
+| query_store_capture_utility_queries | Turn ON or OFF to capture all the utility queries that execute in the system. | NO | YES, NO |
+| query_store_retention_period_in_days | Time window in days to retain the data in the query store. | 7 | 1 - 30 |
+
+The following options apply specifically to wait statistics.
+
+| **Parameter** | **Description** | **Default** | **Range** |
+|||||
+| query_store_wait_sampling_capture_mode | Allows turning ON / OFF the wait statistics. | NONE | NONE, ALL |
+| query_store_wait_sampling_frequency | Alters the frequency of wait sampling, in seconds. | 30 | 5 - 300 |
+
+> [!NOTE]
+> Currently **query_store_capture_mode** supersedes this configuration, meaning both **query_store_capture_mode** and **query_store_wait_sampling_capture_mode** have to be set to ALL for wait statistics to work. If **query_store_capture_mode** is turned off, then wait statistics are turned off as well, because wait statistics rely on performance_schema being enabled and on the query_text captured by Query Store.
+
+Use the [Azure portal](how-to-server-parameters.md) or [Azure CLI](how-to-configure-server-parameters-using-cli.md) to get or set a different value for a parameter.
+
+## Views and functions
+
+View and manage Query Store using the following views and functions. Anyone in the [select privilege public role](how-to-create-users.md) can use these views to see the data in Query Store. These views are only available in the **mysql** database.
+
+Queries are normalized by looking at their structure after removing literals and constants. If two queries are identical except for literal values, they will have the same hash.
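+
+For example, the following two statements differ only in their literal values, so they normalize to the same digest and are tracked as a single query (a hypothetical illustration):
+
+```sql
+SELECT * FROM customers WHERE customer_id = 101;
+SELECT * FROM customers WHERE customer_id = 202;
+-- Both normalize to: SELECT * FROM customers WHERE customer_id = ?
+```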
+
+### mysql.query_store
+
+This view returns all the data in Query Store. There is one row for each distinct database ID, user ID, and query ID.
+
+| **Name** | **Data Type** | **IS_NULLABLE** | **Description** |
+|||||
+| `schema_name`| varchar(64) | NO | Name of the schema |
+| `query_id`| bigint(20) | NO| Unique ID generated for the specific query. If the same query executes in a different schema, a new ID is generated. |
+| `timestamp_id` | timestamp| NO| Timestamp at which the query is executed. This is based on the query_store_capture_interval configuration.|
+| `query_digest_text`| longtext| NO| The normalized query text after removing all the literals|
+| `query_sample_text` | longtext| NO| First appearance of the actual query with literals|
+| `query_digest_truncated` | bit| YES| Whether the query text has been truncated. Value will be Yes if the query is longer than 1 KB|
+| `execution_count` | bigint(20)| NO| The number of times the query got executed for this timestamp ID / during the configured interval period|
+| `warning_count` | bigint(20)| NO| Number of warnings this query generated during the interval|
+| `error_count` | bigint(20)| NO| Number of errors this query generated during the interval|
+| `sum_timer_wait` | double| YES| Total execution time of this query during the interval|
+| `avg_timer_wait` | double| YES| Average execution time for this query during the interval|
+| `min_timer_wait` | double| YES| Minimum execution time for this query|
+| `max_timer_wait` | double| YES| Maximum execution time|
+| `sum_lock_time` | bigint(20)| NO| Total amount of time spent for all the locks for this query execution during this time window|
+| `sum_rows_affected` | bigint(20)| NO| Number of rows affected|
+| `sum_rows_sent` | bigint(20)| NO| Number of rows sent to client|
+| `sum_rows_examined` | bigint(20)| NO| Number of rows examined|
+| `sum_select_full_join` | bigint(20)| NO| Number of full joins|
+| `sum_select_scan` | bigint(20)| NO| Number of select scans |
+| `sum_sort_rows` | bigint(20)| NO| Number of rows sorted|
+| `sum_no_index_used` | bigint(20)| NO| Number of times when the query did not use any indexes|
+| `sum_no_good_index_used` | bigint(20)| NO| Number of times when the query execution engine did not use any good indexes|
+| `sum_created_tmp_tables` | bigint(20)| NO| Total number of temp tables created|
+| `sum_created_tmp_disk_tables` | bigint(20)| NO| Total number of temp tables created in disk (generates I/O)|
+| `first_seen` | timestamp| NO| The first occurrence (UTC) of the query during the aggregation window|
+| `last_seen` | timestamp| NO| The last occurrence (UTC) of the query during this aggregation window|
+
+### mysql.query_store_wait_stats
+
+This view returns wait events data in Query Store. There is one row for each distinct database ID, user ID, query ID, and event.
+
+| **Name**| **Data Type** | **IS_NULLABLE** | **Description** |
+|||||
+| `interval_start` | timestamp | NO| Start of the interval (15-minute increment)|
+| `interval_end` | timestamp | NO| End of the interval (15-minute increment)|
+| `query_id` | bigint(20) | NO| Generated unique ID on the normalized query (from query store)|
+| `query_digest_id` | varchar(32) | NO| The normalized query text after removing all the literals (from query store) |
+| `query_digest_text` | longtext | NO| First appearance of the actual query with literals (from query store) |
+| `event_type` | varchar(32) | NO| Category of the wait event |
+| `event_name` | varchar(128) | NO| Name of the wait event |
+| `count_star` | bigint(20) | NO| Number of wait events sampled during the interval for the query |
+| `sum_timer_wait_ms` | double | NO| Total wait time (in milliseconds) of this query during the interval |
+
+### Functions
+
+| **Name**| **Description** |
+|||
+| `mysql.az_purge_querystore_data(TIMESTAMP)` | Purges all query store data before the given time stamp |
+| `mysql.az_procedure_purge_querystore_event(TIMESTAMP)` | Purges all wait event data before the given time stamp |
+| `mysql.az_procedure_purge_recommendation(TIMESTAMP)` | Purges recommendations whose expiration is before the given time stamp |
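+
+For example, a minimal sketch of purging older Query Store data (the timestamp is a placeholder):
+
+```sql
+-- Purge all Query Store data collected before May 1, 2020 (UTC).
+CALL mysql.az_purge_querystore_data('2020-05-01 00:00:00');
+```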
+
+## Limitations and known issues
+
+- If a MySQL server has the parameter `read_only` on, Query Store cannot capture data.
+- Query Store functionality can be interrupted if it encounters long Unicode queries (\>= 6000 bytes).
+- The retention period for wait statistics is 24 hours.
+- Wait statistics uses sampling to capture a fraction of events. The frequency can be modified by using the parameter `query_store_wait_sampling_frequency`.
+
+## Next steps
+
+- Learn more about [Query Performance Insights](concepts-query-performance-insight.md)
mysql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-read-replicas.md
+
+ Title: Read replicas - Azure Database for MySQL
+description: 'Learn about read replicas in Azure Database for MySQL: choosing regions, creating replicas, connecting to replicas, monitoring replication, and stopping replication.'
+++++ Last updated : 06/17/2021+++
+# Read replicas in Azure Database for MySQL
+
+The read replica feature allows you to replicate data from an Azure Database for MySQL server to a read-only server. You can replicate from the source server to up to five replicas. Replicas are updated asynchronously using the MySQL engine's native binary log (binlog) file position-based replication technology. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
+
+Replicas are new servers that you manage similarly to regular Azure Database for MySQL servers. For each read replica, you're billed for the provisioned compute in vCores and storage in GB/month.
+
+To learn more about MySQL replication features and issues, see the [MySQL replication documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html).
+
+> [!NOTE]
+> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+>
+
+## When to use a read replica
+
+The read replica feature helps to improve the performance and scale of read-intensive workloads. Read workloads can be isolated to the replicas, while write workloads can be directed to the source.
+
+A common scenario is to have BI and analytical workloads use the read replica as the data source for reporting.
+
+Because replicas are read-only, they don't directly reduce write-capacity burdens on the source. This feature isn't targeted at write-intensive workloads.
+
+The read replica feature uses MySQL asynchronous replication. The feature isn't meant for synchronous replication scenarios. There will be a measurable delay between the source and the replica. The data on the replica eventually becomes consistent with the data on the source. Use this feature for workloads that can accommodate this delay.
+
+## Cross-region replication
+
+You can create a read replica in a different region from your source server. Cross-region replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users.
+
+You can have a source server in any [Azure Database for MySQL region](https://azure.microsoft.com/global-infrastructure/services/?products=mysql). A source server can have a replica in its [paired region](../../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) or in the universal replica regions. The following sections show which replica regions are available depending on your source region.
+
+### Universal replica regions
+
+You can create a read replica in any of the following regions, regardless of where your source server is located. The supported universal replica regions include:
+
+| Region | Replica availability |
+| | |
+| Australia East | :heavy_check_mark: |
+| Australia South East | :heavy_check_mark: |
+| Brazil South | :heavy_check_mark: |
+| Canada Central | :heavy_check_mark: |
+| Canada East | :heavy_check_mark: |
+| Central US | :heavy_check_mark: |
+| East US | :heavy_check_mark: |
+| East US 2 | :heavy_check_mark: |
+| East Asia | :heavy_check_mark: |
+| Japan East | :heavy_check_mark: |
+| Japan West | :heavy_check_mark: |
+| Korea Central | :heavy_check_mark: |
+| Korea South | :heavy_check_mark: |
+| North Europe | :heavy_check_mark: |
+| North Central US | :heavy_check_mark: |
+| South Central US | :heavy_check_mark: |
+| Southeast Asia | :heavy_check_mark: |
+| Switzerland North | :heavy_check_mark: |
+| UK South | :heavy_check_mark: |
+| UK West | :heavy_check_mark: |
+| West Central US | :heavy_check_mark: |
+| West US | :heavy_check_mark: |
+| West US 2 | :heavy_check_mark: |
+| West Europe | :heavy_check_mark: |
+| Central India* | :heavy_check_mark: |
+| France Central* | :heavy_check_mark: |
+| UAE North* | :heavy_check_mark: |
+| South Africa North* | :heavy_check_mark: |
+
+> [!Note]
+> \* Regions where Azure Database for MySQL has General Purpose storage v2 in public preview. <br />
+> For these Azure regions, you can create servers on both General Purpose storage v1 and v2. For servers created with General Purpose storage v2 in public preview, you can create replica servers only in the Azure regions that support General Purpose storage v2.
+
+### Paired regions
+
+In addition to the universal replica regions, you can create a read replica in the Azure paired region of your source server. If you don't know your region's pair, you can learn more from the [Azure Paired Regions article](../../availability-zones/cross-region-replication-azure.md).
+
+If you're using cross-region replicas for disaster recovery planning, we recommend you create the replica in the paired region instead of one of the other regions. Paired regions avoid simultaneous updates and prioritize physical isolation and data residency.
+
+However, there are limitations to consider:
+
+* Regional availability: Azure Database for MySQL is available in France Central, UAE North, and Germany Central. However, their paired regions aren't available.
+
+* Uni-directional pairs: Some Azure regions are paired in one direction only. These regions include West India, Brazil South, and US Gov Virginia.
+ This means that a source server in West India can create a replica in South India. However, a source server in South India can't create a replica in West India. This is because West India's secondary region is South India, but South India's secondary region isn't West India.
+
+## Create a replica
+
+> [!IMPORTANT]
+> * The read replica feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers.
+> * If your source server has no existing replica servers, it might need to restart to prepare itself for replication, depending on the storage used (v1/v2). Plan for this restart and perform the operation during off-peak hours. See [Source Server restart](./concepts-read-replicas.md#source-server-restart) for more details.
+
+When you start the create replica workflow, a blank Azure Database for MySQL server is created. The new server is filled with the data that was on the source server. The creation time depends on the amount of data on the source and the time since the last weekly full backup. The time can range from a few minutes to several hours. The replica server is always created in the same resource group and subscription as the source server. If you want the replica server in a different resource group or subscription, you can [move the replica server](../../azure-resource-manager/management/move-resource-group-and-subscription.md) after creation.
+
+Every replica is enabled for storage [auto-grow](concepts-pricing-tiers.md#storage-auto-grow). The auto-grow feature allows the replica to keep up with the data replicated to it and prevents interruptions in replication caused by out-of-storage errors.
+
+Learn how to [create a read replica in the Azure portal](how-to-read-replicas-portal.md).
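+
+You can also create a replica from the Azure CLI. A minimal sketch, assuming a source server named `mydemoserver` in the resource group `myresourcegroup`:
+
+```bash
+# Create a read replica named mydemoreplica from the source server.
+az mysql server replica create \
+  --name mydemoreplica \
+  --source-server mydemoserver \
+  --resource-group myresourcegroup
+```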
+
+## Connect to a replica
+
+At creation, a replica inherits the firewall rules of the source server. Afterwards, these rules are independent from the source server.
+
+The replica inherits the admin account from the source server. All user accounts on the source server are replicated to the read replicas. You can only connect to a read replica by using the user accounts that are available on the source server.
+
+You can connect to the replica by using its hostname and a valid user account, as you would on a regular Azure Database for MySQL server. For a server named **myreplica** with the admin username **myadmin**, you can connect to the replica by using the mysql CLI:
+
+```bash
+mysql -h myreplica.mysql.database.azure.com -u myadmin@myreplica -p
+```
+
+At the prompt, enter the password for the user account.
+
+## Monitor replication
+
+Azure Database for MySQL provides the **Replication lag in seconds** metric in Azure Monitor. This metric is available for replicas only. This metric is calculated using the `seconds_behind_master` metric available in MySQL's `SHOW SLAVE STATUS` command. Set an alert to inform you when the replication lag reaches a value that isn't acceptable for your workload.
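+
+As an illustration, you can create such an alert from the Azure CLI. A sketch only; the metric name `seconds_behind_master` and all resource names are assumptions to adapt to your environment:
+
+```bash
+# Alert when the replica's replication lag exceeds 60 seconds.
+replica_id=$(az mysql server show --name mydemoreplica \
+  --resource-group myresourcegroup --query id --output tsv)
+az monitor metrics alert create \
+  --name replica-lag-alert \
+  --resource-group myresourcegroup \
+  --scopes "$replica_id" \
+  --condition "avg seconds_behind_master > 60" \
+  --description "Replication lag above acceptable threshold"
+```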
+
+If you see increased replication lag, refer to [troubleshooting replication latency](how-to-troubleshoot-replication-latency.md) to troubleshoot and understand possible causes.
+
+## Stop replication
+
+You can stop replication between a source and a replica. After replication is stopped between a source server and a read replica, the replica becomes a standalone server. The data in the standalone server is the data that was available on the replica at the time the stop replication command was started. The standalone server doesn't catch up with the source server.
+
+When you choose to stop replication to a replica, it loses all links to its previous source and other replicas. There's no automated failover between a source and its replica.
+
+> [!IMPORTANT]
+> The standalone server can't be made into a replica again.
+> Before you stop replication on a read replica, ensure the replica has all the data that you require.
+
+Learn how to [stop replication to a replica](how-to-read-replicas-portal.md).
+
+## Failover
+
+There's no automated failover between source and replica servers.
+
+Since replication is asynchronous, there's lag between the source and the replica. The amount of lag can be influenced by many factors, like how heavy the workload running on the source server is and the latency between data centers. In most cases, replica lag ranges from a few seconds to a couple of minutes. You can track your actual replication lag using the metric *Replica Lag*, which is available for each replica. This metric shows the time since the last replayed transaction. We recommend that you identify your average lag by observing your replica lag over a period of time. You can set an alert on replica lag, so that if it goes outside your expected range, you can take action.
+
+> [!Tip]
+> If you fail over to the replica, the lag at the time you delink the replica from the source indicates how much data is lost.
+
+After you've decided you want to fail over to a replica:
+
+1. Stop replication to the replica<br/>
+ This step is necessary to make the replica server able to accept writes. As part of this process, the replica server will be delinked from the source. After you initiate stop replication, the backend process typically takes about 2 minutes to complete. See the [stop replication](#stop-replication) section of this article to understand the implications of this action.
+
+2. Point your application to the (former) replica<br/>
+ Each server has a unique connection string. Update your application to point to the (former) replica instead of the source.
+
+After your application is successfully processing reads and writes, you've completed the failover. The amount of downtime your application experiences will depend on when you detect an issue and complete steps 1 and 2 listed previously.
+
+## Global transaction identifier (GTID)
+
+Global transaction identifier (GTID) is a unique identifier created with each committed transaction on a source server, and is OFF by default in Azure Database for MySQL. GTID is supported on versions 5.7 and 8.0, and only on servers that support storage up to 16 TB (General Purpose storage v2). To learn more about GTID and how it's used in replication, refer to MySQL's [replication with GTID](https://dev.mysql.com/doc/refman/5.7/en/replication-gtids.html) documentation.
+
+MySQL supports two types of transactions: GTID transactions (identified with a GTID) and anonymous transactions (which don't have a GTID allocated).
+
+The following server parameters are available for configuring GTID:
+
+|**Server parameter**|**Description**|**Default Value**|**Values**|
+|--|--|--|--|
+|`gtid_mode`|Indicates if GTIDs are used to identify transactions. Changes between modes can only be done one step at a time in ascending order (ex. `OFF` -> `OFF_PERMISSIVE` -> `ON_PERMISSIVE` -> `ON`)|`OFF`|`OFF`: Both new and replication transactions must be anonymous <br> `OFF_PERMISSIVE`: New transactions are anonymous. Replicated transactions can either be anonymous or GTID transactions. <br> `ON_PERMISSIVE`: New transactions are GTID transactions. Replicated transactions can either be anonymous or GTID transactions. <br> `ON`: Both new and replicated transactions must be GTID transactions.|
+|`enforce_gtid_consistency`|Enforces GTID consistency by allowing execution of only those statements that can be logged in a transactionally safe manner. This value must be set to `ON` before enabling GTID replication. |`OFF`|`OFF`: All transactions are allowed to violate GTID consistency. <br> `ON`: No transaction is allowed to violate GTID consistency. <br> `WARN`: All transactions are allowed to violate GTID consistency, but a warning is generated. |
+
+> [!NOTE]
+> * After GTID is enabled, you cannot turn it back off. If you need to turn GTID OFF, please contact support.
+>
+> * `gtid_mode` can only be changed one step at a time, in ascending order of modes. For example, if `gtid_mode` is currently set to `OFF_PERMISSIVE`, you can change it to `ON_PERMISSIVE` but not directly to `ON`.
+>
+> * To keep replication consistent, you cannot update `gtid_mode` for a source/replica server.
+>
+> * We recommend setting `enforce_gtid_consistency` to `ON` before you set `gtid_mode` to `ON`.
+
+To enable GTID and configure the consistency behavior, update the `gtid_mode` and `enforce_gtid_consistency` server parameters using the [Azure portal](how-to-server-parameters.md), [Azure CLI](how-to-configure-server-parameters-using-cli.md), or [PowerShell](how-to-configure-server-parameters-using-powershell.md).
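+
+A minimal Azure CLI sketch of that sequence, enforcing consistency first and then stepping `gtid_mode` up one mode at a time (server and resource group names are placeholders):
+
+```bash
+# Enforce GTID consistency, then raise gtid_mode one step at a time.
+az mysql server configuration set --resource-group myresourcegroup \
+  --server-name mydemoserver --name enforce_gtid_consistency --value ON
+for mode in OFF_PERMISSIVE ON_PERMISSIVE ON; do
+  az mysql server configuration set --resource-group myresourcegroup \
+    --server-name mydemoserver --name gtid_mode --value $mode
+done
+```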
+
+If GTID is enabled on a source server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID replication. To keep replication consistent, `gtid_mode` can't be changed after the source or replica server(s) are created with GTID enabled.
+
+## Considerations and limitations
+
+### Pricing tiers
+
+Read replicas are currently only available in the General Purpose and Memory Optimized pricing tiers.
+
+> [!NOTE]
+> The cost of running the replica server is based on the region where the replica server is running.
+
+### Source server restart
+
+On servers that use General Purpose storage v1, the `log_bin` parameter is OFF by default. The value is turned ON when you create the first read replica. If a source server has no existing read replicas, it first restarts to prepare itself for replication. Plan for this restart and perform the operation during off-peak hours.
+
+On source servers that use General Purpose storage v2, the `log_bin` parameter is ON by default, so adding a read replica doesn't require a restart.
+
+### New replicas
+
+A read replica is created as a new Azure Database for MySQL server. An existing server can't be made into a replica. You can't create a replica of another read replica.
+
+### Replica configuration
+
+A replica is created by using the same server configuration as the source. After a replica is created, several settings can be changed independently from the source server: compute generation, vCores, storage, and backup retention period. The pricing tier can also be changed independently, except to or from the Basic tier.
+
+> [!IMPORTANT]
+> Before a source server configuration is updated to new values, update the replica configuration to equal or greater values. This action ensures the replica can keep up with any changes made to the source.
+
+Firewall rules and parameter settings are inherited from the source server to the replica when the replica is created. Afterwards, the replica's rules are independent.
+
+### Stopped replicas
+
+If you stop replication between a source server and a read replica, the stopped replica becomes a standalone server that accepts both reads and writes. The standalone server can't be made into a replica again.
+
+### Deleted source and standalone servers
+
+When a source server is deleted, replication is stopped to all read replicas. These replicas automatically become standalone servers and can accept both reads and writes. The source server itself is deleted.
+
+### User accounts
+
+Users on the source server are replicated to the read replicas. You can only connect to a read replica using the user accounts available on the source server.
+
+### Server parameters
+
+To prevent data from becoming out of sync and to avoid potential data loss or corruption, some server parameters are locked from being updated when using read replicas.
+
+The following server parameters are locked on both the source and replica servers:
+
+* [`innodb_file_per_table`](https://dev.mysql.com/doc/refman/8.0/en/innodb-file-per-table-tablespaces.html)
+* [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators)
+
+The [`event_scheduler`](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_event_scheduler) parameter is locked on the replica servers.
+
+To update one of the above parameters on the source server, delete replica servers, update the parameter value on the source, and recreate replicas.
+
+### GTID
+
+GTID is supported on:
+
+* MySQL versions 5.7 and 8.0.
+* Servers that support storage up to 16 TB. Refer to the [pricing tier](concepts-pricing-tiers.md#storage) article for the full list of regions that support 16 TB storage.
+
+GTID is OFF by default. After GTID is enabled, you can't turn it back off. If you need to turn GTID OFF, contact support.
+
+If GTID is enabled on a source server, newly created replicas will also have GTID enabled and use GTID replication. To keep replication consistent, you can't update `gtid_mode` on the source or replica server(s).
+
+### Other
+
+* Creating a replica of a replica isn't supported.
+* In-memory tables may cause replicas to become out of sync. This is a limitation of the MySQL replication technology. See the [MySQL reference documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features-memory.html) for more information.
+* Ensure the source server tables have primary keys. Lack of primary keys may result in replication latency between the source and replicas.
+* Review the full list of MySQL replication limitations in the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html).
+
+## Next steps
+
+* Learn how to [create and manage read replicas using the Azure portal](how-to-read-replicas-portal.md)
+* Learn how to [create and manage read replicas using the Azure CLI and REST API](how-to-read-replicas-cli.md)
mysql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-security.md
+
+ Title: Security - Azure Database for MySQL
+description: An overview of the security features in Azure Database for MySQL.
+Last updated: 3/18/2020
+# Security in Azure Database for MySQL
+
+There are multiple layers of security that are available to protect the data on your Azure Database for MySQL server. This article outlines those security options.
+
+## Information protection and encryption
+
+### In-transit
+Azure Database for MySQL secures your data by encrypting data in-transit with Transport Layer Security. Encryption (SSL/TLS) is enforced by default.
+
+### At-rest
+The Azure Database for MySQL service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at rest. Data, including backups, is encrypted on disk, including the temporary files created while running queries. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. Storage encryption is always on and can't be disabled.
+
+## Network security
+Connections to an Azure Database for MySQL server are first routed through a regional gateway. The gateway has a publicly accessible IP, while the server IP addresses are protected. For more information about the gateway, visit the [connectivity architecture article](concepts-connectivity-architecture.md).
+
+A newly created Azure Database for MySQL server has a firewall that blocks all external connections. Though connections reach the gateway, they aren't allowed to connect to the server.
+
+### IP firewall rules
+IP firewall rules grant access to servers based on the originating IP address of each request. See the [firewall rules overview](concepts-firewall-rules.md) for more information.
+
+### Virtual network firewall rules
+Virtual network service endpoints extend your virtual network connectivity over the Azure backbone. Using virtual network rules you can enable your Azure Database for MySQL server to allow connections from selected subnets in a virtual network. For more information, see the [virtual network service endpoint overview](concepts-data-access-and-security-vnet.md).
+
+### Private IP
+Private Link allows you to connect to your Azure Database for MySQL in Azure via a private endpoint. Azure Private Link essentially brings Azure services inside your private virtual network (VNet). The PaaS resources can be accessed using the private IP address just like any other resource in the VNet. For more information, see the [private link overview](concepts-data-access-security-private-link.md).
+
+## Access management
+
+While creating the Azure Database for MySQL server, you provide credentials for an administrator user. You can use this administrator account to create additional MySQL users.
+
+## Threat protection
+
+You can opt in to [Microsoft Defender for open-source relational databases](/azure/security-center/defender-for-databases-introduction), which detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit servers.
+
+[Audit logging](concepts-audit-logs.md) is available to track activity in your databases.
+
+## Next steps
+- Enable firewall rules for [IPs](concepts-firewall-rules.md) or [virtual networks](concepts-data-access-and-security-vnet.md)
mysql Concepts Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-server-logs.md
+
+ Title: Slow query logs - Azure Database for MySQL
+description: Describes the slow query logs available in Azure Database for MySQL, and the available parameters for enabling different logging levels.
+Last updated: 11/6/2020
+# Slow query logs in Azure Database for MySQL
+
+In Azure Database for MySQL, the slow query log is available to users. Access to the transaction log is not supported. The slow query log can be used to identify performance bottlenecks for troubleshooting.
+
+For more information about the MySQL slow query log, see the MySQL reference manual's [slow query log section](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html).
+
+When [Query Store](concepts-query-store.md) is enabled on your server, you may see queries like "`CALL mysql.az_procedure_collect_wait_stats (900, 30);`" logged in your slow query logs. This behavior is expected, because the Query Store feature collects statistics about your queries.
+
+## Configure slow query logging
+By default, the slow query log is disabled. To enable it, set `slow_query_log` to ON by using the Azure portal or the Azure CLI, as in the sketch below.
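+
+A minimal Azure CLI sketch; the server and resource group names are placeholders:
+
+```bash
+# Enable the slow query log and log statements that run longer than 10 seconds.
+az mysql server configuration set --resource-group myresourcegroup \
+  --server-name mydemoserver --name slow_query_log --value ON
+az mysql server configuration set --resource-group myresourcegroup \
+  --server-name mydemoserver --name long_query_time --value 10
+```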
+
+Other parameters you can adjust include:
+
+- **long_query_time**: if a query takes longer than `long_query_time` (in seconds), that query is logged. The default is 10 seconds.
+- **log_slow_admin_statements**: if ON, includes administrative statements like ALTER TABLE and ANALYZE TABLE in the statements written to the slow query log.
+- **log_queries_not_using_indexes**: determines whether queries that don't use indexes are logged to the slow query log.
+- **log_throttle_queries_not_using_indexes**: limits the number of non-indexed queries that can be written to the slow query log. This parameter takes effect when `log_queries_not_using_indexes` is set to ON.
+- **log_output**: if "File", allows the slow query log to be written to both the local server storage and to Azure Monitor Diagnostic Logs. If "None", the slow query log is only written to Azure Monitor Diagnostic Logs.
+
+> [!IMPORTANT]
+> If your tables are not indexed, setting the `log_queries_not_using_indexes` and `log_throttle_queries_not_using_indexes` parameters to ON may affect MySQL performance since all queries running against these non-indexed tables will be written to the slow query log.<br><br>
+> If you plan on logging slow queries for an extended period of time, it is recommended to set `log_output` to "None". If set to "File", these logs are written to the local server storage and can affect MySQL performance.
+
+See the MySQL [slow query log documentation](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html) for full descriptions of the slow query log parameters.
+
+## Access slow query logs
+There are two options for accessing slow query logs in Azure Database for MySQL: local server storage or Azure Monitor Diagnostic Logs. This is set using the `log_output` parameter.
+
+For local server storage, you can list and download slow query logs using the Azure portal or the Azure CLI. In the Azure portal, navigate to your server. Under the **Monitoring** heading, select the **Server Logs** page. For more information on the Azure CLI, see [Configure and access slow query logs using Azure CLI](how-to-configure-server-logs-in-cli.md).
+
+Azure Monitor Diagnostic Logs allows you to pipe slow query logs to Azure Monitor Logs (Log Analytics), Azure Storage, or Event Hubs. See [below](concepts-server-logs.md#diagnostic-logs) for more information.
+
+## Local server storage log retention
+When logging to the server's local storage, logs are available for up to seven days from their creation. If the total size of the available logs exceeds 7 GB, the oldest files are deleted until space is available. The 7 GB storage limit for the server logs is available free of cost and can't be extended.
+
+Logs are rotated every 24 hours or 7 GB, whichever comes first.
+
+> [!Note]
+> The above log retention does not apply to logs that are piped using Azure Monitor Diagnostic Logs. You can change the retention period for the data sinks being emitted to (ex. Azure Storage).
+
+## Diagnostic logs
+Azure Database for MySQL is integrated with Azure Monitor Diagnostic Logs. Once you have enabled slow query logs on your MySQL server, you can choose to have them emitted to Azure Monitor logs, Event Hubs, or Azure Storage. To learn more about how to enable diagnostic logs, see the how to section of the [diagnostic logs documentation](../../azure-monitor/essentials/platform-logs-overview.md).
+
+>[!Note]
+>Premium Storage accounts aren't supported if you're sending the logs to Azure Storage via diagnostic settings.
+
+The following table describes what's in each log. Depending on the output method, the fields included and the order in which they appear may vary.
+
+| **Property** | **Description** |
+|||
+| `TenantId` | Your tenant ID |
+| `SourceSystem` | `Azure` |
+| `TimeGenerated` [UTC] | Time stamp when the log was recorded in UTC |
+| `Type` | Type of the log. Always `AzureDiagnostics` |
+| `SubscriptionId` | GUID for the subscription that the server belongs to |
+| `ResourceGroup` | Name of the resource group the server belongs to |
+| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` |
+| `ResourceType` | `Servers` |
+| `ResourceId` | Resource URI |
+| `Resource` | Name of the server |
+| `Category` | `MySqlSlowLogs` |
+| `OperationName` | `LogEvent` |
+| `Logical_server_name_s` | Name of the server |
+| `start_time_t` [UTC] | Time the query began |
+| `query_time_s` | Total time in seconds the query took to execute |
+| `lock_time_s` | Total time in seconds the query was locked |
+| `user_host_s` | Username |
+| `rows_sent_d` | Number of rows sent |
+| `rows_examined_s` | Number of rows examined |
+| `last_insert_id_s` | [last_insert_id](https://dev.mysql.com/doc/refman/8.0/en/information-functions.html#function_last-insert-id) |
+| `insert_id_s` | Insert ID |
+| `sql_text_s` | Full query |
+| `server_id_s` | The server's ID |
+| `thread_id_s` | Thread ID |
+| `\_ResourceId` | Resource URI |
+
+> [!Note]
+> For `sql_text`, the log will be truncated if it exceeds 2048 characters.
+
+## Analyze logs in Azure Monitor Logs
+
+Once your slow query logs are piped to Azure Monitor Logs through Diagnostic Logs, you can perform further analysis of your slow queries. Below are some sample queries to help you get started. Make sure to update the queries below with your server name.
+
+- Queries longer than 10 seconds on a particular server
+
+ ```Kusto
+ AzureDiagnostics
+ | where LogicalServerName_s == '<your server name>'
+ | where Category == 'MySqlSlowLogs'
+    | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t, query_time_d, sql_text_s
+ | where query_time_d > 10
+ ```
+
+- List top 5 longest queries on a particular server
+
+ ```Kusto
+ AzureDiagnostics
+ | where LogicalServerName_s == '<your server name>'
+ | where Category == 'MySqlSlowLogs'
+    | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t, query_time_d, sql_text_s
+ | order by query_time_d desc
+ | take 5
+ ```
+
+- Summarize slow queries by minimum, maximum, average, and standard deviation query time on a particular server
+
+ ```Kusto
+ AzureDiagnostics
+ | where LogicalServerName_s == '<your server name>'
+ | where Category == 'MySqlSlowLogs'
+    | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t, query_time_d, sql_text_s
+ | summarize count(), min(query_time_d), max(query_time_d), avg(query_time_d), stdev(query_time_d), percentile(query_time_d, 95) by LogicalServerName_s
+ ```
+
+- Graph the slow query distribution on a particular server
+
+ ```Kusto
+ AzureDiagnostics
+ | where LogicalServerName_s == '<your server name>'
+ | where Category == 'MySqlSlowLogs'
+    | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t, query_time_d, sql_text_s
+ | summarize count() by LogicalServerName_s, bin(TimeGenerated, 5m)
+ | render timechart
+ ```
+
+- Display queries longer than 10 seconds across all MySQL servers with Diagnostic Logs enabled
+
+ ```Kusto
+ AzureDiagnostics
+ | where Category == 'MySqlSlowLogs'
+    | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t, query_time_d, sql_text_s
+ | where query_time_d > 10
+ ```
+
+## Next Steps
+- [How to configure slow query logs from the Azure portal](how-to-configure-server-logs-in-portal.md)
+- [How to configure slow query logs from the Azure CLI](how-to-configure-server-logs-in-cli.md)
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-server-parameters.md
+
+ Title: Server parameters - Azure Database for MySQL
+description: This topic provides guidelines for configuring server parameters in Azure Database for MySQL.
+Last updated: 1/26/2021
+# Server parameters in Azure Database for MySQL
+
+This article provides considerations and guidelines for configuring server parameters in Azure Database for MySQL.
+
+## What are server parameters?
+
+The MySQL engine provides many different server variables and parameters that you use to configure and tune engine behavior. Some parameters can be set dynamically during runtime, while others are static and require a server restart in order to apply.
+
+Azure Database for MySQL exposes the ability to change the value of various MySQL server parameters by using the [Azure portal](./how-to-server-parameters.md), the [Azure CLI](./how-to-configure-server-parameters-using-cli.md), and [PowerShell](./how-to-configure-server-parameters-using-powershell.md) to match your workload's needs.
+
+## Configurable server parameters
+
+The list of supported server parameters is constantly growing. In the Azure portal, use the server parameters tab to view the full list and configure server parameter values.
+
+Refer to the following sections to learn more about the limits of several commonly updated server parameters. The limits are determined by the pricing tier and vCores of the server.
+
+### Thread pools
+
+MySQL traditionally assigns a thread for every client connection. As the number of concurrent users grows, there is a corresponding drop in performance. Many active threads can significantly affect performance, due to increased context switching, thread contention, and bad locality for CPU caches.
+
+*Thread pools*, a server-side feature and distinct from connection pooling, maximize performance by introducing a dynamic pool of worker threads. You use this feature to limit the number of active threads running on the server and minimize thread churn. This helps ensure that a burst of connections won't cause the server to run out of resources or memory. Thread pools are most efficient for short queries and CPU intensive workloads, such as OLTP workloads.
+
+For more information, see [Introducing thread pools in Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/introducing-thread-pools-in-azure-database-for-mysql-service/ba-p/1504173).
+
+> [!NOTE]
+> Thread pools aren't supported for MySQL 5.6.
+
+### Configure the thread pool
+
+To enable a thread pool, update the `thread_handling` server parameter to `pool-of-threads`. By default, this parameter is set to `one-thread-per-connection`, which means MySQL creates a new thread for each new connection. This is a static parameter, and requires a server restart to apply.
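+
+A sketch of enabling the pool from the Azure CLI, followed by the restart that this static parameter requires (server and resource group names are placeholders):
+
+```bash
+# Switch to the thread pool model; thread_handling is static, so restart afterward.
+az mysql server configuration set --resource-group myresourcegroup \
+  --server-name mydemoserver --name thread_handling --value pool-of-threads
+az mysql server restart --resource-group myresourcegroup --name mydemoserver
+```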
+
+You can also configure the maximum and minimum number of threads in the pool by setting the following server parameters:
+
+- `thread_pool_max_threads`: This value ensures that there won't be more than this number of threads in the pool.
+- `thread_pool_min_threads`: This value sets the number of threads that will be reserved even after connections are closed.
+
+To improve the performance of short queries on the thread pool, you can enable *batch execution*. Instead of returning to the thread pool immediately after running a query, a thread stays active for a short time, waiting for the next query on the same connection. The thread then runs that query rapidly and, when it completes, waits for the next one. This process continues until the overall time spent exceeds a threshold.
+
+You determine the behavior of batch execution by using the following server parameters:
+
+- `thread_pool_batch_wait_timeout`: This value specifies the time a thread waits for another query to process.
+- `thread_pool_batch_max_time`: This value determines the maximum time a thread will repeat the cycle of query execution and waiting for the next query.
+
+> [!IMPORTANT]
+> Don't turn on the thread pool in production until you've tested it.
+
+### log_bin_trust_function_creators
+
+In Azure Database for MySQL, binary logs are always enabled (the `log_bin` parameter is set to `ON`). If you want to use triggers, you get an error similar to the following: *You do not have the SUPER privilege and binary logging is enabled (you might want to use the less safe `log_bin_trust_function_creators` variable)*.
+
+The binary logging format is always **ROW**, and all connections to the server *always* use row-based binary logging. Row-based binary logging helps maintain security, and binary logging can't break, so you can safely set [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) to `TRUE`.
+
+### innodb_buffer_pool_size
+
+Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size) to learn more about this parameter.
+
+#### Servers on [general purpose storage v1 (supporting up to 4 TB)](concepts-pricing-tiers.md#general-purpose-storage-v1-supports-up-to-4-tb)
+
+|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**|
+||||||
+|Basic|1|872415232|134217728|872415232|
+|Basic|2|2684354560|134217728|2684354560|
+|General Purpose|2|3758096384|134217728|3758096384|
+|General Purpose|4|8053063680|134217728|8053063680|
+|General Purpose|8|16106127360|134217728|16106127360|
+|General Purpose|16|32749125632|134217728|32749125632|
+|General Purpose|32|66035122176|134217728|66035122176|
+|General Purpose|64|132070244352|134217728|132070244352|
+|Memory Optimized|2|7516192768|134217728|7516192768|
+|Memory Optimized|4|16106127360|134217728|16106127360|
+|Memory Optimized|8|32212254720|134217728|32212254720|
+|Memory Optimized|16|65498251264|134217728|65498251264|
+|Memory Optimized|32|132070244352|134217728|132070244352|
+
+#### Servers on [general purpose storage v2 (supporting up to 16 TB)](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage)
+
+|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**|
+||||||
+|Basic|1|872415232|134217728|872415232|
+|Basic|2|2684354560|134217728|2684354560|
+|General Purpose|2|7516192768|134217728|7516192768|
+|General Purpose|4|16106127360|134217728|16106127360|
+|General Purpose|8|32212254720|134217728|32212254720|
+|General Purpose|16|65498251264|134217728|65498251264|
+|General Purpose|32|132070244352|134217728|132070244352|
+|General Purpose|64|264140488704|134217728|264140488704|
+|Memory Optimized|2|15032385536|134217728|15032385536|
+|Memory Optimized|4|32212254720|134217728|32212254720|
+|Memory Optimized|8|64424509440|134217728|64424509440|
+|Memory Optimized|16|130996502528|134217728|130996502528|
+|Memory Optimized|32|264140488704|134217728|264140488704|
+
+### innodb_file_per_table
+
+MySQL stores the `InnoDB` table in different tablespaces, based on the configuration you provide during the table creation. The [system tablespace](https://dev.mysql.com/doc/refman/5.7/en/innodb-system-tablespace.html) is the storage area for the `InnoDB` data dictionary. A [file-per-table tablespace](https://dev.mysql.com/doc/refman/5.7/en/innodb-file-per-table-tablespaces.html) contains data and indexes for a single `InnoDB` table, and is stored in the file system in its own data file.
+
+You control this behavior by using the `innodb_file_per_table` server parameter. Setting `innodb_file_per_table` to `OFF` causes `InnoDB` to create tables in the system tablespace. Otherwise, `InnoDB` creates tables in file-per-table tablespaces.
+
+> [!NOTE]
+> You can only update `innodb_file_per_table` in the general purpose and memory optimized pricing tiers on [general purpose storage v2](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage) and [general purpose storage v1](concepts-pricing-tiers.md#general-purpose-storage-v1-supports-up-to-4-tb).
+
+Azure Database for MySQL supports at most 4 TB in a single data file on [general purpose storage v2](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage). If your database size is larger than 4 TB, you should create tables in the [innodb_file_per_table](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_file_per_table) tablespace. If a single table is larger than 4 TB, you should partition the table.
+
+### join_buffer_size
+
+Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_join_buffer_size) to learn more about this parameter.
+
+|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**|
+||||||
+|Basic|1|Not configurable in Basic tier|N/A|N/A|
+|Basic|2|Not configurable in Basic tier|N/A|N/A|
+|General Purpose|2|262144|128|268435455|
+|General Purpose|4|262144|128|536870912|
+|General Purpose|8|262144|128|1073741824|
+|General Purpose|16|262144|128|2147483648|
+|General Purpose|32|262144|128|4294967295|
+|General Purpose|64|262144|128|4294967295|
+|Memory Optimized|2|262144|128|536870912|
+|Memory Optimized|4|262144|128|1073741824|
+|Memory Optimized|8|262144|128|2147483648|
+|Memory Optimized|16|262144|128|4294967295|
+|Memory Optimized|32|262144|128|4294967295|
+
+### max_connections
+
+|**Pricing tier**|**vCore(s)**|**Default value**|**Min value**|**Max value**|
+||||||
+|Basic|1|50|10|50|
+|Basic|2|100|10|100|
+|General Purpose|2|300|10|600|
+|General Purpose|4|625|10|1250|
+|General Purpose|8|1250|10|2500|
+|General Purpose|16|2500|10|5000|
+|General Purpose|32|5000|10|10000|
+|General Purpose|64|10000|10|20000|
+|Memory Optimized|2|625|10|1250|
+|Memory Optimized|4|1250|10|2500|
+|Memory Optimized|8|2500|10|5000|
+|Memory Optimized|16|5000|10|10000|
+|Memory Optimized|32|10000|10|20000|
+
+When the number of connections exceeds the limit, you might receive an error.
+
+> [!TIP]
+> To manage connections efficiently, it's a good idea to use a connection pooler, like ProxySQL. To learn about setting up ProxySQL, see the blog post [Load balance read replicas using ProxySQL in Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/load-balance-read-replicas-using-proxysql-in-azure-database-for/ba-p/880042). Note that ProxySQL is an open source community tool. It's supported by Microsoft on a best-effort basis.
+
+### max_heap_table_size
+
+Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_max_heap_table_size) to learn more about this parameter.
+
+|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**|
+||||||
+|Basic|1|Not configurable in Basic tier|N/A|N/A|
+|Basic|2|Not configurable in Basic tier|N/A|N/A|
+|General Purpose|2|16777216|16384|268435455|
+|General Purpose|4|16777216|16384|536870912|
+|General Purpose|8|16777216|16384|1073741824|
+|General Purpose|16|16777216|16384|2147483648|
+|General Purpose|32|16777216|16384|4294967295|
+|General Purpose|64|16777216|16384|4294967295|
+|Memory Optimized|2|16777216|16384|536870912|
+|Memory Optimized|4|16777216|16384|1073741824|
+|Memory Optimized|8|16777216|16384|2147483648|
+|Memory Optimized|16|16777216|16384|4294967295|
+|Memory Optimized|32|16777216|16384|4294967295|
+
+### query_cache_size
+
+The query cache is turned off by default. To enable the query cache, configure the `query_cache_type` parameter.
+
+Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_query_cache_size) to learn more about this parameter.
+
+> [!NOTE]
+> The query cache is deprecated as of MySQL 5.7.20 and has been removed in MySQL 8.0.
+
+|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value**|
+||||||
+|Basic|1|Not configurable in Basic tier|N/A|N/A|
+|Basic|2|Not configurable in Basic tier|N/A|N/A|
+|General Purpose|2|0|0|16777216|
+|General Purpose|4|0|0|33554432|
+|General Purpose|8|0|0|67108864|
+|General Purpose|16|0|0|134217728|
+|General Purpose|32|0|0|134217728|
+|General Purpose|64|0|0|134217728|
+|Memory Optimized|2|0|0|33554432|
+|Memory Optimized|4|0|0|67108864|
+|Memory Optimized|8|0|0|134217728|
+|Memory Optimized|16|0|0|134217728|
+|Memory Optimized|32|0|0|134217728|
+
+### lower_case_table_names
+
+The `lower_case_table_names` parameter is set to 1 by default, and you can update this parameter in MySQL 5.6 and MySQL 5.7.
+
+Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_lower_case_table_names) to learn more about this parameter.
+
+> [!NOTE]
+> In MySQL 8.0, `lower_case_table_names` is set to 1 by default, and you can't change it.
+
+### innodb_strict_mode
+
+If you receive an error similar to `Row size too large (> 8126)`, consider turning off the `innodb_strict_mode` parameter. You can't modify `innodb_strict_mode` globally at the server level. If the row data size is larger than 8K, the data is truncated without an error notification, leading to potential data loss. It's a good idea to modify the schema to fit the page size limit.
+
+You can set this parameter at a session level by using `init_connect`. To set `innodb_strict_mode` at a session level, refer to [setting parameters not listed](./how-to-server-parameters.md#setting-parameters-not-listed).
+
+> [!NOTE]
+> If you have a read replica server, setting `innodb_strict_mode` to `OFF` at the session-level on a source server will break the replication. We suggest keeping the parameter set to `ON` if you have read replicas.
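+
+As an illustration of the `init_connect` approach, a hedged sketch with placeholder names; confirm the details in the linked how-to before relying on it:
+
+```bash
+# Disable InnoDB strict mode for new sessions by injecting it into init_connect.
+az mysql server configuration set --resource-group myresourcegroup \
+  --server-name mydemoserver --name init_connect \
+  --value "SET innodb_strict_mode=OFF"
+```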
+
+### sort_buffer_size
+
+Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_sort_buffer_size) to learn more about this parameter.
+
+|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**|
+||||||
+|Basic|1|Not configurable in Basic tier|N/A|N/A|
+|Basic|2|Not configurable in Basic tier|N/A|N/A|
+|General Purpose|2|524288|32768|4194304|
+|General Purpose|4|524288|32768|8388608|
+|General Purpose|8|524288|32768|16777216|
+|General Purpose|16|524288|32768|33554432|
+|General Purpose|32|524288|32768|33554432|
+|General Purpose|64|524288|32768|33554432|
+|Memory Optimized|2|524288|32768|8388608|
+|Memory Optimized|4|524288|32768|16777216|
+|Memory Optimized|8|524288|32768|33554432|
+|Memory Optimized|16|524288|32768|33554432|
+|Memory Optimized|32|524288|32768|33554432|
+
+### tmp_table_size
+
+Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_tmp_table_size) to learn more about this parameter.
+
+|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**|
+||||||
+|Basic|1|Not configurable in Basic tier|N/A|N/A|
+|Basic|2|Not configurable in Basic tier|N/A|N/A|
+|General Purpose|2|16777216|1024|67108864|
+|General Purpose|4|16777216|1024|134217728|
+|General Purpose|8|16777216|1024|268435456|
+|General Purpose|16|16777216|1024|536870912|
+|General Purpose|32|16777216|1024|1073741824|
+|General Purpose|64|16777216|1024|1073741824|
+|Memory Optimized|2|16777216|1024|134217728|
+|Memory Optimized|4|16777216|1024|268435456|
+|Memory Optimized|8|16777216|1024|536870912|
+|Memory Optimized|16|16777216|1024|1073741824|
+|Memory Optimized|32|16777216|1024|1073741824|
+
+### InnoDB buffer pool warmup
+
+After you restart Azure Database for MySQL, the data pages that reside on disk are loaded as the tables are queried. This leads to increased latency and slower performance for the first run of the queries. For workloads that are sensitive to latency, you might find this slower performance unacceptable.
+
+You can use `InnoDB` buffer pool warmup to shorten the warmup period. This process reloads disk pages that were in the buffer pool *before* the restart, rather than waiting for DML or SELECT operations to access corresponding rows. For more information, see [InnoDB buffer pool server parameters](https://dev.mysql.com/doc/refman/8.0/en/innodb-preload-buffer-pool.html).
+
+Note that improved performance comes at the expense of longer start-up time for the server. When you enable this parameter, the server startup and restart time is expected to increase, depending on the IOPS provisioned on the server. It's a good idea to test and monitor the restart time, to ensure that the start-up or restart performance is acceptable, because the server is unavailable during that time. Don't use this parameter when the IOPS provisioned is less than 1000 IOPS (in other words, when the storage provisioned is less than 335 GB).
+
+To save the state of the buffer pool at server shutdown, set the server parameter `innodb_buffer_pool_dump_at_shutdown` to `ON`. Similarly, set the server parameter `innodb_buffer_pool_load_at_startup` to `ON` to restore the buffer pool state at server startup. You can control the impact on start-up or restart by lowering and fine-tuning the value of the server parameter `innodb_buffer_pool_dump_pct`. By default, this parameter is set to `25`.
+
+> [!Note]
+> `InnoDB` buffer pool warmup parameters are only supported in general purpose storage servers with up to 16 TB storage. For more information, see [Azure Database for MySQL storage options](./concepts-pricing-tiers.md#storage).
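+
+A minimal Azure CLI sketch of enabling both warmup parameters (server and resource group names are placeholders):
+
+```bash
+# Save the buffer pool state at shutdown and reload it at startup.
+az mysql server configuration set --resource-group myresourcegroup \
+  --server-name mydemoserver --name innodb_buffer_pool_dump_at_shutdown --value ON
+az mysql server configuration set --resource-group myresourcegroup \
+  --server-name mydemoserver --name innodb_buffer_pool_load_at_startup --value ON
+```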
+
+### time_zone
+
+Upon initial deployment, a server running Azure Database for MySQL includes system tables for time zone information, but these tables aren't populated. You can populate them by calling the `mysql.az_load_timezone` stored procedure from tools like the MySQL command line or MySQL Workbench. For information about how to call the stored procedure and set the global or session-level time zones, see [Working with the time zone parameter (Azure portal)](how-to-server-parameters.md#working-with-the-time-zone-parameter) or [Working with the time zone parameter (Azure CLI)](how-to-configure-server-parameters-using-cli.md#working-with-the-time-zone-parameter).
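+
+For example, populating the tables and then setting a named global time zone might look like the following sketch (server and resource group names are placeholders):
+
+```bash
+# Populate the time zone tables, then set the global time zone to a named value.
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
+  -e "CALL mysql.az_load_timezone();"
+az mysql server configuration set --resource-group myresourcegroup \
+  --server-name mydemoserver --name time_zone --value "US/Pacific"
+```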
+
+### binlog_expire_logs_seconds
+
+In Azure Database for MySQL, this parameter specifies the number of seconds the service waits before purging the binary log file.
+
+The *binary log* contains events that describe database changes, such as table creation operations or changes to table data. It also contains events for statements that can potentially make changes. The binary log is used mainly for two purposes, replication and data recovery operations.
+
+Usually, the binary logs are purged as soon as the handle to them is freed by the service, a backup, or the replica set. If there are multiple replicas, the binary logs wait for the slowest replica to read the changes before being purged. If you want binary logs to persist longer, you can configure the parameter `binlog_expire_logs_seconds`. If you set `binlog_expire_logs_seconds` to `0`, which is the default value, a binary log is purged as soon as the handle to it is freed. If you set `binlog_expire_logs_seconds` to a value greater than 0, the binary log is only purged after that period of time.
+
+For Azure Database for MySQL, managed features like backup and read replica purging of binary files are handled internally. When you replicate data out from the Azure Database for MySQL service, you must set this parameter on the primary to avoid purging binary logs before the replica reads the changes from the primary. If you set `binlog_expire_logs_seconds` to a higher value, the binary logs won't get purged soon enough, which can lead to an increase in the storage billing.
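+
+For example, to keep binary logs around for 24 hours while replicating data out of the service, a sketch with placeholder names:
+
+```bash
+# Retain binary logs for 24 hours (86400 seconds) before they're eligible for purging.
+az mysql server configuration set --resource-group myresourcegroup \
+  --server-name mydemoserver --name binlog_expire_logs_seconds --value 86400
+```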
+
+## Non-configurable server parameters
+
+The following server parameters aren't configurable in the service:
+
+|**Parameter**|**Fixed value**|
+| :-- | :-- |
+|`innodb_file_per_table` in the basic tier|OFF|
+|`innodb_flush_log_at_trx_commit`|1|
+|`sync_binlog`|1|
+|`innodb_log_file_size`|256 MB|
+|`innodb_log_files_in_group`|2|
+
+Other variables not listed here are set to the default MySQL values. Refer to the MySQL docs for versions [8.0](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html), [5.7](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html), and [5.6](https://dev.mysql.com/doc/refman/5.6/en/server-system-variables.html).
+
+## Next steps
+
+- Learn how to [configure server parameters by using the Azure portal](./how-to-server-parameters.md)
+- Learn how to [configure server parameters by using the Azure CLI](./how-to-configure-server-parameters-using-cli.md)
+- Learn how to [configure server parameters by using PowerShell](./how-to-configure-server-parameters-using-powershell.md)
mysql Concepts Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-servers.md
+
+ Title: Server concepts - Azure Database for MySQL
+description: This topic provides considerations and guidelines for working with Azure Database for MySQL servers.
+Last updated: 3/18/2020
+# Server concepts in Azure Database for MySQL
+
+This article provides considerations and guidelines for working with Azure Database for MySQL servers.
+
+## What is an Azure Database for MySQL server?
+
+An Azure Database for MySQL server is a central administrative point for multiple databases. It is the same MySQL server construct that you may be familiar with in the on-premises world. Specifically, the Azure Database for MySQL service is managed, provides performance guarantees, and exposes access and features at the server level.
+
+An Azure Database for MySQL server:
+
+- Is created within an Azure subscription.
+- Is the parent resource for databases.
+- Provides a namespace for databases.
+- Is a container with strong lifetime semantics - delete a server and it deletes the contained databases.
+- Collocates resources in a region.
+- Provides a connection endpoint for server and database access.
+- Provides the scope for management policies that apply to its databases: login, firewall, users, roles, configurations, etc.
+- Is available in multiple versions. For more information, see [Supported Azure Database for MySQL database versions](./concepts-supported-versions.md).
+
+Within an Azure Database for MySQL server, you can create one or multiple databases. You can opt to create a single database per server to use all the resources or to create multiple databases to share the resources. The pricing is structured per-server, based on the configuration of pricing tier, vCores, and storage (GB). For more information, see [Pricing tiers](./concepts-pricing-tiers.md).
+
+## How do I connect and authenticate to an Azure Database for MySQL server?
+
+The following elements help ensure safe access to your database.
+
+| Security concept | Description |
+| :-- | :-- |
+| **Authentication and authorization** | Azure Database for MySQL server supports native MySQL authentication. You can connect and authenticate to a server with the server's admin login. |
+| **Protocol** | The service supports a message-based protocol used by MySQL. |
+| **TCP/IP** | The protocol is supported over TCP/IP and over Unix-domain sockets. |
+| **Firewall** | To help protect your data, a firewall rule prevents all access to your database server, until you specify which computers have permission. See [Azure Database for MySQL Server firewall rules](./concepts-firewall-rules.md). |
+| **SSL** | The service supports enforcing SSL connections between your applications and your database server. See [Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](./how-to-configure-ssl.md). |
+
+## Stop/Start an Azure Database for MySQL
+
+Azure Database for MySQL gives you the ability to **Stop** the server when not in use and **Start** it when you resume activity. This helps you save costs on the database servers by paying for the resource only while it's in use. It's especially valuable for dev/test workloads, and when you use the server for only part of the day. When you stop the server, all active connections are dropped. Later, when you want to bring the server back online, you can use either the [Azure portal](how-to-stop-start-server.md) or the [CLI](how-to-stop-start-server.md).
+
+When the server is in the **Stopped** state, the server's compute is not billed. However, storage continues to be billed, because the server's storage remains to ensure that data files are available when the server is started again.
+
+> [!IMPORTANT]
+> When you **Stop** the server, it remains in that state for up to 7 days at a stretch. If you don't manually **Start** it during this time, the server is automatically started at the end of the 7 days. You can choose to **Stop** it again if you aren't using the server.
+
+While the server is stopped, no management operations can be performed on it. To change any configuration settings on the server, you'll need to [start the server](how-to-stop-start-server.md).
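+
+From the Azure CLI, stopping and starting the server looks like the following sketch (server and resource group names are placeholders):
+
+```bash
+# Stop the server while it's idle, and start it again when you resume work.
+az mysql server stop --resource-group myresourcegroup --name mydemoserver
+az mysql server start --resource-group myresourcegroup --name mydemoserver
+```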
+
+### Limitations of Stop/start operation
+- Not supported with read replica configurations (both source and replicas).
+
+## How do I manage a server?
+
+You can manage the creation, deletion, server parameter configuration (my.cnf), scaling, networking, security, high availability, backup and restore, and monitoring of your Azure Database for MySQL servers by using the Azure portal or the Azure CLI. In addition, the following stored procedures are available in Azure Database for MySQL to perform certain database administration tasks that would otherwise require the SUPER privilege, which isn't supported on the server.
+
+|**Stored Procedure Name**|**Input Parameters**|**Output Parameters**|**Usage Note**|
+|--|--|--|--|
+|*mysql.az_kill*|processlist_id|N/A|Equivalent to [`KILL CONNECTION`](https://dev.mysql.com/doc/refman/8.0/en/kill.html) command. Will terminate the connection associated with the provided processlist_id after terminating any statement the connection is executing.|
+|*mysql.az_kill_query*|processlist_id|N/A|Equivalent to [`KILL QUERY`](https://dev.mysql.com/doc/refman/8.0/en/kill.html) command. Will terminate the statement the connection is currently executing. Leaves the connection itself alive.|
+|*mysql.az_load_timezone*|N/A|N/A|Loads [time zone tables](how-to-server-parameters.md#working-with-the-time-zone-parameter) to allow the `time_zone` parameter to be set to named values (ex. "US/Pacific").|
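+
+For example, you can terminate a runaway connection with *mysql.az_kill*. The following sketch uses the `mysql` command-line client; the server and admin names are placeholders, and `123` stands in for a real processlist_id taken from the `SHOW PROCESSLIST` output.
+
+```bash
+# List current connections to find the processlist_id to terminate.
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
+  -e "SHOW PROCESSLIST;"
+
+# Terminate that connection (equivalent to KILL CONNECTION).
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
+  -e "CALL mysql.az_kill(123);"
+```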
+
+## Next steps
+
+- For an overview of the service, see [Azure Database for MySQL Overview](./overview.md)
+- For information about specific resource quotas and limitations based on your **pricing tier**, see [Pricing tiers](./concepts-pricing-tiers.md)
+- For information about connecting to the service, see [Connection libraries for Azure Database for MySQL](./concepts-connection-libraries.md).
mysql Concepts Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-ssl-connection-security.md
+
+ Title: SSL/TLS connectivity - Azure Database for MySQL
+description: Information for configuring Azure Database for MySQL and associated applications to properly use SSL connections
+++++ Last updated : 07/09/2020++
+# SSL/TLS connectivity in Azure Database for MySQL
++
+Azure Database for MySQL supports connecting your database server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against "man in the middle" attacks by encrypting the data stream between the server and your application.
+
+> [!NOTE]
+> Updating the `require_secure_transport` server parameter value does not affect the MySQL service's behavior. Use the SSL and TLS enforcement features outlined in this article to secure connections to your database.
+
+>[!NOTE]
+> Based on customer feedback, we have extended the root certificate deprecation for our existing Baltimore Root CA until February 15, 2021 (02/15/2021).
+
+> [!IMPORTANT]
+> The SSL root certificate is set to expire starting February 15, 2021 (02/15/2021). Please update your application to use the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem). To learn more, see [planned certificate updates](concepts-certificate-rotation.md)
+
+## SSL Default settings
+
+By default, the database service is configured to require SSL connections when connecting to MySQL. We recommend that you avoid disabling the SSL option whenever possible.
+
+When provisioning a new Azure Database for MySQL server through the Azure portal and CLI, enforcement of SSL connections is enabled by default.
+
+Connection strings for various programming languages are shown in the Azure portal, and they include the required SSL parameters for connecting to your database. In the Azure portal, select your server. Under the **Settings** heading, select **Connection strings**. The SSL parameter varies by connector; for example, "ssl=true", "sslmode=require", or "sslmode=required", among other variations.
+
+In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file to connect securely. Currently, customers can **only use** the predefined certificate to connect to an Azure Database for MySQL server, which is available at https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem.
+
+Similarly, the following links point to the certificates for servers in sovereign clouds: [Azure Government](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem), [Azure China](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt).
+
+To learn how to enable or disable SSL connections when developing your application, refer to [How to configure SSL](how-to-configure-ssl.md).
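+
+As a quick illustration, the following `mysql` command-line invocation connects with SSL enforced using the downloaded CA certificate. This is a sketch with placeholder server and user names; the `--ssl-mode` option assumes a MySQL 5.7.11 or later client.
+
+```bash
+# Download BaltimoreCyberTrustRoot.crt.pem first (see the link above), then connect over SSL.
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
+  --ssl-mode=REQUIRED --ssl-ca=BaltimoreCyberTrustRoot.crt.pem
+```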
+
+## TLS enforcement in Azure Database for MySQL
+
+Azure Database for MySQL supports encryption for clients connecting to your database server using Transport Layer Security (TLS). TLS is an industry standard protocol that ensures secure network connections between your database server and client applications, allowing you to adhere to compliance requirements.
+
+### TLS settings
+
+Azure Database for MySQL provides the ability to enforce the TLS version for the client connections. To enforce the TLS version, use the **Minimum TLS version** option setting. The following values are allowed for this option setting:
+
+| Minimum TLS setting | Client TLS version supported |
+|:|-:|
+| TLSEnforcementDisabled (default) | No TLS required |
+| TLS1_0 | TLS 1.0, TLS 1.1, TLS 1.2 and higher |
+| TLS1_1 | TLS 1.1, TLS 1.2 and higher |
+| TLS1_2 | TLS version 1.2 and higher |
++
+For example, setting the minimum TLS version to TLS 1.0 means your server allows connections from clients using TLS 1.0, 1.1, and 1.2+. Alternatively, setting it to TLS 1.2 means that you allow connections only from clients using TLS 1.2+, and all connections using TLS 1.0 or TLS 1.1 are rejected.
+
+> [!Note]
+> By default, Azure Database for MySQL does not enforce a minimum TLS version (the setting `TLSEnforcementDisabled`).
+>
+> Once you enforce a minimum TLS version, you cannot later disable minimum version enforcement.
+
+The minimum TLS version setting doesn't require a server restart and can be set while the server is online. To learn how to set the TLS setting for your Azure Database for MySQL, refer to [How to configure TLS setting](how-to-tls-configurations.md).
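+
+For instance, the following Azure CLI command enforces TLS 1.2 as the minimum version. This is a sketch assuming the `--minimal-tls-version` parameter of `az mysql server update`; the resource group and server names are placeholders.
+
+```azurecli
+# Require clients to connect with TLS 1.2 or higher.
+az mysql server update --resource-group myresourcegroup --name mydemoserver \
+  --minimal-tls-version TLS1_2
+```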
+
+## Cipher support by Azure Database for MySQL Single server
+
+As part of SSL/TLS communication, the cipher suites are validated, and only supported cipher suites are allowed to communicate with the database server. The cipher suite validation is controlled in the [gateway layer](concepts-connectivity-architecture.md#connectivity-architecture) and not explicitly on the node itself. If a client's cipher suite doesn't match one of the suites listed below, the incoming client connection is rejected.
+
+### Supported cipher suites
+
+* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
+* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
+
+## Next steps
+
+- [Connection libraries for Azure Database for MySQL](concepts-connection-libraries.md)
+- Learn how to [configure SSL](how-to-configure-ssl.md)
+- Learn how to [configure TLS](how-to-tls-configurations.md)
mysql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-supported-versions.md
+
+ Title: Supported versions - Azure Database for MySQL
+description: Learn which versions of the MySQL server are supported in the Azure Database for MySQL service.
++++++ Last updated : 11/4/2021+
+# Supported Azure Database for MySQL server versions
++
+Azure Database for MySQL has been developed from [MySQL Community Edition](https://www.mysql.com/products/community/), using the InnoDB storage engine. The service supports all current major versions supported by the community, namely MySQL 5.7 and 8.0. MySQL uses the X.Y.Z naming scheme, where X is the major version, Y is the minor version, and Z is the bug fix release. For more information about the scheme, see the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/which-version.html).
+
+## Connect to a gateway node that is running a specific MySQL version
+
+In the Single Server deployment option, a gateway is used to redirect the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the `SELECT VERSION();` command at the MySQL prompt. Review [Connectivity architecture](./concepts-connectivity-architecture.md#connectivity-architecture) to learn more about gateways in Azure Database for MySQL service architecture.
+
+Because Azure Database for MySQL supports major versions v5.7 and v8.0, the default port 3306 for connecting to Azure Database for MySQL runs MySQL client version 5.6 (the least common denominator) to support connections to servers of both supported major versions. However, if your application needs to connect to a specific major version, say v5.7 or v8.0, you can do so by changing the port in your server connection string.
+
+In the Azure Database for MySQL service, gateway nodes listen on port 3308 for v5.7 clients and on port 3309 for v8.0 clients. In other words, to connect to a v5.7 gateway, use your fully qualified server name and port 3308 to connect to your server from your client application. Similarly, to connect to a v8.0 gateway, use your fully qualified server name and port 3309. Check the following example for further clarity.
++
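+
+The sketch below uses the `mysql` command-line client with placeholder server and user names; `SELECT VERSION();` returns the actual engine version of your server instance regardless of which gateway port you connect through.
+
+```bash
+# Connect through the v5.7 gateway (port 3308) and check the server's actual version.
+mysql -h mydemoserver.mysql.database.azure.com -P 3308 -u myadmin@mydemoserver -p \
+  -e "SELECT VERSION();"
+
+# Connect through the v8.0 gateway (port 3309) instead.
+mysql -h mydemoserver.mysql.database.azure.com -P 3309 -u myadmin@mydemoserver -p \
+  -e "SELECT VERSION();"
+```
+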
+> [!NOTE]
+> Connecting to Azure Database for MySQL via ports 3308 and 3309 is supported only for public connectivity. Private Link and VNet service endpoints can be used only with port 3306.
+
+## Azure Database for MySQL currently supports the following major and minor versions of MySQL
+
+| Version | [Single Server](overview.md) <br/> Current minor version |[Flexible Server](../flexible-server/overview.md) <br/> Current minor version |
+|:-|:-|:|
+|MySQL Version 5.6 | [5.6.47](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-47.html) (Retired) | Not supported|
+|MySQL Version 5.7 | [5.7.32](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-32.html) | [5.7.37](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-37.html)|
+|MySQL Version 8.0 | [8.0.15](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-15.html) | [8.0.28](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-28.html)|
+
+Read the version support policy for retired versions in the [version support policy documentation](concepts-version-policy.md#retired-mysql-engine-versions-not-supported-in-azure-database-for-mysql).
+
+## Managing updates and upgrades
+
+The service automatically manages patching for bug fix version updates (for example, 5.7.20 to 5.7.21).
+
+Major version upgrade is currently supported by the service for upgrades from MySQL v5.6 to v5.7. For more details, refer to [how to perform major version upgrades](how-to-major-version-upgrade.md). If you'd like to upgrade from 5.7 to 8.0, we recommend that you perform a [dump and restore](./concepts-migrate-dump-restore.md) to a server that was created with the new engine version.
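+
+As an illustration, the in-place v5.6 to v5.7 upgrade can be initiated from the Azure CLI. This is a sketch assuming the `az mysql server upgrade` command described in the linked guide; the resource group and server names are placeholders.
+
+```azurecli
+# Upgrade an existing v5.6 single server to v5.7 in place.
+az mysql server upgrade --resource-group myresourcegroup --name mydemoserver \
+  --target-server-version 5.7
+```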
+
+## Next steps
+
+- For details around Azure Database for MySQL versioning policy, see [this document](concepts-version-policy.md).
+- For information about specific resource quotas and limitations based on your **service tier**, see [Service tiers](./concepts-pricing-tiers.md)
mysql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-version-policy.md
+
+ Title: Version support policy - Azure Database for MySQL - Single Server and Flexible Server (Preview)
+description: Describes the policy around MySQL major and minor versions in Azure Database for MySQL
++++++ Last updated : 11/03/2020+
+# Azure Database for MySQL version support policy
++
+This page describes the Azure Database for MySQL versioning policy, and is applicable to Azure Database for MySQL - Single Server and Azure Database for MySQL - Flexible Server (Preview) deployment modes.
+
+## Supported MySQL versions
+
+Azure Database for MySQL has been developed from [MySQL Community Edition](https://www.mysql.com/products/community/), using the InnoDB storage engine. The service supports all current major versions supported by the community, namely MySQL 5.6, 5.7, and 8.0. MySQL uses the X.Y.Z naming scheme, where X is the major version, Y is the minor version, and Z is the bug fix release. For more information about the scheme, see the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/which-version.html).
+
+Azure Database for MySQL currently supports the following major and minor versions of MySQL:
+
+| Version | [Single Server](overview.md) <br/> Current minor version |[Flexible Server](../flexible-server/overview.md) <br/> Current minor version |
+|:-|:-|:|
+|MySQL Version 5.6 | [5.6.47](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-47.html) (Retired) | Not supported|
+|MySQL Version 5.7 | [5.7.29](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-29.html) | [5.7.37](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-37.html)|
+|MySQL Version 8.0 | [8.0.15](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-15.html) | [8.0.28](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-28.html)|
+
+> [!NOTE]
+> In the Single Server deployment option, a gateway is used to redirect the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the `SELECT VERSION();` command at the MySQL prompt. If your application has a requirement to connect to a specific major version, say v5.7 or v8.0, you can do so by changing the port in your server connection string, as explained in our documentation [here](concepts-supported-versions.md#connect-to-a-gateway-node-that-is-running-a-specific-mysql-version).
+
+> [!IMPORTANT]
+> MySQL v5.6 is retired on Single Server as of February 2021. Starting September 1, 2021, you will not be able to create new v5.6 servers with the Azure Database for MySQL - Single Server deployment option. However, you will still be able to perform point-in-time recoveries and create read replicas for your existing servers.
+
+Read the version support policy for retired versions in the [version support policy documentation](concepts-version-policy.md#retired-mysql-engine-versions-not-supported-in-azure-database-for-mysql).
+
+## Major version support
+
+Each major version of MySQL will be supported by Azure Database for MySQL from the date on which Azure begins supporting the version until the version is retired by the MySQL community, as provided in the [versioning policy](https://www.mysql.com/support/eol-notice.html).
+
+## Minor version support
+
+Azure Database for MySQL automatically performs minor version upgrades to the Azure preferred MySQL version as part of periodic maintenance.
+
+## Major version retirement policy
+
+The table below provides the retirement details for MySQL major versions. The dates follow the [MySQL versioning policy](https://www.mysql.com/support/eol-notice.html).
+
+| Version | What's New | Azure support start date | Retirement date|
+| -- | -- | | -- |
+| [MySQL 5.6](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/)| [Features](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-49.html) | March 20, 2018 | February 2021
+| [MySQL 5.7](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/) | [Features](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-31.html) | March 20, 2018 | October 2023
+| [MySQL 8](https://mysqlserverteam.com/whats-new-in-mysql-8-0-generally-available/) | [Features](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-21.html) | December 11, 2019 | April 2026
+
+## Retired MySQL engine versions not supported in Azure Database for MySQL
+
+After the retirement date for each MySQL database version, if you continue running the retired version, note the following restrictions:
+
+- As the community will not be releasing any further bug fixes or security fixes, Azure Database for MySQL will not patch the retired database engine for any bugs or security issues or otherwise take security measures with regard to the retired database engine. However, Azure will continue to perform periodic maintenance and patching for the host, OS, containers, and any other service-related components.
+- If a support issue you experience relates to the MySQL database engine, we may not be able to provide you with support. In such cases, you will have to upgrade your database in order for us to provide you with any support.
+- You will not be able to create new database servers for the retired version. However, you will be able to perform point-in-time recoveries and create read replicas for your existing servers.
+- New service capabilities developed by Azure Database for MySQL may only be available to supported database server versions.
+- Uptime SLAs will apply solely to Azure Database for MySQL service-related issues and not to any downtime caused by database engine-related bugs.
+- In the extreme event of a serious threat to the service caused by a MySQL database engine vulnerability identified in the retired database version, Azure may choose to stop the compute node of your database server to secure the service first. You will be asked to upgrade the server before bringing it back online. During the upgrade process, your data will always be protected using automatic backups performed on the service, which can be used to restore to the older version if desired.
+
+## Next steps
+
+- See Azure Database for MySQL - Single Server [supported versions](./concepts-supported-versions.md)
+- See Azure Database for MySQL - Flexible Server [supported versions](../flexible-server/concepts-supported-versions.md)
+- See MySQL [dump and restore](./concepts-migrate-dump-restore.md) to perform upgrades.
mysql Connect Cpp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-cpp.md
+
+ Title: 'Quickstart: Connect using C++ - Azure Database for MySQL'
+description: This quickstart provides a C++ code sample you can use to connect and query data from Azure Database for MySQL.
+++++
+ms.devlang: cpp
+ Last updated : 5/26/2020
+adobe-target: true
++
+# Quickstart: Use Connector/C++ to connect and query data in Azure Database for MySQL
++
+This quickstart demonstrates how to connect to an Azure Database for MySQL by using a C++ application. It shows how to use SQL statements to query, insert, update, and delete data in the database. This topic assumes you're familiar with developing using C++ and you're new to working with Azure Database for MySQL.
+
+## Prerequisites
+
+This quickstart uses the resources created in either of the following guides as a starting point:
+- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)
+- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)
+
+You also need to:
+- Install [.NET Framework](https://dotnet.microsoft.com/download/dotnet-framework)
+- Install [Visual Studio](https://www.visualstudio.com/downloads/)
+- Install [MySQL Connector/C++](https://dev.mysql.com/downloads/connector/cpp/)
+- Install [Boost](https://www.boost.org/)
+
+> [!IMPORTANT]
+> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md)
+
+## Install Visual Studio and .NET
+The steps in this section assume that you're familiar with developing using .NET.
+
+### **Windows**
+- Install Visual Studio 2019 Community. Visual Studio 2019 Community is a full-featured, extensible, free IDE. With this IDE, you can create modern applications for Android, iOS, Windows, web and database applications, and cloud services. You can install either the full .NET Framework or just .NET Core: the code snippets in the Quickstart work with either. If you already have Visual Studio installed on your computer, skip the next two steps.
+ 1. Download the [Visual Studio 2019 installer](https://www.visualstudio.com/thank-you-downloading-visual-studio/?sku=Community&rel=15).
+ 2. Run the installer and follow the installation prompts to complete the installation.
+
+### **Configure Visual Studio**
+1. From Visual Studio, Project -> Properties -> Linker -> General -> Additional Library Directories, add the "\lib\opt" directory (for example: C:\Program Files (x86)\MySQL\MySQL Connector C++ 1.1.9\lib\opt) of the C++ connector.
+2. From Visual Studio, Project -> Properties -> C/C++ -> General -> Additional Include Directories:
+ - Add the "\include" directory of the C++ connector (for example: C:\Program Files (x86)\MySQL\MySQL Connector C++ 1.1.9\include\).
+ - Add the Boost library's root directory (for example: C:\boost_1_64_0\).
+3. From Visual Studio, Project -> Properties -> Linker -> Input -> Additional Dependencies, add **mysqlcppconn.lib** to the text field.
+4. Either copy **mysqlcppconn.dll** from the C++ connector's library folder (the "\lib\opt" directory from step 1) to the same directory as the application executable, or add its location to the PATH environment variable so your application can find it.
+
+## Get connection information
+Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
+3. Click the server name.
+4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
+ :::image type="content" source="./media/connect-cpp/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
+
+## Connect, create table, and insert data
+Use the following code to connect and load the data by using **CREATE TABLE** and **INSERT INTO** SQL statements. The code uses the sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses the createStatement() and execute() methods to run the database commands.
+
+Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
+
+```c++
+#include <stdlib.h>
+#include <iostream>
+#include "stdafx.h"
+
+#include "mysql_connection.h"
+#include <cppconn/driver.h>
+#include <cppconn/exception.h>
+#include <cppconn/prepared_statement.h>
+using namespace std;
+
+//for demonstration only. never save your password in the code!
+const string server = "tcp://yourservername.mysql.database.azure.com:3306";
+const string username = "username@servername";
+const string password = "yourpassword";
+
+int main()
+{
+ sql::Driver *driver;
+ sql::Connection *con;
+ sql::Statement *stmt;
+ sql::PreparedStatement *pstmt;
+
+ try
+ {
+ driver = get_driver_instance();
+ con = driver->connect(server, username, password);
+ }
+ catch (sql::SQLException &e)
+ {
+ cout << "Could not connect to server. Error message: " << e.what() << endl;
+ system("pause");
+ exit(1);
+ }
+
+ //please create database "quickstartdb" ahead of time
+ con->setSchema("quickstartdb");
+
+ stmt = con->createStatement();
+ stmt->execute("DROP TABLE IF EXISTS inventory");
+ cout << "Finished dropping table (if existed)" << endl;
+ stmt->execute("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);");
+ cout << "Finished creating table" << endl;
+ delete stmt;
+
+ pstmt = con->prepareStatement("INSERT INTO inventory(name, quantity) VALUES(?,?)");
+ pstmt->setString(1, "banana");
+ pstmt->setInt(2, 150);
+ pstmt->execute();
+ cout << "One row inserted." << endl;
+
+ pstmt->setString(1, "orange");
+ pstmt->setInt(2, 154);
+ pstmt->execute();
+ cout << "One row inserted." << endl;
+
+ pstmt->setString(1, "apple");
+ pstmt->setInt(2, 100);
+ pstmt->execute();
+ cout << "One row inserted." << endl;
+
+ delete pstmt;
+ delete con;
+ system("pause");
+ return 0;
+}
+```
+
+## Read data
+
+Use the following code to connect and read the data by using a **SELECT** SQL statement. The code uses the sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses the prepareStatement() and executeQuery() methods to run the select command. Next, the code uses next() to advance to the records in the results. Finally, the code uses getInt() and getString() to parse the values in the record.
+
+Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
+
+```c++
+#include <stdlib.h>
+#include <iostream>
+#include "stdafx.h"
+
+#include "mysql_connection.h"
+#include <cppconn/driver.h>
+#include <cppconn/exception.h>
+#include <cppconn/resultset.h>
+#include <cppconn/prepared_statement.h>
+using namespace std;
+
+//for demonstration only. never save your password in the code!
+const string server = "tcp://yourservername.mysql.database.azure.com:3306";
+const string username = "username@servername";
+const string password = "yourpassword";
+
+int main()
+{
+ sql::Driver *driver;
+ sql::Connection *con;
+ sql::PreparedStatement *pstmt;
+ sql::ResultSet *result;
+
+ try
+ {
+ driver = get_driver_instance();
+ //for demonstration only. never save password in the code!
+ con = driver->connect(server, username, password);
+ }
+ catch (sql::SQLException &e)
+ {
+ cout << "Could not connect to server. Error message: " << e.what() << endl;
+ system("pause");
+ exit(1);
+ }
+
+ con->setSchema("quickstartdb");
+
+ //select
+ pstmt = con->prepareStatement("SELECT * FROM inventory;");
+ result = pstmt->executeQuery();
+
+ while (result->next())
+ printf("Reading from table=(%d, %s, %d)\n", result->getInt(1), result->getString(2).c_str(), result->getInt(3));
+
+ delete result;
+ delete pstmt;
+ delete con;
+ system("pause");
+ return 0;
+}
+```
+
+## Update data
+Use the following code to connect and update the data by using an **UPDATE** SQL statement. The code uses the sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses the prepareStatement() and executeUpdate() methods to run the update command.
+
+Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
+
+```c++
+#include <stdlib.h>
+#include <iostream>
+#include "stdafx.h"
+
+#include "mysql_connection.h"
+#include <cppconn/driver.h>
+#include <cppconn/exception.h>
+#include <cppconn/resultset.h>
+#include <cppconn/prepared_statement.h>
+using namespace std;
+
+//for demonstration only. never save your password in the code!
+const string server = "tcp://yourservername.mysql.database.azure.com:3306";
+const string username = "username@servername";
+const string password = "yourpassword";
+
+int main()
+{
+ sql::Driver *driver;
+ sql::Connection *con;
+ sql::PreparedStatement *pstmt;
+
+ try
+ {
+ driver = get_driver_instance();
+ //for demonstration only. never save password in the code!
+ con = driver->connect(server, username, password);
+ }
+ catch (sql::SQLException &e)
+ {
+ cout << "Could not connect to server. Error message: " << e.what() << endl;
+ system("pause");
+ exit(1);
+ }
+
+ con->setSchema("quickstartdb");
+
+ //update
+ pstmt = con->prepareStatement("UPDATE inventory SET quantity = ? WHERE name = ?");
+ pstmt->setInt(1, 200);
+ pstmt->setString(2, "banana");
+ pstmt->executeUpdate();
+ printf("Row updated\n");
+
+ delete pstmt;
+ delete con;
+ system("pause");
+ return 0;
+}
+```
++
+## Delete data
+Use the following code to connect and delete the data by using a **DELETE** SQL statement. The code uses the sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses the prepareStatement() and executeUpdate() methods to run the delete command.
+
+Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
+
+```c++
+#include <stdlib.h>
+#include <iostream>
+#include "stdafx.h"
+
+#include "mysql_connection.h"
+#include <cppconn/driver.h>
+#include <cppconn/exception.h>
+#include <cppconn/resultset.h>
+#include <cppconn/prepared_statement.h>
+using namespace std;
+
+//for demonstration only. never save your password in the code!
+const string server = "tcp://yourservername.mysql.database.azure.com:3306";
+const string username = "username@servername";
+const string password = "yourpassword";
+
+int main()
+{
+ sql::Driver *driver;
+ sql::Connection *con;
+ sql::PreparedStatement *pstmt;
+
+ try
+ {
+ driver = get_driver_instance();
+ //for demonstration only. never save password in the code!
+ con = driver->connect(server, username, password);
+ }
+ catch (sql::SQLException &e)
+ {
+ cout << "Could not connect to server. Error message: " << e.what() << endl;
+ system("pause");
+ exit(1);
+ }
+
+ con->setSchema("quickstartdb");
+
+ //delete
+ pstmt = con->prepareStatement("DELETE FROM inventory WHERE name = ?");
+ pstmt->setString(1, "orange");
+ result = pstmt->executeQuery();
+ printf("Row deleted\n");
+
+ delete pstmt;
+ delete con;
+ system("pause");
+ return 0;
+}
+```
+
+## Clean up resources
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Migrate your MySQL database to Azure Database for MySQL using dump and restore](concepts-migrate-dump-restore.md)
mysql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-csharp.md
+
+ Title: 'Quickstart: Connect using C# - Azure Database for MySQL'
+description: "This quickstart provides a C# (.NET) code sample you can use to connect and query data from Azure Database for MySQL."
+++++
+ms.devlang: csharp
+ Last updated : 10/18/2020++
+# Quickstart: Use .NET (C#) to connect and query data in Azure Database for MySQL
++
+This quickstart demonstrates how to connect to an Azure Database for MySQL by using a C# application. It shows how to use SQL statements to query, insert, update, and delete data in the database.
+
+## Prerequisites
+For this quickstart you need:
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- An Azure Database for MySQL single server. Create one by using the [Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md) <br/> or the [Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md) if you don't already have one.
+- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.
+- Install the [.NET SDK](https://dotnet.microsoft.com/download) for your platform (Windows, Ubuntu Linux, or macOS).
+
+|Action| Connectivity method|How-to guide|
+|: |: |: |
+| **Configure firewall rules** | Public | [Portal](./how-to-manage-firewall-using-portal.md) <br/> [CLI](./how-to-manage-firewall-using-cli.md)|
+| **Configure Service Endpoint** | Public | [Portal](./how-to-manage-vnet-using-portal.md) <br/> [CLI](./how-to-manage-vnet-using-cli.md)|
+| **Configure private link** | Private | [Portal](./how-to-configure-private-link-portal.md) <br/> [CLI](./how-to-configure-private-link-cli.md) |
+
+- [Create a database and non-admin user](./how-to-create-users.md)
+
+[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
+
+## Create a C# project
+At a command prompt, run:
+
+```bash
+mkdir AzureMySqlExample
+cd AzureMySqlExample
+dotnet new console
+dotnet add package MySqlConnector
+```
+
+## Get connection information
+Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
+
+1. Log in to the [Azure portal](https://portal.azure.com/).
+2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
+3. Click the server name.
+4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
+ :::image type="content" source="./media/connect-csharp/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
+
+## Step 1: Connect and insert data
+Use the following code to connect and load the data by using `CREATE TABLE` and `INSERT INTO` SQL statements. The code uses the methods of the `MySqlConnection` class:
+- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.
+- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.
+- [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands.
+
+Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using MySqlConnector;
+
+namespace AzureMySqlExample
+{
+ class MySqlCreate
+ {
+ static async Task Main(string[] args)
+ {
+ var builder = new MySqlConnectionStringBuilder
+ {
+ Server = "YOUR-SERVER.mysql.database.azure.com",
+ Database = "YOUR-DATABASE",
+ UserID = "USER@YOUR-SERVER",
+ Password = "PASSWORD",
+ SslMode = MySqlSslMode.Required,
+ };
+
+ using (var conn = new MySqlConnection(builder.ConnectionString))
+ {
+ Console.WriteLine("Opening connection");
+ await conn.OpenAsync();
+
+ using (var command = conn.CreateCommand())
+ {
+ command.CommandText = "DROP TABLE IF EXISTS inventory;";
+ await command.ExecuteNonQueryAsync();
+ Console.WriteLine("Finished dropping table (if existed)");
+
+ command.CommandText = "CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);";
+ await command.ExecuteNonQueryAsync();
+ Console.WriteLine("Finished creating table");
+
+ command.CommandText = @"INSERT INTO inventory (name, quantity) VALUES (@name1, @quantity1),
+ (@name2, @quantity2), (@name3, @quantity3);";
+ command.Parameters.AddWithValue("@name1", "banana");
+ command.Parameters.AddWithValue("@quantity1", 150);
+ command.Parameters.AddWithValue("@name2", "orange");
+ command.Parameters.AddWithValue("@quantity2", 154);
+ command.Parameters.AddWithValue("@name3", "apple");
+ command.Parameters.AddWithValue("@quantity3", 100);
+
+ int rowCount = await command.ExecuteNonQueryAsync();
+ Console.WriteLine(String.Format("Number of rows inserted={0}", rowCount));
+ }
+
+ // connection will be closed by the 'using' block
+ Console.WriteLine("Closing connection");
+ }
+
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
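+
+To try the example, save the code as `Program.cs` in the project you created earlier (replacing the template file) and run it from the project folder:
+
+```bash
+dotnet run
+```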
+
+[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
++
+## Step 2: Read data
+
+Use the following code to connect and read the data by using a `SELECT` SQL statement. The code uses the `MySqlConnection` class with methods:
+- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.
+- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.
+- [ExecuteReaderAsync()](/dotnet/api/system.data.common.dbcommand.executereaderasync) to run the database commands.
+- [ReadAsync()](/dotnet/api/system.data.common.dbdatareader.readasync#System_Data_Common_DbDataReader_ReadAsync) to advance to the records in the results. Then the code uses GetInt32 and GetString to parse the values in the record.
++
+Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using MySqlConnector;
+
+namespace AzureMySqlExample
+{
+ class MySqlRead
+ {
+ static async Task Main(string[] args)
+ {
+ var builder = new MySqlConnectionStringBuilder
+ {
+ Server = "YOUR-SERVER.mysql.database.azure.com",
+ Database = "YOUR-DATABASE",
+ UserID = "USER@YOUR-SERVER",
+ Password = "PASSWORD",
+ SslMode = MySqlSslMode.Required,
+ };
+
+ using (var conn = new MySqlConnection(builder.ConnectionString))
+ {
+ Console.WriteLine("Opening connection");
+ await conn.OpenAsync();
+
+ using (var command = conn.CreateCommand())
+ {
+ command.CommandText = "SELECT * FROM inventory;";
+
+ using (var reader = await command.ExecuteReaderAsync())
+ {
+ while (await reader.ReadAsync())
+ {
+ Console.WriteLine(string.Format(
+ "Reading from table=({0}, {1}, {2})",
+ reader.GetInt32(0),
+ reader.GetString(1),
+ reader.GetInt32(2)));
+ }
+ }
+ }
+
+ Console.WriteLine("Closing connection");
+ }
+
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
+[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
+
+## Step 3: Update data
+Use the following code to connect and update the data by using an `UPDATE` SQL statement. The code uses the `MySqlConnection` class with these methods:
+- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.
+- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.
+- [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands.
++
+Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using MySqlConnector;
+
+namespace AzureMySqlExample
+{
+ class MySqlUpdate
+ {
+ static async Task Main(string[] args)
+ {
+ var builder = new MySqlConnectionStringBuilder
+ {
+ Server = "YOUR-SERVER.mysql.database.azure.com",
+ Database = "YOUR-DATABASE",
+ UserID = "USER@YOUR-SERVER",
+ Password = "PASSWORD",
+ SslMode = MySqlSslMode.Required,
+ };
+
+ using (var conn = new MySqlConnection(builder.ConnectionString))
+ {
+ Console.WriteLine("Opening connection");
+ await conn.OpenAsync();
+
+ using (var command = conn.CreateCommand())
+ {
+ command.CommandText = "UPDATE inventory SET quantity = @quantity WHERE name = @name;";
+ command.Parameters.AddWithValue("@quantity", 200);
+ command.Parameters.AddWithValue("@name", "banana");
+
+ int rowCount = await command.ExecuteNonQueryAsync();
+ Console.WriteLine(String.Format("Number of rows updated={0}", rowCount));
+ }
+
+ Console.WriteLine("Closing connection");
+ }
+
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
+
+## Step 4: Delete data
+Use the following code to connect and delete the data by using a `DELETE` SQL statement.
+
+The code uses the `MySqlConnection` class with these methods:
+- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.
+- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.
+- [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands.
++
+Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using MySqlConnector;
+
+namespace AzureMySqlExample
+{
+ class MySqlDelete
+ {
+ static async Task Main(string[] args)
+ {
+ var builder = new MySqlConnectionStringBuilder
+ {
+ Server = "YOUR-SERVER.mysql.database.azure.com",
+ Database = "YOUR-DATABASE",
+ UserID = "USER@YOUR-SERVER",
+ Password = "PASSWORD",
+ SslMode = MySqlSslMode.Required,
+ };
+
+ using (var conn = new MySqlConnection(builder.ConnectionString))
+ {
+ Console.WriteLine("Opening connection");
+ await conn.OpenAsync();
+
+ using (var command = conn.CreateCommand())
+ {
+ command.CommandText = "DELETE FROM inventory WHERE name = @name;";
+ command.Parameters.AddWithValue("@name", "orange");
+
+ int rowCount = await command.ExecuteNonQueryAsync();
+ Console.WriteLine(String.Format("Number of rows deleted={0}", rowCount));
+ }
+
+ Console.WriteLine("Closing connection");
+ }
+
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
+## Clean up resources
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Manage Azure Database for MySQL server using Portal](./how-to-create-manage-server-portal.md)<br/>
+
+> [!div class="nextstepaction"]
+> [Manage Azure Database for MySQL server using CLI](./how-to-manage-single-server-cli.md)
+
+[Cannot find what you are looking for? Let us know.](https://aka.ms/mysql-doc-feedback)
mysql Connect Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-go.md
+
+ Title: 'Quickstart: Connect using Go - Azure Database for MySQL'
+description: This quickstart provides several Go code samples you can use to connect and query data from Azure Database for MySQL.
+++++
+ms.devlang: golang
+ Last updated : 5/26/2020++
+# Quickstart: Use Go language to connect and query data in Azure Database for MySQL
++
+This quickstart demonstrates how to connect to an Azure Database for MySQL from Windows, Ubuntu Linux, and Apple macOS platforms by using code written in the [Go](https://go.dev/) language. It shows how to use SQL statements to query, insert, update, and delete data in the database. This topic assumes that you are familiar with development using Go and that you are new to working with Azure Database for MySQL.
+
+## Prerequisites
+This quickstart uses the resources created in either of these guides as a starting point:
+- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)
+- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)
+
+> [!IMPORTANT]
+> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md)
+
+## Install Go and MySQL connector
+Install [Go](https://go.dev/doc/install) and the [go-sql-driver for MySQL](https://github.com/go-sql-driver/mysql#installation) on your own computer. Depending on your platform, follow the steps in the appropriate section:
+
+### Windows
+1. [Download](https://go.dev/dl/) and install Go for Microsoft Windows according to the [installation instructions](https://go.dev/doc/install).
+2. Launch the command prompt from the start menu.
+3. Make a folder for your project, such as `mkdir %USERPROFILE%\go\src\mysqlgo`.
+4. Change directory into the project folder, such as `cd %USERPROFILE%\go\src\mysqlgo`.
+5. Set the environment variable for GOPATH to point to the source code directory. `set GOPATH=%USERPROFILE%\go`.
+6. Install the [go-sql-driver for mysql](https://github.com/go-sql-driver/mysql#installation) by running the `go get github.com/go-sql-driver/mysql` command.
+
+ In summary, install Go, then run these commands in the command prompt:
+ ```cmd
+ mkdir %USERPROFILE%\go\src\mysqlgo
+ cd %USERPROFILE%\go\src\mysqlgo
+ set GOPATH=%USERPROFILE%\go
+ go get github.com/go-sql-driver/mysql
+ ```
+
+### Linux (Ubuntu)
+1. Launch the Bash shell.
+2. Install Go by running `sudo apt-get install golang-go`.
+3. Make a folder for your project in your home directory, such as `mkdir -p ~/go/src/mysqlgo/`.
+4. Change directory into the folder, such as `cd ~/go/src/mysqlgo/`.
+5. Set the GOPATH environment variable to point to a valid source directory, such as your current home directory's go folder. At the Bash shell, run `export GOPATH=~/go` to add the go directory as the GOPATH for the current shell session.
+6. Install the [go-sql-driver for mysql](https://github.com/go-sql-driver/mysql#installation) by running the `go get github.com/go-sql-driver/mysql` command.
+
+ In summary, run these bash commands:
+ ```bash
+ sudo apt-get install golang-go
+ mkdir -p ~/go/src/mysqlgo/
+ cd ~/go/src/mysqlgo/
+ export GOPATH=~/go/
+ go get github.com/go-sql-driver/mysql
+ ```
+
+### Apple macOS
+1. Download and install Go according to the [installation instructions](https://go.dev/doc/install) matching your platform.
+2. Launch the Bash shell.
+3. Make a folder for your project in your home directory, such as `mkdir -p ~/go/src/mysqlgo/`.
+4. Change directory into the folder, such as `cd ~/go/src/mysqlgo/`.
+5. Set the GOPATH environment variable to point to a valid source directory, such as your current home directory's go folder. At the Bash shell, run `export GOPATH=~/go` to add the go directory as the GOPATH for the current shell session.
+6. Install the [go-sql-driver for mysql](https://github.com/go-sql-driver/mysql#installation) by running the `go get github.com/go-sql-driver/mysql` command.
+
+ In summary, install Go, then run these bash commands:
+ ```bash
+ mkdir -p ~/go/src/mysqlgo/
+ cd ~/go/src/mysqlgo/
+ export GOPATH=~/go/
+ go get github.com/go-sql-driver/mysql
+ ```
+
+## Get connection information
+Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
+
+1. Log in to the [Azure portal](https://portal.azure.com/).
+2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
+3. Click the server name.
+4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
+ :::image type="content" source="./media/connect-go/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
+
+
+## Build and run Go code
+1. To write Golang code, you can use a simple text editor, such as Notepad in Microsoft Windows, [vi](https://manpages.ubuntu.com/manpages/xenial/man1/nvi.1.html#contenttoc5) or [Nano](https://www.nano-editor.org/) in Ubuntu, or TextEdit in macOS. If you prefer a richer Interactive Development Environment (IDE), try [Gogland](https://www.jetbrains.com/go/) by Jetbrains, [Visual Studio Code](https://code.visualstudio.com/) by Microsoft, or [Atom](https://atom.io/).
+2. Paste the Go code from the sections below into text files, and then save them into your project folder with file extension \*.go (such as Windows path `%USERPROFILE%\go\src\mysqlgo\createtable.go` or Linux path `~/go/src/mysqlgo/createtable.go`).
+3. Locate the `HOST`, `DATABASE`, `USER`, and `PASSWORD` constants in the code, and then replace the example values with your own values.
+4. Launch the command prompt or Bash shell. Change directory into your project folder. For example, on Windows `cd %USERPROFILE%\go\src\mysqlgo\`. On Linux `cd ~/go/src/mysqlgo/`. Some of the IDE editors mentioned offer debug and runtime capabilities without requiring shell commands.
+5. Run the code by typing the command `go run createtable.go` to compile the application and run it.
+6. Alternatively, to build the code into a native application, run `go build createtable.go`, and then launch `createtable.exe` (on Windows) or `./createtable` (on Linux or macOS) to run the application.
+
+## Connect, create table, and insert data
+Use the following code to connect to the server, create a table, and load the data by using an **INSERT** SQL statement.
+
+The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line.
+
+The code calls the [sql.Open()](http://go-database-sql.org/accessing.html) method to connect to Azure Database for MySQL, and it checks the connection by using the [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping) method. A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method several times to run several DDL commands. The code also uses [Prepare()](http://go-database-sql.org/prepared.html) and Exec() to run prepared statements with different parameters to insert three rows. Each time, a custom checkError() function is used to check whether an error occurred and panic to exit.
+
+Replace the `host`, `database`, `user`, and `password` constants with your own values.
+
+```Go
+package main
+
+import (
+ "database/sql"
+ "fmt"
+
+ _ "github.com/go-sql-driver/mysql"
+)
+
+const (
+ host = "mydemoserver.mysql.database.azure.com"
+ database = "quickstartdb"
+ user = "myadmin@mydemoserver"
+ password = "yourpassword"
+)
+
+func checkError(err error) {
+ if err != nil {
+ panic(err)
+ }
+}
+
+func main() {
+
+ // Initialize connection string.
+ var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database)
+
+ // Initialize connection object.
+ db, err := sql.Open("mysql", connectionString)
+ checkError(err)
+ defer db.Close()
+
+ err = db.Ping()
+ checkError(err)
+ fmt.Println("Successfully created connection to database.")
+
+ // Drop previous table of same name if one exists.
+ _, err = db.Exec("DROP TABLE IF EXISTS inventory;")
+ checkError(err)
+ fmt.Println("Finished dropping table (if existed).")
+
+ // Create table.
+ _, err = db.Exec("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);")
+ checkError(err)
+ fmt.Println("Finished creating table.")
+
+ // Insert some data into table.
+ sqlStatement, err := db.Prepare("INSERT INTO inventory (name, quantity) VALUES (?, ?);")
+ checkError(err)
+ res, err := sqlStatement.Exec("banana", 150)
+ checkError(err)
+ rowCount, err := res.RowsAffected()
+ fmt.Printf("Inserted %d row(s) of data.\n", rowCount)
+
+ res, err = sqlStatement.Exec("orange", 154)
+ checkError(err)
+ rowCount, err = res.RowsAffected()
+ fmt.Printf("Inserted %d row(s) of data.\n", rowCount)
+
+ res, err = sqlStatement.Exec("apple", 100)
+ checkError(err)
+ rowCount, err = res.RowsAffected()
+ fmt.Printf("Inserted %d row(s) of data.\n", rowCount)
+ fmt.Println("Done.")
+}
+
+```
+
+## Read data
+Use the following code to connect and read the data by using a **SELECT** SQL statement.
+
+The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line.
+
+The code calls the [sql.Open()](http://go-database-sql.org/accessing.html) method to connect to Azure Database for MySQL, and checks the connection by using the [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping) method. A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Query()](https://go.dev/pkg/database/sql/#DB.Query) method to run the select command. Then it runs [Next()](https://go.dev/pkg/database/sql/#Rows.Next) to iterate through the result set and [Scan()](https://go.dev/pkg/database/sql/#Rows.Scan) to parse the column values, saving each value into variables. Each time, a custom checkError() function is used to check whether an error occurred and panic to exit.
+
+Replace the `host`, `database`, `user`, and `password` constants with your own values.
+
+```Go
+package main
+
+import (
+ "database/sql"
+ "fmt"
+
+ _ "github.com/go-sql-driver/mysql"
+)
+
+const (
+ host = "mydemoserver.mysql.database.azure.com"
+ database = "quickstartdb"
+ user = "myadmin@mydemoserver"
+ password = "yourpassword"
+)
+
+func checkError(err error) {
+ if err != nil {
+ panic(err)
+ }
+}
+
+func main() {
+
+ // Initialize connection string.
+ var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database)
+
+ // Initialize connection object.
+ db, err := sql.Open("mysql", connectionString)
+ checkError(err)
+ defer db.Close()
+
+ err = db.Ping()
+ checkError(err)
+ fmt.Println("Successfully created connection to database.")
+
+ // Variables for printing column data when scanned.
+ var (
+ id int
+ name string
+ quantity int
+ )
+
+ // Read some data from the table.
+ rows, err := db.Query("SELECT id, name, quantity from inventory;")
+ checkError(err)
+ defer rows.Close()
+ fmt.Println("Reading data:")
+ for rows.Next() {
+ err := rows.Scan(&id, &name, &quantity)
+ checkError(err)
+ fmt.Printf("Data row = (%d, %s, %d)\n", id, name, quantity)
+ }
+ err = rows.Err()
+ checkError(err)
+ fmt.Println("Done.")
+}
+```
+
+## Update data
+Use the following code to connect and update the data by using an **UPDATE** SQL statement.
+
+The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line.
+
+The code calls the [sql.Open()](http://go-database-sql.org/accessing.html) method to connect to Azure Database for MySQL, and checks the connection by using the [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping) method. A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method to run the update command. Each time, a custom checkError() function is used to check whether an error occurred and panic to exit.
+
+Replace the `host`, `database`, `user`, and `password` constants with your own values.
+
+```Go
+package main
+
+import (
+ "database/sql"
+ "fmt"
+
+ _ "github.com/go-sql-driver/mysql"
+)
+
+const (
+ host = "mydemoserver.mysql.database.azure.com"
+ database = "quickstartdb"
+ user = "myadmin@mydemoserver"
+ password = "yourpassword"
+)
+
+func checkError(err error) {
+ if err != nil {
+ panic(err)
+ }
+}
+
+func main() {
+
+ // Initialize connection string.
+ var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database)
+
+ // Initialize connection object.
+ db, err := sql.Open("mysql", connectionString)
+ checkError(err)
+ defer db.Close()
+
+ err = db.Ping()
+ checkError(err)
+ fmt.Println("Successfully created connection to database.")
+
+ // Modify some data in table.
+ rows, err := db.Exec("UPDATE inventory SET quantity = ? WHERE name = ?", 200, "banana")
+ checkError(err)
+ rowCount, err := rows.RowsAffected()
+ fmt.Printf("Updated %d row(s) of data.\n", rowCount)
+ fmt.Println("Done.")
+}
+```
+
+## Delete data
+Use the following code to connect and remove data using a **DELETE** SQL statement.
+
+The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line.
+
+The code calls the [sql.Open()](http://go-database-sql.org/accessing.html) method to connect to Azure Database for MySQL, and checks the connection by using the [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping) method. A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method to run the delete command. Each time, a custom checkError() function is used to check whether an error occurred and panic to exit.
+
+Replace the `host`, `database`, `user`, and `password` constants with your own values.
+
+```Go
+package main
+
+import (
+ "database/sql"
+ "fmt"
+ _ "github.com/go-sql-driver/mysql"
+)
+
+const (
+ host = "mydemoserver.mysql.database.azure.com"
+ database = "quickstartdb"
+ user = "myadmin@mydemoserver"
+ password = "yourpassword"
+)
+
+func checkError(err error) {
+ if err != nil {
+ panic(err)
+ }
+}
+
+func main() {
+
+ // Initialize connection string.
+ var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database)
+
+ // Initialize connection object.
+ db, err := sql.Open("mysql", connectionString)
+ checkError(err)
+ defer db.Close()
+
+ err = db.Ping()
+ checkError(err)
+ fmt.Println("Successfully created connection to database.")
+
+ // Delete some data from table.
+ result, err := db.Exec("DELETE FROM inventory WHERE name = ?", "orange")
+ checkError(err)
+ rowCount, err := result.RowsAffected()
+ checkError(err)
+ fmt.Printf("Deleted %d row(s) of data.\n", rowCount)
+ fmt.Println("Done.")
+}
+```
+
+## Clean up resources
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Migrate your database using Export and Import](./concepts-migrate-import-export.md)
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-java.md
+
+title: 'Quickstart: Use Java and JDBC with Azure Database for MySQL'
+description: Learn how to use Java and JDBC with an Azure Database for MySQL database.
+ms.devlang: java
+Last updated: 08/17/2020
+# Quickstart: Use Java and JDBC with Azure Database for MySQL
+
+This topic demonstrates creating a sample application that uses Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to store and retrieve information in [Azure Database for MySQL](./index.yml).
+
+JDBC is the standard Java API to connect to traditional relational databases.
+
+## Prerequisites
+
+- An Azure account. If you don't have one, [get a free trial](https://azure.microsoft.com/free/).
+- [Azure Cloud Shell](../../cloud-shell/quickstart.md) or [Azure CLI](/cli/azure/install-azure-cli). We recommend Azure Cloud Shell so you'll be logged in automatically and have access to all the tools you'll need.
+- A supported [Java Development Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8 (included in Azure Cloud Shell).
+- The [Apache Maven](https://maven.apache.org/) build tool.
+
+## Prepare the working environment
+
+We are going to use environment variables to limit typing mistakes, and to make it easier for you to customize the following configuration for your specific needs.
+
+Set up those environment variables by using the following commands:
+
+```bash
+AZ_RESOURCE_GROUP=database-workshop
+AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
+AZ_LOCATION=<YOUR_AZURE_REGION>
+AZ_MYSQL_USERNAME=demo
+AZ_MYSQL_PASSWORD=<YOUR_MYSQL_PASSWORD>
+AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
+```
+
+Replace the placeholders with the following values, which are used throughout this article:
+
+- `<YOUR_DATABASE_NAME>`: The name of your MySQL server. It should be unique across Azure.
+- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can get the full list of available regions by entering `az account list-locations`.
+- `<YOUR_MYSQL_PASSWORD>`: The password of your MySQL database server. That password should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).
+- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Java application. One convenient way to find it is to point your browser to [whatismyip.akamai.com](http://whatismyip.akamai.com/).
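+
+For example, a filled-in configuration might look like the following; the server name, password, and IP address are purely illustrative values:
+
+```bash
+AZ_RESOURCE_GROUP=database-workshop
+AZ_DATABASE_NAME=mysql-javademo-42
+AZ_LOCATION=eastus
+AZ_MYSQL_USERNAME=demo
+AZ_MYSQL_PASSWORD='P@ssw0rd-1234'
+AZ_LOCAL_IP_ADDRESS=203.0.113.17
+```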
+
+Next, create a resource group:
+
+```azurecli
+az group create \
+ --name $AZ_RESOURCE_GROUP \
+ --location $AZ_LOCATION \
+ | jq
+```
+
+> [!NOTE]
+> We use the `jq` utility, which is installed by default on [Azure Cloud Shell](https://shell.azure.com/), to display JSON data and make it more readable.
+> If you don't like that utility, you can safely remove the `| jq` part of all the commands we'll use.
+
+## Create an Azure Database for MySQL instance
+
+The first thing we'll create is a managed MySQL server.
+
+> [!NOTE]
+> You can read more detailed information about creating MySQL servers in [Create an Azure Database for MySQL server by using the Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md).
+
+In [Azure Cloud Shell](https://shell.azure.com/), run the following script:
+
+```azurecli
+az mysql server create \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --name $AZ_DATABASE_NAME \
+ --location $AZ_LOCATION \
+ --sku-name B_Gen5_1 \
+ --storage-size 5120 \
+ --admin-user $AZ_MYSQL_USERNAME \
+ --admin-password $AZ_MYSQL_PASSWORD \
+ | jq
+```
+
+This command creates a small MySQL server.
+
+### Configure a firewall rule for your MySQL server
+
+Azure Database for MySQL instances are secured by default. They have a firewall that doesn't allow any incoming connection. To be able to use your database, you need to add a firewall rule that allows your local IP address to access the database server.
+
+Because you configured your local IP address at the beginning of this article, you can open the server's firewall by running:
+
+```azurecli
+az mysql server firewall-rule create \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --name $AZ_DATABASE_NAME-database-allow-local-ip \
+ --server $AZ_DATABASE_NAME \
+ --start-ip-address $AZ_LOCAL_IP_ADDRESS \
+ --end-ip-address $AZ_LOCAL_IP_ADDRESS \
+ | jq
+```
+
+### Configure a MySQL database
+
+The MySQL server that you created earlier is empty. It doesn't have any database that you can use with the Java application. Create a new database called `demo`:
+
+```azurecli
+az mysql db create \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --name demo \
+ --server-name $AZ_DATABASE_NAME \
+ | jq
+```
+
+### Create a new Java project
+
+Using your favorite IDE, create a new Java project, and add a `pom.xml` file in its root directory:
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+ <groupId>com.example</groupId>
+ <artifactId>demo</artifactId>
+ <version>0.0.1-SNAPSHOT</version>
+ <name>demo</name>
+
+ <properties>
+ <java.version>1.8</java.version>
+ <maven.compiler.source>1.8</maven.compiler.source>
+ <maven.compiler.target>1.8</maven.compiler.target>
+ </properties>
+
+ <dependencies>
+ <dependency>
+ <groupId>mysql</groupId>
+ <artifactId>mysql-connector-java</artifactId>
+ <version>8.0.20</version>
+ </dependency>
+ </dependencies>
+</project>
+```
+
+This file is an [Apache Maven](https://maven.apache.org/) *pom.xml* file that configures our project to use:
+
+- Java 8
+- A recent MySQL driver for Java
+
+### Prepare a configuration file to connect to Azure Database for MySQL
+
+Create a *src/main/resources/application.properties* file, and add:
+
+```properties
+url=jdbc:mysql://$AZ_DATABASE_NAME.mysql.database.azure.com:3306/demo?serverTimezone=UTC
+user=demo@$AZ_DATABASE_NAME
+password=$AZ_MYSQL_PASSWORD
+```
+
+- Replace the two `$AZ_DATABASE_NAME` variables with the value that you configured at the beginning of this article.
+- Replace the `$AZ_MYSQL_PASSWORD` variable with the value that you configured at the beginning of this article.
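+
+With those substitutions, the file would look similar to the following; the server name and password shown are purely illustrative:
+
+```properties
+url=jdbc:mysql://mysql-javademo-42.mysql.database.azure.com:3306/demo?serverTimezone=UTC
+user=demo@mysql-javademo-42
+password=P@ssw0rd-1234
+```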
+
+> [!NOTE]
+> We append `?serverTimezone=UTC` to the configuration property `url`, to tell the JDBC driver to use the UTC date format (or Coordinated Universal Time) when connecting to the database. Otherwise, our Java server would not use the same date format as the database, which would result in an error.
+
+### Create an SQL file to generate the database schema
+
+We will use a *src/main/resources/schema.sql* file to create a database schema. Create that file, with the following content:
+
+```sql
+DROP TABLE IF EXISTS todo;
+CREATE TABLE todo (id SERIAL PRIMARY KEY, description VARCHAR(255), details VARCHAR(4096), done BOOLEAN);
+```
+
+## Code the application
+
+### Connect to the database
+
+Next, add the Java code that will use JDBC to store and retrieve data from your MySQL server.
+
+Create a *src/main/java/DemoApplication.java* file that contains:
+
+```java
+package com.example.demo;
+
+import com.mysql.cj.jdbc.AbandonedConnectionCleanupThread;
+
+import java.sql.*;
+import java.util.*;
+import java.util.logging.Logger;
+
+public class DemoApplication {
+
+ private static final Logger log;
+
+ static {
+ System.setProperty("java.util.logging.SimpleFormatter.format", "[%4$-7s] %5$s %n");
+ log = Logger.getLogger(DemoApplication.class.getName());
+ }
+
+ public static void main(String[] args) throws Exception {
+ log.info("Loading application properties");
+ Properties properties = new Properties();
+ properties.load(DemoApplication.class.getClassLoader().getResourceAsStream("application.properties"));
+
+ log.info("Connecting to the database");
+ Connection connection = DriverManager.getConnection(properties.getProperty("url"), properties);
+ log.info("Database connection test: " + connection.getCatalog());
+
+ log.info("Create database schema");
+ Scanner scanner = new Scanner(DemoApplication.class.getClassLoader().getResourceAsStream("schema.sql"));
+ Statement statement = connection.createStatement();
+ while (scanner.hasNextLine()) {
+ statement.execute(scanner.nextLine());
+ }
+
+ /*
+ Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
+ insertData(todo, connection);
+ todo = readData(connection);
+ todo.setDetails("congratulations, you have updated data!");
+ updateData(todo, connection);
+ deleteData(todo, connection);
+ */
+
+ log.info("Closing database connection");
+ connection.close();
+ AbandonedConnectionCleanupThread.uncheckedShutdown();
+ }
+}
+```
+
+This Java code uses the *application.properties* and *schema.sql* files that we created earlier to connect to the MySQL server and create a schema that will store our data.
+
+In this file, you can see that we commented out the methods to insert, read, update, and delete data: we will code those methods in the rest of this article, and you will be able to uncomment them one after the other.
+
+> [!NOTE]
+> The database credentials are stored in the *user* and *password* properties of the *application.properties* file. Those credentials are used when executing `DriverManager.getConnection(properties.getProperty("url"), properties);`, as the properties file is passed as an argument.
+
+> [!NOTE]
+> The `AbandonedConnectionCleanupThread.uncheckedShutdown();` line at the end is a MySQL-driver-specific command to destroy an internal thread when shutting down the application.
+> It can be safely ignored.
+
+You can now execute this main class with your favorite tool:
+
+- Using your IDE, you should be able to right-click on the *DemoApplication* class and execute it.
+- Using Maven, you can run the application by executing: `mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication"`.
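+
+For example, a complete Maven invocation might look like the following; compiling first ensures the class is available on the classpath:
+
+```bash
+mvn compile
+mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication"
+```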
+
+The application should connect to the Azure Database for MySQL, create a database schema, and then close the connection, as you should see in the console logs:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Closing database connection
+```
+
+### Create a domain class
+
+Create a new `Todo` Java class, next to the `DemoApplication` class, and add the following code:
+
+```java
+package com.example.demo;
+
+public class Todo {
+
+ private Long id;
+ private String description;
+ private String details;
+ private boolean done;
+
+ public Todo() {
+ }
+
+ public Todo(Long id, String description, String details, boolean done) {
+ this.id = id;
+ this.description = description;
+ this.details = details;
+ this.done = done;
+ }
+
+ public Long getId() {
+ return id;
+ }
+
+ public void setId(Long id) {
+ this.id = id;
+ }
+
+ public String getDescription() {
+ return description;
+ }
+
+ public void setDescription(String description) {
+ this.description = description;
+ }
+
+ public String getDetails() {
+ return details;
+ }
+
+ public void setDetails(String details) {
+ this.details = details;
+ }
+
+ public boolean isDone() {
+ return done;
+ }
+
+ public void setDone(boolean done) {
+ this.done = done;
+ }
+
+ @Override
+ public String toString() {
+ return "Todo{" +
+ "id=" + id +
+ ", description='" + description + '\'' +
+ ", details='" + details + '\'' +
+ ", done=" + done +
+ '}';
+ }
+}
+```
+
+This class is a domain model mapped on the `todo` table that you created when executing the *schema.sql* script.
+
+### Insert data into Azure Database for MySQL
+
+In the *src/main/java/DemoApplication.java* file, after the main method, add the following method to insert data into the database:
+
+```java
+private static void insertData(Todo todo, Connection connection) throws SQLException {
+ log.info("Insert data");
+ PreparedStatement insertStatement = connection
+ .prepareStatement("INSERT INTO todo (id, description, details, done) VALUES (?, ?, ?, ?);");
+
+ insertStatement.setLong(1, todo.getId());
+ insertStatement.setString(2, todo.getDescription());
+ insertStatement.setString(3, todo.getDetails());
+ insertStatement.setBoolean(4, todo.isDone());
+ insertStatement.executeUpdate();
+}
+```
+
+You can now uncomment the two following lines in the `main` method:
+
+```java
+Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
+insertData(todo, connection);
+```
+
+Executing the main class should now produce the following output:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Insert data
+[INFO ] Closing database connection
+```
+
+### Read data from Azure Database for MySQL
+
+Let's read the data previously inserted, to validate that our code works correctly.
+
+In the *src/main/java/DemoApplication.java* file, after the `insertData` method, add the following method to read data from the database:
+
+```java
+private static Todo readData(Connection connection) throws SQLException {
+ log.info("Read data");
+ PreparedStatement readStatement = connection.prepareStatement("SELECT * FROM todo;");
+ ResultSet resultSet = readStatement.executeQuery();
+ if (!resultSet.next()) {
+ log.info("There is no data in the database!");
+ return null;
+ }
+ Todo todo = new Todo();
+ todo.setId(resultSet.getLong("id"));
+ todo.setDescription(resultSet.getString("description"));
+ todo.setDetails(resultSet.getString("details"));
+ todo.setDone(resultSet.getBoolean("done"));
+ log.info("Data read from the database: " + todo.toString());
+ return todo;
+}
+```
+
+You can now uncomment the following line in the `main` method:
+
+```java
+todo = readData(connection);
+```
+
+Executing the main class should now produce the following output:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
+[INFO ] Closing database connection
+```
+
+### Update data in Azure Database for MySQL
+
+Let's update the data we previously inserted.
+
+Still in the *src/main/java/DemoApplication.java* file, after the `readData` method, add the following method to update data inside the database:
+
+```java
+private static void updateData(Todo todo, Connection connection) throws SQLException {
+ log.info("Update data");
+ PreparedStatement updateStatement = connection
+ .prepareStatement("UPDATE todo SET description = ?, details = ?, done = ? WHERE id = ?;");
+
+ updateStatement.setString(1, todo.getDescription());
+ updateStatement.setString(2, todo.getDetails());
+ updateStatement.setBoolean(3, todo.isDone());
+ updateStatement.setLong(4, todo.getId());
+ updateStatement.executeUpdate();
+ readData(connection);
+}
+```
+
+You can now uncomment the two following lines in the `main` method:
+
+```java
+todo.setDetails("congratulations, you have updated data!");
+updateData(todo, connection);
+```
+
+Executing the main class should now produce the following output:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
+[INFO ] Update data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true}
+[INFO ] Closing database connection
+```
+
+### Delete data from Azure Database for MySQL
+
+Finally, let's delete the data we previously inserted.
+
+Still in the *src/main/java/DemoApplication.java* file, after the `updateData` method, add the following method to delete data inside the database:
+
+```java
+private static void deleteData(Todo todo, Connection connection) throws SQLException {
+ log.info("Delete data");
+ PreparedStatement deleteStatement = connection.prepareStatement("DELETE FROM todo WHERE id = ?;");
+ deleteStatement.setLong(1, todo.getId());
+ deleteStatement.executeUpdate();
+ readData(connection);
+}
+```
+
+You can now uncomment the following line in the `main` method:
+
+```java
+deleteData(todo, connection);
+```
+
+Executing the main class should now produce the following output:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
+[INFO ] Update data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true}
+[INFO ] Delete data
+[INFO ] Read data
+[INFO ] There is no data in the database!
+[INFO ] Closing database connection
+```
+
+## Clean up resources
+
+Congratulations! You've created a Java application that uses JDBC to store and retrieve data from Azure Database for MySQL.
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Migrate your MySQL database to Azure Database for MySQL using dump and restore](concepts-migrate-dump-restore.md)
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-nodejs.md
+
+title: 'Quickstart: Connect using Node.js - Azure Database for MySQL'
+description: This quickstart provides several Node.js code samples you can use to connect and query data from Azure Database for MySQL.
+ms.devlang: javascript
+Last updated: 12/11/2020
+# Quickstart: Use Node.js to connect and query data in Azure Database for MySQL
+
+In this quickstart, you connect to an Azure Database for MySQL by using Node.js. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Ubuntu Linux, and Windows platforms.
+
+This topic assumes that you're familiar with developing using Node.js, and that you're new to working with Azure Database for MySQL.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+- An Azure Database for MySQL server. [Create an Azure Database for MySQL server using Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md) or [Create an Azure Database for MySQL server using Azure CLI](quickstart-create-mysql-server-database-using-azure-cli.md).
+
+> [!IMPORTANT]
+> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md).
+
+## Install Node.js and the MySQL connector
+
+Depending on your platform, follow the instructions in the appropriate section to install [Node.js](https://nodejs.org). Use npm to install the [mysql](https://www.npmjs.com/package/mysql) package and its dependencies into your project folder.
+
+### Windows
+
+1. Visit the [Node.js downloads page](https://nodejs.org/en/download/), and then select your desired Windows installer option.
+2. Make a local project folder such as `nodejsmysql`.
+3. Open the command prompt, and then change directory into the project folder, such as `cd c:\nodejsmysql\`
+4. Run npm to install the mysql library into the project folder.
+
+ ```cmd
+ cd c:\nodejsmysql\
+ "C:\Program Files\nodejs\npm" install mysql
+ "C:\Program Files\nodejs\npm" list
+ ```
+
+5. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released.
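+
+ For example, the `npm list` output might look similar to the following; the version number shown is only illustrative:
+
+ ```cmd
+ `-- mysql@2.18.1
+ ```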
+
+### Linux (Ubuntu)
+
+1. Run the following commands to install **Node.js** and **npm**, the package manager for Node.js.
+
+ ```bash
+ # Using Ubuntu
+ curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -
+ sudo apt-get install -y nodejs
+
+ # Using Debian, as root
+ curl -sL https://deb.nodesource.com/setup_14.x | bash -
+ apt-get install -y nodejs
+ ```
+
+2. Run the following commands to create a project folder `nodejsmysql` and install the mysql package into that folder.
+
+ ```bash
+ mkdir nodejsmysql
+ cd nodejsmysql
+ npm install --save mysql
+ npm list
+ ```
+
+3. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released.
+
+### macOS
+
+1. Visit the [Node.js downloads page](https://nodejs.org/en/download/), and then select your macOS installer.
+
+2. Run the following commands to create a project folder `nodejsmysql` and install the mysql package into that folder.
+
+ ```bash
+ mkdir nodejsmysql
+ cd nodejsmysql
+ npm install --save mysql
+ npm list
+ ```
+
+3. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released.
+
+## Get connection information
+
+Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
+
+1. Log in to the [Azure portal](https://portal.azure.com/).
+2. From the left-hand menu in Azure portal, select **All resources**, and then search for the server you have created (such as **mydemoserver**).
+3. Select the server name.
+4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
+ :::image type="content" source="./media/connect-nodejs/server-name-azure-database-mysql.png" alt-text="Azure Database for MySQL server name":::
+
+## Running the code samples
+
+1. Paste the JavaScript code into new text files, and then save them into a project folder with file extension .js (such as C:\nodejsmysql\createtable.js or /home/username/nodejsmysql/createtable.js).
+1. Replace `host`, `user`, `password` and `database` config options in the code with the values that you specified when you created the server and database.
+1. **Obtain SSL certificate**: Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) and save the certificate file to your local drive. An example download command is shown after this list.
+
+ **For Microsoft Internet Explorer and Microsoft Edge:** After the download has completed, rename the certificate to BaltimoreCyberTrustRoot.crt.pem.
+
+ See the following links for certificates for servers in sovereign clouds: [Azure Government](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem), [Azure China](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt).
+1. In the `ssl` config option, replace the `ca-cert` filename with the path to this local file.
+1. Open the command prompt or bash shell, and then change directory into your project folder `cd nodejsmysql`.
+1. To run the application, enter the node command followed by the file name, such as `node createtable.js`.
+1. On Windows, if the node application is not in your environment variable path, you may need to use the full path to launch the node application, such as `"C:\Program Files\nodejs\node.exe" createtable.js`
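+
+As an example, on Linux or macOS you could download the certificate with `curl`; the destination path shown here is illustrative:
+
+```bash
+curl -o ~/nodejsmysql/BaltimoreCyberTrustRoot.crt.pem https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem
+```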
+
+## Connect, create table, and insert data
+
+Use the following code to connect and load the data by using **CREATE TABLE** and **INSERT INTO** SQL statements.
+
+The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) function is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) function is used to execute the SQL query against the MySQL database.
+
+```javascript
+const mysql = require('mysql');
+const fs = require('fs');
+
+var config =
+{
+ host: 'mydemoserver.mysql.database.azure.com',
+ user: 'myadmin@mydemoserver',
+ password: 'your_password',
+ database: 'quickstartdb',
+ port: 3306,
+ ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
+};
+
+const conn = mysql.createConnection(config);
+
+conn.connect(
+ function (err) {
+ if (err) {
+ console.log("!!! Cannot connect !!! Error:");
+ throw err;
+ }
+ else
+ {
+ console.log("Connection established.");
+ queryDatabase();
+ }
+});
+
+function queryDatabase(){
+ conn.query('DROP TABLE IF EXISTS inventory;', function (err, results, fields) {
+ if (err) throw err;
+ console.log('Dropped inventory table if existed.');
+ })
+ conn.query('CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);',
+ function (err, results, fields) {
+ if (err) throw err;
+ console.log('Created inventory table.');
+ })
+ conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['banana', 150],
+ function (err, results, fields) {
+ if (err) throw err;
+ else console.log('Inserted ' + results.affectedRows + ' row(s).');
+ })
+ conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['orange', 154],
+ function (err, results, fields) {
+ if (err) throw err;
+ console.log('Inserted ' + results.affectedRows + ' row(s).');
+ })
+ conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['apple', 100],
+ function (err, results, fields) {
+ if (err) throw err;
+ console.log('Inserted ' + results.affectedRows + ' row(s).');
+ })
+ conn.end(function (err) {
+ if (err) throw err;
+ else console.log('Done.')
+ });
+};
+```
+
+## Read data
+
+Use the following code to connect and read the data by using a **SELECT** SQL statement.
+
+The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against the MySQL database. The results array is used to hold the results of the query.
+
+```javascript
+const mysql = require('mysql');
+const fs = require('fs');
+
+var config =
+{
+ host: 'mydemoserver.mysql.database.azure.com',
+ user: 'myadmin@mydemoserver',
+ password: 'your_password',
+ database: 'quickstartdb',
+ port: 3306,
+ ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
+};
+
+const conn = mysql.createConnection(config);
+
+conn.connect(
+ function (err) {
+ if (err) {
+ console.log("!!! Cannot connect !!! Error:");
+ throw err;
+ }
+ else {
+ console.log("Connection established.");
+ readData();
+ }
+ });
+
+function readData(){
+ conn.query('SELECT * FROM inventory',
+ function (err, results, fields) {
+ if (err) throw err;
+ else console.log('Selected ' + results.length + ' row(s).');
+ for (let i = 0; i < results.length; i++) {
+ console.log('Row: ' + JSON.stringify(results[i]));
+ }
+ console.log('Done.');
+ })
+ conn.end(
+ function (err) {
+ if (err) throw err;
+ else console.log('Closing connection.')
+ });
+};
+```
+
+## Update data
+
+Use the following code to connect and update data by using an **UPDATE** SQL statement.
+
+The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against the MySQL database.
+
+```javascript
+const mysql = require('mysql');
+const fs = require('fs');
+
+var config =
+{
+ host: 'mydemoserver.mysql.database.azure.com',
+ user: 'myadmin@mydemoserver',
+ password: 'your_password',
+ database: 'quickstartdb',
+ port: 3306,
+ ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
+};
+
+const conn = mysql.createConnection(config);
+
+conn.connect(
+ function (err) {
+ if (err) {
+ console.log("!!! Cannot connect !!! Error:");
+ throw err;
+ }
+ else {
+ console.log("Connection established.");
+ updateData();
+ }
+ });
+
+function updateData(){
+ conn.query('UPDATE inventory SET quantity = ? WHERE name = ?', [200, 'banana'],
+ function (err, results, fields) {
+ if (err) throw err;
+ else console.log('Updated ' + results.affectedRows + ' row(s).');
+ })
+ conn.end(
+ function (err) {
+ if (err) throw err;
+ else console.log('Done.')
+ });
+};
+```
+
+## Delete data
+
+Use the following code to connect and delete data by using a **DELETE** SQL statement.
+
+The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against the MySQL database.
+
+```javascript
+const mysql = require('mysql');
+const fs = require('fs');
+
+var config =
+{
+ host: 'mydemoserver.mysql.database.azure.com',
+ user: 'myadmin@mydemoserver',
+ password: 'your_password',
+ database: 'quickstartdb',
+ port: 3306,
+ ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
+};
+
+const conn = mysql.createConnection(config);
+
+conn.connect(
+ function (err) {
+ if (err) {
+ console.log("!!! Cannot connect !!! Error:");
+ throw err;
+ }
+ else {
+ console.log("Connection established.");
+ deleteData();
+ }
+ });
+
+function deleteData(){
+ conn.query('DELETE FROM inventory WHERE name = ?', ['orange'],
+ function (err, results, fields) {
+ if (err) throw err;
+ else console.log('Deleted ' + results.affectedRows + ' row(s).');
+ })
+ conn.end(
+ function (err) {
+ if (err) throw err;
+ else console.log('Done.')
+ });
+};
+```
+
+## Clean up resources
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Migrate your database using Export and Import](./concepts-migrate-import-export.md)
mysql Connect Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-php.md
+
+title: 'Quickstart: Connect using PHP - Azure Database for MySQL'
+description: This quickstart provides several PHP code samples you can use to connect and query data from Azure Database for MySQL.
+Last updated: 10/28/2020
+# Quickstart: Use PHP to connect and query data in Azure Database for MySQL
+
+This quickstart demonstrates how to connect to an Azure Database for MySQL using a [PHP](https://secure.php.net/manual/intro-whatis.php) application. It shows how to use SQL statements to query, insert, update, and delete data in the database.
+
+## Prerequisites
+For this quickstart you need:
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- Create an Azure Database for MySQL single server using [Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md) <br/> or [Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md) if you do not have one.
+- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.
+
+ |Action| Connectivity method|How-to guide|
+ |: |: |: |
+ | **Configure firewall rules** | Public | [Portal](./how-to-manage-firewall-using-portal.md) <br/> [CLI](./how-to-manage-firewall-using-cli.md)|
+ | **Configure Service Endpoint** | Public | [Portal](./how-to-manage-vnet-using-portal.md) <br/> [CLI](./how-to-manage-vnet-using-cli.md)|
+ | **Configure private link** | Private | [Portal](./how-to-configure-private-link-portal.md) <br/> [CLI](./how-to-configure-private-link-cli.md) |
+
+- [Create a database and non-admin user](./how-to-create-users.md?tabs=single-server)
+- Install latest PHP version for your operating system
+ - [PHP on macOS](https://secure.php.net/manual/install.macosx.php)
+ - [PHP on Linux](https://secure.php.net/manual/install.unix.php)
+ - [PHP on Windows](https://secure.php.net/manual/install.windows.php)
+
+> [!NOTE]
+> We use the [MySQLi](https://www.php.net/manual/en/book.mysqli.php) library to connect to and query the server in this quickstart.
+
+## Get connection information
+You can get the database server connection information from the Azure portal by following these steps:
+
+1. Log in to the [Azure portal](https://portal.azure.com/).
+2. Navigate to the Azure Database for MySQL page. You can search for and select **Azure Database for MySQL**.
+
+3. Select your MySQL server (such as **mydemoserver**).
+4. In the **Overview** page, copy the fully qualified server name next to **Server name** and the admin user name next to **Server admin login name**. To copy the server name or host name, hover over it and select the **Copy** icon.
+
+> [!IMPORTANT]
+> - If you forgot your password, you can [reset the password](./how-to-create-manage-server-portal.md#update-admin-password).
+> - Replace the **host**, **username**, **password**, and **db_name** parameters with your own values.
+
+## Step 1: Connect to the server
+SSL is enabled by default. You may need to download the [DigiCertGlobalRootG2 SSL certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) to connect from your local environment. This code calls:
+- [mysqli_init](https://secure.php.net/manual/mysqli.init.php) to initialize MySQLi.
+- [mysqli_ssl_set](https://www.php.net/manual/en/mysqli.ssl-set.php) to point to the SSL certificate path. This is required for your local environment but not required for App Service Web App or Azure Virtual machines.
+- [mysqli_real_connect](https://secure.php.net/manual/mysqli.real-connect.php) to connect to MySQL.
+- [mysqli_close](https://secure.php.net/manual/mysqli.close.php) to close the connection.
+
+```php
+$host = 'mydemoserver.mysql.database.azure.com';
+$username = 'myadmin@mydemoserver';
+$password = 'your_password';
+$db_name = 'your_database';
+
+//Initializes MySQLi
+$conn = mysqli_init();
+
+mysqli_ssl_set($conn, NULL, NULL, "/var/www/html/DigiCertGlobalRootG2.crt.pem", NULL, NULL);
+
+// Establish the connection, using the variables defined above
+mysqli_real_connect($conn, $host, $username, $password, $db_name, 3306, NULL, MYSQLI_CLIENT_SSL);
+
+//If connection failed, show the error
+if (mysqli_connect_errno())
+{
+ die('Failed to connect to MySQL: '.mysqli_connect_error());
+}
+```
+[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
+
+## Step 2: Create a table
+Use the following code to create a table. This code calls:
+- [mysqli_query](https://secure.php.net/manual/mysqli.query.php) to run the query.
+
+```php
+// Run the create table query
+if (mysqli_query($conn, '
+CREATE TABLE Products (
+`Id` INT NOT NULL AUTO_INCREMENT ,
+`ProductName` VARCHAR(200) NOT NULL ,
+`Color` VARCHAR(50) NOT NULL ,
+`Price` DOUBLE NOT NULL ,
+PRIMARY KEY (`Id`)
+);
+')) {
+printf("Table created\n");
+}
+```
+
+## Step 3: Insert data
+Use the following code to insert data by using an **INSERT** SQL statement. This code uses the methods:
+- [mysqli_prepare](https://secure.php.net/manual/mysqli.prepare.php) to create a prepared insert statement.
+- [mysqli_stmt_bind_param](https://secure.php.net/manual/mysqli-stmt.bind-param.php) to bind the parameters for each inserted column value.
+- [mysqli_stmt_execute](https://secure.php.net/manual/mysqli-stmt.execute.php) to execute the prepared statement.
+- [mysqli_stmt_close](https://secure.php.net/manual/mysqli-stmt.close.php) to close the statement.
+
+```php
+//Create an Insert prepared statement and run it
+$product_name = 'BrandNewProduct';
+$product_color = 'Blue';
+$product_price = 15.5;
+if ($stmt = mysqli_prepare($conn, "INSERT INTO Products (ProductName, Color, Price) VALUES (?, ?, ?)"))
+{
+ mysqli_stmt_bind_param($stmt, 'ssd', $product_name, $product_color, $product_price);
+ mysqli_stmt_execute($stmt);
+ printf("Insert: Affected %d rows\n", mysqli_stmt_affected_rows($stmt));
+ mysqli_stmt_close($stmt);
+}
+
+```
+
+## Step 4: Read data
+Use the following code to read the data by using a **SELECT** SQL statement. The code uses the methods:
+- [mysqli_query](https://secure.php.net/manual/mysqli.query.php) to execute the **SELECT** query.
+- [mysqli_fetch_assoc](https://secure.php.net/manual/mysqli-result.fetch-assoc.php) to fetch the resulting rows.
+
+```php
+//Run the Select query
+printf("Reading data from table: \n");
+$res = mysqli_query($conn, 'SELECT * FROM Products');
+while ($row = mysqli_fetch_assoc($res))
+ {
+ var_dump($row);
+ }
+
+```
+
+## Step 5: Delete data
+Use the following code to delete rows by using a **DELETE** SQL statement. The code uses the methods:
+- [mysqli_prepare](https://secure.php.net/manual/mysqli.prepare.php) to create a prepared delete statement.
+- [mysqli_stmt_bind_param](https://secure.php.net/manual/mysqli-stmt.bind-param.php) to bind the parameters.
+- [mysqli_stmt_execute](https://secure.php.net/manual/mysqli-stmt.execute.php) to execute the prepared delete statement.
+- [mysqli_stmt_close](https://secure.php.net/manual/mysqli-stmt.close.php) to close the statement.
+
+```php
+//Run the Delete statement
+$product_name = 'BrandNewProduct';
+if ($stmt = mysqli_prepare($conn, "DELETE FROM Products WHERE ProductName = ?")) {
+    mysqli_stmt_bind_param($stmt, 's', $product_name);
+    mysqli_stmt_execute($stmt);
+    printf("Delete: Affected %d rows\n", mysqli_stmt_affected_rows($stmt));
+    mysqli_stmt_close($stmt);
+}
+```
+
+## Clean up resources
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Manage Azure Database for MySQL server using Portal](./how-to-create-manage-server-portal.md)<br/>
+
+> [!div class="nextstepaction"]
+> [Manage Azure Database for MySQL server using CLI](./how-to-manage-single-server-cli.md)
+
+[Cannot find what you are looking for? Let us know.](https://aka.ms/mysql-doc-feedback)
mysql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-python.md
+
+title: 'Quickstart: Connect using Python - Azure Database for MySQL'
+description: This quickstart provides several Python code samples you can use to connect and query data from Azure Database for MySQL.
+ms.devlang: python
+Last updated: 10/28/2020
+# Quickstart: Use Python to connect and query data in Azure Database for MySQL
+
+In this quickstart, you connect to an Azure Database for MySQL by using Python. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Ubuntu Linux, and Windows platforms.
+
+## Prerequisites
+For this quickstart you need:
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- Create an Azure Database for MySQL single server using [Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md) <br/> or [Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md) if you do not have one.
+- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.
+
+ |Action| Connectivity method|How-to guide|
+ |: |: |: |
+ | **Configure firewall rules** | Public | [Portal](./how-to-manage-firewall-using-portal.md) <br/> [CLI](./how-to-manage-firewall-using-cli.md)|
+ | **Configure Service Endpoint** | Public | [Portal](./how-to-manage-vnet-using-portal.md) <br/> [CLI](./how-to-manage-vnet-using-cli.md)|
+ | **Configure private link** | Private | [Portal](./how-to-configure-private-link-portal.md) <br/> [CLI](./how-to-configure-private-link-cli.md) |
+
+- [Create a database and non-admin user](./how-to-create-users.md)
+
+## Install Python and the MySQL connector
+
+Install Python and the MySQL connector for Python on your computer by using the following steps:
+
+> [!NOTE]
+> This quickstart uses the [MySQL Connector/Python](https://dev.mysql.com/doc/connector-python/en/) library; see the linked developer guide for more information.
+
+1. Download and install [Python 3.7 or above](https://www.python.org/downloads/) for your OS. Make sure to add Python to your `PATH`, because the MySQL connector requires that.
+
+2. Open a command prompt or `bash` shell, and check your Python version by running `python -V` with the uppercase V switch.
+
+3. The `pip` package installer is included in the latest versions of Python. Update `pip` to the latest version by running `pip install -U pip`.
+
+ If `pip` isn't installed, you can download and install it with `get-pip.py`. For more information, see [Installation](https://pip.pypa.io/en/stable/installing/).
+
+4. Use `pip` to install the MySQL connector for Python and its dependencies:
+
+ ```bash
+ pip install mysql-connector-python
+ ```
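+
+ To confirm that the connector can be imported, you can run a quick check; this one-liner just prints the installed connector version:
+
+ ```bash
+ python -c "import mysql.connector; print(mysql.connector.__version__)"
+ ```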
+
+[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
+
+## Get connection information
+
+Get the connection information you need to connect to Azure Database for MySQL from the Azure portal. You need the server name, database name, and login credentials.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the portal search bar, search for and select the Azure Database for MySQL server you created, such as **mydemoserver**.
+
+ :::image type="content" source="./media/connect-python/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
+
+1. From the server's **Overview** page, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this page.
+
+ :::image type="content" source="./media/connect-python/azure-database-for-mysql-server-overview-name-login.png" alt-text="Azure Database for MySQL server name 2":::
+
+## Running the Python code samples
+
+For each code example in this article:
+
+1. Create a new file in a text editor.
+2. Add the code example to the file. In the code, replace the `<mydemoserver>`, `<myadmin>`, `<mypassword>`, and `<mydatabase>` placeholders with the values for your MySQL server and database.
+1. SSL is enabled by default on Azure Database for MySQL servers. You may need to download the [DigiCertGlobalRootG2 SSL certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) to connect from your local environment; an example download command is shown after this list. Replace the `ssl_ca` value in the code with the path to this file on your computer.
+1. Save the file in a project folder with a *.py* extension, such as *C:\pythonmysql\createtable.py* or */home/username/pythonmysql/createtable.py*.
+1. To run the code, open a command prompt or `bash` shell and change directory into your project folder, for example `cd pythonmysql`. Type the `python` command followed by the file name, for example `python createtable.py`, and press Enter.
+
+ > [!NOTE]
+ > On Windows, if *python.exe* is not found, you may need to add the Python path into your PATH environment variable, or provide the full path to *python.exe*, for example `C:\Python37\python.exe createtable.py`.
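+
+As an example, you can download the certificate with `curl`; the destination path shown here is illustrative:
+
+```bash
+curl -o ~/pythonmysql/DigiCertGlobalRootG2.crt.pem https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem
+```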
+
+## Step 1: Create a table and insert data
+
+Use the following code to connect to the server and database, create a table, and load data by using an **INSERT** SQL statement. The code imports the mysql.connector library and uses these methods:
+- [connect()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysql-connector-connect.html) to connect to Azure Database for MySQL using the [arguments](https://dev.mysql.com/doc/connector-python/en/connector-python-connectargs.html) in the config collection.
+- [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) to execute the SQL query against the MySQL database.
+- [cursor.close()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-close.html) to close the cursor when you are done using it.
+- [conn.close()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlconnection-close.html) to close the connection.
+
+```python
+import mysql.connector
+from mysql.connector import errorcode
+
+# Obtain connection string information from the portal
+
+config = {
+ 'host':'<mydemoserver>.mysql.database.azure.com',
+ 'user':'<myadmin>@<mydemoserver>',
+ 'password':'<mypassword>',
+ 'database':'<mydatabase>',
+ 'client_flags': [mysql.connector.ClientFlag.SSL],
+ 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem'
+}
+
+# Construct connection string
+
+try:
+ conn = mysql.connector.connect(**config)
+ print("Connection established")
+except mysql.connector.Error as err:
+ if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
+ print("Something is wrong with the user name or password")
+ elif err.errno == errorcode.ER_BAD_DB_ERROR:
+ print("Database does not exist")
+ else:
+ print(err)
+else:
+ cursor = conn.cursor()
+
+ # Drop previous table of same name if one exists
+ cursor.execute("DROP TABLE IF EXISTS inventory;")
+ print("Finished dropping table (if existed).")
+
+ # Create table
+ cursor.execute("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);")
+ print("Finished creating table.")
+
+ # Insert some data into table
+ cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("banana", 150))
+ print("Inserted",cursor.rowcount,"row(s) of data.")
+ cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("orange", 154))
+ print("Inserted",cursor.rowcount,"row(s) of data.")
+ cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("apple", 100))
+ print("Inserted",cursor.rowcount,"row(s) of data.")
+
+ # Cleanup
+ conn.commit()
+ cursor.close()
+ conn.close()
+ print("Done.")
+```
+
+[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
+
+## Step 2: Read data
+
+Use the following code to connect and read the data by using a **SELECT** SQL statement. The code imports the mysql.connector library and uses the [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method to execute the SQL query against the MySQL database.
+
+The code reads the data rows using the [fetchall()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-fetchall.html) method, stores the result set in the `rows` collection, and uses a `for` iterator to loop over the rows.
+
+```python
+import mysql.connector
+from mysql.connector import errorcode
+
+# Obtain connection string information from the portal
+
+config = {
+ 'host':'<mydemoserver>.mysql.database.azure.com',
+ 'user':'<myadmin>@<mydemoserver>',
+ 'password':'<mypassword>',
+ 'database':'<mydatabase>',
+ 'client_flags': [mysql.connector.ClientFlag.SSL],
+ 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem'
+}
+
+# Construct connection string
+
+try:
+ conn = mysql.connector.connect(**config)
+ print("Connection established")
+except mysql.connector.Error as err:
+ if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
+ print("Something is wrong with the user name or password")
+ elif err.errno == errorcode.ER_BAD_DB_ERROR:
+ print("Database does not exist")
+ else:
+ print(err)
+else:
+ cursor = conn.cursor()
+
+ # Read data
+ cursor.execute("SELECT * FROM inventory;")
+ rows = cursor.fetchall()
+ print("Read",cursor.rowcount,"row(s) of data.")
+
+ # Print all rows
+ for row in rows:
+ print("Data row = (%s, %s, %s)" %(str(row[0]), str(row[1]), str(row[2])))
+
+ # Cleanup
+ conn.commit()
+ cursor.close()
+ conn.close()
+ print("Done.")
+```
+
+## Step 3: Update data
+
+Use the following code to connect and update the data by using an **UPDATE** SQL statement. The code imports the mysql.connector library and uses the [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method to execute the SQL query against the MySQL database.
+
+```python
+import mysql.connector
+from mysql.connector import errorcode
+
+# Obtain connection string information from the portal
+
+config = {
+ 'host':'<mydemoserver>.mysql.database.azure.com',
+ 'user':'<myadmin>@<mydemoserver>',
+ 'password':'<mypassword>',
+ 'database':'<mydatabase>',
+ 'client_flags': [mysql.connector.ClientFlag.SSL],
+ 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem'
+}
+
+# Construct connection string
+
+try:
+ conn = mysql.connector.connect(**config)
+ print("Connection established")
+except mysql.connector.Error as err:
+ if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
+ print("Something is wrong with the user name or password")
+ elif err.errno == errorcode.ER_BAD_DB_ERROR:
+ print("Database does not exist")
+ else:
+ print(err)
+else:
+ cursor = conn.cursor()
+
+ # Update a data row in the table
+ cursor.execute("UPDATE inventory SET quantity = %s WHERE name = %s;", (300, "apple"))
+ print("Updated",cursor.rowcount,"row(s) of data.")
+
+ # Cleanup
+ conn.commit()
+ cursor.close()
+ conn.close()
+ print("Done.")
+```
+
+## Step 4: Delete data
+
+Use the following code to connect and remove data by using a **DELETE** SQL statement. The code imports the mysql.connector library and uses the [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method to execute the SQL query against the MySQL database.
+
+```python
+import mysql.connector
+from mysql.connector import errorcode
+
+# Obtain connection string information from the portal
+
+config = {
+ 'host':'<mydemoserver>.mysql.database.azure.com',
+ 'user':'<myadmin>@<mydemoserver>',
+ 'password':'<mypassword>',
+ 'database':'<mydatabase>',
+ 'client_flags': [mysql.connector.ClientFlag.SSL],
+ 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem'
+}
+
+# Construct connection string
+
+try:
+ conn = mysql.connector.connect(**config)
+ print("Connection established")
+except mysql.connector.Error as err:
+ if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
+ print("Something is wrong with the user name or password")
+ elif err.errno == errorcode.ER_BAD_DB_ERROR:
+ print("Database does not exist")
+ else:
+ print(err)
+else:
+ cursor = conn.cursor()
+
+ # Delete a data row in the table
+ cursor.execute("DELETE FROM inventory WHERE name=%(param1)s;", {'param1':"orange"})
+ print("Deleted",cursor.rowcount,"row(s) of data.")
+
+ # Cleanup
+ conn.commit()
+ cursor.close()
+ conn.close()
+ print("Done.")
+```
+
+## Clean up resources
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Manage Azure Database for MySQL server using Portal](./how-to-create-manage-server-portal.md)<br/>
+
+> [!div class="nextstepaction"]
+> [Manage Azure Database for MySQL server using CLI](./how-to-manage-single-server-cli.md)
+
+[Cannot find what you are looking for? Let us know.](https://aka.ms/mysql-doc-feedback)
mysql Connect Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-ruby.md
+
+title: 'Quickstart: Connect using Ruby - Azure Database for MySQL'
+description: This quickstart provides several Ruby code samples you can use to connect and query data from Azure Database for MySQL.
+ms.devlang: ruby
+Last updated: 5/26/2020
+# Quickstart: Use Ruby to connect and query data in Azure Database for MySQL
+
+This quickstart demonstrates how to connect to an Azure Database for MySQL using a [Ruby](https://www.ruby-lang.org) application and the [mysql2](https://rubygems.org/gems/mysql2) gem from Windows, Ubuntu Linux, and Mac platforms. It shows how to use SQL statements to query, insert, update, and delete data in the database. This topic assumes that you are familiar with development using Ruby and that you are new to working with Azure Database for MySQL.
+
+## Prerequisites
+
+This quickstart uses the resources created in either of these guides as a starting point:
+
+- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)
+- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)
+
+> [!IMPORTANT]
+> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md).
+
+## Install Ruby
+
+Install Ruby, Gem, and the MySQL2 library on your own computer.
+
+### Windows
+
+1. Download and install Ruby 2.3 from the [Ruby installer downloads page](https://rubyinstaller.org/downloads/).
+2. Launch a new command prompt (cmd) from the Start menu.
+3. Change directory into the Ruby directory for version 2.3. `cd c:\Ruby23-x64\bin`
+4. Test the Ruby installation by running the command `ruby -v` to see the version installed.
+5. Test the Gem installation by running the command `gem -v` to see the version installed.
+6. Build the Mysql2 module for Ruby using Gem by running the command `gem install mysql2`.
+
+### macOS
+
+1. Install Ruby using Homebrew by running the command `brew install ruby`. For more installation options, see the Ruby [installation documentation](https://www.ruby-lang.org/en/documentation/installation/#homebrew).
+2. Test the Ruby installation by running the command `ruby -v` to see the version installed.
+3. Test the Gem installation by running the command `gem -v` to see the version installed.
+4. Build the Mysql2 module for Ruby using Gem by running the command `gem install mysql2`.
+
+### Linux (Ubuntu)
+
+1. Install Ruby by running the command `sudo apt-get install ruby-full`. For more installation options, see the Ruby [installation documentation](https://www.ruby-lang.org/en/documentation/installation/).
+2. Test the Ruby installation by running the command `ruby -v` to see the version installed.
+3. Install the latest updates for Gem by running the command `sudo gem update --system`.
+4. Test the Gem installation by running the command `gem -v` to see the version installed.
+5. Install the gcc, make, and other build tools by running the command `sudo apt-get install build-essential`.
+6. Install the MySQL client developer libraries by running the command `sudo apt-get install libmysqlclient-dev`.
+7. Build the mysql2 module for Ruby using Gem by running the command `sudo gem install mysql2`.
+
+## Get connection information
+
+Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
+
+1. Log in to the [Azure portal](https://portal.azure.com/).
+2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
+3. Click the server name.
+4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
+ :::image type="content" source="./media/connect-ruby/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
+
+## Run Ruby code
+
+1. Paste the Ruby code from the sections below into text files, and then save the files into a project folder with file extension .rb (such as `C:\rubymysql\createtable.rb` or `/home/username/rubymysql/createtable.rb`).
+2. To run the code, launch the command prompt or Bash shell. Change directory into your project folder, for example `cd rubymysql`.
+3. Then type the Ruby command followed by the file name, such as `ruby createtable.rb` to run the application.
+4. On the Windows OS, if Ruby is not in your path environment variable, you may need to use the full path to launch the application, such as `"c:\Ruby23-x64\bin\ruby.exe" createtable.rb`.
+
+## Connect and create a table
+
+Use the following code to connect and create a table by using a **CREATE TABLE** SQL statement, followed by **INSERT INTO** SQL statements to add rows into the table.
+
+The code uses the `Mysql2::Client` class to connect to the MySQL server. It then calls the `query()` method to run the DROP, CREATE TABLE, and INSERT INTO commands. Finally, it calls the `close()` method to close the connection before terminating.
+
+Replace the `host`, `database`, `username`, and `password` strings with your own values.
+
+```ruby
+require 'mysql2'
+
+begin
+ # Initialize connection variables.
+ host = String('mydemoserver.mysql.database.azure.com')
+ database = String('quickstartdb')
+ username = String('myadmin@mydemoserver')
+ password = String('yourpassword')
+
+ # Initialize connection object.
+ client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password)
+ puts 'Successfully created connection to database.'
+
+ # Drop previous table of same name if one exists
+ client.query('DROP TABLE IF EXISTS inventory;')
+ puts 'Finished dropping table (if existed).'
+
+  # Create the table.
+ client.query('CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);')
+ puts 'Finished creating table.'
+
+ # Insert some data into table.
+ client.query("INSERT INTO inventory VALUES(1, 'banana', 150)")
+ client.query("INSERT INTO inventory VALUES(2, 'orange', 154)")
+ client.query("INSERT INTO inventory VALUES(3, 'apple', 100)")
+ puts 'Inserted 3 rows of data.'
+
+# Error handling
+
+rescue Exception => e
+ puts e.message
+
+# Cleanup
+
+ensure
+ client.close if client
+ puts 'Done.'
+end
+```
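+
+By default, SSL connection security is enforced on your server. If the connection fails with an SSL-related error, you can pass SSL options to `Mysql2::Client`. A minimal sketch, assuming the same connection variables as above and that you've downloaded the CA certificate as described in [Configure SSL connectivity in your application](./how-to-configure-ssl.md); the local certificate path is a hypothetical placeholder:
+
+```ruby
+# Sketch: connect with the CA certificate when SSL enforcement is on.
+# The :sslca path is an assumption; point it at your downloaded file.
+client = Mysql2::Client.new(
+  :host => host,
+  :username => username,
+  :database => database,
+  :password => password,
+  :sslca => '/path/to/BaltimoreCyberTrustRoot.crt.pem'
+)
+```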
+
+## Read data
+
+Use the following code to connect and read the data by using a **SELECT** SQL statement.
+
+The code uses the `Mysql2::Client` class's `new()` method to connect to Azure Database for MySQL. It then calls the `query()` method to run the SELECT commands and the `close()` method to close the connection before terminating.
+
+Replace the `host`, `database`, `username`, and `password` strings with your own values.
+
+```ruby
+require 'mysql2'
+
+begin
+ # Initialize connection variables.
+ host = String('mydemoserver.mysql.database.azure.com')
+ database = String('quickstartdb')
+ username = String('myadmin@mydemoserver')
+ password = String('yourpassword')
+
+ # Initialize connection object.
+ client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password)
+ puts 'Successfully created connection to database.'
+
+ # Read data
+ resultSet = client.query('SELECT * from inventory;')
+ resultSet.each do |row|
+ puts 'Data row = (%s, %s, %s)' % [row['id'], row['name'], row['quantity']]
+ end
+ puts 'Read ' + resultSet.count.to_s + ' row(s).'
+
+# Error handling
+
+rescue Exception => e
+ puts e.message
+
+# Cleanup
+
+ensure
+ client.close if client
+ puts 'Done.'
+end
+```
+
+## Update data
+
+Use the following code to connect and update the data by using an **UPDATE** SQL statement.
+
+The code uses the [mysql2](https://rubygems.org/gems/mysql2) gem's `Mysql2::Client.new()` method to connect to Azure Database for MySQL. It then calls the `query()` method to run the UPDATE commands and the `close()` method to close the connection before terminating.
+
+Replace the `host`, `database`, `username`, and `password` strings with your own values.
+
+```ruby
+require 'mysql2'
+
+begin
+ # Initialize connection variables.
+ host = String('mydemoserver.mysql.database.azure.com')
+ database = String('quickstartdb')
+ username = String('myadmin@mydemoserver')
+ password = String('yourpassword')
+
+ # Initialize connection object.
+ client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password)
+ puts 'Successfully created connection to database.'
+
+ # Update data
+ client.query('UPDATE inventory SET quantity = %d WHERE name = %s;' % [200, '\'banana\''])
+ puts 'Updated 1 row of data.'
+
+# Error handling
+
+rescue Exception => e
+ puts e.message
+
+# Cleanup
+
+ensure
+ client.close if client
+ puts 'Done.'
+end
+```
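+
+The example above interpolates values directly into the SQL string. When values come from user input, a parameterized statement is safer. A minimal sketch using the mysql2 gem's prepared statements, reusing the connection object from the example:
+
+```ruby
+# Sketch: bind values instead of interpolating them into the SQL text.
+statement = client.prepare('UPDATE inventory SET quantity = ? WHERE name = ?')
+statement.execute(200, 'banana')
+```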
+
+## Delete data
+
+Use the following code to connect and delete data by using a **DELETE** SQL statement.
+
+The code uses a [mysql2::client](https://rubygems.org/gems/mysql2/) class to connect to MySQL server, run the DELETE command and then close the connection to the server.
+
+Replace the `host`, `database`, `username`, and `password` strings with your own values.
+
+```ruby
+require 'mysql2'
+
+begin
+ # Initialize connection variables.
+ host = String('mydemoserver.mysql.database.azure.com')
+ database = String('quickstartdb')
+ username = String('myadmin@mydemoserver')
+ password = String('yourpassword')
+
+ # Initialize connection object.
+ client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password)
+ puts 'Successfully created connection to database.'
+
+ # Delete data
+ resultSet = client.query('DELETE FROM inventory WHERE name = %s;' % ['\'orange\''])
+ puts 'Deleted 1 row.'
+
+# Error handling
+
+rescue Exception => e
+ puts e.message
+
+# Cleanup
+
+ensure
+ client.close if client
+ puts 'Done.'
+end
+```
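+
+The script prints a fixed message whether or not a row matched. To report the number of rows actually removed, the mysql2 client exposes an `affected_rows` method; a small sketch:
+
+```ruby
+# Sketch: report how many rows the DELETE actually removed.
+client.query("DELETE FROM inventory WHERE name = 'orange';")
+puts "Deleted #{client.affected_rows} row(s)."
+```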
+
+## Clean up resources
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Migrate your database using Export and Import](./concepts-migrate-import-export.md)
+
+> [!div class="nextstepaction"]
+> [Learn more about the mysql2 client](https://rubygems.org/gems/mysql2)
mysql Connect Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-workbench.md
+
+ Title: 'Quickstart: Connect - MySQL Workbench - Azure Database for MySQL'
+description: This Quickstart provides the steps to use MySQL Workbench to connect and query data from Azure Database for MySQL.
++++++ Last updated : 5/26/2020++
+# Quickstart: Use MySQL Workbench to connect and query data in Azure Database for MySQL
++
+This quickstart demonstrates how to connect to an Azure Database for MySQL using the MySQL Workbench application.
+
+## Prerequisites
+
+This quickstart uses the resources created in either of these guides as a starting point:
+- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)
+- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)
+
+> [!IMPORTANT]
+> Ensure the IP address you're connecting from has been added the server's firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md)
+
+## Install MySQL Workbench
+Download and install MySQL Workbench on your computer from [the MySQL website](https://dev.mysql.com/downloads/workbench/).
+
+## Get connection information
+Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
+
+1. Log in to the [Azure portal](https://portal.azure.com/).
+
+2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
+
+3. Click the server name.
+
+4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
+ :::image type="content" source="./media/connect-php/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
+
+## Connect to the server by using MySQL Workbench
+To connect to Azure MySQL Server by using the GUI tool MySQL Workbench:
+
+1. Launch the MySQL Workbench application on your computer.
+
+2. In the **Setup New Connection** dialog box, enter the following information on the **Parameters** tab:
+
+| **Setting** | **Suggested value** | **Field description** |
+||||
+| Connection Name | Demo Connection | Specify a label for this connection. |
+| Connection Method | Standard (TCP/IP) | Standard (TCP/IP) is sufficient. |
+| Hostname | *server name* | Specify the server name value that was used when you created the Azure Database for MySQL earlier. Our example server shown is mydemoserver.mysql.database.azure.com. Use the fully qualified domain name (\*.mysql.database.azure.com) as shown in the example. Follow the steps in the previous section to get the connection information if you do not remember your server name. |
+| Port | 3306 | Always use port 3306 when connecting to Azure Database for MySQL. |
+| Username | *server admin login name* | Type in the server admin login username supplied when you created the Azure Database for MySQL earlier. Our example username is myadmin@mydemoserver. Follow the steps in the previous section to get the connection information if you do not remember the username. The format is *username\@servername*. |
+| Password | your password | Click the **Store in Vault...** button to save the password. |
+
+3. Click **Test Connection** to test if all parameters are correctly configured.
+
+4. Click **OK** to save the connection.
+
+5. In the listing of **MySQL Connections**, click the tile corresponding to your server, and then wait for the connection to be established.
+
+ A new SQL tab opens with a blank editor where you can type your queries.
+
+ > [!NOTE]
+ > By default, SSL connection security is required and enforced on your Azure Database for MySQL server. Although typically no additional configuration with SSL certificates is required for MySQL Workbench to connect to your server, we recommend binding the SSL CA certification with MySQL Workbench. For more information on how to download and bind the certification, see [Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](./how-to-configure-ssl.md). If you need to disable SSL, visit the Azure portal and click the Connection security page to disable the Enforce SSL connection toggle button.
+
+## Create a table, insert data, read data, update data, delete data
+1. Copy and paste the sample SQL code into a blank SQL tab to work with some sample data.
+
+ This code creates an empty database named quickstartdb, and then creates a sample table named inventory. It inserts some rows, then reads the rows. It changes the data with an update statement, and reads the rows again. Finally, it deletes a row, and then reads the rows again.
+
+ ```sql
+ -- Create a database
+ -- DROP DATABASE IF EXISTS quickstartdb;
+ CREATE DATABASE quickstartdb;
+ USE quickstartdb;
+
+ -- Create a table and insert rows
+ DROP TABLE IF EXISTS inventory;
+ CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);
+ INSERT INTO inventory (name, quantity) VALUES ('banana', 150);
+ INSERT INTO inventory (name, quantity) VALUES ('orange', 154);
+ INSERT INTO inventory (name, quantity) VALUES ('apple', 100);
+
+ -- Read
+ SELECT * FROM inventory;
+
+ -- Update
+ UPDATE inventory SET quantity = 200 WHERE id = 1;
+ SELECT * FROM inventory;
+
+ -- Delete
+ DELETE FROM inventory WHERE id = 2;
+ SELECT * FROM inventory;
+ ```
+
+ The screenshot shows an example of the SQL code in MySQL Workbench and the output after it has been run.
+
+ :::image type="content" source="media/connect-workbench/3-workbench-sql-tab.png" alt-text="MySQL Workbench SQL Tab to run sample SQL code":::
+
+2. To run the sample SQL code, click the lightning bolt icon in the toolbar of the **SQL File** tab.
+3. Notice the three tabbed results in the **Result Grid** section in the middle of the page.
+4. Notice the **Output** list at the bottom of the page. The status of each command is shown.
+
+Now, you have connected to Azure Database for MySQL by using MySQL Workbench, and you have queried data using the SQL language.
+
+## Clean up resources
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Migrate your database using Export and Import](./concepts-migrate-import-export.md)
mysql How To Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-alert-on-metric.md
+
+ Title: Configure metric alerts - Azure portal - Azure Database for MySQL
+description: This article describes how to configure and access metric alerts for Azure Database for MySQL from the Azure portal.
+++++ Last updated : 3/18/2020++
+# Use the Azure portal to set up alerts on metrics for Azure Database for MySQL
++
+This article shows you how to set up Azure Database for MySQL alerts using the Azure portal. You can receive an alert based on monitoring metrics for your Azure services.
+
+The alert triggers when the value of a specified metric crosses a threshold you assign. It triggers both when the condition is first met and again later when that condition is no longer being met.
+
+You can configure an alert to do the following actions when it triggers:
+* Send email notifications to the service administrator and co-administrators.
+* Send email to additional email addresses that you specify.
+* Call a webhook.
+
+You can configure and get information about alert rules using:
+* [Azure portal](../../azure-monitor/alerts/alerts-metric.md#create-with-azure-portal)
+* [Azure CLI](../../azure-monitor/alerts/alerts-metric.md#with-azure-cli) (see the sketch after this list)
+* [Azure Monitor REST API](/rest/api/monitor/metricalerts)
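+
+As a sketch of the CLI route, the following command creates a rule roughly equivalent to the portal steps below. It assumes the example server and resource group names used elsewhere in these articles and an existing action group; the `<action-group-resource-id>` placeholder is yours to fill in:
+
+```azurecli
+# Sketch: alert when average storage consumption exceeds 85 percent.
+az monitor metrics alert create \
+    --name mysql-storage-alert \
+    --resource-group myresourcegroup \
+    --scopes $(az mysql server show --resource-group myresourcegroup --name mydemoserver --query id -o tsv) \
+    --condition "avg storage_percent > 85" \
+    --window-size 30m \
+    --evaluation-frequency 5m \
+    --action <action-group-resource-id>
+```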
+
+## Create an alert rule on a metric from the Azure portal
+1. In the [Azure portal](https://portal.azure.com/), select the Azure Database for MySQL server you want to monitor.
+
+2. Under the **Monitoring** section of the sidebar, select **Alerts** as shown:
+
+ :::image type="content" source="./media/how-to-alert-on-metric/2-alert-rules.png" alt-text="Select Alert Rules":::
+
+3. Select **Add metric alert** (+ icon).
+
+4. The **Create rule** page opens as shown below. Fill in the required information:
+
+ :::image type="content" source="./media/how-to-alert-on-metric/4-add-rule-form.png" alt-text="Add metric alert form":::
+
+5. Within the **Condition** section, select **Add condition**.
+
+6. Select a metric from the list of signals to be alerted on. In this example, select "Storage percent".
+
+ :::image type="content" source="./media/how-to-alert-on-metric/6-configure-signal-logic.png" alt-text="Select metric":::
+
+7. Configure the alert logic including the **Condition** (ex. "Greater than"), **Threshold** (ex. 85 percent), **Time Aggregation**, **Period** of time the metric rule must be satisfied before the alert triggers (ex. "Over the last 30 minutes"), and **Frequency**.
+
+ Select **Done** when complete.
+
+ :::image type="content" source="./media/how-to-alert-on-metric/7-set-threshold-time.png" alt-text="Select metric 2":::
+
+8. Within the **Action Groups** section, select **Create New** to create a new group to receive notifications on the alert.
+
+9. Fill out the "Add action group" form with a name, short name, subscription, and resource group.
+
+10. Configure an **Email/SMS/Push/Voice** action type.
+
+ Choose "Email Azure Resource Manager Role" to select subscription Owners, Contributors, and Readers to receive notifications.
+
+ Optionally, provide a valid URI in the **Webhook** field if you want it called when the alert fires.
+
+ Select **OK** when completed.
+
+ :::image type="content" source="./media/how-to-alert-on-metric/10-action-group-type.png" alt-text="Action group":::
+
+11. Specify an Alert rule name, Description, and Severity.
+
+ :::image type="content" source="./media/how-to-alert-on-metric/11-name-description-severity.png" alt-text="Action group 2":::
+
+12. Select **Create alert rule** to create the alert.
+
+ Within a few minutes, the alert is active and triggers as previously described.
+
+## Manage your alerts
+Once you have created an alert, you can select it and do the following actions:
+
+* View a graph showing the metric threshold and the actual values from the previous day relevant to this alert.
+* **Edit** or **Delete** the alert rule.
+* **Disable** or **Enable** the alert, if you want to temporarily stop or resume receiving notifications.
++
+## Next steps
+* Learn more about [configuring webhooks in alerts](../../azure-monitor/alerts/alerts-webhooks.md).
+* Get an [overview of metrics collection](../../azure-monitor/data-platform.md) to make sure your service is available and responsive.
mysql How To Auto Grow Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-cli.md
+
+ Title: Auto grow storage - Azure CLI - Azure Database for MySQL
+description: This article describes how you can enable auto grow storage using the Azure CLI in Azure Database for MySQL.
+++++ Last updated : 3/18/2020 ++
+# Auto-grow Azure Database for MySQL storage using the Azure CLI
+
+This article describes how you can configure Azure Database for MySQL server storage to grow without impacting the workload.
+
+A server that [reaches the storage limit](./concepts-pricing-tiers.md#reaching-the-storage-limit) is set to read-only. If storage auto grow is enabled, then for servers with less than 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage falls below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space falls below 10 GB. For example, a server provisioned with 50 GB grows by 5 GB once free space drops below 5 GB, while a server provisioned with 1 TB grows by 5% (roughly 50 GB) once free space drops below 10 GB. Maximum storage limits as specified [here](./concepts-pricing-tiers.md#storage) apply.
+
+## Prerequisites
+
+To complete this how-to guide:
+
+- You need an [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-cli.md).
+
+- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Enable MySQL server storage auto-grow
+
+Enable server auto-grow storage on an existing server with the following command:
+
+```azurecli-interactive
+az mysql server update --name mydemoserver --resource-group myresourcegroup --auto-grow Enabled
+```
+
+Enable server auto-grow storage while creating a new server with the following command:
+
+```azurecli-interactive
+az mysql server create --resource-group myresourcegroup --name mydemoserver --auto-grow Enabled --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2 --version 5.7
+```
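+
+To confirm the setting took effect, you can inspect the server's storage profile; a sketch using the same example names:
+
+```azurecli-interactive
+# Sketch: the storageProfile block in the output reports the auto-grow state.
+az mysql server show --resource-group myresourcegroup --name mydemoserver --query storageProfile
+```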
+
+## Next steps
+
+Learn about [how to create alerts on metrics](how-to-alert-on-metric.md).
mysql How To Auto Grow Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-portal.md
+
+ Title: Auto grow storage - Azure portal - Azure Database for MySQL
+description: This article describes how you can enable auto grow storage for Azure Database for MySQL using Azure portal
+++++ Last updated : 3/18/2020+
+# Auto grow storage in Azure Database for MySQL using the Azure portal
+
+This article describes how you can configure Azure Database for MySQL server storage to grow without impacting the workload.
+
+When a server reaches the allocated storage limit, it is marked as read-only. However, if you enable storage auto grow, the server storage increases to accommodate the growing data. For servers with less than 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage falls below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space falls below 10 GB. Maximum storage limits as specified [here](./concepts-pricing-tiers.md#storage) apply.
+
+## Prerequisites
+To complete this how-to guide, you need:
+- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md)
+
+## Enable storage auto grow
+
+Follow these steps to set MySQL server storage auto grow:
+
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL server.
+
+2. On the MySQL server page, under the **Settings** heading, click **Pricing tier** to open the Pricing tier page.
+
+3. In the Auto-growth section, select **Yes** to enable storage auto grow.
+
+ :::image type="content" source="./media/how-to-auto-grow-storage-portal/3-auto-grow.png" alt-text="Azure Database for MySQL - Settings_Pricing_tier - Auto-growth":::
+
+4. Click **OK** to save the changes.
+
+5. A notification will confirm that auto grow was successfully enabled.
+
+ :::image type="content" source="./media/how-to-auto-grow-storage-portal/5-auto-grow-success.png" alt-text="Azure Database for MySQL - auto-growth success":::
+
+## Next steps
+
+Learn about [how to create alerts on metrics](how-to-alert-on-metric.md).
mysql How To Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-powershell.md
+
+ Title: Auto grow storage - Azure PowerShell - Azure Database for MySQL
+description: This article describes how you can enable auto grow storage using PowerShell in Azure Database for MySQL.
+++++ Last updated : 4/28/2020 ++
+# Auto grow storage in Azure Database for MySQL server using PowerShell
++
+This article describes how you can configure Azure Database for MySQL server storage to grow
+without impacting the workload.
+
+Storage auto grow prevents your server from
+[reaching the storage limit](./concepts-pricing-tiers.md#reaching-the-storage-limit) and
+becoming read-only. For servers with 100 GB or less of provisioned storage, the size is increased by
+5 GB when the free space is below 10%. For servers with more than 100 GB of provisioned storage, the
+size is increased by 5% when the free space is below 10 GB. Maximum storage limits apply as
+specified in the storage section of the
+[Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md#storage).
+
+> [!IMPORTANT]
+> Remember that storage can only be scaled up, not down.
+
+## Prerequisites
+
+To complete this how-to guide, you need:
+
+- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+ [Azure Cloud Shell](https://shell.azure.com/) in the browser
+- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)
+
+> [!IMPORTANT]
+> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az
+> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`.
+> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az
+> PowerShell module releases and available natively from within Azure Cloud Shell.
+
+If you choose to use PowerShell locally, connect to your Azure account using the
+[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet.
+
+## Enable MySQL server storage auto grow
+
+Enable server auto grow storage on an existing server with the following command:
+
+```azurepowershell-interactive
+Update-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -StorageAutogrow Enabled
+```
+
+Enable server auto grow storage while creating a new server with the following command:
+
+```azurepowershell-interactive
+$Password = Read-Host -Prompt 'Please enter your password' -AsSecureString
+New-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -Sku GP_Gen5_2 -StorageAutogrow Enabled -Location westus -AdministratorUsername myadmin -AdministratorLoginPassword $Password
+```
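+
+To confirm the setting, you can read it back from the server object. A sketch, assuming the returned object exposes a `StorageAutogrow` property mirroring the `-StorageAutogrow` parameter:
+
+```azurepowershell-interactive
+# Sketch: read the auto grow state back from the server.
+Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
+    Select-Object -Property Name, StorageAutogrow
+```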
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to create and manage read replicas in Azure Database for MySQL using PowerShell](how-to-read-replicas-powershell.md).
mysql How To Configure Audit Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-audit-logs-cli.md
+
+ Title: Access audit logs - Azure CLI - Azure Database for MySQL
+description: This article describes how to configure and access the audit logs in Azure Database for MySQL from the Azure CLI.
++++++ Last updated : 6/24/2020 ++
+# Configure and access audit logs in the Azure CLI
++
+You can configure the [Azure Database for MySQL audit logs](concepts-audit-logs.md) from the Azure CLI.
+
+## Prerequisites
+
+To step through this how-to guide:
+
+- You need an [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md).
+
+- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Configure audit logging
+
+> [!IMPORTANT]
+> It is recommended to only log the event types and users required for your auditing purposes to ensure your server's performance is not heavily impacted.
+
+Enable and configure audit logging using the following steps:
+
+1. Turn on audit logs by setting the **audit_log_enabled** parameter to "ON".
+ ```azurecli-interactive
+ az mysql server configuration set --name audit_log_enabled --resource-group myresourcegroup --server mydemoserver --value ON
+ ```
+
+2. Select the [event types](concepts-audit-logs.md#configure-audit-logging) to be logged by updating the **audit_log_events** parameter.
+ ```azurecli-interactive
+ az mysql server configuration set --name audit_log_events --resource-group myresourcegroup --server mydemoserver --value "ADMIN,CONNECTION"
+ ```
+
+3. Add any MySQL users to be excluded from logging by updating the **audit_log_exclude_users** parameter. Specify users by providing their MySQL user name.
+ ```azurecli-interactive
+ az mysql server configuration set --name audit_log_exclude_users --resource-group myresourcegroup --server mydemoserver --value "azure_superuser"
+ ```
+
+4. Add any specific MySQL users to be included for logging by updating the **audit_log_include_users** parameter. Specify users by providing their MySQL user name.
+
+ ```azurecli-interactive
+ az mysql server configuration set --name audit_log_include_users --resource-group myresourcegroup --server mydemoserver --value "sampleuser"
+ ```
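+
+To verify a setting, you can read its current value back; a sketch using the same example names:
+
+```azurecli-interactive
+# Sketch: show the current value of the audit_log_enabled parameter.
+az mysql server configuration show --name audit_log_enabled --resource-group myresourcegroup --server mydemoserver
+```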
+
+## Next steps
+- Learn more about [audit logs](concepts-audit-logs.md) in Azure Database for MySQL
+- Learn how to configure audit logs in the [Azure portal](how-to-configure-audit-logs-portal.md)
mysql How To Configure Audit Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-audit-logs-portal.md
+
+ Title: Access audit logs - Azure portal - Azure Database for MySQL
+description: This article describes how to configure and access the audit logs in Azure Database for MySQL from the Azure portal.
+++++ Last updated : 9/29/2020++
+# Configure and access audit logs for Azure Database for MySQL in the Azure portal
++
+You can configure the [Azure Database for MySQL audit logs](concepts-audit-logs.md) and diagnostic settings from the Azure portal.
+
+## Prerequisites
+
+To step through this how-to guide, you need:
+
+- [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md)
+
+## Configure audit logging
+
+>[!IMPORTANT]
+> It is recommended to only log the event types and users required for your auditing purposes to ensure your server's performance is not heavily impacted.
+
+Enable and configure audit logging.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Select your Azure Database for MySQL server.
+
+1. Under the **Settings** section in the sidebar, select **Server parameters**.
+ :::image type="content" source="./media/how-to-configure-audit-logs-portal/server-parameters.png" alt-text="Server parameters":::
+
+1. Update the **audit_log_enabled** parameter to ON.
+ :::image type="content" source="./media/how-to-configure-audit-logs-portal/audit-log-enabled.png" alt-text="Enable audit logs":::
+
+1. Select the [event types](concepts-audit-logs.md#configure-audit-logging) to be logged by updating the **audit_log_events** parameter.
+ :::image type="content" source="./media/how-to-configure-audit-logs-portal/audit-log-events.png" alt-text="Audit log events":::
+
+1. Add any MySQL users to be included or excluded from logging by updating the **audit_log_exclude_users** and **audit_log_include_users** parameters. Specify users by providing their MySQL user name.
+ :::image type="content" source="./media/how-to-configure-audit-logs-portal/audit-log-exclude-users.png" alt-text="Audit log exclude users":::
+
+1. Once you have changed the parameters, you can click **Save**. Or you can **Discard** your changes.
+ :::image type="content" source="./media/how-to-configure-audit-logs-portal/save-parameters.png" alt-text="Save":::
+
+## Set up diagnostic logs
+
+1. Under the **Monitoring** section in the sidebar, select **Diagnostic settings**.
+
+1. Click on "+ Add diagnostic setting"
+
+1. Provide a diagnostic setting name.
+
+1. Specify which data sinks the audit logs should be sent to (storage account, event hub, and/or Log Analytics workspace).
+
+1. Select "MySqlAuditLogs" as the log type.
+
+1. Once you've configured the data sinks to pipe the audit logs to, you can click **Save**.
+
+1. Access the audit logs by exploring them in the data sinks you configured. It may take up to 10 minutes for the logs to appear.
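+
+The same diagnostic setting can be scripted. A sketch, assuming a Log Analytics workspace as the sink and the example server names used in the companion CLI article; the workspace ID placeholder is yours to fill in:
+
+```azurecli
+# Sketch: route MySqlAuditLogs to a Log Analytics workspace.
+az monitor diagnostic-settings create \
+    --name mysql-audit-diagnostics \
+    --resource $(az mysql server show --resource-group myresourcegroup --name mydemoserver --query id -o tsv) \
+    --workspace <log-analytics-workspace-id> \
+    --logs '[{"category":"MySqlAuditLogs","enabled":true}]'
+```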
+
+## Next steps
+
+- Learn more about [audit logs](concepts-audit-logs.md) in Azure Database for MySQL
+- Learn how to configure audit logs in the [Azure CLI](how-to-configure-audit-logs-cli.md)
mysql How To Configure Private Link Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-private-link-cli.md
+
+ Title: Private Link - Azure CLI - Azure Database for MySQL
+description: Learn how to configure private link for Azure Database for MySQL from Azure CLI
++++++ Last updated : 01/09/2020++
+# Create and manage Private Link for Azure Database for MySQL using CLI
++
+A Private Endpoint is the fundamental building block for private link in Azure. It enables Azure resources, like Virtual Machines (VMs), to communicate privately with private link resources. In this article, you will learn how to use the Azure CLI to create a VM in an Azure Virtual Network and an Azure Database for MySQL server with an Azure private endpoint.
+
+> [!NOTE]
+> The private link feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers.
++
+- This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Create a resource group
+
+Before you can create any resource, you have to create a resource group to host the Virtual Network. Create a resource group with [az group create](/cli/azure/group). This example creates a resource group named *myResourceGroup* in the *westeurope* location:
+
+```azurecli-interactive
+az group create --name myResourceGroup --location westeurope
+```
+
+## Create a Virtual Network
+
+Create a Virtual Network with [az network vnet create](/cli/azure/network/vnet). This example creates a default Virtual Network named *myVirtualNetwork* with one subnet named *mySubnet*:
+
+```azurecli-interactive
+az network vnet create \
+ --name myVirtualNetwork \
+ --resource-group myResourceGroup \
+ --subnet-name mySubnet
+```
+
+## Disable subnet private endpoint policies
+
+Azure deploys resources to a subnet within a virtual network, so you need to create or update the subnet to disable private endpoint [network policies](../../private-link/disable-private-endpoint-network-policy.md). Update a subnet configuration named *mySubnet* with [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update):
+
+```azurecli-interactive
+az network vnet subnet update \
+ --name mySubnet \
+ --resource-group myResourceGroup \
+ --vnet-name myVirtualNetwork \
+ --disable-private-endpoint-network-policies true
+```
+
+## Create the VM
+
+Create a VM with az vm create. When prompted, provide a password to be used as the sign-in credentials for the VM. This example creates a VM named *myVm*:
+
+```azurecli-interactive
+az vm create \
+ --resource-group myResourceGroup \
+ --name myVm \
+ --image Win2019Datacenter
+```
+
+> [!Note]
+> Note the public IP address of the VM in the output. You use this address to connect to the VM from the internet in the next step.
+
+## Create an Azure Database for MySQL server
+
+Create an Azure Database for MySQL server with the az mysql server create command. Remember that the name of your MySQL server must be unique across Azure, so replace the placeholder value in brackets with your own unique value:
+
+```azurecli-interactive
+# Create a server in the resource group
+
+az mysql server create \
+--name mydemoserver \
+--resource-group myResourcegroup \
+--location westeurope \
+--admin-user mylogin \
+--admin-password <server_admin_password> \
+--sku-name GP_Gen5_2
+```
+
+> [!NOTE]
+> In some cases the Azure Database for MySQL and the VNet-subnet are in different subscriptions. In these cases you must ensure the following configurations:
+>
+> - Make sure that both subscriptions have the **Microsoft.DBforMySQL** resource provider registered. For more information, see [resource-manager-registration][resource-manager-portal].
+
+## Create the Private Endpoint
+
+Create a private endpoint for the MySQL server in your Virtual Network:
+
+```azurecli-interactive
+az network private-endpoint create \
+ --name myPrivateEndpoint \
+ --resource-group myResourceGroup \
+ --vnet-name myVirtualNetwork \
+ --subnet mySubnet \
+ --private-connection-resource-id $(az resource show -g myResourcegroup -n mydemoserver --resource-type "Microsoft.DBforMySQL/servers" --query "id" -o tsv) \
+ --group-id mysqlServer \
+ --connection-name myConnection
+ ```
+
+## Configure the Private DNS Zone
+
+Create a Private DNS Zone for MySQL server domain and create an association link with the Virtual Network.
+
+```azurecli-interactive
+az network private-dns zone create --resource-group myResourceGroup \
+ --name "privatelink.mysql.database.azure.com"
+az network private-dns link vnet create --resource-group myResourceGroup \
+ --zone-name "privatelink.mysql.database.azure.com" \
+ --name MyDNSLink \
+ --virtual-network myVirtualNetwork \
+ --registration-enabled false
+
+# Query for the network interface ID
+networkInterfaceId=$(az network private-endpoint show --name myPrivateEndpoint --resource-group myResourceGroup --query 'networkInterfaces[0].id' -o tsv)
+
+az resource show --ids $networkInterfaceId --api-version 2019-04-01 -o json
+# Copy the content for privateIPAddress and FQDN matching the Azure database for MySQL name
+
+# Create DNS records
+az network private-dns record-set a create --name mydemoserver --zone-name privatelink.mysql.database.azure.com --resource-group myResourceGroup
+az network private-dns record-set a add-record --record-set-name mydemoserver --zone-name privatelink.mysql.database.azure.com --resource-group myResourceGroup -a <Private IP Address>
+```
+
+> [!NOTE]
+> The FQDN in the customer DNS setting does not resolve to the private IP configured. You will have to set up a DNS zone for the configured FQDN as shown [here](../../dns/dns-operations-recordsets-portal.md).
+
+## Connect to a VM from the internet
+
+Connect to the VM *myVm* from the internet as follows:
+
+1. In the portal's search bar, enter *myVm*.
+
+1. Select the **Connect** button. After selecting the **Connect** button, **Connect to virtual machine** opens.
+
+1. Select **Download RDP File**. Azure creates a Remote Desktop Protocol (*.rdp*) file and downloads it to your computer.
+
+1. Open the downloaded *.rdp* file.
+
+ 1. If prompted, select **Connect**.
+
+ 1. Enter the username and password you specified when creating the VM.
+
+ > [!NOTE]
+ > You may need to select **More choices** > **Use a different account**, to specify the credentials you entered when you created the VM.
+
+1. Select **OK**.
+
+1. You may receive a certificate warning during the sign-in process. If you receive a certificate warning, select **Yes** or **Continue**.
+
+1. Once the VM desktop appears, minimize it to go back to your local desktop.
+
+## Access the MySQL server privately from the VM
+
+1. In the Remote Desktop of *myVM*, open PowerShell.
+
+2. Enter `nslookup mydemoserver.privatelink.mysql.database.azure.com`.
+
+ You'll receive a message similar to this:
+
+ ```azurepowershell
+ Server: UnKnown
+ Address: 168.63.129.16
+ Non-authoritative answer:
+ Name: mydemoserver.privatelink.mysql.database.azure.com
+ Address: 10.1.3.4
+ ```
+
+3. Test the private link connection for the MySQL server using any available client. The following example uses [MySQL Workbench](https://dev.mysql.com/doc/workbench/en/wb-installing-windows.html).
+
+4. In **New connection**, enter or select this information:
+
+ | Setting | Value |
+ | - | -- |
+ | Connection Name| Enter a connection name of your choice.|
+ | Hostname | Enter *mydemoserver.privatelink.mysql.database.azure.com* |
+ | Username | Enter the username as *username@servername*, which you provided during MySQL server creation. |
+ | Password | Enter the password you provided during MySQL server creation. |
+ ||
+
+5. Select **Connect**.
+
+6. Browse databases from the left menu.
+
+7. (Optional) Create or query information from the MySQL database.
+
+8. Close the remote desktop connection to myVm.
+
+## Clean up resources
+
+When no longer needed, you can use az group delete to remove the resource group and all the resources it contains:
+
+```azurecli-interactive
+az group delete --name myResourceGroup --yes
+```
+
+## Next steps
+
+- Learn more about [What is Azure private endpoint](../../private-link/private-endpoint-overview.md)
+
+<!-- Link references, to text, Within this same GitHub repo. -->
+[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
mysql How To Configure Private Link Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-private-link-portal.md
+
+ Title: Private Link - Azure portal - Azure Database for MySQL
+description: Learn how to configure private link for Azure Database for MySQL from Azure portal
+++++ Last updated : 01/09/2020++
+# Create and manage Private Link for Azure Database for MySQL using Portal
++
+A Private Endpoint is the fundamental building block for private link in Azure. It enables Azure resources, like Virtual Machines (VMs), to communicate privately with private link resources. In this article, you will learn how to use the Azure portal to create a VM in an Azure Virtual Network and an Azure Database for MySQL server with an Azure private endpoint.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+> [!NOTE]
+> The private link feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers.
+
+## Sign in to Azure
+Sign in to the [Azure portal](https://portal.azure.com).
+
+## Create an Azure VM
+
+In this section, you will create the virtual network and subnet to host the VM that is used to access your Private Link resource (a MySQL server in Azure).
+
+### Create the virtual network
+In this section, you will create a Virtual Network and the subnet to host the VM that is used to access your Private Link resource.
+
+1. On the upper-left side of the screen, select **Create a resource** > **Networking** > **Virtual network**.
+2. In **Create virtual network**, enter or select this information:
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter *MyVirtualNetwork*. |
+ | Address space | Enter *10.1.0.0/16*. |
+ | Subscription | Select your subscription.|
+ | Resource group | Select **Create new**, enter *myResourceGroup*, then select **OK**. |
+ | Location | Select **West Europe**.|
+ | Subnet - Name | Enter *mySubnet*. |
+ | Subnet - Address range | Enter *10.1.0.0/24*. |
+ |||
+3. Leave the rest as default and select **Create**.
+
+### Create Virtual Machine
+
+1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Compute** > **Virtual Machine**.
+
+2. In **Create a virtual machine - Basics**, enter or select this information:
+
+ | Setting | Value |
+ | - | -- |
+ | **PROJECT DETAILS** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroup**. You created this in the previous section. |
+ | **INSTANCE DETAILS** | |
+ | Virtual machine name | Enter *myVm*. |
+ | Region | Select **West Europe**. |
+ | Availability options | Leave the default **No infrastructure redundancy required**. |
+ | Image | Select **Windows Server 2019 Datacenter**. |
+ | Size | Leave the default **Standard DS1 v2**. |
+ | **ADMINISTRATOR ACCOUNT** | |
+ | Username | Enter a username of your choosing. |
+ | Password | Enter a password of your choosing. The password must be at least 12 characters long and meet the [defined complexity requirements](../../virtual-machines/windows/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).|
+ | Confirm Password | Reenter password. |
+ | **INBOUND PORT RULES** | |
+ | Public inbound ports | Leave the default **None**. |
+ | **SAVE MONEY** | |
+ | Already have a Windows license? | Leave the default **No**. |
+ |||
+
+1. Select **Next: Disks**.
+
+1. In **Create a virtual machine - Disks**, leave the defaults and select **Next: Networking**.
+
+1. In **Create a virtual machine - Networking**, select this information:
+
+ | Setting | Value |
+ | - | -- |
+ | Virtual network | Leave the default **MyVirtualNetwork**. |
+ | Address space | Leave the default **10.1.0.0/24**.|
+ | Subnet | Leave the default **mySubnet (10.1.0.0/24)**.|
+ | Public IP | Leave the default **(new) myVm-ip**. |
+ | Public inbound ports | Select **Allow selected ports**. |
+ | Select inbound ports | Select **HTTP** and **RDP**.|
+ |||
+
+1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
+
+1. When you see the **Validation passed** message, select **Create**.
+
+## Create an Azure Database for MySQL
+
+In this section, you will create an Azure Database for MySQL server in Azure.
+
+1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Databases** > **Azure Database for MySQL**.
+
+1. In **Azure Database for MySQL**, provide this information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroup**. You created this in the previous section.|
+ | **Server details** | |
+ |Server name | Enter *myServer*. If this name is taken, create a unique name.|
+ | Admin username| Enter an administrator name of your choosing. |
+ | Password | Enter a password of your choosing. The password must be at least 8 characters long and meet the defined requirements. |
+ | Location | Select an Azure region where you want your MySQL server to reside. |
+ |Version | Select the database version of the MySQL server that is required.|
+ | Compute + Storage| Select the pricing tier that is needed for the server based on the workload. |
+ |||
+
+7. Select **OK**.
+8. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
+9. When you see the Validation passed message, select **Create**.
+
+> [!NOTE]
+> In some cases the Azure Database for MySQL and the VNet-subnet are in different subscriptions. In these cases you must ensure the following configurations:
+> - Make sure that both subscriptions have the **Microsoft.DBforMySQL** resource provider registered. For more information, see [resource-manager-registration][resource-manager-portal].
+
+## Create a private endpoint
+
+In this section, you will add a private endpoint to the MySQL server you created earlier.
+
+1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Networking** > **Private Link**.
+
+2. In **Private Link Center - Overview**, on the option to **Build a private connection to a service**, select **Start**.
+
+ :::image type="content" source="media/concepts-data-access-and-security-private-link/private-link-overview.png" alt-text="Private Link overview":::
+
+1. In **Create a private endpoint - Basics**, enter or select this information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroup**. You created this in the previous section.|
+ | **Instance Details** | |
+ | Name | Enter *myPrivateEndpoint*. If this name is taken, create a unique name. |
+ |Region|Select **West Europe**.|
+ |||
+
+5. Select **Next: Resource**.
+6. In **Create a private endpoint - Resource**, enter or select this information:
+
+ | Setting | Value |
+ | - | -- |
+ |Connection method | Select **Connect to an Azure resource in my directory**.|
+ | Subscription| Select your subscription. |
+ | Resource type | Select **Microsoft.DBforMySQL/servers**. |
+ | Resource |Select *myServer*|
+ |Target sub-resource |Select *mysqlServer*|
+ |||
+7. Select **Next: Configuration**.
+8. In **Create a private endpoint - Configuration**, enter or select this information:
+
+ | Setting | Value |
+ | - | -- |
+ |**NETWORKING**| |
+ | Virtual network| Select *MyVirtualNetwork*. |
+ | Subnet | Select *mySubnet*. |
+ |**PRIVATE DNS INTEGRATION**||
+ |Integrate with private DNS zone |Select **Yes**. |
+ |Private DNS Zone |Select *(New)privatelink.mysql.database.azure.com* |
+ |||
+
+ > [!Note]
+ > Use the predefined private DNS zone for your service or provide your preferred DNS zone name. Refer to the [Azure services DNS zone configuration](../../private-link/private-endpoint-dns.md) for details.
+
+1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
+2. When you see the **Validation passed** message, select **Create**.
+
+ :::image type="content" source="media/concepts-data-access-and-security-private-link/show-mysql-private-link.png" alt-text="Private Link created":::
+
+ > [!NOTE]
+ > The FQDN in the customer DNS setting does not resolve to the private IP configured. You will have to set up a DNS zone for the configured FQDN as shown [here](../../dns/dns-operations-recordsets-portal.md).
+
+## Connect to a VM using Remote Desktop (RDP)
++
+After you've created **myVm**, connect to it from the internet as follows:
+
+1. In the portal's search bar, enter *myVm*.
+
+1. Select the **Connect** button. After selecting the **Connect** button, **Connect to virtual machine** opens.
+
+1. Select **Download RDP File**. Azure creates a Remote Desktop Protocol (*.rdp*) file and downloads it to your computer.
+
+1. Open the downloaded *.rdp* file.
+
+ 1. If prompted, select **Connect**.
+
+ 1. Enter the username and password you specified when creating the VM.
+
+ > [!NOTE]
+ > You may need to select **More choices** > **Use a different account**, to specify the credentials you entered when you created the VM.
+
+1. Select **OK**.
+
+1. You may receive a certificate warning during the sign-in process. If you receive a certificate warning, select **Yes** or **Continue**.
+
+1. Once the VM desktop appears, minimize it to go back to your local desktop.
+
+## Access the MySQL server privately from the VM
+
+1. In the Remote Desktop of *myVM*, open PowerShell.
+
+2. Enter `nslookup myServer.privatelink.mysql.database.azure.com`.
+
+ You'll receive a message similar to this:
+ ```azurepowershell
+ Server: UnKnown
+ Address: 168.63.129.16
+ Non-authoritative answer:
+ Name: myServer.privatelink.mysql.database.azure.com
+ Address: 10.1.3.4
+ ```
+ > [!NOTE]
+ > Even if public access is disabled in the firewall settings of Azure Database for MySQL - Single Server, these ping and telnet tests will succeed regardless of the firewall settings. The tests validate network connectivity only.
+
+3. Test the private link connection for the MySQL server using any available client. The following example uses [MySQL Workbench](https://dev.mysql.com/doc/workbench/en/wb-installing-windows.html).
+
+4. In **New connection**, enter or select this information:
+
+ | Setting | Value |
+ | - | -- |
+ | Server type| Select **MySQL**.|
+ | Server name| Enter *myServer.privatelink.mysql.database.azure.com* |
+ | User name | Enter the username as *username@servername*, which you provided during MySQL server creation. |
+ |Password |Enter the password you provided during MySQL server creation. |
+ |SSL|Select **Required**.|
+ ||
+
+5. Select **Connect**.
+
+6. Browse databases from the left menu.
+
+7. (Optional) Create or query information from the MySQL server.
+
+8. Close the remote desktop connection to myVm.
+
+## Clean up resources
+When you're done using the private endpoint, MySQL server, and the VM, delete the resource group and all of the resources it contains:
+
+1. Enter *myResourceGroup* in the **Search** box at the top of the portal and select *myResourceGroup* from the search results.
+2. Select **Delete resource group**.
+3. Enter myResourceGroup for **TYPE THE RESOURCE GROUP NAME** and select **Delete**.
+
+## Next steps
+
+In this how-to, you created a VM on a virtual network, an Azure Database for MySQL, and a private endpoint for private access. You connected to one VM from the internet and securely communicated to the MySQL server using Private Link. To learn more about private endpoints, see [What is Azure private endpoint](../../private-link/private-endpoint-overview.md).
+
+<!-- Link references, to text, Within this same GitHub repo. -->
+[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
mysql How To Configure Server Logs In Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-logs-in-cli.md
+
+ Title: Access slow query logs - Azure CLI - Azure Database for MySQL
+description: This article describes how to access the slow query logs in Azure Database for MySQL by using the Azure CLI.
++++
+ms.devlang: azurecli
+ Last updated : 4/13/2020 ++
+# Configure and access slow query logs by using Azure CLI
+
+You can download the Azure Database for MySQL slow query logs by using Azure CLI, the Azure command-line utility.
+
+## Prerequisites
+To step through this how-to guide, you need:
+- [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-cli.md)
+- The [Azure CLI](/cli/azure/install-azure-cli) or Azure Cloud Shell in the browser
+
+## Configure logging
+You can configure the server to access the MySQL slow query log by taking the following steps:
+1. Turn on slow query logging by setting the **slow\_query\_log** parameter to ON.
+2. Select where to output the logs using **log\_output**. To send logs to both local storage and Azure Monitor Diagnostic Logs, select **File**. To send logs only to Azure Monitor Logs, select **None**.
+3. Adjust other parameters, such as **long\_query\_time** and **log\_slow\_admin\_statements**.
+
+To learn how to set the value of these parameters through Azure CLI, see [How to configure server parameters](how-to-configure-server-parameters-using-cli.md).
+
+For example, the following CLI command turns on the slow query log, sets the long query time to 10 seconds, and then turns off the logging of the slow admin statement. Finally, it lists the configuration options for your review.
+```azurecli-interactive
+az mysql server configuration set --name slow_query_log --resource-group myresourcegroup --server mydemoserver --value ON
+az mysql server configuration set --name log_output --resource-group myresourcegroup --server mydemoserver --value FILE
+az mysql server configuration set --name long_query_time --resource-group myresourcegroup --server mydemoserver --value 10
+az mysql server configuration set --name log_slow_admin_statements --resource-group myresourcegroup --server mydemoserver --value OFF
+az mysql server configuration list --resource-group myresourcegroup --server mydemoserver
+```
+
+## List logs for Azure Database for MySQL server
+If **log_output** is configured to "File", you can access logs directly from the server's local storage. To list the available slow query log files for your server, run the [az mysql server-logs list](/cli/azure/mysql/server-logs#az-mysql-server-logs-list) command.
+
+You can list the log files for server **mydemoserver.mysql.database.azure.com** under the resource group **myresourcegroup**. Then direct the list of log files to a text file called **log\_files\_list.txt**.
+```azurecli-interactive
+az mysql server-logs list --resource-group myresourcegroup --server mydemoserver > log_files_list.txt
+```
+## Download logs from the server
+If **log_output** is configured to "File", you can download individual log files from your server with the [az mysql server-logs download](/cli/azure/mysql/server-logs#az-mysql-server-logs-download) command.
+
+Use the following example to download the specific log file for the server **mydemoserver.mysql.database.azure.com** under the resource group **myresourcegroup** to your local environment.
+```azurecli-interactive
+az mysql server-logs download --name 20170414-mydemoserver-mysql.log --resource-group myresourcegroup --server mydemoserver
+```
+
+## Next steps
+- Learn about [slow query logs in Azure Database for MySQL](concepts-server-logs.md).
mysql How To Configure Server Logs In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-logs-in-portal.md
+
+ Title: Access slow query logs - Azure portal - Azure Database for MySQL
+description: This article describes how to configure and access the slow logs in Azure Database for MySQL from the Azure portal.
+++++ Last updated : 3/15/2021++
+# Configure and access slow query logs from the Azure portal
++
+You can configure, list, and download the [Azure Database for MySQL slow query logs](concepts-server-logs.md) from the Azure portal.
+
+## Prerequisites
+The steps in this article require that you have an [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md).
+
+## Configure logging
+Configure access to the MySQL slow query log.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+2. Select your Azure Database for MySQL server.
+
+3. Under the **Monitoring** section in the sidebar, select **Server logs**.
+ :::image type="content" source="./media/how-to-configure-server-logs-in-portal/1-select-server-logs-configure.png" alt-text="Screenshot of Server logs options":::
+
+4. To see the server parameters, select **Click here to enable logs and configure log parameters**.
+
+5. Turn **slow_query_log** to **ON**.
+
+6. Select where to output the logs to using **log_output**. To send logs to both local storage and Azure Monitor Diagnostic Logs, select **File**.
+
+7. Consider setting **long_query_time**, which represents the query time threshold for the queries collected in the slow query log file. The minimum and default values of **long_query_time** are 0 and 10, respectively.
+
+8. Adjust other parameters, such as **log_slow_admin_statements**, to log administrative statements. By default, administrative statements are not logged, nor are queries that do not use indexes for lookups.
+
+9. Select **Save**.
+
+ :::image type="content" source="./media/how-to-configure-server-logs-in-portal/3-save-discard.png" alt-text="Screenshot of slow query log parameters and save.":::
+
+From the **Server Parameters** page, you can return to the list of logs by closing the page.
+
+## View list and download logs
+After logging begins, you can view a list of available slow query logs, and download individual log files.
+
+1. Open the Azure portal.
+
+2. Select your Azure Database for MySQL server.
+
+3. Under the **Monitoring** section in the sidebar, select **Server logs**. The page shows a list of your log files.
+
+ :::image type="content" source="./media/how-to-configure-server-logs-in-portal/4-server-logs-list.png" alt-text="Screenshot of Server logs page, with list of logs highlighted":::
+
+ > [!TIP]
+ > The naming convention of the log is **mysql-slow-\<your server name>-yyyymmddhh.log**. The date and time in the file name is the time when the log was issued. Log files are rotated every 24 hours or when they reach 7.5 GB, whichever comes first.
+
+4. If needed, use the search box to quickly narrow down to a specific log, based on date and time. The search is on the name of the log.
+
+5. To download individual log files, select the down-arrow icon next to each log file in the table row.
+
+ :::image type="content" source="./media/how-to-configure-server-logs-in-portal/5-download.png" alt-text="Screenshot of Server logs page, with down-arrow icon highlighted":::
+
+## Set up diagnostic logs
+
+1. Under the **Monitoring** section in the sidebar, select **Diagnostic settings** > **Add diagnostic settings**.
+
+ :::image type="content" source="./media/how-to-configure-server-logs-in-portal/add-diagnostic-setting.png" alt-text="Screenshot of Diagnostic settings options":::
+
+2. Provide a diagnostic setting name.
+
+3. Specify the data sinks to send the slow query logs to (a storage account, an event hub, or a Log Analytics workspace).
+
+4. Select **MySqlSlowLogs** as the log type.
+
+5. After you've configured the data sinks to pipe the slow query logs to, select **Save**.
+
+6. Access the slow query logs by exploring them in the data sinks you configured. It can take up to 10 minutes for the logs to appear.
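+
+If you prefer to script this setup, the diagnostic setting can also be created with the Azure CLI. The following is a minimal sketch, assuming a Log Analytics workspace as the sink; the setting name and workspace resource ID are placeholders to replace with your own.
+
+```azurecli-interactive
+# Look up the resource ID of the MySQL server.
+serverId=$(az mysql server show --resource-group myresourcegroup --name mydemoserver --query id --output tsv)
+
+# Send the MySqlSlowLogs category to a Log Analytics workspace (placeholder workspace ID).
+az monitor diagnostic-settings create \
+    --name send-slow-logs \
+    --resource $serverId \
+    --workspace "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/myworkspace" \
+    --logs '[{"category": "MySqlSlowLogs", "enabled": true}]'
+```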
+
+## Next steps
+- See [Access slow query logs in the Azure CLI](how-to-configure-server-logs-in-cli.md) to learn how to download slow query logs programmatically.
+- Learn more about [slow query logs](concepts-server-logs.md) in Azure Database for MySQL.
+- For more information about the parameter definitions and MySQL logging, see the MySQL documentation on [logs](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html).
mysql How To Configure Server Parameters Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-parameters-using-cli.md
+
+ Title: Configure server parameters - Azure CLI - Azure Database for MySQL
+description: This article describes how to configure the service parameters in Azure Database for MySQL using the Azure CLI command line utility.
+ms.devlang: azurecli
+ Last updated: 10/1/2020
+# Configure server parameters in Azure Database for MySQL using the Azure CLI
+
+You can list, show, and update configuration parameters for an Azure Database for MySQL server by using the Azure CLI, the Azure command-line utility. A subset of engine configurations is exposed at the server level and can be modified.
+
+>[!Note]
+> Server parameters can be updated globally at the server level by using the [Azure CLI](./how-to-configure-server-parameters-using-cli.md), [PowerShell](./how-to-configure-server-parameters-using-powershell.md), or the [Azure portal](./how-to-server-parameters.md)
+
+## Prerequisites
+To step through this how-to guide, you need:
+- [An Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-cli.md)
+- The [Azure CLI](/cli/azure/install-azure-cli) command-line utility, or Azure Cloud Shell in the browser
+
+## List server configuration parameters for Azure Database for MySQL server
+To list all modifiable parameters in a server and their values, run the [az mysql server configuration list](/cli/azure/mysql/server/configuration#az-mysql-server-configuration-list) command.
+
+You can list the server configuration parameters for the server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup**.
+```azurecli-interactive
+az mysql server configuration list --resource-group myresourcegroup --server mydemoserver
+```
+For the definition of each of the listed parameters, see the MySQL reference section on [Server System Variables](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html).
+
+## Show server configuration parameter details
+To show details about a particular configuration parameter for a server, run the [az mysql server configuration show](/cli/azure/mysql/server/configuration#az-mysql-server-configuration-show) command.
+
+This example shows details of the **slow\_query\_log** server configuration parameter for server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup**.
+```azurecli-interactive
+az mysql server configuration show --name slow_query_log --resource-group myresourcegroup --server mydemoserver
+```
+## Modify a server configuration parameter value
+You can also modify the value of a certain server configuration parameter, which updates the underlying configuration value for the MySQL server engine. To update the configuration, use the [az mysql server configuration set](/cli/azure/mysql/server/configuration#az-mysql-server-configuration-set) command.
+
+To update the **slow\_query\_log** server configuration parameter of server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup**, run the following command.
+```azurecli-interactive
+az mysql server configuration set --name slow_query_log --resource-group myresourcegroup --server mydemoserver --value ON
+```
+If you want to reset the value of a configuration parameter, omit the optional `--value` parameter, and the service applies the default value. For the example above, it would look like:
+```azurecli-interactive
+az mysql server configuration set --name slow_query_log --resource-group myresourcegroup --server mydemoserver
+```
+This code resets the **slow\_query\_log** configuration to the default value **OFF**.
+
+## Setting parameters not listed
+If the server parameter you want to update is not listed in the Azure portal, you can optionally set the parameter at the connection level using `init_connect`. This sets the server parameters for each client connecting to the server.
+
+Update the **init\_connect** server configuration parameter of server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup** to set values such as character set.
+```azurecli-interactive
+az mysql server configuration set --name init_connect --resource-group myresourcegroup --server mydemoserver --value "SET character_set_client=utf8;SET character_set_database=utf8mb4;SET character_set_connection=latin1;SET character_set_results=latin1;"
+```
+
+## Working with the time zone parameter
+
+### Populating the time zone tables
+
+The time zone tables on your server can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench.
+
+> [!NOTE]
+> If you are running the `mysql.az_load_timezone` command from MySQL Workbench, you may need to turn off safe update mode first using `SET SQL_SAFE_UPDATES=0;`.
+
+```sql
+CALL mysql.az_load_timezone();
+```
+
+> [!IMPORTANT]
+> You should restart the server to ensure the time zone tables are properly populated. To restart the server, use the [Azure portal](how-to-restart-server-portal.md) or [CLI](how-to-restart-server-cli.md).
+
+To view available time zone values, run the following command:
+
+```sql
+SELECT name FROM mysql.time_zone_name;
+```
+
+### Setting the global level time zone
+
+The global level time zone can be set using the [az mysql server configuration set](/cli/azure/mysql/server/configuration#az-mysql-server-configuration-set) command.
+
+The following command updates the **time\_zone** server configuration parameter of server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup** to **US/Pacific**.
+
+```azurecli-interactive
+az mysql server configuration set --name time_zone --resource-group myresourcegroup --server mydemoserver --value "US/Pacific"
+```
+
+### Setting the session level time zone
+
+The session level time zone can be set by running the `SET time_zone` command from a tool like the MySQL command line or MySQL Workbench. The example below sets the time zone to the **US/Pacific** time zone.
+
+```sql
+SET time_zone = 'US/Pacific';
+```
+
+Refer to the MySQL documentation for [Date and Time Functions](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_convert-tz).
+## Next steps
+
+- How to configure [server parameters in Azure portal](how-to-server-parameters.md)
mysql How To Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-parameters-using-powershell.md
+
+ Title: Configure server parameters - Azure PowerShell - Azure Database for MySQL
+description: This article describes how to configure the service parameters in Azure Database for MySQL using PowerShell.
+ms.devlang: azurepowershell
+ Last updated: 10/1/2020
+# Configure server parameters in Azure Database for MySQL using PowerShell
+You can list, show, and update configuration parameters for an Azure Database for MySQL server using
+PowerShell. A subset of engine configurations is exposed at the server level and can be modified.
+
+>[!Note]
+> Server parameters can be updated globally at the server level by using the [Azure CLI](./how-to-configure-server-parameters-using-cli.md), [PowerShell](./how-to-configure-server-parameters-using-powershell.md), or the [Azure portal](./how-to-server-parameters.md).
+
+## Prerequisites
+
+To complete this how-to guide, you need:
+
+- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+ [Azure Cloud Shell](https://shell.azure.com/) in the browser
+- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)
+
+> [!IMPORTANT]
+> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az
+> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`.
+> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az
+> PowerShell module releases and available natively from within Azure Cloud Shell.
+
+If you choose to use PowerShell locally, connect to your Azure account using the
+[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet.
+## List server configuration parameters for Azure Database for MySQL server
+
+To list all modifiable parameters in a server and their values, run the `Get-AzMySqlConfiguration`
+cmdlet.
+
+The following example lists the server configuration parameters for the server **mydemoserver** in
+resource group **myresourcegroup**.
+
+```azurepowershell-interactive
+Get-AzMySqlConfiguration -ResourceGroupName myresourcegroup -ServerName mydemoserver
+```
+
+For the definition of each of the listed parameters, see the MySQL reference section on
+[Server System Variables](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html).
+
+## Show server configuration parameter details
+
+To show details about a particular configuration parameter for a server, run the
+`Get-AzMySqlConfiguration` cmdlet and specify the **Name** parameter.
+
+This example shows details of the **slow\_query\_log** server configuration parameter for server
+**mydemoserver** under resource group **myresourcegroup**.
+
+```azurepowershell-interactive
+Get-AzMySqlConfiguration -Name slow_query_log -ResourceGroupName myresourcegroup -ServerName mydemoserver
+```
+
+## Modify a server configuration parameter value
+
+You can also modify the value of a certain server configuration parameter, which updates the
+underlying configuration value for the MySQL server engine. To update the configuration, use the
+`Update-AzMySqlConfiguration` cmdlet.
+
+To update the **slow\_query\_log** server configuration parameter of server
+**mydemoserver** under resource group **myresourcegroup**, run the following command.
+
+```azurepowershell-interactive
+Update-AzMySqlConfiguration -Name slow_query_log -ResourceGroupName myresourcegroup -ServerName mydemoserver -Value On
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Auto grow storage in Azure Database for MySQL server using PowerShell](how-to-auto-grow-storage-powershell.md).
mysql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-sign-in-azure-ad-authentication.md
+
+ Title: Use Azure Active Directory - Azure Database for MySQL
+description: Learn about how to set up Azure Active Directory (Azure AD) for authentication with Azure Database for MySQL
+ Last updated: 07/23/2020
+# Use Azure Active Directory for authentication with MySQL
+This article walks you through the steps to configure Azure Active Directory (Azure AD) access with Azure Database for MySQL, and how to connect using an Azure AD token.
+
+> [!IMPORTANT]
+> Azure Active Directory authentication is only available for MySQL 5.7 and newer.
+
+## Setting the Azure AD Admin user
+
+Only an Azure AD Admin user can create or enable users for Azure AD-based authentication. To create an Azure AD Admin user, follow these steps:
+
+1. In the Azure portal, select the instance of Azure Database for MySQL that you want to enable for Azure AD.
+2. Under **Settings**, select **Active Directory Admin**:
+
+![set azure ad administrator][2]
+
+3. Select a valid Azure AD user in the customer tenant to be the Azure AD administrator.
+
+> [!IMPORTANT]
+> When setting the administrator, a new user is added to the Azure Database for MySQL server with full administrator permissions.
+
+Only one Azure AD admin can be created per MySQL server; selecting another one overwrites the existing Azure AD admin configured for the server.
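+
+You can also set the Azure AD admin from the command line. A minimal sketch using the Azure CLI's `az mysql server ad-admin create` command for single server; the display name and object ID below are placeholders for your own Azure AD user:
+
+```azurecli-interactive
+# Set an Azure AD admin on the server (placeholder display name and object ID).
+az mysql server ad-admin create --resource-group myresourcegroup --server-name mydemoserver \
+    --display-name "user@tenant.onmicrosoft.com" --object-id 00000000-0000-0000-0000-000000000000
+```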
+
+After you configure the administrator, you can sign in, as described in the following section.
+
+## Connecting to Azure Database for MySQL using Azure AD
+
+The following high-level diagram summarizes the workflow of using Azure AD authentication with Azure Database for MySQL:
+
+![authentication flow][1]
+
+We've designed the Azure AD integration to work with common MySQL tools like the mysql CLI, which are not Azure AD aware and only support specifying a username and password when connecting to MySQL. We pass the Azure AD token as the password, as shown in the diagram above.
+
+We have tested the following clients:
+
+- MySQL Workbench
+- MySQL CLI
+
+We have also tested the most common application drivers; you can see details at the end of this article.
+
+The steps a user or application needs to take to authenticate with Azure AD are described below.
+
+### Prerequisites
+
+You can follow along in Azure Cloud Shell, an Azure VM, or on your local machine. Make sure you have the [Azure CLI installed](/cli/azure/install-azure-cli).
+
+### Step 1: Authenticate with Azure AD
+
+Start by authenticating with Azure AD using the Azure CLI tool. This step is not required in Azure Cloud Shell.
+
+```azurecli-interactive
+az login
+```
+
+The command launches a browser window to the Azure AD authentication page, where you enter your Azure AD user ID and password.
+
+### Step 2: Retrieve Azure AD access token
+
+Invoke the Azure CLI tool to acquire an access token for the Azure AD authenticated user from step 1 to access Azure Database for MySQL.
+
+Example (for Public Cloud):
+
+```azurecli-interactive
+az account get-access-token --resource https://ossrdbms-aad.database.windows.net
+```
+The above resource value must be specified exactly as shown. For other clouds, the resource value can be looked up using:
+
+```azurecli-interactive
+az cloud show
+```
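+
+For example, the following sketch extracts just the resource value; it assumes the endpoint is exposed as `ossrdbmsResourceId` in your CLI version:
+
+```azurecli-interactive
+az cloud show --query endpoints.ossrdbmsResourceId --output tsv
+```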
+
+For Azure CLI version 2.0.71 and later, the command can be specified in the following more convenient version for all clouds:
+
+```azurecli-interactive
+az account get-access-token --resource-type oss-rdbms
+```
+Using PowerShell, you can use the following command to acquire an access token:
+
+```azurepowershell-interactive
+$accessToken = Get-AzAccessToken -ResourceUrl https://ossrdbms-aad.database.windows.net
+$accessToken.Token | out-file C:\temp\MySQLAccessToken.txt
+```
+After authentication is successful, Azure AD will return an access token:
+
+```json
+{
+ "accessToken": "TOKEN",
+ "expiresOn": "...",
+ "subscription": "...",
+ "tenant": "...",
+ "tokenType": "Bearer"
+}
+```
+
+The token is a Base64 string that encodes all the information about the authenticated user and is targeted to the Azure Database for MySQL service.
+
+The access token is valid for anywhere between ***5 and 60 minutes***. We recommend you get the access token just before initiating the login to Azure Database for MySQL. You can use the following PowerShell command to see the token validity.
+
+```azurepowershell-interactive
+$accessToken.ExpiresOn.DateTime
+```
+
+### Step 3: Use token as password for logging in with MySQL
+
+When connecting, you need to use the access token as the MySQL user password. When using GUI clients such as MySQL Workbench, you can use the method described above to retrieve the token.
+
+#### Using MySQL CLI
+When using the CLI, you can use this short-hand to connect:
+
+**Example (Linux/macOS):**
+```bash
+mysql -h mydb.mysql.database.azure.com \
+ --user user@tenant.onmicrosoft.com@mydb \
+ --enable-cleartext-plugin \
+ --password=`az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken`
+```
+#### Using MySQL Workbench
+* Launch MySQL Workbench, select the **Database** option, and then select **Connect to database**.
+* In the hostname field, enter the MySQL FQDN, for example, mydb.mysql.database.azure.com.
+* In the username field, enter the MySQL Azure Active Directory administrator name appended with the MySQL server name (not the FQDN), for example, user@tenant.onmicrosoft.com@mydb.
+* In the password field, select **Store in Vault** and paste in the access token from the file, for example, C:\temp\MySQLAccessToken.txt.
+* Select the **Advanced** tab and make sure that **Enable Cleartext Authentication Plugin** is checked.
+* Select **OK** to connect to the database.
+
+#### Important considerations when connecting
+
+* `user@tenant.onmicrosoft.com` is the name of the Azure AD user or group you are trying to connect as
+* Always append the server name after the Azure AD user/group name (e.g. `@mydb`)
+* Make sure to use the exact way the Azure AD user or group name is spelled
+* Azure AD user and group names are case sensitive
+* When connecting as a group, use only the group name (e.g. `GroupName@mydb`)
+* If the name contains spaces, use `\` before each space to escape it
+
+Note the "enable-cleartext-plugin" setting; you need to use a similar configuration with other clients to make sure the token is sent to the server without being hashed.
+
+You are now authenticated to your MySQL server using Azure AD authentication.
+
+## Creating Azure AD users in Azure Database for MySQL
+
+To add an Azure AD user to your Azure Database for MySQL database, perform the following steps after connecting (see the preceding section on how to connect):
+
+1. First ensure that the Azure AD user `<user>@yourtenant.onmicrosoft.com` is a valid user in Azure AD tenant.
+2. Sign in to your Azure Database for MySQL instance as the Azure AD Admin user.
+3. Create user `<user>@yourtenant.onmicrosoft.com` in Azure Database for MySQL.
+
+**Example:**
+
+```sql
+CREATE AADUSER 'user1@yourtenant.onmicrosoft.com';
+```
+
+For user names that exceed 32 characters, we recommend you use an alias instead, to be used when connecting:
+
+Example:
+
+```sql
+CREATE AADUSER 'userWithLongName@yourtenant.onmicrosoft.com' as 'userDefinedShortName';
+```
+> [!NOTE]
+> 1. MySQL ignores leading and trailing spaces, so the user name should not have any leading or trailing spaces.
+> 2. Authenticating a user through Azure AD does not give the user any permissions to access objects within the Azure Database for MySQL database. You must grant the user the required permissions manually.
+
+## Creating Azure AD groups in Azure Database for MySQL
+
+To enable an Azure AD group for access to your database, use the same mechanism as for users, but instead specify the group name:
+
+**Example:**
+
+```sql
+CREATE AADUSER 'Prod_DB_Readonly';
+```
+
+When logging in, members of the group use their personal access tokens but sign in with the group name specified as the username.
+
+## Token Validation
+
+Azure AD authentication in Azure Database for MySQL ensures that the user exists in the MySQL server, and it checks the validity of the token by validating the contents of the token. The following token validation steps are performed:
+
+- Token is signed by Azure AD and has not been tampered with
+- Token was issued by Azure AD for the tenant associated with the server
+- Token has not expired
+- Token is for the Azure Database for MySQL resource (and not another Azure resource)
+
+## Compatibility with application drivers
+
+Most drivers are supported; however, make sure to use the settings for sending the password in clear text, so the token gets sent without modification.
+
+* C/C++
+ * libmysqlclient: Supported
+ * mysql-connector-c++: Supported
+* Java
+ * Connector/J (mysql-connector-java): Supported, must utilize `useSSL` setting
+* Python
+ * Connector/Python: Supported
+* Ruby
+ * mysql2: Supported
+* .NET
+ * mysql-connector-net: Supported, need to add plugin for mysql_clear_password
+ * mysql-net/MySqlConnector: Supported
+* Node.js
+ * mysqljs: Not supported (does not send token in cleartext without patch)
+ * node-mysql2: Supported
+* Perl
+ * DBD::mysql: Supported
+ * Net::MySQL: Not supported
+* Go
+ * go-sql-driver: Supported, add `?tls=true&allowCleartextPasswords=true` to connection string
+
+## Next steps
+
+* Review the overall concepts for [Azure Active Directory authentication with Azure Database for MySQL](concepts-azure-ad-authentication.md)
+
+<!--Image references-->
+
+[1]: ./media/concepts-azure-ad-authentication/authentication-flow.png
+[2]: ./media/concepts-azure-ad-authentication/set-azure-ad-admin.png
mysql How To Configure Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-ssl.md
+
+ Title: Configure SSL - Azure Database for MySQL
+description: Instructions for how to properly configure Azure Database for MySQL and associated applications to correctly use SSL connections
+ms.devlang: csharp, golang, java, javascript, php, python, ruby
+ Last updated: 07/08/2020
+# Configure SSL connectivity in your application to securely connect to Azure Database for MySQL
+Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against "man in the middle" attacks by encrypting the data stream between the server and your application.
+
+## Step 1: Obtain SSL certificate
+
+Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) and save the certificate file to your local drive (this tutorial uses c:\ssl for example).
+**For Microsoft Internet Explorer and Microsoft Edge:** After the download has completed, rename the certificate to BaltimoreCyberTrustRoot.crt.pem.
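+
+On Linux or macOS, for example, you can download the certificate from the command line; a quick sketch using curl (any HTTP client works):
+
+```bash
+# Download the root CA certificate to the current directory.
+curl -L -o BaltimoreCyberTrustRoot.crt.pem https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem
+```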
+
+See the following links for certificates for servers in sovereign clouds: [Azure Government](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem), [Azure China](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt).
+
+## Step 2: Bind SSL
+
+For specific programming language connection strings, please refer to the [sample code](how-to-configure-ssl.md#sample-code) below.
+
+### Connecting to server using MySQL Workbench over SSL
+
+Configure MySQL Workbench to connect securely over SSL.
+
+1. From the Setup New Connection dialogue, navigate to the **SSL** tab.
+
+1. Update the **Use SSL** field to "Require".
+
+1. In the **SSL CA File:** field, enter the file location of the **BaltimoreCyberTrustRoot.crt.pem**.
+
+ :::image type="content" source="./media/how-to-configure-ssl/mysql-workbench-ssl.png" alt-text="Save SSL configuration":::
+
+For existing connections, you can bind SSL by right-clicking the connection icon and choosing **Edit Connection**. Then navigate to the **SSL** tab and bind the cert file.
+
+### Connecting to server using the MySQL CLI over SSL
+
+Another way to bind the SSL certificate is to use the MySQL command-line interface by executing the following commands.
+
+```bash
+mysql.exe -h mydemoserver.mysql.database.azure.com -u Username@mydemoserver -p --ssl-mode=REQUIRED --ssl-ca=c:\ssl\BaltimoreCyberTrustRoot.crt.pem
+```
+
+> [!NOTE]
+> When using the MySQL command-line interface on Windows, you may receive an error `SSL connection error: Certificate signature check failed`. If this occurs, replace the `--ssl-mode=REQUIRED --ssl-ca={filepath}` parameters with `--ssl`.
+
+## Step 3: Enforcing SSL connections in Azure
+
+### Using the Azure portal
+
+Using the Azure portal, visit your Azure Database for MySQL server, and then click **Connection security**. Use the toggle button to enable or disable the **Enforce SSL connection** setting, and then click **Save**. Microsoft recommends always enabling the **Enforce SSL connection** setting for enhanced security.
+### Using Azure CLI
+
+You can enable or disable the **ssl-enforcement** parameter by using the `Enabled` or `Disabled` values, respectively, in the Azure CLI.
+
+```azurecli-interactive
+az mysql server update --resource-group myresource --name mydemoserver --ssl-enforcement Enabled
+```
+
+## Step 4: Verify the SSL connection
+
+Execute the mysql **status** command to verify that you have connected to your MySQL server using SSL:
+
+```dos
+mysql> status
+```
+
+Confirm the connection is encrypted by reviewing the output, which should show: **SSL: Cipher in use is AES256-SHA**
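+
+Alternatively, you can check the negotiated cipher directly with the standard `Ssl_cipher` status variable; a sketch reusing the example server and certificate path from above:
+
+```bash
+mysql -h mydemoserver.mysql.database.azure.com -u Username@mydemoserver -p --ssl-mode=REQUIRED --ssl-ca=c:\ssl\BaltimoreCyberTrustRoot.crt.pem -e "SHOW STATUS LIKE 'Ssl_cipher';"
+```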
+
+## Sample code
+
+To establish a secure connection to Azure Database for MySQL over SSL from your application, refer to the following code samples:
+
+Refer to the list of [compatible drivers](concepts-compatibility.md) supported by the Azure Database for MySQL service.
+
+### PHP
+
+```php
+$conn = mysqli_init();
+mysqli_ssl_set($conn,NULL,NULL, "/var/www/html/BaltimoreCyberTrustRoot.crt.pem", NULL, NULL);
+mysqli_real_connect($conn, 'mydemoserver.mysql.database.azure.com', 'myadmin@mydemoserver', 'yourpassword', 'quickstartdb', 3306, MYSQLI_CLIENT_SSL);
+if (mysqli_connect_errno($conn)) {
+die('Failed to connect to MySQL: '.mysqli_connect_error());
+}
+```
+
+### PHP (Using PDO)
+
+```php
+$options = array(
+ PDO::MYSQL_ATTR_SSL_CA => '/var/www/html/BaltimoreCyberTrustRoot.crt.pem'
+);
+$db = new PDO('mysql:host=mydemoserver.mysql.database.azure.com;port=3306;dbname=databasename', 'username@mydemoserver', 'yourpassword', $options);
+```
+
+### Python (MySQLConnector Python)
+
+```python
+try:
+ conn = mysql.connector.connect(user='myadmin@mydemoserver',
+ password='yourpassword',
+ database='quickstartdb',
+ host='mydemoserver.mysql.database.azure.com',
+ ssl_ca='/var/www/html/BaltimoreCyberTrustRoot.crt.pem')
+except mysql.connector.Error as err:
+ print(err)
+```
+
+### Python (PyMySQL)
+
+```python
+conn = pymysql.connect(user='myadmin@mydemoserver',
+ password='yourpassword',
+ database='quickstartdb',
+ host='mydemoserver.mysql.database.azure.com',
+ ssl={'ca': '/var/www/html/BaltimoreCyberTrustRoot.crt.pem'})
+```
+
+### Django (PyMySQL)
+
+```python
+DATABASES = {
+ 'default': {
+ 'ENGINE': 'django.db.backends.mysql',
+ 'NAME': 'quickstartdb',
+ 'USER': 'myadmin@mydemoserver',
+ 'PASSWORD': 'yourpassword',
+ 'HOST': 'mydemoserver.mysql.database.azure.com',
+ 'PORT': '3306',
+ 'OPTIONS': {
+ 'ssl': {'ca': '/var/www/html/BaltimoreCyberTrustRoot.crt.pem'}
+ }
+ }
+}
+```
+
+### Ruby
+
+```ruby
+client = Mysql2::Client.new(
+ :host => 'mydemoserver.mysql.database.azure.com',
+ :username => 'myadmin@mydemoserver',
+ :password => 'yourpassword',
+ :database => 'quickstartdb',
+ :sslca => '/var/www/html/BaltimoreCyberTrustRoot.crt.pem'
+ )
+```
+
+### Golang
+
+```go
+rootCertPool := x509.NewCertPool()
+pem, _ := ioutil.ReadFile("/var/www/html/BaltimoreCyberTrustRoot.crt.pem")
+if ok := rootCertPool.AppendCertsFromPEM(pem); !ok {
+ log.Fatal("Failed to append PEM.")
+}
+mysql.RegisterTLSConfig("custom", &tls.Config{RootCAs: rootCertPool})
+connectionString := fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true&tls=custom", "myadmin@mydemoserver", "yourpassword", "mydemoserver.mysql.database.azure.com", "quickstartdb")
+db, _ := sql.Open("mysql", connectionString)
+```
+
+### Java (MySQL Connector for Java)
+
+```java
+// Generate the truststore and keystore in code.
+
+String importCert = " -import "+
+ " -alias mysqlServerCACert "+
+ " -file " + ssl_ca +
+ " -keystore truststore "+
+ " -trustcacerts " +
+ " -storepass password -noprompt ";
+String genKey = " -genkey -keyalg rsa " +
+ " -alias mysqlClientCertificate -keystore keystore " +
+ " -storepass password123 -keypass password " +
+ " -dname CN=MS ";
+sun.security.tools.keytool.Main.main(importCert.trim().split("\\s+"));
+sun.security.tools.keytool.Main.main(genKey.trim().split("\\s+"));
+
+// Use the generated keystore and truststore.
+
+System.setProperty("javax.net.ssl.keyStore","path_to_keystore_file");
+System.setProperty("javax.net.ssl.keyStorePassword","password");
+System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file");
+System.setProperty("javax.net.ssl.trustStorePassword","password");
+
+url = String.format("jdbc:mysql://%s/%s?serverTimezone=UTC&useSSL=true", "mydemoserver.mysql.database.azure.com", "quickstartdb");
+properties.setProperty("user", "myadmin@mydemoserver");
+properties.setProperty("password", "yourpassword");
+conn = DriverManager.getConnection(url, properties);
+```
+
+### Java (MariaDB Connector for Java)
+
+```java
+// Generate the truststore and keystore in code.
+
+String importCert = " -import "+
+ " -alias mysqlServerCACert "+
+ " -file " + ssl_ca +
+ " -keystore truststore "+
+ " -trustcacerts " +
+ " -storepass password -noprompt ";
+String genKey = " -genkey -keyalg rsa " +
+ " -alias mysqlClientCertificate -keystore keystore " +
+ " -storepass password123 -keypass password " +
+ " -dname CN=MS ";
+sun.security.tools.keytool.Main.main(importCert.trim().split("\\s+"));
+sun.security.tools.keytool.Main.main(genKey.trim().split("\\s+"));
+
+// Use the generated keystore and truststore.
+System.setProperty("javax.net.ssl.keyStore","path_to_keystore_file");
+System.setProperty("javax.net.ssl.keyStorePassword","password");
+System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file");
+System.setProperty("javax.net.ssl.trustStorePassword","password");
+
+url = String.format("jdbc:mariadb://%s/%s?useSSL=true&trustServerCertificate=true", "mydemoserver.mysql.database.azure.com", "quickstartdb");
+properties.setProperty("user", "myadmin@mydemoserver");
+properties.setProperty("password", "yourpassword");
+conn = DriverManager.getConnection(url, properties);
+```
+
+### .NET (MySqlConnector)
+
+```csharp
+var builder = new MySqlConnectionStringBuilder
+{
+ Server = "mydemoserver.mysql.database.azure.com",
+ UserID = "myadmin@mydemoserver",
+ Password = "yourpassword",
+ Database = "quickstartdb",
+ SslMode = MySqlSslMode.VerifyCA,
+ SslCa = "BaltimoreCyberTrustRoot.crt.pem",
+};
+using (var connection = new MySqlConnection(builder.ConnectionString))
+{
+ connection.Open();
+}
+```
+
+### Node.js
+
+```node
+var fs = require('fs');
+var mysql = require('mysql');
+const serverCa = [fs.readFileSync("/var/www/html/BaltimoreCyberTrustRoot.crt.pem", "utf8")];
+var conn=mysql.createConnection({
+ host:"mydemoserver.mysql.database.azure.com",
+ user:"myadmin@mydemoserver",
+ password:"yourpassword",
+ database:"quickstartdb",
+ port:3306,
+ ssl: {
+ rejectUnauthorized: true,
+ ca: serverCa
+ }
+});
+conn.connect(function(err) {
+ if (err) throw err;
+});
+```
+
+## Next steps
+
+* To learn about certificate expiry and rotation, refer to the [certificate rotation documentation](concepts-certificate-rotation.md)
+* Review the various application connectivity options in [Connection libraries for Azure Database for MySQL](concepts-connection-libraries.md)
mysql How To Connect Overview Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connect-overview-single-server.md
+
+ Title: Connect and query - Single Server MySQL
+description: Links to quickstarts showing how to connect to your Azure Database for MySQL Single Server and run queries.
+ Last updated: 09/22/2020
+# Connect and query overview for Azure database for MySQL- Single Server
+This document includes links to examples showing how to connect and query Azure Database for MySQL Single Server. It also includes TLS recommendations and libraries that you can use to connect to the server in the supported languages below.
+
+## Quickstarts
+
+| Quickstart | Description |
+|||
+|[MySQL Workbench](connect-workbench.md)|This quickstart demonstrates how to use MySQL Workbench Client to connect to a database. You can then use MySQL statements to query, insert, update, and delete data in the database.|
+|[Azure Cloud Shell](./quickstart-create-mysql-server-database-using-azure-cli.md#connect-to-azure-database-for-mysql-server-using-mysql-command-line-client)|This article shows how to run **mysql.exe** in [Azure Cloud Shell](../../cloud-shell/overview.md) to connect to your server and then run statements to query, insert, update, and delete data in the database.|
+|[MySQL with Visual Studio](https://www.mysql.com/why-mysql/windows/visualstudio)|You can use MySQL for Visual Studio to connect to your MySQL server. MySQL for Visual Studio integrates directly into Server Explorer, making it easy to set up new connections and work with database objects.|
+|[PHP](connect-php.md)|This quickstart demonstrates how to use PHP to create a program to connect to a database and use MySQL statements to query data.|
+|[Java](connect-java.md)|This quickstart demonstrates how to use Java to connect to a database and then use MySQL statements to query data.|
+|[Node.js](connect-nodejs.md)|This quickstart demonstrates how to use Node.js to create a program to connect to a database and use MySQL statements to query data.|
+|[.NET (C#)](connect-csharp.md)|This quickstart demonstrates how to use .NET (C#) to create a C# program to connect to a database and use MySQL statements to query data.|
+|[Go](connect-go.md)|This quickstart demonstrates how to use Go to connect to a database. SQL statements to query and modify data are also demonstrated.|
+|[Python](connect-python.md)|This quickstart demonstrates how to use Python to connect to a database and use MySQL statements to query data. |
+|[Ruby](connect-ruby.md)|This quickstart demonstrates how to use Ruby to create a program to connect to a database and use MySQL statements to query data.|
+|[C++](connect-cpp.md)|This quickstart demonstrates how to use C++ to create a program to connect to a database and query data.|
+
+## TLS considerations for database connectivity
+
+Transport Layer Security (TLS) is used by all drivers that Microsoft supplies or supports for connecting to databases in Azure Database for MySQL. No special configuration is necessary, but do enforce TLS 1.2 for newly created servers. If you are using TLS 1.0 or 1.1, we recommend updating the TLS version for your servers. See [How to configure TLS](how-to-tls-configurations.md)
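+
+As a sketch, TLS 1.2 enforcement can also be scripted; this assumes the `--minimal-tls-version` parameter of `az mysql server update` and example resource names:
+
+```azurecli-interactive
+# Require TLS 1.2 or later for connections to the server.
+az mysql server update --resource-group myresourcegroup --name mydemoserver --minimal-tls-version TLS1_2
+```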
+
+## Libraries
+
+Azure Database for MySQL uses the world's most popular community edition of MySQL. Hence, it is compatible with a wide variety of programming languages and drivers. The goal is to support the three most recent versions of MySQL drivers, and efforts with authors from the open-source community to constantly improve the functionality and usability of MySQL drivers continue.
+
+See what [drivers](concepts-compatibility.md) are compatible with Azure Database for MySQL Single server.
+
+## Next steps
+
+- [Migrate data using dump and restore](concepts-migrate-dump-restore.md)
+- [Migrate data using import and export](concepts-migrate-import-export.md)
mysql How To Connect Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connect-webapp.md
+
+ Title: Connect to Azure App Service - Azure Database for MySQL
+description: Instructions for how to properly connect an existing Azure App Service to Azure Database for MySQL
+ Last updated: 3/18/2020
+# Connect an existing Azure App Service to Azure Database for MySQL server
+
+This topic explains how to connect an existing Azure App Service to your Azure Database for MySQL server.
+
+## Before you begin
+Sign in to the [Azure portal](https://portal.azure.com). Create an Azure Database for MySQL server. For details, refer to [How to create Azure Database for MySQL server from Portal](quickstart-create-mysql-server-database-using-azure-portal.md) or [How to create Azure Database for MySQL server using CLI](quickstart-create-mysql-server-database-using-azure-cli.md).
+
+Currently there are two solutions to enable access from an Azure App Service to an Azure Database for MySQL. Both solutions involve setting up server-level firewall rules.
+
+## Solution 1 - Allow Azure services
+Azure Database for MySQL provides access security using a firewall to protect your data. When connecting from an Azure App Service to Azure Database for MySQL server, keep in mind that the outbound IPs of App Service are dynamic in nature. Choosing the "Allow access to Azure services" option will allow the app service to connect to the MySQL server.
+
+1. On the MySQL server blade, under the Settings heading, click **Connection Security** to open the Connection Security blade for Azure Database for MySQL.
+
+ :::image type="content" source="./media/how-to-connect-webapp/1-connection-security.png" alt-text="Azure portal - click Connection Security":::
+
+2. Select **ON** in **Allow access to Azure services**, then **Save**.
+ :::image type="content" source="./media/how-to-connect-webapp/allow-azure.png" alt-text="Azure portal - Allow Azure access":::
+
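+The same setting can be scripted. A minimal Azure CLI sketch, assuming the convention that a firewall rule spanning 0.0.0.0 to 0.0.0.0 enables access from Azure services (the rule name is a placeholder):
+
+```azurecli-interactive
+az mysql server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver \
+    --name AllowAllAzureIps --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
+```
+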
+## Solution 2 - Create a firewall rule to explicitly allow outbound IPs
+You can explicitly add all the outbound IPs of your Azure App Service.
+
+1. On the App Service Properties blade, view your **OUTBOUND IP ADDRESS**.
+
+ :::image type="content" source="./media/how-to-connect-webapp/2-1-outbound-ip-address.png" alt-text="Azure portal - View outbound IPs":::
+
+2. On the MySQL Connection security blade, add outbound IPs one by one.
+
+ :::image type="content" source="./media/how-to-connect-webapp/2-2-add-explicit-ips.png" alt-text="Azure portal - Add explicit IPs":::
+
+3. Remember to **Save** your firewall rules.
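+
+If you prefer scripting, the following Azure CLI sketch lists the outbound IPs of a hypothetical app named **myappservice** and adds a firewall rule for one of them (203.0.113.5 is a documentation example address):
+
+```azurecli-interactive
+# List the outbound IP addresses of the App Service.
+az webapp show --resource-group myresourcegroup --name myappservice --query outboundIpAddresses --output tsv
+
+# Add a firewall rule for each returned address, for example:
+az mysql server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver \
+    --name AllowAppServiceIp1 --start-ip-address 203.0.113.5 --end-ip-address 203.0.113.5
+```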
+
+Though Azure App Service attempts to keep IP addresses constant over time, there are cases where the IP addresses may change; for example, when the app recycles, when a scale operation occurs, or when new computers are added in Azure regional data centers to increase capacity. When the IP addresses change, the app could experience downtime in the event it can no longer connect to the MySQL server. Keep this consideration in mind when choosing one of the preceding solutions.
+
+## SSL configuration
+Azure Database for MySQL has SSL enabled by default. If your application is not using SSL to connect to the database, then you need to disable SSL on the MySQL server. For details on how to configure SSL, see [Using SSL with Azure Database for MySQL](how-to-configure-ssl.md).
+
+### Django (PyMySQL)
+```python
+DATABASES = {
+ 'default': {
+ 'ENGINE': 'django.db.backends.mysql',
+ 'NAME': 'quickstartdb',
+ 'USER': 'myadmin@mydemoserver',
+ 'PASSWORD': 'yourpassword',
+ 'HOST': 'mydemoserver.mysql.database.azure.com',
+ 'PORT': '3306',
+ 'OPTIONS': {
+            'ssl': {'ca': '/var/www/html/BaltimoreCyberTrustRoot.crt.pem'}
+ }
+ }
+}
+```
+
+## Next steps
+For more information about connection strings, refer to [Connection Strings](how-to-connection-string.md).
mysql How To Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connect-with-managed-identity.md
+
+ Title: Connect with Managed Identity - Azure Database for MySQL
+description: Learn about how to connect and authenticate using Managed Identity for authentication with Azure Database for MySQL
+ Last updated: 05/19/2020
+# Connect with Managed Identity to Azure Database for MySQL
+This article shows you how to use a user-assigned identity for an Azure Virtual Machine (VM) to access an Azure Database for MySQL server. Managed Service Identities are automatically managed by Azure and enable you to authenticate to services that support Azure AD authentication, without needing to insert credentials into your code.
+
+You learn how to:
+
+- Grant your VM access to an Azure Database for MySQL server
+- Create a user in the database that represents the VM's user-assigned identity
+- Get an access token using the VM identity and use it to query an Azure Database for MySQL server
+- Implement the token retrieval in a C# example application
+
+> [!IMPORTANT]
+> Connecting with Managed Identity is only available for MySQL 5.7 and newer.
+
+## Prerequisites
+
+- If you're not familiar with the managed identities for Azure resources feature, see this [overview](../../../articles/active-directory/managed-identities-azure-resources/overview.md). If you don't have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue.
+- To do the required resource creation and role management, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../../articles/role-based-access-control/role-assignments-portal.md).
+- You need an Azure VM (for example, running Ubuntu Linux) that you'd like to use to access your database using Managed Identity
+- You need an Azure Database for MySQL database server that has [Azure AD authentication](how-to-configure-sign-in-azure-ad-authentication.md) configured
+- To follow the C# example, first complete the guide [Connect using C#](connect-csharp.md)
+
+## Creating a user-assigned managed identity for your VM
+
+Create an identity in your subscription using the [az identity create](/cli/azure/identity#az-identity-create) command. You can use the same resource group that your virtual machine runs in, or a different one.
+
+```azurecli-interactive
+az identity create --resource-group myResourceGroup --name myManagedIdentity
+```
+
+To configure the identity in the following steps, use the [az identity show](/cli/azure/identity#az-identity-show) command to store the identity's resource ID and client ID in variables.
+
+```azurecli
+# Get resource ID of the user-assigned identity
+
+resourceID=$(az identity show --resource-group myResourceGroup --name myManagedIdentity --query id --output tsv)
+
+# Get client ID of the user-assigned identity
+
+clientID=$(az identity show --resource-group myResourceGroup --name myManagedIdentity --query clientId --output tsv)
+```
+
+We can now assign the user-assigned identity to the VM with the [az vm identity assign](/cli/azure/vm/identity#az-vm-identity-assign) command:
+
+```azurecli
+az vm identity assign --resource-group myResourceGroup --name myVM --identities $resourceID
+```
+
+To finish setup, show the value of the Client ID, which you'll need in the next few steps:
+
+```bash
+echo $clientID
+```
+
+## Creating a MySQL user for your Managed Identity
+
+Now, connect as the Azure AD administrator user to your MySQL database, and run the following SQL statements:
+
+```sql
+SET aad_auth_validate_oids_in_tenant = OFF;
+CREATE AADUSER 'myuser' IDENTIFIED BY 'CLIENT_ID';
+```
+
+The managed identity now has access when authenticating with the username `myuser` (replace with a name of your choice).
+
+## Retrieving the access token from Azure Instance Metadata service
+
+Your application can now retrieve an access token from the Azure Instance Metadata service and use it for authenticating with the database.
+
+This token retrieval is done by making an HTTP request to `http://169.254.169.254/metadata/identity/oauth2/token` and passing the following parameters:
+
+- `api-version` = `2018-02-01`
+- `resource` = `https://ossrdbms-aad.database.windows.net`
+- `client_id` = `CLIENT_ID` (that you retrieved earlier)
+
+You'll get back a JSON result that contains an `access_token` field. This long text value is the Managed Identity access token, which you should use as the password when connecting to the database.
+
+For testing purposes, you can run the following commands in your shell. Note that you need `curl`, `jq`, and the `mysql` client installed.
+
+```bash
+# Retrieve the access token
+
+accessToken=$(curl -s 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=CLIENT_ID' -H Metadata:true | jq -r .access_token)
+
+# Connect to the database
+
+mysql -h SERVER --user USER@SERVER --enable-cleartext-plugin --password=$accessToken
+```
+
+You are now connected to the database you've configured earlier.
+
+## Connecting using Managed Identity in C#
+
+This section shows how to get an access token using the VM's user-assigned managed identity and use it to call Azure Database for MySQL. Azure Database for MySQL natively supports Azure AD authentication, so it can directly accept access tokens obtained using managed identities for Azure resources. When creating a connection to MySQL, you pass the access token in the password field.
+
+Here's a .NET code example of opening a connection to MySQL using an access token. This code must run on the VM to access the VM's user-assigned managed identity's endpoint. .NET Framework 4.6 or higher or .NET Core 2.2 or higher is required to use the access token method. Replace the values of HOST, USER, DATABASE, and CLIENT_ID.
+
+```csharp
+using System;
+using System.Net;
+using System.IO;
+using System.Collections;
+using System.Collections.Generic;
+using System.Text.Json;
+using System.Text.Json.Serialization;
+using System.Threading.Tasks;
+using MySql.Data.MySqlClient;
+
+namespace Driver
+{
+ class Script
+ {
+ // Obtain connection string information from the portal
+ //
+ private static string Host = "HOST";
+ private static string User = "USER";
+ private static string Database = "DATABASE";
+ private static string ClientId = "CLIENT_ID";
+
+ static async Task Main(string[] args)
+ {
+ //
+ // Get an access token for MySQL.
+ //
+ Console.Out.WriteLine("Getting access token from Azure Instance Metadata service...");
+
+ // Azure AD resource ID for Azure Database for MySQL is https://ossrdbms-aad.database.windows.net/
+ HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=" + ClientId);
+ request.Headers["Metadata"] = "true";
+ request.Method = "GET";
+ string accessToken = null;
+
+ try
+ {
+ // Call managed identities for Azure resources endpoint.
+ HttpWebResponse response = (HttpWebResponse)request.GetResponse();
+
+ // Pipe response Stream to a StreamReader and extract access token.
+ StreamReader streamResponse = new StreamReader(response.GetResponseStream());
+ string stringResponse = streamResponse.ReadToEnd();
+ var list = JsonSerializer.Deserialize<Dictionary<string, string>>(stringResponse);
+ accessToken = list["access_token"];
+ }
+ catch (Exception e)
+ {
+ Console.Out.WriteLine("{0} \n\n{1}", e.Message, e.InnerException != null ? e.InnerException.Message : "Acquire token failed");
+ System.Environment.Exit(1);
+ }
+
+ //
+ // Open a connection to the MySQL server using the access token.
+ //
+ var builder = new MySqlConnectionStringBuilder
+ {
+ Server = Host,
+ Database = Database,
+ UserID = User,
+ Password = accessToken,
+ SslMode = MySqlSslMode.Required,
+ };
+
+ using (var conn = new MySqlConnection(builder.ConnectionString))
+ {
+ Console.Out.WriteLine("Opening connection using access token...");
+ await conn.OpenAsync();
+
+ using (var command = conn.CreateCommand())
+ {
+ command.CommandText = "SELECT VERSION()";
+
+ using (var reader = await command.ExecuteReaderAsync())
+ {
+ while (await reader.ReadAsync())
+ {
+ Console.WriteLine("\nConnected!\n\nMySQL version: {0}", reader.GetString(0));
+ }
+ }
+ }
+ }
+ }
+ }
+}
+```
+
+When run, this program gives an output like this:
+
+```
+Getting access token from Azure Instance Metadata service...
+Opening connection using access token...
+
+Connected!
+
+MySQL version: 5.7.27
+```
+
+## Next steps
+
+- Review the overall concepts for [Azure Active Directory authentication with Azure Database for MySQL](concepts-azure-ad-authentication.md)
mysql How To Connection String Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connection-string-powershell.md
+
+ Title: Generate a connection string with PowerShell - Azure Database for MySQL
+description: This article provides an Azure PowerShell example to generate a connection string for connecting to Azure Database for MySQL.
+ Last updated: 8/5/2020
+# How to generate an Azure Database for MySQL connection string with PowerShell
+This article demonstrates how to generate a connection string for an Azure Database for MySQL
+server. You can use a connection string to connect to an Azure Database for MySQL from many
+different applications.
+
+## Requirements
+
+This article uses the resources created in the following guide as a starting point:
+
+* [Quickstart: Create an Azure Database for MySQL server using PowerShell](quickstart-create-mysql-server-database-using-azure-powershell.md)
+
+## Get the connection string
+
+The `Get-AzMySqlConnectionString` cmdlet is used to generate a connection string for connecting
+applications to Azure Database for MySQL. The following example returns the connection string for a
+PHP client from **mydemoserver**.
+
+```azurepowershell-interactive
+Get-AzMySqlConnectionString -Client PHP -Name mydemoserver -ResourceGroupName myresourcegroup
+```
+
+```Output
+$con=mysqli_init();mysqli_ssl_set($con, NULL, NULL, {ca-cert filename}, NULL, NULL); mysqli_real_connect($con, "mydemoserver.mysql.database.azure.com", "myadmin@mydemoserver", {your_password}, {your_database}, 3306);
+```
+
+Valid values for the `Client` parameter include:
+
+* ADO&#46;NET
+* JDBC
+* Node.js
+* PHP
+* Python
+* Ruby
+* WebApp
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Customize Azure Database for MySQL server parameters using PowerShell](how-to-configure-server-parameters-using-powershell.md)
mysql How To Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connection-string.md
+
+ Title: Connection strings - Azure Database for MySQL
+description: This document lists the currently supported connection strings for applications to connect with Azure Database for MySQL, including ADO.NET (C#), JDBC, Node.js, ODBC, PHP, Python, and Ruby.
+ Last updated: 3/18/2020
+# How to connect applications to Azure Database for MySQL
+
+This topic lists the connection string types that are supported by Azure Database for MySQL, together with templates and examples. You might have different parameters and settings in your connection string.
+
+- To obtain the certificate, see [How to configure SSL](./how-to-configure-ssl.md).
+- {your_host} = \<servername>.mysql.database.azure.com
+- {your_user}@{servername} = the userID format required for authentication. If you use only the userID, authentication will fail.
+
+## ADO.NET
+```ado.net
+Server={your_host};Port={your_port};Database={your_database};Uid={username@servername};Pwd={your_password};[SslMode=Required;]
+```
+
+In this example, the server name is `mydemoserver`, the database name is `wpdb`, the user name is `WPAdmin`, and the password is `mypassword!2`. As a result, the connection string should be:
+
+```ado.net
+Server= "mydemoserver.mysql.database.azure.com"; Port=3306; Database= "wpdb"; Uid= "WPAdmin@mydemoserver"; Pwd="mypassword!2"; SslMode=Required;
+```
+
+## JDBC
+```jdbc
+String url = String.format("jdbc:mysql://%s:%s/%s[?verifyServerCertificate=true&useSSL=true&requireSSL=true]", {your_host}, {your_port}, {your_database});
+myDbConn = DriverManager.getConnection(url, {username@servername}, {your_password});
+```
+
+## Node.js
+```node.js
+var conn = mysql.createConnection({host: {your_host}, user: {username@servername}, password: {your_password}, database: {your_database}, Port: {your_port}[, ssl: {ca: fs.readFileSync({ca-cert filename})}]});
+```
+
+## ODBC
+```odbc
+DRIVER={MySQL ODBC 5.3 UNICODE Driver};Server={your_host};Port={your_port};Database={your_database};Uid={username@servername};Pwd={your_password}; [sslca={ca-cert filename}; sslverify=1; Option=3;]
+```
+
+## PHP
+```php
+$con=mysqli_init(); [mysqli_ssl_set($con, NULL, NULL, {ca-cert filename}, NULL, NULL);] mysqli_real_connect($con, {your_host}, {username@servername}, {your_password}, {your_database}, {your_port});
+```
+
+## Python
+```python
+cnx = mysql.connector.connect(user={username@servername}, password={your_password}, host={your_host}, port={your_port}, database={your_database}[, ssl_ca={ca-cert filename}, ssl_verify_cert=true])
+```
+
+## Ruby
+```ruby
+client = Mysql2::Client.new(username: {username@servername}, password: {your_password}, database: {your_database}, host: {your_host}, port: {your_port}[, sslca:{ca-cert filename}, sslverify:false, sslcipher:'AES256-SHA'])
+```
+
+## Get the connection string details from the Azure portal
+In the [Azure portal](https://portal.azure.com), go to your Azure Database for MySQL server, and then click **Connection strings** to get the string list for your instance:
+
+The string provides details such as the driver, server, and other database connection parameters. Modify these examples to use your own parameters, such as database name, password, and so on. You can then use this string to connect to the server from your code and applications.
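+
+You can also generate the same templates from the command line; a sketch assuming the `az mysql server show-connection-string` command and the example values used in this article:
+
+```azurecli-interactive
+az mysql server show-connection-string --server-name mydemoserver --database-name quickstartdb --admin-user myadmin --admin-password yourpassword
+```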
+
+## Next steps
+- For more information about connection libraries, see [Concepts - Connection libraries](./concepts-connection-libraries.md).
mysql How To Create Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-create-manage-server-portal.md
+
+ Title: Manage server - Azure portal - Azure Database for MySQL
+description: Learn how to manage an Azure Database for MySQL server from the Azure portal.
+++++ Last updated : 1/26/2021++
+# Manage an Azure Database for MySQL server using the Azure portal
++
+This article shows you how to manage your Azure Database for MySQL servers. Management tasks include compute and storage scaling, admin password reset, and viewing server details.
+
+> [!NOTE]
+> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+>
+
+## Sign in
+
+Sign in to the [Azure portal](https://portal.azure.com).
+
+## Create a server
+
+Visit the [quickstart](quickstart-create-mysql-server-database-using-azure-portal.md) to learn how to create and get started with an Azure Database for MySQL server.
+
+## Scale compute and storage
+
+After server creation, you can scale between the General Purpose and Memory Optimized tiers as your needs change. You can also scale compute and memory by increasing or decreasing vCores. Storage can be scaled up (however, you cannot scale storage down).
+
+### Scale between General Purpose and Memory Optimized tiers
+
+You can scale from General Purpose to Memory Optimized and vice-versa. Changing to and from the Basic tier after server creation is not supported.
+
+1. Select your server in the Azure portal. Select **Pricing tier**, located in the **Settings** section.
+
+2. Select **General Purpose** or **Memory Optimized**, depending on what you are scaling to.
+
+ :::image type="content" source="./media/how-to-create-manage-server-portal/change-pricing-tier.png" alt-text="Screenshot of Azure portal to choose Basic, General Purpose, or Memory Optimized tier in Azure Database for MySQL":::
+
+ > [!NOTE]
+ > Changing tiers causes a server restart.
+
+3. Select **OK** to save changes.
+
+### Scale vCores up or down
+
+1. Select your server in the Azure portal. Select **Pricing tier**, located in the **Settings** section.
+
+2. Change the **vCore** setting by moving the slider to your desired value.
+
+ :::image type="content" source="./media/how-to-create-manage-server-portal/scaling-compute.png" alt-text="Screenshot of Azure portal to choose vCore option in Azure Database for MySQL":::
+
+ > [!NOTE]
+ > Scaling vCores causes a server restart.
+
+3. Select **OK** to save changes.
+
+### Scale storage up
+
+1. Select your server in the Azure portal. Select **Pricing tier**, located in the **Settings** section.
+
+2. Change the **Storage** setting by moving the slider up to your desired value.
+
+ :::image type="content" source="./media/how-to-create-manage-server-portal/scaling-storage.png" alt-text="Screenshot of Azure portal to choose Storage scale in Azure Database for MySQL":::
+
+ > [!NOTE]
+ > Storage cannot be scaled down.
+
+3. Select **OK** to save changes.
+
+## Update admin password
+
+You can change the administrator role's password using the Azure portal.
+
+1. Select your server in the Azure portal. In the **Overview** window select **Reset password**.
+
+ :::image type="content" source="./media/how-to-create-manage-server-portal/overview-reset-password.png" alt-text="Screenshot of Azure portal to reset the password in Azure Database for MySQL":::
+
+2. Enter a new password and confirm the password. The textbox will prompt you about password complexity requirements.
+
+ :::image type="content" source="./media/how-to-create-manage-server-portal/reset-password.png" alt-text="Screenshot of Azure portal to reset your password and save in Azure Database for MySQL":::
+
+3. Select **OK** to save the new password.
+
+
+> [!IMPORTANT]
+> Resetting the server admin password automatically resets the server admin privileges to the default. Consider resetting your server admin password if you accidentally revoked one or more of the server admin privileges.
+
+> [!NOTE]
+> Server admin user has the following privileges by default: SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER
+
+## Delete a server
+
+You can delete your server if you no longer need it.
+
+1. Select your server in the Azure portal. In the **Overview** window select **Delete**.
+
+ :::image type="content" source="./media/how-to-create-manage-server-portal/overview-delete.png" alt-text="Screenshot of Azure portal to Delete the server in Azure Database for MySQL":::
+
+2. Type the name of the server into the input box to confirm that this is the server you want to delete.
+
+ :::image type="content" source="./media/how-to-create-manage-server-portal/confirm-delete.png" alt-text="Screenshot of Azure portal to confirm the server delete in Azure Database for MySQL":::
+
+ > [!NOTE]
+ > Deleting a server is irreversible.
+
+3. Select **Delete**.
+
+## Next steps
+
+- Learn about [backups and server restore](how-to-restore-server-portal.md)
+- Learn about [tuning and monitoring options in Azure Database for MySQL](concepts-monitoring.md)
mysql How To Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-create-users.md
+
+ Title: How to create users for Azure Database for MySQL
+description: This article describes how to create new user accounts to interact with an Azure Database for MySQL server.
+++++ Last updated : 02/17/2022++
+# Create users in Azure Database for MySQL
++
+This article describes how to create users for Azure Database for MySQL.
+
+> [!NOTE]
+> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+
+When you first created your Azure Database for MySQL server, you provided a server admin user name and password. For more information, see this [Quickstart](quickstart-create-mysql-server-database-using-azure-portal.md). You can determine your server admin user name in the Azure portal.
+
+The server admin user has these privileges:
+
+ SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER
+
+After you create an Azure Database for MySQL server, you can use the first server admin account to create more users and grant admin access to them. You can also use the server admin account to create less privileged users that have access to individual database schemas.
+
+> [!NOTE]
+> The SUPER privilege and DBA role aren't supported. Review the [privileges](concepts-limits.md#privileges--data-manipulation-support) in the limitations article to understand what's not supported in the service.
+>
+> Password plugins like `validate_password` and `caching_sha2_password` aren't supported by the service.
+
+## Create a database
+
+1. Get the connection information and admin user name.
+ To connect to your database server, you need the full server name and admin sign-in credentials. You can easily find the server name and sign-in information on the server **Overview** page or on the **Properties** page in the Azure portal.
+
+2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as MySQL Workbench, mysql.exe, or HeidiSQL.
+
+> [!NOTE]
+> If you're not sure how to connect, see [connect and query data for Single Server](./connect-workbench.md) or [connect and query data for Flexible Server](../flexible-server/connect-workbench.md).
+
+3. Edit and run the following SQL code. Replace the placeholder value `db_user` with your intended new user name. Replace the placeholder value `testdb` with your database name.
+
+    This SQL code creates a new database named testdb. The following sections then create a new user in the MySQL service and grant all privileges for the new database schema (testdb.\*) to that user.
+
+ ```sql
+ CREATE DATABASE testdb;
+ ```
+
+## Create a non-admin user
+ Now that the database is created, you can create a non-admin user with the `CREATE USER` MySQL statement.
+ ```sql
+ CREATE USER 'db_user'@'%' IDENTIFIED BY 'StrongPassword!';
+
+ GRANT ALL PRIVILEGES ON testdb . * TO 'db_user'@'%';
+
+ FLUSH PRIVILEGES;
+ ```
+
+## Verify the user permissions
+Run the `SHOW GRANTS` MySQL statement to view the privileges allowed for the user **db_user** on the **testdb** database.
+
+ ```sql
+ USE testdb;
+
+ SHOW GRANTS FOR 'db_user'@'%';
+ ```
+
+## Connect to the database with new user
+Sign in to the server, specifying the designated database and using the new user name and password. This example shows the mysql command line. When you use this command, you'll be prompted for the user's password. Use your own server name, database name, and user name. The following table shows how to connect for Single Server and Flexible Server.
+
+| Server type | Usage |
+| -- | -- |
+| Single Server | ```mysql --host mydemoserver.mysql.database.azure.com --database testdb --user db_user@mydemoserver -p``` |
+| Flexible Server | ``` mysql --host mydemoserver.mysql.database.azure.com --database testdb --user db_user -p```|
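+
+To verify the new account programmatically, a minimal Python sketch follows, assuming the `mysql-connector-python` package and the hypothetical server `mydemoserver`; the password and server name are illustrative placeholders.
+
+```python
+# Connect as the new non-admin user and list its grants.
+# Server name and password are hypothetical placeholders.
+import mysql.connector
+
+cnx = mysql.connector.connect(
+    user="db_user@mydemoserver",   # Single Server requires the user@servername format
+    password="StrongPassword!",
+    host="mydemoserver.mysql.database.azure.com",
+    database="testdb",
+)
+cursor = cnx.cursor()
+cursor.execute("SHOW GRANTS FOR 'db_user'@'%'")
+for (grant,) in cursor.fetchall():
+    print(grant)
+cnx.close()
+```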
++
+## Limit privileges for user
+To restrict the type of operations a user can run on the database, you need to explicitly add the operations in the **GRANT** statement. See an example below:
+
+ ```sql
+ CREATE USER 'new_master_user'@'%' IDENTIFIED BY 'StrongPassword!';
+
+ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER ON *.* TO 'new_master_user'@'%' WITH GRANT OPTION;
+
+ FLUSH PRIVILEGES;
+ ```
+
+## About azure_superuser
+
+All Azure Database for MySQL servers are created with a user called "azure_superuser". This is a system account that Microsoft uses to manage the server and conduct monitoring, backups, and other regular maintenance. On-call engineers may also use this account to access the server during an incident with certificate authentication, and they must request access using just-in-time (JIT) processes.
+
+## Next steps
+
+For more information about user account management, see the MySQL product documentation for [User account management](https://dev.mysql.com/doc/refman/5.7/en/access-control.html), [GRANT syntax](https://dev.mysql.com/doc/refman/5.7/en/grant.html), and [Privileges](https://dev.mysql.com/doc/refman/5.7/en/privileges-provided.html).
mysql How To Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-encryption-cli.md
+
+ Title: Data encryption - Azure CLI - Azure Database for MySQL
+description: Learn how to set up and manage data encryption for your Azure Database for MySQL by using the Azure CLI.
+++++ Last updated : 03/30/2020 +++
+# Data encryption for Azure Database for MySQL by using the Azure CLI
++
+Learn how to use the Azure CLI to set up and manage data encryption for your Azure Database for MySQL.
+
+## Prerequisites for Azure CLI
+
+* You must have an Azure subscription and be an administrator on that subscription.
+* Create a key vault and a key to use for a customer-managed key. Also enable purge protection and soft delete on the key vault.
+
+ ```azurecli-interactive
+ az keyvault create -g <resource_group> -n <vault_name> --enable-soft-delete true --enable-purge-protection true
+ ```
+
+* In the created Azure Key Vault, create the key that will be used for the data encryption of the Azure Database for MySQL.
+
+ ```azurecli-interactive
+ az keyvault key create --name <key_name> -p software --vault-name <vault_name>
+ ```
+
+* To use an existing key vault as a customer-managed key store, it must have the following properties:
+
+ * [Soft delete](../../key-vault/general/soft-delete-overview.md)
+
+ ```azurecli-interactive
+    az resource update --id $(az keyvault show --name <key_vault_name> -o tsv | awk '{print $1}') --set properties.enableSoftDelete=true
+ ```
+
+ * [Purge protected](../../key-vault/general/soft-delete-overview.md#purge-protection)
+
+ ```azurecli-interactive
+ az keyvault update --name <key_vault_name> --resource-group <resource_group_name> --enable-purge-protection true
+ ```
+ * Retention days set to 90 days
+ ```azurecli-interactive
+ az keyvault update --name <key_vault_name> --resource-group <resource_group_name> --retention-days 90
+ ```
+
+* The key must have the following attributes to use as a customer-managed key:
+ * No expiration date
+ * Not disabled
+ * Perform **get**, **wrap**, **unwrap** operations
+ * recoverylevel attribute set to **Recoverable** (this requires soft-delete enabled with retention period set to 90 days)
+ * Purge protection enabled
+
+You can verify the above attributes of the key by using the following command:
+
+```azurecli-interactive
+az keyvault key show --vault-name <key_vault_name> -n <key_name>
+```
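+
+If you'd rather verify these attributes from code, a minimal Python sketch using the `azure-identity` and `azure-keyvault-keys` packages follows. These packages, the vault URL, and the key name are illustrative assumptions, not part of this article's CLI flow.
+
+```python
+# Inspect the key attributes required for customer-managed encryption.
+# Vault URL and key name are hypothetical placeholders.
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.keys import KeyClient
+
+client = KeyClient(
+    vault_url="https://YourVaultName.vault.azure.net",
+    credential=DefaultAzureCredential(),
+)
+key = client.get_key("YourKeyName")
+
+print("enabled:", key.properties.enabled)                # must be True
+print("expires_on:", key.properties.expires_on)          # must be None (no expiration)
+print("recovery level:", key.properties.recovery_level)  # should include "Recoverable"
+print("operations:", key.key_operations)                 # must include get, wrapKey, unwrapKey
+```
+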
+* The Azure Database for MySQL - Single Server should be on the General Purpose or Memory Optimized pricing tier and on general purpose storage v2. Before you proceed further, refer to the limitations for [data encryption with customer-managed keys](concepts-data-encryption-mysql.md#limitations).
+
+## Set the right permissions for key operations
+
+1. There are two ways of getting the managed identity for your Azure Database for MySQL.
+
+    ### Create a new Azure Database for MySQL server with a managed identity
+
+ ```azurecli-interactive
+    az mysql server create --name <server_name> -g <resource_group> --location <location> --storage-size <size> -u <user> -p <pwd> --backup-retention 7 --sku-name <sku_name> --geo-redundant-backup <Enabled/Disabled> --assign-identity
+ ```
+
+    ### Update an existing Azure Database for MySQL server to get a managed identity
+
+ ```azurecli-interactive
+ az mysql server update --name <server name> -g <resource_group> --assign-identity
+ ```
+
+2. Set the **Key permissions** (**Get**, **Wrap**, **Unwrap**) for the **Principal**, which is the name of the MySQL server.
+
+ ```azurecli-interactive
+    az keyvault set-policy --name <vault_name> -g <resource_group> --key-permissions get unwrapKey wrapKey --object-id <principal id of the server>
+ ```
+
+## Set data encryption for Azure Database for MySQL
+
+1. Enable Data encryption for the Azure Database for MySQL using the key created in the Azure Key Vault.
+
+ ```azurecli-interactive
+    az mysql server key create --name <server name> -g <resource_group> --kid <key url>
+ ```
+
+    Key url: `https://YourVaultName.vault.azure.net/keys/YourKeyName/01234567890123456789012345678901`
+
+## Using Data encryption for restore or replica servers
+
+After Azure Database for MySQL is encrypted with a customer's managed key stored in Key Vault, any newly created copy of the server is also encrypted. You can make this new copy either through a local or geo-restore operation, or through a replica (local/cross-region) operation. So for an encrypted MySQL server, you can use the following steps to create an encrypted restored server.
+
+### Creating a restored/replica server
+
+* [Create a restore server](how-to-restore-server-cli.md)
+* [Create a read replica server](how-to-read-replicas-cli.md)
+
+### Once the server is restored, revalidate data encryption on the restored server
+
+* Assign identity for the replica server
+```azurecli-interactive
+az mysql server update --name <server name> -g <resource_group> --assign-identity
+```
+
+* Get the existing key that has to be used for the restored/replica server
+
+```azurecli-interactive
+az mysql server key list --name '<server_name>' -g '<resource_group_name>'
+```
+
+* Set the policy for the new identity for the restored/replica server
+
+```azurecli-interactive
+az keyvault set-policy --name <keyvault> -g <resource_group> --key-permissions get unwrapKey wrapKey --object-id <principal id of the server returned by step 1>
+```
+
+* Re-validate the restored/replica server with the encryption key
+
+```azurecli-interactive
+az mysql server key create --name <server name> -g <resource_group> --kid <key url>
+```
+
+## Additional capabilities for the key used by Azure Database for MySQL
+
+### Get the Key used
+
+```azurecli-interactive
+az mysql server key show --name <server name> -g <resource_group> --kid <key url>
+```
+
+Key url: `https://YourVaultName.vault.azure.net/keys/YourKeyName/01234567890123456789012345678901`
+
+### List the Key used
+
+```azurecli-interactive
+az mysql server key list --name <server name> -g <resource_group>
+```
+
+### Drop the key being used
+
+```azurecli-interactive
+az mysql server key delete -g <resource_group> --kid <key url>
+```
+
+## Using an Azure Resource Manager template to enable data encryption
+
+Apart from the Azure portal, you can also enable data encryption on your Azure Database for MySQL server using Azure Resource Manager templates for new and existing servers.
+
+### For a new server
+
+Use one of the pre-created Azure Resource Manager templates to provision the server with data encryption enabled:
+[Example with Data encryption](https://github.com/Azure/azure-mysql/tree/master/arm-templates/ExampleWithDataEncryption)
+
+This Azure Resource Manager template creates an Azure Database for MySQL server and uses the **KeyVault** and **Key** passed as parameters to enable data encryption on the server.
+
+### For an existing server
+
+Additionally, you can use Azure Resource Manager templates to enable data encryption on your existing Azure Database for MySQL servers.
+
+* Pass the Resource ID of the Azure Key Vault key that you copied earlier under the `Uri` property in the properties object.
+
+* Use *2020-01-01-preview* as the API version.
+
+```json
+{
+ "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string"
+ },
+ "serverName": {
+ "type": "string"
+ },
+ "keyVaultName": {
+ "type": "string",
+ "metadata": {
+ "description": "Key vault name where the key to use is stored"
+ }
+ },
+ "keyVaultResourceGroupName": {
+ "type": "string",
+ "metadata": {
+ "description": "Key vault resource group name where it is stored"
+ }
+ },
+ "keyName": {
+ "type": "string",
+ "metadata": {
+ "description": "Key name in the key vault to use as encryption protector"
+ }
+ },
+ "keyVersion": {
+ "type": "string",
+ "metadata": {
+ "description": "Version of the key in the key vault to use as encryption protector"
+ }
+ }
+ },
+ "variables": {
+ "serverKeyName": "[concat(parameters('keyVaultName'), '_', parameters('keyName'), '_', parameters('keyVersion'))]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.DBforMySQL/servers",
+ "apiVersion": "2017-12-01",
+ "kind": "",
+ "location": "[parameters('location')]",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "name": "[parameters('serverName')]",
+ "properties": {
+ }
+ },
+ {
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2019-05-01",
+ "name": "addAccessPolicy",
+ "resourceGroup": "[parameters('keyVaultResourceGroupName')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.DBforMySQL/servers', parameters('serverName'))]"
+ ],
+ "properties": {
+ "mode": "Incremental",
+ "template": {
+ "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.KeyVault/vaults/accessPolicies",
+ "name": "[concat(parameters('keyVaultName'), '/add')]",
+ "apiVersion": "2018-02-14-preview",
+ "properties": {
+ "accessPolicies": [
+ {
+ "tenantId": "[subscription().tenantId]",
+ "objectId": "[reference(resourceId('Microsoft.DBforMySQL/servers/', parameters('serverName')), '2017-12-01', 'Full').identity.principalId]",
+ "permissions": {
+ "keys": [
+ "get",
+ "wrapKey",
+ "unwrapKey"
+ ]
+ }
+ }
+ ]
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "name": "[concat(parameters('serverName'), '/', variables('serverKeyName'))]",
+ "type": "Microsoft.DBforMySQL/servers/keys",
+ "apiVersion": "2020-01-01-preview",
+ "dependsOn": [
+ "addAccessPolicy",
+ "[resourceId('Microsoft.DBforMySQL/servers', parameters('serverName'))]"
+ ],
+ "properties": {
+ "serverKeyType": "AzureKeyVault",
+ "uri": "[concat(reference(resourceId(parameters('keyVaultResourceGroupName'), 'Microsoft.KeyVault/vaults/', parameters('keyVaultName')), '2018-02-14-preview', 'Full').properties.vaultUri, 'keys/', parameters('keyName'), '/', parameters('keyVersion'))]"
+ }
+ }
+ ]
+}
+
+```
+
+## Next steps
+
+* [Validating data encryption for Azure Database for MySQL](how-to-data-encryption-validation.md)
+* [Troubleshoot data encryption in Azure Database for MySQL](how-to-data-encryption-troubleshoot.md)
+* [Data encryption with customer-managed key concepts](concepts-data-encryption-mysql.md).
+
mysql How To Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-encryption-portal.md
+
+ Title: Data encryption - Azure portal - Azure Database for MySQL
+description: Learn how to set up and manage data encryption for your Azure Database for MySQL by using the Azure portal.
+++++ Last updated : 01/13/2020 +++
+# Data encryption for Azure Database for MySQL by using the Azure portal
++
+Learn how to use the Azure portal to set up and manage data encryption for your Azure Database for MySQL.
+
+## Prerequisites for Azure CLI
+
+* You must have an Azure subscription and be an administrator on that subscription.
+* In Azure Key Vault, create a key vault and a key to use for a customer-managed key.
+* The key vault must have the following properties to use as a customer-managed key:
+ * [Soft delete](../../key-vault/general/soft-delete-overview.md)
+
+ ```azurecli-interactive
+    az resource update --id $(az keyvault show --name <key_vault_name> -o tsv | awk '{print $1}') --set properties.enableSoftDelete=true
+ ```
+
+ * [Purge protected](../../key-vault/general/soft-delete-overview.md#purge-protection)
+
+ ```azurecli-interactive
+ az keyvault update --name <key_vault_name> --resource-group <resource_group_name> --enable-purge-protection true
+ ```
+ * Retention days set to 90 days
+
+ ```azurecli-interactive
+ az keyvault update --name <key_vault_name> --resource-group <resource_group_name> --retention-days 90
+ ```
+
+* The key must have the following attributes to use as a customer-managed key:
+ * No expiration date
+ * Not disabled
+ * Perform **get**, **wrap**, **unwrap** operations
+ * recoverylevel attribute set to **Recoverable** (this requires soft-delete enabled with retention period set to 90 days)
+ * Purge protection enabled
+
+ You can verify the above attributes of the key by using the following command:
+
+ ```azurecli-interactive
+ az keyvault key show --vault-name <key_vault_name> -n <key_name>
+ ```
+
+* The Azure Database for MySQL - Single Server should be on the General Purpose or Memory Optimized pricing tier and on general purpose storage v2. Before you proceed further, refer to the limitations for [data encryption with customer-managed keys](concepts-data-encryption-mysql.md#limitations).
+
+## Set the right permissions for key operations
+
+1. In Key Vault, select **Access policies** > **Add Access Policy**.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-access-policy-overview.png" alt-text="Screenshot of Key Vault, with Access policies and Add Access Policy highlighted":::
+
+2. Select **Key permissions**, and select **Get**, **Wrap**, **Unwrap**, and the **Principal**, which is the name of the MySQL server. If your server principal can't be found in the list of existing principals, you need to register it. You're prompted to register your server principal the first time you attempt to set up data encryption and it fails.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/access-policy-wrap-unwrap.png" alt-text="Access policy overview":::
+
+3. Select **Save**.
+
+## Set data encryption for Azure Database for MySQL
+
+1. In Azure Database for MySQL, select **Data encryption** to set up the customer-managed key.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/data-encryption-overview.png" alt-text="Screenshot of Azure Database for MySQL, with Data encryption highlighted":::
+
+2. You can either select a key vault and key pair, or enter a key identifier.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/setting-data-encryption.png" alt-text="Screenshot of Azure Database for MySQL, with data encryption options highlighted":::
+
+3. Select **Save**.
+
+4. To ensure all files (including temp files) are fully encrypted, restart the server.
+
+## Using Data encryption for restore or replica servers
+
+After Azure Database for MySQL is encrypted with a customer's managed key stored in Key Vault, any newly created copy of the server is also encrypted. You can make this new copy either through a local or geo-restore operation, or through a replica (local/cross-region) operation. So for an encrypted MySQL server, you can use the following steps to create an encrypted restored server.
+
+1. On your server, select **Overview** > **Restore**.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-restore.png" alt-text="Screenshot of Azure Database for MySQL, with Overview and Restore highlighted":::
+
+ Or for a replication-enabled server, under the **Settings** heading, select **Replication**.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/mysql-replica.png" alt-text="Screenshot of Azure Database for MySQL, with Replication highlighted":::
+
+2. After the restore operation is complete, the new server created is encrypted with the primary server's key. However, the features and options on the server are disabled, and the server is inaccessible. This prevents any data manipulation, because the new server's identity hasn't yet been given permission to access the key vault.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-restore-data-encryption.png" alt-text="Screenshot of Azure Database for MySQL, with Inaccessible status highlighted":::
+
+3. To make the server accessible, revalidate the key on the restored server. Select **Data encryption** > **Revalidate key**.
+
+ > [!NOTE]
+    > The first attempt to revalidate will fail, because the new server's service principal needs to be given access to the key vault. To generate the service principal, select **Revalidate key**; this shows an error but generates the service principal. Thereafter, refer to [these steps](#set-the-right-permissions-for-key-operations) earlier in this article.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-revalidate-data-encryption.png" alt-text="Screenshot of Azure Database for MySQL, with revalidation step highlighted":::
+
+ You will have to give the key vault access to the new server. For more information, see [Assign a Key Vault access policy](../../key-vault/general/assign-access-policy.md?tabs=azure-portal).
+
+4. After registering the service principal, revalidate the key again, and the server resumes its normal functionality.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/restore-successful.png" alt-text="Screenshot of Azure Database for MySQL, showing restored functionality":::
+
+## Next steps
+
+* [Validating data encryption for Azure Database for MySQL](how-to-data-encryption-validation.md)
+* [Troubleshoot data encryption in Azure Database for MySQL](how-to-data-encryption-troubleshoot.md)
+* [Data encryption with customer-managed key concepts](concepts-data-encryption-mysql.md).
mysql How To Data Encryption Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-encryption-troubleshoot.md
+
+ Title: Troubleshoot data encryption - Azure Database for MySQL
+description: Learn how to troubleshoot data encryption in Azure Database for MySQL
+++++ Last updated : 02/13/2020++
+# Troubleshoot data encryption in Azure Database for MySQL
++
+This article describes how to identify and resolve common issues that can occur in Azure Database for MySQL when configured with data encryption using a customer-managed key.
+
+## Introduction
+
+When you configure data encryption to use a customer-managed key in Azure Key Vault, servers require continuous access to the key. If the server loses access to the customer-managed key in Azure Key Vault, it will deny all connections, return the appropriate error message, and change its state to ***Inaccessible*** in the Azure portal.
+
+If you no longer need an inaccessible Azure Database for MySQL server, you can delete it to stop incurring costs. No other actions on the server are permitted until access to the key vault has been restored and the server is available. It's also not possible to change the data encryption option from `Yes` (customer-managed) to `No` (service-managed) on an inaccessible server when it's encrypted with a customer-managed key. You'll have to revalidate the key manually before the server is accessible again. This action is necessary to protect the data from unauthorized access while permissions to the customer-managed key are revoked.
+
+## Common errors that cause the server to become inaccessible
+
+The following misconfigurations cause most issues with data encryption that uses Azure Key Vault keys:
+
+- The key vault is unavailable or doesn't exist:
+ - The key vault was accidentally deleted.
+ - An intermittent network error causes the key vault to be unavailable.
+
+- You don't have permissions to access the key vault or the key doesn't exist:
+ - The key expired or was accidentally deleted or disabled.
+ - The managed identity of the Azure Database for MySQL instance was accidentally deleted.
+ - The managed identity of the Azure Database for MySQL instance has insufficient key permissions. For example, the permissions don't include Get, Wrap, and Unwrap.
+ - The managed identity permissions to the Azure Database for MySQL instance were revoked or deleted.
+
+## Identify and resolve common errors
+
+### Errors on the key vault
+
+#### Disabled key vault
+
+- `AzureKeyVaultKeyDisabledMessage`
+- **Explanation**: The operation couldn't be completed on the server because the Azure Key Vault key is disabled.
+
+#### Missing key vault permissions
+
+- `AzureKeyVaultMissingPermissionsMessage`
+- **Explanation**: The server doesn't have the required Get, Wrap, and Unwrap permissions to Azure Key Vault. Grant any missing permissions to the service principal with the ID shown in the error.
+
+### Mitigation
+
+- Confirm that the customer-managed key is present in the key vault.
+- Identify the key vault, then go to the key vault in the Azure portal.
+- Ensure that the key URI identifies a key that is present.
+
+## Next steps
+
+[Use the Azure portal to set up data encryption with a customer-managed key on Azure Database for MySQL](how-to-data-encryption-portal.md)
mysql How To Data Encryption Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-encryption-validation.md
+
+ Title: How to ensure validation of the Azure Database for MySQL - Data encryption
+description: Learn how to validate the encryption of the Azure Database for MySQL - Data encryption using the customers managed key.
+++++ Last updated : 04/28/2020++
+# Validating data encryption for Azure Database for MySQL
++
+This article helps you validate that data encryption using a customer-managed key for Azure Database for MySQL is working as expected.
+
+## Check the encryption status
+
+### From portal
+
+1. If you want to verify that the customer's key is used for encryption, follow these steps:
+
+ * In the Azure portal, navigate to the **Azure Key Vault** -> **Keys**
+ * Select the key used for server encryption.
+ * Set the status of the key **Enabled** to **No**.
+
+    After some time (**~15 min**), the Azure Database for MySQL server **Status** should be **Inaccessible**. Any I/O operation done against the server will fail, which validates that the server is indeed encrypted with the customer's key and that the key is currently not valid.
+
+    To make the server **Available** again, revalidate the key:
+
+    * Set the status of the key **Enabled** in the Key Vault back to **Yes**.
+ * On the server **Data Encryption**, select **Revalidate key**.
+ * After the revalidation of the key is successful, the server **Status** changes to **Available**.
+
+2. In the Azure portal, if you can confirm that the encryption key is set, then data is encrypted using the customer's key shown in the Azure portal.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/byok-validate.png" alt-text="Access policy overview":::
+
+### From CLI
+
+1. You can use the Azure CLI to validate the key resources being used for the Azure Database for MySQL server.
+
+ ```azurecli-interactive
+ az mysql server key list --name '<server_name>' -g '<resource_group_name>'
+ ```
+
+    For a server without data encryption set, this command returns an empty set, `[]`.
+
+### Azure audit reports
+
+You can also review [Audit Reports](https://servicetrust.microsoft.com), which provide information about compliance with data protection standards and regulatory requirements.
+
+## Next steps
+
+To learn more about data encryption, see [Azure Database for MySQL data encryption with customer-managed key](concepts-data-encryption-mysql.md).
mysql How To Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-in-replication.md
+
+ Title: Configure Data-in Replication - Azure Database for MySQL
+description: This article describes how to set up Data-in Replication for Azure Database for MySQL.
+++++ Last updated : 04/08/2021++
+# How to configure Azure Database for MySQL Data-in Replication
++
+This article describes how to set up [Data-in Replication](concepts-data-in-replication.md) in Azure Database for MySQL by configuring the source and replica servers. This article assumes that you have some prior experience with MySQL servers and databases.
+
+> [!NOTE]
+> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+>
+
+To create a replica in the Azure Database for MySQL service, [Data-in Replication](concepts-data-in-replication.md) synchronizes data from a source MySQL server on-premises, in virtual machines (VMs), or in cloud database services. Data-in Replication is based on the binary log (binlog) file position-based or GTID-based replication native to MySQL. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
+
+Review the [limitations and requirements](concepts-data-in-replication.md#limitations-and-considerations) of Data-in replication before performing the steps in this article.
+
+## Create an Azure Database for MySQL Single Server instance to use as a replica
+
+1. Create a new instance of Azure Database for MySQL Single Server (for example, `replica.mysql.database.azure.com`). Refer to [Create an Azure Database for MySQL server by using the Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md) for server creation. This server is the "replica" server for Data-in Replication.
+
+ > [!IMPORTANT]
+ > The Azure Database for MySQL server must be created in the General Purpose or Memory Optimized pricing tiers as data-in replication is only supported in these tiers.
+ > GTID is supported on versions 5.7 and 8.0 and only on servers that support storage up to 16 TB (General purpose storage v2).
+
+2. Create the same user accounts and corresponding privileges.
+
+ User accounts aren't replicated from the source server to the replica server. If you plan on providing users with access to the replica server, you need to create all accounts and corresponding privileges manually on this newly created Azure Database for MySQL server.
+
+3. Add the source server's IP address to the replica's firewall rules.
+
+ Update firewall rules using the [Azure portal](how-to-manage-firewall-using-portal.md) or [Azure CLI](how-to-manage-firewall-using-cli.md).
+
+4. **Optional** - If you wish to use [GTID-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html) from the source server to the Azure Database for MySQL replica server, you'll need to enable the following server parameters on the Azure Database for MySQL server as shown in the portal image below:
+
+ :::image type="content" source="./media/how-to-data-in-replication/enable-gtid.png" alt-text="Enable GTID on Azure Database for MySQL server":::
+
+## Configure the source MySQL server
+
+The following steps prepare and configure the MySQL server hosted on-premises, in a virtual machine, or in a database service hosted by other cloud providers for Data-in Replication. This server is the "source" for Data-in Replication.
+
+1. Review the [source server requirements](concepts-data-in-replication.md#requirements) before proceeding.
+
+2. Ensure that the source server allows both inbound and outbound traffic on port 3306, and that it has a **public IP address**, the DNS is publicly accessible, or that it has a fully qualified domain name (FQDN).
+
+ Test connectivity to the source server by attempting to connect from a tool such as the MySQL command line hosted on another machine or from the [Azure Cloud Shell](../../cloud-shell/overview.md) available in the Azure portal.
+
+ If your organization has strict security policies and won't allow all IP addresses on the source server to enable communication from Azure to your source server, you can potentially use the command below to determine the IP address of your MySQL server.
+
+ 1. Sign in to your Azure Database for MySQL server using a tool such as the MySQL command line.
+
+ 2. Execute the following query.
+
+ ```bash
+ mysql> SELECT @@global.redirect_server_host;
+ ```
+
+ Below is some sample output:
+
+ ```bash
+        +------------------------------------------------------------+
+        | @@global.redirect_server_host                              |
+        +------------------------------------------------------------+
+        | e299ae56f000.tr1830.westus1-a.worker.database.windows.net  |
+        +------------------------------------------------------------+
+ ```
+
+ 3. Exit from the MySQL command line.
+ 4. To get the IP address, execute the following command in the ping utility:
+
+ ```bash
+ ping <output of step 2b>
+ ```
+
+ For example:
+
+ ```bash
+ C:\Users\testuser> ping e299ae56f000.tr1830.westus1-a.worker.database.windows.net
+ Pinging tr1830.westus1-a.worker.database.windows.net (**11.11.111.111**) 56(84) bytes of data.
+ ```
+
+ 5. Configure your source server's firewall rules to include the previous step's outputted IP address on port 3306.
+
+    > [!NOTE]
+    > This IP address may change due to maintenance or deployment operations. This method of connectivity is only for customers who can't afford to allow all IP addresses on port 3306. A scripted version of these steps appears after this note.
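+
+    A minimal Python sketch of the steps above follows (it queries the gateway host, then resolves it to an IP address); the host, user, and password are hypothetical placeholders, and the `mysql-connector-python` package is assumed.
+
+    ```python
+    # Query the gateway host and resolve it to an IP address
+    # (equivalent to steps 2b-4 above). Credentials are placeholders.
+    import socket
+    import mysql.connector
+
+    cnx = mysql.connector.connect(
+        user="admin@replica",
+        password="AdminPwd!",
+        host="replica.mysql.database.azure.com",
+    )
+    cursor = cnx.cursor()
+    cursor.execute("SELECT @@global.redirect_server_host")
+    redirect_host = cursor.fetchone()[0]
+    cnx.close()
+
+    print(redirect_host, "->", socket.gethostbyname(redirect_host))
+    ```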
+
+3. Turn on binary logging.
+
+ Check to see if binary logging has been enabled on the source by running the following command:
+
+ ```sql
+ SHOW VARIABLES LIKE 'log_bin';
+ ```
+
+ If the variable [`log_bin`](https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_log_bin) is returned with the value "ON", binary logging is enabled on your server.
+
+ If `log_bin` is returned with the value "OFF" and your source server is running on-premises or on virtual machines where you can access the configuration file (my.cnf), you can follow the steps below:
+ 1. Locate your MySQL configuration file (my.cnf) in the source server. For example: /etc/my.cnf
+ 2. Open the configuration file to edit it and locate **mysqld** section in the file.
+ 3. In the mysqld section, add following line:
+
+ ```bash
+ log-bin=mysql-bin.log
+ ```
+
+ 4. Restart the MySQL source server for the changes to take effect.
+ 5. After the server is restarted, verify that binary logging is enabled by running the same query as before:
+
+ ```sql
+ SHOW VARIABLES LIKE 'log_bin';
+ ```
+
+4. Configure the source server settings.
+
+ Data-in Replication requires the parameter `lower_case_table_names` to be consistent between the source and replica servers. This parameter is 1 by default in Azure Database for MySQL.
+
+ ```sql
+ SET GLOBAL lower_case_table_names = 1;
+ ```
+
+ **Optional** - If you wish to use [GTID-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html), you'll need to check if GTID is enabled on the source server. You can execute following command against your source MySQL server to see if gtid_mode is ON.
+
+ ```sql
+ show variables like 'gtid_mode';
+ ```
+
+ >[!IMPORTANT]
+> All servers have gtid_mode set to the default value OFF. You don't need to enable GTID on the source MySQL server specifically to set up Data-in Replication. If GTID is already enabled on the source server, you can optionally use GTID-based replication to set up Data-in Replication with Azure Database for MySQL Single Server. You can use file-based replication to set up Data-in Replication for all servers regardless of the gtid_mode configuration on the source server.
+
+5. Create a new replication role and set up permission.
+
+ Create a user account on the source server that is configured with replication privileges. This can be done through SQL commands or a tool such as MySQL Workbench. Consider whether you plan on replicating with SSL, as this will need to be specified when creating the user. Refer to the MySQL documentation to understand how to [add user accounts](https://dev.mysql.com/doc/refman/5.7/en/user-names.html) on your source server.
+
+ In the following commands, the new replication role created can access the source from any machine, not just the machine that hosts the source itself. This is done by specifying "syncuser@'%'" in the create user command. See the MySQL documentation to learn more about [specifying account names](https://dev.mysql.com/doc/refman/5.7/en/account-names.html).
+
+ **SQL Command**
+
+ *Replication with SSL*
+
+ To require SSL for all user connections, use the following command to create a user:
+
+ ```sql
+ CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';
+    GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%' REQUIRE SSL;
+ ```
+
+ *Replication without SSL*
+
+ If SSL isn't required for all connections, use the following command to create a user:
+
+ ```sql
+ CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';
+    GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%';
+ ```
+
+ **MySQL Workbench**
+
+ To create the replication role in MySQL Workbench, open the **Users and Privileges** panel from the **Management** panel, and then select **Add Account**.
+
+ :::image type="content" source="./media/how-to-data-in-replication/users-privileges.png" alt-text="Users and Privileges":::
+
+ Type in the username into the **Login Name** field.
+
+ :::image type="content" source="./media/how-to-data-in-replication/sync-user.png" alt-text="Sync user":::
+
+ Select the **Administrative Roles** panel and then select **Replication Slave** from the list of **Global Privileges**. Then select **Apply** to create the replication role.
+
+ :::image type="content" source="./media/how-to-data-in-replication/replication-slave.png" alt-text="Replication Slave":::
+
+6. Set the source server to read-only mode.
+
+ Before starting to dump out the database, the server needs to be placed in read-only mode. While in read-only mode, the source will be unable to process any write transactions. Evaluate the impact to your business and schedule the read-only window in an off-peak time if necessary.
+
+ ```sql
+ FLUSH TABLES WITH READ LOCK;
+ SET GLOBAL read_only = ON;
+ ```
+
+7. Get binary log file name and offset.
+
+ Run the [`show master status`](https://dev.mysql.com/doc/refman/5.7/en/show-master-status.html) command to determine the current binary log file name and offset.
+
+ ```sql
+ show master status;
+ ```
+ The results should appear similar to the following. Make sure to note the binary file name for use in later steps.
+
+ :::image type="content" source="./media/how-to-data-in-replication/master-status.png" alt-text="Master Status Results":::
+
+## Dump and restore the source server
+
+1. Determine which databases and tables you want to replicate into Azure Database for MySQL and perform the dump from the source server.
+
+ You can use mysqldump to dump databases from your primary server. For details, refer to [Dump & Restore](concepts-migrate-dump-restore.md). It's unnecessary to dump the MySQL library and test library.
+
+2. **Optional** - If you wish to use [gtid-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html), you'll need to identify the GTID of the last transaction executed at the primary. You can use the following command to note the GTID of the last transaction executed on the master server.
+
+ ```sql
+ show global variables like 'gtid_executed';
+ ```
+
+3. Set source server to read/write mode.
+
+ After the database has been dumped, change the source MySQL server back to read/write mode.
+
+ ```sql
+ SET GLOBAL read_only = OFF;
+ UNLOCK TABLES;
+ ```
+
+4. Restore dump file to new server.
+
+ Restore the dump file to the server created in the Azure Database for MySQL service. Refer to [Dump & Restore](concepts-migrate-dump-restore.md) for how to restore a dump file to a MySQL server. If the dump file is large, upload it to a virtual machine in Azure within the same region as your replica server. Restore it to the Azure Database for MySQL server from the virtual machine.
+
+5. **Optional** - Note the GTID of the restored server on Azure Database for MySQL to ensure it's the same as on the primary server. You can use the following command to note the value of gtid_purged on the Azure Database for MySQL replica server. For GTID-based replication to work, the value of gtid_purged should be the same as the gtid_executed value noted on the master in step 2. A scripted comparison follows the query below.
+
+ ```sql
+ show global variables like 'gtid_purged';
+ ```
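+
+    If you're scripting the migration, a minimal Python sketch (hypothetical host names and credentials; the `mysql-connector-python` package is assumed) can compare the two values before you link the servers:
+
+    ```python
+    # Compare gtid_executed on the source with gtid_purged on the replica.
+    # Hosts, users, and passwords are hypothetical placeholders.
+    import mysql.connector
+
+    def global_var(host, user, password, name):
+        cnx = mysql.connector.connect(host=host, user=user, password=password)
+        cursor = cnx.cursor()
+        cursor.execute("SHOW GLOBAL VARIABLES LIKE %s", (name,))
+        value = cursor.fetchone()[1]
+        cnx.close()
+        return value
+
+    source_gtid = global_var("master.companya.com", "syncuser", "P@ssword!", "gtid_executed")
+    replica_gtid = global_var("replica.mysql.database.azure.com", "admin@replica", "AdminPwd!", "gtid_purged")
+    print("match" if source_gtid == replica_gtid else "mismatch", source_gtid, replica_gtid)
+    ```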
+
+## Link source and replica servers to start Data-in Replication
+
+1. Set the source server.
+
+ All Data-in Replication functions are done by stored procedures. You can find all procedures at [Data-in Replication Stored Procedures](./reference-stored-procedures.md). The stored procedures can be run in the MySQL shell or MySQL Workbench.
+
+    To link two servers and start replication, log in to the target replica server in the Azure Database for MySQL service and set the external instance as the source server. This is done by using the `mysql.az_replication_change_master` stored procedure on the Azure Database for MySQL server.
+
+ ```sql
+ CALL mysql.az_replication_change_master('<master_host>', '<master_user>', '<master_password>', <master_port>, '<master_log_file>', <master_log_pos>, '<master_ssl_ca>');
+ ```
+
+    **Optional** - If you wish to use [gtid-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html), you'll need to use the following command to link the two servers:
+
+ ```sql
+ call mysql.az_replication_change_master_with_gtid('<master_host>', '<master_user>', '<master_password>', <master_port>, '<master_ssl_ca>');
+ ```
+
+ - master_host: hostname of the source server
+ - master_user: username for the source server
+ - master_password: password for the source server
+ - master_port: port number on which source server is listening for connections. (3306 is the default port on which MySQL is listening)
+ - master_log_file: binary log file name from running `show master status`
+ - master_log_pos: binary log position from running `show master status`
+    - master_ssl_ca: CA certificate's context. If not using SSL, pass in an empty string.
+
+ It's recommended to pass this parameter in as a variable. For more information, see the following examples.
+
+ > [!NOTE]
+ > If the source server is hosted in an Azure VM, set "Allow access to Azure services" to "ON" to allow the source and replica servers to communicate with each other. This setting can be changed from the **Connection security** options. For more information, see [Manage firewall rules using the portal](how-to-manage-firewall-using-portal.md) .
+
+ **Examples**
+
+ *Replication with SSL*
+
+ The variable `@cert` is created by running the following MySQL commands:
+
+ ```sql
+    SET @cert = '-----BEGIN CERTIFICATE-----
+    PLACE YOUR PUBLIC KEY CERTIFICATE''S CONTENT HERE
+    -----END CERTIFICATE-----';
+ ```
+
+ Replication with SSL is set up between a source server hosted in the domain "companya.com" and a replica server hosted in Azure Database for MySQL. This stored procedure is run on the replica.
+
+ ```sql
+ CALL mysql.az_replication_change_master('master.companya.com', 'syncuser', 'P@ssword!', 3306, 'mysql-bin.000002', 120, @cert);
+ ```
+
+ *Replication without SSL*
+
+ Replication without SSL is set up between a source server hosted in the domain "companya.com" and a replica server hosted in Azure Database for MySQL. This stored procedure is run on the replica.
+
+ ```sql
+ CALL mysql.az_replication_change_master('master.companya.com', 'syncuser', 'P@ssword!', 3306, 'mysql-bin.000002', 120, '');
+ ```
+
+2. Set up filtering.
+
+ If you want to skip replicating some tables from your master, update the `replicate_wild_ignore_table` server parameter on your replica server. You can provide more than one table pattern using a comma-separated list.
+
+ Review the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#option_mysqld_replicate-wild-ignore-table) to learn more about this parameter.
+
+ To update the parameter, you can use the [Azure portal](how-to-server-parameters.md) or [Azure CLI](how-to-configure-server-parameters-using-cli.md).
+
+3. Start replication.
+
+ Call the `mysql.az_replication_start` stored procedure to start replication.
+
+ ```sql
+ CALL mysql.az_replication_start;
+ ```
+
+4. Check replication status.
+
+ Call the [`show slave status`](https://dev.mysql.com/doc/refman/5.7/en/show-slave-status.html) command on the replica server to view the replication status.
+
+ ```sql
+ show slave status;
+ ```
+
+    If the state of `Slave_IO_Running` and `Slave_SQL_Running` is "yes" and the value of `Seconds_Behind_Master` is "0", replication is working well. `Seconds_Behind_Master` indicates how far behind the replica is. If the value isn't "0", the replica is still processing updates. A scripted health check follows below.
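+
+    A minimal Python sketch for polling this from a script (hypothetical replica host and credentials; the `mysql-connector-python` package is assumed):
+
+    ```python
+    # Poll SHOW SLAVE STATUS on the replica and report replication health.
+    # Host, user, and password are hypothetical placeholders.
+    import mysql.connector
+
+    cnx = mysql.connector.connect(
+        host="replica.mysql.database.azure.com",
+        user="admin@replica",
+        password="AdminPwd!",
+    )
+    cursor = cnx.cursor(dictionary=True)
+    cursor.execute("SHOW SLAVE STATUS")
+    status = cursor.fetchone()
+    cnx.close()
+
+    if status is None:
+        print("Replication isn't configured on this server.")
+    else:
+        print("IO thread:", status["Slave_IO_Running"])
+        print("SQL thread:", status["Slave_SQL_Running"])
+        print("Seconds behind master:", status["Seconds_Behind_Master"])
+    ```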
+
+## Other useful stored procedures for Data-in Replication operations
+
+### Stop replication
+
+To stop replication between the source and replica server, use the following stored procedure:
+
+```sql
+CALL mysql.az_replication_stop;
+```
+
+### Remove replication relationship
+
+To remove the relationship between source and replica server, use the following stored procedure:
+
+```sql
+CALL mysql.az_replication_remove_master;
+```
+
+### Skip replication error
+
+To skip a replication error and allow replication to continue, use the following stored procedure:
+
+```sql
+CALL mysql.az_replication_skip_counter;
+```
+
+ **Optional** - If you wish to use [gtid-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html), use the following stored procedure to skip a transaction:
+
+```sql
+call mysql.az_replication_skip_gtid_transaction('<transaction_gtid>');
+```
+
+The procedure can skip the transaction for the given GTID. If the GTID format isn't right or the GTID transaction has already been executed, the procedure will fail to execute. The GTID for a transaction can be determined by parsing the binary log to check the transaction events. MySQL provides the [mysqlbinlog](https://dev.mysql.com/doc/refman/5.7/en/mysqlbinlog.html) utility to parse binary logs and display their contents in text format, which can be used to identify the GTID of the transaction.
+
+>[!Important]
+>This procedure can only be used to skip one transaction; it can't be used to skip a GTID set or to set gtid_purged.
+
+To skip the next transaction after the current replication position, use the following command to identify the GTID of the next transaction:
+
+```sql
+SHOW BINLOG EVENTS [IN 'log_name'] [FROM pos][LIMIT [offset,] row_count]
+```
+
+ :::image type="content" source="./media/how-to-data-in-replication/show-binary-log.png" alt-text="Show binary log results":::
+
+## Next steps
+
+- Learn more about [Data-in Replication](concepts-data-in-replication.md) for Azure Database for MySQL.
mysql How To Decide On Right Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-decide-on-right-migration-tools.md
+
+ Title: "Select the right tools for migration to Azure Database for MySQL"
+description: "This topic provides a decision table which helps customers in picking the right tools for migrating into Azure Database for MySQL"
+++++++ Last updated : 10/12/2021++
+# Select the right tools for migration to Azure Database for MySQL
++
+## Overview
+
+Migrations are multi-step projects that can be challenging to execute. Migrating database servers across platforms involves more than data and schema migration. There are also several other components to move, such as server configuration parameters, networking, and access control rules. These are required to ensure that the functionality of the database server on the new target platform mimics the source.
+
+For detailed information and use cases about migrating databases to Azure Database for MySQL, you can refer to the [Database Migration Guide](../migrate/mysql-on-premises-azure-db/01-mysql-migration-guide-intro.md). This document provides pointers that will help you successfully plan and execute a MySQL migration to Azure.
+
+In general, migrations can be categorized as either offline or online.
+
+- With an offline migration, the source server is taken offline and a dump and restore of the databases is performed on the target server.
+
+- With an online migration (migration with minimal downtime), the source server allows updates, and the migration solution will take care of replicating the ongoing changes between the source and target server along with the initial dump and restore on the target.
+
+If your application can afford some downtime, offline migrations are always the preferred choice, as they are simple and easy to execute. However, if your application can only afford minimal downtime, an online migration is the best choice. Migrations of the majority of OLTP systems, such as payment processing and e-commerce, fall into this category.
+
+## Decision table
+
+To help you with selecting the right tools for migrating to Azure Database for MySQL, consider the detail in the following table.
+
+| Scenarios | Recommended Tools | Links |
+|-|||
+| Offline Migrations to move databases >= 1 TB | Dump and Restore using **MyDumper/MyLoader** + High Compute VM | [Migrate large databases to Azure Database for MySQL using mydumper/myloader](concepts-migrate-mydumper-myloader.md) <br><br> [Best Practices for migrating large databases to Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699)|
+| Offline Migrations to move databases < 1 TB | If network bandwidth between source and target is good (for example, a high-speed ExpressRoute connection), use **Azure DMS** (Azure Database Migration Service) <br><br> **-OR-** <br><br> If you have low network bandwidth between the source and Azure, use **Mydumper/Myloader + High compute VM** to take advantage of compression settings to efficiently move data over low-speed networks <br><br> **-OR-** <br><br> Use **mysqldump** and the **MySQL Workbench Export/Import** utility to perform offline migrations for smaller databases. | [Tutorial: Migrate MySQL to Azure Database for MySQL offline using DMS - Azure Database Migration Service](../../dms/tutorial-mysql-azure-mysql-offline-portal.md)<br><br> [Migrate Amazon RDS for MySQL to Azure Database for MySQL using MySQL Workbench](how-to-migrate-rds-mysql-workbench.md)<br><br> [Import and export - Azure Database for MySQL](concepts-migrate-import-export.md)|
+| Online Migration | **MyDumper/MyLoader with Data-in replication** <br><br> **mysqldump with Data-in replication** can be considered for small databases (less than 100 GB). These methods are applicable to both external and intra-platform migrations. | [Configure Data-in replication - Azure Database for MySQL Flexible Server](../flexible-server/how-to-data-in-replication.md) <br><br> [Tutorial: Migrate Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server with minimal downtime](how-to-migrate-single-flexible-minimum-downtime.md) |
+|Single to Flexible Server Migrations | **Offline**: Custom shell script hosted in [GitHub](https://github.com/Azure/azure-mysql/tree/master/azuremysqltomysqlmigrate). This script also moves other server components such as security settings and server parameter configurations. <br><br>**Online**: **MyDumper/MyLoader with Data-in replication** | [Migrate from Azure Database for MySQL - Single Server to Flexible Server in 5 easy steps!](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/migrate-from-azure-database-for-mysql-single-server-to-flexible/ba-p/2674057)<br><br> [Tutorial: Migrate Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server with minimal downtime](how-to-migrate-single-flexible-minimum-downtime.md)|
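+
+To illustrate the MyDumper/MyLoader approach recommended in the table above, the following is a minimal sketch of a parallel dump and restore; the host names, credentials, output directory, and thread counts are placeholder assumptions, not values from this article.
+
+```
+# Dump the source server in parallel; --rows splits large tables into chunks
+mydumper --host=source-server.example.com --user=migrateuser --password='<password>' \
+  --outputdir=/backup/dump --rows=100000 --threads=8 --compress
+
+# Load the dump into Azure Database for MySQL in parallel
+myloader --host=mydemoserver.mysql.database.azure.com --user='myadmin@mydemoserver' \
+  --password='<password>' --directory=/backup/dump --threads=8
+```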
+
+## Next steps
+* [Migrate MySQL on-premises to Azure Database for MySQL](../migrate/mysql-on-premises-azure-db/01-mysql-migration-guide-intro.md)
+
mysql How To Deny Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-deny-public-network-access.md
+
+ Title: Deny Public Network Access - Azure portal - Azure Database for MySQL
+description: Learn how to configure Deny Public Network Access using Azure portal for your Azure Database for MySQL
+++++ Last updated : 03/10/2020++
+# Deny Public Network Access in Azure Database for MySQL using Azure portal
++
+This article describes how you can configure an Azure Database for MySQL server to deny all public network access and allow connections only through private endpoints, further enhancing network security.
+
+## Prerequisites
+
+To complete this how-to guide, you need:
+
+* An [Azure Database for MySQL](quickstart-create-mysql-server-database-using-azure-portal.md) with General Purpose or Memory Optimized pricing tier
+
+## Set Deny Public Network Access
+
+Follow these steps to set MySQL server Deny Public Network Access:
+
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL server.
+
+1. On the MySQL server page, under **Settings**, click **Connection security** to open the connection security configuration page.
+
+1. In **Deny Public Network Access**, select **Yes** to enable deny public access for your MySQL server.
+
+ :::image type="content" source="./media/how-to-deny-public-network-access/setting-deny-public-network-access.PNG" alt-text="Azure Database for MySQL Deny network access":::
+
+1. Click **Save** to save the changes.
+
+1. A notification will confirm that the connection security setting was successfully enabled.
+
+ :::image type="content" source="./media/how-to-deny-public-network-access/setting-deny-public-network-access-success.png" alt-text="Azure Database for MySQL Deny network access success":::
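+
+If you prefer to script this setting, `az mysql server update` exposes a `--public-network-access` parameter. The following is a minimal sketch with placeholder server and resource group names:
+
+```azurecli-interactive
+az mysql server update --resource-group myresourcegroup --name mydemoserver --public-network-access Disabled
+```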
+
+## Next steps
+
+Learn about [how to create alerts on metrics](how-to-alert-on-metric.md).
mysql How To Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-double-encryption.md
+
+ Title: Infrastructure double encryption - Azure portal - Azure Database for MySQL
+description: Learn how to set up and manage Infrastructure double encryption for your Azure Database for MySQL.
+++++ Last updated : 06/30/2020++
+# Infrastructure double encryption for Azure Database for MySQL
++
+Learn how to set up and manage infrastructure double encryption for your Azure Database for MySQL.
+
+## Prerequisites
+
+* You must have an Azure subscription and be an administrator on that subscription.
+* The Azure Database for MySQL - Single Server should be on the General Purpose or Memory Optimized pricing tier and on general purpose storage v2. Before you proceed further, review the limitations of [infrastructure double encryption](concepts-infrastructure-double-encryption.md#limitations).
+
+## Create an Azure Database for MySQL server with Infrastructure Double encryption - Portal
+
+Follow these steps to create an Azure Database for MySQL server with infrastructure double encryption from the Azure portal:
+
+1. Select **Create a resource** (+) in the upper-left corner of the portal.
+
+2. Select **Databases** > **Azure Database for MySQL**. You can also enter **MySQL** in the search box to find the service.
+
+ :::image type="content" source="./media/quickstart-create-mysql-server-database-using-azure-portal/2-navigate-to-mysql.png" alt-text="Azure Database for MySQL option":::
+
+3. Provide the basic information for the server. Select **Additional settings**, and enable the **Infrastructure double encryption** checkbox to set the parameter.
+
+ :::image type="content" source="./media/how-to-double-encryption/infrastructure-encryption-selected.png" alt-text="Azure Database for MySQL selections":::
+
+4. Select **Review + create** to provision the server.
+
+ :::image type="content" source="./media/how-to-double-encryption/infrastructure-encryption-summary.png" alt-text="Azure Database for MySQL summary":::
+
+5. Once the server is created, you can validate the infrastructure double encryption setting by checking the status on the **Data encryption** server blade.
+
+ :::image type="content" source="./media/how-to-double-encryption/infrastructure-encryption-validation.png" alt-text="Azure Database for MySQL validation":::
+
+## Create an Azure Database for MySQL server with Infrastructure Double encryption - CLI
+
+Follow these steps to create an Azure Database for MySQL server with infrastructure double encryption from the Azure CLI.
+
+This example creates a resource group named `myresourcegroup` in the `westus` location.
+
+```azurecli-interactive
+az group create --name myresourcegroup --location westus
+```
+The following example creates a MySQL 5.7 server in West US named `mydemoserver` in your resource group `myresourcegroup` with the server admin login `myadmin`. This is a **Gen 5**, **General Purpose** server with **2 vCores**. The command also enables infrastructure double encryption for the server. Substitute `<server_admin_password>` with your own value.
+
+```azurecli-interactive
+az mysql server create --resource-group myresourcegroup --name mydemoserver --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2 --version 5.7 --infrastructure-encryption Enabled
+```
+
+## Next steps
+
+To learn more about data encryption, see [Azure Database for MySQL infrastructure double encryption](concepts-infrastructure-double-encryption.md).
mysql How To Fix Corrupt Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-fix-corrupt-database.md
+
+ Title: Resolve database corruption - Azure Database for MySQL
+description: In this article, you'll learn about how to fix database corruption problems in Azure Database for MySQL.
+++++ Last updated : 09/21/2020++
+# Troubleshoot database corruption in Azure Database for MySQL
++
+Database corruption can cause downtime for your application. It's also critical to resolve corruption problems in time to avoid data loss. When database corruption occurs, you'll see this error in your server logs: `InnoDB: Database page corruption on disk or a failed.`
+
+In this article, you'll learn how to resolve database or table corruption problems. Azure Database for MySQL uses the InnoDB engine. It features automated corruption checking and repair operations. InnoDB checks for corrupt pages by running checksums on every page it reads. If it finds a checksum discrepancy, it will automatically stop the MySQL server.
+
+Try the following options to quickly mitigate your database corruption problems.
+
+## Restart your MySQL server
+
+You typically notice a database or table is corrupt when your application accesses the table or database. InnoDB features a crash recovery mechanism that can resolve most problems when the server is restarted. So restarting the server can help it recover from a crash that left the database in a bad state.
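+
+For example, you can restart the server from the Azure CLI; this is a sketch with placeholder server and resource group names:
+
+```azurecli-interactive
+az mysql server restart --resource-group myresourcegroup --name mydemoserver
+```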
+
+## Use the dump and restore method
+
+We recommend that you resolve corruption problems by using a *dump and restore* method. This method involves:
+
+1. Accessing the corrupt table.
+2. Using the mysqldump utility to create a logical backup of the table. The backup will retain the table structure and the data within it.
+3. Reloading the table into the database.
+
+### Back up your database or tables
+
+> [!Important]
+>
+> - Make sure you have configured a firewall rule to access the server from your client machine. For more information, see [configure a firewall rule on Single Server](how-to-manage-firewall-using-portal.md) and [configure a firewall rule on Flexible Server](../flexible-server/how-to-connect-tls-ssl.md).
+> - Use SSL option `--ssl-cert` for mysqldump if you have SSL enabled.
+
+Create a backup file from the command line by using mysqldump. Use this command:
+
+```
+$ mysqldump [--ssl-cert=/path/to/pem] -h [host] -u [uname] -p[pass] [dbname] > [backupfile.sql]
+```
+
+Parameter descriptions:
+- `[ssl-cert=/path/to/pem]`: The path to the SSL certificate. Download the SSL certificate to your client machine and set the path to it in the command. Don't use this parameter if SSL is disabled.
+- `[host]`: Your Azure Database for MySQL server.
+- `[uname]`: Your server admin user name.
+- `[pass]`: The password for your admin user.
+- `[dbname]`: The name of your database.
+- `[backupfile.sql]`: The file name of your database backup.
+
+> [!Important]
+> - For Single Server, use the format `admin-user@servername` to replace `myserveradmin` in the following commands.
+> - For Flexible Server, use the format `admin-user` to replace `myserveradmin` in the following commands.
+
+If a specific table is corrupt, select specific tables in your database to back up:
+```
+$ mysqldump --ssl-cert=</path/to/pem> -h mydemoserver.mysql.database.azure.com -u myserveradmin -p testdb table1 table2 > testdb_tables_backup.sql
+```
+
+To back up one or more databases, use the `--databases` switch and list the database names, separated by spaces:
+
+```
+$ mysqldump --ssl-cert=</path/to/pem> -h mydemoserver.mysql.database.azure.com -u myserveradmin -p --databases testdb1 testdb3 testdb5 > testdb135_backup.sql
+```
+
+### Restore your database or tables
+
+The following steps show how to restore your database or tables. After you create the backup file, you can restore the tables or databases by using the mysql utility. Run this command:
+
+```
+mysql --ssl-cert=</path/to/pem> -h [hostname] -u [uname] -p[pass] [db_to_restore] < [backupfile.sql]
+```
+Here's an example that restores `testdb` from a backup file created with mysqldump:
+
+> [!Important]
+> - For Single Server, use the format `admin-user@servername` to replace `myserveradmin` in the following command.
+> - For Flexible Server, use the format `admin-user` to replace `myserveradmin` in the following command.
+
+```
+$ mysql --ssl-cert=</path/to/pem> -h mydemoserver.mysql.database.azure.com -u myserveradmin -p testdb < testdb_backup.sql
+```
+
+## Next steps
+If the preceding steps don't resolve the problem, you can always restore the entire server:
+- [Restore server in Azure Database for MySQL - Single Server](how-to-restore-server-portal.md)
+- [Restore server in Azure Database for MySQL - Flexible Server](../flexible-server/how-to-restore-server-portal.md)
mysql How To Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-major-version-upgrade.md
+
+ Title: Major version upgrade in Azure Database for MySQL - Single Server
+description: This article describes how you can upgrade major version for Azure Database for MySQL - Single Server
+++++ Last updated : 1/28/2021+
+# Major version upgrade in Azure Database for MySQL Single Server
++
+> [!NOTE]
+> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we will remove it from this article.
+>
+
+> [!IMPORTANT]
+> Major version upgrade for Azure database for MySQL Single Server is in public preview.
+
+This article describes how you can upgrade your MySQL major version in-place in Azure Database for MySQL single server.
+
+This feature enables customers to perform in-place upgrades of their MySQL 5.6 servers to MySQL 5.7 with the click of a button, without any data movement or the need for any application connection string changes.
+
+> [!Note]
+> * Major version upgrade is only available for major version upgrade from MySQL 5.6 to MySQL 5.7.
+> * The server will be unavailable throughout the upgrade operation. It is therefore recommended to perform upgrades during your planned maintenance window. You can consider [performing minimal downtime major version upgrade from MySQL 5.6 to MySQL 5.7 using read replica.](#perform-minimal-downtime-major-version-upgrade-from-mysql-56-to-mysql-57-using-read-replicas)
+
+## Perform major version upgrade from MySQL 5.6 to MySQL 5.7 using Azure portal
+
+Follow these steps to perform a major version upgrade for your Azure Database for MySQL 5.6 server using the Azure portal:
+
+> [!IMPORTANT]
+> We recommend performing the upgrade first on a restored copy of the server rather than upgrading production directly. See [how to perform point-in-time restore](how-to-restore-server-portal.md#point-in-time-restore).
+
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.6 server.
+
+2. From the **Overview** page, click the **Upgrade** button in the toolbar.
+
+3. In the **Upgrade** section, select **OK** to upgrade the Azure Database for MySQL 5.6 server to 5.7.
+
+ :::image type="content" source="./media/how-to-major-version-upgrade-portal/upgrade.png" alt-text="Azure Database for MySQL - overview - upgrade":::
+
+4. A notification will confirm that upgrade is successful.
++
+## Perform major version upgrade from MySQL 5.6 to MySQL 5.7 using Azure CLI
+
+Follow these steps to perform a major version upgrade for your Azure Database for MySQL 5.6 server using the Azure CLI:
+
+> [!IMPORTANT]
+> We recommend performing the upgrade first on a restored copy of the server rather than upgrading production directly. See [how to perform point-in-time restore](how-to-restore-server-cli.md#server-point-in-time-restore).
+
+1. Install [Azure CLI for Windows](/cli/azure/install-azure-cli) or use Azure CLI in [Azure Cloud Shell](../../cloud-shell/overview.md) to run the upgrade commands.
+
+    This upgrade requires version 2.16.0 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed. Run `az version` to find the version and dependent libraries that are installed. To upgrade to the latest version, run `az upgrade`.
+
+2. After you sign in, run the [az mysql server upgrade](/cli/azure/mysql/server#az-mysql-server-upgrade) command:
+
+ ```azurecli
+    az mysql server upgrade --name testsvr --resource-group testgroup --subscription MySubscription --target-server-version 5.7
+ ```
+
+    The command prompt shows a "Running" message. After this message is no longer displayed, the version upgrade is complete.
+
+## Perform major version upgrade from MySQL 5.6 to MySQL 5.7 on read replica using Azure portal
+
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.6 read replica server.
+
+2. From the **Overview** page, click the **Upgrade** button in the toolbar.
+
+3. In the **Upgrade** section, select **OK** to upgrade the Azure Database for MySQL 5.6 read replica server to 5.7.
+
+ :::image type="content" source="./media/how-to-major-version-upgrade-portal/upgrade.png" alt-text="Azure Database for MySQL - overview - upgrade":::
+
+4. A notification will confirm that upgrade is successful.
+
+5. From the **Overview** page, confirm that your Azure Database for MySQL read replica server version is 5.7.
+
+6. Now go to your primary server and [Perform major version upgrade](#perform-major-version-upgrade-from-mysql-56-to-mysql-57-using-azure-portal) on it.
+
+## Perform minimal downtime major version upgrade from MySQL 5.6 to MySQL 5.7 using read replicas
+
+You can perform a minimal downtime major version upgrade from MySQL 5.6 to MySQL 5.7 by utilizing read replicas. The idea is to upgrade the read replica of your server to 5.7 first, and later fail over your application to point to the read replica and make it the new primary.
+
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.6 server.
+
+2. Create a [read replica](./concepts-read-replicas.md#create-a-replica) from your primary server.
+
+3. [Upgrade your read replica](#perform-major-version-upgrade-from-mysql-56-to-mysql-57-on-read-replica-using-azure-portal) to version 5.7.
+
+4. Once you confirm that the replica server is running on version 5.7, stop your application from connecting to your primary server.
+
+5. Check the replication status to make sure the replica has caught up with the primary, so that all the data is in sync, and ensure that no new operations are performed on the primary.
+
+ Call the [`show slave status`](https://dev.mysql.com/doc/refman/5.7/en/show-slave-status.html) command on the replica server to view the replication status.
+
+ ```sql
+ SHOW SLAVE STATUS\G
+ ```
+
+    If the states of `Slave_IO_Running` and `Slave_SQL_Running` are both "Yes" and the value of `Seconds_Behind_Master` is "0", replication is working well. `Seconds_Behind_Master` indicates how far behind the replica is. If the value isn't "0", the replica is still processing updates. Once you confirm `Seconds_Behind_Master` is "0", it's safe to stop replication.
+
+6. Promote your read replica to primary by [stopping replication](./how-to-read-replicas-portal.md#stop-replication-to-a-replica-server).
+
+7. Point your application to the new primary (former replica), which is running MySQL 5.7. Each server has a unique connection string. Update your application to point to the (former) replica instead of the source.
+
+> [!Note]
+> This scenario will have downtime during steps 4, 5 and 6 only.
++
+## Frequently asked questions
+
+### When will this upgrade feature be GA as we have MySQL v5.6 in our production environment that we need to upgrade?
+
+The GA of this feature is planned before MySQL v5.6 retirement. However, the feature is production ready and fully supported by Azure, so you can run it with confidence in your environment. As a recommended best practice, we strongly suggest that you run and test it first on a restored copy of the server, so you can estimate the downtime during the upgrade and perform application compatibility testing before you run it on production. For more information, see [how to perform point-in-time restore](how-to-restore-server-portal.md#point-in-time-restore) to create a point-in-time copy of your server.
+
+### Will this cause downtime of the server and if so, how long?
+
+Yes, the server will be unavailable during the upgrade process, so we recommend you perform this operation during your planned maintenance window. The estimated downtime depends on the database size, the storage size provisioned (IOPS provisioned), and the number of tables on the database. The upgrade time is directly proportional to the number of tables on the server. Upgrades of Basic SKU servers are expected to take longer because they are on the standard storage platform. To estimate the downtime for your server environment, we recommend first performing the upgrade on a restored copy of the server. Consider [performing a minimal downtime major version upgrade from MySQL 5.6 to MySQL 5.7 using a read replica](#perform-minimal-downtime-major-version-upgrade-from-mysql-56-to-mysql-57-using-read-replicas).
+
+### What will happen if we do not choose to upgrade our MySQL v5.6 server before February 5, 2021?
+
+You can still continue running your MySQL v5.6 server as before. Azure **will never** perform a forced upgrade on your server. However, the restrictions documented in the [Azure Database for MySQL versioning policy](concepts-version-policy.md) will apply.
+
+## Next steps
+
+Learn about [Azure Database for MySQL versioning policy](concepts-version-policy.md).
mysql How To Manage Firewall Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-firewall-using-cli.md
+
+ Title: Manage firewall rules - Azure CLI - Azure Database for MySQL
+description: This article describes how to create and manage Azure Database for MySQL firewall rules using Azure CLI command-line.
++++
+ms.devlang: azurecli
+ Last updated : 3/18/2020 +++
+# Create and manage Azure Database for MySQL firewall rules by using the Azure CLI
+
+Server-level firewall rules can be used to manage access to an Azure Database for MySQL Server from a specific IP address or a range of IP addresses. Using convenient Azure CLI commands, you can create, update, delete, list, and show firewall rules to manage your server. For an overview of Azure Database for MySQL firewalls, see [Azure Database for MySQL server firewall rules](./concepts-firewall-rules.md).
+
+Virtual Network (VNet) rules can also be used to secure access to your server. Learn more about [creating and managing Virtual Network service endpoints and rules using the Azure CLI](how-to-manage-vnet-using-cli.md).
+
+## Prerequisites
+* [Install Azure CLI](/cli/azure/install-azure-cli).
+* An [Azure Database for MySQL server and database](quickstart-create-mysql-server-database-using-azure-cli.md).
+
+## Firewall rule commands
+The **az mysql server firewall-rule** command is used from the Azure CLI to create, delete, list, show, and update firewall rules.
+
+Commands:
+- **create**: Create an Azure MySQL server firewall rule.
+- **delete**: Delete an Azure MySQL server firewall rule.
+- **list**: List the Azure MySQL server firewall rules.
+- **show**: Show the details of an Azure MySQL server firewall rule.
+- **update**: Update an Azure MySQL server firewall rule.
+
+## Sign in to Azure and list your Azure Database for MySQL Servers
+Securely connect Azure CLI with your Azure account by using the **az login** command.
+
+1. From the command-line, run the following command:
+ ```azurecli
+ az login
+ ```
+ This command outputs a code to use in the next step.
+
+2. Use a web browser to open the page [https://aka.ms/devicelogin](https://aka.ms/devicelogin), and then enter the code.
+
+3. At the prompt, sign in using your Azure credentials.
+
+4. After your login is authorized, a list of subscriptions is printed in the console. Copy the ID of the desired subscription to set the current subscription to use. Use the [az account set](/cli/azure/account#az-account-set) command.
+ ```azurecli-interactive
+ az account set --subscription <your subscription id>
+ ```
+
+5. List the Azure Databases for MySQL servers for your subscription and resource group if you are unsure of the names. Use the [az mysql server list](/cli/azure/mysql/server#az-mysql-server-list) command.
+
+ ```azurecli-interactive
+ az mysql server list --resource-group myresourcegroup
+ ```
+
+    Note the name attribute in the listing; you need it to specify which MySQL server to work on. If needed, confirm the details for that server by using the name attribute to ensure it's correct. Use the [az mysql server show](/cli/azure/mysql/server#az-mysql-server-show) command.
+
+ ```azurecli-interactive
+ az mysql server show --resource-group myresourcegroup --name mydemoserver
+ ```
+
+## List firewall rules on Azure Database for MySQL Server
+Using the server name and the resource group name, list the existing server firewall rules on the server. Use the [az mysql server firewall-rule list](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-list) command. Notice that the server name attribute is specified in the **--server** switch and not in the **--name** switch.
+```azurecli-interactive
+az mysql server firewall-rule list --resource-group myresourcegroup --server-name mydemoserver
+```
+The output lists the rules, if any, in JSON format (by default). You can use the **--output table** switch to output the results in a more readable table format.
+```azurecli-interactive
+az mysql server firewall-rule list --resource-group myresourcegroup --server-name mydemoserver --output table
+```
+## Create a firewall rule on Azure Database for MySQL Server
+Using the Azure MySQL server name and the resource group name, create a new firewall rule on the server. Use the [az mysql server firewall-rule create](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-create) command. Provide a name for the rule, as well as the start IP and end IP (to provide access to a range of IP addresses) for the rule.
+```azurecli-interactive
+az mysql server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1 --start-ip-address 13.83.152.0 --end-ip-address 13.83.152.15
+```
+
+To allow access for a single IP address, provide the same IP address as the Start IP and End IP, as in this example.
+```azurecli-interactive
+az mysql server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1 --start-ip-address 1.1.1.1 --end-ip-address 1.1.1.1
+```
+
+To allow applications from Azure IP addresses to connect to your Azure Database for MySQL server, provide the IP address 0.0.0.0 as the Start IP and End IP, as in this example.
+```azurecli-interactive
+az mysql server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver --name AllowAllWindowsAzureIps --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
+```
+
+> [!IMPORTANT]
+> This option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
+>
+
+Upon success, each create command output lists the details of the firewall rule you have created, in JSON format (by default). If there is a failure, the output shows error message text instead.
+
+## Update a firewall rule on Azure Database for MySQL server
+Using the Azure MySQL server name and the resource group name, update an existing firewall rule on the server. Use the [az mysql server firewall-rule update](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-update) command. Provide the name of the existing firewall rule as input, as well as the start IP and end IP attributes to update.
+```azurecli-interactive
+az mysql server firewall-rule update --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1 --start-ip-address 13.83.152.0 --end-ip-address 13.83.152.1
+```
+Upon success, the command output lists the details of the firewall rule you have updated, in JSON format (by default). If there is a failure, the output shows error message text instead.
+
+> [!NOTE]
+> If the firewall rule does not exist, the rule is created by the update command.
+
+## Show firewall rule details on Azure Database for MySQL Server
+Using the Azure MySQL server name and the resource group name, show the existing firewall rule details from the server. Use the [az mysql server firewall-rule show](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-show) command. Provide the name of the existing firewall rule as input.
+```azurecli-interactive
+az mysql server firewall-rule show --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1
+```
+Upon success, the command output lists the details of the firewall rule you have specified, in JSON format (by default). If there is a failure, the output shows error message text instead.
+
+## Delete a firewall rule on Azure Database for MySQL Server
+Using the Azure MySQL server name and the resource group name, remove an existing firewall rule from the server. Use the [az mysql server firewall-rule delete](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-delete) command. Provide the name of the existing firewall rule.
+```azurecli-interactive
+az mysql server firewall-rule delete --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1
+```
+Upon success, there is no output. Upon failure, error message text displays.
+
+## Next steps
+- Understand more about [Azure Database for MySQL Server firewall rules](./concepts-firewall-rules.md).
+- [Create and manage Azure Database for MySQL firewall rules using the Azure portal](./how-to-manage-firewall-using-portal.md).
+- Further secure access to your server by [creating and managing Virtual Network service endpoints and rules using the Azure CLI](how-to-manage-vnet-using-cli.md).
mysql How To Manage Firewall Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-firewall-using-portal.md
+
+ Title: Manage firewall rules - Azure portal - Azure Database for MySQL
+description: Create and manage Azure Database for MySQL firewall rules using the Azure portal
+++++ Last updated : 3/18/2020+
+# Create and manage Azure Database for MySQL firewall rules by using the Azure portal
+
+Server-level firewall rules can be used to manage access to an Azure Database for MySQL Server from a specified IP address or a range of IP addresses.
+
+Virtual Network (VNet) rules can also be used to secure access to your server. Learn more about [creating and managing Virtual Network service endpoints and rules using the Azure portal](how-to-manage-vnet-using-portal.md).
+
+## Create a server-level firewall rule in the Azure portal
+
+1. On the MySQL server page, under the **Settings** heading, click **Connection security** to open the Connection security page for the Azure Database for MySQL.
+
+ :::image type="content" source="./media/how-to-manage-firewall-using-portal/1-connection-security.png" alt-text="Azure portal - click Connection security":::
+
+2. Click **Add My IP** on the toolbar. This automatically creates a firewall rule with the public IP address of your computer, as perceived by the Azure system.
+
+ :::image type="content" source="./media/how-to-manage-firewall-using-portal/2-add-my-ip.png" alt-text="Azure portal - click Add My IP":::
+
+3. Verify your IP address before saving the configuration. In some situations, the IP address observed by Azure portal differs from the IP address used when accessing the internet and Azure servers. Therefore, you may need to change the Start IP and End IP to make the rule function as expected.
+
+ Use a search engine or other online tool to check your own IP address. For example, search "what is my IP address".
+
+4. Add additional address ranges. In the firewall rules for the Azure Database for MySQL, you can specify a single IP address or a range of addresses. If you want to limit the rule to a single IP address, type the same address in the Start IP and End IP fields. Opening the firewall enables administrators, users, and applications to access any database on the MySQL server to which they have valid credentials.
+
+ :::image type="content" source="./media/how-to-manage-firewall-using-portal/4-specify-addresses.png" alt-text="Azure portal - firewall rules":::
+
+5. Click **Save** on the toolbar to save this server-level firewall rule. Wait for the confirmation that the update to the firewall rules is successful.
+
+ :::image type="content" source="./media/how-to-manage-firewall-using-portal/5-save-firewall-rule.png" alt-text="Azure portal - click Save":::
+
+## Connecting from Azure
+To allow applications from Azure to connect to your Azure Database for MySQL server, Azure connections must be enabled, for example, to host an Azure Web Apps application, to run an application in an Azure VM, or to connect from an Azure Data Factory data management gateway. The resources do not need to be in the same Virtual Network (VNet) or resource group for the firewall rule to enable those connections. When an application from Azure attempts to connect to your database server, the firewall verifies that Azure connections are allowed. There are a couple of methods to enable these types of connections. A firewall setting with starting and ending address equal to 0.0.0.0 indicates these connections are allowed. Alternatively, you can set the **Allow access to Azure services** option to **ON** in the portal from the **Connection security** pane and hit **Save**. If the connection attempt is not allowed, the request does not reach the Azure Database for MySQL server.
+
+> [!IMPORTANT]
+> This option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
+>
+
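+The same 0.0.0.0 rule can also be scripted. The following Azure CLI sketch, with placeholder server and resource group names, creates the rule described above:
+
+```azurecli-interactive
+az mysql server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver --name AllowAllWindowsAzureIps --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
+```
+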
+## Manage existing server-level firewall rules by using the Azure portal
+Repeat the steps to manage the firewall rules.
+* To add the current computer, click **+ Add My IP**. Click **Save** to save the changes.
+* To add additional IP addresses, type in the **RULE NAME**, **START IP**, and **END IP**. Click **Save** to save the changes.
+* To modify an existing rule, click any of the fields in the rule, and then modify. Click **Save** to save the changes.
+* To delete an existing rule, click the ellipsis […], and then click **Delete**. Click **Save** to save the changes.
++
+## Next steps
+- Similarly, you can script to [Create and manage Azure Database for MySQL firewall rules using Azure CLI](how-to-manage-firewall-using-cli.md).
+- Further secure access to your server by [creating and managing Virtual Network service endpoints and rules using the Azure portal](how-to-manage-vnet-using-portal.md).
+- For help in connecting to an Azure Database for MySQL server, see [Connection libraries for Azure Database for MySQL](./concepts-connection-libraries.md).
mysql How To Manage Single Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-single-server-cli.md
+
+ Title: Manage server - Azure CLI - Azure Database for MySQL
+description: Learn how to manage an Azure Database for MySQL server from the Azure CLI.
+++++ Last updated : 9/22/2020++
+# Manage an Azure Database for MySQL Single server using the Azure CLI
++
+This article shows you how to manage your Single servers deployed in Azure. Management tasks include compute and storage scaling, admin password reset, and viewing server details.
+
+## Prerequisites
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. This article requires that you're running the Azure CLI version 2.0 or later locally. To see the version installed, run the `az --version` command. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+You'll need to log in to your account using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to **Subscription ID** for your Azure account.
+
+```azurecli-interactive
+az login
+```
+
+Select the specific subscription under your account using the [az account set](/cli/azure/account) command. Make a note of the **id** value from the **az login** output to use as the value for the **subscription** argument in the command. If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. To list all your subscriptions, use [az account list](/cli/azure/account#az-account-list).
+
+```azurecli
+az account set --subscription <subscription id>
+```
+
+If you haven't already created a server, refer to this [quickstart](quickstart-create-mysql-server-database-using-azure-cli.md) to create one.
+
+## Scale compute and storage
+You can easily scale up your pricing tier, compute, and storage using the following command. For all the server operations you can perform, see the [az mysql server overview](/cli/azure/mysql/server).
+
+```azurecli-interactive
+az mysql server update --resource-group myresourcegroup --name mydemoserver --sku-name GP_Gen5_4 --storage-size 6144
+```
+
+Here are the details for the arguments above:
+
+**Setting** | **Sample value** | **Description**
+|---|---|---|
+name | mydemoserver | Enter a unique name for your Azure Database for MySQL server. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain from 3 to 63 characters.
+resource-group | myresourcegroup | Provide the name of the Azure resource group.
+sku-name|GP_Gen5_4|Enter the name of the pricing tier and compute configuration. Follows the convention {pricing tier}_{compute generation}_{vCores} in shorthand. For more information, see the [pricing tiers](./concepts-pricing-tiers.md).
+storage-size | 6144 | The storage capacity of the server (unit is megabytes). Minimum 5120 and increases in 1024 increments.
+
+> [!Important]
+> - Storage can be scaled up (however, you cannot scale storage down)
+> - Scaling up from Basic to General purpose or Memory optimized pricing tier is not supported. You can manually scale up with either [using a bash script](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/upgrade-from-basic-to-general-purpose-or-memory-optimized-tiers/ba-p/830404) or [using MySQL Workbench](https://techcommunity.microsoft.com/t5/azure-database-support-blog/how-to-scale-up-azure-database-for-mysql-from-basic-tier-to/ba-p/369134)
++
+## Manage MySQL databases on a server
+You can use any of these commands to create, delete, list, and view the properties of a database on your server:
+
+| Cmdlet | Usage| Description |
+| | | |
+|[az mysql db create](/cli/azure/mysql/db#az-mysql-db-create)|```az mysql db create -g myresourcegroup -s mydemoserver -n mydatabasename``` |Creates a database|
+|[az mysql db delete](/cli/azure/mysql/db#az-mysql-db-delete)|```az mysql db delete -g myresourcegroup -s mydemoserver -n mydatabasename```|Deletes your database from your server. This command does not delete your server. |
+|[az mysql db list](/cli/azure/mysql/db#az-mysql-db-list)|```az mysql db list -g myresourcegroup -s mydemoserver```|Lists all the databases on the server|
+|[az mysql db show](/cli/azure/mysql/db#az-mysql-db-show)|```az mysql db show -g myresourcegroup -s mydemoserver -n mydatabasename```|Shows more details of the database|
+
+## Update admin password
+You can change the administrator role's password with this command:
+```azurecli-interactive
+az mysql server update --resource-group myresourcegroup --name mydemoserver --admin-password <new-password>
+```
+
+> [!Important]
+> Make sure the password is a minimum of 8 characters and a maximum of 128 characters.
+> The password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters.
+
+## Delete a server
+If you would just like to delete the MySQL Single Server, you can run the [az mysql server delete](/cli/azure/mysql/server#az-mysql-server-delete) command.
+
+```azurecli-interactive
+az mysql server delete --resource-group myresourcegroup --name mydemoserver
+```
+
+## Next steps
+- [Restart a server](how-to-restart-server-cli.md)
+- [Restore a server in a bad state](how-to-restore-server-cli.md)
+- [Monitor and tune the server](concepts-monitoring.md)
mysql How To Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-vnet-using-cli.md
+
+ Title: Manage VNet endpoints - Azure CLI - Azure Database for MySQL
+description: This article describes how to create and manage Azure Database for MySQL VNet service endpoints and rules using Azure CLI command line.
++++
+ms.devlang: azurecli
+ Last updated : 02/10/2022 ++
+# Create and manage Azure Database for MySQL VNet service endpoints using Azure CLI
+
+Virtual Network (VNet) service endpoints and rules extend the private address space of a Virtual Network to your Azure Database for MySQL server. Using convenient Azure CLI commands, you can create, update, delete, list, and show VNet service endpoints and rules to manage your server. For an overview of Azure Database for MySQL VNet service endpoints, including limitations, see [Azure Database for MySQL Server VNet service endpoints](concepts-data-access-and-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for MySQL.
+++
+> [!NOTE]
+> Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
+> In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, please create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for MySQL server.
+
+## Configure VNet service endpoints for Azure Database for MySQL
+
+The [az network vnet](/cli/azure/network/vnet) commands are used to configure Virtual Networks.
+
+If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. The account must have the necessary permissions to create a virtual network and service endpoint.
+Service endpoints can be configured on virtual networks independently, by a user with write access to the virtual network.
+
+To secure Azure service resources to a VNet, the user must have permission to "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/" for the subnets being added. This permission is included in the built-in service administrator roles by default, and it can be modified by creating custom roles.
+
+Learn more about [built-in roles](../../role-based-access-control/built-in-roles.md) and assigning specific permissions to [custom roles](../../role-based-access-control/custom-roles.md).
+
+VNets and Azure service resources can be in the same or different subscriptions. If the VNet and Azure service resources are in different subscriptions, the resources should be under the same Active Directory (AD) tenant. Ensure that both subscriptions have the **Microsoft.Sql** and **Microsoft.DBforMySQL** resource providers registered. For more information, refer to [resource-manager-registration][resource-manager-portal].
+
+> [!IMPORTANT]
+> It is highly recommended to read this article about service endpoint configurations and considerations before running the sample script below, or configuring service endpoints. **Virtual Network service endpoint:** A [Virtual Network service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) is a subnet whose property values include one or more formal Azure service type names. VNet services endpoints use the service type name **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure SQL Database, Azure Database for PostgreSQL and MySQL services. It is important to note when applying the **Microsoft.Sql** service tag to a VNet service endpoint it configures service endpoint traffic for all Azure Database services, including Azure SQL Database, Azure Database for PostgreSQL and Azure Database for MySQL servers on the subnet.
+
+## Sample script
++
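+As a minimal sketch of the flow the sample script covers, the following commands enable a **Microsoft.Sql** service endpoint on a subnet and secure a server to it; the resource names and address ranges are placeholder assumptions, not values from this article:
+
+```azurecli
+# Create a VNet with a subnet (placeholder names and address ranges)
+az network vnet create --resource-group myresourcegroup --name myVNet \
+  --address-prefixes 10.0.0.0/16 --subnet-name mySubnet --subnet-prefixes 10.0.1.0/24
+
+# Enable the Microsoft.Sql service endpoint on the subnet
+az network vnet subnet update --resource-group myresourcegroup --vnet-name myVNet \
+  --name mySubnet --service-endpoints Microsoft.Sql
+
+# Create a VNet rule that secures the MySQL server to the subnet
+az mysql server vnet-rule create --resource-group myresourcegroup --server-name mydemoserver \
+  --name myVNetRule --vnet-name myVNet --subnet mySubnet
+```
+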
+### Run the script
++
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
+
+<!-- Link references, to text, Within this same GitHub repo. -->
+[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
mysql How To Manage Vnet Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-vnet-using-portal.md
+
+ Title: Manage VNet endpoints - Azure portal - Azure Database for MySQL
+description: Create and manage Azure Database for MySQL VNet service endpoints and rules using the Azure portal
+++++ Last updated : 3/18/2020+
+# Create and manage Azure Database for MySQL VNet service endpoints and VNet rules by using the Azure portal
+
+Virtual Network (VNet) service endpoints and rules extend the private address space of a Virtual Network to your Azure Database for MySQL server. For an overview of Azure Database for MySQL VNet service endpoints, including limitations, see [Azure Database for MySQL Server VNet service endpoints](concepts-data-access-and-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for MySQL.
+
+> [!NOTE]
+> Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
+> In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, please create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for MySQL server.
++
+## Create a VNet rule and enable service endpoints in the Azure portal
+
+1. On the MySQL server page, under the **Settings** heading, click **Connection security** to open the Connection security pane for Azure Database for MySQL.
+
+2. Ensure that the **Allow access to Azure services** control is set to **OFF**.
+
+> [!Important]
+> If you leave the control set to ON, your Azure MySQL Database server accepts communication from any subnet. Leaving the control set to ON might grant excessive access from a security point of view. The Microsoft Azure Virtual Network service endpoint feature, in coordination with the virtual network rule feature of Azure Database for MySQL, together can reduce your security surface area.
+
+3. Next, click **+ Add existing virtual network**. If you don't have an existing VNet, you can click **+ Create new virtual network** to create one. See [Quickstart: Create a virtual network using the Azure portal](../../virtual-network/quick-create-portal.md).
+
+ :::image type="content" source="./media/how-to-manage-vnet-using-portal/1-connection-security.png" alt-text="Azure portal - click Connection security":::
+
+4. Enter a VNet rule name, select the subscription, virtual network, and subnet name, and then click **Enable**. This automatically enables VNet service endpoints on the subnet using the **Microsoft.Sql** service tag.
+
+ :::image type="content" source="./media/how-to-manage-vnet-using-portal/2-configure-vnet.png" alt-text="Azure portal - configure VNet":::
+
+ The account must have the necessary permissions to create a virtual network and service endpoint.
+
+ Service endpoints can be configured on virtual networks independently, by a user with write access to the virtual network.
+
+    To secure Azure service resources to a VNet, the user must have permission to "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/" for the subnets being added. This permission is included in the built-in service administrator roles by default, and it can be modified by creating custom roles.
+
+ Learn more about [built-in roles](../../role-based-access-control/built-in-roles.md) and assigning specific permissions to [custom roles](../../role-based-access-control/custom-roles.md).
+
+    VNets and Azure service resources can be in the same or different subscriptions. If the VNet and Azure service resources are in different subscriptions, the resources should be under the same Active Directory (AD) tenant. Ensure that both subscriptions have the **Microsoft.Sql** and **Microsoft.DBforMySQL** resource providers registered. For more information, refer to [resource-manager-registration][resource-manager-portal].
+
+ > [!IMPORTANT]
+ > It is highly recommended to read this article about service endpoint configurations and considerations before configuring service endpoints. **Virtual Network service endpoint:** A [Virtual Network service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) is a subnet whose property values include one or more formal Azure service type names. VNet services endpoints use the service type name **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure SQL Database, Azure Database for PostgreSQL and MySQL services. It is important to note when applying the **Microsoft.Sql** service tag to a VNet service endpoint it configures service endpoint traffic for all Azure Database services, including Azure SQL Database, Azure Database for PostgreSQL and Azure Database for MySQL servers on the subnet.
+ >
+
+5. Once enabled, click **OK** and you will see that VNet service endpoints are enabled along with a VNet rule.
+
+ :::image type="content" source="./media/how-to-manage-vnet-using-portal/3-vnet-service-endpoints-enabled-vnet-rule-created.png" alt-text="VNet service endpoints enabled and VNet rule created":::
+
+## Next steps
+- Similarly, you can script to [Enable VNet service endpoints and create a VNET rule for Azure Database for MySQL using Azure CLI](how-to-manage-vnet-using-cli.md).
+- For help in connecting to an Azure Database for MySQL server, see [Connection libraries for Azure Database for MySQL](./concepts-connection-libraries.md)
+
+<!-- Link references, to text, Within this same GitHub repo. -->
+[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
mysql How To Migrate Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-migrate-online.md
+
+ Title: Minimal-downtime migration - Azure Database for MySQL
+description: This article describes how to perform a minimal-downtime migration of a MySQL database to Azure Database for MySQL.
++++++ Last updated : 6/19/2021++
+# Minimal-downtime migration to Azure Database for MySQL
++
+You can perform MySQL migrations to Azure Database for MySQL with minimal downtime by using Data-in replication, which limits the amount of downtime that is incurred by the application.
+
+You can also refer to the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide) for detailed information and use cases about migrating databases to Azure Database for MySQL. This guide provides pointers that will help you successfully plan and execute a MySQL migration to Azure.
+
+## Overview
+
+Using Data-in replication, you can configure the source as your primary and the target as your replica, so that there's continuous syncing of any new transactions to Azure while the application remains running. After the data catches up on the target Azure side, you stop the application for a brief moment (minimum downtime), wait for the last batch of data (from the time you stop the application until the application is effectively unavailable to take any new traffic) to catch up on the target, and then update your connection string to point to Azure. When you're finished, your application will be live on Azure!
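+
+As a rough sketch of this sequence on Azure Database for MySQL (Single Server), which exposes Data-in replication through the `mysql.az_replication_*` stored procedures, the commands below use placeholder host names, credentials, and binary log coordinates that you'd replace with values from your own source server:
+
+```
+# Point the Azure target at the source and start replication (placeholder values)
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p -e \
+ "CALL mysql.az_replication_change_master('source-host.example.com', 'repl_user', '<password>', 3306, 'mysql-bin.000002', 120, ''); CALL mysql.az_replication_start;"
+
+# At cutover, once the replica has caught up, stop replication and repoint the application
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p -e \
+ "CALL mysql.az_replication_stop;"
+```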
+
+## Next steps
+
+- For more information about migrating databases to Azure Database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
mysql How To Migrate Rds Mysql Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-migrate-rds-mysql-data-in-replication.md
+
+ Title: Migrate Amazon RDS for MySQL to Azure Database for MySQL using Data-in Replication
+description: This article describes how to migrate Amazon RDS for MySQL to Azure Database for MySQL by using Data-in Replication.
+++++ Last updated : 09/24/2021++
+# Migrate Amazon RDS for MySQL to Azure Database for MySQL using Data-in Replication
++
+> [!NOTE]
+> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+
+You can use methods such as MySQL dump and restore, MySQL Workbench Export and Import, or Azure Database Migration Service to migrate your MySQL databases to Azure Database for MySQL. By using a combination of open-source tools such as mysqldump or mydumper and myloader with Data-in Replication, you can migrate your workloads with minimum downtime.
+
+Data-in Replication is a technique that replicates data changes from the source server to the destination server based on the binary log file position method. In this scenario, the MySQL instance operating as the source (on which the database changes originate) writes updates and changes as *events* to the binary log. The information in the binary log is stored in different logging formats according to the database changes being recorded. Replicas are configured to read the binary log from the source and execute the events in the binary log on the replica's local database.
+
+If you set up [Data-in Replication](../flexible-server/concepts-data-in-replication.md) to synchronize data from a source MySQL server to a target MySQL server, you can do a selective cutover of your applications from the primary (or source database) to the replica (or target database).
+
+In this tutorial, you'll learn how to set up Data-in Replication between a source server that runs Amazon Relational Database Service (RDS) for MySQL and a target server that runs Azure Database for MySQL.
+
+## Performance considerations
+
+Before you begin this tutorial, consider the performance implications of the location and capacity of the client computer you'll use to perform the operation.
+
+### Client location
+
+Perform dump or restore operations from a client computer that's launched in the same location as the database server:
+
+- For Azure Database for MySQL servers, the client machine should be in the same virtual network and the same availability zone as the target database server.
+- For source Amazon RDS database instances, the client instance should exist in the same Amazon Virtual Private Cloud and availability zone as the source database server.
+In either case, you can move dump files between client machines by using file transfer protocols like FTP or SFTP, or upload them to Azure Blob Storage. To reduce the total migration time, compress files before you transfer them.
+
+### Client capacity
+
+No matter where the client computer is located, it requires adequate compute, I/O, and network capacity to perform the requested operations. The general recommendations are:
+
+- If the dump or restore involves real-time processing of data, for example, compression or decompression, choose an instance class with at least one CPU core per dump or restore thread.
+- Ensure there's enough network bandwidth available to the client instance. Use instance types that support the accelerated networking feature. For more information, see the "Accelerated Networking" section in the [Azure Virtual Machine Networking Guide](../../virtual-network/create-vm-accelerated-networking-cli.md).
+- Ensure that the client machine's storage layer provides the expected read/write capacity. We recommend that you use an Azure virtual machine with Premium SSD storage.
+
+## Prerequisites
+
+To complete this tutorial, you need to:
+
+- Install the [mysqlclient](https://dev.mysql.com/downloads/) on your client computer to create a dump, and perform a restore operation on your target Azure Database for MySQL server.
+- For larger databases, install [mydumper and myloader](https://centminmod.com/mydumper.html) for parallel dumping and restoring of databases.
+
+ > [!NOTE]
+ > Mydumper can only run on Linux distributions. For more information, see [How to install mydumper](https://github.com/maxbube/mydumper#how-to-install-mydumpermyloader).
+
+- Create an instance of Azure Database for MySQL server that runs version 5.7 or 8.0.
+
+ > [!IMPORTANT]
+ > If your target is Azure Database for MySQL Flexible Server with zone-redundant high availability (HA), note that Data-in Replication isn't supported for this configuration. As a workaround, during server creation set up zone-redundant HA:
+ >
+ > 1. Create the server with zone-redundant HA enabled.
+ > 1. Disable HA.
+ > 1. Follow the article to set up Data-in Replication.
+ > 1. Post-cutover, remove the Data-in Replication configuration.
+ > 1. Enable HA.
+
+Ensure that several parameters and features are configured and set up properly, as described:
+
+- For compatibility reasons, have the source and target database servers on the same MySQL version.
+- Have a primary key in each table. A lack of primary keys on tables can slow the replication process.
+- Make sure the character set of the source and the target database are the same.
+- Set the `wait_timeout` parameter to a reasonable time. The time depends on the amount of data or workload you want to import or migrate.
+- Verify that all your tables use InnoDB. The Azure Database for MySQL server only supports the InnoDB storage engine.
+- For tables with many secondary indexes or for tables that are large, the effects of performance overhead are visible during restore. Modify the dump files so that the `CREATE TABLE` statements don't include secondary key definitions. After you import the data, re-create secondary indexes to avoid the performance penalty during the restore process.
+
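+As referenced above, the following queries sketch one way to verify these prerequisites on both servers. This is a minimal example; `your_database` is a placeholder for your own schema name.
+
+```
+-- Compare the default character set and collation on the source and target.
+SELECT @@character_set_server, @@collation_server;
+
+-- List tables that lack a primary key (these can slow replication).
+SELECT t.table_schema, t.table_name
+FROM information_schema.tables t
+LEFT JOIN information_schema.table_constraints c
+  ON c.table_schema = t.table_schema
+  AND c.table_name = t.table_name
+  AND c.constraint_type = 'PRIMARY KEY'
+WHERE t.table_type = 'BASE TABLE'
+  AND t.table_schema = 'your_database'
+  AND c.constraint_type IS NULL;
+
+-- Find tables that don't use the InnoDB storage engine.
+SELECT table_schema, table_name, engine
+FROM information_schema.tables
+WHERE table_type = 'BASE TABLE'
+  AND table_schema = 'your_database'
+  AND engine <> 'InnoDB';
+```
+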
+Finally, to prepare for Data-in Replication:
+
+- Verify that the target Azure Database for MySQL server can connect to the source Amazon RDS for MySQL server over port 3306.
+- Ensure that the source Amazon RDS for MySQL server allows both inbound and outbound traffic on port 3306.
+- Make sure you provide [site-to-site connectivity](../../vpn-gateway/tutorial-site-to-site-portal.md) to your source server by using either [Azure ExpressRoute](../../expressroute/expressroute-introduction.md) or [Azure VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Azure Virtual Network documentation](../../virtual-network/index.yml). Also see the quickstart articles with step-by-step details.
+- Configure your source database server's network security groups to allow the target Azure Database for MySQL's server IP address.
+
+> [!IMPORTANT]
+> If the source Amazon RDS for MySQL instance has GTID_mode set to ON, the target instance of Azure Database for MySQL Flexible Server must also have GTID_mode set to ON.
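+>
+> A quick way to confirm the setting on each server, assuming you can connect with the mysql client, is:
+>
+> ```
+> SELECT @@global.gtid_mode;
+> ```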
+
+## Configure the target instance of Azure Database for MySQL
+
+To configure the target instance of Azure Database for MySQL, which is the target for Data-in Replication:
+
+1. Set the `max_allowed_packet` parameter value to the maximum of **1073741824**, which is 1 GB. This value prevents any overflow issues related to long rows.
+1. Set the `slow_query_log`, `general_log`, `audit_log_enabled`, and `query_store_capture_mode` parameters to **OFF** during the migration to help eliminate any overhead related to query logging. (A query to verify these settings follows these steps.)
+1. Scale up the compute size of the target Azure Database for MySQL server to the maximum of 64 vCores. This size provides more compute resources when you restore the database dump from the source server.
+
+ You can always scale back the compute to meet your application demands after the migration is complete.
+
+1. Scale up the storage size to get more IOPS during the migration or increase the maximum IOPS for the migration.
+
+ > [!NOTE]
+ > Available maximum IOPS are determined by compute size. For more information, see the IOPS section in [Compute and storage options in Azure Database for MySQL - Flexible Server](../flexible-server/concepts-compute-storage.md#iops).
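+
+After you change these parameters, you can confirm that the settings took effect by querying the target server with the mysql client. This is a minimal check; the Azure-specific parameters (`audit_log_enabled`, `query_store_capture_mode`) can be verified the same way if they're exposed as server variables on your service tier.
+
+```
+SHOW VARIABLES WHERE Variable_name IN ('max_allowed_packet', 'slow_query_log', 'general_log');
+```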
+
+## Configure the source Amazon RDS for MySQL server
+
+To prepare and configure the MySQL server hosted in Amazon RDS, which is the *source* for Data-in Replication:
+
+1. Confirm that binary logging is enabled on the source Amazon RDS for MySQL server. Check that automated backups are enabled, or ensure that a read replica exists for the source Amazon RDS for MySQL server.
+
+1. Ensure that the binary log files on the source server are retained until after the changes are applied on the target instance of Azure Database for MySQL.
+
+ With Data-in Replication, Azure Database for MySQL doesn't manage the replication process.
+
+1. To determine the number of hours that binary logs are retained on the source Amazon RDS server, call the `mysql.rds_show_configuration` stored procedure:
+
+   ```
+   mysql> call mysql.rds_show_configuration;
+   +------------------------+-------+-------------------------------------------------------------------------------------------------------+
+   | name                   | value | description                                                                                             |
+   +------------------------+-------+-------------------------------------------------------------------------------------------------------+
+   | binlog retention hours | 24    | binlog retention hours specifies the duration in hours before binary logs are automatically deleted.   |
+   | source delay           | 0     | source delay specifies replication delay in seconds between current instance and its master.           |
+   | target delay           | 0     | target delay specifies replication delay in seconds between current instance and its future read-replica. |
+   +------------------------+-------+-------------------------------------------------------------------------------------------------------+
+   3 rows in set (0.00 sec)
+   ```
+
+1. To configure the binary log retention period, run the `rds_set_configuration` stored procedure to ensure that the binary logs are retained on the source server for the desired length of time. For example:
+
+ ```
+   mysql> call mysql.rds_set_configuration('binlog retention hours', 96);
+ ```
+
+ If you're creating a dump and then restoring, the preceding command helps you to quickly catch up with the delta changes.
+
+ > [!NOTE]
+ > Ensure there's ample disk space to store the binary logs on the source server based on the retention period defined.
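+
+   To gauge how much space the retained logs consume on the source, you can list the binary log files and their sizes. This is a standard MySQL statement; the output varies by server:
+
+   ```
+   mysql> SHOW BINARY LOGS;
+   ```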
+
+There are two ways to capture a dump of data from the source Amazon RDS for MySQL server. One approach involves capturing a dump of data directly from the source server. The other approach involves capturing a dump from an Amazon RDS for MySQL read replica.
+
+- To capture a dump of data directly from the source server:
+
+ 1. Ensure that you stop the writes from the application for a few minutes to get a transactionally consistent dump of data.
+
+ You can also temporarily set the `read_only` parameter to a value of **1** so that writes aren't processed when you're capturing a dump of data.
+
+   1. After you stop the writes on the source server, collect the binary log file name and offset by running the command `mysql> SHOW MASTER STATUS;`.
+ 1. Save these values to start replication from your Azure Database for MySQL server.
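+
+      For reference, the output you're saving looks something like the following; the file name and position here are placeholders, and depending on your version and GTID configuration you may also see an `Executed_Gtid_Set` column:
+
+      ```
+      mysql> SHOW MASTER STATUS;
+      +----------------------------+----------+--------------+------------------+
+      | File                       | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+      +----------------------------+----------+--------------+------------------+
+      | mysql-bin-changelog.000002 |      120 |              |                  |
+      +----------------------------+----------+--------------+------------------+
+      ```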
+ 1. To create a dump of the data, execute `mysqldump` by running the following command:
+
+ ```
+      $ mysqldump -h hostname -u username -p --single-transaction --databases dbnames --order-by-primary > dumpname.sql
+ ```
+
+- If stopping writes on the source server isn't an option or the performance of dumping data isn't acceptable on the source server, capture a dump on a replica server:
+
+   1. Create an Amazon RDS for MySQL read replica with the same configuration as the source server. Then create the dump there.
+ 1. Let the Amazon RDS for MySQL read replica catch up with the source Amazon RDS for MySQL server.
+ 1. When the replica lag reaches **0** on the read replica, stop replication by calling the `mysql.rds_stop_replication` stored procedure.
+
+ ```
+      mysql> call mysql.rds_stop_replication;
+ ```
+
+ 1. With replication stopped, connect to the replica. Then run the `SHOW SLAVE STATUS` command to retrieve the current binary log file name from the **Relay_Master_Log_File** field and the log file position from the **Exec_Master_Log_Pos** field.
+ 1. Save these values to start replication from your Azure Database for MySQL server.
+ 1. To create a dump of the data from the Amazon RDS for MySQL read replica, execute `mysqldump` by running the following command:
+
+ ```
+      $ mysqldump -h hostname -u username -p --single-transaction --databases dbnames --order-by-primary > dumpname.sql
+ ```
+
+ > [!NOTE]
+ > You can also use mydumper for capturing a parallelized dump of your data from your source Amazon RDS for MySQL database. For more information, see [Migrate large databases to Azure Database for MySQL using mydumper/myloader](concepts-migrate-mydumper-myloader.md).
+
+## Link source and replica servers to start Data-in Replication
+
+1. To restore the database by using mysql native restore, run the following command:
+
+ ```
+ $ mysql -h <target_server> -u <targetuser> -p < dumpname.sql
+ ```
+
+ > [!NOTE]
+ > If you're instead using myloader, see [Migrate large databases to Azure Database for MySQL using mydumper/myloader](concepts-migrate-mydumper-myloader.md).
+
+1. Sign in to the source Amazon RDS for MySQL server, and set up a replication user. Then grant the necessary privileges to this user.
+
+ - If you're using SSL, run the following commands:
+
+ ```
+     mysql> CREATE USER 'syncuser'@'%' IDENTIFIED BY 'userpassword';
+     mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'syncuser'@'%' REQUIRE SSL;
+     mysql> SHOW GRANTS FOR syncuser@'%';
+ ```
+
+ - If you're not using SSL, run the following commands:
+
+ ```
+     mysql> CREATE USER 'syncuser'@'%' IDENTIFIED BY 'userpassword';
+     mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'syncuser'@'%';
+     mysql> SHOW GRANTS FOR syncuser@'%';
+ ```
+
+ All Data-in Replication functions are done by stored procedures. For information about all procedures, see [Data-in Replication stored procedures](reference-stored-procedures.md#data-in-replication-stored-procedures). You can run these stored procedures in the MySQL shell or MySQL Workbench.
+
+1. To link the Amazon RDS for MySQL source server and the Azure Database for MySQL target server, sign in to the target Azure Database for MySQL server. Set the Amazon RDS for MySQL server as the source server by running the following command:
+
+ ```
+ CALL mysql.az_replication_change_master('source_server','replication_user_name','replication_user_password',3306,'<master_bin_log_file>',master_bin_log_position,'<master_ssl_ca>');
+ ```
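+
+   For example, a hypothetical invocation with placeholder values might look like the following. The host name, credentials, and binary log coordinates below are illustrative only, and an empty string is passed for the certificate parameter when SSL isn't used:
+
+   ```
+   CALL mysql.az_replication_change_master('rds-source.abc123.us-east-1.rds.amazonaws.com', 'syncuser', 'userpassword', 3306, 'mysql-bin-changelog.000002', 120, '');
+   ```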
+
+1. To start replication between the source Amazon RDS for MySQL server and the target Azure Database for MySQL server, run the following command:
+
+ ```
+   mysql> CALL mysql.az_replication_start;
+ ```
+
+1. To check the status of the replication, on the replica server, run the following command:
+
+ ```
+   mysql> show slave status\G
+ ```
+
+ If the state of the `Slave_IO_Running` and `Slave_SQL_Running` parameters is **Yes**, replication has started and is in a running state.
+
+1. Check the value of the `Seconds_Behind_Master` parameter to determine how delayed the target server is.
+
+ If the value is **0**, the target has processed all updates from the source server. If the value is anything other than **0**, the target server is still processing updates.
+
+## Ensure a successful cutover
+
+To ensure a successful cutover:
+
+1. Configure the appropriate logins and database-level permissions in the target Azure Database for MySQL server.
+1. Stop writes to the source Amazon RDS for MySQL server.
+1. Ensure that the target Azure Database for MySQL server has caught up with the source server and that the `Seconds_Behind_Master` value is **0** from `show slave status`.
+1. Call the stored procedure `mysql.az_replication_stop` to stop the replication, because all changes have been replicated to the target Azure Database for MySQL server.
+1. Call `mysql.az_replication_remove_master` to remove the Data-in Replication configuration (see the command sketch after this list).
+1. Redirect clients and client applications to the target Azure Database for MySQL server.
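+
+As referenced in steps 4 and 5, the replication teardown maps to the following stored procedure calls, run on the target server (a minimal sketch of the sequence):
+
+```
+CALL mysql.az_replication_stop;
+CALL mysql.az_replication_remove_master;
+```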
+
+At this point, the migration is complete. Your applications are connected to the server running Azure Database for MySQL.
+
+## Next steps
+
+- For more information about migrating databases to Azure Database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
+- View the video [Easily migrate MySQL/PostgreSQL apps to Azure managed service](https://medius.studios.ms/Embed/Video/THR2201?sid=THR2201). It contains a demo that shows how to migrate MySQL apps to Azure Database for MySQL.
mysql How To Migrate Rds Mysql Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-migrate-rds-mysql-workbench.md
+
+ Title: Migrate Amazon RDS for MySQL to Azure Database for MySQL using MySQL Workbench
+description: This article describes how to migrate Amazon RDS for MySQL to Azure Database for MySQL by using the MySQL Workbench Migration Wizard.
+ Last updated: 05/21/2021
+# Migrate Amazon RDS for MySQL to Azure Database for MySQL using MySQL Workbench
+
+You can use various utilities, such as MySQL Workbench Export/Import, Azure Database Migration Service (DMS), and MySQL dump and restore, to migrate Amazon RDS for MySQL to Azure Database for MySQL. However, using the MySQL Workbench Migration Wizard provides an easy and convenient way to move your Amazon RDS for MySQL databases to Azure Database for MySQL.
+
+With the Migration Wizard, you can conveniently select which schemas and objects to migrate. It also allows you to view server logs to identify errors and bottlenecks in real time. As a result, you can edit and modify tables or database structures and objects during the migration process when an error is detected, and then resume migration without having to restart from scratch.
+
+> [!NOTE]
+> You can also use the Migration Wizard to migrate other sources, such as Microsoft SQL Server, Oracle, PostgreSQL, MariaDB, etc., which are outside the scope of this article.
+
+## Prerequisites
+
+Before you start the migration process, ensure that the following parameters and features are configured and set up properly.
+
+- Make sure the character set of the source and target databases is the same.
+- Set the wait timeout to a reasonable time, depending on the amount of data or workload you want to import or migrate.
+- Set the `max_allowed_packet` parameter to a reasonable amount, depending on the size of the database you want to import or migrate.
+- Verify that all of your tables use InnoDB, as Azure Database for MySQL Server only supports the InnoDB storage engine.
+- Remove, replace, or modify all triggers, stored procedures, and other functions containing root user or super user definers (Azure Database for MySQL doesn't support the SUPER privilege); a query to help locate such definers is sketched after this list. To replace the definers with the name of the admin user that is running the import process, update your dump file as shown below:
+
+  ```
+  DELIMITER ;;
+  /*!50003 CREATE*/ /*!50017 DEFINER=`root`@`127.0.0.1`*/ /*!50003
+  DELIMITER ;
+
+  /* Modified to */
+
+  DELIMITER ;;
+  /*!50003 CREATE*/ /*!50017 DEFINER=`AdminUserName`@`ServerName`*/ /*!50003
+  DELIMITER ;
+  ```
+
+- If User Defined Functions (UDFs) are running on your database server, you need to delete the privilege for the mysql database. To determine if any UDFs are running on your server, use the following query:
+
+ ```
+ SELECT * FROM mysql.func;
+ ```
+
+ If you discover that UDFs are running, you can drop the UDFs by using the following query:
+
+ ```
+ DROP FUNCTION your_UDFunction;
+ ```
+
+- Make sure that the server on which the tool is running, and ultimately the export location, has ample disk space and compute power (vCores, CPU, and Memory) to perform the export operation, especially when exporting a very large database.
+- Create a path between the on-premises or AWS instance and Azure Database for MySQL if the workload is behind firewalls or other network security layers.
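+
+As referenced above, the following queries sketch one way to locate routines and triggers with root definers before you export. The `root@%` pattern is illustrative; adjust it to the definers used in your environment:
+
+```
+SELECT ROUTINE_SCHEMA, ROUTINE_NAME, DEFINER
+FROM information_schema.ROUTINES
+WHERE DEFINER LIKE 'root@%';
+
+SELECT TRIGGER_SCHEMA, TRIGGER_NAME, DEFINER
+FROM information_schema.TRIGGERS
+WHERE DEFINER LIKE 'root@%';
+```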
+
+## Begin the migration process
+
+1. To start the migration process, sign in to MySQL Workbench, and then select the home icon.
+2. In the left-hand navigation bar, select the Migration Wizard icon, as shown in the screenshot below.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/begin-the-migration.png" alt-text="MySQL Workbench start screen":::
+
+ The **Overview** page of the Migration Wizard is displayed, as shown below.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/migration-wizard-welcome.png" alt-text="MySQL Workbench Migration Wizard welcome page":::
+
+3. Determine if you have an ODBC driver for MySQL Server installed by selecting **Open ODBC Administrator**.
+
+   In our case, on the **Drivers** tab, you'll notice that there are already two MySQL Server ODBC drivers installed.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/obdc-administrator-page.png" alt-text="ODBC Data Source Administrator page":::
+
+ If a MySQL ODBC driver isn't installed, use the MySQL Installer you used to install MySQL Workbench to install the driver. For more information about MySQL ODBC driver installation, see the following resources:
+
+ - [MySQL :: MySQL Connector/ODBC Developer Guide :: 4.1 Installing Connector/ODBC on Windows](https://dev.mysql.com/doc/connector-odbc/en/connector-odbc-installation-binary-windows.html)
+ - [ODBC Driver for MySQL: How to Install and Set up Connection (Step-by-step) ΓÇô {coding}Sight (codingsight.com)](https://codingsight.com/install-and-configure-odbc-drivers-for-mysql/)
+
+4. Close the **ODBC Data Source Administrator** dialog box, and then continue with the migration process.
+
+## Configure source database server connection parameters
+
+1. On the **Overview** page, select **Start Migration**.
+
+ The **Source Selection** page appears. Use this page to provide information about the RDBMS you're migrating from and the parameters for the connection.
+
+2. In the **Database System** field, select **MySQL**.
+3. In the **Stored Connection** field, select one of the saved connection settings for that RDBMS.
+
+ You can save connections by marking the checkbox at the bottom of the page and providing a name of your preference.
+
+4. In the **Connection Method** field, select **Standard TCP/IP**.
+5. In the **Hostname** field, specify the name of your source database server.
+6. In the **Port** field, specify **3306**, and then enter the username and password for connecting to the server.
+7. In the **Database** field, enter the name of the database you want to migrate if you know it; otherwise leave this field blank.
+8. Select **Test Connection** to check the connection to your MySQL Server instance.
+
+   If you've entered the correct parameters, a message appears indicating a successful connection attempt.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/source-connection-parameters.png" alt-text="Source database connection parameters page":::
+
+9. Select **Next**.
+
+## Configure target database server connection parameters
+
+1. On the **Target Selection** page, set the parameters to connect to your target MySQL Server instance using a process similar to that for setting up the connection to the source server.
+2. To verify a successful connection, select **Test Connection**.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/target-connection-parameters.png" alt-text="Target database connection parameters page":::
+
+3. Select **Next**.
+
+## Select the schemas to migrate
+
+The Migration Wizard will communicate with your MySQL Server instance and fetch a list of schemas from the source server.
+
+1. Select **Show logs** to view this operation.
+
+ The screenshot below shows how the schemas are being retrieved from the source database server.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/retrieve-schemas.png" alt-text="Fetch schemas list page":::
+
+2. Select **Next** to verify that all the schemas were successfully fetched.
+
+ The screenshot below shows the list of fetched schemas.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/schemas-selection.png" alt-text="Schemas selection page":::
+
+ You can only migrate schemas that appear in this list.
+
+3. Select the schemas that you want to migrate, and then select **Next**.
+
+## Object migration
+
+Next, specify the object(s) that you want to migrate.
+
+1. Select **Show Selection**, and then, under **Available Objects**, select and add the objects that you want to migrate.
+
+ When you've added the objects, they'll appear under **Objects to Migrate**, as shown in the screenshot below.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/source-objects.png" alt-text="Source objects selection page":::
+
+   In this scenario, we've selected all table objects.
+
+2. Select **Next**.
+
+## Edit data
+
+In this section, you have the option of editing the objects that you want to migrate.
+
+1. On the **Manual Editing** page, notice the **View** drop-down menu in the top-right corner.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/manual-editing.png" alt-text="Manual Editing selection page":::
+
+ The **View** drop-down box includes three items:
+
+   - **All Objects** - Displays all objects. With this option, you can manually edit the generated SQL before it's applied to the target database server. To do this, select the object, and then select **Show Code and Messages**. You can see (and edit!) the generated MySQL code that corresponds to the selected object.
+   - **Migration problems** - Displays any problems that occurred during the migration, which you can review and verify.
+   - **Column Mapping** - Displays column mapping information. You can use this view to edit the names and types of the target object's columns.
+
+2. Select **Next**.
+
+## Create the target database
+
+1. Select the **Create schema in target RDBMS** check box.
+
+ You can also choose to keep already existing schemas, so they won't be modified or updated.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/create-target-database.png" alt-text="Target Creation Options page":::
+
+ In this article, we've chosen to create the schema in target RDBMS, but you can also select the **Create a SQL script file** check box to save the file on your local computer or for other purposes.
+
+2. Select **Next**.
+
+## Run the MySQL script to create the database objects
+
+Since we've elected to create the schema in the target RDBMS, the migrated SQL script will be executed on the target MySQL server. You can view the script's progress in the wizard as it runs.
+
+1. After the creation of the schemas and their objects completes, select **Next**.
+
+   On the **Create Target Results** page, you're presented with a list of the objects created and notification of any errors that were encountered while creating them, as shown in the following screenshot.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/create-target-results.png" alt-text="Create Target Results page":::
+
+2. Review the detail on this page to verify that everything completed as intended.
+
+   For this article, we don't have any errors. If you need to address any error messages, you can edit the migration script.
+
+3. In the **Object** box, select the object that you want to edit.
+4. Under **SQL CREATE script for selected object**, modify your SQL script, and then select **Apply** to save the changes.
+5. Select **Recreate Objects** to run the script including your changes.
+
+   If the script fails, you may need to edit the generated script. You can then manually fix the SQL script and run everything again. In this article, we're not changing anything, so we'll leave the script as it is.
+
+6. Select **Next**.
+
+## Transfer data
+
+This part of the process moves data from the source MySQL Server database instance into your newly created target MySQL database instance. Use the **Data Transfer Setup** page to configure this process.
+
+This page provides options for setting up the data transfer. For the purposes of this article, we'll accept the default values.
+
+1. To begin the actual process of transferring data, select **Next**.
+
+ The progress of the data transfer process appears as shown in the following screenshot.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/bulk-data-transfer.png" alt-text="Bulk Data Transfer page":::
+
+ > [!NOTE]
+ > The duration of the data transfer process is directly related to the size of the database you're migrating. The larger the source database, the longer the process will take, potentially up to a few hours for larger databases.
+
+2. After the transfer completes, select **Next**.
+
+   The **Migration Report** page appears, providing a report summarizing the whole process, as shown in the screenshot below:
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/migration-report.png" alt-text="Migration Progress Report page":::
+
+3. Select **Finish** to close the Migration Wizard.
+
+ The migration is now successfully completed.
+
+## Verify consistency of the migrated schemas and tables
+
+1. Next, sign in to your target MySQL database instance to verify that the migrated schemas and tables are consistent with your MySQL source database.
+
+   In our case, you can see that all schemas (sakila, moda, items, customer, clothes, world, and world_x) from the Amazon RDS for MySQL **MyjolieDB** database have been successfully migrated to the Azure Database for MySQL **azmysql** instance.
+
+2. To verify the table and rows counts, run the following query on both instances:
+
+   `SELECT COUNT(*) FROM sakila.actor;`
+
+ You can see from the screenshot below that the row count for Amazon RDS MySQL is 200, which matches the Azure Database for MySQL instance.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/table-row-size-source.png" alt-text="Table and Row size source database":::
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/table-row-size-target.png" alt-text="Table and Row size target database":::
+
+   While you can run the above query on every single schema and table, that will be quite a bit of work if you're dealing with hundreds of thousands or even millions of tables. You can use the queries below to verify the schema (database) and table size instead.
+
+3. To check the database size, run the following query:
+
+ ```
+ SELECT table_schema AS "Database",
+ ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS "Size (MB)"
+ FROM information_schema.TABLES
+ GROUP BY table_schema;
+ ```
+
+4. To check the table size, run the following query:
+
+ ```
+ SELECT table_name AS "Table",
+ ROUND(((data_length + index_length) / 1024 / 1024), 2) AS "Size (MB)"
+ FROM information_schema.TABLES
+ WHERE table_schema = "database_name"
+ ORDER BY (data_length + index_length) DESC;
+ ```
+
+ You see from the screenshots below that schema (database) size from the Source Amazon RDS MySQL instance is the same as that of the target Azure Database for MySQL Instance.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/database-size-source.png" alt-text="Database size source database":::
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/database-size-target.png" alt-text="Database size target database":::
+
+ Since the schema (database) sizes are the same in both instances, it's not really necessary to check individual table sizes. In any case, you can always use the above query to check your table sizes, as necessary.
+
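+   If you also want to spot-check data content rather than just sizes, comparing table checksums on both instances is one option. This is a sketch using the sample table from earlier; checksum results can vary across MySQL versions and row formats, so treat a mismatch as a prompt for deeper inspection rather than definitive proof of divergence:
+
+   ```
+   CHECKSUM TABLE sakila.actor;
+   ```
+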
+   You've now confirmed that your migration completed successfully.
+
+## Next steps
+
+- For more information about migrating databases to Azure Database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
+- View the video [Easily migrate MySQL/PostgreSQL apps to Azure managed service](https://medius.studios.ms/Embed/Video/THR2201?sid=THR2201), which contains a demo showing how to migrate MySQL apps to Azure Database for MySQL.
mysql How To Migrate Single Flexible Minimum Downtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-migrate-single-flexible-minimum-downtime.md
+
+ Title: "Tutorial: Migrate Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server with minimal downtime"
+description: This article describes how to perform a minimal-downtime migration of Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server.
+ Last updated: 06/18/2021
+
+# Tutorial: Migrate Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server with minimal downtime
+
+You can migrate an instance of Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server with minimal downtime to your applications by using a combination of open-source tools such as mydumper/myloader and Data-in replication.
+
+> [!NOTE]
+> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+
+Data-in replication is a technique that replicates data changes from the source server to the destination server based on the binary log file position method. In this scenario, the MySQL instance operating as the source (on which the database changes originate) writes updates and changes as "events" to the binary log. The information in the binary log is stored in different logging formats according to the database changes being recorded. Replicas are configured to read the binary log from the source and to execute the events in the binary log on the replica's local database.
+
+If you set up Data-in replication to synchronize data from one instance of Azure Database for MySQL to another, you can do a selective cutover of your applications from the primary (or source database) to the replica (or target database).
+
+In this tutorial, you'll use mydumper/myloader and Data-in replication to migrate a sample database ([classicmodels](https://www.mysqltutorial.org/mysql-sample-database.aspx)) from an instance of Azure Database for MySQL - Single Server to an instance of Azure Database for MySQL - Flexible Server, and then synchronize data.
+
+In this tutorial, you learn how to:
+
+* Configure Network Settings for Data-in replication for different scenarios.
+* Configure Data-in replication between the primary and replica.
+* Test the replication.
+* Cutover to complete the migration.
+
+## Prerequisites
+
+To complete this tutorial, you need:
+
+* An instance of Azure Database for MySQL Single Server running version 5.7 or 8.0.
+ > [!Note]
+ > If you're running Azure Database for MySQL Single Server version 5.6, upgrade your instance to 5.7 and then configure data in replication. To learn more, see [Major version upgrade in Azure Database for MySQL - Single Server](how-to-major-version-upgrade.md).
+* An instance of Azure Database for MySQL Flexible Server. For more information, see the article [Create an instance in Azure Database for MySQL Flexible Server](../flexible-server/quickstart-create-server-portal.md).
+ > [!Note]
+ > Configuring Data-in replication for zone redundant high availability servers is not supported. If you would like to have zone redundant HA for your target server, then perform these steps:
+ >
+ > 1. Create the server with Zone redundant HA enabled
+ > 2. Disable HA
+ > 3. Follow the article to setup data-in replication
+ > 4. Post cutover remove the Data-in replication configuration
+ > 5. Enable HA
+ >
+ > *Make sure that **[GTID_Mode](./concepts-read-replicas.md#global-transaction-identifier-gtid)** has the same setting on the source and target servers.*
+
+* To connect and create a database using MySQL Workbench. For more information, see the article [Use MySQL Workbench to connect and query data](../flexible-server/connect-workbench.md).
+* To have an Azure VM running Linux in the same region (or on the same virtual network, with private access) as your source and target databases.
+* To install mysql client or MySQL Workbench (the client tools) on your Azure VM. Ensure that you can connect to both the primary and replica server. For the purposes of this article, mysql client is installed.
+* To install mydumper/myloader on your Azure VM. For more information, see the article [mydumper/myloader](concepts-migrate-mydumper-myloader.md).
+* To download and run the sample database script for the [classicmodels](https://www.mysqltutorial.org/wp-content/uploads/2018/03/mysqlsampledatabase.zip) database on the source server.
+* Configure [binlog_expire_logs_seconds](./concepts-server-parameters.md#binlog_expire_logs_seconds) on the source server to ensure that binlogs aren't purged before the replica commits the changes. After a successful cutover, you can reset the value. (A quick way to check the current value is sketched after this list.)
+
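+As referenced in the final prerequisite, you can check the current binary log retention setting on the source server with the mysql client. This is a minimal check; an empty result isn't necessarily an error, because on MySQL 5.7 the community-equivalent variable is `expire_logs_days`:
+
+```sql
+SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';
+```
+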
+## Configure networking requirements
+
+To configure the Data-in replication, you need to ensure that the target can connect to the source over port 3306. Based on the type of endpoint set up on the source, perform the appropriate following steps.
+
+* If a public endpoint is enabled on the source, then ensure that the target can connect to the source by enabling "Allow access to Azure services" in the firewall rule. To learn more, see [Firewall rules - Azure Database for MySQL](./concepts-firewall-rules.md#connecting-from-azure).
+* If a private endpoint and "[Deny public access](concepts-data-access-security-private-link.md#deny-public-access-for-azure-database-for-mysql)" are enabled on the source, then install the private link in the same VNet that hosts the target. To learn more, see [Private Link - Azure Database for MySQL](concepts-data-access-security-private-link.md).
+
+## Configure Data-in replication
+
+To configure Data in replication, perform the following steps:
+
+1. Sign in to the Azure VM on which you installed the mysql client tool.
+
+2. Connect to the source and target using the mysql client tool.
+
+3. Use the mysql client tool to determine whether log_bin is enabled on the source by running the following command:
+
+ ```sql
+ SHOW VARIABLES LIKE 'log_bin';
+ ```
+
+ > [!Note]
+   > With Azure Database for MySQL Single Server with large storage, which supports up to 16 TB, this is enabled by default.
+
+ > [!Tip]
+   > With Azure Database for MySQL Single Server, which supports up to 4 TB, this is not enabled by default. However, if you promote a [read replica](how-to-read-replicas-portal.md) for the source server and then delete the read replica, the parameter will be set to ON.
+
+4. Based on the SSL enforcement for the source server, create a user in the source server with the replication permission by running the appropriate command.
+
+   If you're using SSL, run the following command:
+
+ ```sql
+ CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';
+   GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%' REQUIRE SSL;
+ ```
+
+   If you're not using SSL, run the following command:
+
+ ```sql
+ CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';
+   GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%';
+ ```
+
+5. To back up the database using mydumper, run the following command on the Azure VM where we installed the mydumper\myloader:
+
+ ```bash
+ $ mydumper --host=<primary_server>.mysql.database.azure.com --user=<username>@<primary_server> --password=<Password> --outputdir=./backup --rows=100 -G -E -R -z --trx-consistency-only --compress --build-empty-files --threads=16 --compress-protocol --ssl --regex '^(classicmodels\.)' -L mydumper-logs.txt
+ ```
+
+   > [!Tip]
+   > The option **--trx-consistency-only** is required for transactional consistency while taking the backup.
+   >
+   > * It's the mydumper equivalent of mysqldump's --single-transaction.
+   > * It's useful if all your tables are InnoDB.
+   > * The "main" thread only needs to hold the global lock until the "dump" threads can start a transaction.
+   > * It offers the shortest duration of global locking.
+
+ The variables in this command are explained below:
+
+ * **--host:** Name of the primary server
+ * **--user:** Name of a user (in the format username@servername since the primary server is running Azure Database for MySQL - Single Server). You can use server admin or a user having SELECT and RELOAD permissions.
+   * **--password:** Password of the user above
+
+ For more information about using mydumper, see [mydumper/myloader](concepts-migrate-mydumper-myloader.md)
+
+6. Read the metadata file to determine the binary log file name and offset by running the following command:
+
+ ```bash
+ $ cat ./backup/metadata
+ ```
+
+ In this command, **./backup** refers to the output directory used in the command in the previous step.
+
+ The results should appear as shown in the following image:
+
+ :::image type="content" source="./media/how-to-migrate-single-flexible-minimum-downtime/metadata.png" alt-text="Continuous sync with the Azure Database Migration Service":::
+
+ Make sure to note the binary file name for use in later steps.
+
+7. Restore the database using myloader by running the following command:
+
+ ```bash
+ $ myloader --host=<servername>.mysql.database.azure.com --user=<username> --password=<Password> --directory=./backup --queries-per-transaction=100 --threads=16 --compress-protocol --ssl --verbose=3 -e 2>myloader-logs.txt
+ ```
+
+ The variables in this command are explained below:
+
+ * **--host:** Name of the replica server
+   * **--user:** Name of a user. You can use the server admin or a user with read/write permission capable of restoring the schemas and data to the database
+   * **--password:** Password for the user above
+
+8. Depending on the SSL enforcement on the primary server, connect to the replica server using the mysql client tool and perform the following steps.
+
+ * If SSL enforcement is enabled, then:
+
+ i. Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem).
+
+     ii. Open the file in Notepad and paste the contents into the section "PLACE YOUR PUBLIC KEY CERTIFICATE'S CONTEXT HERE".
+
+ ```sql
+    SET @cert = '-----BEGIN CERTIFICATE-----
+    PLACE YOUR PUBLIC KEY CERTIFICATE'S CONTEXT HERE
+    -----END CERTIFICATE-----'
+ ```
+
+ iii. To configure Data in replication, run the following command:
+
+ ```sql
+ CALL mysql.az_replication_change_master('<Primary_server>.mysql.database.azure.com', '<username>@<primary_server>', '<Password>', 3306, '<File_Name>', <Position>, @cert);
+ ```
+
+ > [!Note]
+ > Determine the position and file name from the information obtained in step 6.
+
+ * If SSL enforcement isn't enabled, then run the following command:
+
+ ```sql
+    CALL mysql.az_replication_change_master('<Primary_server>.mysql.database.azure.com', '<username>@<primary_server>', '<Password>', 3306, '<File_Name>', <Position>, '');
+ ```
+
+9. To start replication from the replica server, call the below stored procedure.
+
+ ```sql
+ call mysql.az_replication_start;
+ ```
+
+10. To check the replication status, on the replica server, run the following command:
+
+ ```sql
+    show slave status \G
+ ```
+
+ > [!Note]
+ > If you're using MySQL Workbench the \G modifier is not required.
+
+   If the state of *Slave_IO_Running* and *Slave_SQL_Running* is Yes and the value of *Seconds_Behind_Master* is 0, replication is working well. *Seconds_Behind_Master* indicates how late the replica is; if the value is something other than 0, the replica is still processing updates.
+
+## Testing the replication (optional)
+
+To confirm that Data-in replication is working properly, you can verify that the changes to the tables in primary were replicated to the replica.
+
+1. Identify a table to use for testing, for example the Customers table, and then confirm that the number of entries it contains is the same on the primary and replica servers by running the following command on each:
+
+ ```
+ select count(*) from customers;
+ ```
+
+2. Make a note of the entry count for later comparison.
+
+   To test replication, add some data to the customer tables on the primary server, and then verify that the new data is replicated. In this case, you'll add rows to a table on the primary server and then confirm that they're replicated on the replica server.
+
+3. In the Customers table on the primary server, insert rows by running the following command:
+
+ ```sql
+ insert into `customers`(`customerNumber`,`customerName`,`contactLastName`,`contactFirstName`,`phone`,`addressLine1`,`addressLine2`,`city`,`state`,`postalCode`,`country`,`salesRepEmployeeNumber`,`creditLimit`) values
+ (<ID>,'name1','name2','name3 ','11.22.5555','54, Add',NULL,'Add1',NULL,'44000','country',1370,'21000.00');
+ ```
+
+4. To check the replication status, run *show slave status \G* to confirm that replication is working as expected.
+
+5. To confirm that the count is the same, on the replica server, run the following command:
+
+ ```sql
+ select count(*) from customers;
+ ```
+
+## Ensure a successful cutover
+
+To ensure a successful cutover, perform the following tasks:
+
+1. Configure the appropriate server-level firewall and virtual network rules to connect to the target server. You can compare the firewall rules for the [source](how-to-manage-firewall-using-portal.md) and [target](../flexible-server/how-to-manage-firewall-portal.md#create-a-firewall-rule-after-server-is-created) from the portal.
+2. Configure the appropriate logins and database-level permissions in the target server. You can run `SELECT * FROM mysql.user;` on the source and target servers to compare.
+3. Make sure that all the incoming connections to Azure Database for MySQL Single Server are stopped.
+   > [!Tip]
+   > You can set the Azure Database for MySQL Single Server to read only.
+4. Ensure that the replica has caught up with the primary by running *show slave status \G* and confirming that the value of the *Seconds_Behind_Master* parameter is 0.
+5. Redirect clients and client applications to the target instance of Azure Database for MySQL Flexible Server.
+6. Perform the final cutover by running the `mysql.az_replication_stop` stored procedure, which stops replication from the replica server.
+7. Call `mysql.az_replication_remove_master` to remove the Data-in replication configuration.
+
+At this point, your applications are connected to the new Azure Database for MySQL Flexible Server, and changes in the source will no longer replicate to the target.
+
+## Next steps
+
+* Learn more about Data-in replication: [Replicate data into Azure Database for MySQL Flexible Server](../flexible-server/concepts-data-in-replication.md) and [Configure Azure Database for MySQL Flexible Server Data-in replication](../flexible-server/how-to-data-in-replication.md)
+* Learn more about [troubleshooting common errors in Azure Database for MySQL](how-to-troubleshoot-common-errors.md).
+* Learn more about [migrating MySQL to Azure Database for MySQL offline using Azure Database Migration Service](../../dms/tutorial-mysql-azure-mysql-offline-portal.md).
mysql How To Move Regions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-move-regions-portal.md
+
+ Title: Move Azure regions - Azure portal - Azure Database for MySQL
+description: Move an Azure Database for MySQL server from one Azure region to another using a read replica and the Azure portal.
+ Last updated: 06/26/2020
+#Customer intent: As an Azure service administrator, I want to move my service resources to another Azure region.
+
+# Move an Azure Database for MySQL server to another region by using the Azure portal
+
+There are various scenarios for moving an existing Azure Database for MySQL server from one region to another. For example, you might want to move a production server to another region as part of your disaster recovery planning.
+
+You can use an Azure Database for MySQL [cross-region read replica](concepts-read-replicas.md#cross-region-replication) to complete the move to another region. To do so, first create a read replica in the target region. Next, stop replication to the read replica server to make it a standalone server that accepts both read and write traffic.
+
+> [!NOTE]
+> This article focuses on moving your server to a different region. If you want to move your server to a different resource group or subscription, refer to the [move](../../azure-resource-manager/management/move-resource-group-and-subscription.md) article.
+
+## Prerequisites
+
+- The read replica feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers.
+
+- Make sure that your Azure Database for MySQL source server is in the Azure region that you want to move from.
+
+## Prepare to move
+
+To create a cross-region read replica server in the target region using the Azure portal, use the following steps:
+
+1. Sign into the [Azure portal](https://portal.azure.com/).
+1. Select the existing Azure Database for MySQL server that you want to use as the source server. This action opens the **Overview** page.
+1. Select **Replication** from the menu, under **SETTINGS**.
+1. Select **Add Replica**.
+1. Enter a name for the replica server.
+1. Select the location for the replica server. The default location is the same as the source server's. Verify that you've selected the target location where you want the replica to be deployed.
+1. Select **OK** to confirm creation of the replica. During replica creation, data is copied from the source server to the replica. Create time may last several minutes or more, in proportion to the size of the source server.
+
+>[!NOTE]
+> When you create a replica, it doesn't inherit the VNet service endpoints of the source server. These rules must be set up independently for the replica.
+
+## Move
+
+> [!IMPORTANT]
+> The standalone server can't be made into a replica again.
+> Before you stop replication on a read replica, ensure the replica has all the data that you require.
+
+Stopping replication to the replica server causes it to become a standalone server. To stop replication to the replica from the Azure portal, use the following steps:
+
+1. Once the replica has been created, locate and select your Azure Database for MySQL source server.
+1. Select **Replication** from the menu, under **SETTINGS**.
+1. Select the replica server.
+1. Select **Stop replication**.
+1. Confirm you want to stop replication by clicking **OK**.
+
+## Clean up source server
+
+You may want to delete the source Azure Database for MySQL server. To do so, use the following steps:
+
+1. Once the replica has been created, locate and select your Azure Database for MySQL source server.
+1. In the **Overview** window, select **Delete**.
+1. Type in the name of the source server to confirm you want to delete.
+1. Select **Delete**.
+
+## Next steps
+
+In this tutorial, you moved an Azure Database for MySQL server from one region to another by using the Azure portal and then cleaned up the unneeded source resources.
+
+- Learn more about [read replicas](concepts-read-replicas.md)
+- Learn more about [managing read replicas in the Azure portal](how-to-read-replicas-portal.md)
+- Learn more about [business continuity](concepts-business-continuity.md) options
mysql How To Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-read-replicas-cli.md
+
+ Title: Manage read replicas - Azure CLI, REST API - Azure Database for MySQL
+description: Learn how to set up and manage read replicas in Azure Database for MySQL using the Azure CLI or REST API.
+ Last updated: 06/17/2020
+# How to create and manage read replicas in Azure Database for MySQL using the Azure CLI and REST API
+
+In this article, you will learn how to create and manage read replicas in the Azure Database for MySQL service using the Azure CLI and REST API. To learn more about read replicas, see the [overview](concepts-read-replicas.md).
+
+## Azure CLI
+
+You can create and manage read replicas using the Azure CLI.
+
+### Prerequisites
+
+- [Install Azure CLI 2.0](/cli/azure/install-azure-cli)
+- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md) that will be used as the source server.
+
+> [!IMPORTANT]
+> The read replica feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers.
+
+### Create a read replica
+
+> [!IMPORTANT]
+> If your source server has no existing replica servers, the source server might need a restart to prepare itself for replication, depending upon the storage used (v1/v2). Consider this, and perform the operation during off-peak hours. See [Source Server restart](./concepts-read-replicas.md#source-server-restart) for more details.
+>
+>If GTID is enabled on a primary server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID-based replication. To learn more, refer to [Global transaction identifier (GTID)](concepts-read-replicas.md#global-transaction-identifier-gtid)
+
+A read replica server can be created using the following command:
+
+```azurecli-interactive
+az mysql server replica create --name mydemoreplicaserver --source-server mydemoserver --resource-group myresourcegroup
+```
+
+The `az mysql server replica create` command requires the following parameters:
+
+| Setting | Example value | Description |
+| --- | --- | --- |
+| resource-group | myresourcegroup | The resource group where the replica server will be created. |
+| name | mydemoreplicaserver | The name of the new replica server that is created. |
+| source-server | mydemoserver | The name or ID of the existing source server to replicate from. |
+
+To create a cross region read replica, use the `--location` parameter. The CLI example below creates the replica in West US.
+
+```azurecli-interactive
+az mysql server replica create --name mydemoreplicaserver --source-server mydemoserver --resource-group myresourcegroup --location westus
+```
+
+> [!NOTE]
+> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
+
+> [!NOTE]
+> * The `az mysql server replica create` command has a `--sku-name` argument, which allows you to specify the SKU (`{pricing_tier}_{compute generation}_{vCores}`) when you create a replica using the Azure CLI. </br>
+> * The primary server and read replica should be in the same pricing tier (General Purpose or Memory Optimized). </br>
+> * The replica server's configuration can also be changed after it has been created. We recommend keeping the replica server's configuration at values equal to or greater than the source's to ensure the replica is able to keep up with the master.
+
+### List replicas for a source server
+
+To view all replicas for a given source server, run the following command:
+
+```azurecli-interactive
+az mysql server replica list --server-name mydemoserver --resource-group myresourcegroup
+```
+
+The `az mysql server replica list` command requires the following parameters:
+
+| Setting | Example value | Description |
+| --- | --- | --- |
+| resource-group | myresourcegroup | The resource group of the source server. |
+| server-name | mydemoserver | The name or ID of the source server. |
+
+### Stop replication to a replica server
+
+> [!IMPORTANT]
+> Stopping replication to a server is irreversible. Once replication has stopped between a source and replica, it cannot be undone. The replica server then becomes a standalone server and supports both reads and writes. This server cannot be made into a replica again.
+
+Replication to a read replica server can be stopped using the following command:
+
+```azurecli-interactive
+az mysql server replica stop --name mydemoreplicaserver --resource-group myresourcegroup
+```
+
+The `az mysql server replica stop` command requires the following parameters:
+
+| Setting | Example value | Description |
+| --- | --- | --- |
+| resource-group | myresourcegroup | The resource group where the replica server exists. |
+| name | mydemoreplicaserver | The name of the replica server to stop replication on. |
+
+### Delete a replica server
+
+Deleting a read replica server can be done by running the **[az mysql server delete](/cli/azure/mysql/server)** command.
+
+```azurecli-interactive
+az mysql server delete --resource-group myresourcegroup --name mydemoreplicaserver
+```
+
+### Delete a source server
+
+> [!IMPORTANT]
+> Deleting a source server stops replication to all replica servers and deletes the source server itself. Replica servers become standalone servers that now support both reads and writes.
+
+To delete a source server, you can run the **[az mysql server delete](/cli/azure/mysql/server)** command.
+
+```azurecli-interactive
+az mysql server delete --resource-group myresourcegroup --name mydemoserver
+```
++
+## REST API
+
+You can create and manage read replicas using the [Azure REST API](/rest/api/azure/).
+
+### Create a read replica
+
+You can create a read replica by using the [create API](/rest/api/mysql/flexibleserver/servers/create):
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{replicaName}?api-version=2017-12-01
+```
+
+```json
+{
+ "location": "southeastasia",
+ "properties": {
+ "createMode": "Replica",
+ "sourceServerId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{masterServerName}"
+ }
+}
+```
+
+> [!NOTE]
+> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
+
+A replica is created by using the same compute and storage settings as the master. After a replica is created, several settings can be changed independently from the source server: compute generation, vCores, storage, and backup retention period. The pricing tier can also be changed independently, except to or from the Basic tier.
+
+> [!IMPORTANT]
+> Before a source server setting is updated to a new value, update the replica setting to an equal or greater value. This action helps the replica keep up with any changes made to the master.
+
+### List replicas
+
+You can view the list of replicas of a source server using the [replica list API](/rest/api/mysql/flexibleserver/replicas/list-by-server):
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{masterServerName}/Replicas?api-version=2017-12-01
+```
+
+### Stop replication to a replica server
+
+You can stop replication between a source server and a read replica by using the [update API](/rest/api/mysql/flexibleserver/servers/update).
+
+After you stop replication to a source server and a read replica, it can't be undone. The read replica becomes a standalone server that supports both reads and writes. The standalone server can't be made into a replica again.
+
+```http
+PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{masterServerName}?api-version=2017-12-01
+```
+
+```json
+{
+ "properties": {
+ "replicationRole":"None"
+ }
+}
+```
+
+### Delete a source or replica server
+
+To delete a source or replica server, use the [delete API](/rest/api/mysql/flexibleserver/servers/delete). When you delete a source server, replication to all read replicas is stopped, and the read replicas become standalone servers that support both reads and writes.
+
+```http
+DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/servers/{serverName}?api-version=2017-12-01
+```
+
+### Known issue
+
+Servers in the General Purpose and Memory Optimized tiers use one of two generations of storage: general purpose storage v1 (supports up to 4 TB) and general purpose storage v2 (supports up to 16 TB).
+The source server and the replica server must have the same storage type. Because [general purpose storage v2](./concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage) isn't available in all regions, make sure you choose a supported replica region when you specify a location with the CLI or REST API for read replica creation. To identify the storage type of your source server, see [How can I determine which storage type my server is running on](./concepts-pricing-tiers.md#how-can-i-determine-which-storage-type-my-server-is-running-on).
+
+If you choose a region where you can't create a read replica for your source server, the deployment will keep running, as shown in the figure below, and then will time out with the error *"The resource provision operation did not complete within the allowed timeout period."*
+
+[ :::image type="content" source="media/how-to-read-replicas-cli/replica-cli-known-issue.png" alt-text="Read replica cli error.":::](media/how-to-read-replicas-cli/replica-cli-known-issue.png#lightbox)
+
+## Next steps
+
+- Learn more about [read replicas](concepts-read-replicas.md)
mysql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-read-replicas-portal.md
+
+ Title: Manage read replicas - Azure portal - Azure Database for MySQL
+description: Learn how to set up and manage read replicas in Azure Database for MySQL using the Azure portal.
+Last updated: 06/17/2020
+# How to create and manage read replicas in Azure Database for MySQL using the Azure portal
+In this article, you will learn how to create and manage read replicas in the Azure Database for MySQL service using the Azure portal.
+
+## Prerequisites
+
+- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md) that will be used as the source server.
+
+> [!IMPORTANT]
+> The read replica feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers.
+
+## Create a read replica
+
+> [!IMPORTANT]
+> If your source server has no existing replica servers, it might need a restart to prepare itself for replication, depending on the storage used (v1/v2). Plan for this restart and perform the operation during off-peak hours. See [Source Server restart](./concepts-read-replicas.md#source-server-restart) for more details.
+>
+> If GTID is enabled on a primary server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID-based replication. To learn more, see [Global transaction identifier (GTID)](concepts-read-replicas.md#global-transaction-identifier-gtid).
+
+A read replica server can be created using the following steps:
+
+1. Sign into the [Azure portal](https://portal.azure.com/).
+
+2. Select the existing Azure Database for MySQL server that you want to use as a master. This action opens the **Overview** page.
+
+3. Select **Replication** from the menu, under **SETTINGS**.
+
+4. Select **Add Replica**.
+
+ :::image type="content" source="./media/how-to-read-replica-portal/add-replica-1.png" alt-text="Azure Database for MySQL - Replication":::
+
+5. Enter a name for the replica server.
+
+ :::image type="content" source="./media/how-to-read-replica-portal/replica-name.png" alt-text="Azure Database for MySQL - Replica name":::
+
+6. Select the location for the replica server. The default location is the same as the source server's.
+
+ :::image type="content" source="./media/how-to-read-replica-portal/replica-location.png" alt-text="Azure Database for MySQL - Replica location":::
+
+ > [!NOTE]
+ > To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
+
+7. Select **OK** to confirm creation of the replica.
+
+> [!NOTE]
+> Read replicas are created with the same server configuration as the master. The replica server configuration can be changed after it has been created. The replica server is always created in the same resource group and subscription as the source server. If you want a replica server in a different resource group or subscription, you can [move the replica server](../../azure-resource-manager/management/move-resource-group-and-subscription.md) after creation. We recommend keeping the replica server's configuration at values equal to or greater than the source's to ensure the replica can keep up with the master.
+
+Once the replica server has been created, it can be viewed from the **Replication** blade.
+
+ :::image type="content" source="./media/how-to-read-replica-portal/list-replica.png" alt-text="Azure Database for MySQL - List replicas":::
+
+## Stop replication to a replica server
+
+> [!IMPORTANT]
+> Stopping replication to a server is irreversible. Once replication has stopped between a source and replica, it cannot be undone. The replica server then becomes a standalone server that supports both reads and writes. This server cannot be made into a replica again.
+
+To stop replication between a source and a replica server from the Azure portal, use the following steps:
+
+1. In the Azure portal, select your source Azure Database for MySQL server.
+
+2. Select **Replication** from the menu, under **SETTINGS**.
+
+3. Select the replica server you wish to stop replication for.
+
+ :::image type="content" source="./media/how-to-read-replica-portal/stop-replication-select.png" alt-text="Azure Database for MySQL - Stop replication select server":::
+
+4. Select **Stop replication**.
+
+ :::image type="content" source="./media/how-to-read-replica-portal/stop-replication.png" alt-text="Azure Database for MySQL - Stop replication":::
+
+5. Confirm you want to stop replication by clicking **OK**.
+
+ :::image type="content" source="./media/how-to-read-replica-portal/stop-replication-confirm.png" alt-text="Azure Database for MySQL - Stop replication confirm":::
+
+## Delete a replica server
+
+To delete a read replica server from the Azure portal, use the following steps:
+
+1. In the Azure portal, select your source Azure Database for MySQL server.
+
+2. Select **Replication** from the menu, under **SETTINGS**.
+
+3. Select the replica server you wish to delete.
+
+ :::image type="content" source="./media/how-to-read-replica-portal/delete-replica-select.png" alt-text="Azure Database for MySQL - Delete replica select server":::
+
+4. Select **Delete replica**.
+
+ :::image type="content" source="./media/how-to-read-replica-portal/delete-replica.png" alt-text="Azure Database for MySQL - Delete replica":::
+
+5. Type the name of the replica and click **Delete** to confirm deletion of the replica.
+
+ :::image type="content" source="./media/how-to-read-replica-portal/delete-replica-confirm.png" alt-text="Azure Database for MySQL - Delete replica confirm":::
+
+## Delete a source server
+
+> [!IMPORTANT]
+> Deleting a source server stops replication to all replica servers and deletes the source server itself. Replica servers become standalone servers that now support both reads and writes.
+
+To delete a source server from the Azure portal, use the following steps:
+
+1. In the Azure portal, select your source Azure Database for MySQL server.
+
+2. From the **Overview**, select **Delete**.
+
+ :::image type="content" source="./media/how-to-read-replica-portal/delete-master-overview.png" alt-text="Azure Database for MySQL - Delete master":::
+
+3. Type the name of the source server and click **Delete** to confirm deletion of the source server.
+
+ :::image type="content" source="./media/how-to-read-replica-portal/delete-master-confirm.png" alt-text="Azure Database for MySQL - Delete master confirm":::
+
+## Monitor replication
+
+1. In the [Azure portal](https://portal.azure.com/), select the replica Azure Database for MySQL server you want to monitor.
+
+2. Under the **Monitoring** section of the sidebar, select **Metrics**.
+
+3. Select **Replication lag in seconds** from the dropdown list of available metrics.
+
+ :::image type="content" source="./media/how-to-read-replica-portal/monitor-select-replication-lag-1.png" alt-text="Select Replication lag":::
+
+4. Select the time range you wish to view. The image below selects a 30-minute time range.
+
+ :::image type="content" source="./media/how-to-read-replica-portal/monitor-replication-lag-time-range-1.png" alt-text="Select time range":::
+
+5. View the replication lag for the selected time range. The image below displays the last 30 minutes.
+
+ :::image type="content" source="./media/how-to-read-replica-portal/monitor-replication-lag-time-range-thirty-mins-1.png" alt-text="Select time range 30 minutes":::
+
+## Next steps
+
+- Learn more about [read replicas](concepts-read-replicas.md)
mysql How To Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-read-replicas-powershell.md
+
+ Title: Manage read replicas - Azure PowerShell - Azure Database for MySQL
+description: Learn how to set up and manage read replicas in Azure Database for MySQL using PowerShell.
+Last updated: 06/17/2020
+# How to create and manage read replicas in Azure Database for MySQL using PowerShell
+In this article, you learn how to create and manage read replicas in the Azure Database for MySQL
+service using PowerShell. To learn more about read replicas, see the
+[overview](concepts-read-replicas.md).
+
+## Azure PowerShell
+
+You can create and manage read replicas using PowerShell.
+
+## Prerequisites
+
+To complete this how-to guide, you need:
+
+- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+ [Azure Cloud Shell](https://shell.azure.com/) in the browser
+- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)
+
+> [!IMPORTANT]
+> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az
+> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`.
+> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az
+> PowerShell module releases and available natively from within Azure Cloud Shell.
+
+If you choose to use PowerShell locally, connect to your Azure account using the
+[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet.
+> [!IMPORTANT]
+> The read replica feature is only available for Azure Database for MySQL servers in the General
+> Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing
+> tiers.
+>
+> If GTID is enabled on a primary server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID-based replication. To learn more, see [Global transaction identifier (GTID)](concepts-read-replicas.md#global-transaction-identifier-gtid).
+
+### Create a read replica
+
+> [!IMPORTANT]
+> If your source server has no existing replica servers, it might need a restart to prepare itself for replication, depending on the storage used (v1/v2). Plan for this restart and perform the operation during off-peak hours. See [Source Server restart](./concepts-read-replicas.md#source-server-restart) for more details.
+
+A read replica server can be created using the following command:
+
+```azurepowershell-interactive
+Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
+ New-AzMySqlReplica -Name mydemoreplicaserver -ResourceGroupName myresourcegroup
+```
+
+The `New-AzMySqlReplica` command requires the following parameters:
+
+| Setting | Example value | Description |
+| --- | --- | --- |
+| ResourceGroupName | myresourcegroup | The resource group where the replica server is created. |
+| Name | mydemoreplicaserver | The name of the new replica server that is created. |
+
+To create a cross-region read replica, use the **Location** parameter. The following example creates
+a replica in the **West US** region.
+
+```azurepowershell-interactive
+Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
+ New-AzMySqlReplica -Name mydemoreplicaserver -ResourceGroupName myresourcegroup -Location westus
+```
+
+> [!NOTE]
+> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
+
+By default, read replicas are created with the same server configuration as the source unless the
+**Sku** parameter is specified.
+
+> [!NOTE]
+> We recommend keeping the replica server's configuration at values equal to or greater
+> than the source's to ensure the replica can keep up with the master.
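+
+For example, a minimal sketch that scales a replica up to 8 vCores (assuming the replica is named mydemoreplicaserver):
+
+```azurepowershell-interactive
+Update-AzMySqlServer -Name mydemoreplicaserver -ResourceGroupName myresourcegroup -Sku GP_Gen5_8
+```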
+
+### List replicas for a source server
+
+To view all replicas for a given source server, run the following command:
+
+```azurepowershell-interactive
+Get-AzMySqlReplica -ResourceGroupName myresourcegroup -ServerName mydemoserver
+```
+
+The `Get-AzMySqlReplica` command requires the following parameters:
+
+| Setting | Example value | Description |
+| --- | --- | --- |
+| ResourceGroupName | myresourcegroup | The resource group where the source server exists. |
+| ServerName | mydemoserver | The name or ID of the source server. |
+
+### Delete a replica server
+
+To delete a read replica server, run the `Remove-AzMySqlServer` cmdlet.
+
+```azurepowershell-interactive
+Remove-AzMySqlServer -Name mydemoreplicaserver -ResourceGroupName myresourcegroup
+```
+
+### Delete a source server
+
+> [!IMPORTANT]
+> Deleting a source server stops replication to all replica servers and deletes the source server
+> itself. Replica servers become standalone servers that now support both read and writes.
+
+To delete a source server, you can run the `Remove-AzMySqlServer` cmdlet.
+
+```azurepowershell-interactive
+Remove-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup
+```
+
+### Known issue
+
+There are two generations of storage used by servers in the General Purpose and Memory Optimized tiers: general purpose storage v1 (supports up to 4 TB) and general purpose storage v2 (supports up to 16 TB).
+The source server and the replica server must have the same storage type. Because [general purpose storage v2](./concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage) is not available in all regions, make sure you choose a supported replica region when you specify the location with PowerShell for read replica creation. To identify the storage type of your source server, see [How can I determine which storage type my server is running on](./concepts-pricing-tiers.md#how-can-i-determine-which-storage-type-my-server-is-running-on).
+
+If you choose a region where you cannot create a read replica for your source server, the deployment will keep running as shown in the figure below and then time out with the error *"The resource provision operation did not complete within the allowed timeout period."*
+
+[ :::image type="content" source="media/how-to-read-replicas-powershell/replica-powershell-known-issue.png" alt-text="Read replica PowerShell error":::](media/how-to-read-replicas-powershell/replica-powershell-known-issue.png#lightbox)
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Restart Azure Database for MySQL server using PowerShell](how-to-restart-server-powershell.md)
mysql How To Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-redirection.md
+
+ Title: Connect with redirection - Azure Database for MySQL
+description: This article describes how you can configure your application to connect to Azure Database for MySQL with redirection.
+Last updated: 6/8/2020
+# Connect to Azure Database for MySQL with redirection
+This topic explains how to connect an application to your Azure Database for MySQL server using redirection mode. Redirection aims to reduce network latency between client applications and MySQL servers by allowing applications to connect directly to backend server nodes.
+
+## Before you begin
+Sign in to the [Azure portal](https://portal.azure.com). Create an Azure Database for MySQL server with engine version 5.6, 5.7, or 8.0.
+
+For details, refer to how to create an Azure Database for MySQL server using the [Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md) or [Azure CLI](quickstart-create-mysql-server-database-using-azure-cli.md).
+
+> [!IMPORTANT]
+> Redirection is currently not supported with [Private Link for Azure Database for MySQL](concepts-data-access-security-private-link.md).
+
+## Enable redirection
+
+On your Azure Database for MySQL server, configure the `redirect_enabled` parameter to `ON` to allow connections with redirection mode. To update this server parameter, use the [Azure portal](how-to-server-parameters.md) or [Azure CLI](how-to-configure-server-parameters-using-cli.md).
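+
+For example, a minimal Azure CLI sketch (assuming a server named `mydemoserver` in the resource group `myresourcegroup`):
+
+```azurecli-interactive
+az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver --name redirect_enabled --value ON
+```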
+
+## PHP
+
+Support for redirection in PHP applications is available through the [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension, developed by Microsoft.
+
+The mysqlnd_azure extension is available for PHP applications through PECL, and we highly recommend installing and configuring it from the officially published [PECL package](https://pecl.php.net/package/mysqlnd_azure).
+
+> [!IMPORTANT]
+> Support for redirection in the PHP [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension is currently in preview.
+
+### Redirection logic
+
+>[!IMPORTANT]
+> The redirection logic and behavior were updated beginning with version 1.1.0, and **it is recommended to use version 1.1.0+**.
+
+The redirection behavior is determined by the value of `mysqlnd_azure.enableRedirect`. The table below outlines the behavior of redirection based on the value of this parameter beginning in **version 1.1.0+**.
+
+If you are using an older version of the mysqlnd_azure extension (version 1.0.0-1.0.3), the redirection behavior is determined by the value of `mysqlnd_azure.enabled`. The valid values are `off` (behaves like `off` in the table below) and `on` (behaves like `preferred` in the table below).
+
+|**mysqlnd_azure.enableRedirect value**| **Behavior**|
+|-|-|
+|`off` or `0`|Redirection will not be used. |
+|`on` or `1`|- If the connection does not use SSL on the driver side, no connection will be made. The following error will be returned: *"mysqlnd_azure.enableRedirect is on, but SSL option is not set in connection string. Redirection is only possible with SSL."*<br>- If SSL is used on the driver side, but redirection is not supported on the server, the first connection is aborted and the following error is returned: *"Connection aborted because redirection is not enabled on the MySQL server or the network package doesn't meet redirection protocol."*<br>- If the MySQL server supports redirection, but the redirected connection failed for any reason, also abort the first proxy connection. Return the error of the redirected connection.|
+|`preferred` or `2`<br> (default value)|- mysqlnd_azure will use redirection if possible.<br>- If the connection does not use SSL on the driver side, the server does not support redirection, or the redirected connection fails to connect for any non-fatal reason while the proxy connection is still a valid one, it will fall back to the first proxy connection.|
+
+The subsequent sections of the document will outline how to install the `mysqlnd_azure` extension using PECL and set the value of this parameter.
+
+### Ubuntu Linux
+
+#### Prerequisites
+- PHP versions 7.2.15+ and 7.3.2+
+- PHP PEAR
+- php-mysql
+- Azure Database for MySQL server
+
+1. Install [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) with [PECL](https://pecl.php.net/package/mysqlnd_azure). It is recommended to use version 1.1.0+.
+
+ ```bash
+ sudo pecl install mysqlnd_azure
+ ```
+
+2. Locate the extension directory (`extension_dir`) by running the following command:
+
+ ```bash
+ php -i | grep "extension_dir"
+ ```
+
+3. Change directories to the returned folder and ensure `mysqlnd_azure.so` is located in this folder.
+
+4. Locate the folder for .ini files by running the following command:
+
+ ```bash
+ php -i | grep "dir for additional .ini files"
+ ```
+
+5. Change directories to this returned folder.
+
+6. Create a new .ini file for `mysqlnd_azure`. Make sure the name is alphabetically ordered after that of `mysqlnd`, since the modules are loaded in the name order of the .ini files. For example, if the `mysqlnd` .ini is named `10-mysqlnd.ini`, name the mysqlnd_azure .ini `20-mysqlnd-azure.ini`.
+
+7. Within the new .ini file, add the following lines to enable redirection.
+
+ ```bash
+ extension=mysqlnd_azure
+ mysqlnd_azure.enableRedirect = on/off/preferred
+ ```
+
+### Windows
+
+#### Prerequisites
+- PHP versions 7.2.15+ and 7.3.2+
+- php-mysql
+- Azure Database for MySQL server
+
+1. Determine if you are running an x64 or x86 version of PHP by running the following command:
+
+ ```cmd
+ php -i | findstr "Thread"
+ ```
+
+2. Download the corresponding x64 or x86 version of the [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) DLL from [PECL](https://pecl.php.net/package/mysqlnd_azure) that matches your version of PHP. It is recommended to use version 1.1.0+.
+
+3. Extract the zip file and find the DLL named `php_mysqlnd_azure.dll`.
+
+4. Locate the extension directory (`extension_dir`) by running the below command:
+
+ ```cmd
+ php -i | find "extension_dir"
+ ```
+
+5. Copy the `php_mysqlnd_azure.dll` file into the directory returned in step 4.
+
+6. Locate the PHP folder containing the `php.ini` file using the following command:
+
+ ```cmd
+ php -i | find "Loaded Configuration File"
+ ```
+
+7. Modify the `php.ini` file and add the following extra lines to enable redirection.
+
+ Under the Dynamic Extensions section:
+ ```cmd
+ extension=mysqlnd_azure
+ ```
+
+ Under the Module Settings section:
+ ```cmd
+ [mysqlnd_azure]
+ mysqlnd_azure.enableRedirect = on/off/preferred
+ ```
+
+### Confirm redirection
+
+You can confirm that redirection is configured with the following sample PHP code. Create a PHP file called `mysqlConnect.php` and paste in the code below. Update the server name, username, and password with your own.
+
+```php
+<?php
+$host = '<yourservername>.mysql.database.azure.com';
+$username = '<yourusername>@<yourservername>';
+$password = '<yourpassword>';
+$db_name = 'testdb';
+echo "mysqlnd_azure.enableRedirect: ", ini_get("mysqlnd_azure.enableRedirect"), "\n";
+$db = mysqli_init();
+// The connection must be configured with SSL for the redirection test
+$link = mysqli_real_connect($db, $host, $username, $password, $db_name, 3306, NULL, MYSQLI_CLIENT_SSL);
+if (!$link) {
+    die('Connect error (' . mysqli_connect_errno() . '): ' . mysqli_connect_error() . "\n");
+}
+else {
+    echo $db->host_info, "\n"; // if redirection succeeds, host_info will differ from the hostname you used to connect
+    $res = $db->query('SHOW TABLES;'); // test query with the connection
+    print_r($res);
+    $db->close();
+}
+?>
+```
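+
+To run the test, invoke the script from the command line (assuming it was saved in the current directory); if redirection succeeded, the printed host information differs from the gateway hostname you connected to:
+
+```bash
+php mysqlConnect.php
+```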
+
+## Next steps
+For more information about connection strings, see [Connection Strings](how-to-connection-string.md).
mysql How To Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-cli.md
+
+ Title: Restart server - Azure CLI - Azure Database for MySQL
+description: This article describes how you can restart an Azure Database for MySQL server using the Azure CLI.
+Last updated: 3/18/2020
+# Restart Azure Database for MySQL server using the Azure CLI
+
+This topic describes how you can restart an Azure Database for MySQL server. You may need to restart your server for maintenance reasons, which causes a short outage as the server performs the operation.
+
+The server restart will be blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores.
+
+The time required to complete a restart depends on the MySQL recovery process. To decrease the restart time, we recommend you minimize the amount of activity occurring on the server prior to the restart.
+
+## Prerequisites
+
+To complete this how-to guide:
+
+- You need an [Azure Database for MySQL server](quickstart-create-server-up-azure-cli.md).
+
+
+- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Restart the server
+
+Restart the server with the following command:
+
+```azurecli-interactive
+az mysql server restart --name mydemoserver --resource-group myresourcegroup
+```
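+
+Optionally, you can confirm the server is available again by checking its state; a minimal sketch (the `userVisibleState` value reads `Ready` once the restart completes):
+
+```azurecli-interactive
+az mysql server show --name mydemoserver --resource-group myresourcegroup --query userVisibleState
+```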
+
+## Next steps
+
+Learn about [how to set parameters in Azure Database for MySQL](how-to-configure-server-parameters-using-cli.md)
mysql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-portal.md
+
+ Title: Restart server - Azure portal - Azure Database for MySQL
+description: This article describes how you can restart an Azure Database for MySQL server using the Azure portal.
+Last updated: 3/18/2020
+# Restart Azure Database for MySQL server using Azure portal
+
+This topic describes how you can restart an Azure Database for MySQL server. You may need to restart your server for maintenance reasons, which causes a short outage as the server performs the operation.
+
+The server restart will be blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores.
+
+The time required to complete a restart depends on the MySQL recovery process. To decrease the restart time, we recommend you minimize the amount of activity occurring on the server prior to the restart.
+
+## Prerequisites
+To complete this how-to guide, you need:
+- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md)
+
+## Perform server restart
+
+The following steps restart the MySQL server:
+
+1. In the Azure portal, select your Azure Database for MySQL server.
+
+2. In the toolbar of the server's **Overview** page, click **Restart**.
+
+ :::image type="content" source="./media/how-to-restart-server-portal/2-server.png" alt-text="Azure Database for MySQL - Overview - Restart button":::
+
+3. Click **Yes** to confirm restarting the server.
+
+ :::image type="content" source="./media/how-to-restart-server-portal/3-restart-confirm.png" alt-text="Azure Database for MySQL - Restart confirm":::
+
+4. Observe that the server status changes to "Restarting".
+
+ :::image type="content" source="./media/how-to-restart-server-portal/4-restarting-status.png" alt-text="Azure Database for MySQL - Restart status":::
+
+5. Confirm server restart is successful.
+
+ :::image type="content" source="./media/how-to-restart-server-portal/5-restart-success.png" alt-text="Azure Database for MySQL - Restart success":::
+
+## Next steps
+
+[Quickstart: Create Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)
mysql How To Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-powershell.md
+
+ Title: Restart server - Azure PowerShell - Azure Database for MySQL
+description: This article describes how you can restart an Azure Database for MySQL server using PowerShell.
+Last updated: 4/28/2020
+# Restart Azure Database for MySQL server using PowerShell
+This topic describes how you can restart an Azure Database for MySQL server. You may need to restart
+your server for maintenance reasons, which causes a short outage during the operation.
+
+The server restart is blocked if the service is busy. For example, the service may be processing a
+previously requested operation such as scaling vCores.
+
+The amount of time required to complete a restart depends on the MySQL recovery process. To reduce
+the restart time, we recommend you minimize the amount of activity occurring on the server before
+the restart.
+
+## Prerequisites
+
+To complete this how-to guide, you need:
+
+- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+ [Azure Cloud Shell](https://shell.azure.com/) in the browser
+- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)
+
+> [!IMPORTANT]
+> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az
+> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`.
+> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az
+> PowerShell module releases and available natively from within Azure Cloud Shell.
+
+If you choose to use PowerShell locally, connect to your Azure account using the
+[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet.
+## Restart the server
+
+Restart the server with the following command:
+
+```azurepowershell-interactive
+Restart-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Create an Azure Database for MySQL server using PowerShell](quickstart-create-mysql-server-database-using-azure-powershell.md)
mysql How To Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-dropped-server.md
+
+ Title: Restore a deleted Azure Database for MySQL server
+description: This article describes how to restore a deleted server in Azure Database for MySQL using the Azure portal.
+Last updated: 10/09/2020
+# Restore a deleted Azure Database for MySQL server
+When a server is deleted, the database server backup is retained for up to five days in the service. The database backup can be accessed and restored only from the Azure subscription where the server originally resided. Follow the recommended steps below to recover a deleted MySQL server resource within five days of deletion. These steps work only if the backup for the server is still available and has not been deleted from the system.
+
+## Prerequisites
+To restore a deleted Azure Database for MySQL server, you need the following:
+- Azure Subscription name hosting the original server
+- Location where the server was created
+
+## Steps to restore
+
+1. Go to the [Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_ActivityLog/ActivityLogBlade) from the Monitor blade in the Azure portal.
+
+2. In the Activity Log, click **Add filter** and set the following filters:
+
+ - **Subscription** = Your Subscription hosting the deleted server
+ - **Resource Type** = Azure Database for MySQL servers (Microsoft.DBforMySQL/servers)
+ - **Operation** = Delete MySQL Server (Microsoft.DBforMySQL/servers/delete)
+
+ [![Activity log filtered for delete MySQL server operation](./media/how-to-restore-dropped-server/activity-log.png)](./media/how-to-restore-dropped-server/activity-log.png#lightbox)
+
+3. Double-click the Delete MySQL Server event, select the **JSON** tab, and note the "resourceId" and "submissionTimestamp" attributes in the JSON output. The resourceId is in the following format: /subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/TargetResourceGroup/providers/Microsoft.DBforMySQL/servers/deletedserver.
+
+4. Go to the [Create Server REST API page](/rest/api/mysql/singleserver/servers(2017-12-01)/create), select the **Try It** tab highlighted in green, and sign in with your Azure account.
+
+5. Provide the resourceGroupName, serverName (the deleted server name), and subscriptionId, derived from the resourceId attribute captured in Step 3. The api-version is pre-populated, as shown in the image.
+
+ [![Create server using REST API](./media/how-to-restore-dropped-server/create-server-from-rest-api.png)](./media/how-to-restore-dropped-server/create-server-from-rest-api.png#lightbox)
+
+6. Scroll down to the Request Body section and paste the following:
+
+ ```json
+ {
+ "location": "Dropped Server Location",
+ "properties":
+ {
+ "restorePointInTime": "submissionTimestamp - 15 minutes",
+ "createMode": "PointInTimeRestore",
+ "sourceServerId": "resourceId"
+ }
+ }
+ ```
+7. Replace the following values in the above request body (a worked example follows this list):
+    * "Dropped Server Location" with the Azure region where the deleted server was originally created
+    * "submissionTimestamp" and "resourceId" with the values captured in Step 3
+    * For "restorePointInTime", specify a value of "submissionTimestamp" minus **15 minutes** to ensure the command does not error out
+
+8. If you see Response Code 201 or 202, the restore request is successfully submitted.
+
+9. The server creation can take time depending on the database size and compute resources provisioned on the original server. The restore status can be monitored from the Activity log by filtering for:
+ - **Subscription** = Your Subscription
+ - **Resource Type** = Azure Database for MySQL servers (Microsoft.DBforMySQL/servers)
+ - **Operation** = Update MySQL Server Create
+
+## Next steps
+- If you are trying to restore a server within five days, and still receive an error after accurately following the steps discussed earlier, open a support incident for assistance. If you are trying to restore a deleted server after five days, an error is expected since the backup file cannot be found. Do not open a support ticket in this scenario. The support team cannot provide any assistance if the backup is deleted from the system.
+- To prevent accidental deletion of servers, we highly recommend using [Resource Locks](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/preventing-the-disaster-of-accidental-deletion-for-your-mysql/ba-p/825222).
mysql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-cli.md
+
+ Title: Backup and restore - Azure CLI - Azure Database for MySQL
+description: Learn how to backup and restore a server in Azure Database for MySQL by using the Azure CLI.
+ms.devlang: azurecli
+Last updated: 3/27/2020
+# How to back up and restore a server in Azure Database for MySQL using the Azure CLI
+Azure Database for MySQL servers are backed up periodically to enable restore features. Using this feature, you can restore the server and all its databases to an earlier point-in-time, on a new server.
+
+## Prerequisites
+
+To complete this how-to guide:
+
+- You need an [Azure Database for MySQL server and database](quickstart-create-mysql-server-database-using-azure-cli.md).
+- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Set backup configuration
+
+At server creation, you choose between configuring your server for either locally redundant or geographically redundant backups.
+
+> [!NOTE]
+> After a server is created, the kind of redundancy it has, geographically redundant vs locally redundant, can't be switched.
+>
+
+While creating a server via the `az mysql server create` command, the `--geo-redundant-backup` parameter decides your backup redundancy option. If `Enabled`, geo-redundant backups are taken; if `Disabled`, locally redundant backups are taken.
+
+The backup retention period is set by the parameter `--backup-retention`.
+
+For more information about setting these values during create, see the [Azure Database for MySQL server CLI Quickstart](quickstart-create-mysql-server-database-using-azure-cli.md).
+
+The backup retention period of a server can be changed as follows:
+
+```azurecli-interactive
+az mysql server update --name mydemoserver --resource-group myresourcegroup --backup-retention 10
+```
+
+The preceding example changes the backup retention period of mydemoserver to 10 days.
+
+The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. Point-in-time restore is described further in the next section.
+
+## Server point-in-time restore
+You can restore the server to a previous point in time. The restored data is copied to a new server, and the existing server is left as is. For example, if a table is accidentally dropped at noon today, you can restore to the time just before noon. Then, you can retrieve the missing table and data from the restored copy of the server.
+
+To restore the server, use the Azure CLI [az mysql server restore](/cli/azure/mysql/server#az-mysql-server-restore) command.
+
+### Run the restore command
+
+To restore the server, at the Azure CLI command prompt, enter the following command:
+
+```azurecli-interactive
+az mysql server restore --resource-group myresourcegroup --name mydemoserver-restored --restore-point-in-time 2018-03-13T13:59:00Z --source-server mydemoserver
+```
+
+The `az mysql server restore` command requires the following parameters:
+
+| Setting | Suggested value | Description |
+| --- | --- | --- |
+| resource-group | myresourcegroup | The resource group where the source server exists. |
+| name | mydemoserver-restored | The name of the new server that is created by the restore command. |
+| restore-point-in-time | 2018-03-13T13:59:00Z | Select a point in time to restore to. This date and time must be within the source server's backup retention period. Use the ISO8601 date and time format. For example, you can use your own local time zone, such as `2018-03-13T05:59:00-08:00`. You can also use the UTC Zulu format, for example, `2018-03-13T13:59:00Z`. |
+| source-server | mydemoserver | The name or ID of the source server to restore from. |
+
+When you restore a server to an earlier point in time, a new server is created. The original server and its databases from the specified point in time are copied to the new server.
+
+The location and pricing tier values for the restored server remain the same as the original server.
+
+After the restore process finishes, locate the new server and verify that the data is restored as expected. The new server has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's **Overview** page.
+
+Additionally, after the restore operation finishes, two server parameters are reset to default values (and are not copied over from the primary server):
+* time_zone - This value is set to the DEFAULT value **SYSTEM**
+* event_scheduler - The event_scheduler is set to **OFF** on the restored server
+
+You will need to copy the values from the primary server and set them on the restored server by reconfiguring the [server parameters](how-to-server-parameters.md).
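+
+For example, a minimal Azure CLI sketch that turns the event scheduler back on for the restored server (assuming it was ON at the source):
+
+```azurecli-interactive
+az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver-restored --name event_scheduler --value ON
+```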
+
+The new server created during a restore does not have the VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server. Firewall rules from the original server are restored.
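+
+For example, a hypothetical sketch that re-creates a VNet rule on the restored server (the rule, virtual network, and subnet names are illustrative):
+
+```azurecli-interactive
+az mysql server vnet-rule create --name myvnetrule --resource-group myresourcegroup --server-name mydemoserver-restored --vnet-name myvnet --subnet mysubnet
+```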
+
+## Geo restore
+If you configured your server for geographically redundant backups, a new server can be created from the backup of that existing server. This new server can be created in any region where Azure Database for MySQL is available.
+
+To create a server using a geo redundant backup, use the Azure CLI `az mysql server georestore` command.
+
+> [!NOTE]
+> When a server is first created it may not be immediately available for geo restore. It may take a few hours for the necessary metadata to be populated.
+>
+
+To geo restore the server, at the Azure CLI command prompt, enter the following command:
+
+```azurecli-interactive
+az mysql server georestore --resource-group myresourcegroup --name mydemoserver-georestored --source-server mydemoserver --location eastus --sku-name GP_Gen5_8
+```
+This command creates a new server called *mydemoserver-georestored* in East US that will belong to *myresourcegroup*. It is a General Purpose, Gen 5 server with 8 vCores. The server is created from the geo-redundant backup of *mydemoserver*, which is also in the resource group *myresourcegroup*.
+
+If you want to create the new server in a different resource group from the existing server, then in the `--source-server` parameter you would qualify the server name as in the following example:
+
+```azurecli-interactive
+az mysql server georestore --resource-group newresourcegroup --name mydemoserver-georestored --source-server "/subscriptions/$<subscription ID>/resourceGroups/$<resource group ID>/providers/Microsoft.DBforMySQL/servers/mydemoserver" --location eastus --sku-name GP_Gen5_8
+```
+
+The `az mysql server georestore` command requires the following parameters:
+
+| Setting | Suggested value | Description |
+| --- | --- | --- |
+| resource-group | myresourcegroup | The name of the resource group the new server will belong to. |
+| name | mydemoserver-georestored | The name of the new server. |
+| source-server | mydemoserver | The name of the existing server whose geo-redundant backups are used. |
+| location | eastus | The location of the new server. |
+| sku-name | GP_Gen5_8 | This parameter sets the pricing tier, compute generation, and number of vCores of the new server. GP_Gen5_8 maps to a General Purpose, Gen 5 server with 8 vCores. |
+
+When you create a new server by geo restore, it inherits the same storage size and pricing tier as the source server. These values cannot be changed during creation. After the new server is created, its storage size can be scaled up.
+
+After the restore process finishes, locate the new server and verify that the data is restored as expected. The new server has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's **Overview** page.
+
+The new server created during a restore does not have the VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server. Firewall rules from the original server are restored.
+
+## Next steps
+- Learn more about the service's [backups](concepts-backup.md)
+- Learn about [replicas](concepts-read-replicas.md)
+- Learn more about [business continuity](concepts-business-continuity.md) options
mysql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-portal.md
+
+ Title: Backup and restore - Azure portal - Azure Database for MySQL
+description: This article describes how to restore a server in Azure Database for MySQL using the Azure portal.
+Last updated: 6/30/2020
+# How to backup and restore a server in Azure Database for MySQL using the Azure portal
+## Backup happens automatically
+Azure Database for MySQL servers are backed up periodically to enable restore features. Using this feature, you can restore the server and all its databases to an earlier point-in-time, on a new server.
+
+## Prerequisites
+To complete this how-to guide, you need:
+- An [Azure Database for MySQL server and database](quickstart-create-mysql-server-database-using-azure-portal.md)
+
+## Set backup configuration
+
+You make the choice between configuring your server for either locally redundant backups or geographically redundant backups at server creation, in the **Pricing Tier** window.
+
+> [!NOTE]
+> After a server is created, the kind of redundancy it has, geographically redundant vs locally redundant, can't be switched.
+>
+
+While creating a server through the Azure portal, the **Pricing Tier** window is where you select either **Locally Redundant** or **Geographically Redundant** backups for your server. This window is also where you select the **Backup Retention Period** - how long (in number of days) you want the server backups stored for.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/pricing-tier.png" alt-text="Pricing Tier - Choose Backup Redundancy":::
+
+For more information about setting these values during create, see the [Azure Database for MySQL server quickstart](quickstart-create-mysql-server-database-using-azure-portal.md).
+
+The backup retention period can be changed on a server through the following steps:
+1. Sign into the [Azure portal](https://portal.azure.com/).
+2. Select your Azure Database for MySQL server. This action opens the **Overview** page.
+3. Select **Pricing Tier** from the menu, under **SETTINGS**. Use the slider to change the **Backup Retention Period** to your preference, between 7 and 35 days.
+
+4. Click **OK** to confirm the change.
+
+The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. Point-in-time restore is described further in the following section.
+
+## Point-in-time restore
+Azure Database for MySQL allows you to restore the server back to a point-in-time, into a new copy of the server. You can use this new server to recover your data, or have your client applications point to this new server.
+
+For example, if a table was accidentally dropped at noon today, you could restore to the time just before noon and retrieve the missing table and data from that new copy of the server. Point-in-time restore is at the server level, not at the database level.
+
+The following steps restore the sample server to a point-in-time:
+1. In the Azure portal, select your Azure Database for MySQL server.
+
+2. In the toolbar of the server's **Overview** page, select **Restore**.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/2-server.png" alt-text="Azure Database for MySQL - Overview - Restore button":::
+
+3. Fill out the Restore form with the required information:
+
+ :::image type="content" source="./media/how-to-restore-server-portal/3-restore.png" alt-text="Azure Database for MySQL - Restore information":::
+ - **Restore point**: Select the point-in-time you want to restore to.
+ - **Target server**: Provide a name for the new server.
+ - **Location**: You cannot select the region. By default, it is the same as the source server.
+ - **Pricing tier**: You cannot change these parameters when doing a point-in-time restore. It is the same as the source server.
+
+4. Click **OK** to restore the server to the selected point-in-time.
+
+5. Once the restore finishes, locate the new server that is created to verify the data was restored as expected.
+
+The new server created by point-in-time restore has the same server admin login name and password that was valid for the existing server at the point-in-time chosen. You can change the password from the new server's **Overview** page.
+
+Additionally, after the restore operation finishes, two server parameters are reset to default values (and are not copied over from the primary server):
+* time_zone - This value is set to the DEFAULT value **SYSTEM**
+* event_scheduler - The event_scheduler is set to **OFF** on the restored server
+
+You will need to copy the values from the primary server and set them on the restored server by reconfiguring the [server parameters](how-to-server-parameters.md).
+
+The new server created during a restore does not have the VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server. Firewall rules from the original server are restored.
+
+## Geo restore
+If you configured your server for geographically redundant backups, a new server can be created from the backup of that existing server. This new server can be created in any region where Azure Database for MySQL is available.
+
+1. Select the **Create a resource** button (+) in the upper-left corner of the portal. Select **Databases** > **Azure Database for MySQL**.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/1-navigate-to-mysql.png" alt-text="Navigate to Azure Database for MySQL.":::
+
+2. Provide the subscription, resource group, and name of the new server.
+
+3. Select **Backup** as the **Data source**. This action loads a dropdown that provides a list of servers that have geo redundant backups enabled.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/3-geo-restore.png" alt-text="Select data source.":::
+
+ > [!NOTE]
+ > When a server is first created it may not be immediately available for geo restore. It may take a few hours for the necessary metadata to be populated.
+ >
+
+4. Select the **Backup** dropdown.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/4-geo-restore-backup.png" alt-text="Select backup dropdown.":::
+
+5. Select the source server to restore from.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/5-select-backup.png" alt-text="Select backup.":::
+
+6. The server will default to values for number of **vCores**, **Backup Retention Period**, **Backup Redundancy Option**, **Engine version**, and **Admin credentials**. Select **Continue**.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/6-accept-backup.png" alt-text="Continue with backup.":::
+
+7. Fill out the rest of the form with your preferences. You can select any **Location**.
+
+ After selecting the location, you can select **Configure server** to update the **Compute Generation** (if available in the region you have chosen), number of **vCores**, **Backup Retention Period**, and **Backup Redundancy Option**. Changing **Pricing Tier** (Basic, General Purpose, or Memory Optimized) or **Storage** size during restore is not supported.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/7-create.png" alt-text="Fill form.":::
+
+8. Select **Review + create** to review your selections.
+
+9. Select **Create** to provision the server. This operation may take a few minutes.
+
+The new server created by geo restore has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's **Overview** page.
+
+The new server created during a restore does not have the VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server. Firewall rules from the original server are restored.
+
+## Next steps
+- Learn more about the service's [backups](concepts-backup.md)
+- Learn about [replicas](concepts-read-replicas.md)
+- Learn more about [business continuity](concepts-business-continuity.md) options
mysql How To Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-powershell.md
+
+ Title: Backup and restore - Azure PowerShell - Azure Database for MySQL
+description: Learn how to backup and restore a server in Azure Database for MySQL by using Azure PowerShell.
+ms.devlang: azurepowershell
+Last updated: 4/28/2020
+# How to back up and restore an Azure Database for MySQL server using PowerShell
+Azure Database for MySQL servers are backed up periodically to enable restore features. Using this
+feature, you can restore the server and all its databases to an earlier point-in-time, on a new
+server.
+
+## Prerequisites
+
+To complete this how-to guide, you need:
+
+- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+ [Azure Cloud Shell](https://shell.azure.com/) in the browser
+- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)
+
+> [!IMPORTANT]
+> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az
+> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`.
+> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az
+> PowerShell module releases and available natively from within Azure Cloud Shell.
+
+If you choose to use PowerShell locally, connect to your Azure account using the
+[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet.
+## Set backup configuration
+
+At server creation, you make the choice between configuring your server for either locally redundant
+or geographically redundant backups.
+
+> [!NOTE]
+> After a server is created, the kind of redundancy it has, geographically redundant vs locally
+> redundant, can't be changed.
+
+While creating a server via the `New-AzMySqlServer` command, the **GeoRedundantBackup**
+parameter decides your backup redundancy option. If **Enabled**, geo-redundant backups are taken;
+if **Disabled**, locally redundant backups are taken.
+
+The backup retention period is set by the **BackupRetentionDay** parameter.
+
+For more information about setting these values during server creation, see
+[Create an Azure Database for MySQL server using PowerShell](quickstart-create-mysql-server-database-using-azure-powershell.md).
+
+The backup retention period of a server can be changed as follows:
+
+```azurepowershell-interactive
+Update-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -BackupRetentionDay 10
+```
+
+The preceding example changes the backup retention period of mydemoserver to 10 days.
+
+The backup retention period governs how far back a point-in-time restore can be retrieved, since
+it's based on available backups. Point-in-time restore is described further in the next section.
+
+## Server point-in-time restore
+
+You can restore the server to a previous point-in-time. The restored data is copied to a new server,
+and the existing server is left unchanged. For example, if a table is accidentally dropped, you can
+restore to the time just before the drop occurred. Then, you can retrieve the missing table and data
+from the restored copy of the server.
+
+To restore the server, use the `Restore-AzMySqlServer` PowerShell cmdlet.
+
+### Run the restore command
+
+To restore the server, run the following example from PowerShell.
+
+```azurepowershell-interactive
+$restorePointInTime = (Get-Date).AddMinutes(-10)
+Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
+ Restore-AzMySqlServer -Name mydemoserver-restored -ResourceGroupName myresourcegroup -RestorePointInTime $restorePointInTime -UsePointInTimeRestore
+```
+
+The **PointInTimeRestore** parameter set of the `Restore-AzMySqlServer` cmdlet requires the
+following parameters:
+
+| Setting | Suggested value | Description |
+| --- | --- | --- |
+| ResourceGroupName | myresourcegroup | The resource group where the source server exists. |
+| Name | mydemoserver-restored | The name of the new server that is created by the restore command. |
+| RestorePointInTime | 2020-03-13T13:59:00Z | Select a point in time to restore. This date and time must be within the source server's backup retention period. Use the ISO8601 date and time format. For example, you can use your own local time zone, such as **2020-03-13T05:59:00-08:00**. You can also use the UTC Zulu format, for example, **2020-03-13T13:59:00Z**. |
+| UsePointInTimeRestore | `<SwitchParameter>` | Use point-in-time mode to restore. |
+
+When you restore a server to an earlier point-in-time, a new server is created. The original server
+and its databases from the specified point-in-time are copied to the new server.
+
+The location and pricing tier values for the restored server remain the same as the original server.
+
+After the restore process finishes, locate the new server and verify that the data is restored as
+expected. The new server has the same server admin login name and password that was valid for the
+existing server at the time the restore was started. The password can be changed from the new
+server's **Overview** page.
+
+The new server created during a restore does not have the VNet service endpoints that existed on the
+original server. These rules must be set up separately for the new server. Firewall rules from the
+original server are restored.
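+
+For example, a hypothetical sketch that re-creates a VNet rule on the restored server (this assumes the Az.MySql module's `New-AzMySqlVirtualNetworkRule` cmdlet; the rule and network names are illustrative):
+
+```azurepowershell-interactive
+$subnetId = "/subscriptions/{subscriptionId}/resourceGroups/myresourcegroup/providers/Microsoft.Network/virtualNetworks/myvnet/subnets/mysubnet"
+New-AzMySqlVirtualNetworkRule -Name myvnetrule -ResourceGroupName myresourcegroup -ServerName mydemoserver-restored -SubnetId $subnetId
+```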
+
+## Geo restore
+
+If you configured your server for geographically redundant backups, a new server can be created from
+the backup of the existing server. This new server can be created in any region where Azure Database
+for MySQL is available.
+
+To create a server using a geo redundant backup, use the `Restore-AzMySqlServer` command with the
+**UseGeoRestore** parameter.
+
+> [!NOTE]
+> When a server is first created it may not be immediately available for geo restore. It may take a
+> few hours for the necessary metadata to be populated.
+
+To geo restore the server, run the following example from PowerShell:
+
+```azurepowershell-interactive
+Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
+ Restore-AzMySqlServer -Name mydemoserver-georestored -ResourceGroupName myresourcegroup -Location eastus -Sku GP_Gen5_8 -UseGeoRestore
+```
+
+This example creates a new server called **mydemoserver-georestored** in the East US region that
+belongs to **myresourcegroup**. It is a General Purpose, Gen 5 server with 8 vCores. The server is
+created from the geo-redundant backup of **mydemoserver**, also in the resource group
+**myresourcegroup**.
+
+To create the new server in a different resource group from the existing server, specify the new
+resource group name using the **ResourceGroupName** parameter as shown in the following example:
+
+```azurepowershell-interactive
+Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
+ Restore-AzMySqlServer -Name mydemoserver-georestored -ResourceGroupName newresourcegroup -Location eastus -Sku GP_Gen5_8 -UseGeoRestore
+```
+
+The **GeoRestore** parameter set of the `Restore-AzMySqlServer` cmdlet requires the following
+parameters:
+
+| Setting | Suggested value | Description  |
+| --- | --- | --- |
+|ResourceGroupName | myresourcegroup | The name of the resource group the new server belongs to.|
+|Name | mydemoserver-georestored | The name of the new server. |
+|Location | eastus | The location of the new server. |
+|UseGeoRestore | `<SwitchParameter>` | Use geo mode to restore. |
+
+A new server created by using geo restore inherits the same storage size and pricing tier as the
+source server unless the **Sku** parameter is specified.
+
+After the restore process finishes, locate the new server and verify that the data is restored as
+expected. The new server has the same server admin login name and password that was valid for the
+existing server at the time the restore was started. The password can be changed from the new
+server's **Overview** page.
+
+The new server created during a restore does not have the VNet service endpoints that existed on the
+original server. These rules must be set up separately for this new server. Firewall rules from the
+original server are restored.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to generate an Azure Database for MySQL connection string with PowerShell](how-to-connection-string-powershell.md)
mysql How To Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-server-parameters.md
+
+ Title: Configure server parameters - Azure portal - Azure Database for MySQL
+description: This article describes how to configure MySQL server parameters in Azure Database for MySQL using the Azure portal.
+Last updated: 10/1/2020
+# Configure server parameters in Azure Database for MySQL using the Azure portal
+Azure Database for MySQL supports configuration of some server parameters. This article describes how to configure these parameters by using the Azure portal. Not all server parameters can be adjusted.
+
+>[!Note]
+> Server parameters can be updated globally at the server level by using the [Azure CLI](./how-to-configure-server-parameters-using-cli.md), [PowerShell](./how-to-configure-server-parameters-using-powershell.md), or the [Azure portal](./how-to-server-parameters.md).
+
+## Configure server parameters
+
+1. Sign in to the [Azure portal](https://portal.azure.com), then locate your Azure Database for MySQL server.
+2. Under the **SETTINGS** section, click **Server parameters** to open the server parameters page for the Azure Database for MySQL server.
+3. Locate any settings you need to adjust. Review the **Description** column to understand the purpose and allowed values.
+4. Click **Save** to save your changes.
+5. If you have saved new values for the parameters, you can always revert everything back to the default values by selecting **Reset all to default**.
+
+## Setting parameters not listed
+
+If the server parameter you want to update is not listed in the Azure portal, you can optionally set the parameter at the connection level using `init_connect`. This sets the server parameters for each client connecting to the server.
+
+1. Under the **SETTINGS** section, click **Server parameters** to open the server parameters page for the Azure Database for MySQL server.
+2. Search for `init_connect`.
+3. Add the server parameters in the format `SET parameter_name=YOUR_DESIRED_VALUE` in the **Value** column.
+
+   For example, you can change the character set of your server by setting `init_connect` to `SET character_set_client=utf8;SET character_set_database=utf8mb4;SET character_set_connection=latin1;SET character_set_results=latin1;`
+4. Click **Save** to save your changes.
+
+>[!Note]
+> `init_connect` can be used to change parameters that don't require SUPER privileges at the session level. To verify whether you can set a parameter using `init_connect`, execute the `set session parameter_name=YOUR_DESIRED_VALUE;` command. If it fails with an **Access denied; you need SUPER privilege(s)** error, then you cannot set the parameter using `init_connect`.
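+
+As an illustrative sketch (the parameter choices here are examples, not an exhaustive list), the following session-level test shows one parameter that passes the check and one that fails:
+
+```sql
+-- Succeeds: sql_mode is session-settable, so it's safe to use in init_connect
+SET SESSION sql_mode = 'TRADITIONAL';
+
+-- Fails: require_secure_transport is a global-only parameter (other
+-- parameters may fail with the SUPER privilege error instead), so it
+-- can't be set through init_connect
+SET SESSION require_secure_transport = OFF;
+```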
+
+## Working with the time zone parameter
+
+### Populating the time zone tables
+
+The time zone tables on your server can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench.
+
+> [!NOTE]
+> If you are running the `mysql.az_load_timezone` command from MySQL Workbench, you may need to turn off safe update mode first using `SET SQL_SAFE_UPDATES=0;`.
+
+```sql
+CALL mysql.az_load_timezone();
+```
+
+> [!IMPORTANT]
+> You should restart the server to ensure the time zone tables are properly populated. To restart the server, use the [Azure portal](how-to-restart-server-portal.md) or [CLI](how-to-restart-server-cli.md).
+
+To view available time zone values, run the following command:
+
+```sql
+SELECT name FROM mysql.time_zone_name;
+```
+
+### Setting the global level time zone
+
+The global level time zone can be set from the **Server parameters** page in the Azure portal. For example, you can set the global **time_zone** parameter to the value **US/Pacific**.
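+
+After the change is saved, you can verify the setting from any client session. This is a hedged verification step using standard MySQL system variables:
+
+```sql
+-- Check the configured global time zone and the current session's time zone
+SELECT @@global.time_zone, @@session.time_zone;
+```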
+### Setting the session level time zone
+
+The session level time zone can be set by running the `SET time_zone` command from a tool like the MySQL command line or MySQL Workbench. The example below sets the time zone to the **US/Pacific** time zone.
+
+```sql
+SET time_zone = 'US/Pacific';
+```
+
+Refer to the MySQL documentation for [Date and Time Functions](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_convert-tz).
+
+## Next steps
+
+- [Connection libraries for Azure Database for MySQL](concepts-connection-libraries.md).
mysql How To Stop Start Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-stop-start-server.md
+
+ Title: Stop/start - Azure portal - Azure Database for MySQL server
+description: This article describes how to stop/start operations in Azure Database for MySQL.
+Last updated: 09/21/2020
+# Stop/Start an Azure Database for MySQL
+> [!IMPORTANT]
+> When you **Stop** the server, it remains stopped for up to 7 days. If you do not manually **Start** it during this time, the server is automatically started at the end of the 7 days. You can **Stop** it again if you are not using the server.
+
+This article provides a step-by-step procedure for stopping and starting a single server.
+
+## Prerequisites
+
+To complete this how-to guide, you need:
+
+- An Azure Database for MySQL single server.
+
+> [!NOTE]
+> Refer to the [limitations of the stop/start operation](concepts-servers.md#limitations-of-stopstart-operation).
+
+## How to stop/start the Azure Database for MySQL using Azure portal
+
+### Stop a running server
+
+1. In the [Azure portal](https://portal.azure.com/), choose your MySQL server that you want to stop.
+
+2. From the **Overview** page, click the **Stop** button in the toolbar.
+
+ :::image type="content" source="./media/how-to-stop-start-server/mysql-stop-server.png" alt-text="Azure Database for MySQL Stop server":::
+
+ > [!NOTE]
+ > Once the server is stopped, the other management operations are not available for the single server.
+
+### Start a stopped server
+
+1. In the [Azure portal](https://portal.azure.com/), choose your single server that you want to start.
+
+2. From the **Overview** page, click the **Start** button in the toolbar.
+
+ :::image type="content" source="./media/how-to-stop-start-server/mysql-start-server.png" alt-text="Azure Database for MySQL start server":::
+
+ > [!NOTE]
+ > Once the server is started, all management operations are now available for the single server.
+
+## How to stop/start the Azure Database for MySQL using CLI
+
+### Stop a running server
+
+To stop a running server, run the following command. Replace `<server-name>` and `<resource-group-name>` with the name of your server and its resource group:
+
+```azurecli-interactive
+az mysql server stop --name <server-name> -g <resource-group-name>
+```
+
+> [!NOTE]
+> Once the server is stopped, the other management operations are not available for the single server.
+
+### Start a stopped server
+
+To start a stopped server, run the following command. Replace `<server-name>` and `<resource-group-name>` with the name of your server and its resource group:
+
+```azurecli-interactive
+az mysql server start --name <server-name> -g <resource-group-name>
+```
+
+> [!NOTE]
+> Once the server is started, all management operations are again available for the single server.
+
+## Next steps
+Learn about [how to create alerts on metrics](how-to-alert-on-metric.md).
mysql How To Tls Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-tls-configurations.md
+
+ Title: TLS configuration - Azure portal - Azure Database for MySQL
+description: Learn how to set TLS configuration using Azure portal for your Azure Database for MySQL
+Last updated: 06/02/2020
+# Configuring TLS settings in Azure Database for MySQL using Azure portal
+This article describes how to configure an Azure Database for MySQL server to enforce a minimum TLS version for incoming connections and to deny all connections that use a lower TLS version, thereby enhancing network security.
+
+You can set the minimum TLS version for your database server. For example, setting the minimum TLS version to 1.0 allows clients to connect using TLS 1.0, 1.1, and 1.2. Alternatively, setting it to 1.2 allows only clients that connect using TLS 1.2 or later; all incoming connections that use TLS 1.0 or TLS 1.1 are rejected.
+
+## Prerequisites
+
+To complete this how-to guide, you need:
+
+* An [Azure Database for MySQL](quickstart-create-mysql-server-database-using-azure-portal.md)
+
+## Set TLS configurations for Azure Database for MySQL
+
+Follow these steps to set the minimum TLS version for your MySQL server:
+
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL server.
+
+1. On the MySQL server page, under **Settings**, click **Connection security** to open the connection security configuration page.
+
+1. In **Minimum TLS version**, select **1.2** to deny connections with TLS version less than TLS 1.2 for your MySQL server.
+
+ :::image type="content" source="./media/how-to-tls-configurations/setting-tls-value.png" alt-text="Azure Database for MySQL TLS configuration":::
+
+1. Click **Save** to save the changes.
+
+1. A notification confirms that the connection security setting was saved successfully and takes effect immediately. **No restart** of the server is required or performed. After the changes are saved, all new connections to the server are accepted only if the TLS version is greater than or equal to the minimum TLS version set on the portal.
+
+ :::image type="content" source="./media/how-to-tls-configurations/setting-tls-value-success.png" alt-text="Azure Database for MySQL TLS configuration success":::
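+
+To confirm which TLS version a client connection actually negotiated, you can check the `Ssl_version` status variable from that connection. This is a hedged verification step, not part of the portal procedure:
+
+```sql
+-- Show the TLS version negotiated by the current connection
+SHOW SESSION STATUS LIKE 'Ssl_version';
+```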
+
+## Next steps
+
+- Learn about [how to create alerts on metrics](how-to-alert-on-metric.md)
mysql How To Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-common-connection-issues.md
+
+ Title: Troubleshoot connection issues - Azure Database for MySQL
+description: Learn how to troubleshoot connection issues to Azure Database for MySQL, including transient errors requiring retries, firewall issues, and outages.
+keywords: mysql connection,connection string,connectivity issues,transient error,connection error
+Last updated: 3/18/2020
+# Troubleshoot connection issues to Azure Database for MySQL
+Connection problems may be caused by a variety of things, including:
+
+* Firewall settings
+* Connection time-out
+* Incorrect login information
+* Maximum limit reached on some Azure Database for MySQL resources
+* Issues with the infrastructure of the service
+* Maintenance being performed in the service
+* The compute allocation of the server is changed by scaling the number of vCores or moving to a different service tier
+
+Generally, connection issues to Azure Database for MySQL can be classified as follows:
+
+* Transient errors (short-lived or intermittent)
+* Persistent or non-transient errors (errors that regularly recur)
+
+## Troubleshoot transient errors
+
+Transient errors occur when maintenance is performed, the system encounters an error with the hardware or software, or you change the vCores or service tier of your server. The Azure Database for MySQL service has built-in high availability and is designed to mitigate these types of problems automatically. However, your application loses its connection to the server for a short period of time, typically less than 60 seconds. Some events can occasionally take longer to mitigate, such as when a large transaction causes a long-running recovery.
+
+### Steps to resolve transient connectivity issues
+
+1. Check the [Microsoft Azure Service Dashboard](https://azure.microsoft.com/status) for any known outages that occurred during the time in which the errors were reported by the application.
+2. Applications that connect to a cloud service such as Azure Database for MySQL should expect transient errors and implement retry logic to handle these errors instead of surfacing these as application errors to users. Review [Handling of transient connectivity errors for Azure Database for MySQL](concepts-connectivity.md) for best practices and design guidelines for handling transient errors.
+3. As a server approaches its resource limits, errors can seem like a transient connectivity issue. See [Limitations in Azure Database for MySQL](concepts-limits.md).
+4. If connectivity problems continue, or if the duration for which your application encounters the error exceeds 60 seconds or if you see multiple occurrences of the error in a given day, file an Azure support request by selecting **Get Support** on the [Azure Support](https://azure.microsoft.com/support/options) site.
+
+## Troubleshoot persistent errors
+
+If the application persistently fails to connect to Azure Database for MySQL, it usually indicates an issue with one of the following:
+
+* Server firewall configuration: Make sure that the Azure Database for MySQL server firewall is configured to allow connections from your client, including proxy servers and gateways.
+* Client firewall configuration: The firewall on your client must allow connections to your database server. The IP address and port of the server that you connect to must be allowed, as well as application names such as MySQL in some firewalls.
+* User error: You might have mistyped connection parameters, such as the server name in the connection string or a missing *\@servername* suffix in the user name.
+
+### Steps to resolve persistent connectivity issues
+
+1. Set up [firewall rules](how-to-manage-firewall-using-portal.md) to allow the client IP address. For temporary testing purposes only, set up a firewall rule using 0.0.0.0 as the starting IP address and using 255.255.255.255 as the ending IP address. This will open the server to all IP addresses. If this resolves your connectivity issue, remove this rule and create a firewall rule for an appropriately limited IP address or address range.
+2. On all firewalls between the client and the internet, make sure that port 3306 is open for outbound connections.
+3. Verify your connection string and other connection settings. Review [How to connect applications to Azure Database for MySQL](how-to-connection-string.md).
+4. Check the service health in the dashboard. If you think there's a regional outage, see [Overview of business continuity with Azure Database for MySQL](concepts-business-continuity.md) for steps to recover to a new region.
+
+## Next steps
+
+* [Handling of transient connectivity errors for Azure Database for MySQL](concepts-connectivity.md)
mysql How To Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-common-errors.md
+
+ Title: Troubleshoot common errors - Azure Database for MySQL
+description: Learn how to troubleshoot common migration errors encountered by users new to the Azure Database for MySQL service
+Last updated: 5/21/2021
+# Troubleshoot errors commonly encountered during or post migration to Azure Database for MySQL
+Azure Database for MySQL is a fully managed service powered by the community version of MySQL. The MySQL experience in a managed service environment may differ from running MySQL in your own environment. In this article, you'll see some of the common errors users may encounter while migrating to or developing on Azure Database for MySQL for the first time.
+
+## Common Connection Errors
+
+### ERROR 1184 (08S01): Aborted connection 22 to db: 'db-name' user: 'user' host: 'hostIP' (init_connect command failed)
+
+The above error occurs after successful sign-in but before any command is executed when a session is established. The message indicates that you have set an incorrect value for the `init_connect` server parameter, which is causing session initialization to fail.
+
+There are some server parameters like `require_secure_transport` that aren't supported at the session level, and so trying to change the values of these parameters using `init_connect` can result in Error 1184 while connecting to the MySQL server as shown below:
+
+```
+mysql> show databases;
+ERROR 2006 (HY000): MySQL server has gone away
+No connection. Trying to reconnect...
+Connection id: 64897
+Current database: *** NONE ***
+ERROR 1184 (08S01): Aborted connection 22 to db: 'db-name' user: 'user' host: 'hostIP' (init_connect command failed)
+```
+
+**Resolution**: Reset the `init_connect` value on the **Server parameters** tab in the Azure portal, and set only supported server parameters using the `init_connect` parameter.
+
+## Errors due to lack of SUPER privilege and DBA role
+
+The SUPER privilege and DBA role aren't supported on the service. As a result, you may encounter some common errors listed below:
+
+### ERROR 1419: You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)
+
+The above error may occur while creating a function or trigger (as shown below) or while importing a schema. DDL statements like CREATE FUNCTION or CREATE TRIGGER are written to the binary log, so the secondary replica can execute them. The replica SQL thread has full privileges, which can be exploited to elevate privileges. To guard against this danger for servers that have binary logging enabled, the MySQL engine requires that stored function creators have the SUPER privilege, in addition to the usual CREATE ROUTINE privilege.
+
+```sql
+CREATE FUNCTION f1(i INT)
+RETURNS INT
+DETERMINISTIC
+READS SQL DATA
+BEGIN
+ RETURN i;
+END;
+```
+
+**Resolution**: To resolve the error, set `log_bin_trust_function_creators` to 1 from the [server parameters](how-to-server-parameters.md) blade in the portal, and then execute the DDL statements or import the schema to create the desired objects. You can keep `log_bin_trust_function_creators` set to 1 for your server to avoid the error in the future. Our recommendation is to leave it set to 1, because the security risk highlighted in the [MySQL community documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) is minimal in Azure Database for MySQL, as the binary log isn't exposed to any threats.
+
+### ERROR 1227 (42000) at line 101: Access denied; you need (at least one of) the SUPER privilege(s) for this operation. Operation failed with exitcode 1
+
+The above error may occur while importing a dump file or creating procedure that contains [definers](https://dev.mysql.com/doc/refman/5.7/en/create-procedure.html).
+
+**Resolution**: To resolve this error, the admin user can grant privileges to create or execute procedures by running GRANT command as in the following examples:
+
+```sql
+GRANT CREATE ROUTINE ON mydb.* TO 'someuser'@'somehost';
+GRANT EXECUTE ON PROCEDURE mydb.myproc TO 'someuser'@'somehost';
+```
+
+Alternatively, you can replace the definers with the name of the admin user that is running the import process, as shown below.
+
+```sql
+DELIMITER;;
+/*!50003 CREATE*/ /*!50017 DEFINER=`root`@`127.0.0.1`*/ /*!50003
+DELIMITER;;
+
+/* Modified to */
+
+DELIMITER ;;
+/*!50003 CREATE*/ /*!50017 DEFINER=`AdminUserName`@`ServerName`*/ /*!50003
+DELIMITER ;
+```
+
+### ERROR 1227 (42000) at line 295: Access denied; you need (at least one of) the SUPER or SET_USER_ID privilege(s) for this operation
+
+The above error may occur while executing CREATE VIEW with DEFINER statements as part of importing a dump file or running a script. Azure Database for MySQL doesn't grant the SUPER or SET_USER_ID privilege to any user.
+
+**Resolution**:
+
+* Use the definer user to execute CREATE VIEW if possible. It's likely that there are many views with different definers having different permissions, so this may not be feasible. OR
+* Edit the dump file or CREATE VIEW script and remove the DEFINER= statement from the dump file. OR
+* Edit the dump file or CREATE VIEW script and replace the definer values with user with admin permissions who is performing the import or execute the script file.
+
+> [!Tip]
+> Use sed or perl to modify a dump file or SQL script to replace the DEFINER= statement
+
+### ERROR 1227 (42000) at line 18: Access denied; you need (at least one of) the SUPER privilege(s) for this operation
+
+The above error may occur if you're trying to import a dump file from a MySQL server with GTID enabled into the target Azure Database for MySQL server. mysqldump adds the `SET @@SESSION.sql_log_bin=0` statement to a dump file from a server where GTIDs are in use, which disables binary logging while the dump file is being reloaded.
+
+**Resolution**:
+To resolve this error while importing, remove or comment out the following lines in your mysqldump file, and then run the import again to ensure that it's successful.
+
+```sql
+SET @MYSQLDUMP_TEMP_LOG_BIN = @@SESSION.SQL_LOG_BIN;
+SET @@SESSION.SQL_LOG_BIN= 0;
+SET @@GLOBAL.GTID_PURGED='';
+SET @@SESSION.SQL_LOG_BIN = @MYSQLDUMP_TEMP_LOG_BIN;
+```
+
+## Common connection errors for server admin sign-in
+
+When an Azure Database for MySQL server is created, a server admin sign-in is provided by the end user during server creation. The server admin sign-in allows you to create new databases, add new users, and grant permissions. If the server admin sign-in is deleted, its permissions are revoked, or its password is changed, you may start to see connection errors in your application. The following are some of the common errors:
+
+### ERROR 1045 (28000): Access denied for user 'username'@'IP address' (using password: YES)
+
+The above error occurs if:
+
+* The username doesn't exist.
+* The user was deleted.
+* The user's password was changed or reset.
+
+**Resolution**:
+
+* Validate that "username" exists as a valid user in the server and wasn't accidentally deleted. You can execute the following query after signing in as the Azure Database for MySQL admin user:
+
+ ```sql
+ select user from mysql.user;
+ ```
+
+* If you can't sign in to MySQL to execute the above query, we recommend that you [reset the admin password using the Azure portal](how-to-create-manage-server-portal.md). The reset password option in the Azure portal recreates the user, resets the password, and restores the admin permissions, which allows you to sign in using the server admin and perform further operations.
+
+## Next steps
+
+If you didn't find the answer you're looking for, consider the following options:
+
+* Post your question on [Microsoft Q&A question page](/answers/topics/azure-database-mysql.html) or [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
+* Send an email to the Azure Database for MySQL Team [@Ask Azure DB for MySQL](mailto:AskAzureDBforMySQL@service.microsoft.com). This email address isn't a technical support alias.
+* Contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). To fix an issue with your account, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
+* To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/d365community/forum/47b1e71d-ee24-ec11-b6e6-000d3a4f0da0).
mysql How To Troubleshoot High Cpu Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-high-cpu-utilization.md
+
+ Title: Troubleshoot high CPU utilization in Azure Database for MySQL
+description: Learn how to troubleshoot high CPU utilization in Azure Database for MySQL.
+Last updated: 4/27/2022
+# Troubleshoot high CPU utilization in Azure Database for MySQL
+Azure Database for MySQL provides a range of metrics that you can use to identify resource bottlenecks and performance issues on the server. To determine whether your server is experiencing high CPU utilization, monitor metrics such as "Host CPU percent", "Total Connections", "Host Memory Percent", and "IO Percent". At times, viewing a combination of these metrics will provide insights into what might be causing the increased CPU utilization on your Azure Database for MySQL server.
+
+For example, consider a sudden surge in connections that initiates a surge of database queries, causing CPU utilization to shoot up.
+
+Besides capturing metrics, it's important to also trace the workload to understand if one or more queries are causing the spike in CPU utilization.
+
+## Capturing details of the current workload
+
+The SHOW (FULL) PROCESSLIST command displays a list of all user sessions currently connected to the Azure Database for MySQL server. It also provides details about the current state and activity of each session.
+This command only produces a snapshot of the current session status and doesn't provide information about historical session activity.
+
+Let's take a look at sample output from running this command.
+
+```
+mysql> SHOW FULL PROCESSLIST;
++-------+-----------------+-----------------+---------------+---------+------+-----------------------------+--------------------------------------------+
+| Id    | User            | Host            | db            | Command | Time | State                       | Info                                       |
++-------+-----------------+-----------------+---------------+---------+------+-----------------------------+--------------------------------------------+
+| 1     | event_scheduler | localhost       | NULL          | Daemon  | 13   | Waiting for next activation | NULL                                       |
+| 6     | azure_superuser | 127.0.0.1:33571 | NULL          | Sleep   | 115  |                             | NULL                                       |
+| 24835 | adminuser       | 10.1.1.4:39296  | classicmodels | Query   | 7    | Sending data                | select * from classicmodels.orderdetails;  |
+| 24837 | adminuser       | 10.1.1.4:38208  | NULL          | Query   | 0    | starting                    | SHOW FULL PROCESSLIST                      |
++-------+-----------------+-----------------+---------------+---------+------+-----------------------------+--------------------------------------------+
+5 rows in set (0.00 sec)
+```
+
+Notice that there are two sessions owned by the customer-created user "adminuser", both from the same IP address:
+
+* Session 24835 has been executing a SELECT statement for the last seven seconds.
+* Session 24837 is executing the "show full processlist" statement.
+
+When necessary, you may have to terminate a query, such as a reporting or HTAP query that has caused your production workload's CPU usage to spike. However, always consider the potential consequences of terminating a query before taking that action in an attempt to reduce CPU utilization. If you identify long-running queries that are leading to CPU spikes, tune these queries so that resources are used optimally.
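+
+As an illustration only, the following sketch terminates session 24835 from the earlier sample output, assuming your admin sign-in has permission to kill that session:
+
+```sql
+-- Terminate only the running statement; the session itself stays alive
+KILL QUERY 24835;
+
+-- Or terminate the entire session
+KILL 24835;
+```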
+
+## Detailed current workload analysis
+
+You need to use at least two sources of information to obtain accurate information about the status of a session, transaction, and query:
+
+* The server's process list from the INFORMATION_SCHEMA.PROCESSLIST table, which you can also access by running the SHOW [FULL] PROCESSLIST command.
+* InnoDB's transaction metadata from the INFORMATION_SCHEMA.INNODB_TRX table.
+
+With information from only one of these sources, it's impossible to describe the connection and transaction state. For example, the process list doesn't inform you whether there's an open transaction associated with any of the sessions. On the other hand, the transaction metadata doesn't show session state and time spent in that state.
+
+An example query that combines process list information with some of the important pieces of InnoDB transaction metadata is shown below:
+
+```
+mysql> select p.id as session_id, p.user, p.host, p.db, p.command, p.time, p.state, substring(p.info, 1, 50) as info, t.trx_started, unix_timestamp(now()) - unix_timestamp(t.trx_started) as trx_age_seconds, t.trx_rows_modified, t.trx_isolation_level from information_schema.processlist p left join information_schema.innodb_trx t on p.id = t.trx_mysql_thread_id \G
+```
+
+An example of the output from this query is shown below:
+
+```
+*************************** 1. row ***************************
+ session_id: 11
+ user: adminuser
+ host: 172.31.19.159:53624
+ db: NULL
+ command: Sleep
+ time: 636
+ state: cleaned up
+ info: NULL
+ trx_started: 2019-08-01 15:25:07
+ trx_age_seconds: 2908
+ trx_rows_modified: 17825792
+trx_isolation_level: REPEATABLE READ
+*************************** 2. row ***************************
+ session_id: 12
+ user: adminuser
+ host: 172.31.19.159:53622
+ db: NULL
+ command: Query
+ time: 15
+ state: executing
+ info: select * from classicmodels.orders
+ trx_started: NULL
+ trx_age_seconds: NULL
+ trx_rows_modified: NULL
+trx_isolation_level: NULL
+```
+
+An analysis of this information, by session, is listed in the following table.
+
+| **Area** | **Analysis** |
+|-|-|
+| Session 11 | This session is currently idle (sleeping) with no queries running, and it has been for 636 seconds. Within the session, a transaction that's been open for 2908 seconds has modified 17,825,792 rows, and it uses REPEATABLE READ isolation. |
+| Session 12 | The session is currently executing a SELECT statement, which has been running for 15 seconds. There's no open transaction within the session, as indicated by the NULL values for trx_started and trx_age_seconds. The session will continue to hold the garbage collection boundary as long as it runs unless it's using the more relaxed READ COMMITTED isolation. |
+
+Note that if a session is reported as idle, it's no longer executing any statements. At this point, the session has completed any prior work and is waiting for new statements from the client. However, idle sessions are still responsible for some CPU consumption and memory usage.
+
+## Understanding thread states
+
+Transactions that contribute to higher CPU utilization during execution can have threads in various states, as described in the following sections. Use this information to better understand the query lifecycle and various thread states.
+
+### Checking permissions/Opening tables
+
+This state usually means the open table operation is taking a long time. Usually, you can increase the table cache size to alleviate the issue. However, tables opening slowly can also be indicative of other issues, such as having too many tables under the same database.
+
+### Sending data
+
+While this state can mean that the thread is sending data through the network, it can also indicate that the query is reading data from the disk or memory. This state can be caused by a sequential table scan. You should check the values of the innodb_buffer_pool_reads and innodb_buffer_pool_read_requests to determine whether a large number of pages are being served from the disk into the memory. For more information, see [Troubleshoot low memory issues in Azure Database for MySQL](how-to-troubleshoot-low-memory-issues.md).
+
+### Updating
+
+This state usually means that the thread is performing a write operation. Check the IO-related metric in the Performance Monitor to get a better understanding on what the current sessions are doing.
+
+### Waiting for `<lock_type>` lock
+
+This state indicates that the thread is waiting for a second lock. In most cases, it may be a metadata lock. You should review all other threads and see which one is holding the lock.
+
+## Understanding and analyzing wait events
+
+It's important to understand the underlying wait events in the MySQL engine, because long waits or a large number of waits in a database can lead to increased CPU utilization. The following shows the appropriate command and sample output.
+
+```
+SELECT event_name AS wait_event,
+       count_star AS all_occurrences,
+       Concat(Round(sum_timer_wait / 1000000000000, 2), ' s') AS total_wait_time,
+       Concat(Round(avg_timer_wait / 1000000000, 2), ' ms') AS avg_wait_time
+FROM performance_schema.events_waits_summary_global_by_event_name
+WHERE count_star > 0 AND event_name <> 'idle'
+ORDER BY sum_timer_wait DESC LIMIT 10;
++--------------------------------------+-----------------+-----------------+---------------+
+| wait_event                           | all_occurrences | total_wait_time | avg_wait_time |
++--------------------------------------+-----------------+-----------------+---------------+
+| wait/io/file/sql/binlog              | 7090            | 255.54 s        | 36.04 ms      |
+| wait/io/file/innodb/innodb_log_file  | 17798           | 55.43 s         | 3.11 ms       |
+| wait/io/file/innodb/innodb_data_file | 260227          | 39.67 s         | 0.15 ms       |
+| wait/io/table/sql/handler            | 5548985         | 11.73 s         | 0.00 ms       |
+| wait/io/file/sql/FRM                 | 1237            | 7.61 s          | 6.15 ms       |
+| wait/io/file/sql/dbopt               | 28              | 1.89 s          | 67.38 ms      |
+| wait/io/file/myisam/kfile            | 92              | 0.76 s          | 8.30 ms       |
+| wait/io/file/myisam/dfile            | 271             | 0.53 s          | 1.95 ms       |
+| wait/io/file/sql/file_parser         | 18              | 0.32 s          | 17.75 ms      |
+| wait/io/file/sql/slow_log            | 2               | 0.05 s          | 25.79 ms      |
++--------------------------------------+-----------------+-----------------+---------------+
+10 rows in set (0.00 sec)
+```
+
+## Restrict SELECT statement execution time
+
+If you don't know the execution cost and execution time of database operations involving SELECT queries, any long-running SELECT can lead to unpredictability or volatility on the database server. The size of statements and transactions, and the associated resource utilization, continue to grow with the underlying data set. Because of this unbounded growth, end-user statements and transactions take longer and longer, consuming increasingly more resources until they overwhelm the database server. When using unbounded SELECT queries, it's recommended to configure the max_execution_time parameter so that any queries exceeding this duration are aborted.
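+
+For example, a hedged way to apply such a bound, either for the whole session or for a single statement via an optimizer hint (values are in milliseconds; the table name is taken from the earlier example):
+
+```sql
+-- Abort any SELECT in this session that runs longer than 30 seconds
+SET SESSION max_execution_time = 30000;
+
+-- Or bound a single statement with an optimizer hint
+SELECT /*+ MAX_EXECUTION_TIME(30000) */ * FROM classicmodels.orderdetails;
+```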
+
+## Recommendations
+
+* Ensure that your database has enough resources allocated to run your queries. At times, you may need to scale up the instance size to get more CPU cores to accommodate your workload.
+* Avoid large or long-running transactions by breaking them into smaller transactions.
+* Run SELECT statements on read replica servers when possible.
+* Use alerts on "Host CPU Percent" so that you get notifications if the system exceeds any of the specified thresholds.
+* Use Query Performance Insights or Azure Workbooks to identify any problematic or slowly running queries, and then optimize them.
+* For production database servers, collect diagnostics at regular intervals to ensure that everything is running smoothly. If not, troubleshoot and resolve any issues that you identify.
+
+## Next steps
+
+To find peer answers to your most important questions or to post or answer a question, visit [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
mysql How To Troubleshoot Low Memory Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-low-memory-issues.md
+
+ Title: Troubleshoot low memory issues in Azure Database for MySQL
+description: Learn how to troubleshoot low memory issues in Azure Database for MySQL.
+Last updated: 4/22/2022
+# Troubleshoot low memory issues in Azure Database for MySQL
+To help ensure that a MySQL database server performs optimally, it's very important to have the appropriate memory allocation and utilization. By default, when you create an instance of Azure Database for MySQL, the available physical memory is dependent on the tier and size you select for your workload. In addition, memory is allocated for buffers and caches to improve database operations. For more information, see [How MySQL Uses Memory](https://dev.mysql.com/doc/refman/5.7/en/memory-use.html).
+
+Note that the Azure Database for MySQL service consumes memory to achieve as much cache hit as possible. As a result, memory utilization can often hover between 80 and 90% of the available physical memory of an instance. Unless there's an issue with the progress of the query workload, this isn't a concern. However, you may run into out-of-memory issues for reasons such as that you have:
+
+* Configured buffers that are too large.
+* Suboptimal queries running.
+* Queries performing joins and sorting large data sets.
+* Set the maximum number of connections on the database server too high.
+
+A majority of a server's memory is used by InnoDB's global buffers and caches, which include components such as **innodb_buffer_pool_size**, **innodb_log_buffer_size**, **key_buffer_size**, and **query_cache_size**.
+
+The value of the **innodb_buffer_pool_size** parameter specifies the area of memory in which InnoDB caches the database tables and index-related data. MySQL tries to accommodate as much table and index-related data in the buffer pool as possible. A larger buffer pool requires fewer I/O operations being diverted to the disk.
+
+## Monitoring memory usage
+
+Azure Database for MySQL provides a range of metrics to gauge the performance of your database instance. To better understand the memory utilization for your database server, view the **Host Memory Percent** or **Memory Percent** metrics.
+
+![Viewing memory utilization metrics](media/how-to-troubleshoot-low-memory-issues/average-host-memory-percentage.png)
+
+If you notice that memory utilization has suddenly increased and that available memory is dropping quickly, monitor other metrics, such as **Host CPU Percent**, **Total Connections**, and **IO Percent**, to determine if a sudden spike in the workload is the source of the issue.
+
+It's important to note that each connection established with the database server requires the allocation of some amount of memory. As a result, a surge in database connections can cause memory shortages.
+
+## Causes of high memory utilization
+
+Let's look at some more causes of high memory utilization in MySQL. These causes are dependent on the characteristics of the workload.
+
+### An increase in temporary tables
+
+MySQL uses "temporary tables", which are a special type of table designed to store a temporary result set. Temporary tables can be reused several times during a session. Since any temporary tables created are local to a session, different sessions can have different temporary tables. In production systems with many sessions performing compilations of large temporary result sets, you should regularly check the global status counter created_tmp_tables, which tracks the number of temporary tables being created during peak hours. A large number of in-memory temporary tables can quickly lead to low available memory in an instance of Azure Database for MySQL.
+
+With MySQL, temporary table size is determined by the values of two parameters, as described in the following table.
+
+| **Parameter** | **Description** |
+|-|-|
+| tmp_table_size | Specifies the maximum size of internal, in-memory temporary tables. |
+| max_heap_table_size | Specifies the maximum size to which user created MEMORY tables can grow. |
+
+> [!NOTE]
+> When determining the maximum size of an internal, in-memory temporary table, MySQL considers the lower of the values set for the tmp_table_size and max_heap_table_size parameters.
+>
+
+#### Recommendations
+
+To troubleshoot low memory issues related to temporary tables, consider the following recommendations.
+
+* Before increasing the tmp_table_size value, verify that your database is indexed properly, especially for columns involved in joins and grouped by operations. Using the appropriate indexes on underlying tables limits the number of temporary tables that are created. Increasing the value of this parameter and the max_heap_table_size parameter without verifying your indexes can allow inefficient queries to run without indexes and create more temp tables than are necessary.
+* Tune the values of the max_heap_table_size and tmp_table_size parameters to address the needs of your workload.
+* If the values you set for the max_heap_table_size and tmp_table_size parameters are too low, temporary tables may regularly spill to storage, adding latency to your queries. You can track temporary tables spilling to disk using the global status counter created_tmp_disk_tables. By comparing the values of the created_tmp_disk_tables and created_tmp_tables counters, you can see how many internal temporary tables were created on disk relative to the total number of internal temporary tables created, as the sketch after this list shows.
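+
+A minimal check, using the standard status counters named above:
+
+```sql
+-- Compare in-memory temporary tables with those that spilled to disk
+SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
+```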
+
+### Table cache
+
+As a multi-threaded system, MySQL maintains a cache of table file descriptors so that the tables can be concurrently opened independently by multiple sessions. MySQL uses some amount of memory and OS file descriptors to maintain this table cache. The variable table_open_cache defines the size of the table cache.
+
+#### Recommendations
+
+To troubleshoot low memory issues related to the table cache, consider the following recommendations.
+
+* The parameter table_open_cache specifies the number of open tables for all threads. Increasing this value increases the number of file descriptors that mysqld requires. You can check whether you need to increase the table cache by checking the opened_tables status variable in the show global status counter (see the sketch after this list). Increase the value of this parameter in increments to accommodate your workload.
+* Setting table_open_cache too low may cause MySQL to spend more time opening and closing tables needed for query processing.
+* Setting this value too high may cause higher memory usage and the operating system to run out of file descriptors, leading to refused connections or failures to process queries.
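+
+A minimal sketch of that check, using the counters named above:
+
+```sql
+-- Compare how many tables have been opened since startup
+-- with the configured size of the table cache
+SHOW GLOBAL STATUS LIKE 'Opened_tables';
+SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
+```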
+
+### Other buffers and the query cache
+
+When troubleshooting issues related to low memory, you can work with a few more buffers and a cache to help with the resolution.
+
+#### Net buffer (net_buffer_length)
+
+The net buffer sets the initial size of the connection and thread buffers for each client thread, and it can grow to the value specified by max_allowed_packet. If query statements are large, for example if inserts or updates carry very large values, then increasing the value of the net_buffer_length parameter will help to improve performance.
+
+#### Join buffer (join_buffer_size)
+
+The join buffer is allocated to cache table rows when a join can't use an index. If your database has many joins performed without indexes, consider adding indexes for faster joins. If you can't add indexes, then consider increasing the value of the join_buffer_size parameter, which specifies the amount of memory allocated per connection.
+
+#### Sort buffer (sort_buffer_size)
+
+The sort buffer is used for performing sorts for some ORDER BY and GROUP BY queries. If you see many Sort_merge_passes per second in the SHOW GLOBAL STATUS output, consider increasing the sort_buffer_size value to speed up ORDER BY or GROUP BY operations that can't be improved using query optimization or better indexing.
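+
+A minimal check for this counter:
+
+```sql
+-- A steadily increasing value suggests sorts are requiring on-disk merge passes
+SHOW GLOBAL STATUS LIKE 'Sort_merge_passes';
+```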
+
+Avoid arbitrarily increasing the sort_buffer_size value unless you have related information that indicates otherwise. Memory for this buffer is assigned per connection. In the MySQL documentation, the Server System Variables article calls out that on Linux, there are two thresholds, 256 KB and 2 MB, and that using larger values can significantly slow down memory allocation. As a result, avoid increasing the sort_buffer_size value beyond 2 MB, as the performance penalty will outweigh any benefits.
+
+#### Query cache (query_cache_size)
+
+The query cache is an area of memory that is used for caching query result sets. The query_cache_size parameter determines the amount of memory that is allocated for caching query results. By default, the query cache is disabled. In addition, the query cache is deprecated in MySQL version 5.7.20 and removed in MySQL version 8.0. If the query cache is currently enabled in your solution, before disabling it, verify that there aren't any queries relying on it.
+
+### Calculating buffer cache hit ratio
+
+The buffer cache hit ratio is important in a MySQL environment for understanding whether the buffer pool can accommodate workload requests. As a general rule of thumb, it's good practice to keep the buffer pool cache hit ratio above 99%.
+
+To compute the InnoDB buffer pool hit ratio for read requests, you can run SHOW GLOBAL STATUS to retrieve the counters "Innodb_buffer_pool_read_requests" and "Innodb_buffer_pool_reads", and then compute the value using the formula shown below.
+
+```
+InnoDB Buffer pool hit ratio = Innodb_buffer_pool_read_requests / (Innodb_buffer_pool_read_requests + Innodb_buffer_pool_reads) * 100
+```
+
+Consider the following example.
+
+```
+mysql> show global status like "innodb_buffer_pool_reads";
++--------------------------+-------+
+| Variable_name            | Value |
++--------------------------+-------+
+| Innodb_buffer_pool_reads | 197   |
++--------------------------+-------+
+1 row in set (0.00 sec)
+
+mysql> show global status like "innodb_buffer_pool_read_requests";
++----------------------------------+----------+
+| Variable_name                    | Value    |
++----------------------------------+----------+
+| Innodb_buffer_pool_read_requests | 22479167 |
++----------------------------------+----------+
+1 row in set (0.00 sec)
+```
+
+Using the above values, computing the InnoDB buffer pool hit ratio for read requests yields the following result:
+
+```
+InnoDB Buffer pool hit ratio = 22479167/(22479167+197) * 100
+
+Buffer hit ratio = 99.99%
+```
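+
+Assuming performance_schema is enabled on your server, a hedged single-query version of the same computation is:
+
+```sql
+-- Compute the buffer pool hit ratio directly from performance_schema counters
+SELECT ROUND(100 * rr.v / (rr.v + r.v), 2) AS buffer_pool_hit_ratio
+FROM (SELECT variable_value + 0 AS v
+        FROM performance_schema.global_status
+       WHERE variable_name = 'Innodb_buffer_pool_read_requests') rr,
+     (SELECT variable_value + 0 AS v
+        FROM performance_schema.global_status
+       WHERE variable_name = 'Innodb_buffer_pool_reads') r;
+```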
+
+The buffer cache hit ratio above covers read requests. For DML statements, writes to the InnoDB buffer pool happen in the background. However, if it's necessary to read or create a page and no clean pages are available, it's also necessary to wait for pages to be flushed first.
+
+The Innodb_buffer_pool_wait_free counter counts how many times this has happened. A value of Innodb_buffer_pool_wait_free greater than 0 is a strong indicator that the InnoDB buffer pool is too small, and that an increase in buffer pool size or instance size is required to accommodate the writes coming into the database.
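+
+A minimal check for this counter:
+
+```sql
+-- A non-zero, growing value indicates the buffer pool is too small for the write workload
+SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_wait_free';
+```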
+
+## Recommendations
+
+* Ensure that your database has enough resources allocated to run your queries. At times, you may need to scale up the instance size to get more physical memory so that the buffers and caches can accommodate your workload.
+* Avoid large or long-running transactions by breaking them into smaller transactions.
+* Use alerts on "Host Memory Percent" so that you get notifications if the system exceeds any of the specified thresholds.
+* Use Query Performance Insights or Azure Workbooks to identify any problematic or slowly running queries, and then optimize them.
+* For production database servers, collect diagnostics at regular intervals to ensure that everything is running smoothly. If not, troubleshoot and resolve any issues that you identify.
+
+## Next steps
+
+To find peer answers to your most important questions or to post or answer a question, visit [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
mysql How To Troubleshoot Query Performance New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-query-performance-new.md
+
+ Title: Troubleshoot query performance in Azure Database for MySQL
+description: Learn how to troubleshoot query performance in Azure Database for MySQL.
+Last updated: 4/22/2022
+# Troubleshoot query performance in Azure Database for MySQL
+Query performance can be impacted by multiple factors, so it's first important to look at the scope of the symptoms you're experiencing in your Azure Database for MySQL server. For example, is query performance slow for:
+
+* All queries running on the Azure Database for MySQL server?
+* A specific set of queries?
+* A specific query?
+
+Also keep in mind that any recent changes to the structure or underlying data of the tables you're querying can affect performance.
+
+## Enabling logging functionality
+
+Before analyzing individual queries, you need to define query benchmarks. With this information, you can implement logging functionality on the database server to trace queries that exceed a threshold you specify based on the needs of the application.
+
+With Azure Database for MySQL, it's recommended to use the slow query log feature to identify queries that take longer than *N* seconds to run. After you've identified the queries from the slow query log, you can use MySQL diagnostics to troubleshoot these queries.
+
+Before you can begin to trace long-running queries, you need to enable the `slow_query_log` parameter by using the Azure portal or Azure CLI. With this parameter enabled, you should also configure the value of the `long_query_time` parameter to specify the number of seconds that queries can run before being identified as "slow running" queries. The default value of the parameter is 10 seconds, but you can adjust the value to address the needs of your application's SLA.
+
+[ ![Flexible Server slow query log interface.](media/how-to-troubleshoot-query-performance-new/slow-query-log.png) ](media/how-to-troubleshoot-query-performance-new/slow-query-log.png#lightbox)
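+
+You can confirm the current values of both parameters from any client session. A minimal check, using the parameter names above:
+
+```sql
+-- Verify that the slow query log is enabled and check the current threshold
+SHOW GLOBAL VARIABLES WHERE Variable_name IN ('slow_query_log', 'long_query_time');
+```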
+
+While the slow query log is a great tool for tracing long running queries, there are certain scenarios in which it might not be effective. For example, the slow query log:
+
+* Negatively impacts performance if the number of queries is very high or if the query statement is very large. Adjust the value of the `long_query_time` parameter accordingly.
+* May not be helpful if you've also enabled the `log_queries_not_using_index` parameter, which specifies to log queries expected to retrieve all rows. Queries performing a full index scan take advantage of an index, but they'd be logged because the index doesn't limit the number of rows returned.
+
+## Retrieving information from the logs
+
+Logs are available for up to seven days from their creation. You can list and download slow query logs via the Azure portal or Azure CLI. In the Azure portal, navigate to your server, under **Monitoring**, select **Server logs**, and then select the downward arrow next to an entry to download the logs associated with the date and time you're investigating.
+
+[ ![Flexible Server retrieving data from the logs.](media/how-to-troubleshoot-query-performance-new/retrieving-information-logs.png) ](media/how-to-troubleshoot-query-performance-new/retrieving-information-logs.png#lightbox)
+
+In addition, if your slow query logs are integrated with Azure Monitor logs through Diagnostic logs, you can run queries in an editor to analyze them further:
+
+```kusto
+AzureDiagnostics
+| where Resource == '<your server name>'
+| where Category == 'MySqlSlowLogs'
+| project TimeGenerated, Resource , event_class_s, start_time_t , query_time_d, sql_text_s
+| where query_time_d > 10
+```
+
+> [!NOTE]
+> For more examples to get you started with diagnosing slow query logs via Diagnostic logs, see [Analyze logs in Azure Monitor Logs](./concepts-server-logs.md#analyze-logs-in-azure-monitor-logs).
+>
+
+The following snapshot depicts a sample slow query.
+
+```
+# Time: 2021-11-13T10:07:52.610719Z
+# User@Host: root[root] @ [172.30.209.6] Id: 735026
+# Query_time: 25.314811 Lock_time: 0.000000 Rows_sent: 126 Rows_examined: 443308
+use employees;
+SET timestamp=1596448847;
+select * from titles where DATE(from_date) > DATE('1994-04-05') AND title like '%senior%';;
+```
+
+Notice that the query ran for about 25 seconds, examined over 443,000 rows, and returned 126 rows of results.
+
+Usually, you should focus on queries with high values for Query_time and Rows_examined. However, if you notice queries with a high Query_time but only a few Rows_examined, this often indicates the presence of a resource bottleneck. For these cases, you should check whether there's any I/O throttling or high CPU usage.
+
+## Profiling a query
+
+After you've identified a specific slow running query, you can use the EXPLAIN command and profiling to gather additional detail.
+
+To check the query plan, run the following command:
+
+```
+EXPLAIN <QUERY>
+```
+
+> [!NOTE]
+> For more information about using EXPLAIN statements, see [How to use EXPLAIN to profile query performance in Azure Database for MySQL](./how-to-troubleshoot-query-performance.md).
+>
+
+In addition to creating an EXPLAIN plan for a query, you can use the SHOW PROFILE command, which allows you to diagnose the execution of statements that have been run within the current session.
+
+To enable profiling and profile a specific query in a session, run the following set of commands:
+
+```
+SET profiling = 1;
+<QUERY>;
+SHOW PROFILES;
+SHOW PROFILE FOR QUERY <X>;
+```
+
+> [!NOTE]
+> Profiling individual queries is only available in a session and historical statements cannot be profiled.
+>
+
+Let's take a closer look at using these commands to profile a query. First, to enable profiling for the current session, run the `SET PROFILING = 1` command:
+
+```
+mysql> SET PROFILING = 1;
+Query OK, 0 rows affected, 1 warning (0.00 sec)
+```
+
+Next, execute a suboptimal query that performs a full table scan:
+
+```
+mysql> select * from sbtest8 where c like '%99098187165%';
++----+---------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+
+| id | k       | c                                                                                                                        | pad                                                         |
++----+---------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+
+| 10 | 5035785 | 81674956652-89815953173-84507133182-62502329576-99098187165-62672357237-37910808188-52047270287-89115790749-78840418590 | 91637025586-81807791530-84338237594-90990131533-07427691758 |
++----+---------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+
+1 row in set (27.60 sec)
+```
+
+Then, display a list of all available query profiles by running the `SHOW PROFILES` command:
+
+```
+mysql> SHOW PROFILES;
++----------+-------------+-----------------------------------------------------+
+| Query_ID | Duration    | Query                                               |
++----------+-------------+-----------------------------------------------------+
+|        1 | 27.59450000 | select * from sbtest8 where c like '%99098187165%'  |
++----------+-------------+-----------------------------------------------------+
+1 row in set, 1 warning (0.00 sec)
+```
+
+Finally, to display the profile for query 1, run the `SHOW PROFILE FOR QUERY 1` command.
+
+```
+mysql> SHOW PROFILE FOR QUERY 1;
++----------------------+-----------+
+| Status               | Duration  |
++----------------------+-----------+
+| starting             |  0.000102 |
+| checking permissions |  0.000028 |
+| Opening tables       |  0.000033 |
+| init                 |  0.000035 |
+| System lock          |  0.000018 |
+| optimizing           |  0.000017 |
+| statistics           |  0.000025 |
+| preparing            |  0.000019 |
+| executing            |  0.000011 |
+| Sending data         | 27.594038 |
+| end                  |  0.000041 |
+| query end            |  0.000014 |
+| closing tables       |  0.000013 |
+| freeing items        |  0.000088 |
+| cleaning up          |  0.000020 |
++----------------------+-----------+
+15 rows in set, 1 warning (0.00 sec)
+```
+
+## Listing the most used queries on the database server
+
+Whenever you're troubleshooting query performance, it's helpful to understand which queries are most often run on your MySQL server. You can use this information to gauge if any of the top queries are taking longer than usual to run. In addition, a developer or DBA could use this information to identify if any query has a sudden increase in query execution count and duration.
+
+To list the top 10 most executed queries against your Azure Database for MySQL server, run the following query:
+
+```
+SELECT digest_text AS normalized_query,
+ count_star AS all_occurrences,
+ Concat(Round(sum_timer_wait / 1000000000000, 3), ' s') AS total_time,
+ Concat(Round(min_timer_wait / 1000000000000, 3), ' s') AS min_time,
+ Concat(Round(max_timer_wait / 1000000000000, 3), ' s') AS max_time,
+ Concat(Round(avg_timer_wait / 1000000000000, 3), ' s') AS avg_time,
+ Concat(Round(sum_lock_time / 1000000000000, 3), ' s') AS total_locktime,
+ sum_rows_affected AS sum_rows_changed,
+ sum_rows_sent AS sum_rows_selected,
+ sum_rows_examined AS sum_rows_scanned,
+ sum_created_tmp_tables,
+ sum_select_scan,
+ sum_no_index_used,
+ sum_no_good_index_used
+FROM performance_schema.events_statements_summary_by_digest
+ORDER BY sum_timer_wait DESC LIMIT 10;
+```
+
+> [!NOTE]
+> Use this query to benchmark the top executed queries in your database server and determine if there's been a change in the top queries or if any existing queries in the initial benchmark have increased in run duration.
+>
+
+## Monitoring InnoDB garbage collection
+
+When InnoDB garbage collection is blocked or delayed, the database can develop a substantial purge lag that can negatively affect storage utilization and query performance.
+
+The InnoDB rollback segment history list length (HLL) measures the number of change records stored in the undo log. A growing HLL value indicates that InnoDB's garbage collection threads (purge threads) aren't keeping up with the write workload or that purging is blocked by a long running query or transaction.
+
+Excessive delays in garbage collection can have severe, negative consequences:
+
+* The InnoDB system tablespace will expand, thus accelerating the growth of the underlying storage volume. At times, the system tablespace can swell by several terabytes as a result of a blocked purge.
+* Delete-marked records won't be removed in a timely fashion. This can cause InnoDB tablespaces to grow and prevents the engine from reusing the storage occupied by these records.
+* The performance of all queries might degrade, and CPU utilization might increase because of the growth of InnoDB storage structures.
+
+As a result, it's important to monitor HLL values, patterns, and trends.
+
+### Finding HLL values
+
+You can find the HLL value by running the `SHOW ENGINE INNODB STATUS` command. The value is listed in the output, under the TRANSACTIONS heading:
+
+```
+mysql> show engine innodb status\G
+*************************** 1. row ***************************
+
+(...)
+
+------------
+TRANSACTIONS
+------------
+Trx id counter 52685768
+Purge done for trx's n:o < 52680802 undo n:o < 0 state: running but idle
+History list length 2964300
+
+(...)
+```
+
+You can also determine the HLL value by querying the information_schema.innodb_metrics table:
+
+```
+mysql> select count from information_schema.innodb_metrics
+ -> where name = 'trx_rseg_history_len';
++---------+
+| count   |
++---------+
+| 2964300 |
++---------+
+1 row in set (0.00 sec)
+```
+
+### Interpreting HLL values
+
+When interpreting HLL values, consider the guidelines listed in the following table:
+
+| **Value** | **Notes** |
+|||
+| Less than ~10,000 | Normal values, indicating that garbage collection isn't falling behind. |
+| Between ~10,000 and ~1,000,000 | These values indicate a minor lag in garbage collection. Such values may be acceptable if they remain steady and don't increase. |
+| Greater than ~1,000,000 | These values should be investigated and may require corrective action. |
+
+### Addressing excessive HLL values
+
+If the HLL shows large spikes or exhibits a pattern of periodic growth, immediately investigate the queries and transactions running on your Azure Database for MySQL instance. Then you can resolve any workload issues that might be preventing the progress of the garbage collection process. While the database isn't expected to be entirely free of purge lag, you must not let the lag grow uncontrollably.
+
+To obtain transaction information from the `information_schema.innodb_trx` table, for example, run the following command:
+
+```
+select * from information_schema.innodb_trx
+order by trx_started asc\G
+```
+
+The detail in the `trx_started` column will help you calculate the transaction age.
+
+```
+mysql> select * from information_schema.innodb_trx
+ -> order by trx_started asc\G
+*************************** 1. row ***************************
+ trx_id: 8150550
+ trx_state: RUNNING
+ trx_started: 2021-11-13 20:50:11
+ trx_requested_lock_id: NULL
+ trx_wait_started: NULL
+ trx_weight: 0
+ trx_mysql_thread_id: 19
+ trx_query: select * from employees where DATE(hire_date) > DATE('1998-04-05') AND first_name like '%geo%';
+(…)
+```
+
+For information about current database sessions, including the time spent in the session's current state, check the `information_schema.processlist` table. The following output, for example, shows a session that's been actively executing a query for the last 1462 seconds:
+
+```
+mysql> select user, host, db, command, time, info
+ -> from information_schema.processlist
+ -> order by time desc\G
+*************************** 1. row ***************************
+ user: test
+ host: 172.31.19.159:38004
+ db: employees
+command: Query
+ time: 1462
+ info: select * from employees where DATE(hire_date) > DATE('1998-04-05') AND first_name like '%geo%';
+
+(...)
+```
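+
+If a long-running session is blocking the purge and can be safely interrupted, you can end it with the `KILL` statement and the session's process ID (the `id` column in `information_schema.processlist`, which also appears as `trx_mysql_thread_id` in the `innodb_trx` output above). The example below uses a hypothetical ID; be aware that rolling back a large interrupted transaction can itself take considerable time:
+
+```sql
+-- Hypothetical process ID; look up the real one in processlist first.
+KILL 19;
+```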
+
+## Recommendations
+
+* Ensure that your database has enough resources allocated to run your queries. At times, you may need to scale up the instance size to get more CPU cores and additional memory to accommodate your workload.
+* Avoid large or long-running transactions by breaking them into smaller transactions.
+* Configure `innodb_purge_threads` according to your workload to improve the efficiency of background purge operations.
+ > [!NOTE]
+ > Test any changes to this server variable for each environment to gauge the change in engine behavior.
+ >
+
+* Use alerts on "Host CPU Percent", "Host Memory Percent", and "Total Connections" so that you get notifications if the system exceeds any of the specified thresholds.
+* Use Query Performance Insights or Azure Workbooks to identify any problematic or slowly running queries, and then optimize them.
+* For production database servers, collect diagnostics at regular intervals to ensure that everything is running smoothly. If not, troubleshoot and resolve any issues that you identify.
+
+## Next steps
+
+To find peer answers to your most important questions or to post or answer a question, visit [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
mysql How To Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-query-performance.md
+
+ Title: Profile query performance - Azure Database for MySQL
+description: Learn how to profile query performance in Azure Database for MySQL by using EXPLAIN.
+++++ Last updated : 3/30/2022++
+# Profile query performance in Azure Database for MySQL using EXPLAIN
++
+**EXPLAIN** is a handy tool that can help you optimize queries. You can use an EXPLAIN statement to get information about how MySQL runs a SQL statement. The following example shows the output of an EXPLAIN statement.
+
+```sql
+mysql> EXPLAIN SELECT * FROM tb1 WHERE id=100\G
+*************************** 1. row ***************************
+ id: 1
+ select_type: SIMPLE
+ table: tb1
+ partitions: NULL
+ type: ALL
+possible_keys: NULL
+ key: NULL
+ key_len: NULL
+ ref: NULL
+ rows: 995789
+ filtered: 10.00
+ Extra: Using where
+```
+
+In this example, the value of *key* is NULL, which means that MySQL can't locate any indexes optimized for the query. As a result, it performs a full table scan. Let's optimize this query by adding an index on the **ID** column, and then run the EXPLAIN statement again.
+
+```sql
+mysql> ALTER TABLE tb1 ADD KEY (id);
+mysql> EXPLAIN SELECT * FROM tb1 WHERE id=100\G
+*************************** 1. row ***************************
+ id: 1
+ select_type: SIMPLE
+ table: tb1
+ partitions: NULL
+ type: ref
+possible_keys: id
+ key: id
+ key_len: 4
+ ref: const
+ rows: 1
+ filtered: 100.00
+ Extra: NULL
+```
+
+Now, the output shows that MySQL uses an index to limit the number of rows to 1, which dramatically shortens the search time.
+
+## Covering index
+
+A covering index includes all the columns used in a query, which reduces value retrieval from data tables. The following **GROUP BY** statement and related output illustrate this.
+
+```sql
+mysql> EXPLAIN SELECT MAX(c1), c2 FROM tb1 WHERE c2 LIKE '%100' GROUP BY c1\G
+*************************** 1. row ***************************
+ id: 1
+ select_type: SIMPLE
+ table: tb1
+ partitions: NULL
+ type: ALL
+possible_keys: NULL
+ key: NULL
+ key_len: NULL
+ ref: NULL
+ rows: 995789
+ filtered: 11.11
+ Extra: Using where; Using temporary; Using filesort
+```
+
+The output shows that MySQL doesn't use any indexes, because proper indexes are unavailable. The output also shows *Using temporary; Using filesort*, which indicates that MySQL creates a temporary table to satisfy the **GROUP BY** clause.
+
+Creating an index only on column **c2** makes no difference, and MySQL still needs to create a temporary table:
+
+```sql
+mysql> ALTER TABLE tb1 ADD KEY (c2);
+mysql> EXPLAIN SELECT MAX(c1), c2 FROM tb1 WHERE c2 LIKE '%100' GROUP BY c1\G
+*************************** 1. row ***************************
+ id: 1
+ select_type: SIMPLE
+ table: tb1
+ partitions: NULL
+ type: ALL
+possible_keys: NULL
+ key: NULL
+ key_len: NULL
+ ref: NULL
+ rows: 995789
+ filtered: 11.11
+ Extra: Using where; Using temporary; Using filesort
+```
+
+In this case, you can create a **covering index** on both **c1** and **c2** by adding the value of **c2** directly in the index, which will eliminate further data lookups.
+
+```sql
+mysql> ALTER TABLE tb1 ADD KEY covered(c1,c2);
+mysql> EXPLAIN SELECT MAX(c1), c2 FROM tb1 WHERE c2 LIKE '%100' GROUP BY c1\G
+*************************** 1. row ***************************
+ id: 1
+ select_type: SIMPLE
+ table: tb1
+ partitions: NULL
+ type: index
+possible_keys: covered
+ key: covered
+ key_len: 108
+ ref: NULL
+ rows: 995789
+ filtered: 11.11
+ Extra: Using where; Using index
+```
+
+As the output of the EXPLAIN statement above shows, MySQL now uses the covering index and avoids having to create a temporary table.
+
+## Combined index
+
+A combined index consists of values from multiple columns and can be considered an array of rows that are sorted by concatenating the values of the indexed columns. This method can be useful in **GROUP BY** and **ORDER BY** statements.
+
+```sql
+mysql> EXPLAIN SELECT c1, c2 from tb1 WHERE c2 LIKE '%100' ORDER BY c1 DESC LIMIT 10\G
+*************************** 1. row ***************************
+ id: 1
+ select_type: SIMPLE
+ table: tb1
+ partitions: NULL
+ type: ALL
+possible_keys: NULL
+ key: NULL
+ key_len: NULL
+ ref: NULL
+ rows: 995789
+ filtered: 11.11
+ Extra: Using where; Using filesort
+```
+
+MySQL performs a *file sort* operation that is fairly slow, especially when it has to sort many rows. To optimize this query, create a combined index on both of the columns that are being sorted.
+
+```sql
+mysql> ALTER TABLE tb1 ADD KEY my_sort2 (c1, c2);
+mysql> EXPLAIN SELECT c1, c2 from tb1 WHERE c2 LIKE '%100' ORDER BY c1 DESC LIMIT 10\G
+*************************** 1. row ***************************
+ id: 1
+ select_type: SIMPLE
+ table: tb1
+ partitions: NULL
+ type: index
+possible_keys: NULL
+ key: my_sort2
+ key_len: 108
+ ref: NULL
+ rows: 10
+ filtered: 11.11
+ Extra: Using where; Using index
+```
+
+The output of the EXPLAIN statement now shows that MySQL uses a combined index to avoid additional sorting as the index is already sorted.
+
+## Conclusion
+
+You can increase performance significantly by using EXPLAIN together with different types of indexes. Having an index on a table doesn't necessarily mean that MySQL can use it for your queries. Always validate your assumptions by using EXPLAIN and optimize your queries using indexes.
+
+## Next steps
+
+- To find peer answers to your most important questions or to post or answer a question, visit [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
mysql How To Troubleshoot Replication Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-replication-latency.md
+
+ Title: Troubleshoot replication latency - Azure Database for MySQL
+description: Learn how to troubleshoot replication latency by using Azure Database for MySQL read replicas.
+keywords: mysql, troubleshoot, replication latency in seconds
+++++ Last updated : 01/13/2021+
+# Troubleshoot replication latency in Azure Database for MySQL
++
+The [read replica](concepts-read-replicas.md) feature allows you to replicate data from an Azure Database for MySQL server to a read-only replica server. You can scale out workloads by routing read and reporting queries from the application to replica servers. This setup reduces the pressure on the source server. It also improves overall performance and latency of the application as it scales.
+
+Replicas are updated asynchronously by using the MySQL engine's native binary log (binlog) file position-based replication technology. For more information, see [MySQL binlog file position-based replication configuration overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
+
+The replication lag on the secondary read replicas depends on several factors. These factors include but aren't limited to:
+
+- Network latency.
+- Transaction volume on the source server.
+- Compute tier of the source server and secondary read replica server.
+- Queries running on the source server and secondary server.
+
+In this article, you'll learn how to troubleshoot replication latency in Azure Database for MySQL. You'll also understand some common causes of increased replication latency on replica servers.
+
+> [!NOTE]
+> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+>
+
+## Replication concepts
+
+When a binary log is enabled, the source server writes committed transactions into the binary log, which is used for replication. The binary log is turned on by default for all newly provisioned servers that support up to 16 TB of storage. Two threads run on each replica server. One thread is the *IO thread*, and the other is the *SQL thread*:
+
+- The IO thread connects to the source server and requests updated binary logs. This thread receives the binary log updates. Those updates are saved on a replica server, in a local log called the *relay log*.
+- The SQL thread reads the relay log and then applies the data changes on replica servers.
+
+## Monitoring replication latency
+
+Azure Database for MySQL provides the replication lag metric, in seconds, in [Azure Monitor](concepts-monitoring.md). This metric is available only on read replica servers. It's calculated from the `Seconds_Behind_Master` value that MySQL reports.
+
+To understand the cause of increased replication latency, connect to the replica server by using [MySQL Workbench](connect-workbench.md) or [Azure Cloud Shell](https://shell.azure.com). Then run the following command.
+
+>[!NOTE]
+> In your code, replace the example values with your replica server name and admin username. For Azure Database for MySQL, the admin username requires the `@<servername>` suffix.
+
+```azurecli-interactive
+mysql --host=myreplicademoserver.mysql.database.azure.com --user=myadmin@mydemoserver -p
+```
+
+Here's how the experience looks in the Cloud Shell terminal:
+
+```
+Requesting a Cloud Shell.Succeeded.
+Connecting terminal...
+
+Welcome to Azure Cloud Shell
+
+Type "az" to use Azure CLI
+Type "help" to learn about Cloud Shell
+
+user@Azure:~$mysql -h myreplicademoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
+Enter password:
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 64796
+Server version: 5.6.42.0 Source distribution
+
+Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+mysql>
+```
+
+In the same Cloud Shell terminal, run the following command:
+
+```
+mysql> SHOW SLAVE STATUS;
+```
+
+Here's a typical output:
+
+>[!div class="mx-imgBorder"]
+> :::image type="content" source="./media/how-to-troubleshoot-replication-latency/show-status.png" alt-text="Monitoring replication latency":::
+
+The output contains a lot of information. Normally, you need to focus on only the rows that the following table describes.
+
+|Metric|Description|
+|||
+|Slave_IO_State| Represents the current status of the IO thread. Normally, the status is "Waiting for master to send event" if the source (master) server is synchronizing. A status such as "Connecting to master" indicates that the replica lost the connection to the source server. Make sure the source server is running, or check to see whether a firewall is blocking the connection.|
+|Master_Log_File| Represents the binary log file to which the source server is writing.|
+|Read_Master_Log_Pos| Indicates where the source server is writing in the binary log file.|
+|Relay_Master_Log_File| Represents the binary log file that the replica server is reading from the source server.|
+|Slave_IO_Running| Indicates whether the IO thread is running. The value should be `Yes`. If the value is `NO`, then the replication is likely broken.|
+|Slave_SQL_Running| Indicates whether the SQL thread is running. The value should be `Yes`. If the value is `NO`, then the replication is likely broken.|
+|Exec_Master_Log_Pos| Indicates the position of the Relay_Master_Log_File that the replica is applying. If there's latency, then this position sequence should be smaller than Read_Master_Log_Pos.|
+|Relay_Log_Space|Indicates the total combined size of all existing relay log files. You can check the upper limit by querying the `relay_log_space_limit` global variable (see the example after this table).|
+|Seconds_Behind_Master| Displays replication latency in seconds.|
+|Last_IO_Errno|Displays the IO thread error code, if any. For more information about these codes, see the [MySQL server error message reference](https://dev.mysql.com/doc/mysql-errors/5.7/en/server-error-reference.html).|
+|Last_IO_Error| Displays the IO thread error message, if any.|
+|Last_SQL_Errno|Displays the SQL thread error code, if any. For more information about these codes, see the [MySQL server error message reference](https://dev.mysql.com/doc/mysql-errors/5.7/en/server-error-reference.html).|
+|Last_SQL_Error|Displays the SQL thread error message, if any.|
+|Slave_SQL_Running_State| Indicates the current SQL thread status. In this state, `System lock` is normal. It's also normal to see a status of `Waiting for dependent transaction to commit`. This status indicates that the replica is waiting for the source server to update committed transactions.|
+
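+For example, to check the upper limit referenced in the **Relay_Log_Space** row above, run the following statement:
+
+```sql
+-- Upper limit, in bytes, for the combined size of all relay logs;
+-- a value of 0 means no explicit limit is set.
+SHOW GLOBAL VARIABLES LIKE 'relay_log_space_limit';
+```
+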
+If Slave_IO_Running is `Yes` and Slave_SQL_Running is `Yes`, then the replication is running fine.
+
+Next, check Last_IO_Errno, Last_IO_Error, Last_SQL_Errno, and Last_SQL_Error. These fields display the error number and error message of the most-recent error that caused the SQL thread to stop. An error number of `0` and an empty message means there's no error. Investigate any nonzero error value by checking the error code in the [MySQL server error message reference](https://dev.mysql.com/doc/mysql-errors/5.7/en/server-error-reference.html).
+
+## Common scenarios for high replication latency
+
+The following sections address scenarios in which high replication latency is common.
+
+### Network latency or high CPU consumption on the source server
+
+If you see the following values, then replication latency is likely caused by high network latency or high CPU consumption on the source server.
+
+```
+Slave_IO_State: Waiting for master to send event
+Master_Log_File: the binary file sequence is larger than Relay_Master_Log_File, e.g. mysql-bin.00020
+Relay_Master_Log_File: the file sequence is smaller than Master_Log_File, e.g. mysql-bin.00010
+```
+
+In this case, the IO thread is running and is waiting on the source server. The source server has already written to binary log file number 20. The replica has received only up to file number 10. The primary factors for high replication latency in this scenario are network speed or high CPU utilization on the source server.
+
+In Azure, network latency within a region can typically be measured in milliseconds. Across regions, latency ranges from milliseconds to seconds.
+
+In most cases, the connection delay between IO threads and the source server is caused by high CPU utilization on the source server. The IO threads are processed slowly. You can detect this problem by using Azure Monitor to check CPU utilization and the number of concurrent connections on the source server.
+
+If you don't see high CPU utilization on the source server, the problem might be network latency. If network latency is suddenly abnormally high, check the [Azure status page](https://status.azure.com/status) for known issues or outages.
+
+### Heavy bursts of transactions on the source server
+
+If you see the following values, then a heavy burst of transactions on the source server is likely causing the replication latency.
+
+```
+Slave_IO_State: Waiting for the slave SQL thread to free enough relay log space
+Master_Log_File: the binary file sequence is larger than Relay_Master_Log_File, e.g. mysql-bin.00020
+Relay_Master_Log_File: the file sequence is smaller than Master_Log_File, e.g. mysql-bin.00010
+```
+
+The output shows that the replica is retrieving the binary log from the source server but lagging behind it. The replica IO thread, however, indicates that the relay log space is already full.
+
+Network speed isn't causing the delay. The replica is trying to catch up. But the updated binary log size exceeds the upper limit of the relay log space.
+
+To troubleshoot this issue, enable the [slow query log](concepts-server-logs.md) on the source server. Use slow query logs to identify long-running transactions on the source server. Then tune the identified queries to reduce the latency on the server.
+
+Replication latency of this sort is commonly caused by the data load on the source server. When source servers have weekly or monthly data loads, replication latency is unfortunately unavoidable. The replica servers eventually catch up after the data load on the source server finishes.
+
+### Slowness on the replica server
+
+If you observe the following values, then the problem might be on the replica server.
+
+```
+Slave_IO_State: Waiting for master to send event
+Master_Log_File: The binary log file sequence equals to Relay_Master_Log_File, e.g. mysql-bin.000191
+Read_Master_Log_Pos: The position of master server written to the above file is larger than Relay_Log_Pos, e.g. 103978138
+Relay_Master_Log_File: mysql-bin.000191
+Slave_IO_Running: Yes
+Slave_SQL_Running: Yes
+Exec_Master_Log_Pos: The position of slave reads from master binary log file is smaller than Read_Master_Log_Pos, e.g. 13468882
+Seconds_Behind_Master: There is latency and the value here is greater than 0
+```
+
+In this scenario, the output shows that both the IO thread and the SQL thread are running well. The replica reads the same binary log file that the source server writes. However, the replica server takes time to apply the same transactions that ran on the source server, and this delay appears as replication latency.
+
+The following sections describe common causes of this kind of latency.
+
+#### No primary key or unique key on a table
+
+Azure Database for MySQL uses row-based replication. The source server writes events to the binary log, recording changes in individual table rows. The SQL thread then replicates those changes to the corresponding table rows on the replica server. When a table lacks a primary key or unique key, the SQL thread scans all rows in the target table to apply the changes. This scan can cause replication latency.
+
+In MySQL, the primary key is an associated index that ensures fast query performance because it can't include NULL values. If you use the InnoDB storage engine, the table data is physically organized to do ultra-fast lookups and sorts based on the primary key.
+
+We recommend that you add a primary key on tables in the source server before you create the replica server. Add primary keys on the source server and then re-create read replicas to help improve replication latency.
+
+Use the following query to find out which tables are missing a primary key on the source server:
+
+```sql
+select tab.table_schema as database_name, tab.table_name
+from information_schema.tables tab left join
+information_schema.table_constraints tco
+on tab.table_schema = tco.table_schema
+and tab.table_name = tco.table_name
+and tco.constraint_type = 'PRIMARY KEY'
+where tco.constraint_type is null
+and tab.table_schema not in('mysql', 'information_schema', 'performance_schema', 'sys')
+and tab.table_type = 'BASE TABLE'
+order by tab.table_schema, tab.table_name;
+
+```
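+
+After you identify a table that's missing a key, you can add a surrogate primary key before re-creating the read replica. The statement below is a sketch only: `mydb.orders` is a hypothetical table name, and adding the key rebuilds the table, which can take significant time and locks on a large table.
+
+```sql
+-- Hypothetical example: add an auto-increment surrogate primary key
+-- to a table that has no primary or unique key. This rebuilds the table.
+ALTER TABLE mydb.orders
+    ADD COLUMN id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
+```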
+
+#### Long-running queries on the replica server
+
+The workload on the replica server can make the SQL thread lag behind the IO thread. Long-running queries on the replica server are one of the common causes of high replication latency. To troubleshoot this problem, enable the [slow query log](concepts-server-logs.md) on the replica server.
+
+Slow queries can increase resource consumption or slow down the server so that the replica can't catch up with the source server. In this scenario, tune the slow queries. Faster queries prevent blockage of the SQL thread and improve replication latency significantly.
+
+#### DDL queries on the source server
+
+On the source server, a data definition language (DDL) command like [`ALTER TABLE`](https://dev.mysql.com/doc/refman/5.7/en/alter-table.html) can take a long time. While the DDL command is running, thousands of other queries might be running in parallel on the source server.
+
+When the DDL is replicated, to ensure database consistency, the MySQL engine runs the DDL in a single replication thread. During this task, all other replicated queries are blocked and must wait until the DDL operation finishes on the replica server. Even online DDL operations cause this delay. DDL operations increase replication latency.
+
+If you enabled the [slow query log](concepts-server-logs.md) on the source server, you can detect this latency problem by checking for DDL commands that ran on the source server. For operations such as dropping, renaming, and creating indexes, ALTER TABLE can use the INPLACE algorithm; other operations might need to copy the table data and rebuild the table.
+
+Typically, concurrent DML is supported for the INPLACE algorithm, and an exclusive metadata lock on the table is taken only briefly while the operation is prepared and run. For ALTER TABLE and CREATE INDEX statements, you can use the ALGORITHM and LOCK clauses to influence the method for table copying and the level of concurrency for reading and writing. However, adding a FULLTEXT index or SPATIAL index still prevents DML operations.
+
+The following example creates an index by using ALGORITHM and LOCK clauses.
+
+```sql
+ALTER TABLE table_name ADD INDEX index_name (column), ALGORITHM=INPLACE, LOCK=NONE;
+```
+
+Unfortunately, for a DDL statement that requires a lock, you can't avoid replication latency. To reduce the potential effects, perform these types of DDL operations during off-peak hours, for instance, at night.
+
+#### Downgraded replica server
+
+In Azure Database for MySQL, read replicas use the same server configuration as the source server. You can change the replica server configuration after it has been created.
+
+If the replica server is downgraded, the workload can exhaust the resources available to the replica, which in turn can lead to replication latency. To detect this problem, use Azure Monitor to check the CPU and memory consumption of the replica server.
+
+In this scenario, we recommend that you keep the replica server's configuration at values equal to or greater than the values of the source server. This configuration allows the replica to keep up with the source server.
+
+#### Improving replication latency by tuning the source server parameters
+
+In Azure Database for MySQL, by default, replication is optimized to run with parallel threads on replicas. When high-concurrency workloads on the source server cause the replica server to fall behind, you can improve the replication latency by configuring the `binlog_group_commit_sync_delay` parameter on the source server.
+
+The `binlog_group_commit_sync_delay` parameter controls how many microseconds the binary log commit waits before synchronizing the binary log file. The benefit of this parameter is that instead of immediately applying every committed transaction, the source server sends the binary log updates in bulk. This delay reduces IO on the replica and helps improve performance.
+
+It might be useful to set the `binlog_group_commit_sync_delay` parameter to 1000 or so, and then monitor the replication latency. Set this parameter cautiously, and use it only for high-concurrency workloads.
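+
+In Azure Database for MySQL, you change this setting through the server parameters page in the Azure portal or with the Azure CLI rather than with `SET GLOBAL`. From any client session, you can confirm the value currently in effect:
+
+```sql
+-- Shows the current group commit delay, in microseconds (0 by default).
+SHOW GLOBAL VARIABLES LIKE 'binlog_group_commit_sync_delay';
+```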
+
+> [!IMPORTANT]
+> On the replica server, we recommend setting the `binlog_group_commit_sync_delay` parameter to 0. Unlike the source server, the replica server doesn't handle high-concurrency commits, and increasing the value of `binlog_group_commit_sync_delay` on the replica server could inadvertently cause replication lag to increase.
+
+For low-concurrency workloads that include many singleton transactions, the `binlog_group_commit_sync_delay` setting can increase latency. Latency can increase because the IO thread waits for bulk binary log updates even if only a few transactions are committed.
+
+## Next steps
+
+Check out the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
mysql How To Troubleshoot Sys Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-sys-schema.md
+
+ Title: Use the sys_schema - Azure Database for MySQL
+description: Learn how to use the sys_schema to find performance issues and maintain databases in Azure Database for MySQL.
+++++ Last updated : 3/10/2022++
+# Tune performance and maintain databases in Azure Database for MySQL using the sys_schema
++
+The MySQL performance_schema, first available in MySQL 5.5, provides instrumentation for many vital server resources such as memory allocation, stored programs, and metadata locking. However, the performance_schema contains more than 80 tables, and getting the necessary information often requires joining tables within the performance_schema and tables from the information_schema. Building on both performance_schema and information_schema, the sys_schema provides a powerful collection of [user-friendly views](https://dev.mysql.com/doc/refman/5.7/en/sys-schema-views.html) in a read-only database and is fully enabled in Azure Database for MySQL version 5.7.
++
+There are 52 views in the sys_schema, and each view has one of the following prefixes:
+
+- Host_summary or IO: I/O related latencies.
+- InnoDB: InnoDB buffer status and locks.
+- Memory: Memory usage by the host and users.
+- Schema: Schema-related information, such as auto increment, indexes, etc.
+- Statement: Information on SQL statements, such as statements that resulted in a full table scan or had a long query time.
+- User: Resources consumed and grouped by users. Examples are file I/Os, connections, and memory.
+- Wait: Wait events grouped by host or user.
+
+Now let's look at some common usage patterns of the sys_schema. To begin with, we'll group the usage patterns into two categories: **Performance tuning** and **Database maintenance**.
+
+## Performance tuning
+
+### *sys.user_summary_by_file_io*
+
+IO is the most expensive operation in the database. We can find out the average IO latency by querying the *sys.user_summary_by_file_io* view. With the default 125 GB of provisioned storage, my IO latency is about 15 seconds.
++
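+
+To reproduce this check on your own server, query the view directly; a minimal example:
+
+```sql
+-- Per-user file I/O event counts and total I/O latency.
+SELECT * FROM sys.user_summary_by_file_io;
+```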
+Because Azure Database for MySQL scales IO with respect to storage, after increasing my provisioned storage to 1 TB, my IO latency reduces to 571 ms.
++
+### *sys.schema_tables_with_full_table_scans*
+
+Despite careful planning, many queries can still result in full table scans. For additional information about the types of indexes and how to optimize them, see [How to troubleshoot query performance](./how-to-troubleshoot-query-performance.md). Full table scans are resource-intensive and degrade your database performance. The quickest way to find tables with full table scans is to query the *sys.schema_tables_with_full_table_scans* view.
++
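+
+A minimal example of querying the view:
+
+```sql
+-- Tables that have been read by full scans, with rows scanned and latency.
+SELECT * FROM sys.schema_tables_with_full_table_scans;
+```
+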
+### *sys.user_summary_by_statement_type*
+
+To troubleshoot database performance issues, it may be beneficial to identify the events happening inside your database, and the *sys.user_summary_by_statement_type* view may just do the trick.
++
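+
+For example:
+
+```sql
+-- Activity per user and statement type: totals, latencies, rows affected.
+SELECT * FROM sys.user_summary_by_statement_type;
+```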
+In this example, Azure Database for MySQL spent 53 minutes flushing the slow query log 44,579 times. That's a long time and many IOs. You can reduce this activity by either disabling the slow query log or decreasing its logging frequency in the Azure portal.
+
+## Database maintenance
+
+### *sys.innodb_buffer_stats_by_table*
+
+> [!IMPORTANT]
+> Querying this view can impact performance. It is recommended to perform this troubleshooting during off-peak business hours.
+
+The InnoDB buffer pool resides in memory and is the main cache mechanism between the DBMS and storage. The size of the InnoDB buffer pool is tied to the performance tier and can't be changed unless a different product SKU is chosen. As with memory in your operating system, old pages are swapped out to make room for fresher data. To find out which tables consume most of the InnoDB buffer pool memory, you can query the *sys.innodb_buffer_stats_by_table* view.
++
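+
+For example, the following sketch returns the first 10 rows of the view, which in the sys schema are typically ordered by allocated memory, highest first:
+
+```sql
+-- Buffer pool memory allocated per table (schema, table, bytes, data).
+SELECT * FROM sys.innodb_buffer_stats_by_table LIMIT 10;
+```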
+In the graphic above, it's apparent that other than system tables and views, each table in the mysqldatabase033 database, which hosts one of my WordPress sites, occupies 16 KB, or 1 page, of data in memory.
+
+### *sys.schema_unused_indexes* & *sys.schema_redundant_indexes*
+
+Indexes are great tools to improve read performance, but they do incur additional costs for inserts and storage. *sys.schema_unused_indexes* and *sys.schema_redundant_indexes* provide insights into unused or duplicate indexes, as the examples below show.
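+
+Minimal examples of querying both views:
+
+```sql
+-- Indexes that haven't been used since the server was last restarted.
+SELECT * FROM sys.schema_unused_indexes;
+
+-- Indexes that are made redundant by another, dominant index.
+SELECT * FROM sys.schema_redundant_indexes;
+```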
+++
+## Conclusion
+
+In summary, the sys_schema is a great tool for both performance tuning and database maintenance. Make sure to take advantage of this feature in your Azure Database for MySQL.
+
+## Next steps
+
+- To find peer answers to your most important questions, or to post or answer a question, visit [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/overview.md
+
+ Title: Overview - Azure Database for MySQL
+description: Learn about the Azure Database for MySQL service, a relational database service in the Microsoft cloud based on the MySQL Community Edition.
++++++ Last updated : 3/18/2020++
+# What is Azure Database for MySQL?
++
+Azure Database for MySQL is a relational database service in the Microsoft cloud based on the [MySQL Community Edition](https://www.mysql.com/products/community/) (available under the GPLv2 license) database engine, versions 5.6 (retired), 5.7, and 8.0. Azure Database for MySQL delivers:
+
+- Zone redundant and same zone high availability.
+- Maximum control with the ability to select your scheduled maintenance window.
+- Data protection using automatic backups and point-in-time restore for up to 35 days.
+- Automated patching and maintenance for the underlying hardware, operating system, and database engine to keep the service secure and up to date.
+- Predictable performance, using inclusive pay-as-you-go pricing.
+- Elastic scaling within seconds.
+- Cost optimization controls with a low-cost burstable SKU and the ability to stop/start the server.
+- Enterprise-grade security, industry-leading compliance, and privacy to protect sensitive data at rest and in motion.
+- Monitoring and automation to simplify management and monitoring for large-scale deployments.
+- Industry-leading support experience.
+
+These capabilities require almost no administration and all are provided at no additional cost. They allow you to focus on rapid app development and accelerating your time to market rather than allocating precious time and resources to managing virtual machines and infrastructure. In addition, you can continue to develop your application with the open-source tools and platform of your choice to deliver with the speed and efficiency your business demands, all without having to learn new skills.
++
+## Deployment models
+
+Azure Database for MySQL powered by the MySQL community edition is available in two deployment modes:
+- Flexible Server
+- Single Server
+
+### Azure Database for MySQL - Flexible Server
+
+Azure Database for MySQL Flexible Server is a fully managed, production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. The flexible server architecture allows users to opt for high availability within a single availability zone and across multiple availability zones. Flexible servers provide better cost optimization controls with the ability to stop/start the server and a burstable compute tier, which is ideal for workloads that don't need full compute capacity continuously. Flexible Server also supports reserved instances, allowing you to save up to 63% on costs, which is ideal for production workloads with predictable compute capacity requirements. The service supports the community versions of MySQL 5.7 and 8.0 and is generally available today in a wide variety of [Azure regions](../flexible-server/overview.md#azure-regions).
+
+The Flexible Server deployment option offers three compute tiers: Burstable, General Purpose, and Memory Optimized. Each tier offers different compute and memory capacity to support your database workloads. You can build your first app on a burstable tier for a few dollars a month, and then adjust the scale to meet the needs of your solution. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you need, and only when you need them. See [Compute and Storage](../flexible-server/concepts-compute-storage.md) for details.
+
+Flexible servers are best suited for:
+- Ease of deployments, simplified scaling and low database management overhead for functions like backups, high availability, security and monitoring
+- Application developments requiring community version of MySQL with better control and customizations
+- Production workloads with same-zone, zone redundant high availability and managed maintenance windows
+- Simplified development experience
+- Enterprise grade security
+
+For a detailed overview of the flexible server deployment mode, see the [flexible server overview](../flexible-server/overview.md). For the latest updates on Flexible Server, see [What's new in Azure Database for MySQL - Flexible Server](../flexible-server/whats-new.md).
+
+### Azure Database for MySQL - Single Server
+
+Azure Database for MySQL Single Server is a fully managed database service designed for minimal customization. The single server platform is designed to handle most of the database management functions, such as patching, backups, high availability, and security, with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability within a single availability zone. It supports the community versions of MySQL 5.6 (retired), 5.7, and 8.0. The service is generally available today in a wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
+
+Single servers are best suited **only for existing applications already leveraging single server**. For all new developments or migrations, Flexible Server is the recommended deployment option. To learn about the differences between the Flexible Server and Single Server deployment options, see the [select the right deployment option for you](select-right-deployment-type.md) documentation.
+
+For a detailed overview of the single server deployment mode, see the [single server overview](single-server-overview.md). For the latest updates on Single Server, see [What's new in Azure Database for MySQL - Single Server](single-server-whats-new.md).
+
+## Contacts
+For any questions or suggestions you might have about working with Azure Database for MySQL, send an email to the Azure Database for MySQL Team ([@Ask Azure DB for MySQL](mailto:AskAzureDBforMySQL@service.microsoft.com)). This email address is not a technical support alias.
+
+In addition, consider the following points of contact as appropriate:
+
+- To contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+- To fix an issue with your account, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
+- To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/d365community/forum/47b1e71d-ee24-ec11-b6e6-000d3a4f0da0).
+
+## Next steps
+
+Learn more about the two deployment modes for Azure Database for MySQL and choose the right options based on your needs.
+
+- [Single Server](index.yml)
+- [Flexible Server](../flexible-server/index.yml)
+- [Choose the right MySQL deployment option for your workload](select-right-deployment-type.md)
mysql Partners Migration Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/partners-migration-mysql.md
+
+ Title: Azure Database for MySQL Migration Partners | Microsoft Docs
+description: Lists of third-party migration partners with solutions that support Azure Database for MySQL.
+++++ Last updated : 08/18/2021++
+# Azure Database for MySQL migration partners
++
+To broadly support your Azure Database for MySQL solution, you can choose from a wide variety of industry-leading partners and tools. This article highlights Microsoft partners with migration solutions that support Azure Database for MySQL.
+
+## Migration partners
+
+| Partner | Description | Links | Videos |
+||-|-|--|
+| ![Devart][1] |**Devart**<br>Founded in 1997, Devart is one of the leading developers of database management software, ALM solutions, and data providers for most popular database servers. dbForge Studio for MySQL provides functionality to transfer the data to a testing server or to completely migrate the entire database to a new production server.|[Website][devart_website]<br>[Twitter][devart_twitter]<br>[YouTube][devart_youtube]<br>[Contact][devart_contact] | |
+| ![SNP Technologies][2] |**SNP Technologies**<br>SNP Technologies is a cloud-only service provider, building secure and reliable solutions for businesses of the future. The company believes in generating real value for your business. From thought to execution, SNP Technologies shares a common purpose with clients, to turn their investment into an advantage.|[Website][snp_website]<br>[Twitter][snp_twitter]<br>[Contact][snp_contact] | |
+| ![Pragmatic Works][3] |**Pragmatic Works**<br>Pragmatic Works is a training and consulting company with deep expertise in data management and performance, Business Intelligence, Big Data, Power BI, and Azure. They focus on data optimization and improving the efficiency of SQL Server and cloud management.|[Website][pragmatic-works_website]<br>[Twitter][pragmatic-works_twitter]<br>[YouTube][pragmatic-works_youtube]<br>[Contact][pragmatic-works_contact] | |
+| ![Infosys][4] |**Infosys**<br>Infosys is a global leader in the latest digital services and consulting. With over three decades of experience managing the systems of global enterprises, Infosys expertly steers clients through their digital journey by enabling organizations with an AI-powered core. Doing so helps prioritize the execution of change. Infosys also provides businesses with agile digital at scale to deliver unprecedented levels of performance and customer delight.|[Website][infosys_website]<br>[Twitter][infosys_twitter]<br>[YouTube][infosys_youtube]<br>[Contact][infosys_contact] | |
+| ![credativ][5] |**credativ**<br>credativ is an independent consulting and services company. Since 1999, they have offered comprehensive services and technical support for the implementation and operation of Open Source software in business applications. Their comprehensive range of services includes strategic consulting, sound technical advice, qualified training, and personalized support up to 24 hours per day for all your IT needs.|[Marketplace][credativ_marketplace]<br>[Website][credativ_website]<br>[Twitter][credative_twitter]<br>[YouTube][credativ_youtube]<br>[Contact][credativ_contact] | |
+| ![Pactera][6] |**Pactera**<br>Pactera is a global company offering consulting, digital, technology, and operations services to the world's leading enterprises. From their roots in engineering to the latest in digital transformation, they give customers a competitive edge. Their proven methodologies and tools ensure your data is secure, authentic, and accurate.|[Website][pactera_website]<br>[Twitter][pactera_twitter]<br>[Contact][pactera_contact] | |
+
+## Next steps
+
+To learn more about some of Microsoft's other partners, see the [Microsoft Partner site](https://partner.microsoft.com/).
+
+<!--Image references-->
+[1]: ./media/partner-migration-mysql/devart-logo.png
+[2]: ./media/partner-migration-mysql/snp-logo.png
+[3]: ./media/partner-migration-mysql/pw-logo-text-cmyk-1000.png
+[4]: ./media/partner-migration-mysql/infosys-logo.png
+[5]: ./media/partner-migration-mysql/credativ-round-logo-2.png
+[6]: ./media/partner-migration-mysql/pactera-logo-small-2.png
+
+<!--Website links -->
+[devart_website]:https://www.devart.com//
+[snp_website]:https://www.snp.com//
+[pragmatic-works_website]:https://pragmaticworks.com//
+[infosys_website]:https://www.infosys.com/
+[credativ_website]:https://www.credativ.com/postgresql-competence-center/microsoft-azure
+[pactera_website]:https://en.pactera.com/
+
+<!--Get Started Links-->
+<!--Datasheet Links-->
+<!--Marketplace Links -->
+[credativ_marketplace]:https://azuremarketplace.microsoft.com/de-de/marketplace/apps?search=credativ&page=1
+
+<!--Press links-->
+
+<!--YouTube links-->
+[devart_youtube]:https://www.youtube.com/user/DevartSoftware
+[pragmatic-works_youtube]:https://www.youtube.com/user/PragmaticWorks
+[infosys_youtube]:https://www.youtube.com/user/Infosys
+[credativ_youtube]:https://www.youtube.com/channel/UCnSnr6_TcILUQQvAwlYFc8A
+
+<!--Twitter links-->
+[devart_twitter]:https://twitter.com/DevartSoftware
+[snp_twitter]:https://twitter.com/snptechnologies
+[pragmatic-works_twitter]:https://twitter.com/PragmaticWorks
+[infosys_twitter]:https://twitter.com/infosys
+[credative_twitter]:https://twitter.com/credativ
+[pactera_twitter]:https://twitter.com/Pactera?s=17
+
+<!--Contact links-->
+[devart_contact]:https://www.devart.com/company/contact.html
+[snp_contact]:mailto:sachin@snp.com
+[pragmatic-works_contact]:mailto:marketing@pragmaticworks.com
+[infosys_contact]:https://www.infosys.com/contact/
+[credativ_contact]:mailto:info@credativ.com
+[pactera_contact]:mailto:shushi.gaur@pactera.com
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md
+
+ Title: Built-in policy definitions for Azure Database for MySQL
+description: Lists Azure Policy built-in policy definitions for Azure Database for MySQL. These built-in policy definitions provide common approaches to managing your Azure resources.
Last updated : 05/12/2022++++++
+# Azure Policy built-in definitions for Azure Database for MySQL
++
+This page is an index of [Azure Policy](../../governance/policy/overview.md) built-in policy definitions for Azure Database for MySQL. For additional Azure Policy built-ins for other services, see [Azure Policy built-in definitions](../../governance/policy/samples/built-in-policies.md).
+
+The name of each built-in policy definition links to the policy definition in the Azure portal. Use the link in the **Version** column to view the source on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+
+## Azure Database for MySQL
++
+## Next steps
+
+- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+- Review the [Azure Policy definition structure](../../governance/policy/concepts/definition-structure.md).
+- Review [Understanding policy effects](../../governance/policy/concepts/effects.md).
mysql Quickstart Create Mysql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-arm-template.md
+
+ Title: 'Quickstart: Create an Azure DB for MySQL - ARM template'
+description: In this Quickstart, learn how to create an Azure Database for MySQL server with virtual network integration, by using an Azure Resource Manager template.
++++++ Last updated : 05/19/2020++
+# Quickstart: Use an ARM template to create an Azure Database for MySQL server
++
+Azure Database for MySQL is a managed service that you use to run, manage, and scale highly available MySQL databases in the cloud. In this quickstart, you use an Azure Resource Manager template (ARM template) to create an Azure Database for MySQL server with virtual network integration. You can create the server in the Azure portal, Azure CLI, or Azure PowerShell.
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure button":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.dbformysql%2Fmanaged-mysql-with-vnet%2Fazuredeploy.json)
+
+## Prerequisites
+
+# [Portal](#tab/azure-portal)
+
+An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+
+# [PowerShell](#tab/PowerShell)
+
+* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+* If you want to run the code locally, [Azure PowerShell](/powershell/azure/).
+
+# [CLI](#tab/CLI)
+
+* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+* If you want to run the code locally, [Azure CLI](/cli/azure/).
+++
+## Review the template
+
+You create an Azure Database for MySQL server with a defined set of compute and storage resources. To learn more, see [Azure Database for MySQL pricing tiers](concepts-pricing-tiers.md). You create the server within an [Azure resource group](../../azure-resource-manager/management/overview.md).
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/managed-mysql-with-vnet/).
++
+The template defines five Azure resources:
+
+* [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
+* [**Microsoft.Network/virtualNetworks/subnets**](/azure/templates/microsoft.network/virtualnetworks/subnets)
+* [**Microsoft.DBforMySQL/servers**](/azure/templates/microsoft.dbformysql/servers)
+* [**Microsoft.DBforMySQL/servers/virtualNetworkRules**](/azure/templates/microsoft.dbformysql/servers/virtualnetworkrules)
+* [**Microsoft.DBforMySQL/servers/firewallRules**](/azure/templates/microsoft.dbformysql/servers/firewallrules)
+
+More Azure Database for MySQL template samples can be found in the [quickstart template gallery](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Dbformysql&pageNumber=1&sort=Popular).
+
+## Deploy the template
+
+# [Portal](#tab/azure-portal)
+
+Select the following link to deploy the Azure Database for MySQL server template in the Azure portal:
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure in portal":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.dbformysql%2Fmanaged-mysql-with-vnet%2Fazuredeploy.json)
+
+On the **Deploy Azure Database for MySQL with VNet** page:
+
+1. For **Resource group**, select **Create new**, enter a name for the new resource group, and select **OK**.
+
+2. If you created a new resource group, select a **Location** for the resource group and the new server.
+
+3. Enter a **Server Name**, **Administrator Login**, and **Administrator Login Password**.
+
+ :::image type="content" source="./media/quickstart-create-mysql-server-database-using-arm-template/deploy-azure-database-for-mysql-with-vnet.png" alt-text="Deploy Azure Database for MySQL with VNet window, Azure quickstart template, Azure portal":::
+
+4. Change the other default settings if you want:
+
+ * **Subscription**: the Azure subscription you want to use for the server.
+ * **Sku Capacity**: the vCore capacity, which can be *2* (the default), *4*, *8*, *16*, *32*, or *64*.
+ * **Sku Name**: the SKU tier prefix, SKU family, and SKU capacity, joined by underscores, such as *B_Gen5_1*, *GP_Gen5_2* (the default), or *MO_Gen5_32*.
+ * **Sku Size MB**: the storage size, in megabytes, of the Azure Database for MySQL server (default *5120*).
+ * **Sku Tier**: the deployment tier, such as *Basic*, *GeneralPurpose* (the default), or *MemoryOptimized*.
+ * **Sku Family**: *Gen4* or *Gen5* (the default), which indicates hardware generation for server deployment.
+ * **Mysql Version**: the version of MySQL server to deploy, such as *5.6* or *5.7* (the default).
+ * **Backup Retention Days**: the desired period for geo-redundant backup retention, in days (default *7*).
+ * **Geo Redundant Backup**: *Enabled* or *Disabled* (the default), depending on geo-disaster recovery (Geo-DR) requirements.
+ * **Virtual Network Name**: the name of the virtual network (default *azure_mysql_vnet*).
+ * **Subnet Name**: the name of the subnet (default *azure_mysql_subnet*).
+ * **Virtual Network Rule Name**: the name of the virtual network rule allowing the subnet (default *AllowSubnet*).
+ * **Vnet Address Prefix**: the address prefix for the virtual network (default *10.0.0.0/16*).
+ * **Subnet Prefix**: the address prefix for the subnet (default *10.0.0.0/16*).
+
+5. Read the terms and conditions, and then select **I agree to the terms and conditions stated above**.
+
+6. Select **Purchase**.
+
+# [PowerShell](#tab/PowerShell)
+
+Use the following interactive code to create a new Azure Database for MySQL server using the template. The code prompts you for the new server name, the name and location of a new resource group, and an administrator account name and password.
+
+To run the code in Azure Cloud Shell, select **Try it** at the upper corner of any code block.
+
+```azurepowershell-interactive
+$serverName = Read-Host -Prompt "Enter a name for the new Azure Database for MySQL server"
+$resourceGroupName = Read-Host -Prompt "Enter a name for the new resource group where the server will exist"
+$location = Read-Host -Prompt "Enter an Azure region (for example, centralus) for the resource group"
+$adminUser = Read-Host -Prompt "Enter the Azure Database for MySQL server's administrator account name"
+$adminPassword = Read-Host -Prompt "Enter the administrator password" -AsSecureString
+
+New-AzResourceGroup -Name $resourceGroupName -Location $location # Use this command when you need to create a new resource group for your deployment
+New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName `
+ -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.dbformysql/managed-mysql-with-vnet/azuredeploy.json `
+ -serverName $serverName `
+ -administratorLogin $adminUser `
+ -administratorLoginPassword $adminPassword
+
+Read-Host -Prompt "Press [ENTER] to continue ..."
+```
+
+# [CLI](#tab/CLI)
+
+Use the following interactive code to create a new Azure Database for MySQL server using the template. The code prompts you for the new server name, the name and location of a new resource group, and an administrator account name and password.
+
+To run the code in Azure Cloud Shell, select **Try it** at the upper corner of any code block.
+
+```azurecli-interactive
+echo "Enter a name for the new Azure Database for MySQL server:" &&
+read serverName &&
+echo "Enter a name for the new resource group where the server will exist:" &&
+read resourceGroupName &&
+echo "Enter an Azure region (for example, centralus) for the resource group:" &&
+read location &&
+echo "Enter the Azure Database for MySQL server's administrator account name:" &&
+read adminUser &&
+echo "Enter the administrator password:" &&
+read adminPassword &&
+params='serverName='$serverName' administratorLogin='$adminUser' administratorLoginPassword='$adminPassword &&
+az group create --name $resourceGroupName --location $location &&
+az deployment group create --resource-group $resourceGroupName --parameters $params --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.dbformysql/managed-mysql-with-vnet/azuredeploy.json &&
+echo "Press [ENTER] to continue ..."
+```
+++
+## Review deployed resources
+
+# [Portal](#tab/azure-portal)
+
+Follow these steps to see an overview of your new Azure Database for MySQL server:
+
+1. In the [Azure portal](https://portal.azure.com), search for and select **Azure Database for MySQL servers**.
+
+2. In the database list, select your new server. The **Overview** page for your new Azure Database for MySQL server appears.
+
+# [PowerShell](#tab/PowerShell)
+
+Run the following interactive code to view details about your Azure Database for MySQL server. You'll have to enter the name of the new server.
+
+```azurepowershell-interactive
+$serverName = Read-Host -Prompt "Enter the name of your Azure Database for MySQL server"
+Get-AzResource -ResourceType "Microsoft.DBforMySQL/servers" -Name $serverName | ft
+Write-Host "Press [ENTER] to continue..."
+```
+
+# [CLI](#tab/CLI)
+
+Run the following interactive code to view details about your Azure Database for MySQL server. You'll have to enter the name and the resource group of the new server.
+
+```azurecli-interactive
+echo "Enter your Azure Database for MySQL server name:" &&
+read serverName &&
+echo "Enter the resource group where the Azure Database for MySQL server exists:" &&
+read resourcegroupName &&
+az resource show --resource-group $resourcegroupName --name $serverName --resource-type "Microsoft.DBforMySQL/servers"
+```
+++
+## Export an ARM template from the portal
+
+You can [export an ARM template](../../azure-resource-manager/templates/export-template-portal.md) from the Azure portal. There are two ways to export a template:
+
+- [Export from resource group or resource](../../azure-resource-manager/templates/export-template-portal.md#export-template-from-a-resource). This option generates a new template from existing resources. The exported template is a "snapshot" of the current state of the resource group. You can export an entire resource group or specific resources within that resource group.
+- [Export before deployment or from history](../../azure-resource-manager/templates/export-template-portal.md#download-template-before-deployment). This option retrieves an exact copy of a template used for deployment.
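+
+If you prefer the command line, the same resource-group snapshot can be exported with the Azure CLI; a minimal sketch (the resource group name is a placeholder):
+
+```azurecli-interactive
+# Export the current state of the resource group as an ARM template.
+az group export --name myresourcegroup > template.json
+```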
+
+When you export the template, the `administratorLogin` and `administratorLoginPassword` parameters aren't included in the `"properties": { }` section of the MySQL server resource, for security reasons. You **must** add these parameters back to your template before deploying it, or the deployment will fail.
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.DBforMySQL/servers",
+ "apiVersion": "2017-12-01",
+ "name": "[parameters('servers_name')]",
+ "location": "southcentralus",
+ "sku": {
+ "name": "B_Gen5_1",
+ "tier": "Basic",
+ "family": "Gen5",
+ "capacity": 1
+ },
+ "properties": {
+ "administratorLogin": "[parameters('administratorLogin')]",
+ "administratorLoginPassword": "[parameters('administratorLoginPassword')]",
+```
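+
+As a sketch, you could declare the missing parameters in the template's top-level `parameters` section like this before redeploying (the names must match the references shown above):
+
+```json
+"parameters": {
+    "administratorLogin": {
+        "type": "string"
+    },
+    "administratorLoginPassword": {
+        "type": "securestring"
+    }
+}
+```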
+
+## Clean up resources
+
+When it's no longer needed, delete the resource group, which deletes the resources in the resource group.
+
+# [Portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com), search for and select **Resource groups**.
+
+2. In the resource group list, choose the name of your resource group.
+
+3. In the **Overview** page of your resource group, select **Delete resource group**.
+
+4. In the confirmation dialog box, type the name of your resource group, and then select **Delete**.
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
+Remove-AzResourceGroup -Name $resourceGroupName
+Write-Host "Press [ENTER] to continue..."
+```
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+++
+## Next steps
+
+For a step-by-step tutorial that guides you through the process of creating an ARM template, see:
+
+> [!div class="nextstepaction"]
+> [Tutorial: Create and deploy your first ARM template](../../azure-resource-manager/templates/template-tutorial-create-first-template.md)
mysql Quickstart Create Mysql Server Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-cli.md
+
+ Title: 'Quickstart: Create a server - Azure CLI - Azure Database for MySQL'
+description: This quickstart describes how to use the Azure CLI to create an Azure Database for MySQL server in an Azure resource group.
++++
+ms.devlang: azurecli
+ Last updated : 07/15/2020+++
+# Quickstart: Create an Azure Database for MySQL server using Azure CLI
++
+> [!TIP]
+> Consider using the simpler [az mysql up](/cli/azure/mysql#az-mysql-up) Azure CLI command (currently in preview). Try out the [quickstart](./quickstart-create-server-up-azure-cli.md).
+
+This quickstart shows how to use the [Azure CLI](/cli/azure/get-started-with-azure-cli) commands in [Azure Cloud Shell](https://shell.azure.com) to create an Azure Database for MySQL server in five minutes.
+++
+ - This quickstart requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+ - Select the specific subscription under your account using the [az account set](/cli/azure/account) command. Make a note of the **id** value from the **az login** output to use as the value for the **subscription** argument in the command. If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. To list all your subscriptions, use [az account list](/cli/azure/account#az-account-list).
+
+ ```azurecli
+ az account set --subscription <subscription id>
+ ```
+
+## Create an Azure Database for MySQL server
+Create an [Azure resource group](../../azure-resource-manager/management/overview.md) using the [az group create](/cli/azure/group) command, and then create your MySQL server inside this resource group. Provide a unique name for the server. The following example creates a resource group named `myresourcegroup` in the `westus` location.
+
+```azurecli-interactive
+az group create --name myresourcegroup --location westus
+```
+
+Create an Azure Database for MySQL server with the [az mysql server create](/cli/azure/mysql/server#az-mysql-server-create) command. A server can contain multiple databases.
+
+```azurecli
+az mysql server create --resource-group myresourcegroup --name mydemoserver --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2
+```
+
+Here are the details for the preceding arguments:
+
+**Setting** | **Sample value** | **Description**
+---|---|---
+name | mydemoserver | Enter a unique name for your Azure Database for MySQL server. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain from 3 to 63 characters.
+resource-group | myresourcegroup | Provide the name of the Azure resource group.
+location | westus | The Azure location for the server.
+admin-user | myadmin | The username for the administrator login. It cannot be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**.
+admin-password | *secure password* | The password of the administrator user. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters.
+sku-name|GP_Gen5_2|Enter the name of the pricing tier and compute configuration. Follows the convention {pricing tier}_{compute generation}_{vCores} in shorthand. See the [pricing tiers](./concepts-pricing-tiers.md) for more information.
+
+>[!IMPORTANT]
+>- The default MySQL version on your server is 5.7. Versions 5.6 and 8.0 are also available.
+>- To view all the arguments for the **az mysql server create** command, see this [reference document](/cli/azure/mysql/server#az-mysql-server-create).
+>- SSL is enabled by default on your server. For more information on SSL, see [Configure SSL connectivity](how-to-configure-ssl.md).
+
+## Configure a server-level firewall rule
+By default, the new server is protected with firewall rules and isn't publicly accessible. You can configure a firewall rule on your server using the [az mysql server firewall-rule create](/cli/azure/mysql/server/firewall-rule) command. This will allow you to connect to the server locally.
+
+The following example creates a firewall rule called `AllowMyIP` that allows connections from a specific IP address, 192.168.0.1. Replace the IP address with the one you'll be connecting from. You can use a range of IP addresses if needed. If you don't know your IP address, go to [https://whatismyipaddress.com/](https://whatismyipaddress.com/) to find it.
+
+```azurecli-interactive
+az mysql server firewall-rule create --resource-group myresourcegroup --server mydemoserver --name AllowMyIP --start-ip-address 192.168.0.1 --end-ip-address 192.168.0.1
+```
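+
+If you need to allow a contiguous range instead of a single address, the same command takes different start and end addresses; a sketch with placeholder values:
+
+```azurecli-interactive
+az mysql server firewall-rule create --resource-group myresourcegroup --server mydemoserver --name AllowMyRange --start-ip-address 192.168.0.1 --end-ip-address 192.168.0.254
+```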
+
+> [!NOTE]
+> Connections to Azure Database for MySQL communicate over port 3306. If you try to connect from within a corporate network, outbound traffic over port 3306 might not be allowed. If this is the case, you can't connect to your server unless your IT department opens port 3306.
+
+## Get the connection information
+
+To connect to your server, you need to provide host information and access credentials.
+
+```azurecli-interactive
+az mysql server show --resource-group myresourcegroup --name mydemoserver
+```
+
+The result is in JSON format. Make a note of the **fullyQualifiedDomainName** and **administratorLogin**.
+```json
+{
+ "administratorLogin": "myadmin",
+ "earliestRestoreDate": null,
+ "fullyQualifiedDomainName": "mydemoserver.mysql.database.azure.com",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.DBforMySQL/servers/mydemoserver",
+ "location": "westus",
+ "name": "mydemoserver",
+ "resourceGroup": "myresourcegroup",
+ "sku": {
+ "capacity": 2,
+ "family": "Gen5",
+ "name": "GP_Gen5_2",
+ "size": null,
+ "tier": "GeneralPurpose"
+ },
+ "sslEnforcement": "Enabled",
+ "storageProfile": {
+ "backupRetentionDays": 7,
+ "geoRedundantBackup": "Disabled",
+ "storageMb": 5120
+ },
+ "tags": null,
+ "type": "Microsoft.DBforMySQL/servers",
+ "userVisibleState": "Ready",
+ "version": "5.7"
+}
+```
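+
+If you only need a single property, such as the host name, a JMESPath query can return it directly; a minimal sketch:
+
+```azurecli-interactive
+# Print just the fully qualified domain name of the server.
+az mysql server show --resource-group myresourcegroup --name mydemoserver --query fullyQualifiedDomainName --output tsv
+```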
+
+## Connect to Azure Database for MySQL server using mysql command-line client
+You can connect to your server using the popular **[mysql.exe](https://dev.mysql.com/downloads/)** command-line client in [Azure Cloud Shell](../../cloud-shell/overview.md). Alternatively, you can use the mysql command line in your local environment.
+```bash
+ mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
+```
+
+## Clean up resources
+If you don't need these resources for another quickstart or tutorial, you can delete them by running the following command:
+
+```azurecli-interactive
+az group delete --name myresourcegroup
+```
+
+If you just want to delete the newly created server, you can run the [az mysql server delete](/cli/azure/mysql/server#az-mysql-server-delete) command.
+
+```azurecli-interactive
+az mysql server delete --resource-group myresourcegroup --name mydemoserver
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+>[Build a PHP app on Windows with MySQL](../../app-service/tutorial-php-mysql-app.md)
mysql Quickstart Create Mysql Server Database Using Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-portal.md
+
+ Title: 'Quickstart: Create a server - Azure portal - Azure Database for MySQL'
+description: This article walks you through using the Azure portal to create a sample Azure Database for MySQL server in about five minutes.
++++++ Last updated : 11/04/2020++
+# Quickstart: Create an Azure Database for MySQL server by using the Azure portal
++
+Azure Database for MySQL is a managed service that you use to run, manage, and scale highly available MySQL databases in the cloud. This quickstart shows you how to use the Azure portal to create an Azure Database for MySQL single server. It also shows you how to connect to the server.
+
+## Prerequisites
+An Azure subscription is required. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+
+## Create an Azure Database for MySQL single server
+1. Go to the [Azure portal](https://portal.azure.com/) to create a MySQL Single Server database. Search for and select **Azure Database for MySQL**:
+
+ >[!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/quickstart-create-mysql-server-database-using-azure-portal/find-azure-mysql-in-portal.png" alt-text="Find Azure Database for MySQL":::
+
+2. Select **Add**.
+
+3. On the **Select Azure Database for MySQL deployment option** page, select **Single server**:
+ >[!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/quickstart-create-mysql-server-database-using-azure-portal/choose-singleserver.png" alt-text="Screenshot that shows the Single server option.":::
+
+4. Enter the basic settings for a new single server:
+
+ >[!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/quickstart-create-mysql-server-database-using-azure-portal/4-create-form.png" alt-text="Screenshot that shows the Create MySQL server page.":::
+
+ **Setting** | **Suggested value** | **Description**
+ ---|---|---
+ Subscription | Your subscription | Select the desired Azure subscription.
+ Resource group | **myresourcegroup** | Enter a new resource group or an existing one from your subscription.
+ Server name | **mydemoserver** | Enter a unique name. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain 3 to 63 characters.
+ Data source |**None** | Select **None** to create a new server from scratch. Select **Backup** only if you're restoring from a geo-backup of an existing server.
+ Location |Your desired location | Select a location from the list.
+ Version | The latest major version| Use the latest major version. See [all supported versions](concepts-supported-versions.md).
+ Compute + storage | Use the defaults| The default pricing tier is **General Purpose** with **4 vCores** and **100 GB** storage. Backup retention is set to **7 days**, with the **Geographically Redundant** backup option.<br/>Review the [pricing](https://azure.microsoft.com/pricing/details/mysql/) page, and update the defaults if you need to.
+ Admin username | **mydemoadmin** | Enter your server admin user name. You can't use **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public** for the admin user name.
+ Password | A password | A new password for the server admin user. The password must be 8 to 128 characters long and contain a combination of uppercase or lowercase letters, numbers, and non-alphanumeric characters (!, $, #, %, and so on).
+
+
+ > [!NOTE]
+ > Consider using the Basic pricing tier if light compute and I/O are adequate for your workload. Note that servers created in the Basic pricing tier can't later be scaled to General Purpose or Memory Optimized.
+
+5. Select **Review + create** to provision the server.
+
+6. Wait for the portal page to display **Your deployment is complete**. Select **Go to resource** to go to the newly created server page:
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/quickstart-create-mysql-server-database-using-azure-portal/deployment-complete.png" alt-text="Screenshot that shows the Your deployment is complete message.":::
+
+[Having problems? Let us know.](https://aka.ms/mysql-doc-feedback)
+
+## Configure a server-level firewall rule
+
+By default, the new server is protected with a firewall. To connect, you must grant access to your IP address by completing these steps:
+
+1. Go to **Connection security** from the left pane for your server resource. If you don't know how to find your resource, see [How to open a resource](../../azure-resource-manager/management/manage-resources-portal.md#open-resources).
+
+ >[!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/quickstart-create-mysql-server-database-using-azure-portal/add-current-ip-firewall.png" alt-text="Screenshot that shows the Connection security > Firewall rules page.":::
+
+2. Select **Add current client IP address**, and then select **Save**.
+
+ > [!NOTE]
+ > To avoid connectivity problems, check if your network allows outbound traffic over port 3306, which is used by Azure Database for MySQL.
+
+You can add more IPs or provide an IP range to connect to your server from those IPs. For more information, see [How to manage firewall rules on an Azure Database for MySQL server](./concepts-firewall-rules.md).
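+
+If you'd rather script this step, an equivalent rule can be created with the Azure CLI; a sketch, assuming the server and resource group names used in this quickstart:
+
+```azurecli-interactive
+az mysql server firewall-rule create --resource-group myresourcegroup --server mydemoserver --name AllowMyIP --start-ip-address 192.168.0.1 --end-ip-address 192.168.0.1
+```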
++
+[Having problems? Let us know](https://aka.ms/mysql-doc-feedback)
+
+## Connect to the server by using mysql.exe
+You can use either [mysql.exe](https://dev.mysql.com/doc/refman/8.0/en/mysql.html) or [MySQL Workbench](./connect-workbench.md) to connect to the server from your local environment. In this quickstart, we'll use mysql.exe in [Azure Cloud Shell](../../cloud-shell/overview.md) to connect to the server.
++
+1. Open Azure Cloud Shell in the portal by selecting the first button on the toolbar, as shown in the following screenshot. Note the server name, server admin name, and subscription for your new server in the **Overview** section, as shown in the screenshot.
+
+ > [!NOTE]
+ > If you're opening Cloud Shell for the first time, you'll be prompted to create a resource group and storage account. This is a one-time step; they're attached automatically in all later sessions.
+
+ >[!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/quickstart-create-mysql-server-database-using-azure-portal/use-in-cloud-shell.png" alt-text="Screenshot that shows Cloud Shell in the Azure portal.":::
+2. Run the following command in the Azure Cloud Shell terminal. Replace the values shown here with your actual server name and admin user name. For Azure Database for MySQL, you need to add `@<servername>` to the admin user name, as shown here:
+
+ ```azurecli-interactive
+ mysql --host=mydemoserver.mysql.database.azure.com --user=myadmin@mydemoserver -p
+ ```
+
+ Here's what it looks like in the Cloud Shell terminal:
+
+ ```
+ Requesting a Cloud Shell.Succeeded.
+ Connecting terminal...
+
+ Welcome to Azure Cloud Shell
+
+ Type "az" to use Azure CLI
+ Type "help" to learn about Cloud Shell
+
+ user@Azure:~$mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
+ Enter password:
+ Welcome to the MySQL monitor. Commands end with ; or \g.
+ Your MySQL connection id is 64796
+ Server version: 5.6.42.0 Source distribution
+
+ Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
+ Oracle is a registered trademark of Oracle Corporation and/or its
+ affiliates. Other names may be trademarks of their respective
+ owners.
+
+ Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+ mysql>
+ ```
+3. In the same Azure Cloud Shell terminal, create a database named `guest`:
+ ```
+ mysql> CREATE DATABASE guest;
+ Query OK, 1 row affected (0.27 sec)
+ ```
+4. Switch to the `guest` database:
+ ```
+ mysql> USE guest;
+ Database changed
+ ```
+5. Type `quit`, and then press **Enter** to quit mysql.
+
+[Having problems? Let us know.](https://aka.ms/mysql-doc-feedback)
+
+## Clean up resources
+You have now created an Azure Database for MySQL server in a resource group. If you don't expect to need these resources in the future, you can delete them by deleting the resource group, or you can just delete the MySQL server. To delete the resource group, complete these steps:
+1. In the Azure portal, search for and select **Resource groups**.
+2. In the list of resource groups, select the name of your resource group.
+3. On the **Overview** page for your resource group, select **Delete resource group**.
+4. In the confirmation dialog box, type the name of your resource group, and then select **Delete**.
+
+To delete the server, you can select **Delete** on the **Overview** page for your server, as shown here:
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="media/quickstart-create-mysql-server-database-using-azure-portal/delete-server.png" alt-text="Screenshot that shows the Delete button on the server overview page.":::
+
+## Next steps
+> [!div class="nextstepaction"]
+>[Build a PHP app on Windows with MySQL](../../app-service/tutorial-php-mysql-app.md) <br/>
+
+> [!div class="nextstepaction"]
+>[Build a PHP app on Linux with MySQL](../../app-service/tutorial-php-mysql-app.md?pivots=platform-linux)<br/><br/>
+
+[Can't find what you're looking for? Let us know.](https://aka.ms/mysql-doc-feedback)
mysql Quickstart Create Mysql Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-powershell.md
+
+ Title: 'Quickstart: Create a server - Azure PowerShell - Azure Database for MySQL'
+description: This quickstart describes how to use PowerShell to create an Azure Database for MySQL server in an Azure resource group.
++++
+ms.devlang: azurepowershell
+ Last updated : 04/28/2020+++
+# Quickstart: Create an Azure Database for MySQL server using PowerShell
++
+This quickstart describes how to use PowerShell to create an Azure Database for MySQL server in an
+Azure resource group. You can use PowerShell to create and manage Azure resources interactively or
+in scripts.
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
+before you begin.
+
+If you choose to use PowerShell locally, this article requires that you install the Az PowerShell
+module and connect to your Azure account using the
+[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information
+about installing the Az PowerShell module, see
+[Install Azure PowerShell](/powershell/azure/install-az-ps).
+
+> [!IMPORTANT]
+> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az
+> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`.
+> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az
+> PowerShell module releases and available natively from within Azure Cloud Shell.
+
+If this is your first time using the Azure Database for MySQL service, you must register the
+**Microsoft.DBforMySQL** resource provider.
+
+```azurepowershell-interactive
+Register-AzResourceProvider -ProviderNamespace Microsoft.DBforMySQL
+```
++
+If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources
+should be billed. Select a specific subscription ID using the
+[Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
+
+```azurepowershell-interactive
+Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
+```
+
+## Create a resource group
+
+Create an [Azure resource group](../../azure-resource-manager/management/overview.md)
+using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet. A
+resource group is a logical container in which Azure resources are deployed and managed as a group.
+
+The following example creates a resource group named **myresourcegroup** in the **West US** region.
+
+```azurepowershell-interactive
+New-AzResourceGroup -Name myresourcegroup -Location westus
+```
+
+## Create an Azure Database for MySQL server
+
+Create an Azure Database for MySQL server with the `New-AzMySqlServer` cmdlet. A server can manage
+multiple databases. Typically, a separate database is used for each project or for each user.
+
+The following table contains a list of commonly used parameters and sample values for the
+`New-AzMySqlServer` cmdlet.
+
+| **Setting** | **Sample value** | **Description** |
+| -- | - | - |
+| Name | mydemoserver | Choose a globally unique name in Azure that identifies your Azure Database for MySQL server. The server name can only contain letters, numbers, and the hyphen (-) character. Any uppercase characters that are specified are automatically converted to lowercase during the creation process. It must contain from 3 to 63 characters. |
+| ResourceGroupName | myresourcegroup | Provide the name of the Azure resource group. |
+| Sku | GP_Gen5_2 | The name of the SKU. Follows the convention **pricing-tier\_compute-generation\_vCores** in shorthand. For more information about the Sku parameter, see the information following this table. |
+| BackupRetentionDay | 7 | How long a backup should be retained. Unit is days. Range is 7-35. |
+| GeoRedundantBackup | Enabled | Whether geo-redundant backups should be enabled for this server or not. This value cannot be enabled for servers in the basic pricing tier and it cannot be changed after the server is created. Allowed values: Enabled, Disabled. |
+| Location | westus | The Azure region for the server. |
+| SslEnforcement | Enabled | Whether SSL should be enabled or not for this server. Allowed values: Enabled, Disabled. |
+| StorageInMb | 51200 | The storage capacity of the server (unit is megabytes). Valid StorageInMb is a minimum of 5120 MB and increases in 1024 MB increments. For more information about storage size limits, see [Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md). |
+| Version | 5.7 | The MySQL major version. |
+| AdministratorUserName | myadmin | The username for the administrator login. It cannot be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**. |
+| AdministratorLoginPassword | `<securestring>` | The password of the administrator user in the form of a secure string. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters. |
+
+The **Sku** parameter value follows the convention **pricing-tier\_compute-generation\_vCores** as
+shown in the following examples.
+
+- `-Sku B_Gen5_1` maps to Basic, Gen 5, and 1 vCore. This option is the smallest SKU available.
+- `-Sku GP_Gen5_32` maps to General Purpose, Gen 5, and 32 vCores.
+- `-Sku MO_Gen5_2` maps to Memory Optimized, Gen 5, and 2 vCores.
+
+For information about valid **Sku** values by region and for tiers, see
+[Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md).
+
+The following example creates a MySQL server in the **West US** region named **mydemoserver** in the
+**myresourcegroup** resource group with a server admin login of **myadmin**. It is a Gen 5 server in
+the general-purpose pricing tier with 2 vCores and geo-redundant backups enabled. Document the
+password used in the first line of the example as this is the password for the MySQL server admin
+account.
+
+> [!TIP]
+> A server name maps to a DNS name and must be globally unique in Azure.
+
+```azurepowershell-interactive
+$Password = Read-Host -Prompt 'Please enter your password' -AsSecureString
+New-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -Sku GP_Gen5_2 -GeoRedundantBackup Enabled -Location westus -AdministratorUserName myadmin -AdministratorLoginPassword $Password
+```
+
+Consider using the basic pricing tier if light compute and I/O are adequate for your workload.
+
+> [!IMPORTANT]
+> Servers created in the basic pricing tier cannot later be scaled to General Purpose or
+> Memory Optimized and cannot be geo-replicated.
+
+## Configure a firewall rule
+
+Create an Azure Database for MySQL server-level firewall rule using the `New-AzMySqlFirewallRule`
+cmdlet. A server-level firewall rule allows an external application, such as the `mysql`
+command-line tool or MySQL Workbench to connect to your server through the Azure Database for MySQL
+service firewall.
+
+The following example creates a firewall rule named **AllowMyIP** that allows connections from a
+specific IP address, 192.168.0.1. Substitute an IP address or range of IP addresses that correspond
+to the location that you are connecting from.
+
+```azurepowershell-interactive
+New-AzMySqlFirewallRule -Name AllowMyIP -ResourceGroupName myresourcegroup -ServerName mydemoserver -StartIPAddress 192.168.0.1 -EndIPAddress 192.168.0.1
+```
+
+> [!NOTE]
+> Connections to Azure Database for MySQL communicate over port 3306. If you try to connect from
+> within a corporate network, outbound traffic over port 3306 might not be allowed. In this
+> scenario, you can only connect to the server if your IT department opens port 3306.
+
+## Configure SSL settings
+
+By default, SSL connections between your server and client applications are enforced. This default
+ensures the security of _in-motion_ data by encrypting the data stream over the Internet. For this
+quickstart, disable SSL connections for your server. For more information, see
+[Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](./how-to-configure-ssl.md).
+
+> [!WARNING]
+> Disabling SSL is not recommended for production servers.
+
+The following example disables SSL on your Azure Database for MySQL server.
+
+```azurepowershell-interactive
+Update-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -SslEnforcement Disabled
+```
+
+## Get the connection information
+
+To connect to your server, you need to provide host information and access credentials. Use the
+following example to determine the connection information. Make a note of the values for
+**FullyQualifiedDomainName** and **AdministratorLogin**.
+
+```azurepowershell-interactive
+Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
+ Select-Object -Property FullyQualifiedDomainName, AdministratorLogin
+```
+
+```Output
+FullyQualifiedDomainName AdministratorLogin
+------------------------ ------------------
+mydemoserver.mysql.database.azure.com myadmin
+```
+
+## Connect to the server using the mysql command-line tool
+
+Connect to your server using the `mysql` command-line tool. To download and install the command-line
+tool, see [MySQL Community Downloads](https://dev.mysql.com/downloads/shell/). You can also access a
+pre-installed version of the `mysql` command-line tool in Azure Cloud Shell by selecting the **Try
+It** button on a code sample in this article. Other ways to access Azure Cloud Shell are to select
+the **>_** button on the upper-right toolbar in the Azure portal or by visiting
+[shell.azure.com](https://shell.azure.com/).
+
+1. Connect to the server using the `mysql` command-line tool.
+
+ ```azurepowershell-interactive
+ mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
+ ```
+
+1. View server status.
+
+ ```sql
+ mysql> status
+ ```
+
+ ```Output
+ C:\Users\>mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
+ Enter password: *************
+ Welcome to the MySQL monitor. Commands end with ; or \g.
+ Your MySQL connection id is 65512
+ Server version: 5.6.42.0 MySQL Community Server (GPL)
+
+ Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
+
+ Oracle is a registered trademark of Oracle Corporation and/or its
+ affiliates. Other names may be trademarks of their respective
+ owners.
+
+ Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+ mysql> status
+ --------------
+ mysql Ver 14.14 Distrib 5.7.29, for Win64 (x86_64)
+
+ Connection id: 65512
+ Current database:
+ Current user: myadmin@myipaddress
+ SSL: Not in use
+ Using delimiter: ;
+ Server version: 5.6.42.0 MySQL Community Server (GPL)
+ Protocol version: 10
+ Connection: mydemoserver.mysql.database.azure.com via TCP/IP
+ Server characterset: latin1
+ Db characterset: latin1
+ Client characterset: utf8
+ Conn. characterset: utf8
+ TCP port: 3306
+ Uptime: 1 hour 2 min 12 sec
+
+ Threads: 7 Questions: 952 Slow queries: 0 Opens: 66 Flush tables: 3 Open tables: 16 Queries per second avg: 0.255
+ --------------
+
+ mysql>
+ ```
+
+For additional commands, see [MySQL 5.7 Reference Manual - Chapter 4.5.1](https://dev.mysql.com/doc/refman/5.7/en/mysql.html).
+
+## Connect to the server using MySQL Workbench
+
+1. Launch the MySQL Workbench application on your client computer. To download and install MySQL
+ Workbench, see [Download MySQL Workbench](https://dev.mysql.com/downloads/workbench/).
+
+1. In the **Setup New Connection** dialog box, enter the following information on the **Parameters**
+ tab:
+
+ :::image type="content" source="./media/quickstart-create-mysql-server-database-using-azure-powershell/setup-new-connection.png" alt-text="setup new connection":::
+
+ | **Setting** | **Suggested Value** | **Description** |
+ | --- | --- | --- |
+ | Connection Name | My Connection | Specify a label for this connection |
+ | Connection Method | Standard (TCP/IP) | Use TCP/IP protocol to connect to Azure Database for MySQL |
+ | Hostname | `mydemoserver.mysql.database.azure.com` | Server name you previously noted |
+ | Port | 3306 | The default port for MySQL |
+ | Username | myadmin@mydemoserver | The server admin login you previously noted |
+ | Password | ************* | Use the admin account password you configured earlier |
+
+1. To test if the parameters are configured correctly, click the **Test Connection** button.
+
+1. Select the connection to connect to the server.
+
+## Clean up resources
+
+If the resources created in this quickstart aren't needed for another quickstart or tutorial, you
+can delete them by running the following example.
+
+> [!CAUTION]
+> The following example deletes the specified resource group and all resources contained within it.
+> If resources outside the scope of this quickstart exist in the specified resource group, they will
+> also be deleted.
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name myresourcegroup
+```
+
+To delete only the server created in this quickstart without deleting the resource group, use the
+`Remove-AzMySqlServer` cmdlet.
+
+```azurepowershell-interactive
+Remove-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Design an Azure Database for MySQL using PowerShell](tutorial-design-database-using-powershell.md)
mysql Quickstart Create Server Up Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-server-up-azure-cli.md
+
+ Title: 'Quickstart: Create Azure Database for MySQL using az mysql up'
+description: Quickstart guide to create Azure Database for MySQL server using Azure CLI (command line interface) up command.
++++
+ms.devlang: azurecli
+ Last updated : 3/18/2020+++
+# Quickstart: Create an Azure Database for MySQL using a simple Azure CLI command - az mysql up (preview)
++
+> [!IMPORTANT]
+> The [az mysql up](/cli/azure/mysql#az-mysql-up) Azure CLI command is in preview.
+
+Azure Database for MySQL is a managed service that enables you to run, manage, and scale highly available MySQL databases in the cloud. The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart shows you how to use the [az mysql up](/cli/azure/mysql#az-mysql-up) command to create an Azure Database for MySQL server using the Azure CLI. In addition to creating the server, the `az mysql up` command creates a sample database and a root user in the database, opens the firewall for Azure services, and creates default firewall rules for the client computer. These defaults help to expedite the development process.
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+
+This article requires that you're running the Azure CLI version 2.0 or later locally. To see the version installed, run the `az --version` command. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+You'll need to sign in to your account using the [az login](/cli/azure/authenticate-azure-cli) command. Note the **id** property from the command output for the corresponding subscription name.
+
+```azurecli
+az login
+```
+
+If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. Select the specific subscription ID under your account using [az account set](/cli/azure/account) command. Substitute the **subscription ID** property from the **az login** output for your subscription into the subscription ID placeholder.
+
+```azurecli
+az account set --subscription <subscription id>
+```
+
+## Create an Azure Database for MySQL server
+
+To use the commands, install the [db-up](/cli/azure/ext/db-up/mysql) extension. If an error is returned, ensure you have installed the latest version of the Azure CLI. See [Install Azure CLI](/cli/azure/install-azure-cli).
+
+```azurecli
+az extension add --name db-up
+```
+
+Create an Azure Database for MySQL server using the following command:
+
+```azurecli
+az mysql up
+```
+
+The server is created with the following default values (unless you manually override them):
+
+**Setting** | **Default value** | **Description**
+---|---|---
+server-name | System generated | A unique name that identifies your Azure Database for MySQL server.
+resource-group | System generated | A new Azure resource group.
+sku-name | GP_Gen5_2 | The name of the sku. Follows the convention {pricing tier}\_{compute generation}\_{vCores} in shorthand. The default is a General Purpose Gen5 server with 2 vCores. See our [pricing page](https://azure.microsoft.com/pricing/details/mysql/) for more information about the tiers.
+backup-retention | 7 | How long a backup should be retained. Unit is days.
+geo-redundant-backup | Disabled | Whether geo-redundant backups should be enabled for this server or not.
+location | westus2 | The Azure location for the server.
+ssl-enforcement | Enabled | Whether SSL should be enabled or not for this server.
+storage-size | 5120 | The storage capacity of the server (unit is megabytes).
+version | 5.7 | The MySQL major version.
+admin-user | System generated | The username for the administrator login.
+admin-password | System generated | The password of the administrator user.
+
+> [!NOTE]
+> For more information about the `az mysql up` command and its additional parameters, see the [Azure CLI documentation](/cli/azure/mysql#az-mysql-up).
+
+Once your server is created, it comes with the following settings:
+
+- A firewall rule called "devbox" is created. The Azure CLI attempts to detect the IP address of the machine the `az mysql up` command is run from and allows that IP address.
+- "Allow access to Azure services" is set to ON. This setting configures the server's firewall to accept connections from all Azure resources, including resources not in your subscription.
+- The `wait_timeout` parameter is set to 8 hours.
+- An empty database named "sampledb" is created.
+- A new user named "root" with privileges to "sampledb" is created.
+
+> [!NOTE]
+> Azure Database for MySQL communicates over port 3306. When connecting from within a corporate network, outbound traffic over port 3306 may not be allowed by your network's firewall. Have your IT department open port 3306 to connect to your server.
+
+## Get the connection information
+
+After the `az mysql up` command is completed, a list of connection strings for popular programming languages is returned to you. These connection strings are pre-configured with the specific attributes of your newly created Azure Database for MySQL server.
+
+You can use the [az mysql show-connection-string](/cli/azure/mysql#az-mysql-show-connection-string) command to list these connection strings again.
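+
+A sketch of that command, assuming the `--server-name` and credential parameters shown in the command reference (values are placeholders):
+
+```azurecli
+az mysql show-connection-string --server-name mydemoserver --admin-user myadmin --admin-password <server_admin_password>
+```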
+
+## Clean up resources
+
+Clean up all resources you created in the quickstart using the following command. This command deletes the Azure Database for MySQL server and the resource group.
+
+```azurecli
+az mysql down --delete-group
+```
+
+If you just want to delete the newly created server, you can run the [az mysql down](/cli/azure/mysql#az-mysql-down) command.
+
+```azurecli
+az mysql down
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Design a MySQL Database with Azure CLI](./tutorial-design-database-using-cli.md)
mysql Quickstart Mysql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-mysql-github-actions.md
+
+ Title: 'Quickstart: Connect to Azure MySQL with GitHub Actions'
+description: Use Azure MySQL from a GitHub Actions workflow
+++++ Last updated : 05/09/2022+++
+# Quickstart: Use GitHub Actions to connect to Azure MySQL
++
+Get started with [GitHub Actions](https://docs.github.com/en/actions) by using a workflow to deploy database updates to [Azure Database for MySQL](https://azure.microsoft.com/services/mysql/).
+
+## Prerequisites
+
+You'll need:
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A GitHub account. If you don't have a GitHub account, [sign up for free](https://github.com/join).
+- A GitHub repository with sample data (`data.sql`).
+
+ > [!IMPORTANT]
+ > This quickstart assumes that you have cloned a GitHub repository to your computer so that you can add the associated IP address to a firewall rule, if necessary.
+
+- An Azure Database for MySQL server.
+ - [Quickstart: Create an Azure Database for MySQL server in the Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md)
+
+## Workflow file overview
+
+A GitHub Actions workflow is defined by a YAML (.yml) file in the `/.github/workflows/` path in your repository. This definition contains the various steps and parameters that make up the workflow.
+
+The file has two sections:
+
+|Section |Tasks |
+|---|---|
+|**Authentication** | 1. Generate deployment credentials. |
+|**Deploy** | 1. Deploy the database. |
+
+## Generate deployment credentials
+# [Service principal](#tab/userlevel)
+
+You can create a [service principal](../../active-directory/develop/app-objects-and-service-principals.md) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
+
+Replace the placeholder `server-name` with the name of your MySQL server hosted on Azure. Replace `subscription-id` and `resource-group` with the subscription ID and resource group connected to your MySQL server.
+
+```azurecli-interactive
+ az ad sp create-for-rbac --name {server-name} --role contributor \
+ --scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group} \
+ --sdk-auth
+```
+
+The output is a JSON object with the role assignment credentials that provide access to your database, similar to the following example. Copy this JSON object for later.
+
+```output
+ {
+ "clientId": "<GUID>",
+ "clientSecret": "<GUID>",
+ "subscriptionId": "<GUID>",
+ "tenantId": "<GUID>",
+ (...)
+ }
+```
+
+> [!IMPORTANT]
+> It's always a good practice to grant minimum access. The scope in the previous example is limited to the specific server and not the entire resource group.
+
+# [OpenID Connect](#tab/openid)
+
+You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
+
+1. Open your GitHub repository and go to **Settings**.
+
+1. Select **Settings > Secrets > New secret**.
+
+1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
+
+ |GitHub Secret | Active Directory Application |
+ |---|---|
+ |AZURE_CLIENT_ID | Application (client) ID |
+ |AZURE_TENANT_ID | Directory (tenant) ID |
+ |AZURE_SUBSCRIPTION_ID | Subscription ID |
+
+1. Save each secret by selecting **Add secret**.
+++
+## Copy the MySQL connection string
+
+In the Azure portal, go to your Azure Database for MySQL server and open **Settings** > **Connection strings**. Copy the **ADO.NET** connection string. Replace the placeholder values for `your_database` and `your_password`. The connection string will look similar to the following.
+
+> [!IMPORTANT]
+>
+> - For Single Server, use **Uid=adminusername@servername**. Note that the **@servername** part is required.
+> - For Flexible Server, use **Uid=adminusername**, without the @servername.
+
+```output
+ Server=my-mysql-server.mysql.database.azure.com; Port=3306; Database={your_database}; Uid=adminname@my-mysql-server; Pwd={your_password}; SslMode=Preferred;
+```
+
+You'll use the connection string as a GitHub secret.
+
+## Configure GitHub secrets
+# [Service principal](#tab/userlevel)
+
+1. In [GitHub](https://github.com/), browse your repository.
+
+2. Select **Settings > Secrets > New secret**.
+
+3. Paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret the name `AZURE_CREDENTIALS`.
+
+ When you configure the workflow file later, you use the secret for the input `creds` of the Azure Login action. For example:
+
+ ```yaml
+ - uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+ ```
+
+4. Select **New secret** again.
+
+5. Paste the connection string value into the secret's value field. Give the secret the name `AZURE_MYSQL_CONNECTION_STRING`.
+
+# [OpenID Connect](#tab/openid)
+
+You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
+
+1. Open your GitHub repository and go to **Settings**.
+
+1. Select **Settings > Secrets > New secret**.
+
+1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
+
+ |GitHub Secret | Active Directory Application |
+ |---|---|
+ |AZURE_CLIENT_ID | Application (client) ID |
+ |AZURE_TENANT_ID | Directory (tenant) ID |
+ |AZURE_SUBSCRIPTION_ID | Subscription ID |
+
+1. Save each secret by selecting **Add secret**.
+++
+## Add your workflow
+
+1. Go to **Actions** for your GitHub repository.
+
+2. Select **Set up your workflow yourself**.
+
+3. Delete everything after the `on:` section of your workflow file. For example, your remaining workflow may look like this.
+
+ ```yaml
+ name: CI
+
+ on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
+ ```
+
+4. Rename your workflow to `MySQL for GitHub Actions` and add the checkout and login actions. These actions check out your site code and authenticate with Azure using the GitHub secrets you created earlier.
+
+ # [Service principal](#tab/userlevel)
+
+ ```yaml
+ name: MySQL for GitHub Actions
+
+ on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
+
+ jobs:
+ build:
+ runs-on: windows-latest
+ steps:
+ - uses: actions/checkout@v1
+ - uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+ ```
+
+ # [OpenID Connect](#tab/openid)
+
+ ```yaml
+ name: MySQL for GitHub Actions
+
+ on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
+
+ jobs:
+ build:
+ runs-on: windows-latest
+ steps:
+ - uses: actions/checkout@v1
+ - uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+ ```
+
+ ___
+
+5. Use the Azure MySQL Deploy action to connect to your MySQL instance. Replace `MYSQL_SERVER_NAME` with the name of your server. You should have a MySQL data file named `data.sql` at the root level of your repository.
+
+ ```yaml
+ - uses: azure/mysql@v1
+ with:
+ server-name: MYSQL_SERVER_NAME
+ connection-string: ${{ secrets.AZURE_MYSQL_CONNECTION_STRING }}
+ sql-file: './data.sql'
+ ```
+
+6. Complete your workflow by adding an action to sign out of Azure. Here's the completed workflow. The file will appear in the `.github/workflows` folder of your repository.
+
+ # [Service principal](#tab/userlevel)
+
+ ```yaml
+ name: MySQL for GitHub Actions
+
+ on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
+ jobs:
+ build:
+ runs-on: windows-latest
+ steps:
+ - uses: actions/checkout@v1
+ - uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+
+ - uses: azure/mysql@v1
+ with:
+ server-name: MYSQL_SERVER_NAME
+ connection-string: ${{ secrets.AZURE_MYSQL_CONNECTION_STRING }}
+ sql-file: './data.sql'
+
+ # Azure logout
+ - name: logout
+ run: |
+ az logout
+ ```
+ # [OpenID Connect](#tab/openid)
+
+ ```yaml
+ name: MySQL for GitHub Actions
+
+ on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
+ jobs:
+ build:
+ runs-on: windows-latest
+ steps:
+ - uses: actions/checkout@v1
+ - uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+ - uses: azure/mysql@v1
+ with:
+ server-name: MYSQL_SERVER_NAME
+ connection-string: ${{ secrets.AZURE_MYSQL_CONNECTION_STRING }}
+ sql-file: './data.sql'
+
+ # Azure logout
+ - name: logout
+ run: |
+ az logout
+ ```
+ ___
+
+## Review your deployment
+
+1. Go to **Actions** for your GitHub repository.
+
+2. Open the first result to see detailed logs of your workflow's run.
+
+ :::image type="content" source="media/quickstart-mysql-github-actions/github-actions-run-mysql.png" alt-text="Log of GitHub actions run":::
+
+## Clean up resources
+
+When your Azure MySQL database and repository are no longer needed, clean up the resources you deployed by deleting the resource group and your GitHub repository.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about Azure and GitHub integration](/azure/developer/github/)
mysql Reference Stored Procedures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/reference-stored-procedures.md
+
+ Title: Management stored procedures - Azure Database for MySQL
+description: Learn which stored procedures in Azure Database for MySQL are useful to help you configure data-in replication, set the timezone, and kill queries.
+++++ Last updated : 3/18/2020++
+# Azure Database for MySQL management stored procedures
++
+Stored procedures are available on Azure Database for MySQL servers to help manage your MySQL server. This includes managing your server's connections, queries, and setting up Data-in Replication.
+
+## Data-in Replication stored procedures
+
+Data-in Replication allows you to synchronize data from a MySQL server running on-premises, in virtual machines, or database services hosted by other cloud providers into the Azure Database for MySQL service.
+
+The following stored procedures are used to set up or remove Data-in Replication between a source and replica.
+
+|**Stored Procedure Name**|**Input Parameters**|**Output Parameters**|**Usage Note**|
+|--|--|--|--|
+|*mysql.az_replication_change_master*|master_host<br/>master_user<br/>master_password<br/>master_port<br/>master_log_file<br/>master_log_pos<br/>master_ssl_ca|N/A|To transfer data with SSL mode, pass in the CA certificate's content in the master_ssl_ca parameter.<br/><br/>To transfer data without SSL, pass in an empty string in the master_ssl_ca parameter.|
+|*mysql.az_replication_start*|N/A|N/A|Starts replication.|
+|*mysql.az_replication_stop*|N/A|N/A|Stops replication.|
+|*mysql.az_replication_remove_master*|N/A|N/A|Removes the replication relationship between the source and replica.|
+|*mysql.az_replication_skip_counter*|N/A|N/A|Skips one replication error.|
+
+To set up Data-in Replication between a source and a replica in Azure Database for MySQL, refer to [how to configure Data-in Replication](how-to-data-in-replication.md).
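+
+As an illustration only, these procedures are called from a mysql client like any other stored procedure; the host, credentials, and binary log coordinates below are placeholders:
+
+```sql
+-- Point the replica at the source; pass an empty string for master_ssl_ca to transfer data without SSL.
+CALL mysql.az_replication_change_master('source.example.com', 'syncuser', '<password>', 3306, 'mysql-bin.000002', 120, '');
+
+-- Start replicating.
+CALL mysql.az_replication_start;
+```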
+
+## Other stored procedures
+
+The following stored procedures are available in Azure Database for MySQL to manage your server.
+
+|**Stored Procedure Name**|**Input Parameters**|**Output Parameters**|**Usage Note**|
+|--|--|--|--|
+|*mysql.az_kill*|processlist_id|N/A|Equivalent to [`KILL CONNECTION`](https://dev.mysql.com/doc/refman/8.0/en/kill.html) command. Will terminate the connection associated with the provided processlist_id after terminating any statement the connection is executing.|
+|*mysql.az_kill_query*|processlist_id|N/A|Equivalent to [`KILL QUERY`](https://dev.mysql.com/doc/refman/8.0/en/kill.html) command. Will terminate the statement the connection is currently executing. Leaves the connection itself alive.|
+|*mysql.az_load_timezone*|N/A|N/A|Loads time zone tables to allow the `time_zone` parameter to be set to named values (for example, "US/Pacific").|
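+
+For example, to terminate a runaway connection, look up its process ID and pass it to the procedure; a minimal sketch (4242 is a placeholder ID):
+
+```sql
+-- List current connections and note the Id of the offending one.
+SHOW PROCESSLIST;
+
+-- Terminate that connection, including any statement it's running.
+CALL mysql.az_kill(4242);
+```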
+
+## Next steps
+- Learn how to set up [Data-in Replication](how-to-data-in-replication.md)
+- Learn how to use the [time zone tables](how-to-server-parameters.md#working-with-the-time-zone-parameter)
mysql Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/sample-scripts-azure-cli.md
+
+ Title: Azure CLI samples - Azure Database for MySQL | Microsoft Docs
+description: This article lists the Azure CLI code samples available for interacting with Azure Database for MySQL.
+++
+ms.devlang: azurecli
++ Last updated : 09/17/2021
+keywords: azure cli samples, azure cli code samples, azure cli script samples
+
+# Azure CLI samples for Azure Database for MySQL
+
+The following table includes links to sample Azure CLI scripts for Azure Database for MySQL.
+
+| Sample link | Description |
+|---|---|
+|**Create a server**||
+| [Create a server and firewall rule](../scripts/sample-create-server-and-firewall-rule.md) | Azure CLI script that creates a single Azure Database for MySQL server and configures a server-level firewall rule. |
+|**Scale a server**||
+| [Scale a server](../scripts/sample-scale-server.md) | Azure CLI script that scales a single Azure Database for MySQL server up or down to allow for changing performance needs. |
+|**Change server configurations**||
+| [Change server configurations](../scripts/sample-change-server-configuration.md) | Azure CLI script that changes the configuration of a single Azure Database for MySQL server. |
+|**Restore a server**||
+| [Restore a server](../scripts/sample-point-in-time-restore.md) | Azure CLI script that restores a single Azure Database for MySQL server to a previous point in time. |
+|**Work with server logs**||
+| [Enable server logs](../scripts/sample-server-logs.md) | Azure CLI script that enables server logs of a single Azure Database for MySQL server. |
mysql Sample Scripts Java Connection Pooling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/sample-scripts-java-connection-pooling.md
+
+ Title: Java samples to illustrate connection pooling
+description: This article lists java samples to illustrate connection pooling.
++++++ Last updated : 02/28/2018+
+# Java sample to illustrate connection pooling
++
+The following sample code illustrates connection pooling in Java.
+
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.Stack;
+
+public class MySQLConnectionPool {
+    private String databaseUrl;
+    private String userName;
+    private String password;
+    private int maxPoolSize = 10;
+    private int connNum = 0;
+
+    private static final String SQL_VERIFYCONN = "select 1";
+
+    Stack<Connection> freePool = new Stack<>();
+    Set<Connection> occupiedPool = new HashSet<>();
+
+    /**
+     * Constructor
+     *
+     * @param databaseUrl
+     *            The connection URL
+     * @param userName
+     *            The user name
+     * @param password
+     *            The password
+     * @param maxSize
+     *            The maximum size of the connection pool
+     */
+    public MySQLConnectionPool(String databaseUrl, String userName,
+            String password, int maxSize) {
+        this.databaseUrl = databaseUrl;
+        this.userName = userName;
+        this.password = password;
+        this.maxPoolSize = maxSize;
+    }
+
+    /**
+     * Get an available connection
+     *
+     * @return An available connection
+     * @throws SQLException
+     *             When an available connection cannot be obtained
+     */
+    public synchronized Connection getConnection() throws SQLException {
+        Connection conn = null;
+
+        if (isFull()) {
+            throw new SQLException("The connection pool is full.");
+        }
+
+        conn = getConnectionFromPool();
+
+        // If there is no free connection, create a new one.
+        if (conn == null) {
+            conn = createNewConnectionForPool();
+        }
+
+        // Azure Database for MySQL closes connections that have been idle for
+        // some time, so verify that the connection is still active and
+        // reconnect it if necessary.
+        conn = makeAvailable(conn);
+        return conn;
+    }
+
+    /**
+     * Return a connection to the pool
+     *
+     * @param conn
+     *            The connection
+     * @throws SQLException
+     *             When the connection has already been returned or wasn't
+     *             obtained from this pool
+     */
+    public synchronized void returnConnection(Connection conn)
+            throws SQLException {
+        if (conn == null) {
+            throw new NullPointerException();
+        }
+        if (!occupiedPool.remove(conn)) {
+            throw new SQLException(
+                    "The connection is returned already or it isn't for this pool");
+        }
+        freePool.push(conn);
+    }
+
+    /**
+     * Check whether the pool is full.
+     *
+     * @return true if the pool is full
+     */
+    private synchronized boolean isFull() {
+        return ((freePool.size() == 0) && (connNum >= maxPoolSize));
+    }
+
+    /**
+     * Create a connection for the pool
+     *
+     * @return The newly created connection
+     * @throws SQLException
+     *             When a new connection can't be created
+     */
+    private Connection createNewConnectionForPool() throws SQLException {
+        Connection conn = createNewConnection();
+        connNum++;
+        occupiedPool.add(conn);
+        return conn;
+    }
+
+    /**
+     * Create a new connection
+     *
+     * @return The newly created connection
+     * @throws SQLException
+     *             When a new connection can't be created
+     */
+    private Connection createNewConnection() throws SQLException {
+        Connection conn = null;
+        conn = DriverManager.getConnection(databaseUrl, userName, password);
+        return conn;
+    }
+
+    /**
+     * Get a connection from the pool. If there is no free connection, return
+     * null.
+     *
+     * @return The connection
+     */
+    private Connection getConnectionFromPool() {
+        Connection conn = null;
+        if (freePool.size() > 0) {
+            conn = freePool.pop();
+            occupiedPool.add(conn);
+        }
+        return conn;
+    }
+
+    /**
+     * Make sure the connection is available now. Otherwise, reconnect it.
+     *
+     * @param conn
+     *            The connection to verify
+     * @return The available connection
+     * @throws SQLException
+     *             When an available connection cannot be obtained
+     */
+    private Connection makeAvailable(Connection conn) throws SQLException {
+        if (isConnectionAvailable(conn)) {
+            return conn;
+        }
+
+        // If the connection isn't available, reconnect it.
+        occupiedPool.remove(conn);
+        connNum--;
+        conn.close();
+
+        conn = createNewConnection();
+        occupiedPool.add(conn);
+        connNum++;
+        return conn;
+    }
+
+    /**
+     * Run a simple SQL statement to verify that the connection is still
+     * available.
+     *
+     * @param conn
+     *            The connection to verify
+     * @return true if the connection is currently available
+     */
+    private boolean isConnectionAvailable(Connection conn) {
+        try (Statement st = conn.createStatement()) {
+            st.executeQuery(SQL_VERIFYCONN);
+            return true;
+        } catch (SQLException e) {
+            return false;
+        }
+    }
+
+    // Example usage of the pool
+    public static void main(String[] args) throws SQLException {
+        Connection conn = null;
+        MySQLConnectionPool pool = new MySQLConnectionPool(
+                "jdbc:mysql://mysqlaasdevintic-sha.cloudapp.net:3306/<Your DB name>",
+                "<Your user>", "<Your Password>", 2);
+        try {
+            conn = pool.getConnection();
+            try (Statement statement = conn.createStatement()) {
+                ResultSet res = statement.executeQuery("show tables");
+                System.out.println("Tables in the database:");
+                while (res.next()) {
+                    String tblName = res.getString(1);
+                    System.out.println(tblName);
+                }
+            }
+        } finally {
+            if (conn != null) {
+                pool.returnConnection(conn);
+            }
+        }
+    }
+
+}
+
+```
mysql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/security-controls-policy.md
+
+ Title: Azure Policy Regulatory Compliance controls for Azure Database for MySQL
+description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MySQL. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
Last updated: 05/10/2022
+# Azure Policy Regulatory Compliance controls for Azure Database for MySQL
+
+[Regulatory Compliance in Azure Policy](../../governance/policy/concepts/regulatory-compliance.md) provides Microsoft-created and Microsoft-managed initiative definitions, known as _built-ins_, for the **compliance domains** and **security controls** related to different compliance standards. This
+page lists the **compliance domains** and **security controls** for Azure Database for MySQL. You can assign the built-ins for a **security control** individually to help make your Azure resources compliant with the specific standard.
+
+## Next steps
+
+- Learn more about [Azure Policy Regulatory Compliance](../../governance/policy/concepts/regulatory-compliance.md).
+- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
mysql Select Right Deployment Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/select-right-deployment-type.md
+
+ Title: Selecting the right deployment type - Azure Database for MySQL
+description: This article describes what factors to consider before you deploy Azure Database for MySQL as either infrastructure as a service (IaaS) or platform as a service (PaaS).
+ Last updated: 08/26/2020
+# Choose the right MySQL Server option in Azure
+
+With Azure, your MySQL server workloads can run in a hosted virtual machine infrastructure as a service (IaaS) or as a hosted platform as a service (PaaS). PaaS has two deployment options, and there are service tiers within each deployment option. When you choose between IaaS and PaaS, you must decide whether you want to manage your database yourself (patching, backups, security, monitoring, and scaling) or delegate these operations to Azure.
+
+When making your decision, consider the following two options:
+
+- **Azure Database for MySQL**. This option is a fully managed MySQL database engine based on the stable version of MySQL community edition. This relational database as a service (DBaaS), hosted on the Azure cloud platform, falls into the industry category of PaaS. With a managed instance of MySQL on Azure, you can use built-in features such as automated patching, high availability, automated backups, elastic scaling, enterprise-grade security, compliance and governance, and monitoring and alerting that otherwise require extensive configuration when MySQL Server is either on-premises or in an Azure VM. When you use MySQL as a service, you pay as you go, with options to scale up or scale out for greater control with no interruption. [Azure Database for MySQL](overview.md), powered by the MySQL community edition, is available in two deployment modes:
+
+ - [Flexible Server](../flexible-server/overview.md) - Azure Database for MySQL Flexible Server is a fully managed, production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. The flexible server architecture allows users to opt for high availability within a single availability zone or across multiple availability zones. Flexible Server provides better cost optimization controls with the ability to stop/start the server and a burstable compute tier, ideal for workloads that don't need full compute capacity continuously. Flexible Server also supports reserved instances, allowing you to save up to 63% on cost, ideal for production workloads with predictable compute capacity requirements. The service supports the community versions of MySQL 5.7 and 8.0 and is generally available today in a wide variety of [Azure regions](../flexible-server/overview.md#azure-regions). Flexible Server is best suited for all new developments and for migration of production workloads to the Azure Database for MySQL service.
+
+ - [Single Server](single-server-overview.md) is a fully managed database service designed for minimal customization. The Single Server platform is designed to handle most of the database management functions such as patching, backups, high availability, and security with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability in a single availability zone. It supports the community versions of MySQL 5.6 (retired), 5.7, and 8.0. The service is generally available today in a wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/). Single Server is best suited **only for existing applications already leveraging Single Server**. For all new developments or migrations, Flexible Server is the recommended deployment option.
+
+- **MySQL on Azure VMs**. This option falls into the industry category of IaaS. With this service, you can run MySQL Server inside a managed virtual machine on the Azure cloud platform. All recent versions and editions of MySQL can be installed in the virtual machine.
+
+## Comparing the MySQL deployment options in Azure
+
+The main differences between these options are listed in the following table:
+
+| Attribute | Azure Database for MySQL<br/>Single Server |Azure Database for MySQL<br/>Flexible Server |MySQL on Azure VMs |
+|:-|:-|:|:|
+| [**General**](../flexible-server/overview.md) | | | |
+| General availability | Generally Available | Generally Available | Generally Available |
+| Service-level agreement (SLA) | 99.99% availability SLA |99.99% using Availability Zones| 99.99% using Availability Zones|
+| Underlying O/S | Windows | Linux | User Managed |
+| MySQL Edition | Community Edition | Community Edition | Community or Enterprise Edition |
+| MySQL Version Support | 5.6 (retired), 5.7 & 8.0| 5.7 & 8.0 | Any version|
+| Availability zone selection for application colocation | No | Yes | Yes |
+| Username in connection string | `<user_name>@server_name`. For example, `mysqlusr@mydemoserver` | Just username. For example, `mysqlusr` | Just username. For example, `mysqlusr` |
+| [**Compute & Storage Scaling**](../flexible-server/concepts-compute-storage.md) | | | |
+| Compute tiers | Basic, General Purpose, Memory Optimized | Burstable, General Purpose, Memory Optimized | Burstable, General Purpose, Memory Optimized |
+| Compute scaling | Supported (Scaling from and to Basic tier is **not supported**)| Supported | Supported|
+| Storage size | 5 GiB to 16 TiB| 20 GiB to 16 TiB | 32 GiB to 32,767 GiB|
+| Online Storage scaling | Supported| Supported| Not Supported|
+| Auto storage scaling | Supported| Supported| Not Supported|
+| IOPs scaling | Not Supported| Supported| Not Supported|
+| [**Cost Optimization**](https://azure.microsoft.com/pricing/details/mysql/flexible-server/) | | | |
+| Reserved Instance Pricing | Supported | Supported | Supported |
+| Stop/Start Server for development | Server can be stopped up to 7 days | Server can be stopped up to 30 days | Supported |
+| Low cost Burstable SKU | Not Supported | Supported | Supported |
+| [**Networking/Security**](concepts-security.md) | | | |
+| Network Connectivity | - Public endpoints with server firewall.<br/> - Private access with Private Link support.|- Public endpoints with server firewall.<br/> - Private access with Virtual Network integration.| - Public endpoints with server firewall.<br/> - Private access with Private Link support.|
+| SSL/TLS | Enabled by default with support for TLS v1.2, 1.1 and 1.0 | Enabled by default with support for TLS v1.2, 1.1 and 1.0| Supported with TLS v1.2, 1.1 and 1.0 |
+| Data Encryption at rest | Supported with customer managed keys (BYOK) | Supported with service managed keys | Not Supported|
+| Azure AD Authentication | Supported | Not Supported | Not Supported|
+| Microsoft Defender for Cloud support | Yes | No | No |
+| Server Audit | Supported | Supported | User Managed |
+| [**Patching & Maintenance**](../flexible-server/concepts-maintenance.md) | | | |
+| Operating system patching| Automatic | Automatic | User managed |
+| MySQL minor version upgrade | Automatic | Automatic | User managed |
+| MySQL in-place major version upgrade | Supported from 5.6 to 5.7 | Not Supported | User Managed |
+| Maintenance control | System managed | Customer managed | User managed |
+| Maintenance window | Anytime within a 15-hour window | 1-hour window | User managed |
+| Planned maintenance notification | 3 days | 5 days | User managed |
+| [**High Availability**](../flexible-server/concepts-high-availability.md) | | | |
+| High availability | Built-in HA (without hot standby)| Built-in HA (without hot standby), Same-zone and zone-redundant HA with hot standby | User managed |
+| Zone redundancy | Not supported | Supported | Supported|
+| Standby zone placement | Not supported | Supported | Supported|
+| Automatic failover | Yes (spins another server)| Yes | User Managed|
+| User initiated Forced failover | No | Yes | User Managed |
+| Transparent Application failover | Yes | Yes | User Managed|
+| [**Replication**](../flexible-server/concepts-read-replicas.md) | | | |
+| Support for read replicas | Yes | Yes | User Managed |
+| Number of read replicas supported | 5 | 10 | User Managed |
+| Mode of replication | Asynchronous | Asynchronous | User Managed |
+| GTID support for read replicas | Supported | Supported | User Managed |
+| Cross-region support (Geo-replication) | Yes | Not supported | User Managed |
+| Hybrid scenarios | Supported with [Data-in Replication](./concepts-data-in-replication.md)| Supported with [Data-in Replication](../flexible-server/concepts-data-in-replication.md) | User Managed |
+| GTID support for data-in replication | Supported | Supported | User Managed |
+| Data-out replication | Not Supported | In preview | Supported |
+| [**Backup and Recovery**](../flexible-server/concepts-backup-restore.md) | | | |
+| Automated backups | Yes | Yes | No |
+| Backup retention | 7-35 days | 1-35 days | User Managed |
+| Long term retention of backups | User Managed | User Managed | User Managed |
+| Exporting backups | Supported using logical backups | Supported using logical backups | Supported |
+| Point in time restore capability to any time within the retention period | Yes | Yes | User Managed |
+| Fast restore point | No | Yes | No |
+| Ability to restore on a different zone | Not supported | Yes | Yes |
+| Ability to restore to a different VNET | No | Yes | Yes |
+| Ability to restore to a different region | Yes (Geo-redundant) | Yes (Geo-redundant) | User Managed |
+| Ability to restore a deleted server | Yes | Yes | No |
+| [**Disaster Recovery**](../flexible-server/concepts-business-continuity.md) | | | |
+| DR across Azure regions | Using cross region read replicas, geo-redundant backup | Using geo-redundant backup | User Managed |
+| Automatic failover | No | Not Supported | No |
+| Can use the same r/w endpoint | No | Not Supported | No |
+| [**Monitoring**](../flexible-server/concepts-monitoring.md) | | | |
+| Azure Monitor integration & alerting | Supported | Supported | User Managed |
+| Monitoring database operations | Supported | Supported | User Managed |
+| Query Performance Insights | Supported | Supported (using Workbooks)| User Managed |
+| Server Logs | Supported | Supported (using Diagnostics logs) | User Managed |
+| Audit Logs | Supported | Supported | Supported |
+| Error Logs | Not Supported | Supported | Supported |
+| Azure advisor support | Supported | Not Supported | Not Supported |
+| **Plugins** | | | |
+| validate_password | Not Supported | In preview | Supported |
+| caching_sha2_password | Not Supported | In preview | Supported |
+| [**Developer Productivity**](../flexible-server/quickstart-create-server-cli.md) | | | |
+| Fleet Management | Supported with Azure CLI, PowerShell, REST, and Azure Resource Manager | Supported with Azure CLI, PowerShell, REST, and Azure Resource Manager | Supported for VMs with Azure CLI, PowerShell, REST, and Azure Resource Manager |
+| Terraform Support | Supported | Supported | Supported |
+| GitHub Actions | Supported | Supported | User Managed |
+
+## Business motivations for choosing PaaS or IaaS
+
+There are several factors that can influence your decision to choose PaaS or IaaS to host your MySQL databases.
+
+### Cost
+
+Cost reduction is often the primary consideration that determines the best solution for hosting your databases. This is true whether you're a startup with little cash or a team in an established company that operates under tight budget constraints. This section describes billing and licensing basics in Azure as they apply to Azure Database for MySQL and MySQL on Azure VMs.
+
+#### Billing
+
+Azure Database for MySQL is currently available as a service in several tiers with different prices for resources. All resources are billed hourly at a fixed rate. For the latest information on the currently supported service tiers, compute sizes, and storage amounts, see the [pricing page](https://azure.microsoft.com/pricing/details/mysql/). You can dynamically adjust service tiers and compute sizes to match your application's varied throughput needs. You're billed for outgoing Internet traffic at regular [data transfer rates](https://azure.microsoft.com/pricing/details/data-transfers/).
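+
+For example, you can scale compute for an existing server with the Azure CLI (a sketch; the server and resource group names are placeholders, and the target SKU must be valid for your region and tier):
+
+```azurecli-interactive
+az mysql server update --resource-group myresourcegroup --name mydemoserver --sku-name GP_Gen5_4
+```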
+
+With Azure Database for MySQL, Microsoft automatically configures, patches, and upgrades the database software. These automated actions reduce your administration costs. Also, Azure Database for MySQL has [automated backups](./concepts-backup.md) capabilities. These capabilities help you achieve significant cost savings, especially when you have a large number of databases. In contrast, with MySQL on Azure VMs you can choose and run any MySQL version. No matter what MySQL version you use, you pay for the provisioned VM, the storage costs associated with data, backups, monitoring data, and logs, and the costs for the specific MySQL license type used (if any).
+
+Azure Database for MySQL provides built-in high availability for any kind of node-level interruption while still maintaining the 99.99% SLA guarantee for the service. However, for database high availability within VMs, you use the high availability options like [MySQL replication](https://dev.mysql.com/doc/refman/8.0/en/replication.html) that are available on a MySQL database. Using a supported high availability option doesn't provide an additional SLA. But it does let you achieve greater than 99.99% database availability at additional cost and administrative overhead.
+
+For more information on pricing, see the following articles:
+
+- [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/)
+- [Virtual machine pricing](https://azure.microsoft.com/pricing/details/virtual-machines/)
+- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/)
+
+### Administration
+
+For many businesses, the decision to transition to a cloud service is as much about offloading complexity of administration as it is about cost.
+
+With IaaS, Microsoft:
+
+- Administers the underlying infrastructure.
+- Provides automated patching for underlying hardware and OS.
+
+With PaaS, Microsoft:
+
+- Administers the underlying infrastructure.
+- Provides automated patching for underlying hardware, OS, and database engine.
+- Manages high availability of the database.
+- Automatically performs backups and replicates all data to provide disaster recovery.
+- Encrypts the data at rest and in motion by default.
+- Monitors your server and provides features for query performance insights and performance recommendations
+
+The following list describes administrative considerations for each option:
+
+- With Azure Database for MySQL, you can continue to administer your database. But you no longer need to manage the database engine, the operating system, or the hardware. Examples of items you can continue to administer include:
+
+ - Databases
+ - Sign-in
+ - Index tuning
+ - Query tuning
+ - Auditing
+ - Security
+
+ Additionally, configuring high availability to another data center requires minimal to no configuration or administration.
+
+- With MySQL on Azure VMs, you have full control over the operating system and the MySQL server instance configuration. With a VM, you decide when to update or upgrade the operating system and database software and what patches to apply. You also decide when to install any additional software such as an antivirus application. Some automated features are provided to greatly simplify patching, backup, and high availability. You can control the size of the VM, the number of disks, and their storage configurations. For more information, see [Virtual machine and cloud service sizes for Azure](../../virtual-machines/sizes.md).
+
+### Time to move to Azure
+
+- Azure Database for MySQL is the right solution for cloud-designed applications when developer productivity and fast time to market for new solutions are critical. With programmatic, DBA-like functionality, the service is suitable for cloud architects and developers because it lowers the need to manage the underlying operating system and database.
+
+- When you want to avoid the time and expense of acquiring new on-premises hardware, MySQL on Azure VMs is the right solution for applications that require granular control and customization of the MySQL engine that the service doesn't support, or that require access to the underlying OS. This solution is also suitable for migrating existing on-premises applications and databases to Azure intact, for cases where Azure Database for MySQL is a poor fit.
+
+Because there's no need to change the presentation, application, and data layers, you save time and budget on rearchitecting your existing solution. Instead, you can focus on migrating all your solutions to Azure and addressing some performance optimizations that the Azure platform might require.
+
+## Next steps
+
+- See [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/MySQL/).
+- Get started by [creating your first server](./quickstart-create-mysql-server-database-using-azure-portal.md).
mysql Single Server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/single-server-overview.md
+
+ Title: Overview - Azure Database for MySQL Single Server
+description: Learn about the Azure Database for MySQL Single server, a relational database service in the Microsoft cloud based on the MySQL Community Edition.
+ Last updated: 6/19/2021
+# Azure Database for MySQL Single Server
+
+[Azure Database for MySQL](overview.md) powered by the MySQL community edition is available in two deployment modes:
+
+- Flexible Server
+- Single Server
+
+In this article, we'll provide an overview and introduce core concepts of the Single Server deployment model. To learn about the Flexible Server deployment mode, see the [flexible server overview](../flexible-server/index.yml). For information on how to decide which deployment option is appropriate for your workload, see [choosing the right MySQL server option in Azure](select-right-deployment-type.md).
+
+## Overview
+Azure Database for MySQL Single Server is a fully managed database service designed for minimal customization. The Single Server platform is designed to handle most of the database management functions such as patching, backups, high availability, and security with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability in a single availability zone. It supports the community versions of MySQL 5.6 (retired), 5.7, and 8.0. The service is generally available today in a wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
+
+Single Server is best suited **only for existing applications already leveraging Single Server**. For all new developments or migrations, Flexible Server is the recommended deployment option. To learn about the differences between the Flexible Server and Single Server deployment options, see the [select the right deployment option for you](select-right-deployment-type.md) documentation.
+
+## High availability
+
+The Single Server deployment model is optimized for built-in high availability and elasticity at reduced cost. The architecture separates compute and storage. The database engine runs on a proprietary compute container, while data files reside on Azure storage. The storage maintains three locally redundant synchronous copies of the database files, ensuring data durability.
+
+During planned or unplanned failover events, if the server goes down, the service maintains high availability of the servers by using the following automated procedure:
+
+1. A new compute container is provisioned.
+2. The storage with data files is mapped to the new container.
+3. The MySQL database engine is brought online on the new compute container.
+4. The gateway service provides a transparent failover; no application-side changes are required.
+
+The typical failover time ranges from 60 to 120 seconds. The cloud-native design of Single Server allows it to support 99.99% availability, eliminating the cost of a passive hot standby.
+
+Azure's industry leading 99.99% availability service level agreement (SLA), powered by a global network of Microsoft-managed datacenters, helps keep your applications running 24/7.
+
+## Automated Patching
+
+The service performs automated patching of the underlying hardware, OS, and database engine. The patching includes security and software updates. For the MySQL engine, minor version upgrades are automatic and included as part of the patching cycle. No user action or configuration is required for patching. The patching frequency is service managed based on the criticality of the payload. In general, the service follows a monthly release schedule as part of continuous integration and release. Users can subscribe to the [planned maintenance notification](concepts-monitoring.md) to receive a notification of upcoming maintenance 72 hours before the event.
+
+## Automatic Backups
+
+Single Server automatically creates server backups and stores them in user-configured locally redundant or geo-redundant storage. Backups can be used to restore your server to any point in time within the backup retention period. The default backup retention period is seven days. The retention can be optionally configured up to 35 days. All backups are encrypted using AES 256-bit encryption. See [Backups](concepts-backup.md) for details.
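+
+For example, backup retention and redundancy can be chosen when the server is created with the Azure CLI (a sketch; the names and password are placeholders, and the redundancy option generally can't be changed after the server is created):
+
+```azurecli-interactive
+az mysql server create --resource-group myresourcegroup --name mydemoserver --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2 --backup-retention 14 --geo-redundant-backup Enabled
+```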
+
+## Adjust performance and scale within seconds
+
+Single Server is available in three SKU tiers: Basic, General Purpose, and Memory Optimized. The Basic tier is best suited for low-cost development and low-concurrency workloads. The General Purpose and Memory Optimized tiers are better suited for production workloads requiring high concurrency, scale, and predictable performance. You can build your first app on a small database for a few dollars a month, and then adjust the scale to meet the needs of your solution. The storage scaling is online and supports storage autogrowth. Dynamic scalability enables your database to respond transparently to rapidly changing resource requirements. You only pay for the resources you consume. See [Pricing tiers](./concepts-pricing-tiers.md) for details.
+
+## Enterprise grade Security, Compliance, and Governance
+
+Single Server uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, and temporary files created while running queries are encrypted. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys can be system managed (default) or [customer managed](concepts-data-encryption-mysql.md). The service encrypts data in-motion with transport layer security (SSL/TLS) enforced by default. The service supports TLS versions 1.2, 1.1 and 1.0 with an ability to enforce [minimum TLS version](concepts-ssl-connection-security.md).
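+
+For example, the minimum TLS version can be enforced with the Azure CLI (a sketch; the server and resource group names are placeholders):
+
+```azurecli-interactive
+az mysql server update --resource-group myresourcegroup --name mydemoserver --minimal-tls-version TLS1_2
+```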
+
+The service allows private access to the servers using [private link](concepts-data-access-security-private-link.md) and offers threat protection through the optional [Microsoft Defender for open-source relational databases](/azure/defender-for-cloud/defender-for-databases-introduction) plan. Microsoft Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases.
+
+In addition to native authentication, Single Server supports [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md) authentication. Azure AD authentication is a mechanism of connecting to the MySQL servers using identities defined and managed in Azure AD. With Azure AD authentication, you can manage database user identities and other Azure services in a central location, which simplifies and centralizes access control.
+
+[Audit logging](concepts-audit-logs.md) is available to track all database level activity.
+
+Single Server is compliant with industry-leading certifications such as FedRAMP, HIPAA, and PCI DSS. Visit the [Azure Trust Center](https://www.microsoft.com/trustcenter/security) for information about Azure's platform security.
+
+For more information about Azure Database for MySQL security features, see the [security overview](concepts-security.md).
+
+## Monitoring and alerting
+
+Single Server is equipped with built-in performance monitoring and alerting features. All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics. The service allows configuring slow query logs and comes with a differentiated [Query Store](concepts-query-store.md) feature. Query Store simplifies performance troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Using these tools, you can quickly optimize your workloads and configure your server for best performance. See [Monitoring](concepts-monitoring.md) for details.
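+
+For example, Query Store data can be inspected directly from any MySQL client (a minimal sketch; it assumes Query Store is enabled on the server):
+
+```sql
+-- List recent entries captured by Query Store.
+SELECT query_id, query_digest_text
+FROM mysql.query_store
+LIMIT 5;
+```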
+
+## Migration
+
+The service runs the community version of MySQL, which allows full application compatibility and requires minimal refactoring cost to migrate an existing application developed on the MySQL engine to Single Server. The migration to Single Server can be performed using one of the following options:
+
+- **Dump and Restore** - For offline migrations, where users can afford some downtime, dump and restore using community tools like mysqldump/mydumper provides the fastest way to migrate (see the example after this list). See [Migrate using dump and restore](concepts-migrate-dump-restore.md) for details.
+- **Azure Database Migration Service** - For seamless and simplified offline migrations to Single Server with high-speed data migration, [Azure Database Migration Service](../../dms/tutorial-mysql-azure-mysql-offline-portal.md) can be used.
+- **Data-in replication** - For minimal-downtime migrations, data-in replication, which relies on binlog-based replication, can also be used. Data-in replication is preferred for minimal-downtime migrations by hands-on experts looking for more control over the migration. See [data-in replication](concepts-data-in-replication.md) for details.
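+
+For example, with the dump and restore option, a single database can be exported and then imported into the Azure server using the community tools (a sketch; the source host, user, and database names are placeholders):
+
+```cmd
+mysqldump -h <source_host> -u <user> -p mysampledb > mysampledb_backup.sql
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p mysampledb < mysampledb_backup.sql
+```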
+
+## Contacts
+
+For any questions or suggestions you might have about working with Azure Database for MySQL, send an email to the Azure Database for MySQL Team ([@Ask Azure DB for MySQL](mailto:AskAzureDBforMySQL@service.microsoft.com)). This email address isn't a technical support alias.
+
+In addition, consider the following points of contact as appropriate:
+
+- To contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+- To fix an issue with your account, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
+- To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/d365community/forum/47b1e71d-ee24-ec11-b6e6-000d3a4f0da0).
+
+## Next steps
+
+Now that you've read an introduction to Azure Database for MySQL - Single Server deployment mode, you're ready to:
+
+- Create your first server.
+ - [Create an Azure Database for MySQL server using Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md)
+ - [Create an Azure Database for MySQL server using Azure CLI](quickstart-create-mysql-server-database-using-azure-cli.md)
+ - [Azure CLI samples for Azure Database for MySQL](sample-scripts-azure-cli.md)
+
+- Build your first app using your preferred language:
+ - [Python](./connect-python.md)
+ - [Node.JS](./connect-nodejs.md)
+ - [Java](./connect-java.md)
+ - [Ruby](./connect-ruby.md)
+ - [PHP](./connect-php.md)
+ - [.NET (C#)](./connect-csharp.md)
+ - [Go](./connect-go.md)
mysql Single Server Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/single-server-whats-new.md
+
+ Title: What's new in Azure Database for MySQL Single Server
+description: Learn about recent updates to Azure Database for MySQL - Single server, a relational database service in the Microsoft cloud based on the MySQL Community Edition.
+ Last updated: 06/17/2021
+# What's new in Azure Database for MySQL - Single Server?
+
+Azure Database for MySQL is a relational database service in the Microsoft cloud. The service is based on the [MySQL Community Edition](https://www.mysql.com/products/community/) (available under the GPLv2 license) database engine and supports versions 5.6 (retired), 5.7, and 8.0. [Azure Database for MySQL - Single Server](./overview.md#azure-database-for-mysqlsingle-server) is a deployment mode that provides a fully managed database service with minimal requirements for database customization. The Single Server platform is designed to handle most database management functions such as patching, backups, high availability, and security, all with minimal user configuration and control.
+
+This article summarizes new releases and features in Azure Database for MySQL - Single Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first.
+
+## May 2022
+
+- **Enabled the ability to change the server parameter `innodb_ft_server_stopword_table` from the portal/CLI**
+
+ Users can now change the value of the `innodb_ft_server_stopword_table` parameter using the Azure portal and CLI, as shown in the example below. This parameter helps to configure your own InnoDB FULLTEXT index stopword list for all InnoDB tables. For more information, see [innodb_ft_server_stopword_table](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_ft_server_stopword_table).
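+
+ For example, the parameter can be set with the Azure CLI (a sketch; the value takes the form `db_name/table_name`, and the names used here are placeholders):
+
+ ```azurecli-interactive
+ az mysql server configuration set --resource-group myresourcegroup --server mydemoserver --name innodb_ft_server_stopword_table --value "mydb/my_stopwords"
+ ```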
+
+## March 2022
+
+This release of Azure Database for MySQL - Single Server includes the following updates.
+
+**Bug Fixes**
+
+The MySQL 8.0.27 client and newer versions are now compatible with Azure Database for MySQL - Single Server.
+
+## February 2022
+
+This release of Azure Database for MySQL - Single Server includes the following updates.
+
+**Known Issues**
+
+Customers in the Japan and East US regions received two maintenance notification emails this month. The email notification sent for *05-Feb 2022* was sent by mistake, and no changes will be made to the service on this date. You can safely ignore it. We apologize for the inconvenience.
+
+## December 2021
+
+This release of Azure Database for MySQL - Single Server includes the following updates:
+
+- **Query Text removed in Query Performance Insights to avoid unauthorized access**
+
+Starting in December 2021, the query text of queries is no longer visible in the Query Performance Insight blade in the Azure portal. The query text is removed to avoid unauthorized access to the query text or the underlying schema, which can pose a security risk. The recommended steps to view the query text are as follows:
+
+- Identify the `query_id` of the top queries from the Query Performance Insight blade in the Azure portal
+- Sign in to your Azure Database for MySQL server from MySQL Workbench, the mysql.exe client, or your preferred query tool, and execute the following queries
+
+ ```sql
+ SELECT * FROM mysql.query_store where query_id = '<insert query_id from the Query Performance Insight blade in the Azure portal>'; -- for queries in Query Store
+ SELECT * FROM mysql.query_store_wait_stats where query_id = '<insert query_id from the Query Performance Insight blade in the Azure portal>'; -- for wait statistics
+ ```
+
+- You can browse the `query_digest_text` column to identify the query text for the corresponding `query_id`
+
+These steps ensure that only authenticated and authorized users have secure access to the query text.
+
+## October 2021
+
+- **Known Issues**
+
+The MySQL 8.0.27 client is incompatible with Azure Database for MySQL - Single Server. All connections from the MySQL 8.0.27 client created either via mysql.exe or MySQL Workbench will fail. As a workaround, consider using an earlier version of the client (prior to MySQL 8.0.27) or creating an instance of [Azure Database for MySQL - Flexible Server](../flexible-server/overview.md) instead.
+
+## June 2021
+
+This release of Azure Database for MySQL - Single Server includes the following updates.
+
+- **Enabled the ability to change the server parameter `activate_all_roles_on_login` from Portal/CLI for MySQL 8.0**
+
+ Users can now change the value of the `activate_all_roles_on_login` parameter using the Azure portal and CLI. This parameter helps to configure whether to enable automatic activation of all granted roles when users sign in to the server. For more information, see [Server System Variables](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html).
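+
+ For example, the parameter can be changed with the Azure CLI (a sketch; the server and resource group names are placeholders):
+
+ ```azurecli-interactive
+ az mysql server configuration set --resource-group myresourcegroup --server mydemoserver --name activate_all_roles_on_login --value ON
+ ```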
+
+- **Addressed MySQL Community Bugs #29596969 and #94668**
+
+ This release addresses an issue with the default expression being ignored in a CREATE TABLE query if the field was marked as PRIMARY KEY for MySQL 8.0. (MySQL Community Bug #29596969, Bug #94668). For more information, see [MySQL Bugs: #94668: Expression Default is made NULL during CREATE TABLE query, if field is made PK](https://bugs.mysql.com/bug.php?id=94668)
+
+- **Addressed an issue with duplicate table names in "SHOW TABLE" query**
+
+ We've introduced a new function to give fine-grained control of the table cache during table operations. Because of a code defect in the new feature, an entry in the directory cache might be misconfigured or added incorrectly, causing unexpected behavior such as returning two tables with the same name. The directory cache only affects "SHOW TABLE"-related queries; it doesn't impact any DML or DDL queries. This issue is completely resolved in this release.
+
+- **Increased the default value for the server parameter `max_heap_table_size` to help reduce temp table spills to disk**
+
+ With this release, the maximum allowed value for the parameter `max_heap_table_size` has been changed to 8589934592 for General Purpose 64 vCore and Memory Optimized 32 vCore.
+
+- **Addressed an issue with setting the value of the parameter `sql_require_primary_key` from the portal**
+
+ Users can now modify the value of the parameter `sql_require_primary_key` directly from the Azure portal.
+
+- **General Availability of planned maintenance notification**
+
+ This release provides General Availability of planned maintenance notifications in Azure Database for MySQL - Single Server. For more information, see the article [Planned maintenance notification](concepts-planned-maintenance-notification.md).
+
+- **Enabled the parameter `redirect_enabled` by default**
+
+ With this release, the parameter `redirect_enabled` will be enabled by default. Redirection aims to reduce network latency between client applications and MySQL servers by allowing applications to connect directly to backend server nodes. Support for redirection in PHP applications is available through the [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension, developed by Microsoft. For more information, see the article [Connect to Azure Database for MySQL with redirection](how-to-redirection.md).
+
+>[!Note]
> * Redirection does not work with a Private Link setup. If you are using Private Link for Azure Database for MySQL, you might encounter connection issues. To resolve the issue, make sure the parameter redirect_enabled is set to "OFF" and the client application is restarted.</br>
> * If you have a PHP application that uses the mysqlnd_azure redirection driver to connect to Azure Database for MySQL (with redirection enabled by default), you might face a data encoding issue that impacts your insert transactions.</br>
> To resolve this issue, either:
> - In the Azure portal, disable redirection by setting the redirect_enabled parameter to "OFF", and restart the PHP application to clear the driver cache after the change.
> - Explicitly set the charset-related parameters at the session level, based on your settings after the connection is established (for example, "set names utf8mb4").
+
+## February 2021
+
+This release of Azure Database for MySQL - Single Server includes the following updates.
+
+- Added new stored procedures to support the global transaction identifier (GTID) for Data-in Replication on version 5.7 and 8.0 Large Storage servers.
+- Updated supported MySQL versions to 5.6.50 and 5.7.32.
+
+## January 2021
+
+This release of Azure Database for MySQL - Single Server includes the following updates.
+
+- Enabled "reset password" to automatically fix the first admin permission.
+- Exposed the `auto_increment_increment/auto_increment_offset` and `session_track_gtids` server parameters.
+- Added new stored procedures for controlling the InnoDB buffer pool dump/restore.
+- Exposed the InnoDB warm-up related server parameters for large storage servers.
+
+## Contacts
+
+If you have questions about or suggestions for working with Azure Database for MySQL, contact the Azure Database for MySQL Team ([@Ask Azure DB for MySQL](mailto:AskAzureDBforMySQL@service.microsoft.com)). This email address isn't a technical support alias.
+
+In addition, consider the following points of contact as appropriate:
+
+- To contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+- To fix an issue with your account, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
+- To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/d365community/forum/47b1e71d-ee24-ec11-b6e6-000d3a4f0da0).
+
+## Next steps
+
+- Learn more about [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/server/).
+- Browse the [public documentation](./index.yml) for Azure Database for MySQL ΓÇô Single Server.
+- Review details on [troubleshooting common errors](./how-to-troubleshoot-common-errors.md).
mysql Tutorial Design Database Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-cli.md
+
+ Title: 'Tutorial: Design a server - Azure CLI - Azure Database for MySQL'
+description: This tutorial explains how to create and manage Azure Database for MySQL server and database using Azure CLI from the command line.
+ms.devlang: azurecli
+ Last updated: 12/02/2019
+# Tutorial: Design an Azure Database for MySQL using Azure CLI
+
+Azure Database for MySQL is a relational database service in the Microsoft cloud based on MySQL Community Edition database engine. In this tutorial, you use Azure CLI (command-line interface) and other utilities to learn how to:
+
+> [!div class="checklist"]
+> * Create an Azure Database for MySQL
+> * Configure the server firewall
+> * Use [mysql command-line tool](https://dev.mysql.com/doc/refman/5.6/en/mysql.html) to create a database
+> * Load sample data
+> * Query data
+> * Update data
+> * Restore data
+
+- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+If you have multiple subscriptions, choose the appropriate subscription in which the resource exists or is billed for. Select a specific subscription ID under your account using the [az account set](/cli/azure/account#az-account-set) command.
+```azurecli-interactive
+az account set --subscription 00000000-0000-0000-0000-000000000000
+```
+
+## Create a resource group
+Create an [Azure resource group](../../azure-resource-manager/management/overview.md) with [az group create](/cli/azure/group#az-group-create) command. A resource group is a logical container into which Azure resources are deployed and managed as a group.
+
+The following example creates a resource group named `myresourcegroup` in the `westus` location.
+
+```azurecli-interactive
+az group create --name myresourcegroup --location westus
+```
+
+## Create an Azure Database for MySQL server
+Create an Azure Database for MySQL server with the az mysql server create command. A server can manage multiple databases. Typically, a separate database is used for each project or for each user.
+
+The following example creates an Azure Database for MySQL server located in `westus` in the resource group `myresourcegroup` with name `mydemoserver`. The server has an administrator user named `myadmin`. It is a General Purpose, Gen 5 server with 2 vCores. Substitute the `<server_admin_password>` with your own value.
+
+```azurecli-interactive
+az mysql server create --resource-group myresourcegroup --name mydemoserver --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2 --version 5.7
+```
+The sku-name parameter value follows the convention {pricing tier}\_{compute generation}\_{vCores} as in the examples below:
+
++ `--sku-name B_Gen5_2` maps to Basic, Gen 5, and 2 vCores.
++ `--sku-name GP_Gen5_32` maps to General Purpose, Gen 5, and 32 vCores.
++ `--sku-name MO_Gen5_2` maps to Memory Optimized, Gen 5, and 2 vCores.
+
+See the [pricing tiers](./concepts-pricing-tiers.md) documentation to understand the valid values per region and per tier.
+
+> [!IMPORTANT]
+> The server admin login and password that you specify here are required to log in to the server and its databases later in this tutorial. Remember or record this information for later use.
+
+## Configure firewall rule
+Create an Azure Database for MySQL server-level firewall rule with the az mysql server firewall-rule create command. A server-level firewall rule allows an external application, such as the **mysql** command-line tool or MySQL Workbench, to connect to your server through the Azure Database for MySQL service firewall.
+
+The following example creates a firewall rule called `AllowMyIP` that allows connections from a specific IP address, 192.168.0.1. Substitute in the IP address or range of IP addresses that correspond to where you'll be connecting from.
+
+```azurecli-interactive
+az mysql server firewall-rule create --resource-group myresourcegroup --server mydemoserver --name AllowMyIP --start-ip-address 192.168.0.1 --end-ip-address 192.168.0.1
+```
+
+## Get the connection information
+
+To connect to your server, you need to provide host information and access credentials.
+```azurecli-interactive
+az mysql server show --resource-group myresourcegroup --name mydemoserver
+```
+
+The result is in JSON format. Make a note of the **fullyQualifiedDomainName** and **administratorLogin**.
+```json
+{
+ "administratorLogin": "myadmin",
+ "administratorLoginPassword": null,
+ "fullyQualifiedDomainName": "mydemoserver.mysql.database.azure.com",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.DBforMySQL/servers/mydemoserver",
+ "location": "westus",
+ "name": "mydemoserver",
+ "resourceGroup": "myresourcegroup",
+ "sku": {
+ "capacity": 2,
+ "family": "Gen5",
+ "name": "GP_Gen5_2",
+ "size": null,
+ "tier": "GeneralPurpose"
+ },
+ "sslEnforcement": "Enabled",
+ "storageProfile": {
+ "backupRetentionDays": 7,
+ "geoRedundantBackup": "Disabled",
+ "storageMb": 5120
+ },
+ "tags": null,
+ "type": "Microsoft.DBforMySQL/servers",
+ "userVisibleState": "Ready",
+ "version": "5.7"
+}
+```
+
+## Connect to the server using mysql
+Use the [mysql command-line tool](https://dev.mysql.com/doc/refman/5.6/en/mysql.html) to establish a connection to your Azure Database for MySQL server. In this example, the command is:
+```cmd
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
+```
+
+## Create a blank database
+Once you're connected to the server, create a blank database.
+```sql
+mysql> CREATE DATABASE mysampledb;
+```
+
+At the prompt, run the following command to switch the connection to this newly created database:
+```sql
+mysql> USE mysampledb;
+```
+
+## Create tables in the database
+Now that you know how to connect to the Azure Database for MySQL database, complete some basic tasks.
+
+First, create a table and load it with some data. Let's create a table that stores inventory information.
+```sql
+CREATE TABLE inventory (
+ id serial PRIMARY KEY,
+ name VARCHAR(50),
+ quantity INTEGER
+);
+```
+
+## Load data into the tables
+Now that you have a table, insert some data into it. In the open command prompt window, run the following query to insert some rows of data.
+```sql
+INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150);
+INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
+```
+
+Now you have two rows of sample data in the table you created earlier.
+
+## Query and update the data in the tables
+Execute the following query to retrieve information from the database table.
+```sql
+SELECT * FROM inventory;
+```
+
+You can also update the data in the tables.
+```sql
+UPDATE inventory SET quantity = 200 WHERE name = 'banana';
+```
+
+The row gets updated accordingly when you retrieve data.
+```sql
+SELECT * FROM inventory;
+```
+
+## Restore a database to a previous point in time
+Imagine you have accidentally deleted this table. This is something you cannot easily recover from. Azure Database for MySQL allows you to go back to any point in time within the server's backup retention period (up to 35 days) and restore that point in time to a new server. You can use this new server to recover your deleted data. The following steps restore the sample server to a point before the table was added.
+
+For the restore, you need the following information:
+
+- Restore point: Select a point in time that occurs before the server was changed. Must be greater than or equal to the source database's Oldest backup value.
+- Target server: Provide a new server name to restore to.
+- Source server: Provide the name of the server to restore from.
+- Location: You cannot select the region. By default, it is the same as the source server.
+
+```azurecli-interactive
+az mysql server restore --resource-group myresourcegroup --name mydemoserver-restored --restore-point-in-time "2017-05-04T03:10:00Z" --source-server-name mydemoserver
+```
+
+The `az mysql server restore` command needs the following parameters:
+
+| Setting | Suggested value | Description  |
+| | | |
+| resource-group |  myresourcegroup |  The resource group in which the source server exists.  |
+| name | mydemoserver-restored | The name of the new server that is created by the restore command. |
+| restore-point-in-time | 2017-04-13T13:59:00Z | Select a point-in-time to restore to. This date and time must be within the source server's backup retention period. Use ISO8601 date and time format. For example, you may use your own local timezone, such as `2017-04-13T05:59:00-08:00`, or use UTC Zulu format `2017-04-13T13:59:00Z`. |
+| source-server | mydemoserver | The name or ID of the source server to restore from. |
+
+Restoring a server to a point in time creates a new server that is a copy of the original server as of the point in time you specify. The location and pricing tier values for the restored server are the same as those of the source server.
+
+The command is synchronous, and returns after the server is restored. Once the restore finishes, locate the new server that was created and verify that the data was restored as expected.
+
+## Clean up resources
+If you don't need these resources for another quickstart or tutorial, you can delete them by running the following command:
+
+```azurecli-interactive
+az group delete --name myresourcegroup
+```
+
+If you just want to delete the newly created server, you can run the [az mysql server delete](/cli/azure/mysql/server#az-mysql-server-delete) command.
+
+```azurecli-interactive
+az mysql server delete --resource-group myresourcegroup --name mydemoserver
+```
+
+## Next steps
+In this tutorial, you learned how to:
+> [!div class="checklist"]
+> * Create an Azure Database for MySQL server
+> * Configure the server firewall
+> * Use the mysql command-line tool to create a database
+> * Load sample data
+> * Query data
+> * Update data
+> * Restore data
+
+> [!div class="nextstepaction"]
+> [Azure Database for MySQL - Azure CLI samples](./sample-scripts-azure-cli.md)
mysql Tutorial Design Database Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-portal.md
+
+ Title: 'Tutorial: Design a server - Azure portal - Azure Database for MySQL'
+description: This tutorial explains how to create and manage Azure Database for MySQL server and database using Azure portal.
+ Last updated: 3/20/2020
+# Tutorial: Design an Azure Database for MySQL database using the Azure portal
+
+Azure Database for MySQL is a managed service that enables you to run, manage, and scale highly available MySQL databases in the cloud. Using the Azure portal, you can easily manage your server and design a database.
+
+In this tutorial, you use the Azure portal to learn how to:
+
+> [!div class="checklist"]
+> * Create an Azure Database for MySQL
+> * Configure the server firewall
+> * Use mysql command-line tool to create a database
+> * Load sample data
+> * Query data
+> * Update data
+> * Restore data
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+
+## Sign in to the Azure portal
+
+Open your favorite web browser, and visit the [Microsoft Azure portal](https://portal.azure.com/). Enter your credentials to sign in to the portal. The default view is your service dashboard.
+
+## Create an Azure Database for MySQL server
+
+An Azure Database for MySQL server is created with a defined set of [compute and storage resources](./concepts-pricing-tiers.md). The server is created within an [Azure resource group](../../azure-resource-manager/management/overview.md).
+
+1. Select the **Create a resource** button (+) in the upper left corner of the portal.
+
+2. Select **Databases** > **Azure Database for MySQL**. If you cannot find MySQL Server under the **Databases** category, click **See all** to show all available database services. You can also type **Azure Database for MySQL** in the search box to quickly find the service.
+
+ :::image type="content" source="./media/tutorial-design-database-using-portal/1-navigate-to-mysql.png" alt-text="Navigate to MySQL":::
+
+3. Click the **Azure Database for MySQL** tile. Fill out the Azure Database for MySQL form.
+
+ :::image type="content" source="./media/tutorial-design-database-using-portal/2-create-form.png" alt-text="Create form":::
+
+ **Setting** | **Suggested value** | **Field description**
+ ||
+ Server name | Unique server name | Choose a unique name that identifies your Azure Database for MySQL server. For example, mydemoserver. The domain name *.mysql.database.azure.com* is appended to the server name you provide. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain from 3 to 63 characters.
+ Subscription | Your subscription | Select the Azure subscription that you want to use for your server. If you have multiple subscriptions, choose the subscription in which you get billed for the resource.
+ Resource group | *myresourcegroup* | Provide a new or existing resource group name.
+ Select source | *Blank* | Select *Blank* to create a new server from scratch. (You select *Backup* if you are creating a server from a geo-backup of an existing Azure Database for MySQL server).
+ Server admin login | myadmin | A sign-in account to use when you're connecting to the server. The admin sign-in name cannot be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**.
+ Password | *Your choice* | Provide a new password for the server admin account. It must contain from 8 to 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).
+ Confirm password | *Your choice*| Confirm the admin account password.
+ Location | *The region closest to your users*| Choose the location that is closest to your users or your other Azure applications.
+ Version | *The latest version*| The latest version (unless you have specific requirements that require another version).
+ Pricing tier | **General Purpose**, **Gen 5**, **2 vCores**, **5 GB**, **7 days**, **Geographically Redundant** | The compute, storage, and backup configurations for your new server. Select **Pricing tier**. Next, select the **General Purpose** tab. *Gen 5*, *2 vCores*, *5 GB*, and *7 days* are the default values for **Compute Generation**, **vCore**, **Storage**, and **Backup Retention Period**. You can leave those sliders as is. To enable your server backups in geo-redundant storage, select **Geographically Redundant** from the **Backup Redundancy Options**. To save this pricing tier selection, select **OK**. The next screenshot captures these selections.
+
+ :::image type="content" source="./media/tutorial-design-database-using-portal/3-pricing-tier.png" alt-text="Pricing tier":::
+
+ > [!TIP]
+ > With **auto-growth** enabled your server increases storage when you are approaching the allocated limit, without impacting your workload.
+
+4. Click **Review + create**. You can click on the **Notifications** button on the toolbar to monitor the deployment process. Deployment can take up to 20 minutes.
+
+## Configure firewall
+
+Azure Database for MySQL servers are protected by a firewall. By default, all connections to the server and the databases inside the server are rejected. Before connecting to Azure Database for MySQL for the first time, configure the firewall to add the client machine's public network IP address (or IP address range).
+
+1. Click your newly created server, and then click **Connection security**.
+
+ :::image type="content" source="./media/tutorial-design-database-using-portal/1-connection-security.png" alt-text="Connection security":::
+2. You can **Add My IP**, or configure firewall rules here. Remember to click **Save** after you have created the rules.
+You can now connect to the server using mysql command-line tool or MySQL Workbench GUI tool.
+
+> [!TIP]
+> Azure Database for MySQL server communicates over port 3306. If you are trying to connect from within a corporate network, outbound traffic over port 3306 may not be allowed by your network's firewall. If so, you cannot connect to Azure MySQL server unless your IT department opens port 3306.
+
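+If you'd rather script this step than click through the portal, the Azure CLI offers an equivalent command; a minimal sketch, assuming the server created above and a placeholder client IP address:
+
+```azurecli-interactive
+az mysql server firewall-rule create --resource-group myresourcegroup --server mydemoserver --name AllowMyIP --start-ip-address 192.168.0.1 --end-ip-address 192.168.0.1
+```
+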
+## Get connection information
+
+Get the fully qualified **Server name** and **Server admin login name** for your Azure Database for MySQL server from the Azure portal. You use the fully qualified server name to connect to your server using mysql command-line tool.
+
+1. In [Azure portal](https://portal.azure.com/), click **All resources** from the left-hand menu, type the name, and search for your Azure Database for MySQL server. Select the server name to view the details.
+
+2. From the **Overview** page, note down **Server Name** and **Server admin login name**. You may click the copy button next to each field to copy to the clipboard.
+ :::image type="content" source="./media/tutorial-design-database-using-portal/2-server-properties.png" alt-text="4-2 server properties":::
+
+In this example, the server name is *mydemoserver.mysql.database.azure.com*, and the server admin login is *myadmin\@mydemoserver*.
+
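+If you prefer scripting this lookup, the same values can be retrieved with the Azure CLI; a minimal sketch, assuming the resource names used in this tutorial:
+
+```azurecli-interactive
+az mysql server show --resource-group myresourcegroup --name mydemoserver --query "{host:fullyQualifiedDomainName, admin:administratorLogin}" --output table
+```
+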
+## Connect to the server using mysql
+
+Use [mysql command-line tool](https://dev.mysql.com/doc/refman/5.7/en/mysql.html) to establish a connection to your Azure Database for MySQL server. You can run the mysql command-line tool from the Azure Cloud Shell in the browser or from your own machine using mysql tools installed locally. To launch the Azure Cloud Shell, click the `Try It` button on a code block in this article, or visit the Azure portal and click the `>_` icon in the top right toolbar.
+
+Type the command to connect:
+
+```azurecli-interactive
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
+```
+
+## Create a blank database
+
+Once you're connected to the server, create a blank database to work with.
+
+```sql
+CREATE DATABASE mysampledb;
+```
+
+At the prompt, run the following command to switch connection to this newly created database:
+
+```sql
+USE mysampledb;
+```
+
+## Create tables in the database
+
+Now that you know how to connect to the Azure Database for MySQL database, you can complete some basic tasks:
+
+First, create a table and load it with some data. Let's create a table that stores inventory information.
+
+```sql
+CREATE TABLE inventory (
+ id serial PRIMARY KEY,
+ name VARCHAR(50),
+ quantity INTEGER
+);
+```
+
+## Load data into the tables
+
+Now that you have a table, insert some data into it. At the open command prompt window, run the following query to insert some rows of data.
+
+```sql
+INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150);
+INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
+```
+
+Now you have two rows of sample data in the table you created earlier.
+
+## Query and update the data in the tables
+
+Execute the following query to retrieve information from the database table.
+
+```sql
+SELECT * FROM inventory;
+```
+
+You can also update the data in the tables.
+
+```sql
+UPDATE inventory SET quantity = 200 WHERE name = 'banana';
+```
+
+The row gets updated accordingly when you retrieve data.
+
+```sql
+SELECT * FROM inventory;
+```
+
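+To check just the row you changed, a minimal follow-up query against the same table:
+
+```sql
+SELECT quantity FROM inventory WHERE name = 'banana';
+```
+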
+## Restore a database to a previous point in time
+
+Imagine you have accidentally deleted an important database table, and cannot recover the data easily. Azure Database for MySQL allows you to restore the server to a point in time, creating a copy of the databases on a new server. You can use this new server to recover your deleted data. The following steps restore the sample server to a point before the table was added.
+
+1. In the Azure portal, locate your Azure Database for MySQL. On the **Overview** page, click **Restore** on the toolbar. The Restore page opens.
+
+ :::image type="content" source="./media/tutorial-design-database-using-portal/1-restore-a-db.png" alt-text="10-1 restore a database":::
+
+2. Fill out the **Restore** form with the required information.
+
+ :::image type="content" source="./media/tutorial-design-database-using-portal/2-restore-form.png" alt-text="10-2 restore form":::
+
+ - **Restore point**: Select a point-in-time that you want to restore to, within the timeframe listed. Make sure to convert your local timezone to UTC.
+ - **Restore to new server**: Provide a new server name you want to restore to.
+ - **Location**: The region is the same as the source server, and cannot be changed.
+ - **Pricing tier**: The pricing tier is the same as the source server, and cannot be changed.
+
+3. Click **OK** to [restore the server to a point in time](./how-to-restore-server-portal.md) before the table was deleted. Restoring a server creates a new copy of the server, as of the point in time you specify.
+
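+The same restore can be scripted. A minimal Azure CLI sketch, assuming the server names from this tutorial and a UTC timestamp within your backup retention period (check `az mysql server restore --help` for the exact parameter names in your CLI version):
+
+```azurecli-interactive
+az mysql server restore --resource-group myresourcegroup --name mydemoserver-restored --restore-point-in-time "2020-03-20T13:59:00Z" --source-server mydemoserver
+```
+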
+## Clean up resources
+
+If you don't expect to need these resources in the future, you can delete them by deleting the resource group or just delete the MySQL server. To delete the resource group, follow these steps:
+1. In the Azure portal, search for and select **Resource groups**.
+2. In the resource group list, choose the name of your resource group.
+3. In the Overview page of your resource group, select **Delete resource group**.
+4. In the confirmation dialog box, type the name of your resource group, and then select **Delete**.
+
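+Equivalently, the resource group and everything in it can be removed with a single Azure CLI command, using the resource group from this tutorial:
+
+```azurecli-interactive
+az group delete --name myresourcegroup
+```
+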
+## Next steps
+
+In this tutorial, you used the Azure portal to learn how to:
+
+> [!div class="checklist"]
+> * Create an Azure Database for MySQL
+> * Configure the server firewall
+> * Use mysql command-line tool to create a database
+> * Load sample data
+> * Query data
+> * Update data
+> * Restore data
+
+> [!div class="nextstepaction"]
+> [How to connect applications to Azure Database for MySQL](./how-to-connection-string.md)
mysql Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-powershell.md
+
+ Title: 'Tutorial: Design a server - Azure PowerShell - Azure Database for MySQL'
+description: This tutorial explains how to create and manage Azure Database for MySQL server and database using PowerShell.
+ms.devlang: azurepowershell
+ Last updated: 04/29/2020
+# Tutorial: Design an Azure Database for MySQL using PowerShell
++
+Azure Database for MySQL is a relational database service in the Microsoft cloud based on MySQL
+Community Edition database engine. In this tutorial, you use PowerShell and other utilities to learn
+how to:
+
+> [!div class="checklist"]
+> - Create an Azure Database for MySQL
+> - Configure the server firewall
+> - Use [mysql command-line tool](https://dev.mysql.com/doc/refman/5.6/en/mysql.html) to create a database
+> - Load sample data
+> - Query data
+> - Update data
+> - Restore data
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
+before you begin.
+
+If you choose to use PowerShell locally, this article requires that you install the Az PowerShell
+module and connect to your Azure account using the
+[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information
+about installing the Az PowerShell module, see
+[Install Azure PowerShell](/powershell/azure/install-az-ps).
+
+> [!IMPORTANT]
+> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az
+> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`.
+> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az
+> PowerShell module releases and available natively from within Azure Cloud Shell.
+
+If this is your first time using the Azure Database for MySQL service, you must register the
+**Microsoft.DBforMySQL** resource provider.
+
+```azurepowershell-interactive
+Register-AzResourceProvider -ProviderNamespace Microsoft.DBforMySQL
+```
++
+If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources
+should be billed. Select a specific subscription ID using the
+[Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
+
+```azurepowershell-interactive
+Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
+```
+
+## Create a resource group
+
+Create an [Azure resource group](../../azure-resource-manager/management/overview.md)
+using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet. A
+resource group is a logical container in which Azure resources are deployed and managed as a group.
+
+The following example creates a resource group named **myresourcegroup** in the **West US** region.
+
+```azurepowershell-interactive
+New-AzResourceGroup -Name myresourcegroup -Location westus
+```
+
+## Create an Azure Database for MySQL server
+
+Create an Azure Database for MySQL server with the `New-AzMySqlServer` cmdlet. A server can manage
+multiple databases. Typically, a separate database is used for each project or for each user.
+
+The following example creates a MySQL server in the **West US** region named **mydemoserver** in the
+**myresourcegroup** resource group with a server admin login of **myadmin**. It is a Gen 5 server in
+the general-purpose pricing tier with 2 vCores and geo-redundant backups enabled. Document the
+password used in the first line of the example as this is the password for the MySQL server admin
+account.
+
+> [!TIP]
+> A server name maps to a DNS name and must be globally unique in Azure.
+
+```azurepowershell-interactive
+$Password = Read-Host -Prompt 'Please enter your password' -AsSecureString
+New-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -Sku GP_Gen5_2 -GeoRedundantBackup Enabled -Location westus -AdministratorUsername myadmin -AdministratorLoginPassword $Password
+```
+
+The **Sku** parameter value follows the convention **pricing-tier\_compute-generation\_vCores** as
+shown in the following examples.
+
+- `-Sku B_Gen5_1` maps to Basic, Gen 5, and 1 vCore. This option is the smallest SKU available.
+- `-Sku GP_Gen5_32` maps to General Purpose, Gen 5, and 32 vCores.
+- `-Sku MO_Gen5_2` maps to Memory Optimized, Gen 5, and 2 vCores.
+
+For information about valid **Sku** values by region and for tiers, see
+[Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md).
+
+Consider using the basic pricing tier if light compute and I/O are adequate for your workload.
+
+> [!IMPORTANT]
+> Servers created in the basic pricing tier cannot be later scaled to general-purpose or memory-
+> optimized and cannot be geo-replicated.
+
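+For example, a minimal sketch of creating a Basic tier server with the smallest SKU; the server name here is hypothetical, and the other parameter values reuse this tutorial's:
+
+```azurepowershell-interactive
+# Hypothetical example: Basic tier, Gen 5, 1 vCore (cannot be geo-replicated or scaled to other tiers later)
+$Password = Read-Host -Prompt 'Please enter your password' -AsSecureString
+New-AzMySqlServer -Name mydemobasicserver -ResourceGroupName myresourcegroup -Sku B_Gen5_1 -Location westus -AdministratorUsername myadmin -AdministratorLoginPassword $Password
+```
+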
+## Configure a firewall rule
+
+Create an Azure Database for MySQL server-level firewall rule using the `New-AzMySqlFirewallRule`
+cmdlet. A server-level firewall rule allows an external application, such as the `mysql`
+command-line tool or MySQL Workbench to connect to your server through the Azure Database for MySQL
+service firewall.
+
+The following example creates a firewall rule named **AllowMyIP** that allows connections from a
+specific IP address, 192.168.0.1. Substitute an IP address or range of IP addresses that correspond
+to the location that you are connecting from.
+
+```azurepowershell-interactive
+New-AzMySqlFirewallRule -Name AllowMyIP -ResourceGroupName myresourcegroup -ServerName mydemoserver -StartIPAddress 192.168.0.1 -EndIPAddress 192.168.0.1
+```
+
+> [!NOTE]
+> Connections to Azure Database for MySQL communicate over port 3306. If you try to connect from
+> within a corporate network, outbound traffic over port 3306 might not be allowed. In this
+> scenario, you can only connect to the server if your IT department opens port 3306.
+
+## Get the connection information
+
+To connect to your server, you need to provide host information and access credentials. Use the
+following example to determine the connection information. Make a note of the values for
+**FullyQualifiedDomainName** and **AdministratorLogin**.
+
+```azurepowershell-interactive
+Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
+ Select-Object -Property FullyQualifiedDomainName, AdministratorLogin
+```
+
+```Output
+FullyQualifiedDomainName              AdministratorLogin
+------------------------              ------------------
+mydemoserver.mysql.database.azure.com myadmin
+```
+
+## Connect to the server using the mysql command-line tool
+
+Connect to your server using the `mysql` command-line tool. To download and install the command-line
+tool, see [MySQL Community Downloads](https://dev.mysql.com/downloads/shell/). You can also access a
+pre-installed version of the `mysql` command-line tool in Azure Cloud Shell by selecting the **Try
+It** button on a code sample in this article. Other ways to access Azure Cloud Shell are to select
+the **>_** button on the upper-right toolbar in the Azure portal or by visiting
+[shell.azure.com](https://shell.azure.com/).
+
+```azurepowershell-interactive
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
+```
+
+## Create a database
+
+Once you're connected to the server, create a blank database.
+
+```sql
+mysql> CREATE DATABASE mysampledb;
+```
+
+At the prompt, run the following command to switch the connection to this newly created database:
+
+```sql
+mysql> USE mysampledb;
+```
+
+## Create tables in the database
+
+Now that you know how to connect to the Azure Database for MySQL database, complete some basic
+tasks.
+
+First, create a table and load it with some data. Let's create a table that stores inventory
+information.
+
+```sql
+CREATE TABLE inventory (
+ id serial PRIMARY KEY,
+ name VARCHAR(50),
+ quantity INTEGER
+);
+```
+
+## Load data into the tables
+
+Now that you have a table, insert some data into it. At the open command prompt window, run the
+following query to insert some rows of data.
+
+```sql
+INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150);
+INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
+```
+
+Now you have two rows of sample data in the table you created earlier.
+
+## Query and update the data in the tables
+
+Execute the following query to retrieve information from the database table.
+
+```sql
+SELECT * FROM inventory;
+```
+
+You can also update the data in the tables.
+
+```sql
+UPDATE inventory SET quantity = 200 WHERE name = 'banana';
+```
+
+The row gets updated accordingly when you retrieve data.
+
+```sql
+SELECT * FROM inventory;
+```
+
+## Restore a database to a previous point in time
+
+You can restore the server to a previous point-in-time. The restored data is copied to a new server,
+and the existing server is left unchanged. For example, if a table is accidentally dropped, you can
+restore to the time just before the drop occurred. Then, you can retrieve the missing table and data from
+the restored copy of the server.
+
+To restore the server, use the `Restore-AzMySqlServer` PowerShell cmdlet.
+
+### Run the restore command
+
+To restore the server, run the following example from PowerShell.
+
+```azurepowershell-interactive
+$restorePointInTime = (Get-Date).AddMinutes(-10)
+Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
+ Restore-AzMySqlServer -Name mydemoserver-restored -ResourceGroupName myresourcegroup -RestorePointInTime $restorePointInTime -UsePointInTimeRestore
+```
+
+When you restore a server to an earlier point-in-time, a new server is created. The original server
+and its databases from the specified point-in-time are copied to the new server.
+
+The location and pricing tier values for the restored server remain the same as the original server.
+
+After the restore process finishes, locate the new server and verify that the data is restored as
+expected. The new server has the same server admin login name and password that was valid for the
+existing server at the time the restore was started. The password can be changed from the new
+server's **Overview** page.
+
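+For example, a quick way to confirm that the restored server exists and retrieve its connection values, reusing the cmdlet shown earlier:
+
+```azurepowershell-interactive
+Get-AzMySqlServer -Name mydemoserver-restored -ResourceGroupName myresourcegroup |
+    Select-Object -Property FullyQualifiedDomainName, AdministratorLogin
+```
+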
+The new server created during a restore does not have the VNet service endpoints that existed on the
+original server. These rules must be set up separately for the new server. Firewall rules from the
+original server are restored.
+
+## Clean up resources
+
+If the resources created in this tutorial aren't needed for another quickstart or tutorial, you
+can delete them by running the following example.
+
+> [!CAUTION]
+> The following example deletes the specified resource group and all resources contained within it.
+> If resources outside the scope of this tutorial exist in the specified resource group, they will
+> also be deleted.
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name myresourcegroup
+```
+
+To delete only the server created in this tutorial without deleting the resource group, use the
+`Remove-AzMySqlServer` cmdlet.
+
+```azurepowershell-interactive
+Remove-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to back up and restore an Azure Database for MySQL server using PowerShell](how-to-restore-server-powershell.md)
mysql Tutorial Provision Mysql Server Using Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-provision-mysql-server-using-azure-resource-manager-templates.md
+
+ Title: 'Tutorial: Create Azure Database for MySQL - Azure Resource Manager template'
+description: This tutorial explains how to provision and automate Azure Database for MySQL server deployments using Azure Resource Manager template.
+ Last updated: 12/02/2019
+# Tutorial: Provision an Azure Database for MySQL server using Azure Resource Manager template
++
+The [Azure Database for MySQL REST API](/rest/api/mysql/) enables DevOps engineers to automate and integrate provisioning, configuration, and operations of managed MySQL servers and databases in Azure. The API allows the creation, enumeration, management, and deletion of MySQL servers and databases on the Azure Database for MySQL service.
+
+Azure Resource Manager leverages the underlying REST API to declare and program the Azure resources required for deployments at scale, aligning with the infrastructure-as-code concept. The template parameterizes the Azure resource name, SKU, network, firewall configuration, and settings, allowing it to be created once and used multiple times. Azure Resource Manager templates can be easily created using [Azure portal](../../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md) or [Visual Studio Code](../../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md?tabs=CLI). They enable application packaging, standardization, and deployment automation, which can be integrated into the DevOps CI/CD pipeline. For instance, if you are looking to quickly deploy a Web App with an Azure Database for MySQL backend, you can perform the end-to-end deployment using this [QuickStart template](https://azure.microsoft.com/resources/templates/webapp-managed-mysql/) from the GitHub gallery.
+
+In this tutorial, you use Azure Resource Manager template and other utilities to learn how to:
+
+> [!div class="checklist"]
+> * Create an Azure Database for MySQL server with VNet Service Endpoint using Azure Resource Manager template
+> * Use [mysql command-line tool](https://dev.mysql.com/doc/refman/5.6/en/mysql.html) to create a database
+> * Load sample data
+> * Query data
+> * Update data
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+
+## Create an Azure Database for MySQL server with VNet Service Endpoint using Azure Resource Manager template
+
+To get the JSON template reference for an Azure Database for MySQL server, go to [Microsoft.DBforMySQL servers](/azure/templates/microsoft.dbformysql/servers) template reference. Below is the sample JSON template that can be used to create a new server running Azure Database for MySQL with VNet Service Endpoint.
+```json
+{
+ "apiVersion": "2017-12-01",
+ "type": "Microsoft.DBforMySQL/servers",
+ "name": "string",
+ "location": "string",
+ "tags": "string",
+ "properties": {
+ "version": "string",
+ "sslEnforcement": "string",
+ "administratorLogin": "string",
+ "administratorLoginPassword": "string",
+ "storageProfile": {
+ "storageMB": "string",
+ "backupRetentionDays": "string",
+ "geoRedundantBackup": "string"
+ }
+ },
+ "sku": {
+ "name": "string",
+ "tier": "string",
+ "capacity": "string",
+ "family": "string"
+ },
+ "resources": [
+ {
+ "name": "AllowSubnet",
+ "type": "virtualNetworkRules",
+ "apiVersion": "2017-12-01",
+ "properties": {
+ "virtualNetworkSubnetId": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('virtualNetworkName'), parameters('subnetName'))]",
+ "ignoreMissingVnetServiceEndpoint": true
+ },
+ "dependsOn": [
+ "[concat('Microsoft.DBforMySQL/servers/', parameters('serverName'))]"
+ ]
+ }
+ ]
+}
+```
+In this request, the values that need to be customized are:
++ `name` - Specify the name of your MySQL Server (without domain name).
++ `location` - Specify a valid Azure data center region for your MySQL Server. For example, westus2.
++ `properties/version` - Specify the MySQL server version to deploy. For example, 5.6 or 5.7.
++ `properties/administratorLogin` - Specify the MySQL admin login for the server. The admin sign-in name cannot be azure_superuser, admin, administrator, root, guest, or public.
++ `properties/administratorLoginPassword` - Specify the password for the MySQL admin user specified above.
++ `properties/sslEnforcement` - Specify Enabled/Disabled to enable/disable sslEnforcement.
++ `storageProfile/storageMB` - Specify the max provisioned storage size required for the server in megabytes. For example, 5120.
++ `storageProfile/backupRetentionDays` - Specify the desired backup retention period in days. For example, 7.
++ `storageProfile/geoRedundantBackup` - Specify Enabled/Disabled depending on Geo-DR requirements.
++ `sku/tier` - Specify Basic, GeneralPurpose, or MemoryOptimized tier for deployment.
++ `sku/capacity` - Specify the vCore capacity. Possible values include 2, 4, 8, 16, 32, or 64.
++ `sku/family` - Specify Gen5 to choose the hardware generation for server deployment.
++ `sku/name` - Specify TierPrefix_family_capacity. For example, B_Gen5_1, GP_Gen5_16, MO_Gen5_32. See the [pricing tiers](./concepts-pricing-tiers.md) documentation to understand the valid values per region and per tier.
++ `resources/properties/virtualNetworkSubnetId` - Specify the Azure identifier of the subnet in the VNet where the Azure MySQL server should be placed.
++ `tags` (optional) - Specify optional tags, which are key-value pairs that you can use to categorize your resources for billing and other purposes.
+
+If you are looking to build an Azure Resource Manager template to automate Azure Database for MySQL deployments for your organization, the recommendation would be to start from the sample [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.dbformysql/managed-mysql-with-vnet/azuredeploy.json) in Azure Quickstart GitHub Gallery first and build on top of it.
+
+If you are new to Azure Resource Manager templates and would like to try it, you can start by following these steps:
++ Clone or download the sample [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.dbformysql/managed-mysql-with-vnet/azuredeploy.json) from the Azure Quickstart gallery.
++ Modify azuredeploy.parameters.json to update the parameter values based on your preference, and save the file (see the parameters file sketch below).
++ Use the Azure CLI to create the Azure MySQL server using the following commands.
+
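+For illustration, a minimal parameters file sketch; the parameter names and values below are hypothetical and must match the parameters actually defined in the template you downloaded:
+
+```json
+{
+  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+  "contentVersion": "1.0.0.0",
+  "parameters": {
+    "serverName": { "value": "mydemoserver" },
+    "administratorLogin": { "value": "myadmin" },
+    "administratorLoginPassword": { "value": "<your-secure-password>" }
+  }
+}
+```
+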
+You may use the Azure Cloud Shell in the browser, or install the Azure CLI on your own computer, to run the code blocks in this tutorial.
++
+```azurecli-interactive
+az login
+az group create -n ExampleResourceGroup -l westus2
+az deployment group create -g ExampleResourceGroup --template-file ${templateloc} --parameters ${parametersloc}
+```
+
+## Get the connection information
+To connect to your server, you need to provide host information and access credentials.
+```azurecli-interactive
+az mysql server show --resource-group myresourcegroup --name mydemoserver
+```
+
+The result is in JSON format. Make a note of the **fullyQualifiedDomainName** and **administratorLogin**.
+```json
+{
+ "administratorLogin": "myadmin",
+ "administratorLoginPassword": null,
+ "fullyQualifiedDomainName": "mydemoserver.mysql.database.azure.com",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.DBforMySQL/servers/mydemoserver",
+ "location": "westus2",
+ "name": "mydemoserver",
+ "resourceGroup": "myresourcegroup",
+ "sku": {
+ "capacity": 2,
+ "family": "Gen5",
+ "name": "GP_Gen5_2",
+ "size": null,
+ "tier": "GeneralPurpose"
+ },
+ "sslEnforcement": "Enabled",
+ "storageProfile": {
+ "backupRetentionDays": 7,
+ "geoRedundantBackup": "Disabled",
+ "storageMb": 5120
+ },
+ "tags": null,
+ "type": "Microsoft.DBforMySQL/servers",
+ "userVisibleState": "Ready",
+ "version": "5.7"
+}
+```
+
+## Connect to the server using mysql
+Use the [mysql command-line tool](https://dev.mysql.com/doc/refman/5.6/en/mysql.html) to establish a connection to your Azure Database for MySQL server. In this example, the command is:
+```cmd
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
+```
+
+## Create a blank database
+Once you're connected to the server, create a blank database.
+```sql
+mysql> CREATE DATABASE mysampledb;
+```
+
+At the prompt, run the following command to switch the connection to this newly created database:
+```sql
+mysql> USE mysampledb;
+```
+
+## Create tables in the database
+Now that you know how to connect to the Azure Database for MySQL database, complete some basic tasks.
+
+First, create a table and load it with some data. Let's create a table that stores inventory information.
+```sql
+CREATE TABLE inventory (
+ id serial PRIMARY KEY,
+ name VARCHAR(50),
+ quantity INTEGER
+);
+```
+
+## Load data into the tables
+Now that you have a table, insert some data into it. At the open command prompt window, run the following query to insert some rows of data.
+```sql
+INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150);
+INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
+```
+
+Now you have two rows of sample data in the table you created earlier.
+
+## Query and update the data in the tables
+Execute the following query to retrieve information from the database table.
+```sql
+SELECT * FROM inventory;
+```
+
+You can also update the data in the tables.
+```sql
+UPDATE inventory SET quantity = 200 WHERE name = 'banana';
+```
+
+The row gets updated accordingly when you retrieve data.
+```sql
+SELECT * FROM inventory;
+```
+
+## Clean up resources
+
+When it's no longer needed, delete the resource group, which deletes the resources in the resource group.
+
+# [Portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com), search for and select **Resource groups**.
+
+2. In the resource group list, choose the name of your resource group.
+
+3. In the **Overview** page of your resource group, select **Delete resource group**.
+
+4. In the confirmation dialog box, type the name of your resource group, and then select **Delete**.
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
+Remove-AzResourceGroup -Name $resourceGroupName
+Write-Host "Press [ENTER] to continue..."
+```
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+++
+## Next steps
+In this tutorial you learned to:
+> [!div class="checklist"]
+> * Create an Azure Database for MySQL server with VNet Service Endpoint using Azure Resource Manager template
+> * Use the mysql command-line tool to create a database
+> * Load sample data
+> * Query data
+> * Update data
+
+> [!div class="nextstepaction"]
+> [How to connect applications to Azure Database for MySQL](./how-to-connection-string.md)
mysql Tutorial Design Database Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/tutorial-design-database-using-cli.md
- Title: 'Tutorial: Design a server - Azure CLI - Azure Database for MySQL'
-description: This tutorial explains how to create and manage Azure Database for MySQL server and database using Azure CLI from the command line.
- Previously updated: 12/02/2019
-# Tutorial: Design an Azure Database for MySQL using Azure CLI
--
-Azure Database for MySQL is a relational database service in the Microsoft cloud based on MySQL Community Edition database engine. In this tutorial, you use Azure CLI (command-line interface) and other utilities to learn how to:
-
-> [!div class="checklist"]
-> * Create an Azure Database for MySQL
-> * Configure the server firewall
-> * Use [mysql command-line tool](https://dev.mysql.com/doc/refman/5.6/en/mysql.html) to create a database
-> * Load sample data
-> * Query data
-> * Update data
-> * Restore data
-- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-
-If you have multiple subscriptions, choose the appropriate subscription in which the resource exists or is billed for. Select a specific subscription ID under your account using [az account set](/cli/azure/account#az-account-set) command.
-```azurecli-interactive
-az account set --subscription 00000000-0000-0000-0000-000000000000
-```
-
-## Create a resource group
-Create an [Azure resource group](../azure-resource-manager/management/overview.md) with [az group create](/cli/azure/group#az-group-create) command. A resource group is a logical container into which Azure resources are deployed and managed as a group.
-
-The following example creates a resource group named `myresourcegroup` in the `westus` location.
-
-```azurecli-interactive
-az group create --name myresourcegroup --location westus
-```
-
-## Create an Azure Database for MySQL server
-Create an Azure Database for MySQL server with the az mysql server create command. A server can manage multiple databases. Typically, a separate database is used for each project or for each user.
-
-The following example creates an Azure Database for MySQL server located in `westus` in the resource group `myresourcegroup` with name `mydemoserver`. The server has an administrator user named `myadmin`. It is a General Purpose, Gen 5 server with 2 vCores. Substitute the `<server_admin_password>` with your own value.
-
-```azurecli-interactive
-az mysql server create --resource-group myresourcegroup --name mydemoserver --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2 --version 5.7
-```
-The sku-name parameter value follows the convention {pricing tier}\_{compute generation}\_{vCores} as in the examples below:
-+ `--sku-name B_Gen5_2` maps to Basic, Gen 5, and 2 vCores.
-+ `--sku-name GP_Gen5_32` maps to General Purpose, Gen 5, and 32 vCores.
-+ `--sku-name MO_Gen5_2` maps to Memory Optimized, Gen 5, and 2 vCores.
-
-Please see the [pricing tiers](./concepts-pricing-tiers.md) documentation to understand the valid values per region and per tier.
-
-> [!IMPORTANT]
-> The server admin login and password that you specify here are required to log in to the server and its databases later in this quickstart. Remember or record this information for later use.
--
-## Configure firewall rule
-Create an Azure Database for MySQL server-level firewall rule with the az mysql server firewall-rule create command. A server-level firewall rule allows an external application, such as **mysql** command-line tool or MySQL Workbench to connect to your server through the Azure MySQL service firewall.
-
-The following example creates a firewall rule called `AllowMyIP` that allows connections from a specific IP address, 192.168.0.1. Substitute in the IP address or range of IP addresses that correspond to where you'll be connecting from.
-
-```azurecli-interactive
-az mysql server firewall-rule create --resource-group myresourcegroup --server mydemoserver --name AllowMyIP --start-ip-address 192.168.0.1 --end-ip-address 192.168.0.1
-```
-
-## Get the connection information
-
-To connect to your server, you need to provide host information and access credentials.
-```azurecli-interactive
-az mysql server show --resource-group myresourcegroup --name mydemoserver
-```
-
-The result is in JSON format. Make a note of the **fullyQualifiedDomainName** and **administratorLogin**.
-```json
-{
- "administratorLogin": "myadmin",
- "administratorLoginPassword": null,
- "fullyQualifiedDomainName": "mydemoserver.mysql.database.azure.com",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.DBforMySQL/servers/mydemoserver",
- "location": "westus",
- "name": "mydemoserver",
- "resourceGroup": "myresourcegroup",
- "sku": {
- "capacity": 2,
- "family": "Gen5",
- "name": "GP_Gen5_2",
- "size": null,
- "tier": "GeneralPurpose"
- },
- "sslEnforcement": "Enabled",
- "storageProfile": {
- "backupRetentionDays": 7,
- "geoRedundantBackup": "Disabled",
- "storageMb": 5120
- },
- "tags": null,
- "type": "Microsoft.DBforMySQL/servers",
- "userVisibleState": "Ready",
- "version": "5.7"
-}
-```
-
-## Connect to the server using mysql
-Use the [mysql command-line tool](https://dev.mysql.com/doc/refman/5.6/en/mysql.html) to establish a connection to your Azure Database for MySQL server. In this example, the command is:
-```cmd
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
-```
-
-## Create a blank database
-Once you're connected to the server, create a blank database.
-```sql
-mysql> CREATE DATABASE mysampledb;
-```
-
-At the prompt, run the following command to switch the connection to this newly created database:
-```sql
-mysql> USE mysampledb;
-```
-
-## Create tables in the database
-Now that you know how to connect to the Azure Database for MySQL database, complete some basic tasks.
-
-First, create a table and load it with some data. Let's create a table that stores inventory information.
-```sql
-CREATE TABLE inventory (
- id serial PRIMARY KEY,
- name VARCHAR(50),
- quantity INTEGER
-);
-```
-
-## Load data into the tables
-Now that you have a table, insert some data into it. At the open command prompt window, run the following query to insert some rows of data.
-```sql
-INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150);
-INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
-```
-
-Now you have two rows of sample data in the table you created earlier.
-
-## Query and update the data in the tables
-Execute the following query to retrieve information from the database table.
-```sql
-SELECT * FROM inventory;
-```
-
-You can also update the data in the tables.
-```sql
-UPDATE inventory SET quantity = 200 WHERE name = 'banana';
-```
-
-The row gets updated accordingly when you retrieve data.
-```sql
-SELECT * FROM inventory;
-```
-
-## Restore a database to a previous point in time
-Imagine you have accidentally deleted this table and cannot easily recover the data. Azure Database for MySQL allows you to go back to any point in time within the backup retention period (up to 35 days) and restore that point in time to a new server. You can use this new server to recover your deleted data. The following steps restore the sample server to a point before the table was added.
-
-For the restore, you need the following information:
-- Restore point: Select a point-in-time that occurs before the server was changed. It must be greater than or equal to the source database's Oldest backup value.
-- Target server: Provide a new server name you want to restore to.
-- Source server: Provide the name of the server you want to restore from.
-- Location: You cannot select the region. By default, it is the same as the source server.
-
-```azurecli-interactive
-az mysql server restore --resource-group myresourcegroup --name mydemoserver-restored --restore-point-in-time "2017-05-4 03:10" --source-server-name mydemoserver
-```
-
-The `az mysql server restore` command needs the following parameters:
-
-| Setting | Suggested value | Description |
-| --- | --- | --- |
-| resource-group |  myresourcegroup |  The resource group in which the source server exists.  |
-| name | mydemoserver-restored | The name of the new server that is created by the restore command. |
-| restore-point-in-time | 2017-04-13T13:59:00Z | Select a point-in-time to restore to. This date and time must be within the source server's backup retention period. Use ISO8601 date and time format. For example, you may use your own local timezone, such as `2017-04-13T05:59:00-08:00`, or use UTC Zulu format `2017-04-13T13:59:00Z`. |
-| source-server | mydemoserver | The name or ID of the source server to restore from. |
-
-Restoring a server to a point-in-time creates a new server, copied from the original server as of the point in time you specify. The location and pricing tier values for the restored server are the same as the source server.
-
-The command is synchronous, and will return after the server is restored. Once the restore finishes, locate the new server that was created. Verify the data was restored as expected.
-
-## Clean up resources
-If you don't need these resources for another quickstart/tutorial, you can delete them by running the following command:
-
-```azurecli-interactive
-az group delete --name myresourcegroup
-```
-
-If you would just like to delete the one newly created server, you can run [az mysql server delete](/cli/azure/mysql/server#az-mysql-server-delete) command.
-
-```azurecli-interactive
-az mysql server delete --resource-group myresourcegroup --name mydemoserver
-```
-
-## Next steps
-In this tutorial you learned to:
-> [!div class="checklist"]
-> * Create an Azure Database for MySQL server
-> * Configure the server firewall
-> * Use the mysql command-line tool to create a database
-> * Load sample data
-> * Query data
-> * Update data
-> * Restore data
-
-> [!div class="nextstepaction"]
-> [Azure Database for MySQL - Azure CLI samples](./sample-scripts-azure-cli.md)
mysql Tutorial Design Database Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/tutorial-design-database-using-portal.md
- Title: 'Tutorial: Design a server - Azure portal - Azure Database for MySQL'
-description: This tutorial explains how to create and manage Azure Database for MySQL server and database using Azure portal.
- Previously updated: 3/20/2020
-# Tutorial: Design an Azure Database for MySQL database using the Azure portal
--
-Azure Database for MySQL is a managed service that enables you to run, manage, and scale highly available MySQL databases in the cloud. Using the Azure portal, you can easily manage your server and design a database.
-
-In this tutorial, you use the Azure portal to learn how to:
-
-> [!div class="checklist"]
-> * Create an Azure Database for MySQL
-> * Configure the server firewall
-> * Use mysql command-line tool to create a database
-> * Load sample data
-> * Query data
-> * Update data
-> * Restore data
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
-
-## Sign in to the Azure portal
-
-Open your favorite web browser, and visit the [Microsoft Azure portal](https://portal.azure.com/). Enter your credentials to sign in to the portal. The default view is your service dashboard.
-
-## Create an Azure Database for MySQL server
-
-An Azure Database for MySQL server is created with a defined set of [compute and storage resources](./concepts-pricing-tiers.md). The server is created within an [Azure resource group](../azure-resource-manager/management/overview.md).
-
-1. Select the **Create a resource** button (+) in the upper left corner of the portal.
-
-2. Select **Databases** > **Azure Database for MySQL**. If you cannot find MySQL Server under the **Databases** category, click **See all** to show all available database services. You can also type **Azure Database for MySQL** in the search box to quickly find the service.
-
- :::image type="content" source="./media/tutorial-design-database-using-portal/1-Navigate-to-MySQL.png" alt-text="Navigate to MySQL":::
-
-3. Click the **Azure Database for MySQL** tile. Fill out the Azure Database for MySQL form.
-
- :::image type="content" source="./media/tutorial-design-database-using-portal/2-create-form.png" alt-text="Create form":::
-
- **Setting** | **Suggested value** | **Field description**
- ---|---|---
- Server name | Unique server name | Choose a unique name that identifies your Azure Database for MySQL server. For example, mydemoserver. The domain name *.mysql.database.azure.com* is appended to the server name you provide. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain from 3 to 63 characters.
- Subscription | Your subscription | Select the Azure subscription that you want to use for your server. If you have multiple subscriptions, choose the subscription in which you get billed for the resource.
- Resource group | *myresourcegroup* | Provide a new or existing resource group name.
- Select source | *Blank* | Select *Blank* to create a new server from scratch. (You select *Backup* if you are creating a server from a geo-backup of an existing Azure Database for MySQL server).
- Server admin login | myadmin | A sign-in account to use when you're connecting to the server. The admin sign-in name cannot be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**.
- Password | *Your choice* | Provide a new password for the server admin account. It must contain from 8 to 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).
- Confirm password | *Your choice*| Confirm the admin account password.
- Location | *The region closest to your users*| Choose the location that is closest to your users or your other Azure applications.
- Version | *The latest version*| The latest version (unless you have specific requirements that require another version).
- Pricing tier | **General Purpose**, **Gen 5**, **2 vCores**, **5 GB**, **7 days**, **Geographically Redundant** | The compute, storage, and backup configurations for your new server. Select **Pricing tier**. Next, select the **General Purpose** tab. *Gen 5*, *2 vCores*, *5 GB*, and *7 days* are the default values for **Compute Generation**, **vCore**, **Storage**, and **Backup Retention Period**. You can leave those sliders as is. To enable your server backups in geo-redundant storage, select **Geographically Redundant** from the **Backup Redundancy Options**. To save this pricing tier selection, select **OK**. The next screenshot captures these selections.
-
- :::image type="content" source="./media/tutorial-design-database-using-portal/3-pricing-tier.png" alt-text="Pricing tier":::
-
- > [!TIP]
- > With **auto-growth** enabled your server increases storage when you are approaching the allocated limit, without impacting your workload.
-
-4. Click **Review + create**. You can click on the **Notifications** button on the toolbar to monitor the deployment process. Deployment can take up to 20 minutes.
-
-## Configure firewall
-
-Azure Database for MySQL servers are protected by a firewall. By default, all connections to the server and the databases inside the server are rejected. Before connecting to Azure Database for MySQL for the first time, configure the firewall to add the client machine's public network IP address (or IP address range).
-
-1. Click your newly created server, and then click **Connection security**.
-
- :::image type="content" source="./media/tutorial-design-database-using-portal/1-Connection-security.png" alt-text="Connection security":::
-2. You can **Add My IP**, or configure firewall rules here. Remember to click **Save** after you have created the rules.
-You can now connect to the server using mysql command-line tool or MySQL Workbench GUI tool.
-
-> [!TIP]
-> Azure Database for MySQL server communicates over port 3306. If you are trying to connect from within a corporate network, outbound traffic over port 3306 may not be allowed by your network's firewall. If so, you cannot connect to Azure MySQL server unless your IT department opens port 3306.
-
-## Get connection information
-
-Get the fully qualified **Server name** and **Server admin login name** for your Azure Database for MySQL server from the Azure portal. You use the fully qualified server name to connect to your server using mysql command-line tool.
-
-1. In [Azure portal](https://portal.azure.com/), click **All resources** from the left-hand menu, type the name, and search for your Azure Database for MySQL server. Select the server name to view the details.
-
-2. From the **Overview** page, note down **Server Name** and **Server admin login name**. You may click the copy button next to each field to copy to the clipboard.
- :::image type="content" source="./media/tutorial-design-database-using-portal/2-server-properties.png" alt-text="4-2 server properties":::
-
-In this example, the server name is *mydemoserver.mysql.database.azure.com*, and the server admin login is *myadmin\@mydemoserver*.
-
-## Connect to the server using mysql
-
-Use [mysql command-line tool](https://dev.mysql.com/doc/refman/5.7/en/mysql.html) to establish a connection to your Azure Database for MySQL server. You can run the mysql command-line tool from the Azure Cloud Shell in the browser or from your own machine using mysql tools installed locally. To launch the Azure Cloud Shell, click the `Try It` button on a code block in this article, or visit the Azure portal and click the `>_` icon in the top right toolbar.
-
-Type the command to connect:
-
-```azurecli-interactive
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
-```
-
-## Create a blank database
-
-Once you're connected to the server, create a blank database to work with.
-
-```sql
-CREATE DATABASE mysampledb;
-```
-
-At the prompt, run the following command to switch connection to this newly created database:
-
-```sql
-USE mysampledb;
-```
-
-## Create tables in the database
-
-Now that you know how to connect to the Azure Database for MySQL database, you can complete some basic tasks:
-
-First, create a table and load it with some data. Let's create a table that stores inventory information.
-
-```sql
-CREATE TABLE inventory (
- id serial PRIMARY KEY,
- name VARCHAR(50),
- quantity INTEGER
-);
-```
-
-## Load data into the tables
-
-Now that you have a table, insert some data into it. At the open command prompt window, run the following query to insert some rows of data.
-
-```sql
-INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150);
-INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
-```
-
-Now you have two rows of sample data in the table you created earlier.
-
-## Query and update the data in the tables
-
-Execute the following query to retrieve information from the database table.
-
-```sql
-SELECT * FROM inventory;
-```
-
-You can also update the data in the tables.
-
-```sql
-UPDATE inventory SET quantity = 200 WHERE name = 'banana';
-```
-
-The row gets updated accordingly when you retrieve data.
-
-```sql
-SELECT * FROM inventory;
-```
-
-## Restore a database to a previous point in time
-
-Imagine you have accidentally deleted an important database table, and cannot recover the data easily. Azure Database for MySQL allows you to restore the server to a point in time, creating a copy of the databases on a new server. You can use this new server to recover your deleted data. The following steps restore the sample server to a point before the table was added.
-
-1. In the Azure portal, locate your Azure Database for MySQL. On the **Overview** page, click **Restore** on the toolbar. The Restore page opens.
-
- :::image type="content" source="./media/tutorial-design-database-using-portal/1-restore-a-db.png" alt-text="10-1 restore a database":::
-
-2. Fill out the **Restore** form with the required information.
-
- :::image type="content" source="./media/tutorial-design-database-using-portal/2-restore-form.png" alt-text="10-2 restore form":::
-
- - **Restore point**: Select a point-in-time that you want to restore to, within the timeframe listed. Make sure to convert your local timezone to UTC.
- - **Restore to new server**: Provide a new server name you want to restore to.
- - **Location**: The region is the same as the source server, and cannot be changed.
- - **Pricing tier**: The pricing tier is the same as the source server, and cannot be changed.
-
-3. Click **OK** to [restore the server to a point in time](./howto-restore-server-portal.md) before the table was deleted. Restoring a server creates a new copy of the server, as of the point in time you specify.
-
-## Clean up resources
-
-If you don't expect to need these resources in the future, you can delete them by deleting the resource group or just delete the MySQL server. To delete the resource group, follow these steps:
-1. In the Azure portal, search for and select **Resource groups**.
-2. In the resource group list, choose the name of your resource group.
-3. In the Overview page of your resource group, select **Delete resource group**.
-4. In the confirmation dialog box, type the name of your resource group, and then select **Delete**.
-
-## Next steps
-
-In this tutorial, you used the Azure portal to learn how to:
-
-> [!div class="checklist"]
-> * Create an Azure Database for MySQL
-> * Configure the server firewall
-> * Use mysql command-line tool to create a database
-> * Load sample data
-> * Query data
-> * Update data
-> * Restore data
-
-> [!div class="nextstepaction"]
-> [How to connect applications to Azure Database for MySQL](./howto-connection-string.md)
mysql Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/tutorial-design-database-using-powershell.md
- Title: 'Tutorial: Design a server - Azure PowerShell - Azure Database for MySQL'
-description: This tutorial explains how to create and manage Azure Database for MySQL server and database using PowerShell.
- Previously updated: 04/29/2020
-# Tutorial: Design an Azure Database for MySQL using PowerShell
--
-Azure Database for MySQL is a relational database service in the Microsoft cloud based on MySQL
-Community Edition database engine. In this tutorial, you use PowerShell and other utilities to learn
-how to:
-
-> [!div class="checklist"]
-> - Create an Azure Database for MySQL
-> - Configure the server firewall
-> - Use [mysql command-line tool](https://dev.mysql.com/doc/refman/5.6/en/mysql.html) to create a database
-> - Load sample data
-> - Query data
-> - Update data
-> - Restore data
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
-
-If you choose to use PowerShell locally, this article requires that you install the Az PowerShell
-module and connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information
-about installing the Az PowerShell module, see
-[Install Azure PowerShell](/powershell/azure/install-az-ps).
-
-> [!IMPORTANT]
-> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az
-> PowerShell module using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`.
-> Once the Az.MySql PowerShell module is generally available, it becomes part of future Az
-> PowerShell module releases and available natively from within Azure Cloud Shell.
-
-If this is your first time using the Azure Database for MySQL service, you must register the
-**Microsoft.DBforMySQL** resource provider.
-
-```azurepowershell-interactive
-Register-AzResourceProvider -ProviderNamespace Microsoft.DBforMySQL
-```
--
-If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources
-should be billed. Select a specific subscription ID using the
-[Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
-
-```azurepowershell-interactive
-Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
-```
-
-## Create a resource group
-
-Create an [Azure resource group](../azure-resource-manager/management/overview.md)
-using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet. A
-resource group is a logical container in which Azure resources are deployed and managed as a group.
-
-The following example creates a resource group named **myresourcegroup** in the **West US** region.
-
-```azurepowershell-interactive
-New-AzResourceGroup -Name myresourcegroup -Location westus
-```
-
-## Create an Azure Database for MySQL server
-
-Create an Azure Database for MySQL server with the `New-AzMySqlServer` cmdlet. A server can manage
-multiple databases. Typically, a separate database is used for each project or for each user.
-
-The following example creates a MySQL server in the **West US** region named **mydemoserver** in the
-**myresourcegroup** resource group with a server admin login of **myadmin**. It is a Gen 5 server in
-the general-purpose pricing tier with 2 vCores and geo-redundant backups enabled. Document the
-password used in the first line of the example as this is the password for the MySQL server admin
-account.
-
-> [!TIP]
-> A server name maps to a DNS name and must be globally unique in Azure.
-
-```azurepowershell-interactive
-$Password = Read-Host -Prompt 'Please enter your password' -AsSecureString
-New-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -Sku GP_Gen5_2 -GeoRedundantBackup Enabled -Location westus -AdministratorUsername myadmin -AdministratorLoginPassword $Password
-```
-
-The **Sku** parameter value follows the convention **pricing-tier\_compute-generation\_vCores** as
-shown in the following examples.
-
-- `-Sku B_Gen5_1` maps to Basic, Gen 5, and 1 vCore. This option is the smallest SKU available.
-- `-Sku GP_Gen5_32` maps to General Purpose, Gen 5, and 32 vCores.
-- `-Sku MO_Gen5_2` maps to Memory Optimized, Gen 5, and 2 vCores.
-
-For information about valid **Sku** values by region and for tiers, see
-[Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md).
-
-Consider using the basic pricing tier if light compute and I/O are adequate for your workload.
-
-> [!IMPORTANT]
-> Servers created in the basic pricing tier cannot later be scaled to general-purpose or
-> memory-optimized and cannot be geo-replicated.
-
-## Configure a firewall rule
-
-Create an Azure Database for MySQL server-level firewall rule using the `New-AzMySqlFirewallRule`
-cmdlet. A server-level firewall rule allows an external application, such as the `mysql`
-command-line tool or MySQL Workbench, to connect to your server through the Azure Database for MySQL
-service firewall.
-
-The following example creates a firewall rule named **AllowMyIP** that allows connections from a
-specific IP address, 192.168.0.1. Substitute an IP address or range of IP addresses that correspond
-to the location that you are connecting from.
-
-```azurepowershell-interactive
-New-AzMySqlFirewallRule -Name AllowMyIP -ResourceGroupName myresourcegroup -ServerName mydemoserver -StartIPAddress 192.168.0.1 -EndIPAddress 192.168.0.1
-```
-
-> [!NOTE]
-> Connections to Azure Database for MySQL communicate over port 3306. If you try to connect from
-> within a corporate network, outbound traffic over port 3306 might not be allowed. In this
-> scenario, you can only connect to the server if your IT department opens port 3306.
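-
-If you're unsure whether outbound traffic on port 3306 is allowed, you can check reachability before trying to sign in. A minimal sketch using the Windows `Test-NetConnection` cmdlet from a Windows machine (the server name is the example server created earlier):
-
-```powershell
-# Returns TcpTestSucceeded = True when port 3306 is reachable from this machine.
-Test-NetConnection -ComputerName mydemoserver.mysql.database.azure.com -Port 3306
-```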
-
-## Get the connection information
-
-To connect to your server, you need to provide host information and access credentials. Use the
-following example to determine the connection information. Make a note of the values for
-**FullyQualifiedDomainName** and **AdministratorLogin**.
-
-```azurepowershell-interactive
-Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
- Select-Object -Property FullyQualifiedDomainName, AdministratorLogin
-```
-
-```Output
-FullyQualifiedDomainName AdministratorLogin
-
-mydemoserver.mysql.database.azure.com myadmin
-```
-
-## Connect to the server using the mysql command-line tool
-
-Connect to your server using the `mysql` command-line tool. To download and install the command-line
-tool, see [MySQL Community Downloads](https://dev.mysql.com/downloads/shell/). You can also access a
-pre-installed version of the `mysql` command-line tool in Azure Cloud Shell by selecting the **Try
-It** button on a code sample in this article. Other ways to access Azure Cloud Shell are to select
-the **>_** button on the upper-right toolbar in the Azure portal or by visiting
-[shell.azure.com](https://shell.azure.com/).
-
-```azurepowershell-interactive
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
-```
-
-## Create a database
-
-Once you're connected to the server, create a blank database.
-
-```sql
-mysql> CREATE DATABASE mysampledb;
-```
-
-At the prompt, run the following command to switch the connection to this newly created database:
-
-```sql
-mysql> USE mysampledb;
-```
-
-## Create tables in the database
-
-Now that you know how to connect to the Azure Database for MySQL database, complete some basic
-tasks.
-
-First, create a table and load it with some data. Let's create a table that stores inventory
-information.
-
-```sql
-CREATE TABLE inventory (
- id serial PRIMARY KEY,
- name VARCHAR(50),
- quantity INTEGER
-);
-```
-
-## Load data into the tables
-
-Now that you have a table, insert some data into it. At the open command prompt window, run the
-following query to insert some rows of data.
-
-```sql
-INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150);
-INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
-```
-
-Now you have two rows of sample data in the table you created earlier.
-
-## Query and update the data in the tables
-
-Execute the following query to retrieve information from the database table.
-
-```sql
-SELECT * FROM inventory;
-```
-
-You can also update the data in the tables.
-
-```sql
-UPDATE inventory SET quantity = 200 WHERE name = 'banana';
-```
-
-The row gets updated accordingly when you retrieve data.
-
-```sql
-SELECT * FROM inventory;
-```
-
-## Restore a database to a previous point in time
-
-You can restore the server to a previous point-in-time. The restored data is copied to a new server,
-and the existing server is left unchanged. For example, if a table is accidentally dropped, you can
-restore to the time just before the drop occurred. Then, you can retrieve the missing table and data from
-the restored copy of the server.
-
-To restore the server, use the `Restore-AzMySqlServer` PowerShell cmdlet.
-
-### Run the restore command
-
-To restore the server, run the following example from PowerShell.
-
-```azurepowershell-interactive
-$restorePointInTime = (Get-Date).AddMinutes(-10)
-Get-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
- Restore-AzMySqlServer -Name mydemoserver-restored -ResourceGroupName myresourcegroup -RestorePointInTime $restorePointInTime -UsePointInTimeRestore
-```
-
-When you restore a server to an earlier point-in-time, a new server is created. The original server
-and its databases from the specified point-in-time are copied to the new server.
-
-The location and pricing tier values for the restored server remain the same as the original server.
-
-After the restore process finishes, locate the new server and verify that the data is restored as
-expected. The new server has the same server admin login name and password that was valid for the
-existing server at the time the restore was started. The password can be changed from the new
-server's **Overview** page.
-
-The new server created during a restore does not have the VNet service endpoints that existed on the
-original server. These rules must be set up separately for the new server. Firewall rules from the
-original server are restored.
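-
-To confirm that the restore completed and the data is available, you can query the new server's properties, as earlier in this tutorial. A minimal sketch:
-
-```azurepowershell-interactive
-# The restored server is a separate resource with its own fully qualified domain name.
-Get-AzMySqlServer -Name mydemoserver-restored -ResourceGroupName myresourcegroup |
-    Select-Object -Property FullyQualifiedDomainName, AdministratorLogin
-```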
-
-## Clean up resources
-
-If the resources created in this tutorial aren't needed for another quickstart or tutorial, you
-can delete them by running the following example.
-
-> [!CAUTION]
-> The following example deletes the specified resource group and all resources contained within it.
-> If resources outside the scope of this tutorial exist in the specified resource group, they will
-> also be deleted.
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myresourcegroup
-```
-
-To delete only the server created in this tutorial without deleting the resource group, use the
-`Remove-AzMySqlServer` cmdlet.
-
-```azurepowershell-interactive
-Remove-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [How to back up and restore an Azure Database for MySQL server using PowerShell](howto-restore-server-powershell.md)
mysql Tutorial Provision Mysql Server Using Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/tutorial-provision-mysql-server-using-Azure-Resource-Manager-templates.md
- Title: 'Tutorial: Create Azure Database for MySQL - Azure Resource Manager template'
-description: This tutorial explains how to provision and automate Azure Database for MySQL server deployments using Azure Resource Manager template.
----- Previously updated : 12/02/2019---
-# Tutorial: Provision an Azure Database for MySQL server using Azure Resource Manager template
--
-The [Azure Database for MySQL REST API](/rest/api/mysql/) enables DevOps engineers to automate and integrate provisioning, configuration, and operations of managed MySQL servers and databases in Azure. The API allows the creation, enumeration, management, and deletion of MySQL servers and databases on the Azure Database for MySQL service.
-
-Azure Resource Manager leverages the underlying REST API to declare and program the Azure resources required for deployments at scale, aligning with the infrastructure-as-code concept. The template parameterizes the Azure resource name, SKU, network, firewall configuration, and settings, allowing it to be created one time and used multiple times. Azure Resource Manager templates can be easily created using the [Azure portal](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md) or [Visual Studio Code](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md?tabs=CLI). They enable application packaging, standardization, and deployment automation, which can be integrated into the DevOps CI/CD pipeline. For instance, if you want to quickly deploy a web app with an Azure Database for MySQL back end, you can perform the end-to-end deployment using this [QuickStart template](https://azure.microsoft.com/resources/templates/webapp-managed-mysql/) from the GitHub gallery.
-
-In this tutorial, you use an Azure Resource Manager template and other utilities to learn how to:
-
-> [!div class="checklist"]
-> * Create an Azure Database for MySQL server with VNet Service Endpoint using Azure Resource Manager template
-> * Use [mysql command-line tool](https://dev.mysql.com/doc/refman/5.6/en/mysql.html) to create a database
-> * Load sample data
-> * Query data
-> * Update data
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
-
-## Create an Azure Database for MySQL server with VNet Service Endpoint using Azure Resource Manager template
-
-To get the JSON template reference for an Azure Database for MySQL server, go to [Microsoft.DBforMySQL servers](/azure/templates/microsoft.dbformysql/servers) template reference. Below is the sample JSON template that can be used to create a new server running Azure Database for MySQL with VNet Service Endpoint.
-```json
-{
- "apiVersion": "2017-12-01",
- "type": "Microsoft.DBforMySQL/servers",
- "name": "string",
- "location": "string",
- "tags": "string",
- "properties": {
- "version": "string",
- "sslEnforcement": "string",
- "administratorLogin": "string",
- "administratorLoginPassword": "string",
- "storageProfile": {
- "storageMB": "string",
- "backupRetentionDays": "string",
- "geoRedundantBackup": "string"
- }
- },
- "sku": {
- "name": "string",
- "tier": "string",
- "capacity": "string",
- "family": "string"
- },
- "resources": [
- {
- "name": "AllowSubnet",
- "type": "virtualNetworkRules",
- "apiVersion": "2017-12-01",
- "properties": {
- "virtualNetworkSubnetId": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('virtualNetworkName'), parameters('subnetName'))]",
- "ignoreMissingVnetServiceEndpoint": true
- },
- "dependsOn": [
- "[concat('Microsoft.DBforMySQL/servers/', parameters('serverName'))]"
- ]
- }
- ]
-}
-```
-In this request, the values that need to be customized are:
-+ `name` - Specify the name of your MySQL Server (without domain name).
-+ `location` - Specify a valid Azure data center region for your MySQL Server. For example, westus2.
-+ `properties/version` - Specify the MySQL server version to deploy. For example, 5.6 or 5.7.
-+ `properties/administratorLogin` - Specify the MySQL admin login for the server. The admin sign-in name cannot be azure_superuser, admin, administrator, root, guest, or public.
-+ `properties/administratorLoginPassword` - Specify the password for the MySQL admin user specified above.
-+ `properties/sslEnforcement` - Specify Enabled/Disabled to enable/disable sslEnforcement.
-+ `storageProfile/storageMB` - Specify the max provisioned storage size required for the server in megabytes. For example, 5120.
-+ `storageProfile/backupRetentionDays` - Specify the desired backup retention period in days. For example, 7.
-+ `storageProfile/geoRedundantBackup` - Specify Enabled/Disabled depending on Geo-DR requirements.
-+ `sku/tier` - Specify Basic, GeneralPurpose, or MemoryOptimized tier for deployment.
-+ `sku/capacity` - Specify the vCore capacity. Possible values include 2, 4, 8, 16, 32 or 64.
-+ `sku/family` - Specify Gen5 to choose hardware generation for server deployment.
-+ `sku/name` - Specify TierPrefix_family_capacity. For example B_Gen5_1, GP_Gen5_16, MO_Gen5_32. See the [pricing tiers](./concepts-pricing-tiers.md) documentation to understand the valid values per region and per tier.
-+ `resources/properties/virtualNetworkSubnetId` - Specify the Azure identifier of the subnet in VNet where Azure MySQL server should be placed.
-+ `tags` (optional) - Specify optional tags as key-value pairs that you can use to categorize resources for billing and other purposes.
-
-If you're looking to build an Azure Resource Manager template to automate Azure Database for MySQL deployments for your organization, we recommend starting from the sample [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.dbformysql/managed-mysql-with-vnet/azuredeploy.json) in the Azure Quickstart GitHub gallery and building on top of it.
-
-If you're new to Azure Resource Manager templates and would like to try them, you can start by following these steps:
-+ Clone or download the Sample [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.dbformysql/managed-mysql-with-vnet/azuredeploy.json) from Azure Quickstart gallery.
-+ Modify the azuredeploy.parameters.json to update the parameter values based on your preference and save the file.
-+ Use the Azure CLI to create the Azure MySQL server by running the following commands.
-
-You can use Azure Cloud Shell in the browser or install the Azure CLI on your own computer to run the code blocks in this tutorial.
--
-```azurecli-interactive
-az login
-az group create -n ExampleResourceGroup -l westus2
-az deployment group create -g ExampleResourceGroup --template-file ${templateloc} --parameters ${parametersloc}
-```
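-
-If you prefer Azure PowerShell over the Azure CLI, the same deployment can be sketched with the `New-AzResourceGroupDeployment` cmdlet. The template and parameter file paths below are placeholders for your local copies:
-
-```azurepowershell-interactive
-# Create the resource group, then deploy the template with its parameter file.
-New-AzResourceGroup -Name ExampleResourceGroup -Location westus2
-New-AzResourceGroupDeployment -ResourceGroupName ExampleResourceGroup `
-    -TemplateFile ./azuredeploy.json `
-    -TemplateParameterFile ./azuredeploy.parameters.json
-```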
-
-## Get the connection information
-To connect to your server, you need to provide host information and access credentials.
-```azurecli-interactive
-az mysql server show --resource-group myresourcegroup --name mydemoserver
-```
-
-The result is in JSON format. Make a note of the **fullyQualifiedDomainName** and **administratorLogin**.
-```json
-{
- "administratorLogin": "myadmin",
- "administratorLoginPassword": null,
- "fullyQualifiedDomainName": "mydemoserver.mysql.database.azure.com",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.DBforMySQL/servers/mydemoserver",
- "location": "westus2",
- "name": "mydemoserver",
- "resourceGroup": "myresourcegroup",
- "sku": {
- "capacity": 2,
- "family": "Gen5",
- "name": "GP_Gen5_2",
- "size": null,
- "tier": "GeneralPurpose"
- },
- "sslEnforcement": "Enabled",
- "storageProfile": {
- "backupRetentionDays": 7,
- "geoRedundantBackup": "Disabled",
- "storageMb": 5120
- },
- "tags": null,
- "type": "Microsoft.DBforMySQL/servers",
- "userVisibleState": "Ready",
- "version": "5.7"
-}
-```
-
-## Connect to the server using mysql
-Use the [mysql command-line tool](https://dev.mysql.com/doc/refman/5.6/en/mysql.html) to establish a connection to your Azure Database for MySQL server. In this example, the command is:
-```cmd
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p
-```
-
-## Create a blank database
-Once you're connected to the server, create a blank database.
-```sql
-mysql> CREATE DATABASE mysampledb;
-```
-
-At the prompt, run the following command to switch the connection to this newly created database:
-```sql
-mysql> USE mysampledb;
-```
-
-## Create tables in the database
-Now that you know how to connect to the Azure Database for MySQL database, complete some basic tasks.
-
-First, create a table and load it with some data. Let's create a table that stores inventory information.
-```sql
-CREATE TABLE inventory (
- id serial PRIMARY KEY,
- name VARCHAR(50),
- quantity INTEGER
-);
-```
-
-## Load data into the tables
-Now that you have a table, insert some data into it. At the open command prompt window, run the following query to insert some rows of data.
-```sql
-INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150);
-INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
-```
-
-Now you have two rows of sample data in the table you created earlier.
-
-## Query and update the data in the tables
-Execute the following query to retrieve information from the database table.
-```sql
-SELECT * FROM inventory;
-```
-
-You can also update the data in the tables.
-```sql
-UPDATE inventory SET quantity = 200 WHERE name = 'banana';
-```
-
-The row gets updated accordingly when you retrieve data.
-```sql
-SELECT * FROM inventory;
-```
-
-## Clean up resources
-
-When it's no longer needed, delete the resource group, which deletes the resources in the resource group.
-
-# [Portal](#tab/azure-portal)
-
-1. In the [Azure portal](https://portal.azure.com), search for and select **Resource groups**.
-
-2. In the resource group list, choose the name of your resource group.
-
-3. In the **Overview** page of your resource group, select **Delete resource group**.
-
-4. In the confirmation dialog box, type the name of your resource group, and then select **Delete**.
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
-Remove-AzResourceGroup -Name $resourceGroupName
-Write-Host "Press [ENTER] to continue..."
-```
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-echo "Enter the Resource Group name:" &&
-read resourceGroupName &&
-az group delete --name $resourceGroupName &&
-echo "Press [ENTER] to continue ..."
-```
---
-## Next steps
-In this tutorial, you learned how to:
-> [!div class="checklist"]
-> * Create an Azure Database for MySQL server with VNet Service Endpoint using Azure Resource Manager template
-> * Use the mysql command-line tool to create a database
-> * Load sample data
-> * Query data
-> * Update data
-
-> [!div class="nextstepaction"]
-> [How to connect applications to Azure Database for MySQL](./howto-connection-string.md)
openshift Howto Create Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-service-principal.md
This article explains how to create and use a service principal for your Azure R
The following sections explain how to use the Azure CLI to create a service principal for your Azure Red Hat OpenShift cluster
-## Prerequisite
+## Prerequisites - Azure CLI
If you're using the Azure CLI, you'll need Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-
+
+Follow the steps in [Use the portal to create an Azure AD application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md) to create a service principal. Be sure to save the client ID (the appId) and the client secret.
+
+## Create a resource group
+
+```azurecli-interactive
az aro create \
The following sections explain how to use the Azure portal to create a service principal for your Azure Red Hat OpenShift cluster.
+## Prerequisite - Azure portal
+
+Follow the steps in [Use the portal to create an Azure AD application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md) to create a service principal. Be sure to save the client ID (the appId) and the client secret.
+ ## Create a service principal - Azure portal To create a service principal using the Azure portal, complete the following steps.
openshift Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/quickstart-portal.md
Title: Deploy an Azure Red Hat OpenShift cluster using the Azure portal description: Deploy an Azure Red Hat OpenShift cluster using the Azure portal-+
Last updated 11/30/2021
-# Quickstart: Deploy an Azure Red Hat OpenShift (ARO) cluster using the Azure portal
+# Quickstart: Deploy an Azure Red Hat OpenShift cluster using the Azure portal
-Azure Red Hat OpenShift (ARO) is a managed OpenShift service that lets you quickly deploy and manage clusters. In this quickstart, we'll deploy an Azure Red Hat OpenShift cluster using the Azure portal.
+Azure Red Hat OpenShift is a managed OpenShift service that lets you quickly deploy and manage clusters. In this quickstart, we'll deploy an Azure Red Hat OpenShift cluster using the Azure portal.
## Prerequisites
-Sign in to the [Azure portal](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
+
+Follow the steps in [Use the portal to create an Azure AD application and service principal that can access resources](/active-directory/develop/howto-create-service-principal-portal) to create a service principal. Be sure to save the client ID (the appId) and the client secret.
## Create an Azure Red Hat OpenShift cluster

1. On the Azure portal menu or from the **Home** page, select **All Services** under three horizontal bars on the top left hand page.
2. Select **Containers** > **Azure Red Hat OpenShift**.
-3. On the **Basics** page, configure the following options:
+3. On the **Basics** tab, configure the following options:
* **Project details**:
    * Select an **Azure Subscription**.
    * Select or create an **Azure Resource group**, such as *myResourceGroup*.
* **Cluster details**:
- * Select a **Region** for the ARO cluster.
+ * Select a **Region** for the Azure Red Hat OpenShift cluster.
    * Enter an OpenShift **cluster name**, such as *myAROCluster*.
    * Enter a **Domain name**.
    * Select **Master VM Size** and **Worker VM Size**.
-![**Basics** tab on Azure portal](./media/Basics.png)
+ ![**Basics** tab on Azure portal](./media/Basics.png)
+
+4. On the **Authentication** tab of the **Azure Red Hat OpenShift** dialog, complete the following sections.
+
+ In the **Service principal information** section:
+
+ - **Service principal client ID** is your appId.
+ - **Service principal client secret** is the service principal's decrypted Secret value.
+
+ In the **Cluster pull secret** section:
+
+ - **Pull secret** is the decrypted value of your cluster's pull secret. If you don't have a pull secret, leave this field blank.
-4. On the **Authentication page**, configure the following options:
- a. Under Application service principal, **Select Existing** or **Create New** from the Service Principal Type selector
- **Note**: Application service principal is the service principal associated with your Azure Active Directory application. For more information, consult the Azure product
- documentation
- b. Under **Cluster pull secret** Enter pull secret
- **Note**: A Red Hat pull secret enables your cluster to access Red Hat container registries along with additional content.
- c. Search for Azure Red Hat Open Shift RP and select that one.
- d. Azure Red Hat OpenShift Resource Provider is a system variable. The default value of the Azure Red Hat OpenShift RP should be selected, so there is no need to change the
- selector. An entry for the password field is not needed.
+ :::image type="content" source="./media/openshift-service-principal-portal.png" alt-text="Screenshot that shows how to use the Authentication tab with Azure portal to create a service principal." lightbox="./media/openshift-service-principal-portal.png":::
-![**Authentication** tab on Azure portal](./media/Authentication.png)
+5. On the **Networking** tab, which follows, make sure to configure the required options.
-5. On the **Networking** tab make sure to configure:
- **Note**: Azure Red Hat OpenShift clusters running OpenShift 4 require a virtual network with two empty subnets: one for the master and one for worker nodes.
+ **Note**: Azure Red Hat OpenShift clusters running OpenShift 4 require a virtual network with two empty subnets: one for the control plane and one for worker nodes.
![**Networking** tab on Azure portal](./media/Networking.png)
-6. On the **Tags** section, add tags to organize your resources.
+6. On the **Tags** tab, add tags to organize your resources.
![**Tags** tab on Azure portal](./media/Tags.png)
-7. Click **Review + create** and then **Create** when validation completes.
+7. Click **Review + create** and then **Create** when validation completes.
![**Review + create** tab on Azure portal](./media/Review+Create.png)
-8. It takes approximately 35- 45 minutes to create the Azure Red Hat OpenShift cluster. When your deployment is complete, navigate to your resource by either:
+8. It takes approximately 35 to 45 minutes to create the Azure Red Hat OpenShift cluster. When your deployment is complete, navigate to your resource by either:
    * Clicking **Go to resource**, or
    * Browsing to the Azure Red Hat OpenShift cluster resource group and selecting the Azure Red Hat OpenShift resource.
    * Browsing for *myResourceGroup* and selecting the *myAROCluster* resource, as shown in the example cluster dashboard below.
postgresql Application Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/application-best-practices.md
- Title: App development best practices - Azure Database for PostgreSQL
-description: Learn about best practices for building an app by using Azure Database for PostgreSQL.
------ Previously updated : 12/10/2020--
-# Best practices for building an application with Azure Database for PostgreSQL
-
-Here are some best practices to help you build a cloud-ready application by using Azure Database for PostgreSQL. These best practices can reduce development time for your app.
-
-## Configuration of application and database resources
-
-### Keep the application and database in the same region
-Make sure all your dependencies are in the same region when deploying your application in Azure. Spreading instances across regions or availability zones creates network latency, which might affect the overall performance of your application.
-
-### Keep your PostgreSQL server secure
-Configure your PostgreSQL server to be [secure](./concepts-security.md) and not accessible publicly. Use one of these options to secure your server:
-- [Firewall rules](./concepts-firewall-rules.md)
-- [Virtual networks](./concepts-data-access-and-security-vnet.md)
-- [Azure Private Link](./concepts-data-access-and-security-private-link.md)
-
-For security, you must always connect to your PostgreSQL server over SSL and configure your PostgreSQL server and your application to use TLS 1.2. See [How to configure SSL/TLS](./concepts-ssl-connection-security.md).
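-
-As one example, SSL enforcement and the minimum TLS version can be set from PowerShell. A hedged sketch, assuming the Az.PostgreSql module is installed (parameter names may vary across module versions):
-
-```azurepowershell-interactive
-# Require SSL connections and a minimum of TLS 1.2 on an existing single server.
-Update-AzPostgreSqlServer -ResourceGroupName myresourcegroup -Name mydemoserver `
-    -SslEnforcement Enabled -MinimalTlsVersion TLS1_2
-```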
-
-### Tune your server parameters
-For read-heavy workloads, tuning the server parameters `tmp_table_size` and `max_heap_table_size` can help optimize performance. To calculate the values required for these variables, look at the total per-connection memory values and the base memory. The sum of the per-connection memory parameters, excluding `tmp_table_size`, combined with the base memory accounts for the total memory of the server.
-
-### Use environment variables for connection information
-Do not save your database credentials in your application code. Depending on the front-end application, follow the guidance to set up environment variables. For App Service, see [how to configure app settings](../app-service/configure-common.md#configure-app-settings), and for Azure Kubernetes Service, see [how to use Kubernetes secrets](https://kubernetes.io/docs/concepts/configuration/secret/).
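-
-For example, with Azure App Service you can keep connection details in app settings, which the platform exposes to your code as environment variables. A minimal sketch, assuming the Az.Websites module and a hypothetical web app named contoso-app; note that `Set-AzWebApp -AppSettings` replaces the whole settings collection:
-
-```azurepowershell-interactive
-# Store the database host and user as app settings instead of hard-coding them.
-Set-AzWebApp -ResourceGroupName myresourcegroup -Name contoso-app -AppSettings @{
-    "DB_HOST" = "mydemoserver.postgres.database.azure.com"
-    "DB_USER" = "myadmin@mydemoserver"
-}
-```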
-
-## Performance and resiliency
-Here are a few tools and practices that you can use to help debug performance issues with your application.
-
-### Use Connection Pooling
-With connection pooling, a fixed set of connections is established at startup time and maintained. This also helps reduce memory fragmentation on the server caused by dynamic new connections established on the database server. Connection pooling can be configured on the application side if the app framework or database driver supports it. If that isn't supported, the other recommended option is to use a proxy connection pooler service like [PgBouncer](https://pgbouncer.github.io/) or [Pgpool](https://pgpool.net/mediawiki/index.php/Main_Page) running outside the application and connecting to the database server. Both PgBouncer and Pgpool are community-based tools that work with Azure Database for PostgreSQL.
-
-### Retry logic to handle transient errors
-Your application might experience transient errors where connections to the database are dropped or lost intermittently. In such situations, the server is up and running after one to two retries in 5 to 10 seconds. A good practice is to wait for 5 seconds before your first retry. Then follow each retry by increasing the wait gradually, up to 60 seconds. Limit the maximum number of retries; when the limit is reached, your application should consider the operation failed so that you can investigate further. See [How to troubleshoot connection errors](./concepts-connectivity.md) to learn more.
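-
-The policy described above can be sketched as a simple backoff loop. `Connect-ToDatabase` is a hypothetical placeholder for your application's connection logic:
-
-```azurepowershell-interactive
-$maxRetries = 5
-$delay = 5   # Wait 5 seconds before the first retry.
-for ($attempt = 1; $attempt -le $maxRetries; $attempt++) {
-    try {
-        Connect-ToDatabase   # Placeholder: replace with your connection call.
-        break
-    }
-    catch {
-        if ($attempt -eq $maxRetries) { throw }   # Give up and investigate further.
-        Start-Sleep -Seconds $delay
-        $delay = [Math]::Min($delay * 2, 60)      # Back off gradually, up to 60 seconds.
-    }
-}
-```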
-
-### Enable read replication to mitigate failovers
-You can use [read replicas](./concepts-read-replicas.md) for failover scenarios. When you're using read replicas, no automated failover between source and replica servers occurs. You'll notice a lag between the source and the replica because the replication is asynchronous. Network lag can be influenced by many factors, like the size of the workload running on the source server and the latency between datacenters. In most cases, replica lag ranges from a few seconds to a couple of minutes.
--
-## Database deployment
-
-### Configure CI/CD deployment pipeline
-Occasionally, you need to deploy changes to your database. In such cases, you can use continuous integration (CI) through [GitHub actions](https://github.com/Azure/postgresql/blob/master/README.md) for your PostgreSQL server to update the database by running a custom script against it.
-
-### Define manual database deployment process
-During manual database deployment, follow these steps to minimize downtime or reduce the risk of failed deployment:
-
-- Create a copy of a production database on a new database by using pg_dump.
-- Update the new database with your new schema changes or updates needed for your database.
-- Put the production database in a read-only state. You should not have write operations on the production database until deployment is completed.
-- Test your application with the newly updated database from step 1.
-- Deploy your application changes and make sure the application is now using the new database that has the latest updates.
-- Keep the old production database so that you can roll back the changes. You can then evaluate to either delete the old production database or export it on Azure Storage if needed.
-
-> [!NOTE]
-> If the application is like an e-commerce app and you can't put it in read-only state, deploy the changes directly on the production database after making a backup. These changes should occur during off-peak hours with low traffic to the app to minimize the impact, because some users might experience failed requests. Make sure your application code also handles any failed requests.
-
-## Database schema and queries
-Here are a few tips to keep in mind when you build your database schema and your queries.
-
-### Use BIGINT or UUID for Primary Keys
-When building a custom application, some frameworks may use `INT` instead of `BIGINT` for primary keys. When you use `INT`, you run the risk that the values in your database exceed the storage capacity of the `INT` data type. Making this change to an existing production application can be time consuming and cost more development time. Another option is to use [UUID](https://www.postgresql.org/docs/current/datatype-uuid.html) for primary keys. This identifier uses an auto-generated 128-bit string, for example `a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11`. Learn more about [PostgreSQL data types](https://www.postgresql.org/docs/8.1/datatype.html).
-
-### Use indexes
-
-There are many types of [indexes](https://www.postgresql.org/docs/9.1/indexes.html) in Postgres, which can be used in different ways. Using an index helps the server find and retrieve specific rows much faster than it could without an index. But indexes also add overhead to the database server, so avoid having too many indexes.
-
-### Use autovacuum
-You can optimize your server with autovacuum on an Azure Database for PostgreSQL server. PostgreSQL allows greater database concurrency, but every update results in an insert and a delete. For deletes, the records are soft-marked and purged later. To carry out these tasks, PostgreSQL runs a vacuum job. If you don't vacuum from time to time, the dead tuples that accumulate can result in:
-
-- Data bloat, such as larger databases and tables.
-- Larger suboptimal indexes.
-- Increased I/O.
-
-Learn more about [how to optimize with autovacuum](howto-optimize-autovacuum.md).
-
-### Use pg_stat_statements
-`pg_stat_statements` is a PostgreSQL extension that's enabled by default in Azure Database for PostgreSQL. The extension provides a means to track execution statistics for all SQL statements executed by a server. See [how to use pg_stat_statements](howto-optimize-query-stats-collection.md).
--
-### Use the Query Store
-The [Query Store](./concepts-query-store.md) feature in Azure Database for PostgreSQL provides a more effective method to track query statistics. We recommend this feature as an alternative to using pg_stats_statements.
-
-### Optimize bulk inserts and use transient data
-If you have workload operations that involve transient data or that insert large datasets in bulk, consider using unlogged tables. Regular tables provide atomicity, consistency, isolation, and durability (the ACID properties) by default; unlogged tables trade durability for significantly faster writes, which makes them a good fit for transient data. See [how to optimize bulk inserts](howto-optimize-bulk-inserts.md).
-
-## Next steps
-[Postgres Guide](http://postgresguide.com/)
postgresql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concept-reserved-pricing.md
- Title: Reserved compute pricing - Azure Database for PostgreSQL
-description: Prepay for Azure Database for PostgreSQL compute resources with reserved capacity
------ Previously updated : 10/06/2021--
-# Prepay for Azure Database for PostgreSQL compute resources with reserved capacity
--
-Azure Database for PostgreSQL now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Database for PostgreSQL reserved capacity, you make an upfront commitment on your PostgreSQL server for a one- or three-year period to get a significant discount on the compute costs. To purchase Azure Database for PostgreSQL reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term. </br>
-
-## How does the instance reservation work?
-You don't need to assign the reservation to specific Azure Database for PostgreSQL servers. Already running Azure Database for PostgreSQL servers (or ones that are newly deployed) automatically get the benefit of reserved pricing. By purchasing a reservation, you're pre-paying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure Database for PostgreSQL compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. A reservation does not cover software, networking, or storage charges associated with the PostgreSQL database servers. At the end of the reservation term, the billing benefit expires, and the Azure Database for PostgreSQL servers are billed at the pay-as-you-go price. Reservations do not auto-renew. For pricing information, see the [Azure Database for PostgreSQL reserved capacity offering](https://azure.microsoft.com/pricing/details/postgresql/). </br>
-
-> [!IMPORTANT]
-> Reserved capacity pricing is available for the Azure Database for PostgreSQL in [Single server](./overview.md#azure-database-for-postgresqlsingle-server), [Flexible Server](flexible-server/overview.md), and [Hyperscale Citus](./overview.md#azure-database-for-postgresql--hyperscale-citus) deployment options. For information about RI pricing on Hyperscale (Citus), see [this page](hyperscale/concepts-reserved-pricing.md).
-
-You can buy Azure Database for PostgreSQL reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](../cost-management-billing/reservations/prepare-buy-reservation.md). To buy the reserved capacity:
-
-* You must be in the owner role for at least one Enterprise or individual subscription with pay-as-you-go rates.
-* For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin on the subscription.
-* For Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Azure Database for PostgreSQL reserved capacity. </br>
-
-For details on how enterprise customers and Pay-As-You-Go customers are charged for reservation purchases, see [understand Azure reservation usage for your Enterprise enrollment](../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md) and [understand Azure reservation usage for your Pay-As-You-Go subscription](../cost-management-billing/reservations/understand-reserved-instance-usage.md).
-
-## Reservation exchanges and refunds
-
-You can exchange a reservation for another reservation of the same type. You can also exchange a reservation from Azure Database for PostgreSQL - Single Server with Flexible Server. It's also possible to refund a reservation if you no longer need it. The Azure portal can be used to exchange or refund a reservation. For more information, see [Self-service exchanges and refunds for Azure Reservations](../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md).
-
-## Reservation discount
-
-You may save up to 65% on compute costs with reserved instances. In order to find the discount for your case, please visit the [Reservation blade on the Azure portal](https://aka.ms/reservations) and check the savings per pricing tier and per region. Reserved instances help you manage your workloads, budget, and forecast better with an upfront payment for a one-year or three-year term. You can also exchange or cancel reservations as business needs change.
-
-## Determine the right server size before purchase
-
-The size of the reservation should be based on the total amount of compute used by the existing or soon-to-be-deployed servers within a specific region that use the same performance tier and hardware generation.</br>
-
-For example, let's suppose that you are running one general purpose, Gen5 - 32 vCore PostgreSQL database, and two memory-optimized, Gen5 - 16 vCore PostgreSQL databases. Further, let's suppose that you plan to deploy within the next month an additional general purpose, Gen5 - 8 vCore database server, and one memory-optimized, Gen5 - 32 vCore database server. Let's suppose that you know that you will need these resources for at least one year. In this case, you should purchase a 40 (32 + 8) vCore, one-year reservation for single database general purpose - Gen5 and a 64 (2x16 + 32) vCore, one-year reservation for single database memory optimized - Gen5.
--
-## Buy Azure Database for PostgreSQL reserved capacity
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Select **All services** > **Reservations**.
-3. Select **Add** and then in the Purchase reservations pane, select **Azure Database for PostgreSQL** to purchase a new reservation for your PostgreSQL databases.
-4. Fill in the required fields. Existing or new databases that match the attributes you select qualify to get the reserved capacity discount. The actual number of your Azure Database for PostgreSQL servers that get the discount depends on the scope and quantity selected.
----
-The following table describes required fields.
-
-| Field | Description |
-| : | :- |
-| Subscription | The subscription used to pay for the Azure Database for PostgreSQL reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for PostgreSQL reserved capacity reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.
-| Scope | The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select: </br></br> **Shared**, the vCore reservation discount is applied to Azure Database for PostgreSQL servers running in any subscriptions within your billing context. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For Pay-As-You-Go customers, the shared scope is all Pay-As-You-Go subscriptions created by the account administrator.</br></br>**Management group**, the reservation discount is applied to Azure Database for PostgreSQL running in any subscriptions that are a part of both the management group and billing scope.</br></br> **Single subscription**, the vCore reservation discount is applied to Azure Database for PostgreSQL servers in this subscription. </br></br> **Single resource group**, the reservation discount is applied to Azure Database for PostgreSQL servers in the selected subscription and the selected resource group within that subscription.
-| Region | The Azure region that's covered by the Azure Database for PostgreSQL reserved capacity reservation.
-| Deployment Type | The Azure Database for PostgreSQL resource type that you want to buy the reservation for.
-| Performance Tier | The service tier for the Azure Database for PostgreSQL servers.
-| Term | One year
-| Quantity | The amount of compute resources being purchased within the Azure Database for PostgreSQL reserved capacity reservation. The quantity is a number of vCores in the selected Azure region and performance tier that are being reserved and will get the billing discount. For example, if you are running or planning to run Azure Database for PostgreSQL servers with the total compute capacity of Gen5 16 vCores in the East US region, then you would specify the quantity as 16 to maximize the benefit for all servers.
-
-## Reserved instances API support
-
-Use Azure APIs to programmatically get information for your organization about Azure service or software reservations. For example, use the APIs to:
-- Find reservations to buy
-- Buy a reservation
-- View purchased reservations
-- View and manage reservation access
-- Split or merge reservations
-- Change the scope of reservations
-
-For more information, see [APIs for Azure reservation automation](../cost-management-billing/reservations/reservation-apis.md).
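-
-If you'd rather script these tasks than call the REST APIs directly, the Az.Reservations PowerShell module wraps the same operations. A minimal sketch, assuming that module is installed:
-
-```azurepowershell-interactive
-# List reservation orders and the reservations they contain.
-Get-AzReservationOrder | ForEach-Object {
-    Get-AzReservation -ReservationOrderId $_.Name
-}
-```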
-
-## vCore size flexibility
-
-vCore size flexibility helps you scale up or down within a performance tier and region, without losing the reserved capacity benefit. If you scale to higher vCores than your reserved capacity, you will be billed for the excess vCores using pay-as-you-go pricing.
-
-## How to view reserved instance purchase details
-
-You can view your reserved instance purchase details via the [Reservations menu on the left side of the Azure portal](https://aka.ms/reservations). For more information, see [How a reservation discount is applied to Azure Database for PostgreSQL](../cost-management-billing/reservations/understand-reservation-charges-postgresql.md).
-
-## Reserved instance expiration
-
-You'll receive email notifications, the first one 30 days prior to reservation expiry and another one at expiration. Once the reservation expires, deployed servers continue to run and are billed at the pay-as-you-go rate. For more information, see [Reserved Instances for Azure Database for PostgreSQL](../cost-management-billing/reservations/understand-reservation-charges-postgresql.md).
-
-## Need help? Contact us
-
-If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-
-## Next steps
-
-The vCore reservation discount is applied automatically to the number of Azure Database for PostgreSQL servers that match the Azure Database for PostgreSQL reserved capacity reservation scope and attributes. You can update the scope of the Azure Database for PostgreSQL reserved capacity reservation through the Azure portal, PowerShell, the CLI, or the API.
-
-To learn more about Azure Reservations, see the following articles:
-
-* [What are Azure Reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md)?
-* [Manage Azure Reservations](../cost-management-billing/reservations/manage-reserved-vm-instance.md)
-* [Understand Azure Reservations discount](../cost-management-billing/reservations/understand-reservation-charges.md)
-* [Understand reservation usage for your Pay-As-You-Go subscription](../cost-management-billing/reservations/understand-reservation-charges-postgresql.md)
-* [Understand reservation usage for your Enterprise enrollment](../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
-* [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
postgresql Concepts Aad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-aad-authentication.md
- Title: Active Directory authentication - Azure Database for PostgreSQL - Single Server
-description: Learn about the concepts of Azure Active Directory for authentication with Azure Database for PostgreSQL - Single Server
----- Previously updated : 07/23/2020--
-# Use Azure Active Directory for authenticating with PostgreSQL
-
-Microsoft Azure Active Directory (Azure AD) authentication is a mechanism of connecting to Azure Database for PostgreSQL using identities defined in Azure AD.
-With Azure AD authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management.
-
-Benefits of using Azure AD include:
-
-- Authentication of users across Azure Services in a uniform way
-- Management of password policies and password rotation in a single place
-- Multiple forms of authentication supported by Azure Active Directory, which can eliminate the need to store passwords
-- Customers can manage database permissions using external (Azure AD) groups.
-- Azure AD authentication uses PostgreSQL database roles to authenticate identities at the database level
-- Support of token-based authentication for applications connecting to Azure Database for PostgreSQL
-
-To configure and use Azure Active Directory authentication, use the following process:
-
-1. Create and populate Azure Active Directory with user identities as needed.
-2. Optionally associate or change the Active Directory currently associated with your Azure subscription.
-3. Create an Azure AD administrator for the Azure Database for PostgreSQL server.
-4. Create database users in your database mapped to Azure AD identities.
-5. Connect to your database by retrieving a token for an Azure AD identity and logging in.
-
-> [!NOTE]
-> To learn how to create and populate Azure AD, and then configure Azure AD with Azure Database for PostgreSQL, see [Configure and sign in with Azure AD for Azure Database for PostgreSQL](howto-configure-sign-in-aad-authentication.md).
-
-## Architecture
-
-The following high-level diagram summarizes how authentication works using Azure AD authentication with Azure Database for PostgreSQL. The arrows indicate communication pathways.
-
-![authentication flow][1]
-
-## Administrator structure
-
-When using Azure AD authentication, there are two Administrator accounts for the PostgreSQL server; the original PostgreSQL administrator and the Azure AD administrator. Only the administrator based on an Azure AD account can create the first Azure AD contained database user in a user database. The Azure AD administrator login can be an Azure AD user or an Azure AD group. When the administrator is a group account, it can be used by any group member, enabling multiple Azure AD administrators for the PostgreSQL server. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the PostgreSQL server. Only one Azure AD administrator (a user or group) can be configured at any time.
-
-![admin structure][2]
-
-## Permissions
-
-To create new users that can authenticate with Azure AD, you must have the `azure_ad_admin` role in the database. This role is assigned by configuring the Azure AD Administrator account for a specific Azure Database for PostgreSQL server.
-
-To create a new Azure AD database user, you must connect as the Azure AD administrator. This is demonstrated in [Configure and Login with Azure AD for Azure Database for PostgreSQL](howto-configure-sign-in-aad-authentication.md).
-
-Any Azure AD authentication is only possible if the Azure AD admin was created for Azure Database for PostgreSQL. If the Azure Active Directory admin was removed from the server, existing Azure Active Directory users created previously can no longer connect to the database using their Azure Active Directory credentials.
-
-## Connecting using Azure AD identities
-
-Azure Active Directory authentication supports the following methods of connecting to a database using Azure AD identities:
-
-- Azure Active Directory Password
-- Azure Active Directory Integrated
-- Azure Active Directory Universal with MFA
-- Using Active Directory Application certificates or client secrets
-- [Managed Identity](howto-connect-with-managed-identity.md)
-
-Once you have authenticated against the Active Directory, you then retrieve a token. This token is your password for logging in.
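-
-For example, with Azure PowerShell you can retrieve a token for the signed-in identity and pass it to psql as the password. A minimal sketch, assuming the Az.Accounts module, a sign-in via `Connect-AzAccount`, and a hypothetical server and user; the resource URL shown is the one used for Azure AD authentication to Azure Database for PostgreSQL:
-
-```azurepowershell-interactive
-# Get an access token for Azure Database for PostgreSQL and use it as the password.
-$token = Get-AzAccessToken -ResourceUrl "https://ossrdbms-aad.database.windows.net"
-$env:PGPASSWORD = $token.Token
-psql "host=mydemoserver.postgres.database.azure.com user=myaaduser@mydemoserver dbname=postgres sslmode=require"
-```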
-
-Please note that management operations, such as adding new users, are only supported for Azure AD user roles at this point.
-
-> [!NOTE]
-> For more details on how to connect with an Active Directory token, see [Configure and sign in with Azure AD for Azure Database for PostgreSQL](howto-configure-sign-in-aad-authentication.md).
-
-## Additional considerations
-
-- To enhance manageability, we recommend you provision a dedicated Azure AD group as an administrator.
-- Only one Azure AD administrator (a user or group) can be configured for an Azure Database for PostgreSQL server at any time.
-- Only an Azure AD administrator for PostgreSQL can initially connect to the Azure Database for PostgreSQL using an Azure Active Directory account. The Active Directory administrator can configure subsequent Azure AD database users.
-- If a user is deleted from Azure AD, that user will no longer be able to authenticate with Azure AD, and therefore it will no longer be possible to acquire an access token for that user. In this case, although the matching role will still be in the database, it will not be possible to connect to the server with that role.
-> [!NOTE]
-> Login with the deleted Azure AD user can still be done till the token expires (up to 60 minutes from token issuing). If you also remove the user from Azure Database for PostgreSQL this access will be revoked immediately.
-- If the Azure AD admin is removed from the server, the server will no longer be associated with an Azure AD tenant, and therefore all Azure AD logins will be disabled for the server. Adding a new Azure AD admin from the same tenant will reenable Azure AD logins.
-- Azure Database for PostgreSQL matches access tokens to the Azure Database for PostgreSQL role using the user's unique Azure AD user ID, as opposed to using the username. This means that if an Azure AD user is deleted in Azure AD and a new user created with the same name, Azure Database for PostgreSQL considers that a different user. Therefore, if a user is deleted from Azure AD and then a new user with the same name added, the new user will not be able to connect with the existing role. To allow that, the Azure Database for PostgreSQL Azure AD admin must revoke and then grant the role "azure_ad_user" to the user to refresh the Azure AD user ID.
-
-## Next steps
-
-- To learn how to create and populate Azure AD, and then configure Azure AD with Azure Database for PostgreSQL, see [Configure and sign in with Azure AD for Azure Database for PostgreSQL](howto-configure-sign-in-aad-authentication.md).
-- For an overview of logins, users, and database roles in Azure Database for PostgreSQL, see [Create users in Azure Database for PostgreSQL - Single Server](howto-create-users.md).
-
-<!--Image references-->
-
-[1]: ./media/concepts-aad-authentication/authentication-flow.png
-[2]: ./media/concepts-aad-authentication/admin-structure.png
postgresql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-aks.md
- Title: Connect to Azure Kubernetes Service - Azure Database for PostgreSQL - Single Server
-description: Learn about connecting Azure Kubernetes Service (AKS) with Azure Database for PostgreSQL - Single Server
------ Previously updated : 07/14/2020--
-# Connecting Azure Kubernetes Service and Azure Database for PostgreSQL - Single Server
-
-Azure Kubernetes Service (AKS) provides a managed Kubernetes cluster you can use in Azure. Below are some options to consider when using AKS and Azure Database for PostgreSQL together to create an application.
-
-## Accelerated networking
-Use accelerated networking-enabled underlying VMs in your AKS cluster. When accelerated networking is enabled on a VM, there is lower latency, reduced jitter, and decreased CPU utilization on the VM. Learn more about how accelerated networking works, the supported OS versions, and supported VM instances for [Linux](../virtual-network/create-vm-accelerated-networking-cli.md).
-
-From November 2018, AKS supports accelerated networking on those supported VM instances. Accelerated networking is enabled by default on new AKS clusters that use those VMs.
-
-You can confirm whether your AKS cluster has accelerated networking:
-1. Go to the Azure portal and select your AKS cluster.
-2. Select the Properties tab.
-3. Copy the name of the **Infrastructure Resource Group**.
-4. Use the portal search bar to locate and open the infrastructure resource group.
-5. Select a VM in that resource group.
-6. Go to the VM's **Networking** tab.
-7. Confirm whether **Accelerated networking** is 'Enabled.'
-
-Or through the Azure CLI using the following two commands:
-```azurecli
-az aks show --resource-group myResourceGroup --name myAKSCluster --query "nodeResourceGroup"
-```
-The output will be the generated resource group that AKS creates containing the network interface. Take the "nodeResourceGroup" name and use it in the next command. **EnableAcceleratedNetworking** will either be true or false:
-```azurecli
-az network nic list --resource-group nodeResourceGroup -o table
-```
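-
-Equivalently, if you use Azure PowerShell, you can read the same property from the network interfaces in the infrastructure resource group. A minimal sketch, assuming the Az.Network module and the node resource group name returned by the first command above:
-
-```azurepowershell-interactive
-# EnableAcceleratedNetworking is True when accelerated networking is on.
-Get-AzNetworkInterface -ResourceGroupName nodeResourceGroup |
-    Select-Object -Property Name, EnableAcceleratedNetworking
-```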
-
-## Connection pooling
-A connection pooler minimizes the cost and time associated with creating and closing new connections to the database. The pool is a collection of connections that can be reused.
-
-There are multiple connection poolers you can use with PostgreSQL. One of these is [PgBouncer](https://pgbouncer.github.io/). In the Microsoft Container Registry, we provide a lightweight containerized PgBouncer that can be used in a sidecar to pool connections from AKS to Azure Database for PostgreSQL. Visit the [docker hub page](https://hub.docker.com/r/microsoft/azureossdb-tools-pgbouncer/) to learn how to access and use this image.
-
-## Next steps
-
-Create an AKS cluster [using the Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md).
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-audit.md
- Title: Audit logging in Azure Database for PostgreSQL - Single Server
-description: Concepts for pgAudit audit logging in Azure Database for PostgreSQL - Single Server.
-Previously updated : 01/28/2020
-# Audit logging in Azure Database for PostgreSQL - Single Server
-
-Audit logging of database activities in Azure Database for PostgreSQL - Single Server is available through the PostgreSQL Audit extension, [pgAudit](https://www.pgaudit.org/). The pgAudit extension provides detailed session and object audit logging.
-
-> [!NOTE]
-> The pgAudit extension is in preview on Azure Database for PostgreSQL. It can be enabled on general purpose and memory-optimized servers only.
-
-If you want Azure resource-level logs for operations like compute and storage scaling, see [Overview of Azure platform logs](../azure-monitor/essentials/platform-logs-overview.md).
-
-## Usage considerations
-
-By default, pgAudit log statements are emitted along with your regular log statements by using the Postgres standard logging facility. In Azure Database for PostgreSQL, these .log files can be downloaded through the Azure portal or the Azure CLI. The maximum storage for the collection of files is 1 GB. Each file is available for a maximum of seven days. The default is three days. This service is a short-term storage option.
-
-Alternatively, you can configure all logs to be sent to the Azure Monitor Logs store for later analytics in Log Analytics. If you enable Monitor resource logging, your logs are automatically sent in JSON format to Azure Storage, Azure Event Hubs, or Monitor Logs, depending on your choice.
-
-Enabling pgAudit generates a large volume of logging on a server, which affects performance and log storage. We recommend that you use Monitor Logs, which offers longer-term storage options and analysis and alerting features. Turn off standard logging to reduce the performance impact of additional logging:
-
- 1. Set the parameter `logging_collector` to **OFF**.
- 1. Restart the server to apply this change.
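-
-As a minimal sketch, the same two steps can be done with the Azure CLI; the server and resource group names here are placeholders:
-
-```azurecli
-# Turn off the standard logging collector (placeholder names).
-az postgres server configuration set --resource-group myresourcegroup --server-name mydemoserver --name logging_collector --value OFF
-
-# Restart the server so the change takes effect.
-az postgres server restart --resource-group myresourcegroup --name mydemoserver
-```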
-
-To learn how to set up logging to Storage, Event Hubs, or Monitor Logs, see the resource logs section of [Logs in Azure Database for PostgreSQL - Single Server](concepts-server-logs.md).
-
-## Install pgAudit
-
-To install pgAudit, you need to include it in the server's shared preloaded libraries. A change to the Postgres `shared_preload_libraries` parameter requires a server restart to take effect. You can change parameters by using the [portal](howto-configure-server-parameters-using-portal.md), the [CLI](howto-configure-server-parameters-using-cli.md), or the [REST API](/rest/api/postgresql/singleserver/configurations/createorupdate).
-
-To use the [portal](https://portal.azure.com):
-
- 1. Select your Azure Database for PostgreSQL server.
- 1. On the left, under **Settings**, select **Server parameters**.
- 1. Search for **shared_preload_libraries**.
- 1. Select **PGAUDIT**.
-
- :::image type="content" source="./media/concepts-audit/share-preload-parameter.png" alt-text="Screenshot that shows Azure Database for PostgreSQL enabling shared_preload_libraries for PGAUDIT.":::
-
- 1. Restart the server to apply the change.
- 1. Check that `pgaudit` is loaded in `shared_preload_libraries` by executing the following query in psql:
-
- ```SQL
- show shared_preload_libraries;
- ```
- You should see `pgaudit` in the list of libraries returned by the query.
-
- 1. Connect to your server by using a client like psql, and enable the pgAudit extension:
-
- ```SQL
- CREATE EXTENSION pgaudit;
- ```
-
-> [!TIP]
-> If you see an error, confirm that you restarted your server after you saved `shared_preload_libraries`.
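-
-If you prefer the CLI over the portal, the following sketch performs the equivalent steps; the server and resource group names are placeholders:
-
-```azurecli
-# Add pgaudit to the shared preloaded libraries (placeholder names).
-az postgres server configuration set --resource-group myresourcegroup --server-name mydemoserver --name shared_preload_libraries --value pgaudit
-
-# The shared_preload_libraries change only takes effect after a restart.
-az postgres server restart --resource-group myresourcegroup --name mydemoserver
-```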
-
-## pgAudit settings
-
-By using pgAudit, you can configure session or object audit logging. [Session audit logging](https://github.com/pgaudit/pgaudit/blob/master/README.md#session-audit-logging) emits detailed logs of executed statements. [Object audit logging](https://github.com/pgaudit/pgaudit/blob/master/README.md#object-audit-logging) is audit scoped to specific relations. You can choose to set up one or both types of logging.
-
-> [!NOTE]
-> The pgAudit settings are specified globally and can't be specified at a database or role level.
-
-After you [install pgAudit](#install-pgaudit), you can configure its parameters to start logging.
-
-To configure pgAudit, in the [portal](https://portal.azure.com):
-
- 1. Select your Azure Database for PostgreSQL server.
- 1. On the left, under **Settings**, select **Server parameters**.
- 1. Search for the **pgaudit** parameters.
- 1. Select appropriate settings parameters to edit. For example, to start logging, set **pgaudit.log** to **WRITE**.
-
- :::image type="content" source="./media/concepts-audit/pgaudit-config.png" alt-text="Screenshot that shows Azure Database for PostgreSQL configuring logging with pgAudit.":::
- 1. Select **Save** to save your changes.
-
-The [pgAudit documentation](https://github.com/pgaudit/pgaudit/blob/master/README.md#settings) provides the definition of each parameter. Test the parameters first, and confirm that you're getting the expected behavior. For example:
-
-- When the setting **pgaudit.log_client** is turned on, it redirects logs to a client process like psql instead of writing them to a file. In general, leave this setting disabled.
-- The parameter **pgaudit.log_level** is only enabled when **pgaudit.log_client** is on.
-
-> [!NOTE]
-> In Azure Database for PostgreSQL, **pgaudit.log** can't be set by using a minus-sign shortcut (`-`) as described in the pgAudit documentation. All required statement classes, such as READ and WRITE, should be individually specified.
-
-### Audit log format
-
-Each audit entry is indicated by `AUDIT:` near the beginning of the log line. The format of the rest of the entry is detailed in the [pgAudit documentation](https://github.com/pgaudit/pgaudit/blob/master/README.md#format).
-
-If you need any other fields to satisfy your audit requirements, use the Postgres parameter `log_line_prefix`. The string `log_line_prefix` is output at the beginning of every Postgres log line. For example, the following `log_line_prefix` setting provides timestamp, username, database name, and process ID:
-
-```
-t=%m u=%u db=%d pid=[%p]:
-```
-
-To learn more about `log_line_prefix`, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-LINE-PREFIX).
-
-### Get started
-
-To quickly get started, set **pgaudit.log** to **WRITE**. Then open your logs to review the output.
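-
-For example, here is a minimal CLI sketch that enables WRITE logging and then pulls the resulting .log files. The server and resource group names are placeholders, and the log file name comes from the list output:
-
-```azurecli
-# Start auditing WRITE statements (placeholder names).
-az postgres server configuration set --resource-group myresourcegroup --server-name mydemoserver --name pgaudit.log --value WRITE
-
-# List the available log files, then download one by name.
-az postgres server-logs list --resource-group myresourcegroup --server-name mydemoserver
-az postgres server-logs download --resource-group myresourcegroup --server-name mydemoserver --name postgresql-2021-01-28_000000.log
-```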
-
-## View audit logs
-
-If you're using .log files, your audit logs are included in the same file as your PostgreSQL error logs. You can download log files from the [portal](howto-configure-server-logs-in-portal.md) or the [CLI](howto-configure-server-logs-using-cli.md).
-
-If you're using Azure resource logging, the way you access the logs depends on which endpoint you choose. For Storage, see [Azure resource logs](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage). For Event Hubs, also see [Azure resource logs](../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs).
-
-For Monitor Logs, the logs are sent to the workspace you selected. The Postgres logs use the `AzureDiagnostics` collection mode, so they can be queried from the `AzureDiagnostics` table, as shown. To learn more about querying and alerting, see [Log queries in Azure Monitor](../azure-monitor/logs/log-query-overview.md).
-
-Use this query to get started. You can configure alerts based on queries.
-
-Search for all Postgres logs for a particular server in the last day:
-
-```kusto
-AzureDiagnostics
-| where LogicalServerName_s == "myservername"
-| where TimeGenerated > ago(1d)
-| where Message contains "AUDIT:"
-```
-
-## Next steps
-
-- [Learn about logging in Azure Database for PostgreSQL](concepts-server-logs.md).
-- Learn how to set parameters by using the [Azure portal](howto-configure-server-parameters-using-portal.md), the [Azure CLI](howto-configure-server-parameters-using-cli.md), or the [REST API](/rest/api/postgresql/singleserver/configurations/createorupdate).
postgresql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-azure-advisor-recommendations.md
- Title: Azure Advisor for PostgreSQL
-description: Learn about Azure Advisor recommendations for PostgreSQL.
-Previously updated : 04/08/2021
-# Azure Advisor for PostgreSQL
-Learn about how Azure Advisor is applied to Azure Database for PostgreSQL and get answers to common questions.
-## What is Azure Advisor for PostgreSQL?
-The Azure Advisor system uses telemetry to issue performance and reliability recommendations for your PostgreSQL database.
-Advisor recommendations are split among our PostgreSQL database offerings:
-* Azure Database for PostgreSQL - Single Server
-* Azure Database for PostgreSQL - Flexible Server
-* Azure Database for PostgreSQL - Hyperscale (Citus)
-
-Some recommendations are common to multiple product offerings, while other recommendations are based on product-specific optimizations.
-## Where can I view my recommendations?
-Recommendations are available from the **Overview** navigation sidebar in the Azure portal. A preview will appear as a banner notification, and details can be viewed in the **Notifications** section located just below the resource usage graphs.
--
-## Recommendation types
-Azure Database for PostgreSQL prioritizes the following types of recommendations:
-* **Performance**: To improve the speed of your PostgreSQL server. This includes CPU usage, memory pressure, connection pooling, disk utilization, and product-specific server parameters. For more information, see [Advisor Performance recommendations](../advisor/advisor-performance-recommendations.md).
-* **Reliability**: To ensure and improve the continuity of your business-critical databases. This includes storage limits, connection limits, and hyperscale data distribution recommendations. For more information, see [Advisor Reliability recommendations](../advisor/advisor-high-availability-recommendations.md).
-* **Cost**: To optimize and reduce your overall Azure spending. This includes server right-sizing recommendations. For more information, see [Advisor Cost recommendations](../advisor/advisor-cost-recommendations.md).
-
-## Understanding your recommendations
-* **Daily schedule**: For Azure PostgreSQL databases, we check server telemetry and issue recommendations on a daily schedule. If you make a change to your server configuration, existing recommendations will remain visible until we re-examine telemetry on the following day.
-* **Performance history**: Some of our recommendations are based on performance history. These recommendations will only appear after a server has been operating with the same configuration for 7 days. This allows us to detect patterns of heavy usage (e.g. high CPU activity or high connection volume) over a sustained time period. If you provision a new server or change to a new vCore configuration, these recommendations will be paused temporarily. This prevents legacy telemetry from triggering recommendations on a newly reconfigured server. However, this also means that performance history-based recommendations may not be identified immediately.
-
-## Next steps
-For more information, see [Azure Advisor Overview](../advisor/advisor-overview.md).
postgresql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-backup.md
- Title: Backup and restore - Azure Database for PostgreSQL - Single Server
-description: Learn about automatic backups and restoring your Azure Database for PostgreSQL server - Single Server.
-Previously updated : 11/08/2021
-# Backup and restore in Azure Database for PostgreSQL - Single Server
-
-Azure Database for PostgreSQL automatically creates server backups and stores them in user-configured locally redundant or geo-redundant storage. Backups can be used to restore your server to a point in time. Backup and restore are an essential part of any business continuity strategy because they protect your data from accidental corruption or deletion.
-
-## Backups
-
-Azure Database for PostgreSQL takes backups of the data files and the transaction log. Depending on the supported maximum storage size, we either take full and differential backups (4-TB max storage servers) or snapshot backups (up to 16-TB max storage servers). These backups allow you to restore a server to any point-in-time within your configured backup retention period. The default backup retention period is seven days. You can optionally configure it up to 35 days. All backups are encrypted using AES 256-bit encryption.
-
-These backup files cannot be exported. The backups can only be used for restore operations in Azure Database for PostgreSQL. You can use [pg_dump](howto-migrate-using-dump-and-restore.md) to copy a database.
-
-### Backup frequency
-
-#### Servers with up to 4-TB storage
-
-For servers which support up to 4-TB maximum storage, full backups occur once every week. Differential backups occur twice a day. Transaction log backups occur every five minutes.
--
-#### Servers with up to 16-TB storage
-
-In a subset of [Azure regions](./concepts-pricing-tiers.md#storage), all newly provisioned servers can support up to 16-TB storage. Backups on these large storage servers are snapshot-based. The first full snapshot backup is scheduled immediately after a server is created. That first full snapshot backup is retained as the server's base backup. Subsequent snapshot backups are differential backups only. Differential snapshot backups do not occur on a fixed schedule. In a day, three differential snapshot backups are performed. Transaction log backups occur every five minutes.
-
-> [!NOTE]
-> Automatic backups are performed for [replica servers](./concepts-read-replicas.md) that are configured with up to 4-TB storage.
-
-### Backup retention
-
-Backups are retained based on the backup retention period setting on the server. You can select a retention period of 7 to 35 days. The default retention period is 7 days. You can set the retention period during server creation or later by updating the backup configuration using [Azure portal](./howto-restore-server-portal.md#set-backup-configuration) or [Azure CLI](./howto-restore-server-cli.md#set-backup-configuration).
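-
-For example, here is a minimal CLI sketch that raises the retention period on an existing server (placeholder names):
-
-```azurecli
-# Set backup retention to 10 days.
-az postgres server update --resource-group myresourcegroup --name mydemoserver --backup-retention 10
-```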
-
-The backup retention period governs how far back in time a point-in-time restore can go, since it's based on available backups. The backup retention period can also be treated as a recovery window from a restore perspective. All backups required to perform a point-in-time restore within the backup retention period are retained in backup storage. For example, if the backup retention period is set to 7 days, the recovery window is the last 7 days. In this scenario, all the backups required to restore the server in the last 7 days are retained. With a backup retention window of seven days:
-- Servers with up to 4-TB storage will retain up to 2 full database backups, all the differential backups, and transaction log backups performed since the earliest full database backup.
-- Servers with up to 16-TB storage will retain the full database snapshot, all the differential snapshots, and transaction log backups in the last 8 days.
-
-### Backup redundancy options
-
-Azure Database for PostgreSQL provides the flexibility to choose between locally redundant or geo-redundant backup storage in the General Purpose and Memory Optimized tiers. When the backups are stored in geo-redundant backup storage, they are not only stored within the region in which your server is hosted, but are also replicated to a [paired data center](../availability-zones/cross-region-replication-azure.md). This provides better protection and the ability to restore your server in a different region in the event of a disaster. The Basic tier only offers locally redundant backup storage.
-
-> [!IMPORTANT]
-> Configuring locally redundant or geo-redundant storage for backup is only allowed during server create. Once the server is provisioned, you cannot change the backup storage redundancy option.
-
-### Backup storage cost
-
-Azure Database for PostgreSQL provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any additional backup storage used is charged in GB per month. For example, if you have provisioned a server with 250 GB of storage, you have 250 GB of additional storage available for server backups at no additional charge. Storage consumed for backups beyond 250 GB is charged per the [pricing model](https://azure.microsoft.com/pricing/details/postgresql/).
-
-You can use the [Backup Storage used](concepts-monitoring.md) metric in Azure Monitor available in the Azure portal to monitor the backup storage consumed by a server. The Backup Storage used metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained earlier. Heavy transactional activity on the server can cause backup storage usage to increase irrespective of the total database size. For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.
-
-The primary means of controlling the backup storage cost is by setting the appropriate backup retention period and choosing the right backup redundancy options to meet your desired recovery goals. You can select a retention period from a range of 7 to 35 days. General Purpose and Memory Optimized servers can choose to have geo-redundant storage for backups.
-
-## Restore
-
-In Azure Database for PostgreSQL, performing a restore creates a new server from the original server's backups.
-
-There are two types of restore available:
-
-- **Point-in-time restore** is available with either backup redundancy option and creates a new server in the same region as your original server.
-- **Geo-restore** is available only if you configured your server for geo-redundant storage, and it allows you to restore your server to a different region.
-
-The estimated time of recovery depends on several factors, including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time. The recovery time varies depending on the last data backup and the amount of recovery that needs to be performed. It is usually less than 12 hours.
-
-> [!NOTE]
-> If your source PostgreSQL server is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption-postgresql.md) for additional considerations.
-
-> [!NOTE]
-> If you want to restore a deleted PostgreSQL server, follow the procedure documented [here](howto-restore-dropped-server.md).
-
-### Point-in-time restore
-
-Independent of your backup redundancy option, you can perform a restore to any point in time within your backup retention period. A new server is created in the same Azure region as the original server. It is created with the original server's configuration for the pricing tier, compute generation, number of vCores, storage size, backup retention period, and backup redundancy option.
-
-Point-in-time restore is useful in multiple scenarios. For example, when a user accidentally deletes data, drops an important table or database, or if an application accidentally overwrites good data with bad data due to an application defect.
-
-You may need to wait for the next transaction log backup to be taken before you can restore to a point in time within the last five minutes.
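-
-As a sketch, a point-in-time restore with the Azure CLI looks like the following; the server names and timestamp are placeholders:
-
-```azurecli
-# Restore mydemoserver to a new server at a specific point in time (UTC).
-az postgres server restore --resource-group myresourcegroup --name mydemoserver-restored --source-server mydemoserver --restore-point-in-time "2021-11-08T13:10:00Z"
-```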
-
-If you want to restore a dropped table:
-1. Restore the source server using the point-in-time method.
-2. Dump the table using `pg_dump` from the restored server.
-3. Rename the source table on the original server.
-4. Import the table using the psql command line on the original server.
-5. Optionally, delete the restored server.
-
->[!Note]
-> It is recommended not to create multiple restores for the same server at the same time.
-
-### Geo-restore
-
-You can restore a server to another Azure region where the service is available if you have configured your server for geo-redundant backups. Servers that support up to 4 TB of storage can be restored to the geo-paired region, or to any region that supports up to 16 TB of storage. Servers that support up to 16 TB of storage can likewise have their geo-backups restored in any region that supports 16-TB servers. Review [Azure Database for PostgreSQL pricing tiers](concepts-pricing-tiers.md) for the list of supported regions.
-
-Geo-restore is the default recovery option when your server is unavailable because of an incident in the region where the server is hosted. If a large-scale incident in a region results in unavailability of your database application, you can restore a server from the geo-redundant backups to a server in any other region. There is a delay between when a backup is taken and when it is replicated to a different region. This delay can be up to an hour, so, if a disaster occurs, there can be up to one hour of data loss.
-
-During geo-restore, the server configurations that can be changed include compute generation, vCore, backup retention period, and backup redundancy options. Changing pricing tier (Basic, General Purpose, or Memory Optimized) or storage size is not supported.
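-
-Here is a minimal CLI sketch of a geo-restore; the server names and target region are placeholders:
-
-```azurecli
-# Geo-restore mydemoserver into another region from its geo-redundant backups.
-az postgres server georestore --resource-group myresourcegroup --name mydemoserver-georestored --source-server mydemoserver --location westus2
-```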
-
-> [!NOTE]
-> If your source server uses infrastructure double encryption, for restoring the server, there are limitations including available regions. Please see the [infrastructure double encryption](concepts-infrastructure-double-encryption.md) for more details.
-
-### Perform post-restore tasks
-
-After a restore from either recovery mechanism, you should perform the following tasks to get your users and applications back up and running:
-
-- To access the restored server, which has a different name than the original server, change the server name to the restored server name and the user name to `username@new-restored-server-name` in your connection string.
-- If the new server is meant to replace the original server, redirect clients and client applications to the new server.
-- Ensure appropriate server-level firewall and VNet rules are in place for users to connect. These rules are not copied over from the original server.
-- Ensure appropriate logins and database-level permissions are in place.
-- Configure alerts, as appropriate.
-
-## Next steps
-
-- Learn how to restore using [the Azure portal](howto-restore-server-portal.md).
-- Learn how to restore using [the Azure CLI](howto-restore-server-cli.md).
-- To learn more about business continuity, see the [business continuity overview](concepts-business-continuity.md).
postgresql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-business-continuity.md
- Title: Business continuity - Azure Database for PostgreSQL - Single Server
-description: This article describes business continuity (point in time restore, data center outage, geo-restore, replicas) when using Azure Database for PostgreSQL.
-Previously updated : 08/07/2020
-# Overview of business continuity with Azure Database for PostgreSQL - Single Server
-
-This overview describes the capabilities that Azure Database for PostgreSQL provides for business continuity and disaster recovery. Learn about options for recovering from disruptive events that could cause data loss or cause your database and application to become unavailable. Learn what to do when a user or application error affects data integrity, an Azure region has an outage, or your application requires maintenance.
-
-## Features that you can use to provide business continuity
-
-As you develop your business continuity plan, you need to understand the maximum acceptable time before the application fully recovers after the disruptive event - this is your Recovery Time Objective (RTO). You also need to understand the maximum amount of recent data updates (time interval) the application can tolerate losing when recovering after the disruptive event - this is your Recovery Point Objective (RPO).
-
-Azure Database for PostgreSQL provides business continuity features that include geo-redundant backups with the ability to initiate geo-restore, and deploying read replicas in a different region. Each has different characteristics for recovery time and potential data loss. With the [geo-restore](concepts-backup.md) feature, a new server is created using the backup data that is replicated from another region. The overall time it takes to restore and recover depends on the size of the database and the amount of logs to recover. The overall time to establish the server varies from a few minutes to a few hours. With [read replicas](concepts-read-replicas.md), transaction logs from the primary are asynchronously streamed to the replica. In the event of a primary database outage due to a zone-level or a region-level fault, failing over to the replica provides a shorter RTO and reduced data loss.
-
-> [!NOTE]
-> The lag between the primary and the replica depends on the latency between the sites, the amount of data to be transmitted and most importantly on the write workload of the primary server. Heavy write workloads can generate significant lag.
->
-> Because of the asynchronous nature of replication used for read replicas, they **should not** be considered a high availability (HA) solution, since higher lag can mean higher RTO and RPO. Only for workloads where the lag remains small through peak and off-peak times can read replicas act as an HA alternative. Otherwise, read replicas are intended for true read scale in read-heavy workloads and for disaster recovery (DR) scenarios.
-
-The following table compares RTO and RPO in a **typical workload** scenario:
-
-| **Capability** | **Basic** | **General Purpose** | **Memory optimized** |
-| :: | :-: | :--: | :: |
-| Point in Time Restore from backup | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min| Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min |
-| Geo-restore from geo-replicated backups | Not supported | RTO - Varies <br/>RPO < 1 h | RTO - Varies <br/>RPO < 1 h |
-| Read replicas | RTO - Minutes* <br/>RPO < 5 min* | RTO - Minutes* <br/>RPO < 5 min*| RTO - Minutes* <br/>RPO < 5 min*|
-
- \* RTO and RPO **can be much higher** in some cases depending on various factors including latency between sites, the amount of data to be transmitted, and importantly primary database write workload.
-
-## Recover a server after a user or application error
-
-You can use the service's backups to recover a server from various disruptive events. A user may accidentally delete some data, inadvertently drop an important table, or even drop an entire database. An application might accidentally overwrite good data with bad data due to an application defect, and so on.
-
-You can perform a **point-in-time-restore** to create a copy of your server to a known good point in time. This point in time must be within the backup retention period you have configured for your server. After the data is restored to the new server, you can either replace the original server with the newly restored server or copy the needed data from the restored server into the original server.
-
-We recommend that you use [Azure resource lock](../azure-resource-manager/management/lock-resources.md) to help prevent accidental deletion of your server. If you accidentally deleted your server, you might be able to restore it if the deletion happened within the last 5 days. For more information, see [Restore a dropped Azure Database for PostgreSQL server](howto-restore-dropped-server.md).
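-
-As an illustration, a CanNotDelete lock can be placed on the server with the Azure CLI; the lock, server, and resource group names are placeholders:
-
-```azurecli
-# Prevent accidental deletion of the server.
-az lock create --name prevent-delete --resource-group myresourcegroup --lock-type CanNotDelete --resource-name mydemoserver --resource-type Microsoft.DBforPostgreSQL/servers
-```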
-
-## Recover from an Azure data center outage
-
-Although rare, an Azure data center can have an outage. When an outage occurs, it causes a business disruption that might only last a few minutes, but could last for hours.
-
-One option is to wait for your server to come back online when the data center outage is over. This works for applications that can afford to have the server offline for some period of time, for example a development environment. When a data center has an outage, you do not know how long the outage might last, so this option only works if you don't need your server for a while.
-
-## Geo-restore
-
-The geo-restore feature restores the server using geo-redundant backups. The backups are hosted in your server's [paired region](../availability-zones/cross-region-replication-azure.md). You can restore from these backups to any other region. The geo-restore creates a new server with the data from the backups. Learn more about geo-restore from the [backup and restore concepts article](concepts-backup.md).
-
-> [!IMPORTANT]
-> Geo-restore is only possible if you provisioned the server with geo-redundant backup storage. If you wish to switch from locally redundant to geo-redundant backups for an existing server, you must take a dump using pg_dump of your existing server and restore it to a newly created server configured with geo-redundant backups.
-
-## Cross-region read replicas
-You can use cross-region read replicas to enhance your business continuity and disaster recovery planning. Read replicas are updated asynchronously using PostgreSQL's physical replication technology, and may lag behind the primary. Learn more about read replicas, available regions, and how to fail over from the [read replicas concepts article](concepts-read-replicas.md).
-
-## FAQ
-### Where does Azure Database for PostgreSQL store customer data?
-By default, Azure Database for PostgreSQL doesn't move or store customer data out of the region it is deployed in. However, customers can optionally choose to enable [geo-redundant backups](concepts-backup.md#backup-redundancy-options) or create a [cross-region read replica](concepts-read-replicas.md#cross-region-replication) for storing data in another region.
--
-## Next steps
-- Learn more about the [automated backups in Azure Database for PostgreSQL](concepts-backup.md).
-- Learn how to restore using [the Azure portal](howto-restore-server-portal.md) or [the Azure CLI](howto-restore-server-cli.md).
-- Learn about [read replicas in Azure Database for PostgreSQL](concepts-read-replicas.md).
postgresql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-certificate-rotation.md
- Title: Certificate rotation for Azure Database for PostgreSQL Single server
-description: Learn about the upcoming changes of root certificate changes that will affect Azure Database for PostgreSQL Single server
-Previously updated : 09/02/2020
-# Understanding the changes in the Root CA change for Azure Database for PostgreSQL Single server
-
-Azure Database for PostgreSQL Single Server successfully completed the root certificate change on **February 15, 2021 (02/15/2021)** as part of standard maintenance and security best practices. This article gives you more details about the changes, the resources affected, and the steps needed to ensure that your application maintains connectivity to your database server.
-
-## Why root certificate update is required?
-
-Azure Database for PostgreSQL users can only use the predefined certificate to connect to their PostgreSQL server, which is located [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem). However, the [Certificate Authority (CA) Browser forum](https://cabforum.org/) recently published reports of multiple certificates issued by CA vendors being non-compliant.
-
-As per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs and signed by CA certificates from those compliant CAs. Since Azure Database for PostgreSQL used one of these non-compliant certificates, we needed to rotate the certificate to the compliant version to minimize the potential threat to your PostgreSQL servers.
-
-The new certificate is rolled out and in effect starting February 15, 2021 (02/15/2021).
-
-## What change was performed on February 15, 2021 (02/15/2021)?
-
-On February 15, 2021, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) was replaced with a **compliant version** of the same [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) to ensure existing customers do not need to change anything and there is no impact to their connections to the server. During this change, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) was **not replaced** with [DigiCertGlobalRootG2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and that change is deferred to allow more time for customers to make the change.
-
-## Do I need to make any changes on my client to maintain connectivity?
-
-There is no change required on the client side. If you followed our previous recommendation below, you will still be able to continue to connect as long as the **BaltimoreCyberTrustRoot certificate is not removed** from the combined CA certificate. **We recommend not removing the BaltimoreCyberTrustRoot from your combined CA certificate until further notice to maintain connectivity.**
-
-### Previous Recommendation
-
-* Download BaltimoreCyberTrustRoot & DigiCertGlobalRootG2 Root CA from links below:
- * https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem
- * https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem
-
-* Generate a combined CA certificate store with both **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** certificates included.
- * For Java (PostgreSQL JDBC) users using DefaultJavaSSLFactory, execute:
-
- ```console
- keytool -importcert -alias PostgreSQLServerCACert -file D:\BaltimoreCyberTrustRoot.crt.pem -keystore truststore -storepass password -noprompt
- ```
-
- ```console
- keytool -importcert -alias PostgreSQLServerCACert2 -file D:\DigiCertGlobalRootG2.crt.pem -keystore truststore -storepass password -noprompt
- ```
-
- Then replace the original keystore file with the newly generated one:
- * `System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file");`
- * `System.setProperty("javax.net.ssl.trustStorePassword","password");`
-
- * For .NET (Npgsql) users on Windows, make sure **Baltimore CyberTrust Root** and **DigiCert Global Root G2** both exist in Windows Certificate Store, Trusted Root Certification Authorities. If any certificates do not exist, import the missing certificate.
-
- ![Azure Database for PostgreSQL .net cert](media/overview/netconnecter-cert.png)
-
- * For .NET (Npgsql) users on Linux using SSL_CERT_DIR, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in the directory indicated by SSL_CERT_DIR. If any certificates do not exist, create the missing certificate file.
-
- * For other PostgreSQL client users, you can merge the two CA certificate files into the format below:
-
-
-   -----BEGIN CERTIFICATE-----
-   (Root CA1: BaltimoreCyberTrustRoot.crt.pem)
-   -----END CERTIFICATE-----
-   -----BEGIN CERTIFICATE-----
-   (Root CA2: DigiCertGlobalRootG2.crt.pem)
-   -----END CERTIFICATE-----
-
-* Replace the original root CA pem file with the combined root CA file and restart your application/client.
-* In the future, after the new certificate is deployed on the server side, you can change your CA pem file to DigiCertGlobalRootG2.crt.pem.
-
-> [!NOTE]
-> Please do not drop or alter the **Baltimore certificate** until the certificate change is made. We will send a communication once the change is done, after which it is safe for you to drop the Baltimore certificate.
-
-## Why was BaltimoreCyberTrustRoot certificate not replaced to DigiCertGlobalRootG2 during this change on February 15, 2021?
-
-We evaluated customer readiness for this change and realized many customers were looking for additional lead time to manage it. In the interest of providing more lead time, we have decided to defer the certificate change to DigiCertGlobalRootG2 for at least a year, giving customers and end users sufficient time to prepare.
-
-Our recommendation is to use the aforementioned steps to create a combined certificate and connect to your server, but do not remove the BaltimoreCyberTrustRoot certificate until we send a communication to remove it.
-
-## What if we removed the BaltimoreCyberTrustRoot certificate?
-
-You will start to see connectivity errors while connecting to your Azure Database for PostgreSQL server. You will need to configure SSL with the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate again to maintain connectivity.
--
-## Frequently asked questions
-
-### 1. If I am not using SSL/TLS, do I still need to update the root CA?
-No action is required if you are not using SSL/TLS.
-
-### 2. If I am using SSL/TLS, do I need to restart my database server to update the root CA?
-No, you do not need to restart the database server to start using the new certificate. This is a client-side change and the incoming client connections need to use the new certificate to ensure that they can connect to the database server.
-
-### 3. How do I know if I'm using SSL/TLS with root certificate verification?
-
-You can identify whether your connections verify the root certificate by reviewing your connection string.
-
-- If your connection string includes `sslmode=verify-ca` or `sslmode=verify-full`, you need to update the certificate.
-- If your connection string includes `sslmode=disable`, `sslmode=allow`, `sslmode=prefer`, or `sslmode=require`, you do not need to update certificates.
-- If your connection string does not specify sslmode, you do not need to update certificates.
-
-If you are using a client that abstracts the connection string away, review the client's documentation to understand whether it verifies certificates. To understand PostgreSQL sslmode review the [SSL mode descriptions](https://www.postgresql.org/docs/11/libpq-ssl.html#ssl-mode-descriptions) in PostgreSQL documentation.
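-
-For example, with psql the difference is visible directly in the connection string. The server and user names below are placeholders:
-
-```console
-# Verifies the root certificate: affected by the certificate change.
-psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin@mydemoserver sslmode=verify-full sslrootcert=BaltimoreCyberTrustRoot.crt.pem"
-
-# Encrypts the connection but does not verify the certificate: no update needed.
-psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin@mydemoserver sslmode=require"
-```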
-
-### 4. What is the impact if using App Service with Azure Database for PostgreSQL?
-For Azure App Services connecting to Azure Database for PostgreSQL, there are two possible scenarios, depending on how you are using SSL with your application.
-* This new certificate has been added to App Service at platform level. If you are using the SSL certificates included on App Service platform in your application, then no action is needed.
-* If you are explicitly including the path to the SSL cert file in your code, you need to download the new cert and update the code to use it. A good example of this scenario is when you use custom containers in App Service, as shown in the [App Service documentation](../app-service/tutorial-multi-container-app.md#configure-database-variables-in-wordpress).
-
-### 5. What is the impact if using Azure Kubernetes Services (AKS) with Azure Database for PostgreSQL?
-If you are trying to connect to Azure Database for PostgreSQL using Azure Kubernetes Service (AKS), it is similar to accessing it from a dedicated customer host environment. Refer to the steps [here](../aks/ingress-own-tls.md).
-
-### 6. What is the impact if using Azure Data Factory to connect to Azure Database for PostgreSQL?
-For connectors using the Azure Integration Runtime, the connector uses certificates from the Windows Certificate Store of the Azure-hosted environment. These certificates are already compatible with the newly applied certificates, so no action is needed.
-
-For connectors using a Self-hosted Integration Runtime, where you explicitly include the path to the SSL cert file in your connection string, you will need to download the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and update the connection string to use it.
-
-### 7. Do I need to plan a database server maintenance downtime for this change?
-No. Since the change here is only on the client side to connect to the database server, there is no maintenance downtime needed for the database server for this change.
-
-### 8. If I create a new server after February 15, 2021 (02/15/2021), will I be impacted?
-For servers created after February 15, 2021 (02/15/2021), you will continue to use the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) for your applications to connect using SSL.
-
-### 9. How often does Microsoft update their certificates or what is the expiry policy?
-The certificates used by Azure Database for PostgreSQL are provided by trusted Certificate Authorities (CAs), so support for these certificates is tied to their support by the CA. The [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate is scheduled to expire in 2025, so Microsoft will need to perform a certificate change before the expiry. Also, if there are unforeseen bugs in these predefined certificates, Microsoft will need to rotate the certificate as quickly as possible, similar to the change performed on February 15, 2021, to ensure the service is secure and compliant at all times.
-
-### 10. If I am using read replicas, do I need to perform this update only on the primary server or the read replicas?
-Since this update is a client-side change, if clients read data from the replica server, you will need to apply the changes on those clients as well.
-
-### 11. Do we have server-side query to verify if SSL is being used?
-To verify whether you are using an SSL connection to connect to the server, see [SSL verification](concepts-ssl-connection-security.md#applications-that-require-certificate-verification-for-tls-connectivity).
-
-### 12. Is there an action needed if I already have the DigiCertGlobalRootG2 in my certificate file?
-No. There is no action needed if your certificate file already has the **DigiCertGlobalRootG2**.
-
-### 13. What if you are using the docker image of the PgBouncer sidecar provided by Microsoft?
-A new docker image that supports both [**Baltimore**](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) and [**DigiCert**](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) certificates is published [here](https://hub.docker.com/_/microsoft-azure-oss-db-tools-pgbouncer-sidecar) (latest tag). You can pull this new image to avoid any interruption in connectivity starting February 15, 2021.
-
-### 14. What if I have further questions?
-If you have questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com). If you have a support plan and you need technical help, [contact us](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com).
postgresql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-connectivity-architecture.md
- Title: Connectivity architecture - Azure Database for PostgreSQL - Single Server
-description: Describes the connectivity architecture of your Azure Database for PostgreSQL - Single Server.
-Previously updated : 10/15/2021
-# Connectivity architecture in Azure Database for PostgreSQL
-This article explains the Azure Database for PostgreSQL connectivity architecture as well as how the traffic is directed to your Azure Database for PostgreSQL database instance from clients both within and outside Azure.
-
-## Connectivity architecture
-Connection to your Azure Database for PostgreSQL is established through a gateway that is responsible for routing incoming connections to the physical location of your server in our clusters. The following diagram illustrates the traffic flow.
---
-As the client connects to the database, the connection string to the server resolves to the gateway IP address. The gateway listens on the IP address on port 5432. Inside the database cluster, traffic is forwarded to the appropriate Azure Database for PostgreSQL server. Therefore, in order to connect to your server, such as from corporate networks, it is necessary to open up the **client-side firewall to allow outbound traffic to be able to reach our gateways**. Below you can find a complete list of the IP addresses used by our gateways per region.
-
-## Azure Database for PostgreSQL gateway IP addresses
-
-The gateway service is hosted on a group of stateless compute nodes sitting behind an IP address, which your client reaches first when trying to connect to an Azure Database for PostgreSQL server.
-
-As part of ongoing service maintenance, we will periodically refresh the compute hardware hosting the gateways to ensure we provide the most secure and performant connectivity experience. When the gateway hardware is refreshed, a new ring of compute nodes is built out first. This new ring serves the traffic for all newly created Azure Database for PostgreSQL servers, and it has a different IP address from older gateway rings in the same region to differentiate the traffic. The older gateway hardware continues serving existing servers but is planned for decommissioning in the future. Before decommissioning gateway hardware, customers running their servers and connecting to older gateway rings will be notified via email and in the Azure portal three months in advance. The decommissioning of gateways can impact connectivity to your servers if:
-
-* You hard-code the gateway IP addresses in the connection string of your application. This is **not recommended**. You should use the fully qualified domain name (FQDN) of your server, in the format `<servername>.postgres.database.azure.com`, in the connection string for your application.
-* You do not update the newer gateway IP addresses in the client-side firewall to allow outbound traffic to be able to reach our new gateway rings.
-
-The following table lists the gateway IP addresses of the Azure Database for PostgreSQL gateway for all data regions, and is kept up to date with the most current information for each region. The columns represent the following:
-
-* **Gateway IP addresses:** This column lists the current IP addresses of the gateways hosted on the latest generation of hardware. If you are provisioning a new server, we recommend that you open the client-side firewall to allow outbound traffic for the IP addresses listed in this column.
-* **Gateway IP addresses (decommissioning):** This column lists the IP addresses of the gateways hosted on an older generation of hardware that is currently being decommissioned. If you are provisioning a new server, you can ignore these IP addresses. If you have an existing server, retain the outbound firewall rule for these IP addresses, because they have not been decommissioned yet. If you drop the firewall rules for these IP addresses, you may get connectivity errors. Instead, proactively add the new IP addresses listed in the Gateway IP addresses column to the outbound firewall rule as soon as you receive the decommissioning notification. This ensures that when your server is migrated to the latest gateway hardware, there are no interruptions in connectivity.
-* **Gateway IP addresses (decommissioned):** This column lists the IP addresses of the gateway rings that are decommissioned and no longer in operation. You can safely remove these IP addresses from your outbound firewall rule.
--
-| **Region name** | **Gateway IP addresses** |**Gateway IP addresses (decommissioning)** | **Gateway IP addresses (decommissioned)** |
-|:-|:-|:-|:|
-| Australia Central| 20.36.105.0 | | |
-| Australia Central2 | 20.36.113.0 | | |
-| Australia East | 13.75.149.87, 40.79.161.1 | | |
-| Australia South East |13.77.48.10, 13.77.49.32, 13.73.109.251 | | |
-| Brazil South |191.233.201.8, 191.233.200.16 | | 104.41.11.5|
-| Canada Central |40.85.224.249, 52.228.35.221 | | |
-| Canada East | 40.86.226.166, 52.242.30.154 | | |
-| Central US | 23.99.160.139, 52.182.136.37, 52.182.136.38 | 13.67.215.62 | |
-| China East | 139.219.130.35 | | |
| China East 2 | 40.73.82.1, 52.130.120.89 | | |
| China East 3 | 52.131.155.192 | | |
-| China North | 139.219.15.17 | | |
-| China North 2 | 40.73.50.0 | | |
-| China North 3 | 52.131.27.192 | | |
-| East Asia | 13.75.33.20, 52.175.33.150, 13.75.33.20, 13.75.33.21 | | |
-| East US |40.71.8.203, 40.71.83.113 |40.121.158.30|191.238.6.43 |
-| East US 2 | 40.70.144.38, 52.167.105.38 | 52.177.185.181 | |
-| France Central | 40.79.137.0, 40.79.129.1 | | |
-| France South | 40.79.177.0 | | |
-| Germany Central | 51.4.144.100 | | |
| Germany North | 51.116.56.0 | | |
| Germany North East | 51.5.144.179 | | |
| Germany West Central | 51.116.152.0 | | |
-| India Central | 104.211.96.159 | | |
-| India South | 104.211.224.146 | | |
-| India West | 104.211.160.80 | | |
-| Japan East | 40.79.192.23, 40.79.184.8 | 13.78.61.196 | |
-| Japan West | 104.214.148.156, 40.74.96.6, 40.74.96.7 | 104.214.148.156 | |
-| Korea Central | 52.231.17.13 | 52.231.32.42 | |
-| Korea South | 52.231.145.3 | 52.231.151.97 | |
-| North Central US | 52.162.104.35, 52.162.104.36 | 23.96.178.199 | |
-| North Europe | 52.138.224.6, 52.138.224.7 | 40.113.93.91 |191.235.193.75 |
-| South Africa North | 102.133.152.0 | | |
-| South Africa West | 102.133.24.0 | | |
-| South Central US |104.214.16.39, 20.45.120.0 |13.66.62.124 |23.98.162.75 |
-| South East Asia | 40.78.233.2, 23.98.80.12 | 104.43.15.0 | |
| Switzerland North | 51.107.56.0 | | |
| Switzerland West | 51.107.152.0 | | |
-| UAE Central | 20.37.72.64 | | |
-| UAE North | 65.52.248.0 | | |
-| UK South | 51.140.184.11, 51.140.144.32, 51.105.64.0 | | |
-| UK West | 51.141.8.11 | | |
-| West Central US | 13.78.145.25, 52.161.100.158 | | |
-| West Europe |13.69.105.208, 104.40.169.187 | 40.68.37.158 | 191.237.232.75 |
-| West US |13.86.216.212, 13.86.217.212 |104.42.238.205 | 23.99.34.75|
-| West US 2 | 13.66.226.202, 13.66.136.192,13.66.136.195 | | |
-| West US 3 | 20.150.184.2 | | |
-||||
-
-## Frequently asked questions
-
-### What do you need to know about this planned maintenance?
-This is a DNS change only, which makes it transparent to clients. While the IP address for the FQDN is changed in the DNS server, the local DNS cache will be refreshed within 5 minutes, which is done automatically by the operating system. After the local DNS refresh, all new connections will connect to the new IP address, and all existing connections will remain connected to the old IP address with no interruption until the old IP addresses are fully decommissioned. The old IP address will take roughly three to four weeks before getting decommissioned; therefore, this change should have no effect on client applications.
-
-### What are we decommissioning?
-Only gateway nodes will be decommissioned. When users connect to their servers, the first stop of the connection is the gateway node, before the connection is forwarded to the server. We are decommissioning old gateway rings (not tenant rings where the server is running); refer to the [connectivity architecture](#connectivity-architecture) section for more clarification.
-
-### How can you validate if your connections are going to old gateway nodes or new gateway nodes?
-Ping your server's FQDN, for example ``ping xxx.postgres.database.azure.com``. If the returned IP address is one of the IPs listed under Gateway IP addresses (decommissioning) in the table above, it means your connection is going through the old gateway. Conversely, if the returned IP address is one of the IPs listed under Gateway IP addresses, it means your connection is going through the new gateway.
-
-You may also test by [PSPing](/sysinternals/downloads/psping) or TCPPing the database server from your client application with port 5432 and ensure that the returned IP address isn't one of the decommissioning IP addresses.
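-
-For example, from a client machine (the server name is a placeholder), compare the resolved IP address against the table above:
-
-```console
-ping mydemoserver.postgres.database.azure.com
-psping mydemoserver.postgres.database.azure.com:5432
-```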
-
-### How do I know when the maintenance is over and will I get another notification when old IP addresses are decommissioned?
-You will receive an email to inform you when we start the maintenance work. The maintenance can take up to one month depending on the number of servers we need to migrate in all regions. Prepare your client to connect to the database server using the FQDN or using the new IP address from the table above.
-
-### What do I do if my client applications are still connecting to the old gateway server?
-This indicates that your applications connect to the server using a static IP address instead of the FQDN. Review your connection strings, connection pooling settings, AKS settings, and even the source code.
-
-### Is there any impact for my application connections?
-This maintenance is just a DNS change, so it is transparent to the client. Once the DNS cache is refreshed in the client (done automatically by the operating system), all new connections will go to the new IP address, and all existing connections will keep working until the old IP address is fully decommissioned, usually several weeks later. Retry logic is not required for this case, but it is good for the application to have retry logic configured. Please either use the FQDN to connect to the database server or allowlist the new 'Gateway IP addresses' in your application connection string.
-This maintenance operation will not drop existing connections. It only makes new connection requests go to the new gateway ring.
-
-### Can I request for a specific time window for the maintenance?
-As the migration should be transparent, with no impact on customers' connectivity, we expect there will be no issue for the majority of users. Review your application proactively and ensure that you either use the FQDN to connect to the database server or allowlist the new 'Gateway IP addresses' in your application connection string.
-
-### I am using private link, will my connections get affected?
-No. This is a gateway hardware decommissioning and has no relation to Private Link or private IP addresses; it will only affect the public IP addresses mentioned under the decommissioning IP addresses.
---
-## Next steps
-
-* [Create and manage Azure Database for PostgreSQL firewall rules using the Azure portal](./howto-manage-firewall-using-portal.md)
-* [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](./howto-manage-firewall-using-cli.md)
postgresql Concepts Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-connectivity.md
- Title: Handle transient connectivity errors - Azure Database for PostgreSQL - Single Server
-description: Learn how to handle transient connectivity errors for Azure Database for PostgreSQL - Single Server.
-Previously updated : 5/6/2019
-# Handling transient connectivity errors for Azure Database for PostgreSQL - Single Server
-
-This article describes how to handle transient errors connecting to Azure Database for PostgreSQL.
-
-## Transient errors
-
-A transient error, also known as a transient fault, is an error that will resolve itself. Most typically, these errors manifest as a dropped connection to the database server, or as new connections to a server that can't be opened. Transient errors can occur, for example, when a hardware or network failure happens. Another reason could be a new version of a PaaS service being rolled out. Most of these events are automatically mitigated by the system in less than 60 seconds. A best practice for designing and developing applications in the cloud is to expect transient errors: assume they can happen in any component at any time, and have the appropriate logic in place to handle these situations.
-
-## Handling transient errors
-
-Transient errors should be handled using retry logic. Situations that must be considered:
-
-* An error occurs when you try to open a connection
-* An idle connection is dropped on the server side. When you try to issue a command it can't be executed
-* An active connection that currently is executing a command is dropped.
-
-The first and second case are fairly straightforward to handle. Try to open the connection again. When you succeed, the transient error has been mitigated by the system, and you can use your Azure Database for PostgreSQL again. We recommend waiting before retrying the connection, and backing off if the initial retries fail. This way the system can use all resources available to overcome the error situation. A good pattern to follow (see the sketch after this list) is:
-
-* Wait for 5 seconds before your first retry.
-* For each following retry, increase the wait exponentially, up to 60 seconds.
-* Set a max number of retries at which point your application considers the operation failed.
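The following is a minimal sketch of this back-off pattern, assuming the psycopg2 driver and a hypothetical `get_connection_params()` helper that returns your usual connection keyword arguments; the exact waits and retry count should be tuned for your workload.

````python
import time
import psycopg2

def connect_with_retry(max_retries=5, first_wait=5, max_wait=60):
    """Open a connection, retrying transient failures with exponential back-off."""
    wait = first_wait
    for attempt in range(1, max_retries + 1):
        try:
            return psycopg2.connect(**get_connection_params())  # hypothetical helper
        except psycopg2.OperationalError:
            if attempt == max_retries:
                raise  # give up: consider the operation failed
            time.sleep(wait)                # wait before the next retry
            wait = min(wait * 2, max_wait)  # increase exponentially, cap at 60s
````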
-
-When a connection with an active transaction fails, it is more difficult to handle the recovery correctly. There are two cases: if the transaction was read-only in nature, it is safe to reopen the connection and retry the transaction. If, however, the transaction was also writing to the database, you must determine whether the transaction was rolled back or whether it succeeded before the transient error happened; in the latter case, you might just not have received the commit acknowledgment from the database server.
-
-One way of doing this is to generate a unique ID on the client that is used for all the retries. You pass this unique ID as part of the transaction to the server and store it in a column with a unique constraint. This way you can safely retry the transaction: it will succeed if the previous transaction was rolled back and the client-generated unique ID does not yet exist in the system, and it will fail with a duplicate key violation if the unique ID was previously stored because the previous transaction completed successfully. A sketch of this technique follows.
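Here is a minimal sketch of the client-generated unique ID technique, assuming a hypothetical `orders` table with a `UNIQUE` constraint on a `client_request_id` column; the table, column names, and driver (psycopg2) are illustrative, not prescribed by the service.

````python
import uuid
import psycopg2

# One ID is generated up front and reused across all retries of this transaction.
request_id = str(uuid.uuid4())

def insert_order(conn, amount):
    # Assumes: CREATE TABLE orders (id serial PRIMARY KEY,
    #                               client_request_id uuid UNIQUE,
    #                               amount numeric);
    with conn, conn.cursor() as cur:  # "with conn" commits on success
        cur.execute(
            "INSERT INTO orders (client_request_id, amount) VALUES (%s, %s)",
            (request_id, amount),
        )

# On retry after a dropped connection, psycopg2.errors.UniqueViolation means
# the earlier attempt already committed, so the retry can be treated as success.
````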
-
-When your program communicates with Azure Database for PostgreSQL through third-party middleware, ask the vendor whether the middleware contains retry logic for transient errors.
-
-Make sure to test your retry logic. For example, try to execute your code while scaling the compute resources of your Azure Database for PostgreSQL server up or down. Your application should handle the brief downtime that is encountered during this operation without any problems.
-
-## Next steps
-
-* [Troubleshoot connection issues to Azure Database for PostgreSQL](howto-troubleshoot-common-connection-issues.md)
postgresql Concepts Data Access And Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-data-access-and-security-private-link.md
- Title: Private Link - Azure Database for PostgreSQL - Single server
-description: Learn how Private link works for Azure Database for PostgreSQL - Single server.
------ Previously updated : 03/10/2020--
-# Private Link for Azure Database for PostgreSQL-Single server
-
-Private Link allows you to create private endpoints for Azure Database for PostgreSQL - Single server to bring it inside your Virtual Network (VNet). The private endpoint exposes a private IP within a subnet that you can use to connect to your database server just like any other resource in the VNet.
-
-For a list of PaaS services that support Private Link functionality, review the Private Link [documentation](../private-link/index.yml). A private endpoint is a private IP address within a specific [VNet](../virtual-network/virtual-networks-overview.md) and subnet.
-
-> [!NOTE]
-> The private link feature is only available for Azure Database for PostgreSQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers.
-
-## Data exfiltration prevention
-
-Data exfiltration in Azure Database for PostgreSQL Single server is when an authorized user, such as a database admin, is able to extract data from one system and move it to another location or system outside the organization. For example, the user moves the data to a storage account owned by a third party.
-
-Consider a scenario with a user running pgAdmin inside an Azure Virtual Machine (VM) that is connecting to an Azure Database for PostgreSQL Single server provisioned in West US. The example below shows how to limit access with public endpoints on Azure Database for PostgreSQL Single server using network access controls.
-
-* Disable all Azure service traffic to Azure Database for PostgreSQL Single server via the public endpoint by setting *Allow Azure Services* to OFF. Ensure no IP addresses or ranges are allowed to access the server either via [firewall rules](./concepts-firewall-rules.md) or [virtual network service endpoints](./concepts-data-access-and-security-vnet.md).
-
-* Only allow traffic to the Azure Database for PostgreSQL Single server using the private IP address of the VM. For more information, see the articles on [Service Endpoints](concepts-data-access-and-security-vnet.md) and [VNet firewall rules](howto-manage-vnet-using-portal.md).
-
-* On the Azure VM, narrow down the scope of outgoing connections by using Network Security Groups (NSGs) and Service Tags as follows:
-
-  * Specify an NSG rule to allow traffic for *Service Tag = SQL.WestUS* - only allowing connection to Azure Database for PostgreSQL Single server in West US
-  * Specify an NSG rule (with a higher priority) to deny traffic for *Service Tag = SQL* - denying connections to PostgreSQL Database in all regions
-
-At the end of this setup, the Azure VM can connect only to Azure Database for PostgreSQL Single server in the West US region. However, the connectivity isn't restricted to a single Azure Database for PostgreSQL Single server. The VM can still connect to any Azure Database for PostgreSQL Single server in the West US region, including the databases that aren't part of the subscription. While we've reduced the scope of data exfiltration in the above scenario to a specific region, we haven't eliminated it altogether.
-
-With Private Link, you can now set up network access controls like NSGs to restrict access to the private endpoint. Individual Azure PaaS resources are then mapped to specific private endpoints. A malicious insider can only access the mapped PaaS resource (for example an Azure Database for PostgreSQL Single server) and no other resource.
-
-## On-premises connectivity over private peering
-
-When you connect to the public endpoint from on-premises machines, your IP address needs to be added to the IP-based firewall using a Server-level firewall rule. While this model works well for allowing access to individual machines for dev or test workloads, it's difficult to manage in a production environment.
-
-With Private Link, you can enable cross-premises access to the private endpoint using [Express Route](https://azure.microsoft.com/services/expressroute/) (ER), private peering, or a [VPN tunnel](../vpn-gateway/index.yml). You can subsequently disable all access via the public endpoint and not use the IP-based firewall.
-
-> [!NOTE]
-> In some cases, the Azure Database for PostgreSQL server and the VNet subnet are in different subscriptions. In these cases, you must ensure the following configuration:
-> - Make sure that both subscriptions have the **Microsoft.DBforPostgreSQL** resource provider registered. For more information, see [resource-manager-registration][resource-manager-portal].
-
-## Configure Private Link for Azure Database for PostgreSQL Single server
-
-### Creation Process
-
-Private endpoints are required to enable Private Link. This can be done using the following how-to guides.
-
-* [Azure portal](./howto-configure-privatelink-portal.md)
-* [CLI](./howto-configure-privatelink-cli.md)
-
-### Approval Process
-Once the network admin creates the private endpoint (PE), the PostgreSQL admin can manage the private endpoint connection (PEC) to Azure Database for PostgreSQL. This separation of duties between the network admin and the DBA is helpful for management of Azure Database for PostgreSQL connectivity.
-
-* Navigate to the Azure Database for PostgreSQL server resource in the Azure portal.
-  * Select **Private endpoint connections** in the left pane to show a list of all private endpoint connections (PECs) and the corresponding private endpoint (PE) created for each.
-
-* Select an individual PEC from the list.
-
-* The PostgreSQL server admin can choose to approve or reject a PEC and optionally add a short text response.
-
-* After approval or rejection, the list will reflect the appropriate state along with the response text.
--
-## Use cases of Private Link for Azure Database for PostgreSQL
--
-Clients can connect to the private endpoint from the same VNet, a [peered VNet](../virtual-network/virtual-network-peering-overview.md) in the same region or across regions, or via a [VNet-to-VNet connection](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) across regions. Additionally, clients can connect from on-premises using ExpressRoute, private peering, or VPN tunneling. Below is a simplified diagram showing the common use cases.
--
-### Connecting from an Azure VM in Peered Virtual Network (VNet)
-Configure [VNet peering](../virtual-network/tutorial-connect-virtual-networks-powershell.md) to establish connectivity to the Azure Database for PostgreSQL - Single server from an Azure VM in a peered VNet.
-
-### Connecting from an Azure VM in VNet-to-VNet environment
-Configure a [VNet-to-VNet VPN gateway connection](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) to establish connectivity to an Azure Database for PostgreSQL - Single server from an Azure VM in a different region or subscription.
-
-### Connecting from an on-premises environment over VPN
-To establish connectivity from an on-premises environment to the Azure Database for PostgreSQL - Single server, choose and implement one of the following options:
-
-* [Point-to-Site connection](../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md)
-* [Site-to-Site VPN connection](../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md)
-* [ExpressRoute circuit](../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md)
-
-## Private Link combined with firewall rules
-
-The following situations and outcomes are possible when you use Private Link in combination with firewall rules:
-
-* If you don't configure any firewall rules, then by default, no traffic will be able to access the Azure Database for PostgreSQL Single server.
-
-* If you configure public traffic or a service endpoint and you create private endpoints, then different types of incoming traffic are authorized by the corresponding type of firewall rule.
-
-* If you don't configure any public traffic or service endpoint and you create private endpoints, then the Azure Database for PostgreSQL Single server is accessible only through the private endpoints. If you don't configure public traffic or a service endpoint, after all approved private endpoints are rejected or deleted, no traffic will be able to access the Azure Database for PostgreSQL Single server.
-
-## Deny public access for Azure Database for PostgreSQL Single server
-
-If you want to rely only on private endpoints for accessing your Azure Database for PostgreSQL Single server, you can disable all public endpoints ([firewall rules](concepts-firewall-rules.md) and [VNet service endpoints](concepts-data-access-and-security-vnet.md)) by setting the **Deny Public Network Access** configuration on the database server.
-
-When this setting is set to *YES*, only connections via private endpoints are allowed to your Azure Database for PostgreSQL. When this setting is set to *NO*, clients can connect to your Azure Database for PostgreSQL based on your firewall or VNet service endpoint settings. Additionally, once **Deny Public Network Access** is set, customers cannot add or update existing firewall rules and VNet service endpoint rules.
-
-> [!Note]
-> This feature is available in all Azure regions where Azure Database for PostgreSQL - Single server supports General Purpose and Memory Optimized pricing tiers.
->
-> This setting does not have any impact on the SSL and TLS configurations for your Azure Database for PostgreSQL Single server.
-
-To learn how to set the **Deny Public Network Access** for your Azure Database for PostgreSQL Single server from Azure portal, refer to [How to configure Deny Public Network Access](howto-deny-public-network-access.md).
-
-## Next steps
-
-To learn more about Azure Database for PostgreSQL Single server security features, see the following articles:
-
-* To configure a firewall for Azure Database for PostgreSQL Single server, see [Firewall support](./concepts-firewall-rules.md).
-
-* To learn how to configure a virtual network service endpoint for your Azure Database for PostgreSQL Single server, see [Configure access from virtual networks](./concepts-data-access-and-security-vnet.md).
-
-* For an overview of Azure Database for PostgreSQL Single server connectivity, see [Azure Database for PostgreSQL Connectivity Architecture](./concepts-connectivity-architecture.md)
-
-<!-- Link references, to text, Within this same GitHub repo. -->
-[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
postgresql Concepts Data Access And Security Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-data-access-and-security-vnet.md
- Title: Virtual network rules - Azure Database for PostgreSQL - Single Server
-description: Learn how to use virtual network (vnet) service endpoints to connect to Azure Database for PostgreSQL - Single Server.
----- Previously updated : 07/17/2020--
-# Use Virtual Network service endpoints and rules for Azure Database for PostgreSQL - Single Server
-
-*Virtual network rules* are one firewall security feature that controls whether your Azure Database for PostgreSQL server accepts communications that are sent from particular subnets in virtual networks. This article explains why the virtual network rule feature is sometimes your best option for securely allowing communication to your Azure Database for PostgreSQL server.
-
-To create a virtual network rule, there must first be a [virtual network][vm-virtual-network-overview] (VNet) and a [virtual network service endpoint][vm-virtual-network-service-endpoints-overview-649d] for the rule to reference. The following picture illustrates how a Virtual Network service endpoint works with Azure Database for PostgreSQL:
--
-> [!NOTE]
-> This feature is available in all regions of Azure public cloud where Azure Database for PostgreSQL is deployed for General Purpose and Memory Optimized servers.
-> In the case of VNet peering, if traffic flows through a common VNet gateway with service endpoints and is supposed to flow to the peer, create an ACL/VNet rule to allow Azure Virtual Machines in the gateway VNet to access the Azure Database for PostgreSQL server.
-
-You can also consider using [Private Link](concepts-data-access-and-security-private-link.md) for connections. Private Link provides a private IP address in your VNet for the Azure Database for PostgreSQL server.
-
-<a name="anch-terminology-and-description-82f"></a>
-## Terminology and description
-
-**Virtual network:** You can have virtual networks associated with your Azure subscription.
-
-**Subnet:** A virtual network contains **subnets**. Any Azure virtual machine (VM) within the VNet is assigned to a subnet. A subnet can contain multiple VMs and/or other compute nodes. Compute nodes that are outside of your virtual network cannot access your virtual network unless you configure your security to allow access.
-
-**Virtual Network service endpoint:** A [Virtual Network service endpoint][vm-virtual-network-service-endpoints-overview-649d] is a subnet whose property values include one or more formal Azure service type names. In this article, we are interested in the type name of **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure Database for PostgreSQL and MySQL services. It is important to note that applying the **Microsoft.Sql** service tag to a VNet service endpoint configures service endpoint traffic for all Azure Database services, including Azure Database for PostgreSQL and Azure Database for MySQL.
-
-**Virtual network rule:** A virtual network rule for your Azure Database for PostgreSQL server is a subnet that is listed in the access control list (ACL) of your Azure Database for PostgreSQL server. To be in the ACL for your Azure Database for PostgreSQL server, the subnet must contain the **Microsoft.Sql** type name.
-
-A virtual network rule tells your Azure Database for PostgreSQL server to accept communications from every node that is on the subnet.
-
-
-## Benefits of a virtual network rule
-
-Until you take action, the VMs in your subnet(s) cannot communicate with your Azure Database for PostgreSQL server. One action that establishes the communication is the creation of a virtual network rule. The rationale for choosing the VNet rule approach requires a compare-and-contrast discussion involving the competing security options offered by the firewall.
-
-### Allow access to Azure services
-
-The Connection security pane has an **ON/OFF** button that is labeled **Allow access to Azure services**. The **ON** setting allows communications from all Azure IP addresses and all Azure subnets. These Azure IPs or subnets might not be owned by you. This **ON** setting is probably more open than you want your Azure Database for PostgreSQL server to be. The virtual network rule feature offers much finer-grained control.
-
-### IP rules
-
-The Azure Database for PostgreSQL firewall allows you to specify IP address ranges from which communications are accepted. This approach is fine for stable IP addresses that are outside the Azure private network. But many nodes inside the Azure private network are configured with *dynamic* IP addresses that might change, such as when your VM is restarted. It would be folly to specify a dynamic IP address in a firewall rule in a production environment.
-
-You can salvage the IP option by obtaining a *static* IP address for your VM. For details, see [Configure private IP addresses for a virtual machine by using the Azure portal][vm-configure-private-ip-addresses-for-a-virtual-machine-using-the-azure-portal-321w].
-
-However, the static IP approach can become difficult to manage, and it is costly when done at scale. Virtual network rules are easier to establish and to manage.
--
-<a name="anch-details-about-vnet-rules-38q"></a>
-
-## Details about virtual network rules
-
-This section describes several details about virtual network rules.
-
-### Only one geographic region
-
-Each Virtual Network service endpoint applies to only one Azure region. The endpoint does not enable other regions to accept communication from the subnet.
-
-Any virtual network rule is limited to the region that its underlying endpoint applies to.
-
-### Server-level, not database-level
-
-Each virtual network rule applies to your whole Azure Database for PostgreSQL server, not just to one particular database on the server. In other words, a virtual network rule applies at the server level, not at the database level.
-
-#### Security administration roles
-
-There is a separation of security roles in the administration of Virtual Network service endpoints. Action is required from each of the following roles:
-
-- **Network Admin:** &nbsp; Turn on the endpoint.
-- **Database Admin:** &nbsp; Update the access control list (ACL) to add the given subnet to the Azure Database for PostgreSQL server.
-
-*Azure RBAC alternative:*
-
-The roles of Network Admin and Database Admin have more capabilities than are needed to manage virtual network rules. Only a subset of their capabilities is needed.
-
-You have the option of using [Azure role-based access control (Azure RBAC)][rbac-what-is-813s] in Azure to create a single custom role that has only the necessary subset of capabilities. The custom role could be used instead of involving either the Network Admin or the Database Admin. The surface area of your security exposure is lower if you add a user to a custom role, versus adding the user to the other two major administrator roles.
-
-> [!NOTE]
-> In some cases the Azure Database for PostgreSQL and the VNet-subnet are in different subscriptions. In these cases you must ensure the following configurations:
-> - Both subscriptions must be in the same Azure Active Directory tenant.
-> - The user has the required permissions to initiate operations, such as enabling service endpoints and adding a VNet-subnet to the given Server.
-> - Make sure that both subscriptions have the **Microsoft.Sql** and **Microsoft.DBforPostgreSQL** resource providers registered. For more information, see [resource-manager-registration][resource-manager-portal].
-
-## Limitations
-
-For Azure Database for PostgreSQL, the virtual network rules feature has the following limitations:
-
-- A Web App can be mapped to a private IP in a VNet/subnet. Even if service endpoints are turned ON from the given VNet/subnet, connections from the Web App to the server will have an Azure public IP source, not a VNet/subnet source. To enable connectivity from a Web App to a server that has VNet firewall rules, you must **Allow Azure services to access server** on the server.
-
-- In the firewall for your Azure Database for PostgreSQL, each virtual network rule references a subnet. All these referenced subnets must be hosted in the same geographic region that hosts the Azure Database for PostgreSQL.
-
-- Each Azure Database for PostgreSQL server can have up to 128 ACL entries for any given virtual network.
-
-- Virtual network rules apply only to Azure Resource Manager virtual networks, not to [classic deployment model][arm-deployment-model-568f] networks.
-
-- Turning ON virtual network service endpoints to Azure Database for PostgreSQL using the **Microsoft.Sql** service tag also enables the endpoints for all Azure Database services, including Azure Database for PostgreSQL and Azure Database for MySQL.
-
-- Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
-
-- If **Microsoft.Sql** is enabled in a subnet, it indicates that you only want to use VNet rules to connect. [Non-VNet firewall rules](concepts-firewall-rules.md) of resources in that subnet will not work.
-
-- On the firewall, IP address ranges do apply to the following networking items, but virtual network rules do not:
- - [Site-to-Site (S2S) virtual private network (VPN)][vpn-gateway-indexmd-608y]
- - On-premises via [ExpressRoute][expressroute-indexmd-744v]
-
-## ExpressRoute
-
-If your network is connected to the Azure network through use of [ExpressRoute][expressroute-indexmd-744v], each circuit is configured with two public IP addresses at the Microsoft Edge. The two IP addresses are used to connect to Microsoft Services, such as to Azure Storage, by using Azure Public Peering.
-
-To allow communication from your circuit to Azure Database for PostgreSQL, you must create IP network rules for the public IP addresses of your circuits. In order to find the public IP addresses of your ExpressRoute circuit, open a support ticket with ExpressRoute by using the Azure portal.
-
-## Adding a VNET Firewall rule to your server without turning on VNET Service Endpoints
-
-Merely setting a VNet firewall rule does not help secure the server to the VNet. You must also turn VNet service endpoints **On** for the security to take effect. When you turn service endpoints **On**, your VNet-subnet experiences downtime until it completes the transition from **Off** to **On**. This is especially true in the context of large VNets. You can use the **IgnoreMissingServiceEndpoint** flag to reduce or eliminate the downtime during transition.
-
-You can set the **IgnoreMissingServiceEndpoint** flag by using the Azure CLI or portal.
-
-## Related articles
-- [Azure virtual networks][vm-virtual-network-overview]
-- [Azure virtual network service endpoints][vm-virtual-network-service-endpoints-overview-649d]
-
-## Next steps
-For articles on creating VNet rules, see:
-- [Create and manage Azure Database for PostgreSQL VNet rules using the Azure portal](howto-manage-vnet-using-portal.md)
-- [Create and manage Azure Database for PostgreSQL VNet rules using Azure CLI](howto-manage-vnet-using-cli.md)
-
-<!-- Link references, to text, Within this same GitHub repo. -->
-[arm-deployment-model-568f]: ../azure-resource-manager/management/deployment-models.md
-
-[vm-virtual-network-overview]: ../virtual-network/virtual-networks-overview.md
-
-[vm-virtual-network-service-endpoints-overview-649d]: ../virtual-network/virtual-network-service-endpoints-overview.md
-
-[vm-configure-private-ip-addresses-for-a-virtual-machine-using-the-azure-portal-321w]: ../virtual-network/ip-services/virtual-networks-static-private-ip-arm-pportal.md
-
-[rbac-what-is-813s]: ../role-based-access-control/overview.md
-
-[vpn-gateway-indexmd-608y]: ../vpn-gateway/index.yml
-
-[expressroute-indexmd-744v]: ../expressroute/index.yml
-
-[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
postgresql Concepts Data Encryption Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-data-encryption-postgresql.md
- Title: Data encryption with customer-managed key - Azure Database for PostgreSQL - Single server
-description: Azure Database for PostgreSQL Single server data encryption with a customer-managed key enables you to Bring Your Own Key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data.
------ Previously updated : 01/13/2020--
-# Azure Database for PostgreSQL Single server data encryption with a customer-managed key
-
-Azure PostgreSQL leverages [Azure Storage encryption](../storage/common/storage-service-encryption.md) to encrypt data at rest by default using Microsoft-managed keys. For Azure PostgreSQL users, it is very similar to Transparent Data Encryption (TDE) in other databases such as SQL Server. Many organizations require full control of access to the data using a customer-managed key. Data encryption with customer-managed keys for Azure Database for PostgreSQL Single server enables you to bring your own key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you are responsible for, and in full control of, a key's lifecycle, key usage permissions, and auditing of operations on keys.
-
-Data encryption with customer-managed keys for Azure Database for PostgreSQL Single server is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault](../key-vault/general/security-features.md) instance. The key encryption key (KEK) and data encryption key (DEK) are described in more detail later in this article.
-
-Key Vault is a cloud-based, external key management system. It's highly available and provides scalable, secure storage for RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs). It doesn't allow direct access to a stored key, but provides encryption and decryption services to authorized entities. Key Vault can generate the key, import it, or [have it transferred from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md).
-
-> [!NOTE]
-> This feature is available in all Azure regions where Azure Database for PostgreSQL Single server supports "General Purpose" and "Memory Optimized" pricing tiers. For other limitations, refer to the [limitation](concepts-data-encryption-postgresql.md#limitations) section.
-
-## Benefits
-
-Data encryption with customer-managed keys for Azure Database for PostgreSQL Single server provides the following benefits:
-
-* You fully control data access, with the ability to remove the key and make the database inaccessible.
-* Full control over the key-lifecycle, including rotation of the key to align with corporate policies.
-* Central management and organization of keys in Azure Key Vault.
-* Enabling encryption does not have any additional performance impact, with or without a customer-managed key (CMK), because PostgreSQL relies on the Azure storage layer for data encryption in both scenarios. The only difference when a CMK is used is that the **Azure Storage encryption key**, which performs the actual data encryption, is itself encrypted with the CMK.
-* Ability to implement separation of duties between security officers, DBAs, and system administrators.
--
-## Terminology and description
-
-**Data encryption key (DEK)**: A symmetric AES256 key used to encrypt a partition or block of data. Encrypting each block of data with a different key makes cryptanalysis attacks more difficult. Access to DEKs is needed by the resource provider or application instance that is encrypting and decrypting a specific block. When you replace a DEK with a new key, only the data in its associated block must be re-encrypted with the new key.
-
-**Key encryption key (KEK)**: An encryption key used to encrypt the DEKs. A KEK that never leaves Key Vault allows the DEKs themselves to be encrypted and controlled. The entity that has access to the KEK might be different than the entity that requires the DEK. Because the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which all DEKs can be deleted, by deleting the KEK.
-
-The DEKs, encrypted with the KEKs, are stored separately. Only an entity with access to the KEK can decrypt these DEKs. For more information, see [Security in encryption at rest](../security/fundamentals/encryption-atrest.md).
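To make the DEK/KEK relationship concrete, here is a minimal, self-contained sketch of envelope encryption using the open-source `cryptography` package. It only illustrates the wrap/unwrap concept; in the service, the KEK never leaves Key Vault, and the wrap and unwrap operations happen inside Key Vault rather than in your code.

````python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# KEK: an asymmetric RSA 2048 key (held in Key Vault in the real service).
kek = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# DEK: a symmetric AES-256 key used to encrypt a block of data.
dek = os.urandom(32)

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# "wrapKey": encrypt the DEK with the KEK's public part; only the wrapped
# DEK is stored alongside the data.
wrapped_dek = kek.public_key().encrypt(dek, oaep)

# "unwrapKey": recover the DEK with the KEK's private part when the data
# must be encrypted or decrypted.
assert kek.decrypt(wrapped_dek, oaep) == dek
````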
-
-## How data encryption with a customer-managed key works
--
-For a PostgreSQL server to use customer-managed keys stored in Key Vault for encryption of the DEK, a Key Vault administrator gives the following access rights to the server:
-
-* **get**: For retrieving the public part and properties of the key in the key vault.
-* **wrapKey**: To be able to encrypt the DEK. The encrypted DEK is stored in the Azure Database for PostgreSQL.
-* **unwrapKey**: To be able to decrypt the DEK. Azure Database for PostgreSQL needs the decrypted DEK to encrypt/decrypt the data.
-
-The key vault administrator can also [enable logging of Key Vault audit events](../azure-monitor/insights/key-vault-insights-overview.md), so they can be audited later.
-
-When the server is configured to use the customer-managed key stored in the key vault, the server sends the DEK to the key vault for encryption. Key Vault returns the encrypted DEK, which is stored in the user database. Similarly, when needed, the server sends the protected DEK to the key vault for decryption. Auditors can use Azure Monitor to review Key Vault audit event logs, if logging is enabled.
-
-## Requirements for configuring data encryption for Azure Database for PostgreSQL Single server
-
-The following are requirements for configuring Key Vault:
-
-* Key Vault and Azure Database for PostgreSQL Single server must belong to the same Azure Active Directory (Azure AD) tenant. Cross-tenant Key Vault and server interactions aren't supported. Moving the Key Vault resource afterwards requires you to reconfigure the data encryption.
-* The key vault must be set with 90 days for 'Days to retain deleted vaults'. If the existing key vault has been configured with a lower number, you will need to create a new key vault as it cannot be modified after creation.
-* Enable the soft-delete feature on the key vault, to protect from data loss if an accidental key (or Key Vault) deletion happens. Soft-deleted resources are retained for 90 days, unless the user recovers or purges them in the meantime. The recover and purge actions have their own permissions associated in a Key Vault access policy. The soft-delete feature is off by default, but you can enable it through PowerShell or the Azure CLI (note that you can't enable it through the Azure portal).
-* Enable purge protection to enforce a mandatory retention period for deleted vaults and vault objects.
-* Grant the Azure Database for PostgreSQL Single server access to the key vault with the get, wrapKey, and unwrapKey permissions by using its unique managed identity. In the Azure portal, the unique 'Service' identity is automatically created when data encryption is enabled on the PostgreSQL Single server. See [Data encryption for Azure Database for PostgreSQL Single server by using the Azure portal](howto-data-encryption-portal.md) for detailed, step-by-step instructions when you're using the Azure portal.
-
-The following are requirements for configuring the customer-managed key:
-
-* The customer-managed key to be used for encrypting the DEK can only be an asymmetric RSA 2048 key.
-* The key activation date (if set) must be a date and time in the past. The expiration date (if set) must be a future date and time.
-* The key must be in the *Enabled* state.
-* If you're [importing an existing key](/rest/api/keyvault/keys/import-key/import-key) into the key vault, make sure to provide it in the supported file formats (`.pfx`, `.byok`, `.backup`).
-
-## Recommendations
-
-When you're using data encryption by using a customer-managed key, here are recommendations for configuring Key Vault:
-
-* Set a resource lock on Key Vault to control who can delete this critical resource and prevent accidental or unauthorized deletion.
-* Enable auditing and reporting on all encryption keys. Key Vault provides logs that are easy to inject into other security information and event management tools. Azure Monitor Log Analytics is one example of a service that's already integrated.
-* Ensure that Key Vault and Azure Database for PostgreSQL Single server reside in the same region, to ensure faster access for DEK wrap and unwrap operations.
-* Lock down Azure Key Vault to only **private endpoint and selected networks** and allow only *trusted Microsoft* services to secure the resources.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/keyvault-trusted-service.png" alt-text="trusted-service-with-AKV":::
-
-Here are recommendations for configuring a customer-managed key:
-
-* Keep a copy of the customer-managed key in a secure place, or escrow it to an escrow service.
-
-* If Key Vault generates the key, create a key backup before using the key for the first time. You can only restore the backup to Key Vault. For more information about the backup command, see [Backup-AzKeyVaultKey](/powershell/module/az.keyVault/backup-azkeyVaultkey).
-
-## Inaccessible customer-managed key condition
-
-When you configure data encryption with a customer-managed key in Key Vault, continuous access to this key is required for the server to stay online. If the server loses access to the customer-managed key in Key Vault, the server begins denying all connections within 10 minutes. The server issues a corresponding error message, and changes the server state to *Inaccessible*. Some of the reasons why the server can reach this state are:
-
-* If you create a point-in-time restore server for your Azure Database for PostgreSQL Single server that has data encryption enabled, the newly created server will be in an *Inaccessible* state. You can fix the server state through the [Azure portal](howto-data-encryption-portal.md#using-data-encryption-for-restore-or-replica-servers) or [CLI](howto-data-encryption-cli.md#using-data-encryption-for-restore-or-replica-servers).
-* If you create a read replica for your Azure Database for PostgreSQL Single server that has data encryption enabled, the replica server will be in an *Inaccessible* state. You can fix the server state through the [Azure portal](howto-data-encryption-portal.md#using-data-encryption-for-restore-or-replica-servers) or [CLI](howto-data-encryption-cli.md#using-data-encryption-for-restore-or-replica-servers).
-* If you delete the Key Vault, the Azure Database for PostgreSQL Single server will be unable to access the key and will move to an *Inaccessible* state. Recover the [Key Vault](../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the server *Available*.
-* If you delete the key from the Key Vault, the Azure Database for PostgreSQL Single server will be unable to access the key and will move to an *Inaccessible* state. Recover the [key](../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the server *Available*.
-* If the key stored in Azure Key Vault expires, the key will become invalid and the Azure Database for PostgreSQL Single server will transition into an *Inaccessible* state. Extend the key expiry date using the [CLI](/cli/azure/keyvault/key#az-keyvault-key-set-attributes) and then revalidate the data encryption to make the server *Available*.
-
-### Accidental key access revocation from Key Vault
-
-It might happen that someone with sufficient access rights to Key Vault accidentally disables server access to the key by:
-
-* Revoking the key vault's get, wrapKey, and unwrapKey permissions from the server.
-* Deleting the key.
-* Deleting the key vault.
-* Changing the key vault's firewall rules.
-
-* Deleting the managed identity of the server in Azure AD.
-
-## Monitor the customer-managed key in Key Vault
-
-To monitor the database state, and to enable alerting for the loss of transparent data encryption protector access, configure the following Azure features:
-
-* [Azure Resource Health](../service-health/resource-health-overview.md): An inaccessible database that has lost access to the customer key shows as "Inaccessible" after the first connection to the database has been denied.
-* [Activity log](../service-health/alerts-activity-log-service-notifications-portal.md): When access to the customer key in the customer-managed Key Vault fails, entries are added to the activity log. Creating alerts for these events lets you reinstate access as soon as possible.
-
-* [Action groups](../azure-monitor/alerts/action-groups.md): Define these groups to send you notifications and alerts based on your preferences.
-
-## Restore and replicate with a customer-managed key in Key Vault
-
-After Azure Database for PostgreSQL Single server is encrypted with a customer-managed key stored in Key Vault, any newly created copy of the server is also encrypted. You can make this new copy either through a local or geo-restore operation, or through read replicas. However, the copy can be changed to reflect a new customer-managed key for encryption. When the customer-managed key is changed, old backups of the server start using the latest key.
-
-To avoid issues while setting up customer-managed data encryption during restore or read replica creation, it's important to follow these steps on the primary and restored/replica servers:
-
-* Initiate the restore or read replica creation process from the primary Azure Database for PostgreSQL Single server.
-* Keep the newly created server (restored/replica) in an inaccessible state, because its unique identity hasn't yet been given permissions to Key Vault.
-* On the restored/replica server, revalidate the customer-managed key in the data encryption settings. This ensures that the newly created server is given wrap and unwrap permissions to the key stored in Key Vault.
-
-## Limitations
-
-For Azure Database for PostgreSQL, support for encryption of data at rest using a customer-managed key (CMK) has a few limitations:
-
-* Support for this functionality is limited to **General Purpose** and **Memory Optimized** pricing tiers.
-* This feature is only supported in regions and servers that support storage up to 16 TB. For the list of Azure regions supporting storage up to 16 TB, refer to the storage section in the documentation [here](concepts-pricing-tiers.md#storage).
-
- > [!NOTE]
 > - For all new PostgreSQL servers created in the regions listed above, encryption with customer-managed keys is **available**. A point-in-time restored (PITR) server or read replica does not qualify, though in theory they are 'new'.
 > - To validate whether your provisioned server supports up to 16 TB, go to the pricing tier blade in the portal and check the max storage size supported by your provisioned server. If you can move the slider only up to 4 TB, your server may not support encryption with customer-managed keys; however, the data is encrypted using service-managed keys at all times. Please reach out to AskAzureDBforPostgreSQL@service.microsoft.com if you have any questions.
-
-* Encryption is only supported with an RSA 2048 cryptographic key.
-
-## Next steps
-
-Learn how to [set up data encryption with a customer-managed key for your Azure database for PostgreSQL Single server by using the Azure portal](howto-data-encryption-portal.md).
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-extensions.md
- Title: Extensions - Azure Database for PostgreSQL - Single Server
-description: Learn about the available PostgreSQL extensions in Azure Database for PostgreSQL - Single Server
----- Previously updated : 03/25/2021-
-# PostgreSQL extensions in Azure Database for PostgreSQL - Single Server
-PostgreSQL provides the ability to extend the functionality of your database using extensions. Extensions bundle multiple related SQL objects together in a single package that can be loaded or removed from your database with a single command. After being loaded in the database, extensions function like built-in features.
-
-## How to use PostgreSQL extensions
-PostgreSQL extensions must be installed in your database before you can use them. To install a particular extension, run the [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) command from the psql tool to load the packaged objects into your database.
-
-Azure Database for PostgreSQL supports a subset of key extensions, as listed below. This information is also available by running `SELECT * FROM pg_available_extensions;` (a short sketch of both commands follows). Extensions beyond the ones listed are not supported. You cannot create your own extension in Azure Database for PostgreSQL.
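As a small sketch of both commands, assuming an open psycopg2 connection and that you want the `hstore` extension from the tables below (the SQL is the same if you run it from psql instead):

````python
import psycopg2

conn = psycopg2.connect("<your-connection-string>")  # placeholder
with conn, conn.cursor() as cur:
    # Load the extension's packaged objects into the current database.
    cur.execute("CREATE EXTENSION IF NOT EXISTS hstore;")
    # List the extensions the server makes available.
    cur.execute("SELECT name, default_version FROM pg_available_extensions ORDER BY name;")
    for name, version in cur.fetchall():
        print(name, version)
````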
-
-## Postgres 11 extensions
-
-The following extensions are available in Azure Database for PostgreSQL servers which have Postgres version 11.
-
-> [!div class="mx-tableFixed"]
-> | **Extension**| **Extension version** | **Description** |
-> ||||
-> |[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.5.1 | Used to parse an address into constituent elements. |
-> |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.5.1 | Address Standardizer US dataset example|
-> |[btree_gin](https://www.postgresql.org/docs/11/btree-gin.html) | 1.3 | support for indexing common datatypes in GIN|
-> |[btree_gist](https://www.postgresql.org/docs/11/btree-gist.html) | 1.5 | support for indexing common datatypes in GiST|
-> |[citext](https://www.postgresql.org/docs/11/citext.html) | 1.5 | data type for case-insensitive character strings|
-> |[cube](https://www.postgresql.org/docs/11/cube.html) | 1.4 | data type for multidimensional cubes|
-> |[dblink](https://www.postgresql.org/docs/11/dblink.html) | 1.2 | connect to other PostgreSQL databases from within a database|
-> |[dict_int](https://www.postgresql.org/docs/11/dict-int.html) | 1.0 | text search dictionary template for integers|
-> |[earthdistance](https://www.postgresql.org/docs/11/earthdistance.html) | 1.1 | calculate great-circle distances on the surface of the Earth|
-> |[fuzzystrmatch](https://www.postgresql.org/docs/11/fuzzystrmatch.html) | 1.1 | determine similarities and distance between strings|
-> |[hstore](https://www.postgresql.org/docs/11/hstore.html) | 1.5 | data type for storing sets of (key, value) pairs|
-> |[hypopg](https://hypopg.readthedocs.io/en/latest/) | 1.1.2 | Hypothetical indexes for PostgreSQL|
-> |[intarray](https://www.postgresql.org/docs/11/intarray.html) | 1.2 | functions, operators, and index support for 1-D arrays of integers|
-> |[isn](https://www.postgresql.org/docs/11/isn.html) | 1.2 | data types for international product numbering standards|
-> |[ltree](https://www.postgresql.org/docs/11/ltree.html) | 1.1 | data type for hierarchical tree-like structures|
-> |[orafce](https://github.com/orafce/orafce) | 3.7 | Functions and operators that emulate a subset of functions and packages from commercial RDBMS|
-> |[pgaudit](https://www.pgaudit.org/) | 1.3.1 | provides auditing functionality|
-> |[pgcrypto](https://www.postgresql.org/docs/11/pgcrypto.html) | 1.3 | cryptographic functions|
-> |[pgrouting](https://pgrouting.org/) | 2.6.2 | pgRouting Extension|
-> |[pgrowlocks](https://www.postgresql.org/docs/11/pgrowlocks.html) | 1.2 | show row-level locking information|
-> |[pgstattuple](https://www.postgresql.org/docs/11/pgstattuple.html) | 1.5 | show tuple-level statistics|
-> |[pg_buffercache](https://www.postgresql.org/docs/11/pgbuffercache.html) | 1.3 | examine the shared buffer cache|
-> |[pg_partman](https://github.com/pgpartman/pg_partman) | 4.0.0 | Extension to manage partitioned tables by time or ID|
-> |[pg_prewarm](https://www.postgresql.org/docs/11/pgprewarm.html) | 1.2 | prewarm relation data|
-> |[pg_stat_statements](https://www.postgresql.org/docs/11/pgstatstatements.html) | 1.6 | track execution statistics of all SQL statements executed|
-> |[pg_trgm](https://www.postgresql.org/docs/11/pgtrgm.html) | 1.4 | text similarity measurement and index searching based on trigrams|
-> |[plpgsql](https://www.postgresql.org/docs/11/plpgsql.html) | 1.0 | PL/pgSQL procedural language|
-> |[plv8](https://plv8.github.io/) | 2.3.11 | PL/JavaScript (v8) trusted procedural language|
-> |[postgis](https://www.postgis.net/) | 2.5.1 | PostGIS geometry, geography, and raster spatial types and functions|
-> |[postgis_sfcgal](https://www.postgis.net/) | 2.5.1 | PostGIS SFCGAL functions|
-> |[postgis_tiger_geocoder](https://www.postgis.net/) | 2.5.1 | PostGIS tiger geocoder and reverse geocoder|
-> |[postgis_topology](https://postgis.net/docs/Topology.html) | 2.5.1 | PostGIS topology spatial types and functions|
-> |[postgres_fdw](https://www.postgresql.org/docs/11/postgres-fdw.html) | 1.0 | foreign-data wrapper for remote PostgreSQL servers|
-> |[tablefunc](https://www.postgresql.org/docs/11/tablefunc.html) | 1.0 | functions that manipulate whole tables, including crosstab|
-> |[timescaledb](https://docs.timescale.com/timescaledb/latest/) |1.7.4 | Enables scalable inserts and complex queries for time-series data|
-> |[unaccent](https://www.postgresql.org/docs/11/unaccent.html) | 1.1 | text search dictionary that removes accents|
-> |[uuid-ossp](https://www.postgresql.org/docs/11/uuid-ossp.html) | 1.1 | generate universally unique identifiers (UUIDs)|
-
-## Postgres 10 extensions
-
-The following extensions are available in Azure Database for PostgreSQL servers which have Postgres version 10.
-
-> [!div class="mx-tableFixed"]
-> | **Extension**| **Extension version** | **Description** |
-> ||||
-> |[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.5.1 | Used to parse an address into constituent elements. |
-> |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.5.1 | Address Standardizer US dataset example|
-> |[btree_gin](https://www.postgresql.org/docs/10/btree-gin.html) | 1.3 | support for indexing common datatypes in GIN|
-> |[btree_gist](https://www.postgresql.org/docs/10/btree-gist.html) | 1.5 | support for indexing common datatypes in GiST|
-> |[chkpass](https://www.postgresql.org/docs/10/chkpass.html) | 1.0 | data type for auto-encrypted passwords|
-> |[citext](https://www.postgresql.org/docs/10/citext.html) | 1.4 | data type for case-insensitive character strings|
-> |[cube](https://www.postgresql.org/docs/10/cube.html) | 1.2 | data type for multidimensional cubes|
-> |[dblink](https://www.postgresql.org/docs/10/dblink.html) | 1.2 | connect to other PostgreSQL databases from within a database|
-> |[dict_int](https://www.postgresql.org/docs/10/dict-int.html) | 1.0 | text search dictionary template for integers|
-> |[earthdistance](https://www.postgresql.org/docs/10/earthdistance.html) | 1.1 | calculate great-circle distances on the surface of the Earth|
-> |[fuzzystrmatch](https://www.postgresql.org/docs/10/fuzzystrmatch.html) | 1.1 | determine similarities and distance between strings|
-> |[hstore](https://www.postgresql.org/docs/10/hstore.html) | 1.4 | data type for storing sets of (key, value) pairs|
-> |[hypopg](https://hypopg.readthedocs.io/en/latest/) | 1.1.1 | Hypothetical indexes for PostgreSQL|
-> |[intarray](https://www.postgresql.org/docs/10/intarray.html) | 1.2 | functions, operators, and index support for 1-D arrays of integers|
-> |[isn](https://www.postgresql.org/docs/10/isn.html) | 1.1 | data types for international product numbering standards|
-> |[ltree](https://www.postgresql.org/docs/10/ltree.html) | 1.1 | data type for hierarchical tree-like structures|
-> |[orafce](https://github.com/orafce/orafce) | 3.7 | Functions and operators that emulate a subset of functions and packages from commercial RDBMS|
-> |[pgaudit](https://www.pgaudit.org/) | 1.2 | provides auditing functionality|
-> |[pgcrypto](https://www.postgresql.org/docs/10/pgcrypto.html) | 1.3 | cryptographic functions|
-> |[pgrouting](https://pgrouting.org/) | 2.5.2 | pgRouting Extension|
-> |[pgrowlocks](https://www.postgresql.org/docs/10/pgrowlocks.html) | 1.2 | show row-level locking information|
-> |[pgstattuple](https://www.postgresql.org/docs/10/pgstattuple.html) | 1.5 | show tuple-level statistics|
-> |[pg_buffercache](https://www.postgresql.org/docs/10/pgbuffercache.html) | 1.3 | examine the shared buffer cache|
-> |[pg_partman](https://github.com/pgpartman/pg_partman) | 2.6.3 | Extension to manage partitioned tables by time or ID|
-> |[pg_prewarm](https://www.postgresql.org/docs/10/pgprewarm.html) | 1.1 | prewarm relation data|
-> |[pg_stat_statements](https://www.postgresql.org/docs/10/pgstatstatements.html) | 1.6 | track execution statistics of all SQL statements executed|
-> |[pg_trgm](https://www.postgresql.org/docs/10/pgtrgm.html) | 1.3 | text similarity measurement and index searching based on trigrams|
-> |[plpgsql](https://www.postgresql.org/docs/10/plpgsql.html) | 1.0 | PL/pgSQL procedural language|
-> |[plv8](https://plv8.github.io/) | 2.1.0 | PL/JavaScript (v8) trusted procedural language|
-> |[postgis](https://www.postgis.net/) | 2.4.3 | PostGIS geometry, geography, and raster spatial types and functions|
-> |[postgis_sfcgal](https://www.postgis.net/) | 2.4.3 | PostGIS SFCGAL functions|
-> |[postgis_tiger_geocoder](https://www.postgis.net/) | 2.4.3 | PostGIS tiger geocoder and reverse geocoder|
-> |[postgis_topology](https://postgis.net/docs/Topology.html) | 2.4.3 | PostGIS topology spatial types and functions|
-> |[postgres_fdw](https://www.postgresql.org/docs/10/postgres-fdw.html) | 1.0 | foreign-data wrapper for remote PostgreSQL servers|
-> |[tablefunc](https://www.postgresql.org/docs/10/tablefunc.html) | 1.0 | functions that manipulate whole tables, including crosstab|
-> |[timescaledb](https://docs.timescale.com/timescaledb/latest/) | 1.7.4 | Enables scalable inserts and complex queries for time-series data|
-> |[unaccent](https://www.postgresql.org/docs/10/unaccent.html) | 1.1 | text search dictionary that removes accents|
-> |[uuid-ossp](https://www.postgresql.org/docs/10/uuid-ossp.html) | 1.1 | generate universally unique identifiers (UUIDs)|
-
-## Postgres 9.6 extensions
-
-The following extensions are available in Azure Database for PostgreSQL servers which have Postgres version 9.6.
-
-> [!div class="mx-tableFixed"]
-> | **Extension**| **Extension version** | **Description** |
-> ||||
-> |[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.3.2 | Used to parse an address into constituent elements. |
-> |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.3.2 | Address Standardizer US dataset example|
-> |[btree_gin](https://www.postgresql.org/docs/9.6/btree-gin.html) | 1.0 | support for indexing common datatypes in GIN|
-> |[btree_gist](https://www.postgresql.org/docs/9.6/btree-gist.html) | 1.2 | support for indexing common datatypes in GiST|
-> |[chkpass](https://www.postgresql.org/docs/9.6/chkpass.html) | 1.0 | data type for auto-encrypted passwords|
-> |[citext](https://www.postgresql.org/docs/9.6/citext.html) | 1.3 | data type for case-insensitive character strings|
-> |[cube](https://www.postgresql.org/docs/9.6/cube.html) | 1.2 | data type for multidimensional cubes|
-> |[dblink](https://www.postgresql.org/docs/9.6/dblink.html) | 1.2 | connect to other PostgreSQL databases from within a database|
-> |[dict_int](https://www.postgresql.org/docs/9.6/dict-int.html) | 1.0 | text search dictionary template for integers|
-> |[earthdistance](https://www.postgresql.org/docs/9.6/earthdistance.html) | 1.1 | calculate great-circle distances on the surface of the Earth|
-> |[fuzzystrmatch](https://www.postgresql.org/docs/9.6/fuzzystrmatch.html) | 1.1 | determine similarities and distance between strings|
-> |[hstore](https://www.postgresql.org/docs/9.6/hstore.html) | 1.4 | data type for storing sets of (key, value) pairs|
-> |[hypopg](https://hypopg.readthedocs.io/en/latest/) | 1.1.1 | Hypothetical indexes for PostgreSQL|
-> |[intarray](https://www.postgresql.org/docs/9.6/intarray.html) | 1.2 | functions, operators, and index support for 1-D arrays of integers|
-> |[isn](https://www.postgresql.org/docs/9.6/isn.html) | 1.1 | data types for international product numbering standards|
-> |[ltree](https://www.postgresql.org/docs/9.6/ltree.html) | 1.1 | data type for hierarchical tree-like structures|
-> |[orafce](https://github.com/orafce/orafce) | 3.7 | Functions and operators that emulate a subset of functions and packages from commercial RDBMS|
-> |[pgaudit](https://www.pgaudit.org/) | 1.1.2 | provides auditing functionality|
-> |[pgcrypto](https://www.postgresql.org/docs/9.6/pgcrypto.html) | 1.3 | cryptographic functions|
-> |[pgrouting](https://pgrouting.org/) | 2.3.2 | pgRouting Extension|
-> |[pgrowlocks](https://www.postgresql.org/docs/9.6/pgrowlocks.html) | 1.2 | show row-level locking information|
-> |[pgstattuple](https://www.postgresql.org/docs/9.6/pgstattuple.html) | 1.4 | show tuple-level statistics|
-> |[pg_buffercache](https://www.postgresql.org/docs/9.6/pgbuffercache.html) | 1.2 | examine the shared buffer cache|
-> |[pg_partman](https://github.com/pgpartman/pg_partman) | 2.6.3 | Extension to manage partitioned tables by time or ID|
-> |[pg_prewarm](https://www.postgresql.org/docs/9.6/pgprewarm.html) | 1.1 | prewarm relation data|
-> |[pg_stat_statements](https://www.postgresql.org/docs/9.6/pgstatstatements.html) | 1.4 | track execution statistics of all SQL statements executed|
-> |[pg_trgm](https://www.postgresql.org/docs/9.6/pgtrgm.html) | 1.3 | text similarity measurement and index searching based on trigrams|
-> |[plpgsql](https://www.postgresql.org/docs/9.6/plpgsql.html) | 1.0 | PL/pgSQL procedural language|
-> |[plv8](https://plv8.github.io/) | 2.1.0 | PL/JavaScript (v8) trusted procedural language|
-> |[postgis](https://www.postgis.net/) | 2.3.2 | PostGIS geometry, geography, and raster spatial types and functions|
-> |[postgis_sfcgal](https://www.postgis.net/) | 2.3.2 | PostGIS SFCGAL functions|
-> |[postgis_tiger_geocoder](https://www.postgis.net/) | 2.3.2 | PostGIS tiger geocoder and reverse geocoder|
-> |[postgis_topology](https://postgis.net/docs/Topology.html) | 2.3.2 | PostGIS topology spatial types and functions|
-> |[postgres_fdw](https://www.postgresql.org/docs/9.6/postgres-fdw.html) | 1.0 | foreign-data wrapper for remote PostgreSQL servers|
-> |[tablefunc](https://www.postgresql.org/docs/9.6/tablefunc.html) | 1.0 | functions that manipulate whole tables, including crosstab|
-> |[timescaledb](https://docs.timescale.com/timescaledb/latest/) | 1.7.4 | Enables scalable inserts and complex queries for time-series data|
-> |[unaccent](https://www.postgresql.org/docs/9.6/unaccent.html) | 1.1 | text search dictionary that removes accents|
-> |[uuid-ossp](https://www.postgresql.org/docs/9.6/uuid-ossp.html) | 1.1 | generate universally unique identifiers (UUIDs)|
-
-## Postgres 9.5 extensions
-
->[!NOTE]
-> PostgreSQL version 9.5 has been retired.
-
-The following extensions are available in Azure Database for PostgreSQL servers which have Postgres version 9.5.
-
-> [!div class="mx-tableFixed"]
-> | **Extension**| **Extension version** | **Description** |
-> ||||
-> |[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.3.0 | Used to parse an address into constituent elements. |
-> |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.3.0 | Address Standardizer US dataset example|
-> |[btree_gin](https://www.postgresql.org/docs/9.5/btree-gin.html) | 1.0 | support for indexing common datatypes in GIN|
-> |[btree_gist](https://www.postgresql.org/docs/9.5/btree-gist.html) | 1.1 | support for indexing common datatypes in GiST|
-> |[chkpass](https://www.postgresql.org/docs/9.5/chkpass.html) | 1.0 | data type for auto-encrypted passwords|
-> |[citext](https://www.postgresql.org/docs/9.5/citext.html) | 1.1 | data type for case-insensitive character strings|
-> |[cube](https://www.postgresql.org/docs/9.5/cube.html) | 1.0 | data type for multidimensional cubes|
-> |[dblink](https://www.postgresql.org/docs/9.5/dblink.html) | 1.1 | connect to other PostgreSQL databases from within a database|
-> |[dict_int](https://www.postgresql.org/docs/9.5/dict-int.html) | 1.0 | text search dictionary template for integers|
-> |[earthdistance](https://www.postgresql.org/docs/9.5/earthdistance.html) | 1.0 | calculate great-circle distances on the surface of the Earth|
-> |[fuzzystrmatch](https://www.postgresql.org/docs/9.5/fuzzystrmatch.html) | 1.0 | determine similarities and distance between strings|
-> |[hstore](https://www.postgresql.org/docs/9.5/hstore.html) | 1.3 | data type for storing sets of (key, value) pairs|
-> |[hypopg](https://hypopg.readthedocs.io/en/latest/) | 1.1.1 | Hypothetical indexes for PostgreSQL|
-> |[intarray](https://www.postgresql.org/docs/9.5/intarray.html) | 1.0 | functions, operators, and index support for 1-D arrays of integers|
-> |[isn](https://www.postgresql.org/docs/9.5/isn.html) | 1.0 | data types for international product numbering standards|
-> |[ltree](https://www.postgresql.org/docs/9.5/ltree.html) | 1.0 | data type for hierarchical tree-like structures|
-> |[orafce](https://github.com/orafce/orafce) | 3.7 | Functions and operators that emulate a subset of functions and packages from commercial RDBMS|
-> |[pgaudit](https://www.pgaudit.org/) | 1.0.7 | provides auditing functionality|
-> |[pgcrypto](https://www.postgresql.org/docs/9.5/pgcrypto.html) | 1.2 | cryptographic functions|
-> |[pgrouting](https://pgrouting.org/) | 2.3.0 | pgRouting Extension|
-> |[pgrowlocks](https://www.postgresql.org/docs/9.5/pgrowlocks.html) | 1.1 | show row-level locking information|
-> |[pgstattuple](https://www.postgresql.org/docs/9.5/pgstattuple.html) | 1.3 | show tuple-level statistics|
-> |[pg_buffercache](https://www.postgresql.org/docs/9.5/pgbuffercache.html) | 1.1 | examine the shared buffer cache|
-> |[pg_partman](https://github.com/pgpartman/pg_partman) | 2.6.3 | Extension to manage partitioned tables by time or ID|
-> |[pg_prewarm](https://www.postgresql.org/docs/9.5/pgprewarm.html) | 1.0 | prewarm relation data|
-> |[pg_stat_statements](https://www.postgresql.org/docs/9.5/pgstatstatements.html) | 1.3 | track execution statistics of all SQL statements executed|
-> |[pg_trgm](https://www.postgresql.org/docs/9.5/pgtrgm.html) | 1.1 | text similarity measurement and index searching based on trigrams|
-> |[plpgsql](https://www.postgresql.org/docs/9.5/plpgsql.html) | 1.0 | PL/pgSQL procedural language|
-> |[postgis](https://www.postgis.net/) | 2.3.0 | PostGIS geometry, geography, and raster spatial types and functions|
-> |[postgis_sfcgal](https://www.postgis.net/) | 2.3.0 | PostGIS SFCGAL functions|
-> |[postgis_tiger_geocoder](https://www.postgis.net/) | 2.3.0 | PostGIS tiger geocoder and reverse geocoder|
-> |[postgis_topology](https://postgis.net/docs/Topology.html) | 2.3.0 | PostGIS topology spatial types and functions|
-> |[postgres_fdw](https://www.postgresql.org/docs/9.5/postgres-fdw.html) | 1.0 | foreign-data wrapper for remote PostgreSQL servers|
-> |[tablefunc](https://www.postgresql.org/docs/9.5/tablefunc.html) | 1.0 | functions that manipulate whole tables, including crosstab|
-> |[unaccent](https://www.postgresql.org/docs/9.5/unaccent.html) | 1.0 | text search dictionary that removes accents|
-> |[uuid-ossp](https://www.postgresql.org/docs/9.5/uuid-ossp.html) | 1.0 | generate universally unique identifiers (UUIDs)|
--
-## pg_stat_statements
-The [pg_stat_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) is preloaded on every Azure Database for PostgreSQL server to provide you with a means of tracking execution statistics of SQL statements.
-The setting `pg_stat_statements.track`, which controls what statements are counted by the extension, defaults to `top`, meaning all statements issued directly by clients are tracked. The two other tracking levels are `none` and `all`. This setting is configurable as a server parameter through the [Azure portal](./howto-configure-server-parameters-using-portal.md) or the [Azure CLI](./howto-configure-server-parameters-using-cli.md).
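-
-For example, once statistics have accumulated, you can query the view directly. This is a minimal sketch assuming PostgreSQL 11 or earlier, where the total-time column is named `total_time`:
-
-```sql
-SELECT query, calls, total_time, rows
-FROM pg_stat_statements
-ORDER BY total_time DESC  -- top statements by total execution time
-LIMIT 5;
-```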
-
-There is a tradeoff between the query execution information pg_stat_statements provides and the impact on server performance as it logs each SQL statement. If you are not actively using the pg_stat_statements extension, we recommend that you set `pg_stat_statements.track` to `none`. Note that some third-party monitoring services may rely on pg_stat_statements to deliver query performance insights, so confirm whether this is the case for you.
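-
-For example, a minimal Azure CLI sketch that turns tracking off (the resource group and server names are placeholders):
-
-```azurecli-interactive
-az postgres server configuration set --resource-group mygroup --server-name myserver --name pg_stat_statements.track --value none
-```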
-
-## dblink and postgres_fdw
-[dblink](https://www.postgresql.org/docs/current/contrib-dblink-function.html) and [postgres_fdw](https://www.postgresql.org/docs/current/postgres-fdw.html) allow you to connect from one PostgreSQL server to another, or to another database in the same server. The receiving server needs to allow connections from the sending server through its firewall. When using these extensions to connect between Azure Database for PostgreSQL servers, this can be done by setting "Allow access to Azure services" to ON. This is also needed if you want to use the extensions to loop back to the same server. The "Allow access to Azure services" setting can be found in the Azure portal page for the Postgres server, under Connection Security. Turning "Allow access to Azure services" ON puts all Azure IPs on the allow list.
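-
-As an illustrative sketch (the server name, credentials, and remote table are placeholders), a dblink query against another server might look like this:
-
-```sql
-SELECT *
-FROM dblink('host=remote-server.postgres.database.azure.com port=5432 dbname=postgres user=myadmin@remote-server password=<password> sslmode=require',
-            'SELECT id, item FROM remote_table')
-     AS t(id integer, item text);
-```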
-
-> [!NOTE]
-> Currently, outbound connections from Azure Database for PostgreSQL via foreign data wrapper extensions such as postgres_fdw are not supported, except for connections to other Azure Database for PostgreSQL servers in the same Azure region.
-
-## uuid
-If you are planning to use `uuid_generate_v4()` from the [uuid-ossp extension](https://www.postgresql.org/docs/current/uuid-ossp.html), consider comparing with `gen_random_uuid()` from the [pgcrypto extension](https://www.postgresql.org/docs/current/pgcrypto.html) for performance benefits.
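-
-For example, assuming both extensions are available on your server, you can try the two generators side by side:
-
-```sql
-CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
-CREATE EXTENSION IF NOT EXISTS pgcrypto;
-
-SELECT uuid_generate_v4();  -- uuid-ossp
-SELECT gen_random_uuid();   -- pgcrypto, often faster
-```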
-
-## pgAudit
-The [pgAudit extension](https://github.com/pgaudit/pgaudit/blob/master/README.md) provides session and object audit logging. To learn how to use this extension in Azure Database for PostgreSQL, visit the [auditing concepts article](concepts-audit.md).
-
-## pg_prewarm
-The pg_prewarm extension loads relational data into cache. Prewarming your caches means that your queries have better response times on their first run after a restart. In Postgres 10 and below, prewarming is done manually using the [prewarm function](https://www.postgresql.org/docs/10/pgprewarm.html).
-
-In Postgres 11 and above, you can configure prewarming to happen [automatically](https://www.postgresql.org/docs/current/pgprewarm.html). You need to include pg_prewarm in your `shared_preload_libraries` parameter's list and restart the server to apply the change. Parameters can be set from the [Azure portal](howto-configure-server-parameters-using-portal.md), [CLI](howto-configure-server-parameters-using-cli.md), REST API, or ARM template.
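-
-For example, prewarming a single table manually might look like this (the table name is a placeholder):
-
-```sql
-CREATE EXTENSION IF NOT EXISTS pg_prewarm;
-
-SELECT pg_prewarm('my_table');  -- load the table's heap into the shared buffer cache
-```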
-
-## TimescaleDB
-TimescaleDB is a time-series database that is packaged as an extension for PostgreSQL. TimescaleDB provides time-oriented analytical functions and optimizations, and scales Postgres for time-series workloads.
-
-[Learn more about TimescaleDB](https://docs.timescale.com/timescaledb/latest/), a registered trademark of [Timescale, Inc.](https://www.timescale.com/). Azure Database for PostgreSQL provides the TimescaleDB [Apache-2 edition](https://www.timescale.com/legal/licenses).
-
-### Installing TimescaleDB
-To install TimescaleDB, you need to include it in the server's shared preload libraries. A change to Postgres's `shared_preload_libraries` parameter requires a **server restart** to take effect. You can change parameters using the [Azure portal](howto-configure-server-parameters-using-portal.md) or the [Azure CLI](howto-configure-server-parameters-using-cli.md).
-
-Using the [Azure portal](https://portal.azure.com/):
-
-1. Select your Azure Database for PostgreSQL server.
-
-2. On the sidebar, select **Server Parameters**.
-
-3. Search for the `shared_preload_libraries` parameter.
-
-4. Select **TimescaleDB**.
-
-5. Select **Save** to preserve your changes. You get a notification once the change is saved.
-
-6. After the notification, **restart** the server to apply these changes. To learn how to restart a server, see [Restart an Azure Database for PostgreSQL server](howto-restart-server-portal.md).
--
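-Alternatively, a sketch of the same change with the Azure CLI (the group and server names are placeholders):
-
-```azurecli-interactive
-az postgres server configuration set --resource-group mygroup --server-name myserver --name shared_preload_libraries --value timescaledb
-az postgres server restart --resource-group mygroup --name myserver
-```
-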
-You can now enable TimescaleDB in your Postgres database. Connect to the database and issue the following command:
-```sql
-CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
-```
-> [!TIP]
-> If you see an error, confirm that you [restarted your server](howto-restart-server-portal.md) after saving shared_preload_libraries.
-
-You can now create a TimescaleDB hypertable [from scratch](https://docs.timescale.com/getting-started/creating-hypertables) or migrate [existing time-series data in PostgreSQL](https://docs.timescale.com/getting-started/migrating-data).
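-
-As a minimal sketch (the table and column names are hypothetical), creating a hypertable from scratch looks like this:
-
-```sql
-CREATE TABLE sensor_data (
-   time        TIMESTAMPTZ      NOT NULL,
-   sensor_id   INTEGER          NOT NULL,
-   temperature DOUBLE PRECISION
-);
-
-SELECT create_hypertable('sensor_data', 'time');  -- partition on the time column
-```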
-
-### Restoring a Timescale database using pg_dump and pg_restore
-To restore a Timescale database using pg_dump and pg_restore, you need to run two helper procedures in the destination database: `timescaledb_pre_restore()` and `timescaledb_post_restore()`.
-
-First prepare the destination database:
-
-```SQL
---create the new database where you'll perform the restore
-CREATE DATABASE tutorial;
-\c tutorial --connect to the database
-CREATE EXTENSION timescaledb;
-
-SELECT timescaledb_pre_restore();
-```
-
-Now you can run pg_dump on the original database and then do pg_restore. After the restore, be sure to run the following command in the restored database:
-
-```SQL
-SELECT timescaledb_post_restore();
-```
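-
-For reference, the dump and restore steps themselves might look like the following sketch; the server names, user names, and the `tutorial` database are placeholders:
-
-```bash
-# Dump the source database in custom format, then restore into the prepared target
-pg_dump -h source-server.postgres.database.azure.com -U srcadmin@source-server -Fc -f tutorial.bak tutorial
-pg_restore -h target-server.postgres.database.azure.com -U tgtadmin@target-server -Fc -d tutorial tutorial.bak
-```
-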
-For more details on the restore method for a Timescale-enabled database, see the [Timescale documentation](https://docs.timescale.com/timescaledb/latest/how-to-guides/backup-and-restore/pg-dump-and-restore/#restore-your-entire-database-from-backup).
--
-### Restoring a Timescale database using timescaledb-backup
-
- While running the `SELECT timescaledb_post_restore()` procedure listed above, you may get a permission denied error when updating the timescaledb.restoring flag. This is due to the limited ALTER DATABASE permission in cloud PaaS database services. In this case, you can use the alternative `timescaledb-backup` tool to back up and restore your Timescale database. timescaledb-backup is a program that makes dumping and restoring a TimescaleDB database simpler, less error-prone, and more performant.
- To do so, follow these steps:
- 1. Install the tools as detailed [here](https://github.com/timescale/timescaledb-backup#installing-timescaledb-backup)
- 2. Create the target Azure Database for PostgreSQL server and database
- 3. Enable the Timescale extension as shown above
- 4. Grant the azure_pg_admin [role](https://www.postgresql.org/docs/11/database-roles.html) to the user that will be used by [ts-restore](https://github.com/timescale/timescaledb-backup#using-ts-restore)
- 5. Run [ts-restore](https://github.com/timescale/timescaledb-backup#using-ts-restore) to restore the database
-
- More details on these utilities can be found [here](https://github.com/timescale/timescaledb-backup).
-> [!NOTE]
-> When using `timescaledb-backup` utilities to restore to Azure, note that because database user names for non-flexible Azure Database for PostgreSQL must use the `<user@db-name>` format, you need to replace `@` with the `%40` character encoding.
-
-## Next steps
-If you don't see an extension that you'd like to use, let us know. Vote for existing requests or create new feedback requests in our [feedback forum](https://feedback.azure.com/d365community/forum/c5e32b97-ee24-ec11-b6e6-000d3a4f0da0).
-
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-firewall-rules.md
- Title: Firewall rules - Azure Database for PostgreSQL - Single Server
-description: This article describes how to use firewall rules to connect to Azure Database for PostgreSQL - Single Server.
- Previously updated: 07/17/2020
-# Firewall rules in Azure Database for PostgreSQL - Single Server
-Azure Database for PostgreSQL server is secure by default, preventing all access to your database server until you specify which IP hosts are allowed to access it. The firewall grants access to the server based on the originating IP address of each request.
-To configure your firewall, you create firewall rules that specify ranges of acceptable IP addresses. You can create firewall rules at the server level.
-
-**Firewall rules:** These rules enable clients to access your entire Azure Database for PostgreSQL Server, that is, all the databases within the same logical server. Server-level firewall rules can be configured by using the Azure portal or using Azure CLI commands. To create server-level firewall rules, you must be the subscription owner or a subscription contributor.
-
-## Firewall overview
-All access to your Azure Database for PostgreSQL server is blocked by the firewall by default. To access your server from another computer/client or application, you need to specify one or more server-level firewall rules to enable access to your server. Use the firewall rules to specify allowed public IP address ranges. Access to the Azure portal website itself is not impacted by the firewall rules.
-Connection attempts from the internet and Azure must first pass through the firewall before they can reach your PostgreSQL Database, as shown in the following diagram:
--
-## Connecting from the Internet
-Server-level firewall rules apply to all databases on the same Azure Database for PostgreSQL server.
-If the source IP address of the request is within one of the ranges specified in the server-level firewall rules, the connection is granted; otherwise, it is rejected. For example, if your application connects with the JDBC driver for PostgreSQL, you may encounter this error attempting to connect when the firewall is blocking the connection.
-> java.util.concurrent.ExecutionException: java.lang.RuntimeException:
-> org.postgresql.util.PSQLException: FATAL: no pg\_hba.conf entry for host "123.45.67.890", user "adminuser", database "postgresql", SSL
-
-## Connecting from Azure
-It is recommended that you find the outgoing IP address of any application or service and explicitly allow access to those individual IP addresses or ranges. For example, you can find the outgoing IP address of an Azure App Service or use a public IP tied to a virtual machine or other resource (see below for info on connecting with a virtual machine's private IP over service endpoints).
-
-If a fixed outgoing IP address isn't available for your Azure service, you can consider enabling connections from all Azure datacenter IP addresses. This setting can be enabled from the Azure portal by setting the **Allow access to Azure services** option to **ON** from the **Connection security** pane and hitting **Save**. From the Azure CLI, a firewall rule setting with starting and ending address equal to 0.0.0.0 does the equivalent. If the connection attempt is rejected by firewall rules, it does not reach the Azure Database for PostgreSQL server.
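-
-For example, a sketch of that CLI command (the group, server, and rule names are placeholders):
-
-```azurecli-interactive
-az postgres server firewall-rule create --resource-group mygroup --server-name myserver --name AllowAllAzureIps --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
-```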
-
-> [!IMPORTANT]
-> The **Allow access to Azure services** option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
->
--
-## Connecting from a VNet
-To connect securely to your Azure Database for PostgreSQL server from a VNet, consider using [VNet service endpoints](./concepts-data-access-and-security-vnet.md).
-
-## Programmatically managing firewall rules
-In addition to the Azure portal, firewall rules can be managed programmatically using Azure CLI.
-See also [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](howto-manage-firewall-using-cli.md)
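-
-As a sketch (the names and client IP are placeholders), typical rule management from the CLI looks like this:
-
-```azurecli-interactive
-# Allow a single client IP, list the current rules, then remove a rule that is no longer needed
-az postgres server firewall-rule create --resource-group mygroup --server-name myserver --name AllowMyIp --start-ip-address 203.0.113.10 --end-ip-address 203.0.113.10
-az postgres server firewall-rule list --resource-group mygroup --server-name myserver
-az postgres server firewall-rule delete --resource-group mygroup --server-name myserver --name AllowMyIp
-```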
-
-## Troubleshooting firewall issues
-Consider the following points when access to the Microsoft Azure Database for PostgreSQL Server service does not behave as you expect:
-
-* **Changes to the allow list have not taken effect yet:** There may be as much as a five-minute delay for changes to the Azure Database for PostgreSQL Server firewall configuration to take effect.
-
-* **The login is not authorized or an incorrect password was used:** If a login does not have permissions on the Azure Database for PostgreSQL server or the password used is incorrect, the connection to the Azure Database for PostgreSQL server is denied. Creating a firewall setting only provides clients with an opportunity to attempt connecting to your server; each client must still provide the necessary security credentials.
-
- For example, using a JDBC client, the following error may appear.
- > java.util.concurrent.ExecutionException: java.lang.RuntimeException: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "yourusername"
-
-* **Dynamic IP address:** If you have an Internet connection with dynamic IP addressing and you are having trouble getting through the firewall, you could try one of the following solutions:
-
- * Ask your Internet Service Provider (ISP) for the IP address range assigned to your client computers that access the Azure Database for PostgreSQL Server, and then add the IP address range as a firewall rule.
-
- * Get static IP addressing instead for your client computers, and then add the static IP address as a firewall rule.
-
-* **Server's IP appears to be public:** Connections to the Azure Database for PostgreSQL server are routed through a publicly accessible Azure gateway. However, the actual server IP is protected by the firewall. For more information, visit the [connectivity architecture article](concepts-connectivity-architecture.md).
-
-* **Cannot connect from Azure resource with allowed IP:** Check whether the **Microsoft.Sql** service endpoint is enabled for the subnet you are connecting from. If **Microsoft.Sql** is enabled, it indicates that you only want to use [VNet service endpoint rules](concepts-data-access-and-security-vnet.md) on that subnet.
-
- For example, you may see the following error if you are connecting from an Azure VM in a subnet that has **Microsoft.Sql** enabled but has no corresponding VNet rule:
- `FATAL: Client from Azure Virtual Networks is not allowed to access the server`
-
-* **Firewall rule is not available for IPv6 format:** The firewall rules must be in IPv4 format. If you specify firewall rules in IPv6 format, you will see a validation error.
--
-## Next steps
-* [Create and manage Azure Database for PostgreSQL firewall rules using the Azure portal](howto-manage-firewall-using-portal.md)
-* [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](howto-manage-firewall-using-cli.md)
-* [VNet service endpoints in Azure Database for PostgreSQL](./concepts-data-access-and-security-vnet.md)
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-high-availability.md
- Title: High availability - Azure Database for PostgreSQL - Single Server
-description: This article provides information on high availability in Azure Database for PostgreSQL - Single Server
- Previously updated: 6/15/2020
-# High availability in Azure Database for PostgreSQL – Single Server
-The Azure Database for PostgreSQL – Single Server service provides a guaranteed high level of availability with the financially backed service level agreement (SLA) of [99.99%](https://azure.microsoft.com/support/legal/sla/postgresql) uptime. Azure Database for PostgreSQL provides high availability during planned events, such as a user-initiated compute scale operation, and also when unplanned events such as underlying hardware, software, or network failures occur. Azure Database for PostgreSQL can quickly recover from most critical circumstances, ensuring virtually no application downtime when using this service.
-
-Azure Database for PostgreSQL is suitable for running mission critical databases that require high uptime. Built on Azure architecture, the service has inherent high availability, redundancy, and resiliency capabilities to mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components.
-
-## Components in Azure Database for PostgreSQL – Single Server
-
-| **Component** | **Description**|
-| | -- |
-| <b>PostgreSQL Database Server | Azure Database for PostgreSQL provides security, isolation, resource safeguards, and fast restart capability for database servers. These capabilities allow operations such as scaling and database server recovery after an outage to happen in seconds. <br/> Data modifications in the database server typically occur in the context of a database transaction. All database changes are recorded synchronously in the form of write ahead logs (WAL) on Azure Storage, which is attached to the database server. During the database [checkpoint](https://www.postgresql.org/docs/11/sql-checkpoint.html) process, data pages from the database server memory are also flushed to the storage. |
-| <b>Remote Storage | All PostgreSQL physical data files and WAL files are stored on Azure Storage, which is architected to store three copies of data within a region to ensure data redundancy, availability, and reliability. The storage layer is also independent of the database server. It can be detached from a failed database server and reattached to a new database server within a few seconds. Also, Azure Storage continuously monitors for any storage faults. If a block corruption is detected, it is automatically fixed by instantiating a new storage copy. |
-| <b>Gateway | The Gateway acts as a database proxy and routes all client connections to the database server. |
-
-## Planned downtime mitigation
-Azure Database for PostgreSQL is architected to provide high availability during planned downtime operations.
--
-1. Scale up and down PostgreSQL database servers in seconds
-2. Gateway that acts as a proxy to route client connections to the proper database server
-3. Scaling up of storage can be performed without any downtime. Remote storage enables fast detach/re-attach after the failover.
-Here are some planned maintenance scenarios:
-
-| **Scenario** | **Description**|
-| | -- |
-| <b>Compute scale up/down | When the user performs compute scale up/down operation, a new database server is provisioned using the scaled compute configuration. In the old database server, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, and then it is shut down. The storage is then detached from the old database server and attached to the new database server. When the client application retries the connection, or tries to make a new connection, the Gateway directs the connection request to the new database server.|
-| <b>Scaling Up Storage | Scaling up the storage is an online operation and does not interrupt the database server.|
-| <b>New Software Deployment (Azure) | New feature rollouts or bug fixes automatically happen as part of the service's planned maintenance. For more information, refer to the [documentation](./concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
-| <b>Minor version upgrades | Azure Database for PostgreSQL automatically patches database servers to the minor version determined by Azure. It happens as part of service's planned maintenance. This would incur a short downtime in terms of seconds, and the database server is automatically restarted with the new minor version. For more information, refer to the [documentation](./concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
--
-## Unplanned downtime mitigation
-
-Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in seconds. The remote storage is automatically attached to the new database server. PostgreSQL engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While an unplanned downtime cannot be avoided, Azure Database for PostgreSQL mitigates the downtime by automatically performing recovery operations at both database server and storage layers without requiring human intervention.
---
-1. Azure PostgreSQL servers with fast-scaling capabilities.
-2. Gateway that acts as a proxy to route client connections to the proper database server
-3. Azure storage with three copies for reliability, availability, and redundancy.
-4. Remote storage also enables fast detach/re-attach after the server failover.
-
-### Unplanned downtime: failure scenarios and service recovery
-Here are some failure scenarios and how Azure Database for PostgreSQL automatically recovers:
-
-| **Scenario** | **Automatic recovery** |
-| - | - |
-| <B>Database server failure | If the database server is down because of some underlying hardware fault, active connections are dropped, and any inflight transactions are aborted. A new database server is automatically deployed, and the remote data storage is attached to the new database server. After the database recovery is complete, clients can connect to the new database server through the Gateway. <br /> <br /> The recovery time (RTO) depends on various factors, including the activity at the time of the fault (such as a large transaction) and the amount of recovery to be performed during the database server startup process. <br /> <br /> Applications using the PostgreSQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. When the application retries, the Gateway transparently redirects the connection to the newly created database server. |
-| <B>Storage failure | Applications do not see any impact from storage-related issues such as a disk failure or a physical block corruption. Because the data is stored in three copies, the surviving storage serves the copy of the data. Block corruptions are automatically corrected. If a copy of the data is lost, a new copy is automatically created. |
-| <B>Compute failure | Compute failures are a rare event. In the event of a compute failure, a new compute container is provisioned and the storage with the data files is mapped to it. The PostgreSQL database engine is then brought online on the new container, and the Gateway service ensures a transparent failover without any need for application changes. Note also that the compute layer has built-in Availability Zone resiliency, and a new compute container is spun up in a different Availability Zone in the event of an AZ compute failure. |
-
-Here are some failure scenarios that require user action to recover:
-
-| **Scenario** | **Recovery plan** |
-| - | - |
-| <b> Region failure | Failure of a region is a rare event. However, if you need protection from a region failure, you can configure one or more read replicas in other regions for disaster recovery (DR). (See [this article](./howto-read-replicas-portal.md) about creating and managing read replicas for details, and the CLI sketch after this table.) In the event of a region-level failure, you can manually promote the read replica configured in the other region to be your production database server. |
-| <b> Availability zone failure | Failure of an Availability zone is also a rare event. However, if you need protection from an Availability zone failure, you can configure one or more read replicas, or consider using our [Flexible Server](./flexible-server/concepts-high-availability.md) offering, which provides zone-redundant high availability. |
-| <b> Logical/user errors | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](./concepts-backup.md) (PITR), by restoring and recovering the data until the time just before the error had occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [pg_dump](https://www.postgresql.org/docs/11/app-pgdump.html), and then use [pg_restore](https://www.postgresql.org/docs/11/app-pgrestore.html) to restore those tables into your database. |
---
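-As a sketch (the server names, resource group, and region are placeholders), creating and later promoting a cross-region read replica with the Azure CLI might look like this:
-
-```azurecli-interactive
-# Create a read replica in another region
-az postgres server replica create --name mydrreplica --resource-group mygroup --source-server myserver --location westus
-
-# During a DR event, stop replication to promote the replica to a standalone server
-az postgres server replica stop --name mydrreplica --resource-group mygroup
-```
-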
-## Summary
-
-Azure Database for PostgreSQL provides fast restart capability of database servers, redundant storage, and efficient routing from the Gateway. For additional data protection, you can configure backups to be geo-replicated, and also deploy one or more read replicas in other regions. With inherent high availability capabilities, Azure Database for PostgreSQL protects your databases from the most common outages, and offers an industry-leading, financially backed [99.99% uptime SLA](https://azure.microsoft.com/support/legal/sla/postgresql). All these availability and reliability capabilities make Azure the ideal platform to run your mission-critical applications.
-
-## Next steps
-- Learn about [Azure regions](../availability-zones/az-overview.md)
-- Learn about [handling transient connectivity errors](concepts-connectivity.md)
-- Learn how to [replicate your data with read replicas](howto-read-replicas-portal.md)
postgresql Concepts Infrastructure Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-infrastructure-double-encryption.md
- Title: Infrastructure double encryption - Azure Database for PostgreSQL
-description: Learn about using Infrastructure double encryption to add a second layer of encryption with service-managed keys.
- Previously updated: 6/30/2020
-# Azure Database for PostgreSQL Infrastructure double encryption
-
-Azure Database for PostgreSQL uses storage [encryption of data at-rest](concepts-security.md#at-rest) with Microsoft-managed keys. Data, including backups, is encrypted on disk, and this encryption is always on and can't be disabled. The encryption uses a FIPS 140-2 validated cryptographic module and an AES 256-bit cipher for Azure Storage encryption.
-
-Infrastructure double encryption adds a second layer of encryption using service-managed keys. It uses a FIPS 140-2 validated cryptographic module, but with a different encryption algorithm. This provides an additional layer of protection for your data at rest. The key used in Infrastructure double encryption is also managed by the Azure Database for PostgreSQL service. Infrastructure double encryption is not enabled by default, since the additional layer of encryption can have a performance impact.
-
-> [!NOTE]
-> This feature is only supported for "General Purpose" and "Memory Optimized" pricing tiers in Azure Database for PostgreSQL.
-
-Infrastructure Layer encryption has the benefit of being implemented at the layer closest to the storage device or network wires. Azure Database for PostgreSQL implements the two layers of encryption using service-managed keys. Although still technically in the service layer, it is very close to hardware that stores the data at rest. You can still optionally enable data encryption at rest using [customer managed key](concepts-data-encryption-postgresql.md) for the provisioned PostgreSQL server.
-
-Implementation at the infrastructure layer also supports a diversity of keys. Infrastructure must be aware of different clusters of machines and networks. As such, different keys are used to minimize the blast radius of infrastructure attacks and of a variety of hardware and network failures.
-
-> [!NOTE]
-> Using Infrastructure double encryption will have performance impact on the Azure Database for PostgreSQL server due to the additional encryption process.
-
-## Benefits
-
-Infrastructure double encryption for Azure Database for PostgreSQL provides the following benefits:
-
-1. **Additional diversity of crypto implementation** - The planned move to hardware-based encryption will further diversify the implementations by providing a hardware-based implementation in addition to the software-based implementation.
-2. **Implementation errors** - Two layers of encryption at the infrastructure layer protect against errors in caching or memory management in higher layers that expose plaintext data. Additionally, the two layers also protect against errors in the implementation of the encryption in general.
-
-The combination of these provides strong protection against common threats and weaknesses used to attack cryptography.
-
-## Supported scenarios with infrastructure double encryption
-
-The encryption capabilities that are provided by Azure Database for PostgreSQL can be used together. Below is a summary of the various scenarios that you can use:
-
-| Scenario | Default encryption | Infrastructure double encryption | Data encryption using Customer-managed keys |
-|:|::|:--:|:--:|
-| 1 | *Yes* | *No* | *No* |
-| 2 | *Yes* | *Yes* | *No* |
-| 3 | *Yes* | *No* | *Yes* |
-| 4 | *Yes* | *Yes* | *Yes* |
-| | | | |
-
-> [!Important]
-> - Scenarios 2 and 4 will have a performance impact on the Azure Database for PostgreSQL server due to the additional layer of infrastructure encryption.
-> - Configuring Infrastructure double encryption for Azure Database for PostgreSQL is only allowed during server creation. Once the server is provisioned, you cannot change the storage encryption. However, you can still enable Data encryption using customer-managed keys for a server created with or without infrastructure double encryption.
-
-## Limitations
-
-For Azure Database for PostgreSQL, the support for infrastructure double encryption using service-managed keys has the following limitations:
-
-* Support for this functionality is limited to **General Purpose** and **Memory Optimized** pricing tiers.
-* This feature is only supported in regions and servers that support storage up to 16 TB. For the list of Azure regions supporting storage up to 16 TB, refer to the [storage documentation](concepts-pricing-tiers.md#storage).
-
- > [!NOTE]
- > - All **new** PostgreSQL servers created in the regions listed above also support data encryption with customer-managed keys. In this case, servers created through point-in-time restore (PITR) or read replicas do not qualify as "new".
- > - To validate if your provisioned server supports up to 16 TB, you can go to the pricing tier blade in the portal and see if the storage slider can be moved up to 16 TB. If you can only move the slider up to 4 TB, your server may not support encryption with customer managed keys; however, the data is encrypted using service-managed keys at all times. Please reach out to AskAzureDBforPostgreSQL@service.microsoft.com if you have any questions.
-
-## Next steps
-
-Learn how to [set up Infrastructure double encryption for Azure database for PostgreSQL](howto-double-encryption.md).
postgresql Concepts Known Issues Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-known-issues-limitations.md
- Title: Known issues and limitations for Azure Database for PostgreSQL - Single Server and Flexible Server
-description: Lists the known issues that customers should be aware of.
- Previously updated: 11/30/2021
-# Azure Database for PostgreSQL - Known issues and limitations
-
-This page provides a list of known issues in Azure Database for PostgreSQL that could impact your application. It also lists mitigations and recommendations to work around these issues.
-
-## Intelligent Performance - Query Store
-
-Applicable to Azure Database for PostgreSQL - Single Server.
-
-| Applicable | Cause | Remediation|
-| -- | | - |
-| PostgreSQL 9.6, 10, 11 | Turning on the server parameter `pg_qs.replace_parameter_placeholders` might lead to a server shutdown in some rare scenarios. | In the Azure portal, in the **Server parameters** section, set the `pg_qs.replace_parameter_placeholders` parameter to `OFF` and save. |
--
-## Next steps
-- See Query Store [best practices](./concepts-query-store-best-practices.md)
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-limits.md
- Title: Limits - Azure Database for PostgreSQL - Single Server
-description: This article describes limits in Azure Database for PostgreSQL - Single Server, such as the number of connections and storage engine options.
- Previously updated: 01/28/2020
-# Limits in Azure Database for PostgreSQL - Single Server
-The following sections describe capacity and functional limits in the database service. If you'd like to learn about resource (compute, memory, storage) tiers, see the [pricing tiers](concepts-pricing-tiers.md) article.
--
-## Maximum connections
-The maximum number of connections per pricing tier and vCores is shown below. The Azure system requires five connections to monitor the Azure Database for PostgreSQL server.
-
-|**Pricing Tier**| **vCore(s)**| **Max Connections** | **Max User Connections** |
-|||||
-|Basic| 1| 55 | 50|
-|Basic| 2| 105 | 100|
-|General Purpose| 2| 150| 145|
-|General Purpose| 4| 250| 245|
-|General Purpose| 8| 480| 475|
-|General Purpose| 16| 950| 945|
-|General Purpose| 32| 1500| 1495|
-|General Purpose| 64| 1900| 1895|
-|Memory Optimized| 2| 300| 295|
-|Memory Optimized| 4| 500| 495|
-|Memory Optimized| 8| 960| 955|
-|Memory Optimized| 16| 1900| 1895|
-|Memory Optimized| 32| 1987| 1982|
-
-When connections exceed the limit, you may receive the following error:
-> FATAL: sorry, too many clients already
-
-> [!IMPORTANT]
-> For the best experience, we recommend that you use a connection pooler like pgBouncer to efficiently manage connections.
-
-A PostgreSQL connection, even idle, can occupy about 10 MB of memory. Also, creating new connections takes time. Most applications request many short-lived connections, which compounds this situation. The result is fewer resources available for your actual workload, leading to decreased performance. A connection pooler that decreases idle connections and reuses existing connections will help avoid this. To learn more, visit our [blog post](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717).
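-
-To see how close you are to the limit, you can count the current connections by state:
-
-```sql
-SELECT state, count(*)
-FROM pg_stat_activity
-GROUP BY state;
-```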
-
-## Functional limitations
-### Scale operations
-- Dynamic scaling to and from the Basic pricing tiers is currently not supported.
-- Decreasing server storage size is currently not supported.
-
-### Server version upgrades
-- Automated migration between major database engine versions is currently not supported. If you would like to upgrade to the next major version, take a [dump and restore](./howto-migrate-using-dump-and-restore.md) of your databases to a server that was created with the new engine version.
-
-> Note that prior to PostgreSQL version 10, the [PostgreSQL versioning policy](https://www.postgresql.org/support/versioning/) considered a _major version_ upgrade to be an increase in the first _or_ second number (for example, 9.5 to 9.6 was considered a _major_ version upgrade).
-> As of version 10, only a change in the first number is considered a major version upgrade (for example, 10.0 to 10.1 is a _minor_ version upgrade, and 10 to 11 is a _major_ version upgrade).
-
-### VNet service endpoints
-- Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
-
-### Restoring a server
-- When using the PITR feature, the new server is created with the same pricing tier configurations as the server it is based on (see the CLI sketch after this list).
-- The new server created during a restore does not have the firewall rules that existed on the original server. Firewall rules need to be set up separately for the new server.
-- Restoring a deleted server is not supported.
-
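-A minimal Azure CLI sketch of a point-in-time restore (the names and timestamp are placeholders):
-
-```azurecli-interactive
-az postgres server restore --resource-group mygroup --name mynewserver --source-server myserver --restore-point-in-time "2022-05-01T13:10:00Z"
-```
-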
-### UTF-8 characters on Windows
-- In some scenarios, UTF-8 characters are not supported fully in open source PostgreSQL on Windows, which affects Azure Database for PostgreSQL. Please see the thread on [Bug #15476 in the postgresql-archive](https://www.postgresql.org/message-id/2101.1541220270%40sss.pgh.pa.us) for more information.
-
-### GSS error
-If you see an error related to **GSS**, you are likely using a newer client/driver version which Azure Postgres Single Server does not yet fully support. This error is known to affect [JDBC driver versions 42.2.15 and 42.2.16](https://github.com/pgjdbc/pgjdbc/issues/1868).
- - We plan to complete the update by the end of November. Consider using a working driver version in the meantime.
- - Or, consider disabling the GSS request. Use a connection parameter like `gssEncMode=disable`.
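-
-   For example, with the JDBC driver the parameter can be appended to the connection URL (the server and user names are placeholders):
-
-   ```
-   jdbc:postgresql://myserver.postgres.database.azure.com:5432/postgres?user=myadmin@myserver&sslmode=require&gssEncMode=disable
-   ```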
-
-### Storage size reduction
-Storage size cannot be reduced. You have to create a new server with the desired storage size, perform a manual [dump and restore](./howto-migrate-using-dump-and-restore.md), and migrate your database(s) to the new server.
-
-## Next steps
-- Understand [what's available in each pricing tier](concepts-pricing-tiers.md)
-- Learn about [Supported PostgreSQL Database Versions](concepts-supported-versions.md)
-- Review [how to back up and restore a server in Azure Database for PostgreSQL using the Azure portal](howto-restore-server-portal.md)
postgresql Concepts Logical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-logical.md
- Title: Logical decoding - Azure Database for PostgreSQL - Single Server
-description: Describes logical decoding and wal2json for change data capture in Azure Database for PostgreSQL - Single Server
- Previously updated: 12/09/2020
-# Logical decoding
-
-[Logical decoding in PostgreSQL](https://www.postgresql.org/docs/current/logicaldecoding.html) allows you to stream data changes to external consumers. Logical decoding is popularly used for event streaming and change data capture scenarios.
-
-Logical decoding uses an output plugin to convert Postgres's write ahead log (WAL) into a readable format. Azure Database for PostgreSQL provides the output plugins [wal2json](https://github.com/eulerto/wal2json), [test_decoding](https://www.postgresql.org/docs/current/test-decoding.html), and pgoutput. pgoutput is made available by PostgreSQL from version 10 and up.
-
-For an overview of how Postgres logical decoding works, [visit our blog](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/change-data-capture-in-postgres-how-to-use-logical-decoding-and/ba-p/1396421).
-
-> [!NOTE]
-> Logical replication using PostgreSQL publication/subscription is not supported with Azure Database for PostgreSQL - Single Server.
--
-## Set up your server
-Logical decoding and [read replicas](concepts-read-replicas.md) both depend on the Postgres write ahead log (WAL) for information. These two features need different levels of logging from Postgres. Logical decoding needs a higher level of logging than read replicas.
-
-To configure the right level of logging, use the Azure replication support parameter. Azure replication support has three setting options:
-
-* **Off** - Puts the least information in the WAL. This setting is not available on most Azure Database for PostgreSQL servers.
-* **Replica** - More verbose than **Off**. This is the minimum level of logging needed for [read replicas](concepts-read-replicas.md) to work. This setting is the default on most servers.
-* **Logical** - More verbose than **Replica**. This is the minimum level of logging for logical decoding to work. Read replicas also work at this setting.
--
-### Using Azure CLI
-
-1. Set azure.replication_support to `logical`.
- ```azurecli-interactive
- az postgres server configuration set --resource-group mygroup --server-name myserver --name azure.replication_support --value logical
- ```
-
-2. Restart the server to apply the change.
- ```azurecli-interactive
- az postgres server restart --resource-group mygroup --name myserver
- ```
-3. If you are running Postgres 9.5 or 9.6, and use public network access, add the firewall rule to include the public IP address of the client from where you will run the logical replication. The firewall rule name must include **_replrule**. For example, *test_replrule*. To create a new firewall rule on the server, run the [az postgres server firewall-rule create](/cli/azure/postgres/server/firewall-rule) command.
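   For example, a qualifying rule might be created like this (the names and client IP are placeholders):
- ```azurecli-interactive
- az postgres server firewall-rule create --resource-group mygroup --server-name myserver --name test_replrule --start-ip-address 203.0.113.10 --end-ip-address 203.0.113.10
- ```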
-
-### Using Azure portal
-
-1. Set Azure replication support to **logical**. Select **Save**.
-
- :::image type="content" source="./media/concepts-logical/replication-support.png" alt-text="Azure Database for PostgreSQL - Replication - Azure replication support":::
-
-2. Restart the server to apply the change by selecting **Yes**.
-
- :::image type="content" source="./media/concepts-logical/confirm-restart.png" alt-text="Azure Database for PostgreSQL - Replication - Confirm restart":::
-
-3. If you are running Postgres 9.5 or 9.6, and use public network access, add the firewall rule to include the public IP address of the client from where you will run the logical replication. The firewall rule name must include **_replrule**. For example, *test_replrule*. Then click **Save**.
-
- :::image type="content" source="./media/concepts-logical/client-replrule-firewall.png" alt-text="Azure Database for PostgreSQL - Replication - Add firewall rule":::
-
-## Start logical decoding
-
-Logical decoding can be consumed via streaming protocol or SQL interface. Both methods use [replication slots](https://www.postgresql.org/docs/current/logicaldecoding-explanation.html#LOGICALDECODING-REPLICATION-SLOTS). A slot represents a stream of changes from a single database.
-
-Using a replication slot requires Postgres's replication privileges. At this time, the replication privilege is only available for the server's admin user.
-
-### Streaming protocol
-Consuming changes using the streaming protocol is often preferable. You can create your own consumer / connector, or use a tool like [Debezium](https://debezium.io/).
-
-Visit the wal2json documentation for [an example using the streaming protocol with pg_recvlogical](https://github.com/eulerto/wal2json#pg_recvlogical).
--
-### SQL interface
-In the example below, we use the SQL interface with the wal2json plugin.
-
-1. Create a slot.
- ```SQL
- SELECT * FROM pg_create_logical_replication_slot('test_slot', 'wal2json');
- ```
-
-2. Issue SQL commands. For example:
- ```SQL
- CREATE TABLE a_table (
- id varchar(40) NOT NULL,
- item varchar(40),
- PRIMARY KEY (id)
- );
-
- INSERT INTO a_table (id, item) VALUES ('id1', 'item1');
- DELETE FROM a_table WHERE id='id1';
- ```
-
-3. Consume the changes.
- ```SQL
- SELECT data FROM pg_logical_slot_get_changes('test_slot', NULL, NULL, 'pretty-print', '1');
- ```
-
- The output will look like:
- ```
- {
- "change": [
- ]
- }
- {
- "change": [
- {
- "kind": "insert",
- "schema": "public",
- "table": "a_table",
- "columnnames": ["id", "item"],
- "columntypes": ["character varying(40)", "character varying(40)"],
- "columnvalues": ["id1", "item1"]
- }
- ]
- }
- {
- "change": [
- {
- "kind": "delete",
- "schema": "public",
- "table": "a_table",
- "oldkeys": {
- "keynames": ["id"],
- "keytypes": ["character varying(40)"],
- "keyvalues": ["id1"]
- }
- }
- ]
- }
- ```
-
-4. Drop the slot once you are done using it.
- ```SQL
- SELECT pg_drop_replication_slot('test_slot');
- ```
--
-## Monitoring slots
-
-You must monitor logical decoding. Any unused replication slot must be dropped. Slots hold on to Postgres WAL logs and relevant system catalogs until changes have been read by a consumer. If your consumer fails or has not been properly configured, the unconsumed logs will pile up and fill your storage. Also, unconsumed logs increase the risk of transaction ID wraparound. Both situations can cause the server to become unavailable. Therefore, it is critical that logical replication slots are consumed continuously. If a logical replication slot is no longer used, drop it immediately.
-
-The 'active' column in the pg_replication_slots view will indicate whether there is a consumer connected to a slot.
-```SQL
-SELECT * FROM pg_replication_slots;
-```
-
-[Set alerts](howto-alert-on-metric.md) on *Storage used* and *Max lag across replicas* metrics to notify you when the values increase past normal thresholds.
-
-> [!IMPORTANT]
-> You must drop unused replication slots. Failing to do so can lead to server unavailability.
-
-## How to drop a slot
-If you are not actively consuming a replication slot you should drop it.
-
-To drop a replication slot called `test_slot` using SQL:
-```SQL
-SELECT pg_drop_replication_slot('test_slot');
-```
-
-> [!IMPORTANT]
-> If you stop using logical decoding, change azure.replication_support back to `replica` or `off`. The WAL details retained by `logical` are more verbose, and should be disabled when logical decoding is not in use.
-
-
-## Next steps
-
-* Visit the Postgres documentation to [learn more about logical decoding](https://www.postgresql.org/docs/current/logicaldecoding-explanation.html).
-* Reach out to [our team](mailto:AskAzureDBforPostgreSQL@service.microsoft.com) if you have questions about logical decoding.
-* Learn more about [read replicas](concepts-read-replicas.md).
-
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-monitoring.md
- Title: Monitor and tune - Azure Database for PostgreSQL - Single Server
-description: This article describes monitoring and tuning features in Azure Database for PostgreSQL - Single Server.
- Previously updated: 10/21/2020
-# Monitor and tune Azure Database for PostgreSQL - Single Server
-Monitoring data about your servers helps you troubleshoot and optimize for your workload. Azure Database for PostgreSQL provides various monitoring options to provide insight into the behavior of your server.
-
-## Metrics
-Azure Database for PostgreSQL provides various metrics that give insight into the behavior of the resources supporting the PostgreSQL server. Each metric is emitted at a one-minute frequency, and has up to [93 days of history](../azure-monitor/essentials/data-platform-metrics.md#retention-of-metrics). You can configure alerts on the metrics. For step by step guidance, see [How to set up alerts](howto-alert-on-metric.md). Other tasks include setting up automated actions, performing advanced analytics, and archiving history. For more information, see the [Azure Metrics Overview](../azure-monitor/data-platform.md).
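-
-For example, recent `cpu_percent` values can be retrieved with the Azure CLI (the resource ID is a placeholder):
-
-```azurecli-interactive
-az monitor metrics list --resource "/subscriptions/<subscription-id>/resourceGroups/mygroup/providers/Microsoft.DBforPostgreSQL/servers/myserver" --metric cpu_percent --interval PT1M
-```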
-
-### List of metrics
-These metrics are available for Azure Database for PostgreSQL:
-
-|Metric|Metric Display Name|Unit|Description|
-|||||
-|cpu_percent|CPU percent|Percent|The percentage of CPU in use.|
-|memory_percent|Memory percent|Percent|The percentage of memory in use.|
-|io_consumption_percent|IO percent|Percent|The percentage of IO in use. (Not applicable for Basic tier servers.)|
-|storage_percent|Storage percentage|Percent|The percentage of storage used out of the server's maximum.|
-|storage_used|Storage used|Bytes|The amount of storage in use. The storage used by the service may include the database files, transaction logs, and the server logs.|
-|storage_limit|Storage limit|Bytes|The maximum storage for this server.|
-|serverlog_storage_percent|Server Log storage percent|Percent|The percentage of server log storage used out of the server's maximum server log storage.|
-|serverlog_storage_usage|Server Log storage used|Bytes|The amount of server log storage in use.|
-|serverlog_storage_limit|Server Log storage limit|Bytes|The maximum server log storage for this server.|
-|active_connections|Active Connections|Count|The number of active connections to the server.|
-|connections_failed|Failed Connections|Count|The number of established connections that failed.|
-|network_bytes_egress|Network Out|Bytes|Network Out across active connections.|
-|network_bytes_ingress|Network In|Bytes|Network In across active connections.|
-|backup_storage_used|Backup Storage Used|Bytes|The amount of backup storage used. This metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained in the [concepts article](concepts-backup.md). For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.|
-|pg_replica_log_delay_in_bytes|Max Lag Across Replicas|Bytes|The lag in bytes between the primary and the most-lagging replica. This metric is available on the primary server only.|
-|pg_replica_log_delay_in_seconds|Replica Lag|Seconds|The time since the last replayed transaction. This metric is available for replica servers only.|
-
-## Server logs
-You can enable logging on your server. These resource logs can be sent to [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md), Event Hubs, and a Storage Account. To learn more about logging, visit the [server logs](concepts-server-logs.md) page.
-
-## Query Store
-[Query Store](concepts-query-store.md) keeps track of query performance over time including query runtime statistics and wait events. The feature persists query runtime performance information in a system database named **azure_sys** under the query_store schema. You can control the collection and storage of data via various configuration knobs.
-
-## Query Performance Insight
-[Query Performance Insight](concepts-query-performance-insight.md) works in conjunction with Query Store to provide visualizations accessible from the Azure portal. These charts enable you to identify key queries that impact performance. Query Performance Insight is accessible from the **Support + troubleshooting** section of your Azure Database for PostgreSQL server's portal page.
-
-## Performance Recommendations
-The [Performance Recommendations](concepts-performance-recommendations.md) feature identifies opportunities to improve workload performance. Performance Recommendations provides you with recommendations for creating new indexes that have the potential to improve the performance of your workloads. To produce index recommendations, the feature takes into consideration various database characteristics, including its schema and the workload as reported by Query Store. After implementing any performance recommendation, customers should test performance to evaluate the impact of those changes.
-
-## Planned maintenance notification
-
-[Planned maintenance notifications](./concepts-planned-maintenance-notification.md) allow you to receive alerts for upcoming planned maintenance to your Azure Database for PostgreSQL - Single Server. These notifications are integrated with [Service Health's](../service-health/overview.md) planned maintenance and allow you to view all scheduled maintenance for your subscriptions in one place. It also helps to scale the notification to the right audiences for different resource groups, as you may have different contacts responsible for different resources. You will receive the notification about the upcoming maintenance 72 hours before the event.
-
-Learn more about how to set up notifications in the [planned maintenance notifications](./concepts-planned-maintenance-notification.md) document.
-
-## Next steps
-- See [how to set up alerts](howto-alert-on-metric.md) for guidance on creating an alert on a metric.
-- For more information on how to access and export metrics using the Azure portal, REST API, or CLI, see the [Azure Metrics Overview](../azure-monitor/data-platform.md).
-- Read our blog on [best practices for monitoring your server](https://azure.microsoft.com/blog/best-practices-for-alerting-on-metrics-with-azure-database-for-postgresql-monitoring/).
-- Learn more about [planned maintenance notifications](./concepts-planned-maintenance-notification.md) in Azure Database for PostgreSQL - Single Server.
postgresql Concepts Planned Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-planned-maintenance-notification.md
- Title: Planned maintenance notification - Azure Database for PostgreSQL - Single Server
-description: This article describes the Planned maintenance notification feature in Azure Database for PostgreSQL - Single Server
- Previously updated : 2/17/2022
-# Planned maintenance notification in Azure Database for PostgreSQL - Single Server
-
-Learn how to prepare for planned maintenance events on your Azure Database for PostgreSQL server.
-
-## What is a planned maintenance?
-
-Azure Database for PostgreSQL service performs automated patching of the underlying hardware, OS, and database engine. The patch includes new service features, security, and software updates. For the PostgreSQL engine, minor version upgrades are automatic and included as part of the patching cycle. No user action or configuration settings are required for patching. The patch is tested extensively and rolled out using safe deployment practices.
-
-A planned maintenance is a maintenance window when these service updates are deployed to servers in a given Azure region. During planned maintenance, a notification event is created to inform customers when the service update is deployed in the Azure region hosting their servers. The minimum duration between two planned maintenance events is 30 days. You receive a notification of the next maintenance window 72 hours in advance.
-
-## Planned maintenance - duration and customer impact
-
-A planned maintenance for a given Azure region is typically expected to complete within 15 hours, a window that also includes buffer time to execute a rollback plan if necessary. Azure Database for PostgreSQL servers run in containers, so database server restarts typically take 60-120 seconds to complete, but there is no deterministic way to know when within this 15-hour window your server will be impacted. The entire planned maintenance event, including each server restart, is carefully monitored by the engineering team. The server failover time depends on database recovery, which can cause the database to take longer to come online if you have heavy transactional activity on the server at the time of failover. To avoid longer restart times, it is recommended to avoid any long-running transactions (bulk loads) during planned maintenance events.
-
-In summary, while the planned maintenance event runs for 15 hours, the individual server impact generally lasts 60 seconds depending on the transactional activity on the server. A notification is sent 72 calendar hours before planned maintenance starts and another one while maintenance is in progress for a given region.
-
-## How can I get notified of planned maintenance?
-
-You can utilize the planned maintenance notifications feature to receive alerts for an upcoming planned maintenance event. You will receive the notification about the upcoming maintenance 72 calendar hours before the event and another one while maintenance is in progress for a given region.
-
-### Planned maintenance notification
--
-**Planned maintenance notifications** allow you to receive alerts for upcoming planned maintenance event to your Azure Database for PostgreSQL. These notifications are integrated with [Service Health's](../service-health/overview.md) planned maintenance and allow you to view all scheduled maintenance for your subscriptions in one place. It also helps to scale the notification to the right audiences for different resource groups, as you may have different contacts responsible for different resources. You will receive the notification about the upcoming maintenance 72 calendar hours before the event.
-
-We will make every attempt to provide 72 hours' notice for all **planned maintenance notification** events. However, in cases of critical or security patches, notifications might be sent closer to the event or be omitted.
-
-You can either check the planned maintenance notification on Azure portal or configure alerts to receive notification.
-
-### Check planned maintenance notification from Azure portal
-
-1. In the [Azure portal](https://portal.azure.com), select **Service Health**.
-2. Select the **Planned Maintenance** tab.
-3. Select the **Subscription**, **Region**, and **Service** for which you want to check the planned maintenance notification.
-
-### To receive planned maintenance notification
-
-1. In the [portal](https://portal.azure.com), select **Service Health**.
-2. In the **Alerts** section, select **Health alerts**.
-3. Select **+ Add service health alert**.
-4. Fill out the required fields.
-5. For **Event type**, select **Planned maintenance** or **Select all**.
-6. In **Action groups**, define how you would like to receive the alert (get an email, trigger a logic app, and so on).
-7. Ensure **Enable rule upon creation** is set to **Yes**.
-8. Select **Create alert rule** to complete the alert.
-
-For detailed steps on how to create **service health alerts**, refer to [Create activity log alerts on service notifications](../service-health/alerts-activity-log-service-notifications-portal.md).
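-
-If you prefer scripting, the same alert can be created with the Azure CLI. The following is a minimal sketch; the resource group, action group name, alert name, and email address are examples, and you should substitute your own subscription ID.
-
-```azurecli-interactive
-# Create an action group that emails an on-call contact (names and address are examples)
-az monitor action-group create \
-    --resource-group myresourcegroup \
-    --name maintenance-contacts \
-    --short-name maint \
-    --action email oncall oncall@contoso.com
-
-# Create an activity log alert that fires on Service Health events in the subscription
-# (--action-group accepts the action group's name or its full resource ID)
-az monitor activity-log alert create \
-    --resource-group myresourcegroup \
-    --name planned-maintenance-alert \
-    --scope /subscriptions/<subscription-id> \
-    --condition category=ServiceHealth \
-    --action-group maintenance-contacts
-```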
-
-## Can I cancel or postpone planned maintenance?
-
-Maintenance is needed to keep your server secure, stable, and up-to-date. The planned maintenance event cannot be canceled or postponed. Once the notification is sent to a given Azure region, changes to the patching schedule cannot be made for any individual server in that region. The patch is rolled out for the entire region at once. The Azure Database for PostgreSQL - Single Server service is designed for cloud-native applications that don't require granular control or customization of the service. If you need the ability to schedule maintenance for your servers, we recommend you consider [Flexible servers](./flexible-server/overview.md).
-
-## Are all the Azure regions patched at the same time?
-
-No. Azure regions are patched during region-specific deployment windows, which generally stretch from 5 PM to 8 AM (local time, next day) in a given Azure region. Geo-paired Azure regions are patched on different days. For high availability and business continuity of database servers, leveraging [cross region read replicas](./concepts-read-replicas.md#cross-region-replication) is recommended.
-
-## Retry logic
-
-A transient error, also known as a transient fault, is an error that will resolve itself. [Transient errors](./concepts-connectivity.md#transient-errors) can occur during maintenance. Most of these events are automatically mitigated by the system in less than 60 seconds. Transient errors should be handled using [retry logic](./concepts-connectivity.md#handling-transient-errors).
--
-## Next steps
-
-- For any questions or suggestions you might have about working with Azure Database for PostgreSQL, send an email to the Azure Database for PostgreSQL Team at AskAzureDBforPostgreSQL@service.microsoft.com.
-- See [How to set up alerts](howto-alert-on-metric.md) for guidance on creating an alert on a metric.
-- [Troubleshoot connection issues to Azure Database for PostgreSQL - Single Server](howto-troubleshoot-common-connection-issues.md)
-- [Handle transient errors and connect efficiently to Azure Database for PostgreSQL - Single Server](concepts-connectivity.md)
postgresql Concepts Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-pricing-tiers.md
- Title: Pricing tiers - Azure Database for PostgreSQL - Single Server
-description: This article describes the compute and storage options in Azure Database for PostgreSQL - Single Server.
- Previously updated : 10/14/2020
-# Pricing tiers in Azure Database for PostgreSQL - Single Server
-
-You can create an Azure Database for PostgreSQL server in one of three different pricing tiers: Basic, General Purpose, and Memory Optimized. The pricing tiers are differentiated by the amount of compute in vCores that can be provisioned, memory per vCore, and the storage technology used to store the data. All resources are provisioned at the PostgreSQL server level. A server can have one or many databases.
-
-| Resource / Tier | **Basic** | **General Purpose** | **Memory Optimized** |
-|:|:-|:--|:|
-| Compute generation | Gen 4, Gen 5 | Gen 4, Gen 5 | Gen 5 |
-| vCores | 1, 2 | 2, 4, 8, 16, 32, 64 |2, 4, 8, 16, 32 |
-| Memory per vCore | 2 GB | 5 GB | 10 GB |
-| Storage size | 5 GB to 1 TB | 5 GB to 16 TB | 5 GB to 16 TB |
-| Database backup retention period | 7 to 35 days | 7 to 35 days | 7 to 35 days |
-
-To choose a pricing tier, use the following table as a starting point.
-
-| Pricing tier | Target workloads |
-|:-|:--|
-| Basic | Workloads that require light compute and I/O performance. Examples include servers used for development or testing or small-scale infrequently used applications. |
-| General Purpose | Most business workloads that require balanced compute and memory with scalable I/O throughput. Examples include servers for hosting web and mobile apps and other enterprise applications.|
-| Memory Optimized | High-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency. Examples include servers for processing real-time data and high-performance transactional or analytical apps.|
-
-After you create a server, the number of vCores, hardware generation, and pricing tier (except to and from Basic) can be changed up or down within seconds. You can also independently adjust the amount of storage up and the backup retention period up or down with no application downtime. You can't change the backup storage type after a server is created. For more information, see the [Scale resources](#scale-resources) section.
-
-## Compute generations and vCores
-
-Compute resources are provided as vCores, which represent the logical CPU of the underlying hardware. China East 1, China North 1, US DoD Central, and US DoD East utilize Gen 4 logical CPUs that are based on Intel E5-2673 v3 (Haswell) 2.4-GHz processors. All other regions utilize Gen 5 logical CPUs that are based on Intel E5-2673 v4 (Broadwell) 2.3-GHz processors.
-
-## Storage
-
-The storage you provision is the amount of storage capacity available to your Azure Database for PostgreSQL server. The storage is used for the database files, temporary files, transaction logs, and the PostgreSQL server logs. The total amount of storage you provision also defines the I/O capacity available to your server.
-
-| Storage attributes | **Basic** | **General Purpose** | **Memory Optimized** |
-|:|:-|:--|:|
-| Storage type | Basic Storage | General Purpose Storage | General Purpose Storage |
-| Storage size | 5 GB to 1 TB | 5 GB to 16 TB | 5 GB to 16 TB |
-| Storage increment size | 1 GB | 1 GB | 1 GB |
-| IOPS | Variable |3 IOPS/GB<br/>Min 100 IOPS<br/>Max 20,000 IOPS | 3 IOPS/GB<br/>Min 100 IOPS<br/>Max 20,000 IOPS |
-
-> [!NOTE]
-> Storage up to 16 TB and 20,000 IOPS is supported in the following regions: Australia East, Australia South East, Brazil South, Canada Central, Canada East, Central US, China East 2, China North 2, East Asia, East US, East US 1, East US 2, France Central, India Central, India South, Japan East, Japan West, Korea Central, Korea South, North Central US, North Europe, South Central US, Southeast Asia, Switzerland North, Switzerland West, US Gov East, US Gov SouthCentral, US Gov SouthWest, UK South, UK West, West Europe, West Central US, West US, and West US 2.
->
-> All other regions support up to 4 TB of storage and 6,000 IOPS.
->
-
-You can add additional storage capacity during and after the creation of the server, and allow the system to grow storage automatically based on the storage consumption of your workload.
-
->[!NOTE]
-> Storage can only be scaled up, not down.
-
-The Basic tier does not provide an IOPS guarantee. In the General Purpose and Memory Optimized pricing tiers, the IOPS scale with the provisioned storage size in a 3:1 ratio.
-
-You can monitor your I/O consumption in the Azure portal or by using Azure CLI commands. The relevant metrics to monitor are [storage limit, storage percentage, storage used, and IO percent](concepts-monitoring.md).
-
-### Reaching the storage limit
-
-Servers with less than or equal to 100 GB of provisioned storage are marked read-only if the free storage is less than 512 MB or 5% of the provisioned storage size. Servers with more than 100 GB of provisioned storage are marked read-only when the free storage is less than 5 GB.
-
-For example, if you have provisioned 110 GB of storage, and the actual utilization goes over 105 GB, the server is marked read-only. Alternatively, if you have provisioned 5 GB of storage, the server is marked read-only when the free storage reaches less than 512 MB.
-
-When the server is set to read-only, all existing sessions are disconnected and uncommitted transactions are rolled back. Any subsequent write operations and transaction commits fail. All subsequent read queries will work uninterrupted.
-
-You can either increase the amount of provisioned storage to your server or start a new session in read-write mode and drop data to reclaim free storage. Running `SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE;` sets the current session to read write mode. In order to avoid data corruption, do not perform any write operations when the server is still in read-only status.
-
-We recommend that you turn on storage auto-grow or set up an alert to notify you when your server storage is approaching the threshold, so you can avoid getting into the read-only state. For more information, see the documentation on [how to set up an alert](howto-alert-on-metric.md).
-
-### Storage auto-grow
-
-Storage auto-grow prevents your server from running out of storage and becoming read-only. If storage auto-grow is enabled, the storage automatically grows without impacting the workload. For servers with less than or equal to 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage is below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below the greater of 10 GB or 5% of the provisioned storage size. The maximum storage limits specified above apply.
-
-For example, if you have provisioned 1000 GB of storage, and the actual utilization goes over 950 GB, the server storage size is increased to 1050 GB. Alternatively, if you have provisioned 10 GB of storage, the storage size is increased to 15 GB when less than 1 GB of storage is free.
-
-Remember that storage can only be scaled up, not down.
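-
-As a minimal sketch with the Azure CLI (the server and resource group names are examples), auto-grow can be enabled on an existing server:
-
-```azurecli-interactive
-# Enable storage auto-grow so the server does not hit the read-only storage limit
-az postgres server update --resource-group myresourcegroup --name mydemoserver --auto-grow Enabled
-```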
-
-## Backup storage
-
-Azure Database for PostgreSQL provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any backup storage you use in excess of this amount is charged in GB per month. For example, if you provision a server with 250 GB of storage, you'll have 250 GB of additional storage available for server backups at no charge. Storage for backups in excess of the 250 GB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/postgresql/). To understand the factors that influence backup storage usage and how to monitor and control backup storage cost, refer to the [backup documentation](concepts-backup.md).
-
-## Scale resources
-
-After you create your server, you can independently change the vCores, the hardware generation, the pricing tier (except to and from Basic), the amount of storage, and the backup retention period. You can't change the backup storage type after a server is created. The number of vCores can be scaled up or down. The backup retention period can be scaled up or down from 7 to 35 days. The storage size can only be increased. Scaling of the resources can be done either through the portal or Azure CLI. For an example of scaling by using Azure CLI, see [Monitor and scale an Azure Database for PostgreSQL server by using Azure CLI](scripts/sample-scale-server-up-or-down.md).
-
-> [!NOTE]
-> The storage size can only be increased. You cannot go back to a smaller storage size after the increase.
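-
-As a sketch of scaling with the Azure CLI (the server and resource group names, SKU, and values are examples), compute, storage, and retention changes can be issued with `az postgres server update`:
-
-```azurecli-interactive
-# Scale compute to General Purpose, Gen 5, 8 vCores
-az postgres server update --resource-group myresourcegroup --name mydemoserver --sku-name GP_Gen5_8
-
-# Increase storage to 200 GB (the value is given in megabytes) and lengthen backup retention
-az postgres server update --resource-group myresourcegroup --name mydemoserver --storage-size 204800 --backup-retention 14
-```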
-
-When you change the number of vCores, the hardware generation, or the pricing tier, a copy of the original server is created with the new compute allocation. After the new server is up and running, connections are switched over to the new server. During the moment when the system switches over to the new server, no new connections can be established, and all uncommitted transactions are rolled back. This window varies, but in most cases, is less than a minute.
-
-Scaling storage and changing the backup retention period are true online operations. There is no downtime, and your application isn't affected. As IOPS scale with the size of the provisioned storage, you can increase the IOPS available to your server by scaling up storage.
-
-## Pricing
-
-For the most up-to-date pricing information, see the service [pricing page](https://azure.microsoft.com/pricing/details/PostgreSQL/). To see the cost for the configuration you want, the [Azure portal](https://portal.azure.com/#create/Microsoft.PostgreSQLServer) shows the monthly cost on the **Pricing tier** tab based on the options you select. If you don't have an Azure subscription, you can use the Azure pricing calculator to get an estimated price. On the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) website, select **Add items**, expand the **Databases** category, and choose **Azure Database for PostgreSQL** to customize the options.
-
-## Next steps
-- Learn how to [create a PostgreSQL server in the portal](tutorial-design-database-using-azure-portal.md).
-- Learn about [service limits](concepts-limits.md).
-- Learn how to [scale out with read replicas](howto-read-replicas-portal.md).
postgresql Concepts Query Store Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-query-store-best-practices.md
- Title: Query Store best practices in Azure Database for PostgreSQL - Single Server
-description: This article describes best practices for the Query Store in Azure Database for PostgreSQL - Single Server.
- Previously updated : 5/6/2019
-# Best practices for Query Store
-
-**Applies to:** Azure Database for PostgreSQL - Single Server versions 9.6, 10, 11
-
-This article outlines best practices for using Query Store in Azure Database for PostgreSQL.
-
-## Set the optimal query capture mode
-Let Query Store capture the data that matters to you.
-
-|**pg_qs.query_capture_mode** | **Scenario**|
-|||
-|_All_ |Analyze your workload thoroughly in terms of all queries and their execution frequencies and other statistics. Identify new queries in your workload. Detect if ad hoc queries are used to identify opportunities for user or auto parameterization. _All_ comes with an increased resource consumption cost. |
-|_Top_ |Focus your attention on top queries - those issued by clients. |
-|_None_ |You've already captured a query set and time window that you want to investigate and you want to eliminate the distractions that other queries may introduce. _None_ is suitable for testing and bench-marking environments. _None_ should be used with caution as you might miss the opportunity to track and optimize important new queries. You can't recover data on those past time windows. |
-
-Query Store also includes a store for wait statistics. There is an additional capture mode setting that governs wait statistics: **pgms_wait_sampling.query_capture_mode** can be set to _none_ or _all_.
-
-> [!NOTE]
-> **pg_qs.query_capture_mode** supersedes **pgms_wait_sampling.query_capture_mode**. If pg_qs.query_capture_mode is _none_, the pgms_wait_sampling.query_capture_mode setting has no effect.
--
-## Keep the data you need
-The **pg_qs.retention_period_in_days** parameter specifies the data retention period for Query Store, in days. Older query and statistics data is deleted. By default, Query Store is configured to retain the data for 7 days. Avoid keeping historical data you do not plan to use. Increase the value if you need to keep data longer.
--
-## Set the frequency of wait stats sampling
-The **pgms_wait_sampling.history_period** parameter specifies how often (in milliseconds) wait events are sampled. The shorter the period, the more frequent the sampling. More information is retrieved, but that comes at the cost of greater resource consumption. Increase this period if the server is under load or you don't need the granularity.
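-
-As a sketch, both the retention period and the sampling frequency discussed above can be adjusted with the Azure CLI (the server and resource group names, and the values shown, are examples):
-
-```azurecli-interactive
-# Keep 14 days of Query Store data and sample wait events every 200 ms
-az postgres server configuration set --resource-group myresourcegroup --server mydemoserver --name pg_qs.retention_period_in_days --value 14
-az postgres server configuration set --resource-group myresourcegroup --server mydemoserver --name pgms_wait_sampling.history_period --value 200
-```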
--
-## Get quick insights into Query Store
-You can use [Query Performance Insight](concepts-query-performance-insight.md) in the Azure portal to get quick insights into the data in Query Store. The visualizations surface the longest running queries and longest wait events over time.
-
-## Next steps
-- Learn how to get or set parameters using the [Azure portal](howto-configure-server-parameters-using-portal.md) or the [Azure CLI](howto-configure-server-parameters-using-cli.md).
postgresql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-query-store.md
- Title: Query Store - Azure Database for PostgreSQL - Single Server
-description: This article describes the Query Store feature in Azure Database for PostgreSQL - Single Server.
- Previously updated : 07/01/2020
-# Monitor performance with the Query Store
-
-**Applies to:** Azure Database for PostgreSQL - Single Server versions 9.6 and above
-
-The Query Store feature in Azure Database for PostgreSQL provides a way to track query performance over time. Query Store simplifies performance troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Query Store automatically captures a history of queries and runtime statistics, and it retains them for your review. It separates data by time windows so that you can see database usage patterns. Data for all users, databases, and queries is stored in a database named **azure_sys** in the Azure Database for PostgreSQL instance.
-
-> [!IMPORTANT]
-> Do not modify the **azure_sys** database or its schemas. Doing so will prevent Query Store and related performance features from functioning correctly.
-
-## Enabling Query Store
-Query Store is an opt-in feature, so it isn't active by default on a server. The store is enabled or disabled globally for all the databases on a given server and cannot be turned on or off per database.
-
-### Enable Query Store using the Azure portal
-1. Sign in to the Azure portal and select your Azure Database for PostgreSQL server.
-2. Select **Server Parameters** in the **Settings** section of the menu.
-3. Search for the `pg_qs.query_capture_mode` parameter.
-4. Set the value to `TOP` and **Save**.
-
-To enable wait statistics in your Query Store:
-1. Search for the `pgms_wait_sampling.query_capture_mode` parameter.
-1. Set the value to `ALL` and **Save**.
--
-Alternatively you can set these parameters using the Azure CLI.
-```azurecli-interactive
-az postgres server configuration set --name pg_qs.query_capture_mode --resource-group myresourcegroup --server mydemoserver --value TOP
-az postgres server configuration set --name pgms_wait_sampling.query_capture_mode --resource-group myresourcegroup --server mydemoserver --value ALL
-```
-
-Allow up to 20 minutes for the first batch of data to persist in the azure_sys database.
-
-## Information in Query Store
-Query Store has two stores:
-- A runtime stats store for persisting the query execution statistics information.
-- A wait stats store for persisting wait statistics information.
-
-Common scenarios for using Query Store include:
-- Determining the number of times a query was executed in a given time window
-- Comparing the average execution time of a query across time windows to see large deltas
-- Identifying longest running queries in the past X hours
-- Identifying top N queries that are waiting on resources
-- Understanding wait nature for a particular query
-
-To minimize space usage, the runtime execution statistics in the runtime stats store are aggregated over a fixed, configurable time window. The information in these stores is visible by querying the query store views.
-
-## Access Query Store information
-
-Query Store data is stored in the azure_sys database on your Postgres server.
-
-The following query returns information about queries in Query Store:
-```sql
-SELECT * FROM query_store.qs_view;
-```
-
-Or this query for wait stats:
-```sql
-SELECT * FROM query_store.pgms_wait_sampling_view;
-```
-
-## Finding wait queries
-Wait event types combine different wait events into buckets by similarity. Query Store provides the wait event type, specific wait event name, and the query in question. Being able to correlate this wait information with the query runtime statistics means you can gain a deeper understanding of what contributes to query performance characteristics.
-
-Here are some examples of how you can gain more insights into your workload using the wait statistics in Query Store:
-
-| **Observation** | **Action** |
-|||
-|High Lock waits | Check the query texts for the affected queries and identify the target entities. Look in Query Store for other queries modifying the same entity that are executed frequently and/or have high duration. After identifying these queries, consider changing the application logic to improve concurrency, or use a less restrictive isolation level.|
-| High Buffer IO waits | Find the queries with a high number of physical reads in Query Store. If they match the queries with high IO waits, consider introducing an index on the underlying entity, in order to do seeks instead of scans. This would minimize the IO overhead of the queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations for this server that would optimize the queries.|
-| High Memory waits | Find the top memory consuming queries in Query Store. These queries are probably delaying further progress of the affected queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations that would optimize these queries.|
-
-## Configuration options
-When Query Store is enabled it saves data in 15-minute aggregation windows, up to 500 distinct queries per window.
-
-The following options are available for configuring Query Store parameters.
-
-| **Parameter** | **Description** | **Default** | **Range**|
-|||||
-| pg_qs.query_capture_mode | Sets which statements are tracked. | none | none, top, all |
-| pg_qs.max_query_text_length | Sets the maximum query length that can be saved. Longer queries will be truncated. | 6000 | 100 - 10K |
-| pg_qs.retention_period_in_days | Sets the retention period. | 7 | 1 - 30 |
-| pg_qs.track_utility | Sets whether utility commands are tracked | on | on, off |
-
-The following options apply specifically to wait statistics.
-
-| **Parameter** | **Description** | **Default** | **Range**|
-|||||
-| pgms_wait_sampling.query_capture_mode | Sets which statements are tracked for wait stats. | none | none, all|
-| pgms_wait_sampling.history_period | Sets the frequency, in milliseconds, at which wait events are sampled. | 100 | 1-600000 |
-
-> [!NOTE]
-> **pg_qs.query_capture_mode** supersedes **pgms_wait_sampling.query_capture_mode**. If pg_qs.query_capture_mode is NONE, the pgms_wait_sampling.query_capture_mode setting has no effect.
--
-Use the [Azure portal](howto-configure-server-parameters-using-portal.md) or [Azure CLI](howto-configure-server-parameters-using-cli.md) to get or set a different value for a parameter.
-
-## Views and functions
-View and manage Query Store using the following views and functions. Anyone in the PostgreSQL public role can use these views to see the data in Query Store. These views are only available in the **azure_sys** database.
-
-Queries are normalized by looking at their structure after removing literals and constants. If two queries are identical except for literal values, they will have the same hash.
-
-### query_store.qs_view
-This view returns the query runtime statistics data in Query Store. There is one row for each distinct database ID, user ID, and query ID.
-
-|**Name** |**Type** | **References** | **Description**|
-|||||
-|runtime_stats_entry_id |bigint | | ID from the runtime_stats_entries table|
-|user_id |oid |pg_authid.oid |OID of user who executed the statement|
-|db_id |oid |pg_database.oid |OID of database in which the statement was executed|
-|query_id |bigint || Internal hash code, computed from the statement's parse tree|
-|query_sql_text |Varchar(10000) || Text of a representative statement. Different queries with the same structure are clustered together; this text is the text for the first of the queries in the cluster.|
-|plan_id |bigint | |ID of the plan corresponding to this query, not available yet|
-|start_time |timestamp || Queries are aggregated by time buckets - the time span of a bucket is 15 minutes by default. This is the start time corresponding to the time bucket for this entry.|
-|end_time |timestamp || End time corresponding to the time bucket for this entry.|
-|calls |bigint || Number of times the query executed|
-|total_time |double precision || Total query execution time, in milliseconds|
-|min_time |double precision || Minimum query execution time, in milliseconds|
-|max_time |double precision || Maximum query execution time, in milliseconds|
-|mean_time |double precision || Mean query execution time, in milliseconds|
-|stddev_time| double precision || Standard deviation of the query execution time, in milliseconds |
-|rows |bigint || Total number of rows retrieved or affected by the statement|
-|shared_blks_hit| bigint || Total number of shared block cache hits by the statement|
-|shared_blks_read| bigint || Total number of shared blocks read by the statement|
-|shared_blks_dirtied| bigint || Total number of shared blocks dirtied by the statement |
-|shared_blks_written| bigint || Total number of shared blocks written by the statement|
-|local_blks_hit| bigint || Total number of local block cache hits by the statement|
-|local_blks_read| bigint || Total number of local blocks read by the statement|
-|local_blks_dirtied| bigint || Total number of local blocks dirtied by the statement|
-|local_blks_written| bigint || Total number of local blocks written by the statement|
-|temp_blks_read |bigint || Total number of temp blocks read by the statement|
-|temp_blks_written| bigint || Total number of temp blocks written by the statement|
-|blk_read_time |double precision || Total time the statement spent reading blocks, in milliseconds (if track_io_timing is enabled, otherwise zero)|
-|blk_write_time |double precision || Total time the statement spent writing blocks, in milliseconds (if track_io_timing is enabled, otherwise zero)|
-
-### query_store.query_texts_view
-This view returns query text data in Query Store. There is one row for each distinct query_text. The query text data isn't available via the Intelligent Performance section in the portal, APIs, or the CLI, but it can be found by connecting to **azure_sys** and querying `query_store.query_texts_view`.
-
-| **Name** | **Type** | **Description** |
-|--|--|--|
-| query_text_id | bigint | ID for the query_texts table |
-| query_sql_text | Varchar(10000) | Text of a representative statement. Different queries with the same structure are clustered together; this text is the text for the first of the queries in the cluster. |
-
-### query_store.pgms_wait_sampling_view
-This view returns wait events data in Query Store. There is one row for each distinct database ID, user ID, and query ID.
-
-| **Name** | **Type** | **References** | **Description** |
-|--|--|--|--|
-| user_id | oid | pg_authid.oid | OID of user who executed the statement |
-| db_id | oid | pg_database.oid | OID of database in which the statement was executed |
-| query_id | bigint | | Internal hash code, computed from the statement's parse tree |
-| event_type | text | | The type of event for which the backend is waiting |
-| event | text | | The wait event name if backend is currently waiting |
-| calls | Integer | | Number of the same event captured |
-
-### Functions
-
-`query_store.qs_reset() returns void`
-
-`qs_reset` discards all statistics gathered so far by Query Store. This function can only be executed by the server admin role.
-
-`query_store.staging_data_reset() returns void`
-
-`staging_data_reset` discards all statistics gathered in memory by Query Store (that is, the data in memory that has not been flushed yet to the database). This function can only be executed by the server admin role.
--
-## Azure Monitor
-Azure Database for PostgreSQL is integrated with [Azure Monitor diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md). Diagnostic settings allows you to send your Postgres logs in JSON format to [Azure Monitor Logs](../azure-monitor/logs/log-query-overview.md) for analytics and alerting, Event Hubs for streaming, and Azure Storage for archiving.
-
->[!IMPORTANT]
-> This diagnostic feature is only available in the General Purpose and Memory Optimized pricing tiers.
-
-### Configure diagnostic settings
-You can enable diagnostic settings for your Postgres server using the Azure portal, CLI, REST API, and PowerShell. The log categories to configure are **QueryStoreRuntimeStatistics** and **QueryStoreWaitStatistics**.
-
-To enable resource logs using the Azure portal:
-
-1. In the portal, go to Diagnostic Settings in the navigation menu of your Postgres server.
-2. Select Add Diagnostic Setting.
-3. Name this setting.
-4. Select your preferred endpoint (storage account, event hub, log analytics).
-5. Select the log types **QueryStoreRuntimeStatistics** and **QueryStoreWaitStatistics**.
-6. Save your setting.
-
-To enable this setting using PowerShell, CLI, or REST API, visit the [diagnostic settings article](../azure-monitor/essentials/diagnostic-settings.md).
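-
-As a minimal sketch with the Azure CLI (the setting name, resource group, server, and workspace are examples), the same diagnostic setting can be created as follows:
-
-```azurecli-interactive
-# Send both Query Store log categories to a Log Analytics workspace
-az monitor diagnostic-settings create \
-    --name qs-diagnostics \
-    --resource $(az postgres server show --resource-group myresourcegroup --name mydemoserver --query id -o tsv) \
-    --workspace /subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/myworkspace \
-    --logs '[{"category":"QueryStoreRuntimeStatistics","enabled":true},{"category":"QueryStoreWaitStatistics","enabled":true}]'
-```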
-
-### JSON log format
-The following tables describe the fields for the two log types. Depending on the output endpoint you choose, the fields included and the order in which they appear may vary.
-
-#### QueryStoreRuntimeStatistics
-|**Field** | **Description** |
-|||
-| TimeGenerated [UTC] | Time stamp when the log was recorded in UTC |
-| ResourceId | Postgres server's Azure resource URI |
-| Category | `QueryStoreRuntimeStatistics` |
-| OperationName | `QueryStoreRuntimeStatisticsEvent` |
-| LogicalServerName_s | Postgres server name |
-| runtime_stats_entry_id_s | ID from the runtime_stats_entries table |
-| user_id_s | OID of user who executed the statement |
-| db_id_s | OID of database in which the statement was executed |
-| query_id_s | Internal hash code, computed from the statement's parse tree |
-| end_time_s | End time corresponding to the time bucket for this entry |
-| calls_s | Number of times the query executed |
-| total_time_s | Total query execution time, in milliseconds |
-| min_time_s | Minimum query execution time, in milliseconds |
-| max_time_s | Maximum query execution time, in milliseconds |
-| mean_time_s | Mean query execution time, in milliseconds |
-| ResourceGroup | The resource group |
-| SubscriptionId | Your subscription ID |
-| ResourceProvider | `Microsoft.DBForPostgreSQL` |
-| Resource | Postgres server name |
-| ResourceType | `Servers` |
--
-#### QueryStoreWaitStatistics
-|**Field** | **Description** |
-|||
-| TimeGenerated [UTC] | Time stamp when the log was recorded in UTC |
-| ResourceId | Postgres server's Azure resource URI |
-| Category | `QueryStoreWaitStatistics` |
-| OperationName | `QueryStoreWaitEvent` |
-| user_id_s | OID of user who executed the statement |
-| db_id_s | OID of database in which the statement was executed |
-| query_id_s | Internal hash code of the query |
-| calls_s | Number of the same event captured |
-| event_type_s | The type of event for which the backend is waiting |
-| event_s | The wait event name if the backend is currently waiting |
-| start_time_t | Event start time |
-| end_time_s | Event end time |
-| LogicalServerName_s | Postgres server name |
-| ResourceGroup | The resource group |
-| SubscriptionId | Your subscription ID |
-| ResourceProvider | `Microsoft.DBForPostgreSQL` |
-| Resource | Postgres server name |
-| ResourceType | `Servers` |
-
-## Limitations and known issues
-- If a PostgreSQL server has the parameter default_transaction_read_only on, Query Store cannot capture data.
-- Query Store functionality can be interrupted if it encounters long Unicode queries (>= 6000 bytes).
-- [Read replicas](concepts-read-replicas.md) replicate Query Store data from the primary server. This means that a read replica's Query Store does not provide statistics about queries run on the read replica.
-
-## Next steps
-- Learn more about [scenarios where Query Store can be especially helpful](concepts-query-store-scenarios.md).
-- Learn more about [best practices for using Query Store](concepts-query-store-best-practices.md).
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-read-replicas.md
- Title: Read replicas - Azure Database for PostgreSQL - Single Server
-description: This article describes the read replica feature in Azure Database for PostgreSQL - Single Server.
- Previously updated : 05/29/2021
-# Read replicas in Azure Database for PostgreSQL - Single Server
-
-The read replica feature allows you to replicate data from an Azure Database for PostgreSQL server to a read-only server. Replicas are updated **asynchronously** with the PostgreSQL engine native physical replication technology. You can replicate from the primary server to up to five replicas.
-
-Replicas are new servers that you manage similarly to regular Azure Database for PostgreSQL servers. For each read replica, you're billed for the provisioned compute in vCores and storage in GB/month.
-
-Learn how to [create and manage replicas](howto-read-replicas-portal.md).
-
-## When to use a read replica
-The read replica feature helps to improve the performance and scale of read-intensive workloads. Read workloads can be isolated to the replicas, while write workloads can be directed to the primary. Read replicas can also be deployed in a different region and can be promoted to a read/write server in the event of a disaster recovery.
-
-A common scenario is to have BI and analytical workloads use the read replica as the data source for reporting.
-
-Because replicas are read-only, they don't directly reduce write-capacity burdens on the primary.
-
-### Considerations
-The feature is meant for scenarios where replication lag is acceptable and for offloading queries. It isn't meant for synchronous replication scenarios where the replica data is expected to be up-to-date. There will be a measurable delay between the primary and the replica. This delay can be minutes or even hours, depending on the workload and the latency between the primary and the replica. The data on the replica eventually becomes consistent with the data on the primary. Use this feature for workloads that can accommodate this delay.
-
-> [!NOTE]
-> For most workloads, read replicas offer near-real-time updates from the primary. However, with persistent, heavy write-intensive workloads on the primary, the replication lag can continue to grow and may never catch up with the primary. This may also increase storage usage at the primary, because the WAL files are not deleted until they are received at the replica. If this situation persists, deleting and re-creating the read replica after the write-intensive workload completes is the option to bring the replica back to a good state with respect to lag.
-> Asynchronous read replicas are not suitable for such heavy write workloads. When evaluating read replicas for your application, monitor the lag on the replica for a full app workload cycle through its peak and non-peak times to assess the possible lag and the expected RTO/RPO at various points of the workload cycle.
-
-> [!NOTE]
-> Automatic backups are performed for replica servers that are configured with up to 4 TB of storage.
-
-## Cross-region replication
-You can create a read replica in a different region from your primary server. Cross-region replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users.
-
->[!NOTE]
-> Basic tier servers only support same-region replication.
-
-You can have a primary server in any [Azure Database for PostgreSQL region](https://azure.microsoft.com/global-infrastructure/services/?products=postgresql). A primary server can have a replica in its paired region or the universal replica regions. The picture below shows which replica regions are available depending on your primary region.
-
-[ :::image type="content" source="media/concepts-read-replica/read-replica-regions.png" alt-text="Read replica regions":::](media/concepts-read-replica/read-replica-regions.png#lightbox)
-
-### Universal replica regions
-You can always create a read replica in any of the following regions, regardless of where your primary server is located. These are the universal replica regions:
-
-Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, East Asia, East US, East US 2, Japan East, Japan West, Korea Central, Korea South, North Central US, North Europe, South Central US, Southeast Asia, UK South, UK West, West Europe, West US, West US 2, West Central US.
-
-### Paired regions
-In addition to the universal replica regions, you can create a read replica in the Azure paired region of your primary server. If you don't know your region's pair, you can learn more from the [Azure Paired Regions article](../availability-zones/cross-region-replication-azure.md).
-
-If you are using cross-region replicas for disaster recovery planning, we recommend you create the replica in the paired region instead of one of the other regions. Paired regions avoid simultaneous updates and prioritize physical isolation and data residency.
-
-There are limitations to consider:
-
-* Uni-directional pairs: Some Azure regions are paired in one direction only. These regions include West India and Brazil South.
-  For example, a primary server in West India can create a replica in South India. However, a primary server in South India cannot create a replica in West India. This is because West India's secondary region is South India, but South India's secondary region is not West India.
--
-## Create a replica
-When you start the create replica workflow, a blank Azure Database for PostgreSQL server is created. The new server is filled with the data that was on the primary server. The creation time depends on the amount of data on the primary and the time since the last weekly full backup. The time can range from a few minutes to several hours.
-
-Every replica is enabled for storage [auto-grow](concepts-pricing-tiers.md#storage-auto-grow). The auto-grow feature allows the replica to keep up with the data replicated to it, and prevent a break in replication caused by out of storage errors.
-
-The read replica feature uses PostgreSQL physical replication, not logical replication. Streaming replication by using replication slots is the default operation mode. When necessary, log shipping is used to catch up.
-
-Learn how to [create a read replica in the Azure portal](howto-read-replicas-portal.md).
-
-If your source PostgreSQL server is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption-postgresql.md) for additional considerations.
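-
-You can also create replicas with the Azure CLI. The following is a sketch; the server names, resource group, and location are examples:
-
-```azurecli-interactive
-# Create an in-region read replica of an existing server
-az postgres server replica create --name myreplica --resource-group myresourcegroup --source-server mydemoserver
-
-# Create a cross-region replica by specifying a target location
-az postgres server replica create --name myreplica-westus2 --resource-group myresourcegroup --source-server mydemoserver --location westus2
-```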
-
-## Connect to a replica
-When you create a replica, it doesn't inherit the firewall rules or VNet service endpoint of the primary server. These rules must be set up independently for the replica.
-
-The replica inherits the admin account from the primary server. All user accounts on the primary server are replicated to the read replicas. You can only connect to a read replica by using the user accounts that are available on the primary server.
-
-You can connect to the replica by using its hostname and a valid user account, as you would on a regular Azure Database for PostgreSQL server. For a server named **myreplica** with the admin username **myadmin**, you can connect to the replica by using psql:
-
-```bash
-psql -h myreplica.postgres.database.azure.com -U myadmin@myreplica -d postgres
-```
-
-At the prompt, enter the password for the user account.
-
-## Monitor replication
-Azure Database for PostgreSQL provides two metrics for monitoring replication. The two metrics are **Max Lag Across Replicas** and **Replica Lag**. To learn how to view these metrics, see the **Monitor a replica** section of the [read replica how-to article](howto-read-replicas-portal.md).
-
-The **Max Lag Across Replicas** metric shows the lag in bytes between the primary and the most-lagging replica. This metric is applicable and available on the primary server only, and it is available only if at least one of the read replicas is connected to the primary and the primary is in streaming replication mode. The lag information does not show details when the replica is catching up with the primary by using the primary's archived logs in file-shipping replication mode.
-
-The **Replica Lag** metric shows the time since the last replayed transaction. If there are no transactions occurring on your primary server, the metric reflects this time lag. This metric is applicable and available for replica servers only. Replica Lag is calculated from the `pg_stat_wal_receiver` view:
-
-```SQL
-SELECT EXTRACT (EPOCH FROM now() - pg_last_xact_replay_timestamp());
-```
-
-Set an alert to inform you when the replica lag reaches a value that isn't acceptable for your workload.
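-
-As a sketch, you can also read the **Replica Lag** metric (surfaced as `pg_replica_log_delay_in_seconds`, as listed in the monitoring metrics table) from the Azure CLI; the server and resource group names are examples:
-
-```azurecli-interactive
-# Retrieve the Replica Lag metric for a replica server
-az monitor metrics list \
-    --resource $(az postgres server show --resource-group myresourcegroup --name myreplica --query id -o tsv) \
-    --metric pg_replica_log_delay_in_seconds \
-    --aggregation Maximum
-```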
-
-For additional insight, query the primary server directly to get the replication lag in bytes on all replicas.
-
-> [!NOTE]
-> If a primary server or read replica restarts, the time it takes to restart and catch up is reflected in the Replica Lag metric.
-
-## Stop replication / Promote replica
-You can stop the replication between a primary and a replica at any time. The stop action causes the replica to restart and promotes the replica to be an independent, standalone read-writeable server. The data in the standalone server is the data that was available on the replica server at the time replication was stopped. Any subsequent updates at the primary are not propagated to the replica. However, the replica server may have accumulated logs that have not been applied yet. As part of the restart process, the replica applies all the pending logs before accepting client connections.
-
->[!NOTE]
-> Resetting the admin password on a replica server is currently not supported. Additionally, updating the admin password along with the promote replica operation in the same request is also not supported. If you wish to do this, you must first promote the replica server and then update the password on the newly promoted server separately.
-
-### Considerations
-- Before you stop replication on a read replica, check the replication lag to ensure the replica has all the data that you require.
-- Because the read replica has to apply all pending logs before it can be made a standalone server, RTO can be higher for write-heavy workloads when you stop replication, as there could be a significant delay on the replica. Pay attention to this when planning to promote a replica.
-- The promoted replica server cannot be made into a replica again.
-- If you promote a replica to be the primary server, you cannot establish replication back to the old primary server. If you want to go back to the old primary region, you can either establish a new replica server with a new name or delete the old primary and create a replica using the old primary name.
-- If you have multiple read replicas, and you promote one of them to be your primary server, the other replica servers are still connected to the old primary. You may have to recreate replicas from the new, promoted server.
-
-When you stop replication, the replica loses all links to its previous primary and other replicas.
-
-Learn how to [stop replication to a replica](howto-read-replicas-portal.md).
-
-## Failover to replica
-
-In the event of a primary server failure, it is **not** automatically failed over to the read replica.
-
-Since replication is asynchronous, there could be a considerable lag between the primary and the replica. The amount of lag is influenced by a number of factors, such as the type of workload running on the primary server and the latency between the primary and the replica server. In typical cases with a nominal write workload, replica lag is expected to be between a few seconds and a few minutes. However, in cases where the primary runs a very heavy write-intensive workload and the replica is not catching up fast enough, the lag can be much higher. You can track the replication lag for each replica by using the metric *Replica Lag*. This metric shows the time since the last replayed transaction at the replica. We recommend that you identify the average lag by observing the replica lag over a period of time. You can set an alert on replica lag, so that if it goes outside your expected range, you will be notified to take action.
-
-> [!Tip]
-> If you failover to the replica, the lag at the time you delink the replica from the primary will indicate how much data is lost.
-
-Once you have decided you want to failover to a replica,
-
-1. Stop replication to the replica<br/>
- This step is necessary to make the replica server a standalone server that can accept writes. As part of this process, the replica server restarts and is delinked from the primary. Once you initiate stop replication, the backend process typically takes a few minutes to apply any residual logs that were not yet applied and to open the database as a read-writeable server. See the [stop replication](#stop-replication--promote-replica) section of this article to understand the implications of this action.
-
-2. Point your application to the (former) replica<br/>
- Each server has a unique connection string. Update your application connection string to point to the (former) replica instead of the primary.
-
-Once your application is successfully processing reads and writes, you have completed the failover. The amount of downtime your application experiences depends on when you detect an issue and complete steps 1 and 2 above.
-
-### Disaster recovery
-
-When there is a major disaster event, such as an availability zone-level or regional failure, you can perform a disaster recovery operation by promoting your read replica. In the Azure portal, navigate to the read replica server, select the **Replication** tab, and stop the replica to promote it to an independent server. Alternatively, you can use the [Azure CLI](/cli/azure/postgres/server/replica#az-postgres-server-replica-stop) to stop and promote the replica server.
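-
-As a minimal sketch with the Azure CLI (the server and resource group names are examples), stopping replication promotes the replica:
-
-```azurecli-interactive
-# Stop replication and promote the replica to a standalone read-write server
-az postgres server replica stop --name myreplica --resource-group myresourcegroup
-```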
-
-## Considerations
-
-This section summarizes considerations about the read replica feature.
-
-### Prerequisites
-Read replicas and [logical decoding](concepts-logical.md) both depend on the Postgres write ahead log (WAL) for information. These two features need different levels of logging from Postgres. Logical decoding needs a higher level of logging than read replicas.
-
-To configure the right level of logging, use the Azure replication support parameter. Azure replication support has three setting options:
-
-* **Off** - Puts the least information in the WAL. This setting is not available on most Azure Database for PostgreSQL servers.
-* **Replica** - More verbose than **Off**. This is the minimum level of logging needed for [read replicas](concepts-read-replicas.md) to work. This setting is the default on most servers.
-* **Logical** - More verbose than **Replica**. This is the minimum level of logging for logical decoding to work. Read replicas also work at this setting.
--
-### New replicas
-A read replica is created as a new Azure Database for PostgreSQL server. An existing server can't be made into a replica. You can't create a replica of another read replica.
-
-### Replica configuration
-A replica is created by using the same compute and storage settings as the primary. After a replica is created, several settings can be changed including storage and backup retention period.
-
-Firewall rules, virtual network rules, and parameter settings are not inherited from the primary server to the replica when the replica is created or afterwards.
-
-### Scaling
-Scaling vCores or between General Purpose and Memory Optimized:
-* PostgreSQL requires the `max_connections` setting on a secondary server to be [greater than or equal to the setting on the primary](https://www.postgresql.org/docs/current/hot-standby.html), otherwise the secondary will not start.
-* In Azure Database for PostgreSQL, the maximum allowed connections for each server is fixed to the compute sku since connections occupy memory. You can learn more about the [mapping between max_connections and compute skus](concepts-limits.md).
-* **Scaling up**: First scale up a replica's compute, then scale up the primary. This order will prevent errors from violating the `max_connections` requirement.
-* **Scaling down**: First scale down the primary's compute, then scale down the replica. If you try to scale the replica lower than the primary, there will be an error since this violates the `max_connections` requirement.
-
-Scaling storage:
-* All replicas have storage auto-grow enabled to prevent replication issues from a storage-full replica. This setting cannot be disabled.
-* You can also manually scale up storage, as you would do on any other server.
--
-### Basic tier
-Basic tier servers only support same-region replication.
-
-### max_prepared_transactions
-[PostgreSQL requires](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAX-PREPARED-TRANSACTIONS) the value of the `max_prepared_transactions` parameter on the read replica to be greater than or equal to the primary value; otherwise, the replica won't start. If you want to change `max_prepared_transactions` on the primary, first change it on the replicas.
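-
-A sketch of that ordering with the Azure CLI (the server names, resource group, and value are examples, and assume the parameter is user-configurable on your server):
-
-```azurecli-interactive
-# Change max_prepared_transactions on the replica first, then on the primary
-az postgres server configuration set --resource-group myresourcegroup --server myreplica --name max_prepared_transactions --value 100
-az postgres server configuration set --resource-group myresourcegroup --server mydemoserver --name max_prepared_transactions --value 100
-```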
-
-### Stopped replicas
-If you stop replication between a primary server and a read replica, the replica restarts to apply the change. The stopped replica becomes a standalone server that accepts both reads and writes. The standalone server can't be made into a replica again.
-
-### Deleted primary and standalone servers
-When a primary server is deleted, all of its read replicas become standalone servers. The replicas are restarted to reflect this change.
-
-## Next steps
-* Learn how to [create and manage read replicas in the Azure portal](howto-read-replicas-portal.md).
-* Learn how to [create and manage read replicas in the Azure CLI and REST API](howto-read-replicas-cli.md).
postgresql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-security.md
- Title: Security in Azure Database for PostgreSQL - Single Server
-description: An overview of the security features in Azure Database for PostgreSQL - Single Server.
- Previously updated : 11/22/2019
-# Security in Azure Database for PostgreSQL - Single Server
-
-There are multiple layers of security that are available to protect the data on your Azure Database for PostgreSQL server. This article outlines those security options.
-
-## Information protection and encryption
-
-### In-transit
-Azure Database for PostgreSQL secures your data by encrypting data in-transit with Transport Layer Security. Encryption (SSL/TLS) is enforced by default.
-
-### At-rest
-The Azure Database for PostgreSQL service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, are encrypted on disk, including the temporary files created while running queries. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. Storage encryption is always on and can't be disabled.
--
-## Network security
-Connections to an Azure Database for PostgreSQL server are first routed through a regional gateway. The gateway has a publicly accessible IP, while the server IP addresses are protected. For more information about the gateway, visit the [connectivity architecture article](concepts-connectivity-architecture.md).
-
-A newly created Azure Database for PostgreSQL server has a firewall that blocks all external connections. Though they reach the gateway, they are not allowed to connect to the server.
-
-### IP firewall rules
-IP firewall rules grant access to servers based on the originating IP address of each request. See the [firewall rules overview](concepts-firewall-rules.md) for more information.
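For example, a rule that admits a single client IP address can be created with the Azure CLI (all names and the IP address below are placeholders):

```azurecli
# Allow one client IP address through the server-level firewall.
az postgres server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver \
    --name AllowMyClientIP --start-ip-address 203.0.113.5 --end-ip-address 203.0.113.5
```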
-
-### Virtual network firewall rules
-Virtual network service endpoints extend your virtual network connectivity over the Azure backbone. Using virtual network rules you can enable your Azure Database for PostgreSQL server to allow connections from selected subnets in a virtual network. For more information, see the [virtual network service endpoint overview](concepts-data-access-and-security-vnet.md).
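As a sketch, a virtual network rule can be added with the Azure CLI (names are placeholders; the subnet is assumed to already have the Microsoft.Sql service endpoint enabled):

```azurecli
# Allow connections from a subnet that has the Microsoft.Sql service endpoint enabled.
az postgres server vnet-rule create --resource-group myresourcegroup --server-name mydemoserver \
    --name myvnetrule --vnet-name myvnet --subnet mysubnet
```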
-
-### Private IP
-Private Link allows you to connect to your Azure Database for PostgreSQL Single Server in Azure via a private endpoint. Azure Private Link essentially brings Azure services inside your private Virtual Network (VNet). The PaaS resources can be accessed using the private IP address just like any other resource in the VNet. For more information, see the [private link overview](concepts-data-access-and-security-private-link.md).
--
-## Access management
-
-While creating the Azure Database for PostgreSQL server, you provide credentials for an administrator role. This administrator role can be used to create additional [PostgreSQL roles](https://www.postgresql.org/docs/current/user-manag.html).
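For instance, a minimal sketch using psql (the server, admin login, role name, and password are placeholders):

```shell
# Connect with the server admin login and create an additional, least-privileged role.
psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin@mydemoserver sslmode=require" \
     -c "CREATE ROLE app_readonly WITH LOGIN PASSWORD '<secure-password>';"
```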
-
-You can also connect to the server using [Azure Active Directory (AAD) authentication](concepts-aad-authentication.md).
--
-## Threat protection
-
-You can opt in to [Advanced Threat Protection](../security-center/defender-for-databases-introduction.md) which detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit servers.
-
-[Audit logging](concepts-audit.md) is available to track activity in your databases.
-
-## Migrating from Oracle
-
-Oracle supports Transparent Data Encryption (TDE) to encrypt table and tablespace data. In Azure Database for PostgreSQL, the data is automatically encrypted at various layers. See the "At-rest" section in this page and also refer to various Security topics, including [customer managed keys](./concepts-data-encryption-postgresql.md) and [Infrastructure double encryption](./concepts-infrastructure-double-encryption.md). You may also consider using the [pgcrypto](https://www.postgresql.org/docs/11/pgcrypto.html) extension, which is supported in [Azure Database for PostgreSQL](./concepts-extensions.md).
-
-## Next steps
- Enable firewall rules for [IPs](concepts-firewall-rules.md) or [virtual networks](concepts-data-access-and-security-vnet.md)
- Learn about [Azure Active Directory authentication](concepts-aad-authentication.md) in Azure Database for PostgreSQL
postgresql Concepts Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-server-logs.md
- Title: Logs - Azure Database for PostgreSQL - Single Server
-description: Describes logging configuration, storage and analysis in Azure Database for PostgreSQL - Single Server
- Previously updated : 06/25/2020
-# Logs in Azure Database for PostgreSQL - Single Server
-
-Azure Database for PostgreSQL allows you to configure and access Postgres's standard logs. The logs can be used to identify, troubleshoot, and repair configuration errors and suboptimal performance. Logging information you can configure and access includes errors, query information, autovacuum records, connections, and checkpoints. (Access to transaction logs is not available).
-
-Audit logging is made available through a PostgreSQL extension, pgaudit. To learn more, visit the [auditing concepts](concepts-audit.md) article.
--
-## Configure logging
-You can configure Postgres standard logging on your server using the logging server parameters. On each Azure Database for PostgreSQL server, `log_checkpoints` and `log_connections` are on by default. There are additional parameters you can adjust to suit your logging needs.
--
-To learn more about Postgres log parameters, visit the [When To Log](https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN) and [What To Log](https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT) sections of the Postgres documentation. Most, but not all, Postgres logging parameters are available to configure in Azure Database for PostgreSQL.
-
-To learn how to configure parameters in Azure Database for PostgreSQL, see the [portal documentation](howto-configure-server-parameters-using-portal.md) or the [CLI documentation](howto-configure-server-parameters-using-cli.md).
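For example, a common adjustment is slow-statement logging; the following Azure CLI sketch (names are placeholders) logs any statement that runs longer than one second:

```azurecli
# Log statements that run for more than 1000 milliseconds.
az postgres server configuration set --resource-group myresourcegroup --server-name mydemoserver \
    --name log_min_duration_statement --value 1000
```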
-
-> [!NOTE]
-> Configuring a high volume of logs, for example statement logging, can add significant performance overhead.
-
-## Access .log files
-The default log format in Azure Database for PostgreSQL is .log. A sample line from this log looks like:
-
-```
-2019-10-14 17:00:03 UTC-5d773cc3.3c-LOG: connection received: host=101.0.0.6 port=34331 pid=16216
-```
-
-Azure Database for PostgreSQL provides a short-term storage location for the .log files. A new file begins every 1 hour or 100 MB, whichever comes first. Logs are appended to the current file as they are emitted from Postgres.
-
-You can set the retention period for this short-term log storage using the `log_retention_period` parameter. The default value is 3 days; the maximum value is 7 days. The short-term storage location can hold up to 1 GB of log files. After 1 GB, the oldest files, regardless of retention period, will be deleted to make room for new logs.
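As a sketch (names are placeholders), the retention period can be raised to the 7-day maximum with the Azure CLI:

```azurecli
# Keep .log files for the maximum of 7 days instead of the default 3.
az postgres server configuration set --resource-group myresourcegroup --server-name mydemoserver \
    --name log_retention_period --value 7
```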
-
-For longer-term retention of logs and log analysis, you can download the .log files and move them to a third-party service. You can download the files using the [Azure portal](howto-configure-server-logs-in-portal.md) or the [Azure CLI](howto-configure-server-logs-using-cli.md). Alternatively, you can configure Azure Monitor diagnostic settings, which automatically emit your logs (in JSON format) to longer-term locations. Learn more about this option in the section below.
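A minimal CLI sketch for retrieving the files (the log file name shown is illustrative; take it from the list output):

```azurecli
# List the available .log files, then download one by name.
az postgres server-logs list --resource-group myresourcegroup --server-name mydemoserver
az postgres server-logs download --resource-group myresourcegroup --server-name mydemoserver \
    --name postgresql-2019-10-14_170000.log
```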
-
-You can stop generating .log files by setting the parameter `logging_collector` to OFF. Turning off .log file generation is recommended if you are using Azure Monitor diagnostic settings. This configuration will reduce the performance impact of additional logging.
-> [!NOTE]
-> Restart the server to apply this change.
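A sketch of both steps with the Azure CLI (names are placeholders):

```azurecli
# Turn off .log file generation, then restart the server so the change takes effect.
az postgres server configuration set --resource-group myresourcegroup --server-name mydemoserver \
    --name logging_collector --value OFF
az postgres server restart --resource-group myresourcegroup --name mydemoserver
```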
-
-## Resource logs
-
-Azure Database for PostgreSQL is integrated with Azure Monitor diagnostic settings. Diagnostic settings allow you to send your Postgres logs in JSON format to Azure Monitor Logs for analytics and alerting, Event Hubs for streaming, and Azure Storage for archiving.
-
-> [!IMPORTANT]
-> This diagnostic feature for server logs is only available in the General Purpose and Memory Optimized [pricing tiers](concepts-pricing-tiers.md).
--
-### Configure diagnostic settings
-
-You can enable diagnostic settings for your Postgres server using the Azure portal, CLI, REST API, and PowerShell. The log category to select is **PostgreSQLLogs**. (There are other logs you can configure if you are using [Query Store](concepts-query-store.md).)
-
-To enable resource logs using the Azure portal:
-
- 1. In the portal, go to *Diagnostic Settings* in the navigation menu of your Postgres server.
- 2. Select *Add Diagnostic Setting*.
- 3. Name this setting.
- 4. Select your preferred endpoint (storage account, event hub, log analytics).
- 5. Select the log type **PostgreSQLLogs**.
 6. Save your setting.
-
-To enable resource logs using PowerShell, CLI, or REST API, visit the [diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md) article.
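As an illustrative sketch for the CLI path (the subscription, resource group, server, and workspace IDs are placeholders):

```azurecli
# Send PostgreSQLLogs to a Log Analytics workspace.
az monitor diagnostic-settings create \
    --name mypglogs \
    --resource "/subscriptions/<sub-id>/resourceGroups/myresourcegroup/providers/Microsoft.DBforPostgreSQL/servers/mydemoserver" \
    --workspace "/subscriptions/<sub-id>/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/myworkspace" \
    --logs '[{"category":"PostgreSQLLogs","enabled":true}]'
```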
-
-### Access resource logs
-
-The way you access the logs depends on which endpoint you choose. For Azure Storage, see the [logs storage account](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) article. For Event Hubs, see the [stream Azure logs](../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs) article.
-
-For Azure Monitor Logs, logs are sent to the workspace you selected. The Postgres logs use the **AzureDiagnostics** collection mode, so they can be queried from the AzureDiagnostics table. The fields in the table are described below. Learn more about querying and alerting in the [Azure Monitor Logs query](../azure-monitor/logs/log-query-overview.md) overview.
-
-The following are queries you can try to get started. You can configure alerts based on queries.
-
-Search for all Postgres logs for a particular server in the last day
-```
-AzureDiagnostics
-| where LogicalServerName_s == "myservername"
-| where Category == "PostgreSQLLogs"
-| where TimeGenerated > ago(1d)
-```
-
-Search for all non-localhost connection attempts
-```
-AzureDiagnostics
-| where Message contains "connection received" and Message !contains "host=127.0.0.1"
-| where Category == "PostgreSQLLogs" and TimeGenerated > ago(6h)
-```
-The query above will show results over the last 6 hours for any Postgres server logging in this workspace.
-
-### Log format
-
-The following table describes the fields for the **PostgreSQLLogs** type. Depending on the output endpoint you choose, the fields included and the order in which they appear may vary.
-
-|**Field** | **Description** |
-|||
-| TenantId | Your tenant ID |
-| SourceSystem | `Azure` |
-| TimeGenerated [UTC] | Time stamp when the log was recorded in UTC |
-| Type | Type of the log. Always `AzureDiagnostics` |
-| SubscriptionId | GUID for the subscription that the server belongs to |
-| ResourceGroup | Name of the resource group the server belongs to |
-| ResourceProvider | Name of the resource provider. Always `MICROSOFT.DBFORPOSTGRESQL` |
-| ResourceType | `Servers` |
-| ResourceId | Resource URI |
-| Resource | Name of the server |
-| Category | `PostgreSQLLogs` |
-| OperationName | `LogEvent` |
-| errorLevel | Logging level, example: LOG, ERROR, NOTICE |
-| Message | Primary log message |
-| Domain | Server version, example: postgres-10 |
-| Detail | Secondary log message (if applicable) |
-| ColumnName | Name of the column (if applicable) |
-| SchemaName | Name of the schema (if applicable) |
-| DatatypeName | Name of the datatype (if applicable) |
-| LogicalServerName | Name of the server |
-| _ResourceId | Resource URI |
-| Prefix | Log line's prefix |
--
-## Next steps
- Learn more about accessing logs from the [Azure portal](howto-configure-server-logs-in-portal.md) or [Azure CLI](howto-configure-server-logs-using-cli.md).
- Learn more about [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
- Learn more about [audit logs](concepts-audit.md).
postgresql Concepts Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-servers.md
-
Title: Servers - Azure Database for PostgreSQL - Single Server
-description: This article provides considerations and guidelines for configuring and managing Azure Database for PostgreSQL - Single Server.
- Previously updated : 5/6/2019
-# Azure Database for PostgreSQL - Single Server
-This article provides considerations and guidelines for working with Azure Database for PostgreSQL - Single Server.
-
-## What is an Azure Database for PostgreSQL server?
-A server in the Azure Database for PostgreSQL - Single Server deployment option is a central administrative point for multiple databases. It is the same PostgreSQL server construct that you may be familiar with in the on-premises world. Specifically, the PostgreSQL service is managed, provides performance guarantees, and exposes access and features at the server level.
-
-An Azure Database for PostgreSQL server:

- Is created within an Azure subscription.
- Is the parent resource for databases.
- Provides a namespace for databases.
- Is a container with strong lifetime semantics - delete a server and it deletes the contained databases.
- Collocates resources in a region.
- Provides a connection endpoint for server and database access.
- Provides the scope for management policies that apply to its databases: login, firewall, users, roles, configurations, etc.
- Is available in multiple versions. For more information, see [supported PostgreSQL database versions](concepts-supported-versions.md).
- Is extensible by users. For more information, see [PostgreSQL extensions](concepts-extensions.md).

-Within an Azure Database for PostgreSQL server, you can create one or multiple databases. You can opt to create a single database per server to utilize all the resources, or create multiple databases to share the resources. The pricing is structured per-server, based on the configuration of pricing tier, vCores, and storage (GB). For more information, see [Pricing tiers](./concepts-pricing-tiers.md).
-
-## How do I connect and authenticate to an Azure Database for PostgreSQL server?
-The following elements help ensure safe access to your database:
-
-|Security concept|Description|
-|:--|:--|
| **Authentication and authorization** | Azure Database for PostgreSQL server supports native PostgreSQL authentication. You can connect and authenticate to a server with the server's admin login. |
-| **Protocol** | The service supports a message-based protocol used by PostgreSQL. |
-| **TCP/IP** | The protocol is supported over TCP/IP, and over Unix-domain sockets. |
-| **Firewall** | To help protect your data, a firewall rule prevents all access to your server and to its databases, until you specify which computers have permission. See [Azure Database for PostgreSQL Server firewall rules](concepts-firewall-rules.md). |
-
-## Managing your server
-You can manage Azure Database for PostgreSQL servers by using the [Azure portal](https://portal.azure.com) or the [Azure CLI](/cli/azure/postgres).
-
-While creating a server, you set up the credentials for your admin user. The admin user is the highest privilege user you have on the server. It belongs to the role azure_pg_admin. This role does not have full superuser permissions.
-
-The PostgreSQL superuser attribute is assigned to the azure_superuser, which belongs to the managed service. You do not have access to this role.
-
-An Azure Database for PostgreSQL server has default databases:

- **postgres** - A default database you can connect to once your server is created.
- **azure_maintenance** - This database is used to separate the processes that provide the managed service from user actions. You do not have access to this database.
- **azure_sys** - A database for the Query Store. This database does not accumulate data when Query Store is off; this is the default setting. For more information, see the [Query Store overview](concepts-query-store.md).

-## Server parameters
-The PostgreSQL server parameters determine the configuration of the server. In Azure Database for PostgreSQL, the list of parameters can be viewed and edited using the Azure portal or the Azure CLI.
-
-As a managed service for Postgres, the configurable parameters in Azure Database for PostgreSQL are a subset of the parameters in a local Postgres instance (For more information on Postgres parameters, see the [PostgreSQL documentation](https://www.postgresql.org/docs/9.6/static/runtime-config.html)). Your Azure Database for PostgreSQL server is enabled with default values for each parameter on creation. Some parameters that would require a server restart or superuser access for changes to take effect cannot be configured by the user.
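For example, a quick sketch with the Azure CLI (names are placeholders):

```azurecli
# View all configurable parameters and their current values...
az postgres server configuration list --resource-group myresourcegroup --server-name mydemoserver

# ...or inspect a single parameter.
az postgres server configuration show --resource-group myresourcegroup --server-name mydemoserver --name work_mem
```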
--
-## Next steps

- For an overview of the service, see [Azure Database for PostgreSQL Overview](overview.md).
- For information about specific resource quotas and limitations based on your **service tier**, see [Service tiers](concepts-pricing-tiers.md).
- For information on connecting to the service, see [Connection libraries for Azure Database for PostgreSQL](concepts-connection-libraries.md).
- View and edit server parameters through [Azure portal](howto-configure-server-parameters-using-portal.md) or [Azure CLI](howto-configure-server-parameters-using-cli.md).
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-single-to-flexible.md
- Title: "Migrate from Azure Database for PostgreSQL Single Server to Flexible Server - Concepts"-
-description: Concepts about migrating your Single server to Azure database for PostgreSQL Flexible server.
- Previously updated : 05/11/2022
-# Migrate from Azure Database for PostgreSQL Single Server to Flexible Server (Preview)
-
->[!NOTE]
-> Single Server to Flexible Server migration feature is in public preview.
-
-Azure Database for PostgreSQL Flexible Server provides zone-redundant high availability, control over price, and control over maintenance windows. The Single to Flexible Server migration feature enables customers to migrate their databases from Single Server to Flexible Server. See this [documentation](./flexible-server/concepts-compare-single-server-flexible-server.md) to understand the differences between Single and Flexible servers. Customers can initiate migrations for multiple servers and databases in a repeatable fashion using this migration feature. The feature automates most of the steps needed to do the migration, making the migration journey across Azure platforms as seamless as possible. The feature is provided free of cost to customers.
-
-Single to Flexible server migration is enabled in **Preview** in Australia Southeast, Canada Central, Canada East, East Asia, North Central US, South Central US, Switzerland North, UAE North, UK South, UK West, West US, and Central US.
-
-## Overview
-
-Single to Flexible server migration feature provides an inline experience to migrate databases from Single Server (source) to Flexible Server (target).
-
-You choose the source server and can select up to **8** databases from it. This limitation is per migration task. The migration feature automates the following steps:
-
-1. Creates the migration infrastructure in the region of the target flexible server
-2. Creates public IP address and attaches it to the migration infrastructure
-3. Allow-listing of migration infrastructure's IP address on the firewall rules of both source and target servers
-4. Creates a migration project with both source and target types as Azure database for PostgreSQL
-5. Creates a migration activity to migrate the databases specified by the user from source to target.
-6. Migrates schema from source to target
-7. Creates databases with the same name on the target Flexible server
-8. Migrates data from source to target
-
-The following is the flow diagram for the Single to Flexible Server migration feature.
-
-**Steps:**
-1. Create a Flex PG server
-2. Invoke migration
-3. Migration infrastructure provisioned (DMS)
-4. Initiates migration - (4a) Initial dump/restore (online & offline) (4b) streaming the changes (online only)
-5. Cutover to the target
-
-The migration feature is exposed through the **Azure portal** and via easy-to-use **Azure CLI** commands. It allows you to create migrations, list migrations, display migration details, modify the state of a migration, and delete migrations.
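As an illustrative sketch only (the preview CLI syntax and the properties file format may differ; consult the linked CLI guide below for the exact commands):

```azurecli
# Create a migration against a target flexible server, then inspect its status.
az postgres flexible-server migration create --resource-group myresourcegroup \
    --name mytargetflexserver --migration-name mymigration --properties "migrationbody.json"
az postgres flexible-server migration show --resource-group myresourcegroup \
    --name mytargetflexserver --migration-name mymigration
```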
-
-## Migration modes comparison
-
-Single to Flexible Server migration supports online and offline modes of migration. The online option provides reduced-downtime migration, subject to logical replication restrictions, while the offline option offers a simpler migration that may incur extended downtime depending on the size of the databases.
-
-The following table summarizes the differences between these two modes of migration.
-
-| Capability | Online | Offline |
-|:|:-|:--|
-| Database availability for reads during migration | Available | Available |
| Database availability for writes during migration | Available | Generally not recommended. Writes initiated after the migration begins are not captured or migrated |
-| Application Suitability | Applications that need maximum uptime | Applications that can afford a planned downtime window |
-| Environment Suitability | Production environments | Usually Development, Testing environments and some production that can afford downtime |
-| Suitability for Write-heavy workloads | Suitable but expected to reduce the workload during migration | Not Applicable. Writes at source after migration begins are not replicated to target. |
-| Manual Cutover | Required | Not required |
-| Downtime Required | Less | More |
-| Logical replication limitations | Applicable | Not Applicable |
-| Migration time required | Depends on Database size and the write activity until cutover | Depends on Database size |
-
-**Migration steps involved for Offline mode** = Dump of the source Single Server database followed by the Restore at the target Flexible server.
-
-The following table shows the approximate time taken to perform offline migrations for databases of various sizes.
-
->[!NOTE]
-> Add ~15 minutes for the migration infrastructure to get deployed for each migration task, where each task can migrate up to 8 databases.
-
-| Database Size | Approximate Time Taken (HH:MM) |
-|:|:-|
-| 1 GB | 00:01 |
-| 5 GB | 00:05 |
-| 10 GB | 00:10 |
-| 50 GB | 00:45 |
-| 100 GB | 06:00 |
-| 500 GB | 08:00 |
-| 1000 GB | 09:30 |
-
-**Migration steps involved for Online mode** = Dump of the source Single Server database(s), Restore of that dump in the target Flexible server, followed by Replication of ongoing changes (change data capture using logical decoding).
-
-The time taken for an online migration to complete is dependent on the incoming writes to the source server. The higher the write workload on the source, the more time it takes for the data to be replicated to the target flexible server.
-
-Based on the above differences, pick the mode that best works for your workloads.
---
-## Migration steps
-
-### Pre-requisites
-
-Follow the steps provided in this section before you get started with the single to flexible server migration feature.

- **Target Server Creation** - You need to create the target PostgreSQL flexible server before using the migration feature. Use the creation [QuickStart guide](./flexible-server/quickstart-create-server-portal.md) to create one.

- **Source Server pre-requisites** - You must [enable logical replication](./concepts-logical.md) on the source server, as shown in the sketch after this list.
- :::image type="content" source="./media/concepts-single-to-flex/logical-replication-support.png" alt-text="Logical replication from Azure portal" lightbox="./media/concepts-single-to-flex/logical-replication-support.png":::
-
->[!NOTE]
-> Enabling logical replication will require a server reboot for the change to take effect.
- **Azure Active Directory App set up** - It is a critical component of the migration feature. The Azure AD App helps with role-based access control, as the migration feature needs access to both the source and target servers. See [How to setup and configure Azure AD App](./how-to-setup-aad-app-portal.md) for the step-by-step process.
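For the logical replication prerequisite, a minimal Azure CLI sketch (the source server name is a placeholder):

```azurecli
# Turn on logical decoding support on the source Single Server, then restart it for the change to take effect.
az postgres server configuration set --resource-group myresourcegroup --server-name mysourceserver \
    --name azure.replication_support --value logical
az postgres server restart --resource-group myresourcegroup --name mysourceserver
```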
-### Data and schema migration
-
-Once all these pre-requisites are taken care of, you can perform the migration. This automated step involves schema and data migration using the Azure portal or Azure CLI.

- [Migrate using Azure portal](./how-to-migrate-single-to-flex-portal.md)
- [Migrate using Azure CLI](./how-to-migrate-single-to-flex-cli.md)

-### Post migration

- All the resources created by this migration tool are automatically cleaned up, irrespective of whether the migration has **succeeded/failed/cancelled**. There is no action required from you.

- If your migration has failed and you want to retry it, you need to create a new migration task with a different name and retry the operation.

- If you have more than eight databases on your single server and you want to migrate them all, it is recommended to create multiple migration tasks, with each task migrating up to eight databases.

- The migration does not move the database users and roles of the source server. These have to be created manually and applied to the target server after migration.

- For security reasons, it is highly recommended to delete the Azure Active Directory app once the migration completes.

- After validating the data and pointing your application to the flexible server, you can consider deleting your single server.

-## Limitations
-
-### Size limitations

- Databases of sizes up to 1 TB can be migrated using this feature. To migrate larger databases or heavy write workloads, reach out to your account team or contact us at AskAzureDBforPGS2F@microsoft.com.

- In one migration attempt, you can migrate up to eight user databases from a single server to a flexible server. If you have more databases to migrate, you can create multiple migrations between the same single and flexible servers.

-### Performance limitations

- The migration infrastructure is deployed on a 4-vCore VM, which may limit the migration performance.

- The deployment of the migration infrastructure takes ~10-15 minutes before the actual data migration starts, irrespective of the size of data or the migration mode (online or offline).

-### Replication limitations
- The Single to Flexible Server migration feature uses the logical decoding feature of PostgreSQL to perform the online migration, and it comes with the following limitations. See the PostgreSQL documentation for [logical replication limitations](https://www.postgresql.org/docs/10/logical-replication-restrictions.html).
- - **DDL commands** are not replicated.
- - **Sequence** data is not replicated.
   - **Truncate** commands are not replicated. (**Workaround**: use DELETE instead of TRUNCATE. To avoid accidental TRUNCATE invocations, you can revoke the TRUNCATE privilege from tables.)
-
- - Views, Materialized views, partition root tables and foreign tables will not be migrated.
- Logical decoding will use resources in the source single server. Consider reducing the workload, or plan to scale up CPU/memory resources on the source single server during the migration.
-### Other limitations

- The migration feature migrates only the data and schema of the single server databases to the flexible server. It does not migrate other features such as server parameters, connection security details, firewall rules, users, roles, and permissions. In other words, everything except data and schema must be manually configured on the target flexible server.

- It does not validate the data in the flexible server post migration. Customers must do this manually.

- The migration tool only migrates user databases, including the Postgres database, and not system/maintenance databases.

- For failed migrations, there is no option to retry the same migration task. A new migration task with a unique name needs to be created.

- The migration feature does not include an assessment of your single server.

-## Best practices

- As part of discovery and assessment, take the server SKU, CPU usage, storage, database sizes, and extensions usage as some of the critical data to help with migrations.
- Plan the mode of migration for each database. For less complex migrations and smaller databases, consider the offline mode of migration.
- Batch similar-sized databases in a migration task.
- Perform large database migrations with one or two databases at a time to avoid source-side load and migration failures.
- Perform test migrations before the production migration.
   - **Testing migrations** is a very important aspect of database migration to ensure that all aspects of the migration are taken care of, including application testing. The best practice is to begin by running a migration entirely for testing purposes. Start a migration, and after it enters the continuous replication (CDC) phase with minimal lag, make your flexible server the primary database server and use it for testing the application to ensure expected performance and results. If you are migrating to a higher Postgres version, test for your application compatibility.
-
   - **Production migrations** - Once testing is completed, you can migrate the production databases. At this point you need to finalize the day and time of the production migration. Ideally, there is low application use at this time. In addition, all stakeholders that need to be involved should be available and ready. The production migration requires close monitoring. For an online migration, it is important that the replication is complete before performing the cutover, to prevent data loss.

- Cut over all dependent applications to access the new primary database, and open the applications for production usage.
- Once the application starts running on the flexible server, monitor the database performance closely to see if performance tuning is required.

-## Next steps

- [Migrate to Flexible server using Azure portal](./how-to-migrate-single-to-flex-portal.md)
- [Migrate to Flexible server using Azure CLI](./how-to-migrate-single-to-flex-cli.md)
postgresql Concepts Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-ssl-connection-security.md
- Title: SSL/TLS - Azure Database for PostgreSQL - Single Server
-description: Instructions and information on how to configure TLS connectivity for Azure Database for PostgreSQL - Single Server.
- Previously updated : 07/08/2020
-# Configure TLS connectivity in Azure Database for PostgreSQL - Single Server
-
-Azure Database for PostgreSQL prefers connecting your client applications to the PostgreSQL service using Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL). Enforcing TLS connections between your database server and your client applications helps protect against "man-in-the-middle" attacks by encrypting the data stream between the server and your application.
-
-By default, the PostgreSQL database service is configured to require TLS connection. You can choose to disable requiring TLS if your client application does not support TLS connectivity.
-
->[!NOTE]
-> Based on feedback from customers, we have extended the root certificate deprecation for our existing Baltimore Root CA until February 15, 2021 (02/15/2021).
-
-> [!IMPORTANT]
-> SSL root certificate is set to expire starting February 15, 2021 (02/15/2021). Please update your application to use the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem). To learn more, see [planned certificate updates](concepts-certificate-rotation.md).
--
-## Enforcing TLS connections
-
-For all Azure Database for PostgreSQL servers provisioned through the Azure portal and CLI, enforcement of TLS connections is enabled by default.
-
-Likewise, connection strings that are pre-defined in the "Connection Strings" settings under your server in the Azure portal include the required parameters for common languages to connect to your database server using TLS. The TLS parameter varies based on the connector, for example "ssl=true" or "sslmode=require" or "sslmode=required" and other variations.
-
-## Configure Enforcement of TLS
-
-You can optionally disable enforcing TLS connectivity. Microsoft Azure recommends always enabling the **Enforce SSL connection** setting for enhanced security.
-
-### Using the Azure portal
-
-Visit your Azure Database for PostgreSQL server and click **Connection security**. Use the toggle button to enable or disable the **Enforce SSL connection** setting. Then, click **Save**.
--
-You can confirm the setting by viewing the **Overview** page to see the **SSL enforce status** indicator.
-
-### Using Azure CLI
-
-You can enable or disable the **ssl-enforcement** parameter using `Enabled` or `Disabled` values respectively in Azure CLI.
-
-```azurecli
-az postgres server update --resource-group myresourcegroup --name mydemoserver --ssl-enforcement Enabled
-```
-
-## Ensure your application or framework supports TLS connections
-
-Some application frameworks that use PostgreSQL for their database services do not enable TLS by default during installation. If your PostgreSQL server enforces TLS connections but the application is not configured for TLS, the application may fail to connect to your database server. Consult your application's documentation to learn how to enable TLS connections.
-
-## Applications that require certificate verification for TLS connectivity
-
-In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file to connect securely. The certificate to connect to an Azure Database for PostgreSQL server is located at https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem. Download the certificate file and save it to your preferred location.
-
-See the following links for certificates for servers in sovereign clouds: [Azure Government](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem), [Azure China](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt).
-
-### Connect using psql
-
-The following example shows how to connect to your PostgreSQL server using the psql command-line utility. Use the `sslmode=verify-full` connection string setting to enforce TLS/SSL certificate verification. Pass the local certificate file path to the `sslrootcert` parameter.
-
-The following command is an example of the psql connection string:
-
-```shell
-psql "sslmode=verify-full sslrootcert=BaltimoreCyberTrustRoot.crt host=mydemoserver.postgres.database.azure.com dbname=postgres user=myusern@mydemoserver"
-```
-
-> [!TIP]
-> Confirm that the value passed to `sslrootcert` matches the file path for the certificate you saved.
-
-## TLS enforcement in Azure Database for PostgreSQL Single server
-
-Azure Database for PostgreSQL - Single server supports encryption for clients connecting to your database server using Transport Layer Security (TLS). TLS is an industry standard protocol that ensures secure network connections between your database server and client applications, allowing you to adhere to compliance requirements.
-
-### TLS settings
-
-Azure Database for PostgreSQL single server provides the ability to enforce the TLS version for the client connections. To enforce the TLS version, use the **Minimum TLS version** option setting. The following values are allowed for this option setting:
-
-| Minimum TLS setting | Client TLS version supported |
-|:|-:|
-| TLSEnforcementDisabled (default) | No TLS required |
-| TLS1_0 | TLS 1.0, TLS 1.1, TLS 1.2 and higher |
-| TLS1_1 | TLS 1.1, TLS 1.2 and higher |
-| TLS1_2 | TLS version 1.2 and higher |
--
-For example, setting the minimum TLS version to TLS 1.0 means your server will allow connections from clients using TLS 1.0, 1.1, and 1.2+. Alternatively, setting it to TLS 1.2 means that you only allow connections from clients using TLS 1.2+, and all connections using TLS 1.0 and TLS 1.1 will be rejected.
-
-> [!Note]
-> By default, Azure Database for PostgreSQL does not enforce a minimum TLS version (the setting `TLSEnforcementDisabled`).
->
-> Once you enforce a minimum TLS version, you cannot later disable minimum version enforcement.
-
-To learn how to set the TLS setting for your Azure Database for PostgreSQL Single server, refer to [How to configure TLS setting](howto-tls-configurations.md).
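As a sketch, assuming the `--minimal-tls-version` parameter available in recent Azure CLI versions (names are placeholders):

```azurecli
# Require TLS 1.2 or higher for client connections; note this enforcement cannot be disabled later.
az postgres server update --resource-group myresourcegroup --name mydemoserver --minimal-tls-version TLS1_2
```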
-
-## Cipher support by Azure Database for PostgreSQL Single server
-
-As part of SSL/TLS communication, the cipher suites are validated and only supported cipher suites are allowed to communicate with the database server. The cipher suite validation is controlled in the [gateway layer](concepts-connectivity-architecture.md#connectivity-architecture) and not explicitly on the node itself. If a client's cipher suite doesn't match one of the suites listed below, incoming client connections will be rejected.
-
-### Cipher suite supported
-
-* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
-* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
-* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
-* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
-
-## Next steps
-
-Review various application connectivity options in [Connection libraries for Azure Database for PostgreSQL](concepts-connection-libraries.md).
--- Learn how to [configure TLS](howto-tls-configurations.md)
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-supported-versions.md
- Title: Supported versions - Azure Database for PostgreSQL - Single Server
-description: Describes the supported Postgres major and minor versions in Azure Database for PostgreSQL - Single Server.
- Previously updated : 03/10/2022
-
-# Supported PostgreSQL major versions
-
-Please see [Azure Database for PostgreSQL versioning policy](concepts-version-policy.md) for support policy details.
-
-Azure Database for PostgreSQL currently supports the following major versions:
-
-## PostgreSQL version 11
-The current minor release is 11.12. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/11/static/release-11-12.html) to learn more about improvements and fixes in this minor release.
-
-## PostgreSQL version 10
-The current minor release is 10.17. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/10/static/release-10-17.html) to learn more about improvements and fixes in this minor release.
-
-## PostgreSQL version 9.6 (retired)
-Aligning with Postgres community's [versioning policy](https://www.postgresql.org/support/versioning/), Azure Database for PostgreSQL has retired PostgreSQL version 9.6 as of November 11, 2021. See [Azure Database for PostgreSQL versioning policy](concepts-version-policy.md) for more details and restrictions. If you're running this major version, upgrade to a higher version, preferably to PostgreSQL 11 at your earliest convenience.
-
-## PostgreSQL version 9.5 (retired)
-Aligning with Postgres community's [versioning policy](https://www.postgresql.org/support/versioning/), Azure Database for PostgreSQL has retired PostgreSQL version 9.5 as of February 11, 2021. See [Azure Database for PostgreSQL versioning policy](concepts-version-policy.md) for more details and restrictions. If you're running this major version, upgrade to a higher version, preferably to PostgreSQL 11 at your earliest convenience.
-
-## Managing upgrades
-The PostgreSQL project regularly issues minor releases to fix reported bugs. Azure Database for PostgreSQL automatically patches servers with minor releases during the service's monthly deployments.
-
-Automatic in-place upgrades for major versions are not supported. To upgrade to a higher major version, you can:
- * Use one of the methods documented in [major version upgrades using dump and restore](./how-to-upgrade-using-dump-and-restore.md).
 * Use [pg_dump and pg_restore](./howto-migrate-using-dump-and-restore.md) to move a database to a server created with the new engine version (see the sketch after this list).
- * Use [Azure Database Migration service](..\dms\tutorial-azure-postgresql-to-azure-postgresql-online-portal.md) for doing online upgrades.
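A minimal dump-and-restore sketch (server names, login, and database name are placeholders):

```shell
# Dump the database from the old server in custom format...
pg_dump -Fc -v --host=mydemoserver.postgres.database.azure.com \
    --username=myadmin@mydemoserver --dbname=mypgsqldb -f mypgsqldb.dump

# ...then restore it into a server created with the newer major version.
pg_restore -v --no-owner --host=mynewserver.postgres.database.azure.com \
    --username=myadmin@mynewserver --dbname=mypgsqldb mypgsqldb.dump
```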
-
-### Version syntax
-Before PostgreSQL version 10, the [PostgreSQL versioning policy](https://www.postgresql.org/support/versioning/) considered a _major version_ upgrade to be an increase in the first _or_ second number. For example, 9.5 to 9.6 was considered a _major_ version upgrade. As of version 10, only a change in the first number is considered a major version upgrade. For example, 10.0 to 10.1 is a _minor_ release upgrade. Version 10 to 11 is a _major_ version upgrade.
-
-## Next steps
-For information on supported PostgreSQL extensions, see [the extensions document](concepts-extensions.md).
postgresql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-version-policy.md
- Title: Versioning policy - Azure Database for PostgreSQL - Single Server and Flexible Server (Preview)
-description: Describes the policy around Postgres major and minor versions in Azure Database for PostgreSQL - Single Server.
- Previously updated : 12/14/2021
-# Azure Database for PostgreSQL versioning policy
-
-This page describes the Azure Database for PostgreSQL versioning policy, and is applicable to these deployment modes:
-
-* Single Server
-* Flexible Server
-* Hyperscale (Citus)
-
-## Supported PostgreSQL versions
-
-Azure Database for PostgreSQL supports the following database versions.
-
-| Version | Single Server | Flexible Server | Hyperscale (Citus) |
-| -- | :: | :-: | :-: |
-| PostgreSQL 14 | | | X |
-| PostgreSQL 13 | | X | X |
-| PostgreSQL 12 | | X | X |
-| PostgreSQL 11 | X | X | X |
-| PostgreSQL 10 | X | | |
-| *PostgreSQL 9.6 (retired)* | See [policy](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql) | | |
-| *PostgreSQL 9.5 (retired)* | See [policy](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql) | | |
-
-## Major version support
-Each major version of PostgreSQL will be supported by Azure Database for PostgreSQL from the date on which Azure begins supporting the version until the version is retired by the PostgreSQL community, as provided in the [PostgreSQL community versioning policy](https://www.postgresql.org/support/versioning/).
-
-## Minor version support
-Azure Database for PostgreSQL automatically performs minor version upgrades to the Azure preferred PostgreSQL version as part of periodic maintenance.
-
-## Major version retirement policy
-The table below provides the retirement details for PostgreSQL major versions. The dates follow the [PostgreSQL community versioning policy](https://www.postgresql.org/support/versioning/).
-
-| Version | What's New | Azure support start date | Retirement date|
-| -- | -- | | -- |
-| [PostgreSQL 9.5 (retired)](https://www.postgresql.org/about/news/postgresql-132-126-1111-1016-9621-and-9525-released-2165/)| [Features](https://www.postgresql.org/docs/9.5/release-9-5.html) | April 18, 2018 | February 11, 2021
-| [PostgreSQL 9.6 (retired)](https://www.postgresql.org/about/news/postgresql-96-released-1703/) | [Features](https://wiki.postgresql.org/wiki/NewIn96) | April 18, 2018 | November 11, 2021
-| [PostgreSQL 10](https://www.postgresql.org/about/news/postgresql-10-released-1786/) | [Features](https://wiki.postgresql.org/wiki/New_in_postgres_10) | June 4, 2018 | November 10, 2022
-| [PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/) | [Features](https://www.postgresql.org/docs/11/release-11.html) | July 24, 2019 | November 9, 2023
-| [PostgreSQL 12](https://www.postgresql.org/about/news/postgresql-12-released-1976/) | [Features](https://www.postgresql.org/docs/12/release-12.html) | Sept 22, 2020 | November 14, 2024
-| [PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/) | [Features](https://www.postgresql.org/docs/13/release-13.html) | May 25, 2021 | November 13, 2025
-| [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | October 1, 2021 | November 12, 2026
-
-## Retired PostgreSQL engine versions not supported in Azure Database for PostgreSQL
-
-You may continue to run the retired version in Azure Database for PostgreSQL. However, please note the following restrictions after the retirement date for each PostgreSQL database version:
- As the community will not be releasing any further bug fixes or security fixes, Azure Database for PostgreSQL will not patch the retired database engine for any bugs or security issues, or otherwise take security measures with regard to the retired database engine. You may experience security vulnerabilities or other issues as a result. However, Azure will continue to perform periodic maintenance and patching for the host, OS, containers, and any other service-related components.
- If any support issue you may experience relates to the PostgreSQL engine itself, as the community no longer provides the patches, we may not be able to provide you with support. In such cases, you will have to upgrade your database to one of the supported versions.
- You will not be able to create new database servers for the retired version. However, you will be able to perform point-in-time recoveries and create read replicas for your existing servers.
- New service capabilities developed by Azure Database for PostgreSQL may only be available to supported database server versions.
- Uptime SLAs will apply solely to Azure Database for PostgreSQL service-related issues and not to any downtime caused by database engine-related bugs.
- In the extreme event of a serious threat to the service caused by a PostgreSQL database engine vulnerability identified in the retired database version, Azure may choose to stop your database server to secure the service. In such a case, you will be notified to upgrade the server before bringing the server online.

-## PostgreSQL version syntax
-Before PostgreSQL version 10, the [PostgreSQL versioning policy](https://www.postgresql.org/support/versioning/) considered a _major version_ upgrade to be an increase in the first _or_ second number. For example, 9.5 to 9.6 was considered a _major_ version upgrade. As of version 10, only a change in the first number is considered a major version upgrade. For example, 10.0 to 10.1 is a _minor_ release upgrade. Version 10 to 11 is a _major_ version upgrade.
-
-## Next steps

- See Azure Database for PostgreSQL - Single Server [supported versions](./concepts-supported-versions.md)
- See Azure Database for PostgreSQL - Flexible Server [supported versions](flexible-server/concepts-supported-versions.md)
- See Azure Database for PostgreSQL - Hyperscale (Citus) [supported versions](hyperscale/reference-versions.md)
postgresql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-csharp.md
- Title: 'Quickstart: Connect with C# - Azure Database for PostgreSQL - Single Server'
-description: "This quickstart provides a C# (.NET) code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server."
- Previously updated : 10/18/2020
-# Quickstart: Use .NET (C#) to connect and query data in Azure Database for PostgreSQL - Single Server
-
-This quickstart demonstrates how to connect to an Azure Database for PostgreSQL using a C# application. It shows how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you are familiar with developing using C#, and that you are new to working with Azure Database for PostgreSQL.
-
-## Prerequisites
-For this quickstart you need:

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
- Create an Azure Database for PostgreSQL single server using [Azure portal](./quickstart-create-server-database-portal.md) <br/> or [Azure CLI](./quickstart-create-server-database-azure-cli.md) if you do not have one.
- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.
- |Action| Connectivity method|How-to guide|
- |: |: |: |
- | **Configure firewall rules** | Public | [Portal](./howto-manage-firewall-using-portal.md) <br/> [CLI](./howto-manage-firewall-using-cli.md)|
- | **Configure Service Endpoint** | Public | [Portal](./howto-manage-vnet-using-portal.md) <br/> [CLI](./howto-manage-vnet-using-cli.md)|
- | **Configure private link** | Private | [Portal](./howto-configure-privatelink-portal.md) <br/> [CLI](./howto-configure-privatelink-cli.md) |

- Install the [.NET SDK](https://dotnet.microsoft.com/download) for your platform (Windows, Ubuntu Linux, or macOS).
- Install [Visual Studio](https://www.visualstudio.com/downloads/) to build your project.
- Install the [Npgsql](https://www.nuget.org/packages/Npgsql/) NuGet package in Visual Studio.

-## Get connection information
-Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials.
-
-1. Log in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Click the server name.
-4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-csharp/1-connection-string.png" alt-text="Azure Database for PostgreSQL server name":::
-
-## Step 1: Connect and insert data
-Use the following code to connect and load the data using **CREATE TABLE** and **INSERT INTO** SQL statements. The code uses the NpgsqlCommand class with these methods:
- [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to the PostgreSQL database.
- [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) to set the CommandText property.
- [ExecuteNonQuery()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteNonQuery) to run the database commands.
-> [!IMPORTANT]
-> Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
-
-```csharp
-using System;
-using Npgsql;
-
-namespace Driver
-{
- public class AzurePostgresCreate
- {
- // Obtain connection string information from the portal
- //
- private static string Host = "mydemoserver.postgres.database.azure.com";
- private static string User = "mylogin@mydemoserver";
- private static string DBname = "mypgsqldb";
- private static string Password = "<server_admin_password>";
- private static string Port = "5432";
-
- static void Main(string[] args)
- {
- // Build connection string using parameters from portal
- //
- string connString =
- String.Format(
- "Server={0};Username={1};Database={2};Port={3};Password={4};SSLMode=Prefer",
- Host,
- User,
- DBname,
- Port,
- Password);
--
- using (var conn = new NpgsqlConnection(connString))
-
- {
- Console.Out.WriteLine("Opening connection");
- conn.Open();
-
- using (var command = new NpgsqlCommand("DROP TABLE IF EXISTS inventory", conn))
- {
- command.ExecuteNonQuery();
- Console.Out.WriteLine("Finished dropping table (if existed)");
-
- }
-
- using (var command = new NpgsqlCommand("CREATE TABLE inventory(id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER)", conn))
- {
- command.ExecuteNonQuery();
- Console.Out.WriteLine("Finished creating table");
- }
-
- using (var command = new NpgsqlCommand("INSERT INTO inventory (name, quantity) VALUES (@n1, @q1), (@n2, @q2), (@n3, @q3)", conn))
- {
- command.Parameters.AddWithValue("n1", "banana");
- command.Parameters.AddWithValue("q1", 150);
- command.Parameters.AddWithValue("n2", "orange");
- command.Parameters.AddWithValue("q2", 154);
- command.Parameters.AddWithValue("n3", "apple");
- command.Parameters.AddWithValue("q3", 100);
-
- int nRows = command.ExecuteNonQuery();
- Console.Out.WriteLine(String.Format("Number of rows inserted={0}", nRows));
- }
- }
-
- Console.WriteLine("Press RETURN to exit");
- Console.ReadLine();
- }
- }
-}
-```
-
-[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
-
-## Step 2: Read data
-Use the following code to connect and read the data using a **SELECT** SQL statement. The code uses the NpgsqlCommand class with these methods:
- [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to PostgreSQL.
- [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) and [ExecuteReader()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteReader) to run the database commands.
- [Read()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlDataReader.html#Npgsql_NpgsqlDataReader_Read) to advance to the record in the results.
- [GetInt32()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlDataReader.html#Npgsql_NpgsqlDataReader_GetInt32_System_Int32_) and [GetString()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlDataReader.html#Npgsql_NpgsqlDataReader_GetString_System_Int32_) to parse the values in the record.
-> [!IMPORTANT]
-> Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
-
-```csharp
-using System;
-using Npgsql;
-
-namespace Driver
-{
- public class AzurePostgresRead
- {
- // Obtain connection string information from the portal
- //
- private static string Host = "mydemoserver.postgres.database.azure.com";
- private static string User = "mylogin@mydemoserver";
- private static string DBname = "mypgsqldb";
- private static string Password = "<server_admin_password>";
- private static string Port = "5432";
-
- static void Main(string[] args)
- {
- // Build connection string using parameters from portal
- //
- string connString =
- String.Format(
- "Server={0}; User Id={1}; Database={2}; Port={3}; Password={4};SSLMode=Prefer",
- Host,
- User,
- DBname,
- Port,
- Password);
-
- using (var conn = new NpgsqlConnection(connString))
- {
-
- Console.Out.WriteLine("Opening connection");
- conn.Open();
--
- using (var command = new NpgsqlCommand("SELECT * FROM inventory", conn))
- {
-
- var reader = command.ExecuteReader();
- while (reader.Read())
- {
- Console.WriteLine(
- string.Format(
- "Reading from table=({0}, {1}, {2})",
- reader.GetInt32(0).ToString(),
- reader.GetString(1),
- reader.GetInt32(2).ToString()
- )
- );
- }
- reader.Close();
- }
- }
-
- Console.WriteLine("Press RETURN to exit");
- Console.ReadLine();
- }
- }
-}
-```
-
-[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
-
-## Step 3: Update data
-Use the following code to connect and update the data using an **UPDATE** SQL statement. The code uses the NpgsqlCommand class with these methods:
- [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to PostgreSQL.
- [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) to set the CommandText property.
- [ExecuteNonQuery()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteNonQuery) to run the database commands.
-> [!IMPORTANT]
-> Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
-
-```csharp
-using System;
-using Npgsql;
-
-namespace Driver
-{
- public class AzurePostgresUpdate
- {
- // Obtain connection string information from the portal
- //
- private static string Host = "mydemoserver.postgres.database.azure.com";
- private static string User = "mylogin@mydemoserver";
- private static string DBname = "mypgsqldb";
- private static string Password = "<server_admin_password>";
- private static string Port = "5432";
-
- static void Main(string[] args)
- {
- // Build connection string using parameters from portal
- //
- string connString =
- String.Format(
- "Server={0}; User Id={1}; Database={2}; Port={3}; Password={4};SSLMode=Prefer",
- Host,
- User,
- DBname,
- Port,
- Password);
-
- using (var conn = new NpgsqlConnection(connString))
- {
-
- Console.Out.WriteLine("Opening connection");
- conn.Open();
-
- using (var command = new NpgsqlCommand("UPDATE inventory SET quantity = @q WHERE name = @n", conn))
- {
- command.Parameters.AddWithValue("n", "banana");
- command.Parameters.AddWithValue("q", 200);
- int nRows = command.ExecuteNonQuery();
- Console.Out.WriteLine(String.Format("Number of rows updated={0}", nRows));
- }
- }
-
- Console.WriteLine("Press RETURN to exit");
- Console.ReadLine();
- }
- }
-}
--
-```
-
-[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
-
-## Step 4: Delete data
-Use the following code to connect and delete data using a **DELETE** SQL statement.
-
-The code uses the NpgsqlCommand class with the [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) method to establish a connection to the PostgreSQL database. Then the code uses the [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) method, sets the CommandText property, and calls [ExecuteNonQuery()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteNonQuery) to run the database commands.
-
-> [!IMPORTANT]
-> Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
-
-```csharp
-using System;
-using Npgsql;
-
-namespace Driver
-{
- public class AzurePostgresDelete
- {
- // Obtain connection string information from the portal
- //
- private static string Host = "mydemoserver.postgres.database.azure.com";
- private static string User = "mylogin@mydemoserver";
- private static string DBname = "mypgsqldb";
- private static string Password = "<server_admin_password>";
- private static string Port = "5432";
-
- static void Main(string[] args)
- {
- // Build connection string using parameters from portal
- //
- string connString =
- String.Format(
- "Server={0}; User Id={1}; Database={2}; Port={3}; Password={4};SSLMode=Prefer",
- Host,
- User,
- DBname,
- Port,
- Password);
-
- using (var conn = new NpgsqlConnection(connString))
- {
- Console.Out.WriteLine("Opening connection");
- conn.Open();
-
- using (var command = new NpgsqlCommand("DELETE FROM inventory WHERE name = @n", conn))
- {
- command.Parameters.AddWithValue("n", "orange");
-
- int nRows = command.ExecuteNonQuery();
- Console.Out.WriteLine(String.Format("Number of rows deleted={0}", nRows));
- }
- }
-
- Console.WriteLine("Press RETURN to exit");
- Console.ReadLine();
- }
- }
-}
-
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Manage Azure Database for PostgreSQL server using Portal](./howto-create-manage-server-portal.md)<br/>
-
-> [!div class="nextstepaction"]
-> [Manage Azure Database for PostgreSQL server using CLI](./how-to-manage-server-cli.md)
-
-[Cannot find what you are looking for? Let us know.](https://aka.ms/postgres-doc-feedback)
postgresql Connect Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-go.md
- Title: 'Quickstart: Connect with Go - Azure Database for PostgreSQL - Single Server'
-description: This quickstart provides a Go programming language sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server.
- Previously updated: 5/6/2019
-# Quickstart: Use Go language to connect and query data in Azure Database for PostgreSQL - Single Server
-
-This quickstart demonstrates how to connect to an Azure Database for PostgreSQL using code written in the [Go](https://go.dev/) language (golang). It shows how to use SQL statements to query, insert, update, and delete data in the database. This article assumes you are familiar with development using Go, but that you are new to working with Azure Database for PostgreSQL.
-
-## Prerequisites
-This quickstart uses the resources created in either of these guides as a starting point:
-- [Create DB - Portal](quickstart-create-server-database-portal.md)
-- [Create DB - Azure CLI](quickstart-create-server-database-azure-cli.md)
-
-## Install Go and pq connector
-Install [Go](https://go.dev/doc/install) and the [Pure Go Postgres driver (pq)](https://github.com/lib/pq) on your own machine. Depending on your platform, follow the appropriate steps:
-
-### Windows
-1. [Download](https://go.dev/dl/) and install Go for Microsoft Windows according to the [installation instructions](https://go.dev/doc/install).
-2. Launch the command prompt from the start menu.
-3. Make a folder for your project, such as `mkdir %USERPROFILE%\go\src\postgresqlgo`.
-4. Change directory into the project folder, such as `cd %USERPROFILE%\go\src\postgresqlgo`.
-5. Set the environment variable for GOPATH to point to the source code directory. `set GOPATH=%USERPROFILE%\go`.
-6. Install the [Pure Go Postgres driver (pq)](https://github.com/lib/pq) by running the `go get github.com/lib/pq` command.
-
- In summary, install Go, then run these commands in the command prompt:
- ```cmd
- mkdir %USERPROFILE%\go\src\postgresqlgo
- cd %USERPROFILE%\go\src\postgresqlgo
- set GOPATH=%USERPROFILE%\go
- go get github.com/lib/pq
- ```
-
-### Linux (Ubuntu)
-1. Launch the Bash shell.
-2. Install Go by running `sudo apt-get install golang-go`.
-3. Make a folder for your project in your home directory, such as `mkdir -p ~/go/src/postgresqlgo/`.
-4. Change directory into the folder, such as `cd ~/go/src/postgresqlgo/`.
-5. Set the GOPATH environment variable to point to a valid source directory, such as your current home directory's go folder. At the bash shell, run `export GOPATH=~/go` to add the go directory as the GOPATH for the current shell session.
-6. Install the [Pure Go Postgres driver (pq)](https://github.com/lib/pq) by running the `go get github.com/lib/pq` command.
-
- In summary, run these bash commands:
- ```bash
- sudo apt-get install golang-go
- mkdir -p ~/go/src/postgresqlgo/
- cd ~/go/src/postgresqlgo/
- export GOPATH=~/go/
- go get github.com/lib/pq
- ```
-
-### Apple macOS
-1. Download and install Go according to the [installation instructions](https://go.dev/doc/install) matching your platform.
-2. Launch the Bash shell.
-3. Make a folder for your project in your home directory, such as `mkdir -p ~/go/src/postgresqlgo/`.
-4. Change directory into the folder, such as `cd ~/go/src/postgresqlgo/`.
-5. Set the GOPATH environment variable to point to a valid source directory, such as your current home directory's go folder. At the bash shell, run `export GOPATH=~/go` to add the go directory as the GOPATH for the current shell session.
-6. Install the [Pure Go Postgres driver (pq)](https://github.com/lib/pq) by running the `go get github.com/lib/pq` command.
-
- In summary, install Go, then run these bash commands:
- ```bash
- mkdir -p ~/go/src/postgresqlgo/
- cd ~/go/src/postgresqlgo/
- export GOPATH=~/go/
- go get github.com/lib/pq
- ```
-
-## Get connection information
-Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials.
-
-1. Log in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Click the server name.
-4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-go/1-connection-string.png" alt-text="Azure Database for PostgreSQL server name":::
-
-## Build and run Go code
-1. To write Golang code, you can use a plain text editor, such as Notepad in Microsoft Windows, [vi](https://manpages.ubuntu.com/manpages/xenial/man1/nvi.1.html#contenttoc5) or [Nano](https://www.nano-editor.org/) in Ubuntu, or TextEdit in macOS. If you prefer a richer Integrated Development Environment (IDE), try [GoLand](https://www.jetbrains.com/go/) by JetBrains, [Visual Studio Code](https://code.visualstudio.com/) by Microsoft, or [Atom](https://atom.io/).
-2. Paste the Golang code from the following sections into text files, and save into your project folder with file extension \*.go, such as Windows path `%USERPROFILE%\go\src\postgresqlgo\createtable.go` or Linux path `~/go/src/postgresqlgo/createtable.go`.
-3. Locate the `HOST`, `DATABASE`, `USER`, and `PASSWORD` constants in the code, and replace the example values with your own values.
-4. Launch the command prompt or bash shell. Change directory into your project folder. For example, on Windows `cd %USERPROFILE%\go\src\postgresqlgo\`. On Linux `cd ~/go/src/postgresqlgo/`. Some of the IDE environments mentioned offer debug and runtime capabilities without requiring shell commands.
-5. Run the code by typing the command `go run createtable.go` to compile the application and run it.
-6. Alternatively, to build the code into a native application, run `go build createtable.go`, then launch the resulting binary (`createtable.exe` on Windows) to run the application, as shown in the example below.
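-
-For example, from a bash shell on Linux or macOS (use the equivalent Windows paths and `createtable.exe` in the command prompt):
-
-```bash
-cd ~/go/src/postgresqlgo/
-# Compile and run in one step:
-go run createtable.go
-# Or build a native binary, then run it:
-go build createtable.go
-./createtable
-```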
-
-## Connect and create a table
-Use the following code to connect and create a table using **CREATE TABLE** SQL statement, followed by **INSERT INTO** SQL statements to add rows into the table.
-
-The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [pq package](https://godoc.org/github.com/lib/pq) as a driver to communicate with the PostgreSQL server, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line.
-
-The code calls method [sql.Open()](https://godoc.org/github.com/lib/pq#Open) to connect to the Azure Database for PostgreSQL database, and checks the connection using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method several times to run several SQL commands. After each command, a custom checkError() method checks whether an error occurred and panics to exit if one did.
-
-Replace the `HOST`, `DATABASE`, `USER`, and `PASSWORD` parameters with your own values.
-
-```go
-package main
-
-import (
- "database/sql"
- "fmt"
- _ "github.com/lib/pq"
-)
-
-const (
- // Initialize connection constants.
- HOST = "mydemoserver.postgres.database.azure.com"
- DATABASE = "mypgsqldb"
- USER = "mylogin@mydemoserver"
- PASSWORD = "<server_admin_password>"
-)
-
-func checkError(err error) {
- if err != nil {
- panic(err)
- }
-}
-
-func main() {
- // Initialize connection string.
- var connectionString string = fmt.Sprintf("host=%s user=%s password=%s dbname=%s sslmode=require", HOST, USER, PASSWORD, DATABASE)
-
- // Initialize connection object.
- db, err := sql.Open("postgres", connectionString)
- checkError(err)
-
- err = db.Ping()
- checkError(err)
- fmt.Println("Successfully created connection to database")
-
- // Drop previous table of same name if one exists.
- _, err = db.Exec("DROP TABLE IF EXISTS inventory;")
- checkError(err)
- fmt.Println("Finished dropping table (if existed)")
-
- // Create table.
- _, err = db.Exec("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);")
- checkError(err)
- fmt.Println("Finished creating table")
-
- // Insert some data into table.
- sql_statement := "INSERT INTO inventory (name, quantity) VALUES ($1, $2);"
- _, err = db.Exec(sql_statement, "banana", 150)
- checkError(err)
- _, err = db.Exec(sql_statement, "orange", 154)
- checkError(err)
- _, err = db.Exec(sql_statement, "apple", 100)
- checkError(err)
- fmt.Println("Inserted 3 rows of data")
-}
-```
-
-## Read data
-Use the following code to connect and read the data using a **SELECT** SQL statement.
-
-The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [pq package](https://godoc.org/github.com/lib/pq) as a driver to communicate with the PostgreSQL server, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line.
-
-The code calls method [sql.Open()](https://godoc.org/github.com/lib/pq#Open) to connect to the Azure Database for PostgreSQL database, and checks the connection using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The select query is run by calling method [db.Query()](https://go.dev/pkg/database/sql/#DB.Query), and the resulting rows are kept in a variable of type [rows](https://go.dev/pkg/database/sql/#Rows). The code reads the column data values in the current row using method [rows.Scan()](https://go.dev/pkg/database/sql/#Rows.Scan) and loops over the rows using the iterator [rows.Next()](https://go.dev/pkg/database/sql/#Rows.Next) until no more rows exist. Each row's column values are printed to the console. After each operation, a custom checkError() method checks whether an error occurred and panics to exit if one did.
-
-Replace the `HOST`, `DATABASE`, `USER`, and `PASSWORD` parameters with your own values.
-
-```go
-package main
-
-import (
- "database/sql"
- "fmt"
- _ "github.com/lib/pq"
-)
-
-const (
- // Initialize connection constants.
- HOST = "mydemoserver.postgres.database.azure.com"
- DATABASE = "mypgsqldb"
- USER = "mylogin@mydemoserver"
- PASSWORD = "<server_admin_password>"
-)
-
-func checkError(err error) {
- if err != nil {
- panic(err)
- }
-}
-
-func main() {
-
- // Initialize connection string.
- var connectionString string = fmt.Sprintf("host=%s user=%s password=%s dbname=%s sslmode=require", HOST, USER, PASSWORD, DATABASE)
-
- // Initialize connection object.
- db, err := sql.Open("postgres", connectionString)
- checkError(err)
-
- err = db.Ping()
- checkError(err)
- fmt.Println("Successfully created connection to database")
-
- // Read rows from table.
- var id int
- var name string
- var quantity int
-
- sql_statement := "SELECT * from inventory;"
- rows, err := db.Query(sql_statement)
- checkError(err)
- defer rows.Close()
-
- for rows.Next() {
- switch err := rows.Scan(&id, &name, &quantity); err {
- case sql.ErrNoRows:
- fmt.Println("No rows were returned")
- case nil:
- fmt.Printf("Data row = (%d, %s, %d)\n", id, name, quantity)
- default:
- checkError(err)
- }
- }
-}
-```
-
-## Update data
-Use the following code to connect and update the data using an **UPDATE** SQL statement.
-
-The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [pq package](https://godoc.org/github.com/lib/pq) as a driver to communicate with the PostgreSQL server, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line.
-
-The code calls method [sql.Open()](https://godoc.org/github.com/lib/pq#Open) to connect to the Azure Database for PostgreSQL database, and checks the connection using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method to run the SQL statement that updates the table. A custom checkError() method checks whether an error occurred and panics to exit if one did.
-
-Replace the `HOST`, `DATABASE`, `USER`, and `PASSWORD` parameters with your own values.
-```go
-package main
-
-import (
- "database/sql"
- _ "github.com/lib/pq"
- "fmt"
-)
-
-const (
- // Initialize connection constants.
- HOST = "mydemoserver.postgres.database.azure.com"
- DATABASE = "mypgsqldb"
- USER = "mylogin@mydemoserver"
- PASSWORD = "<server_admin_password>"
-)
-
-func checkError(err error) {
- if err != nil {
- panic(err)
- }
-}
-
-func main() {
-
- // Initialize connection string.
- var connectionString string =
- fmt.Sprintf("host=%s user=%s password=%s dbname=%s sslmode=require", HOST, USER, PASSWORD, DATABASE)
-
- // Initialize connection object.
- db, err := sql.Open("postgres", connectionString)
- checkError(err)
-
- err = db.Ping()
- checkError(err)
- fmt.Println("Successfully created connection to database")
-
- // Modify some data in table.
- sql_statement := "UPDATE inventory SET quantity = $2 WHERE name = $1;"
- _, err = db.Exec(sql_statement, "banana", 200)
- checkError(err)
- fmt.Println("Updated 1 row of data")
-}
-```
-
-## Delete data
-Use the following code to connect and delete the data using a **DELETE** SQL statement.
-
-The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [pq package](https://godoc.org/github.com/lib/pq) as a driver to communicate with the PostgreSQL server, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line.
-
-The code calls method [sql.Open()](https://godoc.org/github.com/lib/pq#Open) to connect to the Azure Database for PostgreSQL database, and checks the connection using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method to run the SQL statement that deletes a row from the table. A custom checkError() method checks whether an error occurred and panics to exit if one did.
-
-Replace the `HOST`, `DATABASE`, `USER`, and `PASSWORD` parameters with your own values.
-```go
-package main
-
-import (
- "database/sql"
- _ "github.com/lib/pq"
- "fmt"
-)
-
-const (
- // Initialize connection constants.
- HOST = "mydemoserver.postgres.database.azure.com"
- DATABASE = "mypgsqldb"
- USER = "mylogin@mydemoserver"
- PASSWORD = "<server_admin_password>"
-)
-
-func checkError(err error) {
- if err != nil {
- panic(err)
- }
-}
-
-func main() {
-
- // Initialize connection string.
- var connectionString string =
- fmt.Sprintf("host=%s user=%s password=%s dbname=%s sslmode=require", HOST, USER, PASSWORD, DATABASE)
-
- // Initialize connection object.
- db, err := sql.Open("postgres", connectionString)
- checkError(err)
-
- err = db.Ping()
- checkError(err)
- fmt.Println("Successfully created connection to database")
-
- // Delete some data from table.
- sql_statement := "DELETE FROM inventory WHERE name = $1;"
- _, err = db.Exec(sql_statement, "orange")
- checkError(err)
- fmt.Println("Deleted 1 row of data")
-}
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Migrate your database using Export and Import](./howto-migrate-using-export-and-import.md)
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-java.md
- Title: 'Quickstart: Use Java and JDBC with Azure Database for PostgreSQL'
-description: In this quickstart, you learn how to use Java and JDBC with an Azure Database for PostgreSQL.
- Previously updated: 08/17/2020
-# Quickstart: Use Java and JDBC with Azure Database for PostgreSQL
-
-This topic demonstrates creating a sample application that uses Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to store and retrieve information in [Azure Database for PostgreSQL](./index.yml).
-
-JDBC is the standard Java API to connect to traditional relational databases.
-
-## Prerequisites
-- An Azure account. If you don't have one, [get a free trial](https://azure.microsoft.com/free/).
-- [Azure Cloud Shell](../cloud-shell/quickstart.md) or [Azure CLI](/cli/azure/install-azure-cli). We recommend Azure Cloud Shell so you'll be logged in automatically and have access to all the tools you'll need.
-- A supported [Java Development Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8 (included in Azure Cloud Shell).
-- The [Apache Maven](https://maven.apache.org/) build tool.
-
-## Prepare the working environment
-
-We are going to use environment variables to limit typing mistakes, and to make it easier for you to customize the following configuration for your specific needs.
-
-Set up those environment variables by using the following commands:
-
-```bash
-AZ_RESOURCE_GROUP=database-workshop
-AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
-AZ_LOCATION=<YOUR_AZURE_REGION>
-AZ_POSTGRESQL_USERNAME=demo
-AZ_POSTGRESQL_PASSWORD=<YOUR_POSTGRESQL_PASSWORD>
-AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
-```
-
-Replace the placeholders with the following values, which are used throughout this article:
-- `<YOUR_DATABASE_NAME>`: The name of your PostgreSQL server. It should be unique across Azure.
-- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can get the full list of available regions by entering `az account list-locations`.
-- `<YOUR_POSTGRESQL_PASSWORD>`: The password of your PostgreSQL database server. That password should have a minimum of eight characters, drawn from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).
-- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Java application. One convenient way to find it is to point your browser to [whatismyip.akamai.com](http://whatismyip.akamai.com/).
-
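-For example, a filled-in configuration might look like the following. Every value here is a hypothetical placeholder; substitute your own:
-
-```bash
-# Example values only - replace each value with your own configuration.
-AZ_RESOURCE_GROUP=database-workshop
-AZ_DATABASE_NAME=pgsql-demo-1234
-AZ_LOCATION=eastus
-AZ_POSTGRESQL_USERNAME=demo
-AZ_POSTGRESQL_PASSWORD='Str0ng!Passw0rd'
-AZ_LOCAL_IP_ADDRESS=203.0.113.17
-```
-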
-Next, create a resource group by using the following command:
-
-```azurecli
-az group create \
- --name $AZ_RESOURCE_GROUP \
- --location $AZ_LOCATION \
- | jq
-```
-
-> [!NOTE]
-> We use the `jq` utility to display JSON data and make it more readable. This utility is installed by default on [Azure Cloud Shell](https://shell.azure.com/). If you don't like that utility, you can safely remove the `| jq` part of all the commands we'll use.
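-
-For example, without `jq`, the resource group command above becomes:
-
-```azurecli
-az group create \
-    --name $AZ_RESOURCE_GROUP \
-    --location $AZ_LOCATION
-```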
-
-## Create an Azure Database for PostgreSQL instance
-
-The first thing we'll create is a managed PostgreSQL server.
-
-> [!NOTE]
-> You can read more detailed information about creating PostgreSQL servers in [Create an Azure Database for PostgreSQL server by using the Azure portal](./quickstart-create-server-database-portal.md).
-
-In [Azure Cloud Shell](https://shell.azure.com/), run the following command:
-
-```azurecli
-az postgres server create \
- --resource-group $AZ_RESOURCE_GROUP \
- --name $AZ_DATABASE_NAME \
- --location $AZ_LOCATION \
- --sku-name B_Gen5_1 \
- --storage-size 5120 \
- --admin-user $AZ_POSTGRESQL_USERNAME \
- --admin-password $AZ_POSTGRESQL_PASSWORD \
- | jq
-```
-
-This command creates a small PostgreSQL server.
-
-### Configure a firewall rule for your PostgreSQL server
-
-Azure Database for PostgreSQL instances are secured by default. They have a firewall that doesn't allow any incoming connection. To be able to use your database, you need to add a firewall rule that will allow the local IP address to access the database server.
-
-Because you configured your local IP address at the beginning of this article, you can open the server's firewall by running the following command:
-
-```azurecli
-az postgres server firewall-rule create \
- --resource-group $AZ_RESOURCE_GROUP \
- --name $AZ_DATABASE_NAME-database-allow-local-ip \
- --server $AZ_DATABASE_NAME \
- --start-ip-address $AZ_LOCAL_IP_ADDRESS \
- --end-ip-address $AZ_LOCAL_IP_ADDRESS \
- | jq
-```
-
-### Configure a PostgreSQL database
-
-The PostgreSQL server that you created earlier is empty. It doesn't have any database that you can use with the Java application. Create a new database called `demo` by using the following command:
-
-```azurecli
-az postgres db create \
- --resource-group $AZ_RESOURCE_GROUP \
- --name demo \
- --server-name $AZ_DATABASE_NAME \
- | jq
-```
-
-### Create a new Java project
-
-Using your favorite IDE, create a new Java project, and add a `pom.xml` file in its root directory:
-
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
- <modelVersion>4.0.0</modelVersion>
- <groupId>com.example</groupId>
- <artifactId>demo</artifactId>
- <version>0.0.1-SNAPSHOT</version>
- <name>demo</name>
-
- <properties>
- <java.version>1.8</java.version>
- <maven.compiler.source>1.8</maven.compiler.source>
- <maven.compiler.target>1.8</maven.compiler.target>
- </properties>
-
- <dependencies>
- <dependency>
- <groupId>org.postgresql</groupId>
- <artifactId>postgresql</artifactId>
- <version>42.2.12</version>
- </dependency>
- </dependencies>
-</project>
-```
-
-This file is an [Apache Maven](https://maven.apache.org/) project file (POM) that configures our project to use:
-- Java 8
-- A recent PostgreSQL driver for Java
-
-### Prepare a configuration file to connect to Azure Database for PostgreSQL
-
-Create a *src/main/resources/application.properties* file, and add:
-
-```properties
-url=jdbc:postgresql://$AZ_DATABASE_NAME.postgres.database.azure.com:5432/demo?ssl=true&sslmode=require
-user=demo@$AZ_DATABASE_NAME
-password=$AZ_POSTGRESQL_PASSWORD
-```
-- Replace the two `$AZ_DATABASE_NAME` variables with the value that you configured at the beginning of this article.
-- Replace the `$AZ_POSTGRESQL_PASSWORD` variable with the value that you configured at the beginning of this article.
-
-> [!NOTE]
-> We append `?ssl=true&sslmode=require` to the configuration property `url`, to tell the JDBC driver to use TLS ([Transport Layer Security](https://en.wikipedia.org/wiki/Transport_Layer_Security)) when connecting to the database. It is mandatory to use TLS with Azure Database for PostgreSQL, and it is a good security practice.
-
-### Create an SQL file to generate the database schema
-
-We will use a *src/main/resources/schema.sql* file to create a database schema. Create that file with the following content:
-
-```sql
-DROP TABLE IF EXISTS todo;
-CREATE TABLE todo (id SERIAL PRIMARY KEY, description VARCHAR(255), details VARCHAR(4096), done BOOLEAN);
-```
-
-## Code the application
-
-### Connect to the database
-
-Next, add the Java code that will use JDBC to store and retrieve data from your PostgreSQL server.
-
-Create a *src/main/java/DemoApplication.java* file that contains:
-
-```java
-package com.example.demo;
-
-import java.sql.*;
-import java.util.*;
-import java.util.logging.Logger;
-
-public class DemoApplication {
-
- private static final Logger log;
-
- static {
- System.setProperty("java.util.logging.SimpleFormatter.format", "[%4$-7s] %5$s %n");
-        log = Logger.getLogger(DemoApplication.class.getName());
- }
-
- public static void main(String[] args) throws Exception {
- log.info("Loading application properties");
- Properties properties = new Properties();
- properties.load(DemoApplication.class.getClassLoader().getResourceAsStream("application.properties"));
-
- log.info("Connecting to the database");
- Connection connection = DriverManager.getConnection(properties.getProperty("url"), properties);
- log.info("Database connection test: " + connection.getCatalog());
-
- log.info("Create database schema");
- Scanner scanner = new Scanner(DemoApplication.class.getClassLoader().getResourceAsStream("schema.sql"));
- Statement statement = connection.createStatement();
- while (scanner.hasNextLine()) {
- statement.execute(scanner.nextLine());
- }
-
- /*
- Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
- insertData(todo, connection);
- todo = readData(connection);
- todo.setDetails("congratulations, you have updated data!");
- updateData(todo, connection);
- deleteData(todo, connection);
- */
-
- log.info("Closing database connection");
- connection.close();
- }
-}
-```
-
-This Java code will use the *application.properties* and the *schema.sql* files that we created earlier, in order to connect to the PostgreSQL server and create a schema that will store our data.
-
-In this file, you can see that we commented out calls to the methods that insert, read, update, and delete data: we will code those methods in the rest of this article, and you will be able to uncomment them one after the other.
-
-> [!NOTE]
-> The database credentials are stored in the *user* and *password* properties of the *application.properties* file. Those credentials are used when executing `DriverManager.getConnection(properties.getProperty("url"), properties);`, as the properties file is passed as an argument.
-
-You can now execute this main class with your favorite tool:
-- Using your IDE, you should be able to right-click on the *DemoApplication* class and execute it.
-- Using Maven, you can run the application by executing `mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication"`, as shown below.
-
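-For example, from the project's root directory:
-
-```bash
-mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication"
-```
-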
-The application should connect to the Azure Database for PostgreSQL, create a database schema, and then close the connection, as you should see in the console logs:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Closing database connection
-```
-
-### Create a domain class
-
-Create a new `Todo` Java class, next to the `DemoApplication` class, and add the following code:
-
-```java
-package com.example.demo;
-
-public class Todo {
-
- private Long id;
- private String description;
- private String details;
- private boolean done;
-
- public Todo() {
- }
-
- public Todo(Long id, String description, String details, boolean done) {
- this.id = id;
- this.description = description;
- this.details = details;
- this.done = done;
- }
-
- public Long getId() {
- return id;
- }
-
- public void setId(Long id) {
- this.id = id;
- }
-
- public String getDescription() {
- return description;
- }
-
- public void setDescription(String description) {
- this.description = description;
- }
-
- public String getDetails() {
- return details;
- }
-
- public void setDetails(String details) {
- this.details = details;
- }
-
- public boolean isDone() {
- return done;
- }
-
- public void setDone(boolean done) {
- this.done = done;
- }
-
- @Override
- public String toString() {
- return "Todo{" +
- "id=" + id +
- ", description='" + description + '\'' +
- ", details='" + details + '\'' +
- ", done=" + done +
- '}';
- }
-}
-```
-
-This class is a domain model mapped to the `todo` table that you created when executing the *schema.sql* script.
-
-### Insert data into Azure Database for PostgreSQL
-
-In the *src/main/java/DemoApplication.java* file, after the main method, add the following method to insert data into the database:
-
-```java
-private static void insertData(Todo todo, Connection connection) throws SQLException {
- log.info("Insert data");
- PreparedStatement insertStatement = connection
- .prepareStatement("INSERT INTO todo (id, description, details, done) VALUES (?, ?, ?, ?);");
-
- insertStatement.setLong(1, todo.getId());
- insertStatement.setString(2, todo.getDescription());
- insertStatement.setString(3, todo.getDetails());
- insertStatement.setBoolean(4, todo.isDone());
- insertStatement.executeUpdate();
-}
-```
-
-You can now uncomment the two following lines in the `main` method:
-
-```java
-Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
-insertData(todo, connection);
-```
-
-Executing the main class should now produce the following output:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Insert data
-[INFO ] Closing database connection
-```
-
-### Reading data from Azure Database for PostgreSQL
-
-Let's read the data previously inserted, to validate that our code works correctly.
-
-In the *src/main/java/DemoApplication.java* file, after the `insertData` method, add the following method to read data from the database:
-
-```java
-private static Todo readData(Connection connection) throws SQLException {
- log.info("Read data");
- PreparedStatement readStatement = connection.prepareStatement("SELECT * FROM todo;");
- ResultSet resultSet = readStatement.executeQuery();
- if (!resultSet.next()) {
- log.info("There is no data in the database!");
- return null;
- }
- Todo todo = new Todo();
- todo.setId(resultSet.getLong("id"));
- todo.setDescription(resultSet.getString("description"));
- todo.setDetails(resultSet.getString("details"));
- todo.setDone(resultSet.getBoolean("done"));
- log.info("Data read from the database: " + todo.toString());
- return todo;
-}
-```
-
-You can now uncomment the following line in the `main` method:
-
-```java
-todo = readData(connection);
-```
-
-Executing the main class should now produce the following output:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Insert data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
-[INFO ] Closing database connection
-```
-
-### Updating data in Azure Database for PostgreSQL
-
-Let's update the data we previously inserted.
-
-Still in the *src/main/java/DemoApplication.java* file, after the `readData` method, add the following method to update data inside the database:
-
-```java
-private static void updateData(Todo todo, Connection connection) throws SQLException {
- log.info("Update data");
- PreparedStatement updateStatement = connection
- .prepareStatement("UPDATE todo SET description = ?, details = ?, done = ? WHERE id = ?;");
-
- updateStatement.setString(1, todo.getDescription());
- updateStatement.setString(2, todo.getDetails());
- updateStatement.setBoolean(3, todo.isDone());
- updateStatement.setLong(4, todo.getId());
- updateStatement.executeUpdate();
- readData(connection);
-}
-```
-
-You can now uncomment the two following lines in the `main` method:
-
-```java
-todo.setDetails("congratulations, you have updated data!");
-updateData(todo, connection);
-```
-
-Executing the main class should now produce the following output:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Insert data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
-[INFO ] Update data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true}
-[INFO ] Closing database connection
-```
-
-### Deleting data in Azure Database for PostgreSQL
-
-Finally, let's delete the data we previously inserted.
-
-Still in the *src/main/java/DemoApplication.java* file, after the `updateData` method, add the following method to delete data inside the database:
-
-```java
-private static void deleteData(Todo todo, Connection connection) throws SQLException {
- log.info("Delete data");
- PreparedStatement deleteStatement = connection.prepareStatement("DELETE FROM todo WHERE id = ?;");
- deleteStatement.setLong(1, todo.getId());
- deleteStatement.executeUpdate();
- readData(connection);
-}
-```
-
-You can now uncomment the following line in the `main` method:
-
-```java
-deleteData(todo, connection);
-```
-
-Executing the main class should now produce the following output:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Insert data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
-[INFO ] Update data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true}
-[INFO ] Delete data
-[INFO ] Read data
-[INFO ] There is no data in the database!
-[INFO ] Closing database connection
-```
-
-## Clean up resources
-
-Congratulations! You've created a Java application that uses JDBC to store and retrieve data from Azure Database for PostgreSQL.
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Migrate your database using Export and Import](./howto-migrate-using-export-and-import.md)
postgresql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-nodejs.md
- Title: 'Quickstart: Use Node.js to connect to Azure Database for PostgreSQL - Single Server'
-description: This quickstart provides a Node.js code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server.
- Previously updated: 5/6/2019
-# Quickstart: Use Node.js to connect and query data in Azure Database for PostgreSQL - Single Server
-
-In this quickstart, you connect to an Azure Database for PostgreSQL using a Node.js application. It shows how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you are familiar with developing using Node.js, and are new to working with Azure Database for PostgreSQL.
-
-## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-- Completion of [Quickstart: Create an Azure Database for PostgreSQL server in the Azure portal](quickstart-create-server-database-portal.md) or [Quickstart: Create an Azure Database for PostgreSQL using the Azure CLI](quickstart-create-server-database-azure-cli.md).
-- [Node.js](https://nodejs.org)
-
-## Install pg client
-Install [pg](https://www.npmjs.com/package/pg), which is a PostgreSQL client for Node.js.
-
-To do so, run the node package manager (npm) for JavaScript from your command line to install the pg client.
-```bash
-npm install pg
-```
-
-Verify the installation by listing the packages installed.
-```bash
-npm list
-```
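-
-To check only the `pg` package rather than the full dependency tree, you can run:
-
-```bash
-npm list pg
-```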
-
-## Get connection information
-Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials.
-
-1. In the [Azure portal](https://portal.azure.com/), search for and select the server you have created (such as **mydemoserver**).
-
-1. From the server's **Overview** panel, make a note of the **Server name** and **Admin username**. If you forget your password, you can also reset the password from this panel.
-
- :::image type="content" source="./media/connect-nodejs/server-details-azure-database-postgresql.png" alt-text="Azure Database for PostgreSQL connection string":::
-
-## Running the JavaScript code in Node.js
-You may launch Node.js from the Bash shell, Terminal, or Windows Command Prompt by typing `node`, and then run the example JavaScript code interactively by copying and pasting it at the prompt. Alternatively, you may save the JavaScript code into a text file and run `node filename.js`, passing the file name as a parameter.
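-
-For example, if you save the first code sample below as *createtable.js* (the file name is arbitrary), you would run:
-
-```bash
-node createtable.js
-```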
-
-## Connect, create table, and insert data
-Use the following code to connect and load the data using **CREATE TABLE** and **INSERT INTO** SQL statements.
-The [pg.Client](https://github.com/brianc/node-postgres/wiki/Client) object is used to interface with the PostgreSQL server. The [pg.Client.connect()](https://github.com/brianc/node-postgres/wiki/Client#method-connect) function is used to establish the connection to the server. The [pg.Client.query()](https://github.com/brianc/node-postgres/wiki/Query) function is used to execute the SQL query against the PostgreSQL database.
-
-Replace the host, dbname, user, and password parameters with the values that you specified when you created the server and database.
-
-```javascript
-const pg = require('pg');
-
-const config = {
- host: '<your-db-server-name>.postgres.database.azure.com',
- // Do not hard code your username and password.
- // Consider using Node environment variables.
- user: '<your-db-username>',
- password: '<your-password>',
- database: '<name-of-database>',
- port: 5432,
- ssl: true
-};
-
-const client = new pg.Client(config);
-
-client.connect(err => {
- if (err) throw err;
- else {
- queryDatabase();
- }
-});
-
-function queryDatabase() {
- const query = `
- DROP TABLE IF EXISTS inventory;
- CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);
- INSERT INTO inventory (name, quantity) VALUES ('banana', 150);
- INSERT INTO inventory (name, quantity) VALUES ('orange', 154);
- INSERT INTO inventory (name, quantity) VALUES ('apple', 100);
- `;
-
- client
- .query(query)
- .then(() => {
- console.log('Table created successfully!');
- client.end(console.log('Closed client connection'));
- })
- .catch(err => console.log(err))
- .then(() => {
- console.log('Finished execution, exiting now');
- process.exit();
- });
-}
-```
-
-## Read data
-Use the following code to connect and read the data using a **SELECT** SQL statement. The [pg.Client](https://github.com/brianc/node-postgres/wiki/Client) object is used to interface with the PostgreSQL server. The [pg.Client.connect()](https://github.com/brianc/node-postgres/wiki/Client#method-connect) function is used to establish the connection to the server. The [pg.Client.query()](https://github.com/brianc/node-postgres/wiki/Query) function is used to execute the SQL query against the PostgreSQL database.
-
-Replace the host, dbname, user, and password parameters with the values that you specified when you created the server and database.
-
-```javascript
-const pg = require('pg');
-
-const config = {
- host: '<your-db-server-name>.postgres.database.azure.com',
- // Do not hard code your username and password.
- // Consider using Node environment variables.
- user: '<your-db-username>',
- password: '<your-password>',
- database: '<name-of-database>',
- port: 5432,
- ssl: true
-};
-
-const client = new pg.Client(config);
-
-client.connect(err => {
- if (err) throw err;
- else { queryDatabase(); }
-});
-
-function queryDatabase() {
-
- console.log(`Running query to PostgreSQL server: ${config.host}`);
-
- const query = 'SELECT * FROM inventory;';
-
- client.query(query)
- .then(res => {
- const rows = res.rows;
-
- rows.map(row => {
- console.log(`Read: ${JSON.stringify(row)}`);
- });
-
- process.exit();
- })
- .catch(err => {
- console.log(err);
- });
-}
-```
-
-## Update data
-Use the following code to connect and update the data using an **UPDATE** SQL statement. The [pg.Client](https://github.com/brianc/node-postgres/wiki/Client) object is used to interface with the PostgreSQL server. The [pg.Client.connect()](https://github.com/brianc/node-postgres/wiki/Client#method-connect) function is used to establish the connection to the server. The [pg.Client.query()](https://github.com/brianc/node-postgres/wiki/Query) function is used to execute the SQL query against the PostgreSQL database.
-
-Replace the host, dbname, user, and password parameters with the values that you specified when you created the server and database.
-
-```javascript
-const pg = require('pg');
-
-const config = {
- host: '<your-db-server-name>.postgres.database.azure.com',
- // Do not hard code your username and password.
- // Consider using Node environment variables.
- user: '<your-db-username>',
- password: '<your-password>',
- database: '<name-of-database>',
- port: 5432,
- ssl: true
-};
-
-const client = new pg.Client(config);
-
-client.connect(err => {
- if (err) throw err;
- else {
- queryDatabase();
- }
-});
-
-function queryDatabase() {
- const query = `
- UPDATE inventory
- SET quantity= 1000 WHERE name='banana';
- `;
-
- client
- .query(query)
- .then(result => {
- console.log('Update completed');
- console.log(`Rows affected: ${result.rowCount}`);
- })
- .catch(err => {
- console.log(err);
- throw err;
- });
-}
-```
-
-## Delete data
-Use the following code to connect and delete the data using a **DELETE** SQL statement. The [pg.Client](https://github.com/brianc/node-postgres/wiki/Client) object is used to interface with the PostgreSQL server. The [pg.Client.connect()](https://github.com/brianc/node-postgres/wiki/Client#method-connect) function is used to establish the connection to the server. The [pg.Client.query()](https://github.com/brianc/node-postgres/wiki/Query) function is used to execute the SQL query against the PostgreSQL database.
-
-Replace the host, dbname, user, and password parameters with the values that you specified when you created the server and database.
-
-```javascript
-const pg = require('pg');
-
-const config = {
- host: '<your-db-server-name>.postgres.database.azure.com',
- // Do not hard code your username and password.
- // Consider using Node environment variables.
- user: '<your-db-username>',
- password: '<your-password>',
- database: '<name-of-database>',
- port: 5432,
- ssl: true
-};
-
-const client = new pg.Client(config);
-
-client.connect(err => {
- if (err) {
- throw err;
- } else {
- queryDatabase();
- }
-});
-
-function queryDatabase() {
- const query = `
- DELETE FROM inventory
- WHERE name = 'apple';
- `;
-
- client
- .query(query)
- .then(result => {
- console.log('Delete completed');
- console.log(`Rows affected: ${result.rowCount}`);
- })
- .catch(err => {
- console.log(err);
- throw err;
- });
-}
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Migrate your database using Export and Import](./howto-migrate-using-export-and-import.md)
postgresql Connect Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-php.md
- Title: 'Quickstart: Connect with PHP - Azure Database for PostgreSQL - Single Server'
-description: This quickstart provides a PHP code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server.
- Previously updated: 2/28/2018
-# Quickstart: Use PHP to connect and query data in Azure Database for PostgreSQL - Single Server
-
-This quickstart demonstrates how to connect to an Azure Database for PostgreSQL using a [PHP](https://secure.php.net/manual/intro-whatis.php) application. It shows how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you are familiar with developing using PHP, and are new to working with Azure Database for PostgreSQL.
-
-## Prerequisites
-This quickstart uses the resources created in either of these guides as a starting point:
-- [Create DB - Portal](quickstart-create-server-database-portal.md)
-- [Create DB - Azure CLI](quickstart-create-server-database-azure-cli.md)
-
-## Install PHP
-Install PHP on your own server, or create an Azure [web app](../app-service/overview.md) that includes PHP.
-
-### Windows
-- Download the [PHP 7.1.4 non-thread safe (x64) version](https://windows.php.net/download#php-7.1).
-- Install PHP and refer to the [PHP manual](https://secure.php.net/manual/install.windows.php) for further configuration.
-- The code uses the **pgsql** extension (ext/php_pgsql.dll) that is included in the PHP installation.
-- Enable the **pgsql** extension by editing the php.ini configuration file, typically located at `C:\Program Files\PHP\v7.1\php.ini`. The configuration file should contain a line with the text `extension=php_pgsql.dll`. If it is not shown, add the text and save the file. If the text is present but commented out with a semicolon prefix, uncomment the text by removing the semicolon.
-
-### Linux (Ubuntu)
-- Download the [PHP 7.1.4 non-thread safe (x64) version](https://secure.php.net/downloads.php).
-- Install PHP and refer to the [PHP manual](https://secure.php.net/manual/install.unix.php) for further configuration.
-- The code uses the **pgsql** extension (php_pgsql.so). Install it by running `sudo apt-get install php-pgsql`.
-- Enable the **pgsql** extension by editing the `/etc/php/7.0/mods-available/pgsql.ini` configuration file. The configuration file should contain a line with the text `extension=php_pgsql.so`. If it is not shown, add the text and save the file. If the text is present but commented out with a semicolon prefix, uncomment the text by removing the semicolon.
-
-### MacOS
-- Download the [PHP 7.1.4 version](https://secure.php.net/downloads.php).
-- Install PHP and refer to the [PHP manual](https://secure.php.net/manual/install.macosx.php) for further configuration.
-
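-On any platform, you can confirm that the **pgsql** extension is loaded by listing PHP's modules (this assumes `php` is on your PATH):
-
-```bash
-# Prints "pgsql" when the extension is enabled.
-php -m | grep -i pgsql
-```
-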
-## Get connection information
-Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials.
-
-1. Log in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Click the server name.
-4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-php/1-connection-string.png" alt-text="Azure Database for PostgreSQL server name":::
-
-## Connect and create a table
-Use the following code to connect and create a table using **CREATE TABLE** SQL statement, followed by **INSERT INTO** SQL statements to add rows into the table.
-
-The code calls the [pg_connect()](https://secure.php.net/manual/en/function.pg-connect.php) method to connect to Azure Database for PostgreSQL. Then it calls the [pg_query()](https://secure.php.net/manual/en/function.pg-query.php) method several times to run several commands, and [pg_last_error()](https://secure.php.net/manual/en/function.pg-last-error.php) to check the details if an error occurred each time. Then it calls the [pg_close()](https://secure.php.net/manual/en/function.pg-close.php) method to close the connection.
-
-Replace the `$host`, `$database`, `$user`, and `$password` parameters with your own values.
-
-```php
-<?php
- // Initialize connection variables.
- $host = "mydemoserver.postgres.database.azure.com";
- $database = "mypgsqldb";
- $user = "mylogin@mydemoserver";
- $password = "<server_admin_password>";
-
- // Initialize connection object.
- $connection = pg_connect("host=$host dbname=$database user=$user password=$password")
- or die("Failed to create connection to database: ". pg_last_error(). "<br/>");
- print "Successfully created connection to database.<br/>";
-
- // Drop previous table of same name if one exists.
- $query = "DROP TABLE IF EXISTS inventory;";
- pg_query($connection, $query)
- or die("Encountered an error when executing given sql statement: ". pg_last_error(). "<br/>");
- print "Finished dropping table (if existed).<br/>";
-
- // Create table.
- $query = "CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);";
- pg_query($connection, $query)
- or die("Encountered an error when executing given sql statement: ". pg_last_error(). "<br/>");
- print "Finished creating table.<br/>";
-
- // Insert some data into table.
- $name = '\'banana\'';
- $quantity = 150;
- $query = "INSERT INTO inventory (name, quantity) VALUES ($name, $quantity);";
- pg_query($connection, $query)
- or die("Encountered an error when executing given sql statement: ". pg_last_error(). "<br/>");
-
- $name = '\'orange\'';
- $quantity = 154;
- $query = "INSERT INTO inventory (name, quantity) VALUES ($name, $quantity);";
- pg_query($connection, $query)
- or die("Encountered an error when executing given sql statement: ". pg_last_error(). "<br/>");
-
- $name = '\'apple\'';
- $quantity = 100;
- $query = "INSERT INTO inventory (name, quantity) VALUES ($name, $quantity);";
- pg_query($connection, $query)
-        or die("Encountered an error when executing given sql statement: ". pg_last_error(). "<br/>");
-
- print "Inserted 3 rows of data.<br/>";
-
- // Closing connection
- pg_close($connection);
-?>
-```
-
-## Read data
-Use the following code to connect and read the data using a **SELECT** SQL statement.
-
-The code calls the [pg_connect()](https://secure.php.net/manual/en/function.pg-connect.php) method to connect to Azure Database for PostgreSQL. Then it calls the [pg_query()](https://secure.php.net/manual/en/function.pg-query.php) method to run the SELECT command, keeping the results in a result set, and [pg_last_error()](https://secure.php.net/manual/en/function.pg-last-error.php) to check the details if an error occurred. To read the result set, the [pg_fetch_row()](https://secure.php.net/manual/en/function.pg-fetch-row.php) method is called in a loop, once per row, and the row data is retrieved in an array `$row`, with one data value per column in each array position. To free the result set, the [pg_free_result()](https://secure.php.net/manual/en/function.pg-free-result.php) method is called. Then it calls the [pg_close()](https://secure.php.net/manual/en/function.pg-close.php) method to close the connection.
-
-Replace the `$host`, `$database`, `$user`, and `$password` parameters with your own values.
-
-```php
-<?php
- // Initialize connection variables.
- $host = "mydemoserver.postgres.database.azure.com";
- $database = "mypgsqldb";
- $user = "mylogin@mydemoserver";
- $password = "<server_admin_password>";
-
- // Initialize connection object.
- $connection = pg_connect("host=$host dbname=$database user=$user password=$password")
- or die("Failed to create connection to database: ". pg_last_error(). "<br/>");
-
- print "Successfully created connection to database. <br/>";
-
- // Perform some SQL queries over the connection.
- $query = "SELECT * from inventory";
- $result_set = pg_query($connection, $query)
- or die("Encountered an error when executing given sql statement: ". pg_last_error(). "<br/>");
- while ($row = pg_fetch_row($result_set))
- {
- print "Data row = ($row[0], $row[1], $row[2]). <br/>";
- }
-
- // Free result_set
- pg_free_result($result_set);
-
- // Closing connection
- pg_close($connection);
-?>
-```
-
-## Update data
-Use the following code to connect and update the data using an **UPDATE** SQL statement.
-
-The code calls the [pg_connect()](https://secure.php.net/manual/en/function.pg-connect.php) method to connect to Azure Database for PostgreSQL. Then it calls the [pg_query()](https://secure.php.net/manual/en/function.pg-query.php) method to run a command, and [pg_last_error()](https://secure.php.net/manual/en/function.pg-last-error.php) to check the details if an error occurred. Then it calls the [pg_close()](https://secure.php.net/manual/en/function.pg-close.php) method to close the connection.
-
-Replace the `$host`, `$database`, `$user`, and `$password` parameters with your own values.
-
-```php
-<?php
- // Initialize connection variables.
- $host = "mydemoserver.postgres.database.azure.com";
- $database = "mypgsqldb";
- $user = "mylogin@mydemoserver";
- $password = "<server_admin_password>";
-
- // Initialize connection object.
- $connection = pg_connect("host=$host dbname=$database user=$user password=$password")
- or die("Failed to create connection to database: ". pg_last_error(). ".<br/>");
-
- print "Successfully created connection to database. <br/>";
-
- // Modify some data in table.
- $new_quantity = 200;
- $name = '\'banana\'';
- $query = "UPDATE inventory SET quantity = $new_quantity WHERE name = $name;";
- pg_query($connection, $query)
- or die("Encountered an error when executing given sql statement: ". pg_last_error(). ".<br/>");
- print "Updated 1 row of data. </br>";
-
- // Closing connection
- pg_close($connection);
-?>
-```
-
-## Delete data
-Use the following code to connect and delete the data using a **DELETE** SQL statement.
-
-The code calls the [pg_connect()](https://secure.php.net/manual/en/function.pg-connect.php) method to connect to Azure Database for PostgreSQL. Then it calls the [pg_query()](https://secure.php.net/manual/en/function.pg-query.php) method to run a command, and [pg_last_error()](https://secure.php.net/manual/en/function.pg-last-error.php) to check the details if an error occurred. Then it calls the [pg_close()](https://secure.php.net/manual/en/function.pg-close.php) method to close the connection.
-
-Replace the `$host`, `$database`, `$user`, and `$password` parameters with your own values.
-
-```php
-<?php
- // Initialize connection variables.
- $host = "mydemoserver.postgres.database.azure.com";
- $database = "mypgsqldb";
- $user = "mylogin@mydemoserver";
- $password = "<server_admin_password>";
-
- // Initialize connection object.
- $connection = pg_connect("host=$host dbname=$database user=$user password=$password")
- or die("Failed to create connection to database: ". pg_last_error(). ". </br>");
-
- print "Successfully created connection to database. <br/>";
-
- // Delete some data from table.
- $name = '\'orange\'';
- $query = "DELETE FROM inventory WHERE name = $name;";
- pg_query($connection, $query)
- or die("Encountered an error when executing given sql statement: ". pg_last_error(). ". <br/>");
- print "Deleted 1 row of data. <br/>";
-
- // Closing connection
- pg_close($connection);
-?>
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Migrate your database using Export and Import](./howto-migrate-using-export-and-import.md)
postgresql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-python.md
- Title: 'Quickstart: Connect with Python - Azure Database for PostgreSQL - Single Server'
-description: This quickstart provides Python code samples that you can use to connect and query data from Azure Database for PostgreSQL - Single Server.
- Previously updated: 10/28/2020
-# Quickstart: Use Python to connect and query data in Azure Database for PostgreSQL - Single Server
-
-In this quickstart, you learn how to connect to a database on Azure Database for PostgreSQL Single Server and run SQL statements to query data using Python on macOS, Ubuntu Linux, or Windows.
-
-> [!TIP]
-> If you want to build a Django application with PostgreSQL, check out the [Deploy a Django web app with PostgreSQL](../app-service/tutorial-python-postgresql-app.md) tutorial.
-## Prerequisites
-For this quickstart you need:
-
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-- Create an Azure Database for PostgreSQL single server using [Azure portal](./quickstart-create-server-database-portal.md) <br/> or [Azure CLI](./quickstart-create-server-database-azure-cli.md) if you do not have one.
-- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.
-
- |Action| Connectivity method|How-to guide|
- |: |: |: |
- | **Configure firewall rules** | Public | [Portal](./howto-manage-firewall-using-portal.md) <br/> [CLI](./howto-manage-firewall-using-cli.md)|
- | **Configure Service Endpoint** | Public | [Portal](./howto-manage-vnet-using-portal.md) <br/> [CLI](./howto-manage-vnet-using-cli.md)|
- | **Configure private link** | Private | [Portal](./howto-configure-privatelink-portal.md) <br/> [CLI](./howto-configure-privatelink-cli.md) |
-
-- [Python](https://www.python.org/downloads/) 2.7 or 3.6+.
-- Latest [pip](https://pip.pypa.io/en/stable/installing/) package installer.
-- Install [psycopg2](https://pypi.python.org/pypi/psycopg2-binary/) using `pip install psycopg2-binary` in a terminal or command prompt window. For more information, see [how to install `psycopg2`](https://www.psycopg.org/docs/install.html).
-
-## Get database connection information
-Connecting to an Azure Database for PostgreSQL database requires the fully qualified server name and login credentials. You can get this information from the Azure portal.
-
-1. In the [Azure portal](https://portal.azure.com/), search for and select your Azure Database for PostgreSQL server name.
-1. On the server's **Overview** page, copy the fully qualified **Server name** and the **Admin username**. The fully qualified **Server name** is always of the form *\<my-server-name>.postgres.database.azure.com*, and the **Admin username** is always of the form *\<my-admin-username>@\<my-server-name>*.
-
- You also need your admin password. If you forget it, you can reset it from this page.
-
- :::image type="content" source="./media/connect-python/1-connection-string.png" alt-text="Azure Database for PostgreSQL server name":::
-
-> [!IMPORTANT]
-> Replace the following values:
-> - `<server-name>` and `<admin-username>` with the values you copied from the Azure portal.
-> - `<admin-password>` with your server password.
-> - `<database-name>`: a default database named *postgres* was automatically created when you created your server. You can rename that database or [create a new database](https://www.postgresql.org/docs/current/sql-createdatabase.html) by using SQL commands.
-
-## Step 1: Connect and insert data
-The following code example connects to your Azure Database for PostgreSQL database using
-- [psycopg2.connect](https://www.psycopg.org/docs/connection.html) function, and loads data with a SQL **INSERT** statement.
-- [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) function executes the SQL query against the database.
-
-```Python
-import psycopg2
-
-# Update connection string information
-host = "<server-name>"
-dbname = "<database-name>"
-user = "<admin-username>"
-password = "<admin-password>"
-sslmode = "require"
-
-# Construct connection string
-conn_string = "host={0} user={1} dbname={2} password={3} sslmode={4}".format(host, user, dbname, password, sslmode)
-conn = psycopg2.connect(conn_string)
-print("Connection established")
-
-cursor = conn.cursor()
-
-# Drop previous table of same name if one exists
-cursor.execute("DROP TABLE IF EXISTS inventory;")
-print("Finished dropping table (if existed)")
-
-# Create a table
-cursor.execute("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);")
-print("Finished creating table")
-
-# Insert some data into the table
-cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("banana", 150))
-cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("orange", 154))
-cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("apple", 100))
-print("Inserted 3 rows of data")
-
-# Clean up
-conn.commit()
-cursor.close()
-conn.close()
-```
-
-When the code runs successfully, it produces the following output:
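-
-Based on the `print` statements in the sample, the output looks similar to this:
-
-```output
-Connection established
-Finished dropping table (if existed)
-Finished creating table
-Inserted 3 rows of data
-```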
-[Having issues? Let us know](https://aka.ms/postgres-doc-feedback)
-
-## Step 2: Read data
-The following code example connects to your Azure Database for PostgreSQL database and uses
-- [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with the SQL **SELECT** statement to read data.
-- [cursor.fetchall()](https://www.psycopg.org/docs/cursor.html#cursor.fetchall) to fetch all the rows in the result set, which you can then iterate over with a `for` loop.
-
-```Python
-
-# Fetch all rows from table
-cursor.execute("SELECT * FROM inventory;")
-rows = cursor.fetchall()
-
-# Print all rows
-for row in rows:
- print("Data row = (%s, %s, %s)" %(str(row[0]), str(row[1]), str(row[2])))
-```
-[Having issues? Let us know](https://aka.ms/postgres-doc-feedback)
-
-## Step 3: Update data
-The following code example uses [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with the SQL **UPDATE** statement to update data.
-
-```Python
-
-# Update a data row in the table
-cursor.execute("UPDATE inventory SET quantity = %s WHERE name = %s;", (200, "banana"))
-print("Updated 1 row of data")
-
-```
-[Having issues? Let us know](https://aka.ms/postgres-doc-feedback)
-
-## Step 4: Delete data
-The following code example runs [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with the SQL **DELETE** statement to delete an inventory item that you previously inserted.
-
-```Python
-
-# Delete data row from table
-cursor.execute("DELETE FROM inventory WHERE name = %s;", ("orange",))
-print("Deleted 1 row of data")
-
-```
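-
-The update and delete snippets above reuse `conn` and `cursor` from Step 1. As in Step 1, psycopg2 does not persist changes until the connection commits them, so a closing sketch looks like this:
-
-```Python
-
-# Persist the UPDATE and DELETE changes, then release resources
-conn.commit()
-cursor.close()
-conn.close()
-
-```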
-
-[Having issues? Let us know](https://aka.ms/postgres-doc-feedback)
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Manage Azure Database for PostgreSQL server using Portal](./howto-create-manage-server-portal.md)<br/>
-
-> [!div class="nextstepaction"]
-> [Manage Azure Database for PostgreSQL server using CLI](./how-to-manage-server-cli.md)<br/>
-
-[Cannot find what you are looking for? Let us know.](https://aka.ms/postgres-doc-feedback)
postgresql Connect Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-ruby.md
- Title: 'Quickstart: Connect with Ruby - Azure Database for PostgreSQL - Single Server'
-description: This quickstart provides a Ruby code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server.
- Previously updated: 5/6/2019
-# Quickstart: Use Ruby to connect and query data in Azure Database for PostgreSQL - Single Server
-
-This quickstart demonstrates how to connect to an Azure Database for PostgreSQL using a [Ruby](https://www.ruby-lang.org) application. It shows how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you are familiar with developing using Ruby, and are new to working with Azure Database for PostgreSQL.
-
-## Prerequisites
-This quickstart uses the resources created in either of these guides as a starting point:
-- [Create DB - Portal](quickstart-create-server-database-portal.md)
-- [Create DB - Azure CLI](quickstart-create-server-database-azure-cli.md)
-
-You also need to have installed:
-- [Ruby](https://www.ruby-lang.org/en/downloads/)
-- [Ruby pg](https://rubygems.org/gems/pg/), the PostgreSQL module for Ruby
-
-## Get connection information
-Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials.
-
-1. Log in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Click the server name.
-4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-ruby/1-connection-string.png" alt-text="Azure Database for PostgreSQL server name":::
-
-> [!NOTE]
-> The `@` symbol in the Azure Postgres username has been URL-encoded as `%40` in all the connection strings.
-
-## Connect and create a table
-Use the following code to connect and create a table using a **CREATE TABLE** SQL statement, followed by **INSERT INTO** SQL statements to add rows to the table.
-
-The code uses a ```PG::Connection``` object with constructor ```new``` to connect to Azure Database for PostgreSQL. Then it calls method ```exec()``` to run the DROP, CREATE TABLE, and INSERT INTO commands. The code checks for errors using the ```PG::Error``` class. Then it calls method ```close()``` to close the connection before terminating. See the [Ruby Pg reference documentation](https://rubygems.org/gems/pg) for more information on these classes and methods.
-
-Replace the `host`, `database`, `user`, and `password` strings with your own values.
-```ruby
-require 'pg'
-
-begin
- # Initialize connection variables.
- host = String('mydemoserver.postgres.database.azure.com')
- database = String('postgres')
- user = String('mylogin%40mydemoserver')
- password = String('<server_admin_password>')
-
- # Initialize connection object.
- connection = PG::Connection.new(:host => host, :user => user, :dbname => database, :port => '5432', :password => password)
- puts 'Successfully created connection to database'
-
- # Drop previous table of same name if one exists
- connection.exec('DROP TABLE IF EXISTS inventory;')
- puts 'Finished dropping table (if existed).'
-
-  # Create the table.
- connection.exec('CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);')
- puts 'Finished creating table.'
-
- # Insert some data into table.
- connection.exec("INSERT INTO inventory VALUES(1, 'banana', 150)")
- connection.exec("INSERT INTO inventory VALUES(2, 'orange', 154)")
- connection.exec("INSERT INTO inventory VALUES(3, 'apple', 100)")
- puts 'Inserted 3 rows of data.'
-
-rescue PG::Error => e
- puts e.message
-
-ensure
- connection.close if connection
-end
-```
-
-## Read data
-Use the following code to connect and read the data using a **SELECT** SQL statement.
-
-The code uses a ```PG::Connection``` object with constructor ```new``` to connect to Azure Database for PostgreSQL. Then it calls method ```exec()``` to run the SELECT command, keeping the results in a result set. The result set collection is iterated over using the `resultSet.each do` loop, keeping the current row values in the `row` variable. The code checks for errors using the ```PG::Error``` class. Then it calls method ```close()``` to close the connection before terminating. See Ruby Pg reference documentation for more information on these classes and methods.
-
-Replace the `host`, `database`, `user`, and `password` strings with your own values.
-
-```ruby
-require 'pg'
-
-begin
- # Initialize connection variables.
- host = String('mydemoserver.postgres.database.azure.com')
- database = String('postgres')
- user = String('mylogin%40mydemoserver')
- password = String('<server_admin_password>')
-
- # Initialize connection object.
- connection = PG::Connection.new(:host => host, :user => user, :dbname => database, :port => '5432', :password => password)
- puts 'Successfully created connection to database.'
-
- resultSet = connection.exec('SELECT * from inventory;')
- resultSet.each do |row|
- puts 'Data row = (%s, %s, %s)' % [row['id'], row['name'], row['quantity']]
- end
-
-rescue PG::Error => e
- puts e.message
-
-ensure
- connection.close if connection
-end
-```
-
-## Update data
-Use the following code to connect and update the data using an **UPDATE** SQL statement.
-
-The code uses a ```PG::Connection``` object with constructor ```new``` to connect to Azure Database for PostgreSQL. Then it calls method ```exec()``` to run the UPDATE command. The code checks for errors using the ```PG::Error``` class. Then it calls method ```close()``` to close the connection before terminating. See [Ruby Pg reference documentation](https://rubygems.org/gems/pg) for more information on these classes and methods.
-
-Replace the `host`, `database`, `user`, and `password` strings with your own values.
-
-```ruby
-require 'pg'
-
-begin
- # Initialize connection variables.
- host = String('mydemoserver.postgres.database.azure.com')
- database = String('postgres')
- user = String('mylogin%40mydemoserver')
- password = String('<server_admin_password>')
-
- # Initialize connection object.
- connection = PG::Connection.new(:host => host, :user => user, :dbname => database, :port => '5432', :password => password)
- puts 'Successfully created connection to database.'
-
- # Modify some data in table.
- connection.exec('UPDATE inventory SET quantity = %d WHERE name = %s;' % [200, '\'banana\''])
- puts 'Updated 1 row of data.'
-
-rescue PG::Error => e
- puts e.message
-
-ensure
- connection.close if connection
-end
-```
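-
-As a minimal sketch with the same `pg` gem, ```exec_params()``` passes the values as parameters instead of formatting them into the SQL string, avoiding the manual quoting above:
-
-```ruby
-# Assumes `connection` was created with PG::Connection.new as in the sample above.
-connection.exec_params('UPDATE inventory SET quantity = $1 WHERE name = $2;', [200, 'banana'])
-puts 'Updated 1 row of data.'
-```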
-## Delete data
-Use the following code to connect and delete data using a **DELETE** SQL statement.
-
-The code uses a ```PG::Connection``` object with constructor ```new``` to connect to Azure Database for PostgreSQL. Then it calls method ```exec()``` to run the DELETE command. The code checks for errors using the ```PG::Error``` class. Then it calls method ```close()``` to close the connection before terminating.
-
-Replace the `host`, `database`, `user`, and `password` strings with your own values.
-
-```ruby
-require 'pg'
-
-begin
- # Initialize connection variables.
- host = String('mydemoserver.postgres.database.azure.com')
- database = String('postgres')
- user = String('mylogin%40mydemoserver')
- password = String('<server_admin_password>')
-
- # Initialize connection object.
- connection = PG::Connection.new(:host => host, :user => user, :dbname => database, :port => '5432', :password => password)
- puts 'Successfully created connection to database.'
-
-  # Delete some data from the table.
- connection.exec('DELETE FROM inventory WHERE name = %s;' % ['\'orange\''])
- puts 'Deleted 1 row of data.'
-
-rescue PG::Error => e
- puts e.message
-
-ensure
- connection.close if connection
-end
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Migrate your database using Export and Import](./howto-migrate-using-export-and-import.md) <br/>
-> [!div class="nextstepaction"]
-> [Ruby Pg reference documentation](https://rubygems.org/gems/pg)
postgresql Connect Rust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-rust.md
- Title: 'Quickstart: Connect with Rust - Azure Database for PostgreSQL - Single Server'
-description: This quickstart provides Rust code samples that you can use to connect and query data from Azure Database for PostgreSQL - Single Server.
- Previously updated: 03/26/2021
-# Quickstart: Use Rust to connect and query data in Azure Database for PostgreSQL - Single Server
-
-In this article, you will learn how to use the [PostgreSQL driver for Rust](https://github.com/sfackler/rust-postgres) to interact with Azure Database for PostgreSQL by exploring CRUD (create, read, update, delete) operations implemented in the sample code. Finally, you can run the application locally to see it in action.
-
-## Prerequisites
-For this quickstart you need:
-
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-- A recent version of [Rust](https://www.rust-lang.org/tools/install) installed.
-- An Azure Database for PostgreSQL single server - create one using [Azure portal](./quickstart-create-server-database-portal.md) <br/> or [Azure CLI](./quickstart-create-server-database-azure-cli.md).
-- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.
-
- |Action| Connectivity method|How-to guide|
- |: |: |: |
- | **Configure firewall rules** | Public | [Portal](./howto-manage-firewall-using-portal.md) <br/> [CLI](./howto-manage-firewall-using-cli.md)|
- | **Configure Service Endpoint** | Public | [Portal](./howto-manage-vnet-using-portal.md) <br/> [CLI](./howto-manage-vnet-using-cli.md)|
- | **Configure private link** | Private | [Portal](./howto-configure-privatelink-portal.md) <br/> [CLI](./howto-configure-privatelink-cli.md) |
-
-- [Git](https://git-scm.com/downloads) installed.
-
-## Get database connection information
-Connecting to an Azure Database for PostgreSQL database requires the fully qualified server name and login credentials. You can get this information from the Azure portal.
-
-1. In the [Azure portal](https://portal.azure.com/), search for and select your Azure Database for PostgreSQL server name.
-1. On the server's **Overview** page, copy the fully qualified **Server name** and the **Admin username**. The fully qualified **Server name** is always of the form *\<my-server-name>.postgres.database.azure.com*, and the **Admin username** is always of the form *\<my-admin-username>@\<my-server-name>*.
-
-## Review the code (optional)
-
-If you're interested in learning how the code works, you can review the following snippets. Otherwise, feel free to skip ahead to [Run the application](#run-the-application).
-
-### Connect
-
-The `main` function starts by connecting to Azure Database for PostgreSQL. It depends on the following environment variables for connectivity information: `POSTGRES_HOST`, `POSTGRES_USER`, `POSTGRES_PASSWORD`, and `POSTGRES_DBNAME`. By default, the PostgreSQL database service is configured to require `TLS` connections. You can choose to disable requiring `TLS` if your client application does not support `TLS` connectivity. For details, see [Configure TLS connectivity in Azure Database for PostgreSQL - Single Server](./concepts-ssl-connection-security.md).
-
-The sample application in this article uses TLS with the [postgres-openssl crate](https://crates.io/crates/postgres-openssl/). The [postgres::Client::connect](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.connect) function initiates the connection, and the program exits if the connection fails.
-
-```rust
-fn main() {
- let pg_host = std::env::var("POSTGRES_HOST").expect("missing environment variable POSTGRES_HOST");
- let pg_user = std::env::var("POSTGRES_USER").expect("missing environment variable POSTGRES_USER");
- let pg_password = std::env::var("POSTGRES_PASSWORD").expect("missing environment variable POSTGRES_PASSWORD");
- let pg_dbname = std::env::var("POSTGRES_DBNAME").unwrap_or("postgres".to_string());
-
- let builder = SslConnector::builder(SslMethod::tls()).unwrap();
- let tls_connector = MakeTlsConnector::new(builder.build());
-
- let url = format!(
- "host={} port=5432 user={} password={} dbname={} sslmode=require",
- pg_host, pg_user, pg_password, pg_dbname
- );
- let mut pg_client = postgres::Client::connect(&url, tls_connector).expect("failed to connect to postgres");
-...
-}
-```
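-
-The snippets in this article omit their `use` declarations. A sketch of the imports they assume (exact paths are an assumption based on the crates named above and the `rand` crate used for random quantities) looks like this:
-
-```rust
-// TLS setup (openssl and postgres-openssl crates)
-use openssl::ssl::{SslConnector, SslMethod};
-use postgres_openssl::MakeTlsConnector;
-// Explicit parameter types for prepare_typed (re-exported by the postgres crate)
-use postgres::types::Type;
-// Random quantities used by the insert/update/delete snippets
-use rand::Rng;
-```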
-
-### Drop and create table
-
-The sample application uses a simple `inventory` table to demonstrate the CRUD (create, read, update, delete) operations.
-
-```sql
-CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);
-```
-
-The `drop_create_table` function initially tries to `DROP` the `inventory` table before creating a new one. This makes it easier for learning/experimentation, as you always start with a known (clean) state. The [execute](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.execute) method is used for create and drop operations.
-
-```rust
-const CREATE_QUERY: &str =
- "CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);";
-
-const DROP_TABLE: &str = "DROP TABLE inventory";
-
-fn drop_create_table(pg_client: &mut postgres::Client) {
- let res = pg_client.execute(DROP_TABLE, &[]);
- match res {
- Ok(_) => println!("dropped table"),
- Err(e) => println!("failed to drop table {}", e),
- }
- pg_client
- .execute(CREATE_QUERY, &[])
- .expect("failed to create 'inventory' table");
-}
-```
-
-### Insert data
-
-`insert_data` adds entries to the `inventory` table. It creates a [prepared statement](https://docs.rs/postgres/0.19.0/postgres/struct.Statement.html) with the [prepare](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.prepare) function.
-```rust
-const INSERT_QUERY: &str = "INSERT INTO inventory (name, quantity) VALUES ($1, $2) RETURNING id;";
-
-fn insert_data(pg_client: &mut postgres::Client) {
-
- let prep_stmt = pg_client
- .prepare(&INSERT_QUERY)
- .expect("failed to create prepared statement");
-
- let row = pg_client
- .query_one(&prep_stmt, &[&"item-1", &42])
- .expect("insert failed");
-
- let id: i32 = row.get(0);
- println!("inserted item with id {}", id);
-...
-}
-```
-
-Also note the use of the [prepare_typed](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.prepare_typed) method, which allows the types of the query parameters to be specified explicitly.
-
-```rust
-...
-let typed_prep_stmt = pg_client
- .prepare_typed(&INSERT_QUERY, &[Type::VARCHAR, Type::INT4])
- .expect("failed to create prepared statement");
-
-let row = pg_client
- .query_one(&typed_prep_stmt, &[&"item-2", &43])
- .expect("insert failed");
-
-let id: i32 = row.get(0);
-println!("inserted item with id {}", id);
-...
-```
-
-Finally, a `for` loop is used to add `item-3`, `item-4`, and `item-5`, with a randomly generated quantity for each.
-
-```rust
-...
- for n in 3..=5 {
- let row = pg_client
- .query_one(
- &typed_prep_stmt,
- &[
- &("item-".to_owned() + &n.to_string()),
- &rand::thread_rng().gen_range(10..=50),
- ],
- )
- .expect("insert failed");
-
- let id: i32 = row.get(0);
- println!("inserted item with id {} ", id);
- }
-...
-```
-
-### Query data
-
-The `query_data` function demonstrates how to retrieve data from the `inventory` table. The [query_one](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.query_one) method is used to get an item by its `id`.
-
-```rust
-const SELECT_ALL_QUERY: &str = "SELECT * FROM inventory;";
-const SELECT_BY_ID: &str = "SELECT name, quantity FROM inventory where id=$1;";
-
-fn query_data(pg_client: &mut postgres::Client) {
-
- let prep_stmt = pg_client
- .prepare_typed(&SELECT_BY_ID, &[Type::INT4])
- .expect("failed to create prepared statement");
-
- let item_id = 1;
-
- let c = pg_client
- .query_one(&prep_stmt, &[&item_id])
- .expect("failed to query item");
-
- let name: String = c.get(0);
- let quantity: i32 = c.get(1);
- println!("quantity for item {} = {}", name, quantity);
-...
-}
-```
-
-All rows in the inventory table are fetched using a `select * from` query with the [query](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.query) method. The returned rows are iterated over to extract the value for each column using [get](https://docs.rs/postgres/0.19.0/postgres/row/struct.Row.html#method.get).
-
-> [!TIP]
-> Note how `get` makes it possible to specify the column either by its numeric index in the row, or by its column name.
-
-```rust
-...
- let items = pg_client
- .query(SELECT_ALL_QUERY, &[])
- .expect("select all failed");
-
- println!("listing items...");
-
- for item in items {
- let id: i32 = item.get("id");
- let name: String = item.get("name");
- let quantity: i32 = item.get("quantity");
- println!(
- "item info: id = {}, name = {}, quantity = {} ",
- id, name, quantity
- );
- }
-...
-```
-
-### Update data
-
-The `update_data` function randomly updates the quantity for all the items. Since the `insert_data` function added `5` rows, the same is taken into account in the `for` loop - `for id in 1..=5`.
-
-> [!TIP]
-> Note that we use `query` instead of `execute` since we intend to get back the `id` and the newly generated `quantity` (using [RETURNING clause](https://www.postgresql.org/docs/current/dml-returning.html)).
-
-```rust
-const UPDATE_QUERY: &str = "UPDATE inventory SET quantity = $1 WHERE name = $2 RETURNING quantity;";
-
-fn update_data(pg_client: &mut postgres::Client) {
- let stmt = pg_client
- .prepare_typed(&UPDATE_QUERY, &[Type::INT4, Type::VARCHAR])
- .expect("failed to create prepared statement");
-
- for id in 1..=5 {
- let row = pg_client
- .query_one(
- &stmt,
- &[
- &rand::thread_rng().gen_range(10..=50),
- &("item-".to_owned() + &id.to_string()),
- ],
- )
- .expect("update failed");
-
- let quantity: i32 = row.get("quantity");
- println!("updated item id {} to quantity = {}", id, quantity);
- }
-}
-```
-
-### Delete data
-
-Finally, the `delete` function demonstrates how to remove an item from the `inventory` table by its `id`. The `id` is chosen randomly - it's a random integer between `1` and `5` (`5` inclusive), since the `insert_data` function added `5` rows to start with.
-
-> [!TIP]
-> Note that we use `query` instead of `execute` since we intend to get back the info about the item we just deleted (using [RETURNING clause](https://www.postgresql.org/docs/current/dml-returning.html)).
-
-```rust
-const DELETE_QUERY: &str = "DELETE FROM inventory WHERE id = $1 RETURNING id, name, quantity;";
-
-fn delete(pg_client: &mut postgres::Client) {
- let stmt = pg_client
- .prepare_typed(&DELETE_QUERY, &[Type::INT4])
- .expect("failed to create prepared statement");
-
- let item = pg_client
- .query_one(&stmt, &[&rand::thread_rng().gen_range(1..=5)])
- .expect("delete failed");
-
- let id: i32 = item.get(0);
- let name: String = item.get(1);
- let quantity: i32 = item.get(2);
- println!(
- "deleted item info: id = {}, name = {}, quantity = {} ",
- id, name, quantity
- );
-}
-```
-
-## Run the application
-
-1. To begin with, run the following command to clone the sample repository:
-
- ```bash
- git clone https://github.com/Azure-Samples/azure-postgresql-rust-quickstart.git
- ```
-
-2. Set the required environment variables with the values you copied from the Azure portal:
-
- ```bash
- export POSTGRES_HOST=<server name e.g. my-server.postgres.database.azure.com>
- export POSTGRES_USER=<admin username e.g. my-admin-user@my-server>
- export POSTGRES_PASSWORD=<admin password>
- export POSTGRES_DBNAME=<database name. it is optional and defaults to postgres>
- ```
-
-3. To run the application, change into the directory where you cloned it and execute `cargo run`:
-
- ```bash
- cd azure-postgresql-rust-quickstart
- cargo run
- ```
-
- You should see an output similar to this:
-
- ```bash
- dropped 'inventory' table
- inserted item with id 1
- inserted item with id 2
- inserted item with id 3
- inserted item with id 4
- inserted item with id 5
- quantity for item item-1 = 42
- listing items...
- item info: id = 1, name = item-1, quantity = 42
- item info: id = 2, name = item-2, quantity = 43
- item info: id = 3, name = item-3, quantity = 11
- item info: id = 4, name = item-4, quantity = 32
- item info: id = 5, name = item-5, quantity = 24
- updated item id 1 to quantity = 27
- updated item id 2 to quantity = 14
- updated item id 3 to quantity = 31
- updated item id 4 to quantity = 16
- updated item id 5 to quantity = 10
- deleted item info: id = 4, name = item-4, quantity = 16
- ```
-
-4. To confirm, you can also connect to Azure Database for PostgreSQL [using psql](./quickstart-create-server-database-portal.md#connect-to-the-server-with-psql) and run queries against the database, for example:
-
- ```sql
- select * from inventory;
- ```
-
-[Having issues? Let us know](https://aka.ms/postgres-doc-feedback)
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Manage Azure Database for PostgreSQL server using Portal](./howto-create-manage-server-portal.md)<br/>
-
-> [!div class="nextstepaction"]
-> [Manage Azure Database for PostgreSQL server using CLI](./how-to-manage-server-cli.md)<br/>
-
-[Cannot find what you are looking for? Let us know.](https://aka.ms/postgres-doc-feedback)
postgresql How To Connect Query Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/how-to-connect-query-guide.md
- Title: Connect and query - Single Server PostgreSQL
-description: Links to quickstarts showing how to connect to your Azure Database for PostgreSQL Single Server and run queries.
- Previously updated: 09/21/2020
-# Connect and query overview for Azure Database for PostgreSQL - Single Server
-
-The following document includes links to examples showing how to connect and query with Azure Database for PostgreSQL Single Server. This guide also includes TLS recommendations and extensions that you can use to connect to the server in the supported languages below.
-
-## Quickstarts
-
-| Quickstart | Description |
-|||
-|[pgAdmin](https://www.pgadmin.org/)|You can use pgAdmin to connect to the server. It simplifies the creation, maintenance, and use of database objects.|
-|[psql in Azure Cloud Shell](quickstart-create-server-database-azure-cli.md#connect-to-the-azure-database-for-postgresql-server-by-using-psql)|This article shows how to run [**psql**](https://www.postgresql.org/docs/current/static/app-psql.html) in [Azure Cloud Shell](../cloud-shell/overview.md) to connect to your server and then run statements to query, insert, update, and delete data in the database. You can also run **psql** locally if it is installed on your development environment.|
-|[PostgreSQL with VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-cosmosdb)|Azure Databases extension for VS Code (Preview) allows you to browse and query your PostgreSQL server both locally and in the cloud using scrapbooks with rich Intellisense. |
-|[PHP](connect-php.md)|This quickstart demonstrates how to use PHP to create a program to connect to a database and work with database objects to query data.|
-|[Java](connect-java.md)|This quickstart demonstrates how to use Java to connect to a database and then work with database objects to query data.|
-|[Node.js](connect-nodejs.md)|This quickstart demonstrates how to use Node.js to create a program to connect to a database and work with database objects to query data.|
-|[.NET (C#)](connect-csharp.md)|This quickstart demonstrates how to use .NET (C#) to create a C# program to connect to a database and work with database objects to query data.|
-|[Go](connect-go.md)|This quickstart demonstrates how to use Go to connect to a database. SQL statements to query and modify data are also demonstrated.|
-|[Python](connect-python.md)|This quickstart demonstrates how to use Python to connect to a database and work with database objects to query data.|
-|[Ruby](connect-ruby.md)|This quickstart demonstrates how to use Ruby to create a program to connect to a database and work with database objects to query data.|
-
-## TLS considerations for database connectivity
-
-Transport Layer Security (TLS) is used by all drivers that Microsoft supplies or supports for connecting to databases in Azure Database for PostgreSQL. No special configuration is necessary, but TLS 1.2 is enforced for newly created servers. If you are using TLS 1.0 or 1.1, we recommend that you update the TLS version for your servers. See [How to configure TLS](howto-tls-configurations.md).
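-
-For example, with **psql** you can require TLS from the client side through the `sslmode` parameter (the server and login names below are the sample values used throughout these quickstarts):
-
-```bash
-psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=mylogin@mydemoserver sslmode=require"
-```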
-
-## PostgreSQL extensions
-
-PostgreSQL provides the ability to extend the functionality of your database using extensions. Extensions bundle multiple related SQL objects together in a single package that can be loaded or removed from your database with a single command. After being loaded in the database, extensions function like built-in features.
-
-- [Postgres 11 extensions](./concepts-extensions.md#postgres-11-extensions)
-- [Postgres 10 extensions](./concepts-extensions.md#postgres-10-extensions)
-- [Postgres 9.6 extensions](./concepts-extensions.md#postgres-96-extensions)
-
-For more details, see [How to use PostgreSQL extensions on Single Server](concepts-extensions.md).
-
-## Next steps
-
-- [Migrate data using dump and restore](howto-migrate-using-dump-and-restore.md)
-- [Migrate data using import and export](howto-migrate-using-export-and-import.md)
postgresql How To Deploy Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/how-to-deploy-github-action.md
- Title: 'Quickstart: Connect to Azure PostgreSQL with GitHub Actions'
-description: Use Azure PostgreSQL from a GitHub Actions workflow
- Previously updated: 10/12/2020
-# Quickstart: Use GitHub Actions to connect to Azure PostgreSQL
-
-**APPLIES TO:** :::image type="icon" source="./media/applies-to/yes.png" border="false":::Azure Database for PostgreSQL - Single Server :::image type="icon" source="./media/applies-to/yes.png" border="false":::Azure Database for PostgreSQL - Flexible Server
-
-Get started with [GitHub Actions](https://docs.github.com/en/actions) by using a workflow to deploy database updates to [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/).
-
-## Prerequisites
-
-You will need:
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- A GitHub repository with sample data (`data.sql`). If you don't have a GitHub account, [sign up for free](https://github.com/join).
-- An Azure Database for PostgreSQL server.
- - [Quickstart: Create an Azure Database for PostgreSQL server in the Azure portal](quickstart-create-server-database-portal.md)
-
-## Workflow file overview
-
-A GitHub Actions workflow is defined by a YAML (.yml) file in the `/.github/workflows/` path in your repository. This definition contains the various steps and parameters that make up the workflow.
-
-The file has two sections:
-
-|Section |Tasks |
-|||
-|**Authentication** | 1. Define a service principal. <br /> 2. Create a GitHub secret. |
-|**Deploy** | 1. Deploy the database. |
-
-## Generate deployment credentials
-
-You can create a [service principal](../active-directory/develop/app-objects-and-service-principals.md) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac&preserve-view=true) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
-
-Replace the placeholder `server-name` with the name of your PostgreSQL server hosted on Azure. Replace `subscription-id` and `resource-group` with the subscription ID and resource group connected to your PostgreSQL server.
-
-```azurecli-interactive
- az ad sp create-for-rbac --name {server-name} --role contributor \
- --scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group} \
- --sdk-auth
-```
-
-The output is a JSON object with the role assignment credentials that provide access to your database, similar to the example below. Copy this output JSON object for later.
-
-```output
- {
- "clientId": "<GUID>",
- "clientSecret": "<GUID>",
- "subscriptionId": "<GUID>",
- "tenantId": "<GUID>",
- (...)
- }
-```
-
-> [!IMPORTANT]
-> It is always a good practice to grant minimum access. The scope in the previous example is limited to the specific server and not the entire resource group.
-
-## Copy the PostgreSQL connection string
-
-In the Azure portal, go to your Azure Database for PostgreSQL server and open **Settings** > **Connection strings**. Copy the **ADO.NET** connection string. Replace the placeholder values for `your_database` and `your_password`. The connection string will look similar to this.
-
-> [!IMPORTANT]
-> - For Single Server, use ```user=adminusername@servername```. Note that the ```@servername``` is required.
-> - For Flexible Server, use ```user=adminusername``` without the ```@servername```.
-
-```output
-psql host={servername.postgres.database.azure.com} port=5432 dbname={your_database} user={adminusername} password={your_database_password} sslmode=require
-```
-
-You will use the connection string as a GitHub secret.
-
-## Configure the GitHub secrets
-
-1. In [GitHub](https://github.com/), browse your repository.
-
-1. Select **Settings > Secrets > New secret**.
-
-1. Paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret the name `AZURE_CREDENTIALS`.
-
- When you configure the workflow file later, you use the secret for the input `creds` of the Azure Login action. For example:
-
- ```yaml
- - uses: azure/login@v1
- with:
- creds: ${{ secrets.AZURE_CREDENTIALS }}
- ```
-
-1. Select **New secret** again.
-
-1. Paste the connection string value into the secret's value field. Give the secret the name `AZURE_POSTGRESQL_CONNECTION_STRING`.
-## Add your workflow
-
-1. Go to **Actions** for your GitHub repository.
-
-2. Select **Set up your workflow yourself**.
-
-3. Delete everything after the `on:` section of your workflow file. For example, your remaining workflow may look like this.
-
- ```yaml
- name: CI
-
- on:
- push:
- branches: [ master ]
- pull_request:
- branches: [ master ]
- ```
-
-4. Rename your workflow `PostgreSQL for GitHub Actions` and add the checkout and login actions. These actions will checkout your site code and authenticate with Azure using the `AZURE_CREDENTIALS` GitHub secret you created earlier.
-
- ```yaml
- name: PostgreSQL for GitHub Actions
-
- on:
- push:
- branches: [ master ]
- pull_request:
- branches: [ master ]
-
- jobs:
- build:
- runs-on: ubuntu-latest
- steps:
- - uses: actions/checkout@v1
- - uses: azure/login@v1
- with:
- creds: ${{ secrets.AZURE_CREDENTIALS }}
- ```
-
-5. Use the Azure PostgreSQL Deploy action to connect to your PostgreSQL instance. Replace `POSTGRESQL_SERVER_NAME` with the name of your server. You should have a PostgreSQL data file named `data.sql` at the root level of your repository.
-
- ```yaml
- - uses: azure/postgresql@v1
- with:
- connection-string: ${{ secrets.AZURE_POSTGRESQL_CONNECTION_STRING }}
- server-name: POSTGRESQL_SERVER_NAME
- sql-file: './data.sql'
- ```
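-
-    If you don't already have a `data.sql` file in your repository, a minimal example might look like the following (the `inventory` table here is only an illustration; any valid SQL works):
-
-    ```sql
-    CREATE TABLE IF NOT EXISTS inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);
-    INSERT INTO inventory (name, quantity) VALUES ('banana', 150);
-    ```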
-
-6. Complete your workflow by adding an action to logout of Azure. Here is the completed workflow. The file will appear in the `.github/workflows` folder of your repository.
-
- ```yaml
- name: PostgreSQL for GitHub Actions
-
- on:
- push:
- branches: [ master ]
- pull_request:
- branches: [ master ]
-
- jobs:
- build:
- runs-on: ubuntu-latest
- steps:
- - uses: actions/checkout@v1
- - uses: azure/login@v1
- with:
- creds: ${{ secrets.AZURE_CREDENTIALS }}
-
- - uses: azure/postgresql@v1
- with:
- server-name: POSTGRESQL_SERVER_NAME
- connection-string: ${{ secrets.AZURE_POSTGRESQL_CONNECTION_STRING }}
- sql-file: './data.sql'
-
- # Azure logout
- - name: logout
- run: |
- az logout
- ```
-
-## Review your deployment
-
-1. Go to **Actions** for your GitHub repository.
-
-1. Open the first result to see detailed logs of your workflow's run.
-
- :::image type="content" source="media/how-to-deploy-github-action/gitbub-action-postgres-success.png" alt-text="Log of GitHub actions run":::
-
-## Clean up resources
-
-When your Azure PostgreSQL database and repository are no longer needed, clean up the resources you deployed by deleting the resource group and your GitHub repository.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn about Azure and GitHub integration](/azure/developer/github/)
-<br/>
-> [!div class="nextstepaction"]
-> [Learn how to connect to the server](how-to-connect-query-guide.md)
postgresql How To Manage Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/how-to-manage-server-cli.md
- Title: Manage server - Azure CLI - Azure Database for PostgreSQL
-description: Learn how to manage an Azure Database for PostgreSQL server from the Azure CLI.
- Previously updated: 9/22/2020
-# Manage an Azure Database for PostgreSQL Single server using the Azure CLI
-
-This article shows you how to manage your Single servers deployed in Azure. Management tasks include compute and storage scaling, admin password reset, and viewing server details.
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. This article requires that you're running the Azure CLI version 2.0 or later locally. To see the version installed, run the `az --version` command. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-
-You'll need to log in to your account using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to **Subscription ID** for your Azure account.
-
-```azurecli-interactive
-az login
-```
-
-Select the specific subscription under your account using the [az account set](/cli/azure/account) command. Make a note of the **id** value from the **az login** output to use as the value for the **subscription** argument in the command. If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. To get all your subscriptions, use [az account list](/cli/azure/account#az-account-list).
-
-```azurecli
-az account set --subscription <subscription id>
-```
-
-If you have not already created a server, refer to this [quickstart](quickstart-create-server-database-azure-cli.md) to create one.
-## Scale compute and storage
-
-You can scale up your pricing tier, compute, and storage easily using the following command. You can see all the server operations you can perform in the [az postgres server overview](/cli/azure/mysql/server).
-
-```azurecli-interactive
-az postgres server update --resource-group myresourcegroup --name mydemoserver --sku-name GP_Gen5_4 --storage-size 6144
-```
-
-Here are the details for arguments above:
-
-**Setting** | **Sample value** | **Description**
----|---|---
-name | mydemoserver | Enter a unique name for your Azure Database for PostgreSQL server. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain from 3 to 63 characters.
-resource-group | myresourcegroup | Provide the name of the Azure resource group.
-sku-name|GP_Gen5_4|Enter the name of the pricing tier and compute configuration. Follows the convention {pricing tier}_{compute generation}_{vCores} in shorthand. See the [pricing tiers](./concepts-pricing-tiers.md) for more information.
-storage-size | 6144 | The storage capacity of the server (unit is megabytes). Minimum 5120 and increases in 1024 increments.
-
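-To review the server details (including the current SKU and storage) before or after scaling, you can use the `az postgres server show` command from the same command group:
-
-```azurecli-interactive
-az postgres server show --resource-group myresourcegroup --name mydemoserver
-```
-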
-> [!Important]
-> - Storage can be scaled up (however, you cannot scale storage down)
-> - Scaling up from Basic to General purpose or Memory optimized pricing tier is not supported. You can manually scale up with either [using a bash script](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/upgrade-from-basic-to-general-purpose-or-memory-optimized-tiers/ba-p/830404) or [using PostgreSQL Workbench](https://techcommunity.microsoft.com/t5/azure-database-support-blog/how-to-scale-up-azure-database-for-mysql-from-basic-tier-to/ba-p/369134)
--
-## Manage PostgreSQL databases on a server
-You can use any of these commands to create, delete, list, and view the properties of a database on your server.
-
-| Command | Usage| Description |
-| | | |
-|[az postgres db create](/cli/azure/sql/db#az-mysql-db-create)|```az postgres db create -g myresourcegroup -s mydemoserver -n mydatabasename``` |Creates a database|
-|[az postgres db delete](/cli/azure/sql/db#az-mysql-db-delete)|```az postgres db delete -g myresourcegroup -s mydemoserver -n mydatabasename```|Delete your database from your server. This command does not delete your server. |
-|[az postgres db list](/cli/azure/sql/db#az-mysql-db-list)|```az postgres db list -g myresourcegroup -s mydemoserver```|Lists all the databases on the server|
-|[az postgres db show](/cli/azure/sql/db#az-mysql-db-show)|```az postgres db show -g myresourcegroup -s mydemoserver -n mydatabasename```|Shows more details of the database|
-
-## Update admin password
-You can change the administrator role's password with this command:
-```azurecli-interactive
-az postgres server update --resource-group myresourcegroup --name mydemoserver --admin-password <new-password>
-```
-
-> [!Important]
-> Make sure the password is a minimum of 8 characters and a maximum of 128 characters.
-> Password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters.
-
-## Delete a server
-If you would like to delete just the PostgreSQL Single Server, you can run the [az postgres server delete](/cli/azure/mysql/server#az-mysql-server-delete) command.
-
-```azurecli-interactive
-az postgres server delete --resource-group myresourcegroup --name mydemoserver
-```
-
-## Next steps
-- [Restart a server](howto-restart-server-cli.md)
-- [Restore a server in a bad state](howto-restore-server-cli.md)
-- [Monitor and tune the server](concepts-monitoring.md)
postgresql How To Migrate Single To Flex Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/how-to-migrate-single-to-flex-cli.md
- Title: "Migrate PostgreSQL Single Server to Flexible Server using the Azure CLI"
-description: Learn about migrating your Single server databases to Azure database for PostgreSQL Flexible server using CLI.
- Previously updated: 05/09/2022
-# Migrate Single Server to Flexible Server PostgreSQL using Azure CLI
-
->[!NOTE]
-> Single Server to Flexible Server migration feature is in public preview.
-
-This quickstart article shows you how to use the Single to Flexible Server migration feature to migrate databases from Azure Database for PostgreSQL Single Server to Flexible Server.
-
-## Before you begin
-
-1. If you are new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate our offerings.
-2. Register your subscription for the Azure Database Migration Service (DMS). If you have already done so, you can skip this step. Go to the Azure portal homepage and navigate to your subscription as shown below.
-
- :::image type="content" source="./media/concepts-single-to-flex/single-to-flex-cli-dms.png" alt-text="Screenshot of C L I DMS" lightbox="./media/concepts-single-to-flex/single-to-flex-cli-dms.png":::
-
-3. In your subscription, navigate to **Resource Providers** from the left navigation menu. Search for **Microsoft.DataMigration** as shown below, and click **Register**.
-
- :::image type="content" source="./media/concepts-single-to-flex/single-to-flex-cli-dms-register.png" alt-text="Screenshot of C L I DMS register" lightbox="./media/concepts-single-to-flex/single-to-flex-cli-dms-register.png":::
-
-## Prerequisites
-
-### Set up Azure CLI
-
-1. Install the latest Azure CLI for your operating system from the [Azure CLI install page](/cli/azure/install-azure-cli).
-2. If Azure CLI is already installed, check the version by issuing the **az version** command. The version should be **2.28.0 or above** to use the migration CLI commands. If not, update your Azure CLI using this [link](/cli/azure/update-azure-cli.md).
-3. Once you have the right Azure CLI version, run the **az login** command. A browser page opens with the Azure sign-in page to authenticate. Provide your Azure credentials to complete authentication. For other ways to sign in with Azure CLI, visit this [link](/cli/azure/authenticate-azure-cli.md).
-
- ```bash
- az login
- ```
-1. Complete the pre-requisites listed in this [**document**](./concepts-single-to-flexible.md#pre-requisites), which are necessary to get started with the Single to Flexible migration feature.
-
-## Migration CLI commands
-
-The Single to Flexible Server migration feature comes with a list of easy-to-use CLI commands to do migration-related tasks. All the CLI commands start with **az postgres flexible-server migration**. You can use the **--help** parameter to understand the options associated with a command and to frame the right syntax for it.
-
-```azurecli-interactive
-az postgres flexible-server migration --help
-```
-
- gives you the following output.
-
- :::image type="content" source="./media/concepts-single-to-flex/single-to-flex-cli-help.png" alt-text="Screenshot of C L I help" lightbox="./media/concepts-single-to-flex/single-to-flex-cli-help.png":::
-
-It lists the set of migration commands that are supported along with their actions. Let us look into these commands in detail.
-
-### Create migration
-
-The create migration command helps in creating a migration from a source server to a target server.
-
-```azurecli-interactive
-az postgres flexible-server migration create --help
-```
-
-gives the following result.
--
-It calls out the expected arguments and has an example syntax that needs to be used to create a successful migration from the source to the target server. The CLI command to create a migration is given below:
-
-```azurecli
-az postgres flexible-server migration create [--subscription]
- [--resource-group]
- [--name]
- [--migration-name]
- [--properties]
-```
-
-| Parameter | Description |
-| - | - |
-|**subscription** | Subscription ID of the target flexible server |
-| **resource-group** | Resource group of the target flexible server |
-| **name** | Name of the target flexible server |
-| **migration-name** | Unique identifier for a migration attempted to the flexible server. This field accepts only alphanumeric characters and does not accept any special characters except **-**. The name cannot start with a **-**, and no two migrations to a flexible server can have the same name. |
-| **properties** | Absolute path to a JSON file that has the information about the source single server |
-
-**For example:**
-
-```azurecli-interactive
-az postgres flexible-server migration create --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --properties "C:\Users\Administrator\Documents\migrationBody.JSON"
-```
-
-The **migration-name** argument used in **create migration** command will be used in other CLI commands such as **update, delete, show** to uniquely identify the migration attempt and to perform the corresponding actions.
-
-The migration feature offers online and offline mode of migration. To know more about the migration modes and their differences, visit this [link](./concepts-single-to-flexible.md)
-
-Create a migration between a source and target server with a migration mode of your choice. The **create** command needs a JSON file to be passed as part of its **properties** argument.
-
-The structure of the JSON is given below.
-
-```json
-{
-  "properties": {
-    "SourceDBServerResourceId": "subscriptions/<subscriptionid>/resourceGroups/<src_rg_name>/providers/Microsoft.DBforPostgreSQL/servers/<source server name>",
-    "SourceDBServerFullyQualifiedDomainName": "fqdn of the source server as per the custom DNS server",
-    "TargetDBServerFullyQualifiedDomainName": "fqdn of the target server as per the custom DNS server",
-    "SecretParameters": {
-      "AdminCredentials": {
-        "SourceServerPassword": "<password>",
-        "TargetServerPassword": "<password>"
-      },
-      "AADApp": {
-        "ClientId": "<client id>",
-        "TenantId": "<tenant id>",
-        "AadSecret": "<secret>"
-      }
-    },
-    "MigrationResourceGroup": {
-      "ResourceId": "subscriptions/<subscriptionid>/resourceGroups/<temp_rg_name>",
-      "SubnetResourceId": "/subscriptions/<subscriptionid>/resourceGroups/<rg_name>/providers/Microsoft.Network/virtualNetworks/<Vnet_name>/subnets/<subnet_name>"
-    },
-    "DBsToMigrate": [
-      "<db1>", "<db2>"
-    ],
-    "SetupLogicalReplicationOnSourceDBIfNeeded": "true",
-    "OverwriteDBsInTarget": "true"
-  }
-}
-```
-
-Create migration parameters:
-
-| Parameter | Type | Description |
-| - | - | - |
-| **SourceDBServerResourceId** | Required | Resource ID of the single server and is mandatory. |
-| **SourceDBServerFullyQualifiedDomainName** | Optional | Used when a custom DNS server is used for name resolution for a virtual network. The FQDN of the single server as per the custom DNS server should be provided for this property. |
-| **TargetDBServerFullyQualifiedDomainName** | Optional | Used when a custom DNS server is used for name resolution inside a virtual network. The FQDN of the flexible server as per the custom DNS server should be provided for this property. <br> **_SourceDBServerFullyQualifiedDomainName_** and **_TargetDBServerFullyQualifiedDomainName_** should be included as a part of the JSON only in the rare scenario of a custom DNS server being used for name resolution instead of Azure-provided DNS. Otherwise, these parameters should not be included as a part of the JSON file. |
-| **SecretParameters** | Required | Passwords for the admin user for both the single server and the flexible server, along with the Azure AD app credentials. They help to authenticate against the source and target servers and help in checking proper authorization access to the resources. |
-| **MigrationResourceGroup** | Optional | This section consists of two properties. <br> **ResourceID (optional)**: The migration infrastructure and other network infrastructure components are created to migrate data and schema from the source to the target. By default, all the components created by this feature are provisioned under the resource group of the target server. If you wish to deploy them under a different resource group, you can assign the resource ID of that resource group to this property. <br> **SubnetResourceID (optional)**: If your source has public access turned OFF, or if your target server is deployed inside a VNet, specify a subnet under which the migration infrastructure needs to be created so that it can connect to both the source and target servers. |
-| **DBsToMigrate** | Required | Specify the list of databases you want to migrate to the flexible server. You can include a maximum of 8 database names at a time. |
-| **SetupLogicalReplicationOnSourceDBIfNeeded** | Optional | Logical replication can be enabled on the source server automatically by setting this property to **true**. This change in the server settings requires a server restart, with a downtime of a few minutes (~2-3 minutes). |
-| **OverwriteDBsinTarget** | Optional | If the target server happens to have an existing database with the same name as the one you are trying to migrate, the migration will pause until you acknowledge that overwrites in the target DBs are allowed. This pause can be avoided by giving the migration feature permission to automatically overwrite databases, by setting the value of this property to **true**. |
-
-### Mode of migrations
-
-The default migration mode for migrations created using CLI commands is **online**. With the above properties filled out in your JSON file, an online migration would be created from your single server to flexible server.
-
-If you want to migrate in **offline** mode, you need to add an additional property **"TriggerCutover":"true"** to your properties JSON file before initiating the create command.
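-
-For example, a sketch (other properties elided for brevity) showing where the added property sits under `properties`:
-
-```json
-{
-  "properties": {
-    "TriggerCutover": "true"
-  }
-}
-```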
-
-### List migrations
-
-The **list** command shows the migration attempts that were made to a flexible server. The CLI command to list migrations is given below:
-
-```azurecli
-az postgres flexible-server migration list [--subscription]
- [--resource-group]
- [--name]
- [--filter]
-```
-
-The **filter** parameter can take **Active** and **All** as values.
-
-- **Active** – Lists the currently active migration attempts for the target server. It does not include migrations that have reached a failed, canceled, or succeeded state.
-- **All** – Lists all migration attempts to the target server, including both active and past migrations, irrespective of state.
-
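-For example, to list only the active migrations, you might run the following (resource names reused from the earlier examples in this article):
-
-```azurecli-interactive
-az postgres flexible-server migration list --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --filter Active
-```
-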
-For any additional information, run:
-
-```azurecli-interactive
-az postgres flexible-server migration list --help
-```
-
-### Show Details
-
-The **show** command gets the details of a specific migration. This includes information on the current state and substate of the migration. The CLI command to show the details of a migration is given below:
-
-```azurecli
-az postgres flexible-server migration show [--subscription]
-                                           [--resource-group]
-                                           [--name]
-                                           [--migration-name]
-```
-
-The **migration_name** is the name assigned to the migration during the **create migration** command. Here is a snapshot of the sample response from the **Show Details** CLI command.
--
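-For instance, the show command for the hypothetical migration created in the earlier examples would be:
-
-```azurecli-interactive
-az postgres flexible-server migration show --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1
-```
-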
-Some important points to note on the command response:
-
-- As soon as the **create** migration command is triggered, the migration moves to the **InProgress** state and the **PerformingPreRequisiteSteps** substate. It can take up to 15 minutes for the migration workflow to deploy the migration infrastructure, configure firewall rules for the source and target servers, and perform a few maintenance tasks.
-- After the **PerformingPreRequisiteSteps** substate completes, the migration moves to the **MigratingData** substate, where the dump and restore of the databases take place.
-- Each database being migrated has its own section with all migration details, such as table count, incremental inserts, deletes, pending bytes, and so on.
-- The time taken for the **MigratingData** substate to complete depends on the size of the databases being migrated.
-- For **Offline** mode, the migration moves to the **Succeeded** state as soon as the **MigratingData** substate completes successfully. If there is an issue at the **MigratingData** substate, the migration moves into a **Failed** state.
-- For **Online** mode, the migration moves to the **WaitingForUserAction** state and the **WaitingForCutoverTrigger** substate after the **MigratingData** substate completes successfully. The details of the **WaitingForUserAction** state are covered in the next section.
-
-For any additional information, run:
-
-```azurecli-interactive
-az postgres flexible-server migration show --help
-```
-
-### Update migration
-
-As soon as the infrastructure setup is complete, the migration activity pauses, with appropriate messages in the **show details** CLI command response, if some prerequisites are missing or if the migration is at a state where a cutover can be performed. The migration then goes into the **WaitingForUserAction** state. The **update migration** command is used to set values for parameters, which helps the migration move to the next stage in the process. Let us look at each of the substates.
-
-- **WaitingForLogicalReplicationSetupRequestOnSourceDB** - If logical replication is not set up on the source server, or if it was not included as a part of the JSON file, the migration waits for logical replication to be enabled at the source. You can enable the logical replication setting manually by changing the replication flag to **Logical** on the portal, which requires a server restart. Logical replication can also be enabled by the following CLI command:
-
-```azurecli
-az postgres flexible-server migration update [--subscription]
- [--resource-group]
- [--name]
- [--migration-name]
- [--initiate-data-migration]
-```
-
-You need to pass the value **true** to the **initiate-data-migration** property to set logical replication on your source server.
-
-**For example:**
-
-```azurecli-interactive
-az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --initiate-data-migration true
-```
-
-If you have enabled it manually, **you still need to issue the above update command** for the migration to move out of the **WaitingForUserAction** state. The server does not need another restart, since it was already restarted via the portal action.
-
-- **WaitingForTargetDBOverwriteConfirmation** - This is the state where the migration is waiting for confirmation on the target overwrite, because data is already present in the target server for the database being migrated. Overwriting can be enabled by the following CLI command:
-
-```azurecli
-az postgres flexible-server migration update [--subscription]
- [--resource-group]
- [--name]
- [--migration-name]
- [--overwrite-dbs]
-```
-
-You need to pass the value **true** to the **overwrite-dbs** property to give the migration permission to overwrite any existing data in the target server.
-
-**For example:**
-
-```azurecli-interactive
-az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --overwrite-dbs true
-```
-
-- **WaitingForCutoverTrigger** - The migration reaches this state when the dump and restore of the databases have completed and the ongoing writes at your source single server are being replicated to the target flexible server. You should wait for the replication to complete so that the target is in sync with the source. You can monitor the replication lag using the response from the **show migration** command. A metric called **Pending Bytes** is associated with each database being migrated; it indicates the difference, in bytes, between the source and target databases, and it should approach zero over time. Once it reaches zero for all the databases, stop any further writes to your single server. Then validate the data and schema on your flexible server to make sure they match the source server exactly. After completing the above steps, you can trigger **cutover** with the following CLI command:
-
-```azurecli
-az postgres flexible-server migration update [--subscription]
- [--resource-group]
- [--name]
- [--migration-name]
- [--cutover]
-```
-
-**For example:**
-
-```azurecli-interactive
-az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --cutover
-```
-
-After issuing the above command, use the **show details** command to monitor whether the cutover has completed successfully. Upon successful cutover, the migration moves to the **Succeeded** state. Update your application to point to the new target flexible server.
-
-For any additional information, run:
-
-```azurecli-interactive
-az postgres flexible-server migration update --help
-```
-
-### Delete/Cancel Migration
-
-Any ongoing migration attempts can be deleted or canceled using the **delete migration** command. This command stops all migration activities in that task, but does not drop or roll back any changes on your target server. The CLI command to delete a migration is given below:
-
-```azurecli
-az postgres flexible-server migration delete [--subscription]
- [--resource-group]
- [--name]
- [--migration-name]
-```
-
-**For example:**
-
-```azurecli-interactive
-az postgres flexible-server migration delete --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1
-```
-
-For any additional information, run:
-
-```azurecli-interactive
-az postgres flexible-server migration delete --help
-```
-
-## Monitoring Migration
-
-The **create migration** command starts a migration between the source and target servers. The migration goes through a set of states and substates before eventually reaching a terminal state such as **Succeeded**, **Failed**, or **Canceled**. The **show** command helps to monitor ongoing migrations, since it gives the current state and substate of the migration.
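-
-For example, one rough way to keep an eye on an ongoing migration is to poll the **show** command periodically (resource names are hypothetical, reused from the earlier examples; the output shape may vary by CLI version):
-
-```azurecli-interactive
-# Poll the migration status every two minutes until you stop the loop.
-while true; do
-    az postgres flexible-server migration show --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --output table
-    sleep 120
-done
-```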
-
-Migration **states**:
-
-| Migration State | Description |
-| - | - |
-| **InProgress** | The migration infrastructure is being set up, or the actual data migration is in progress. |
-| **Canceled** | The migration has been canceled or deleted. |
-| **Failed** | The migration has failed. |
-| **Succeeded** | The migration has succeeded and is complete. |
-| **WaitingForUserAction** | Migration is waiting on a user action. This state has a list of substates that were discussed in detail in the previous section. |
-
-Migration **substates**:
-
-| Migration substates | Description |
-| - | - |
-| **PerformingPreRequisiteSteps** | Infrastructure is being set up and is being prepped for data migration. |
-| **MigratingData** | Data is being migrated. |
-| **CompletingMigration** | Migration cutover in progress. |
-| **WaitingForLogicalReplicationSetupRequestOnSourceDB** | Waiting for logical replication enablement. You can enable it manually or via the **update migration** CLI command covered earlier in this article. |
-| **WaitingForCutoverTrigger** | Migration is ready for cutover. You can start the cutover when ready. |
-| **WaitingForTargetDBOverwriteConfirmation** | Waiting for confirmation on target overwrite as data is present in the target server being migrated into. <br> You can enable this via the **update migration** CLI command. |
-| **Completed** | Cutover was successful, and migration is complete. |
--
-## How to check whether a custom DNS server is used for name resolution
-Navigate to the virtual network where you deployed your source or target server and click on **DNS servers**. It indicates whether the virtual network uses a custom DNS server or the default Azure-provided DNS.
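-
-If you prefer the CLI, one way to check is to query the virtual network's DNS settings (a sketch with hypothetical names; an empty result means the default Azure-provided DNS is in use):
-
-```azurecli-interactive
-# Lists custom DNS servers configured on the virtual network, if any.
-az network vnet show --resource-group my-learning-rg --name myVirtualNetwork --query "dhcpOptions.dnsServers"
-```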
--
-## Post Migration Steps
-
-Make sure the post-migration steps listed [here](./concepts-single-to-flexible.md) are followed for a successful end-to-end migration.
-
-## Next steps
--- [Single Server to Flexible migration concepts](./concepts-single-to-flexible.md)
postgresql How To Migrate Single To Flex Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/how-to-migrate-single-to-flex-portal.md
- Title: "Migrate PostgreSQL Single Server to Flexible Server using the Azure portal"-
-description: Learn about migrating your Single server databases to Azure database for PostgreSQL Flexible server using Portal.
---- Previously updated : 05/09/2022--
-# Migrate Single Server to Flexible Server PostgreSQL using the Azure portal
-
-This guide shows you how to use the Single to Flexible server migration feature to migrate databases from Azure Database for PostgreSQL Single server to Flexible server.
-
-## Before you begin
-
-1. If you are new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate our offerings.
-2. Register your subscription for the Azure Database Migration Service.
-
-Go to the Azure portal home page and navigate to your subscription as shown below.
--
-In your subscription, navigate to **Resource Providers** from the left navigation menu. Search for **Microsoft.DataMigration** as shown below, and click on **Register**.
--
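-If you prefer to register the resource provider from the CLI, a one-line equivalent would be:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.DataMigration
-```
-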
-## Pre-requisites
-
-Complete the pre-requisites listed [here](./concepts-single-to-flexible.md#pre-requisites) to get started with the migration feature.
-
-## Configure migration task
-
-The Single to Flexible server migration feature comes with a simple, wizard-based portal experience. The following steps show how to use the tool from the portal.
-
-- **Sign in to the Azure portal** - Open your web browser and go to the [portal](https://portal.azure.com/). Enter your credentials to sign in. The default view is your service dashboard.
-- Navigate to your Azure Database for PostgreSQL flexible server. If you have not created an Azure Database for PostgreSQL flexible server, create one using this [link](./flexible-server/quickstart-create-server-portal.md).
-- In the **Overview** tab of your flexible server, use the left navigation menu, scroll down to the **Migration (preview)** option, and click on it.
-
-Click the **Migrate from Single Server** button to start a migration from Single Server to Flexible Server. If this is the first time you are using the migration feature, you will see an empty grid with a prompt to begin your first migration.
--
-If you have already created migrations to your flexible server, you should see the grid populated with the list of migrations that were attempted to this flexible server from single servers.
-
-Click on the **Migrate from Single Server** button. You will be taken through a wizard-based user interface to create a migration to this flexible server from any single server.
-
-### Setup tab
-
-The first tab is the **Setup** tab, which has basic information about the migration and the list of prerequisites that need to be met to get started with migrations. The list of prerequisites is the same as the ones listed in the pre-requisites section [here](./concepts-single-to-flexible.md). Click on the provided link to learn more.
-
-- The **Migration name** is the unique identifier for each migration to this flexible server. This field accepts only alphanumeric characters and does not accept any special characters except the hyphen (**-**). The name cannot start with a hyphen and should be unique for a target server. No two migrations to the same flexible server can have the same name.
-- The **Migration resource group** is where all the migration-related components are created by this migration feature.
-
-By default, it is the resource group of the target flexible server, and all the components are cleaned up automatically once the migration completes. If you want to create a temporary resource group for migration-related purposes, create a resource group and select it from the dropdown.
-
-- For the **Azure Active Directory App**, click the **select** option and pick the app that was created as a part of the prerequisite step. Once the Azure AD app is chosen, paste the client secret that was generated for it into the **Azure Active Directory Client Secret** field.
-
-Click on the **Next** button.
-
-### Source tab
--
-The source tab prompts you to provide details of the source single server from which databases need to be migrated. As soon as you pick the **Subscription** and **Resource Group**, the dropdown for server names lists the single servers under that resource group across regions. It is recommended to migrate databases from a single server to a flexible server in the same region.
-
-Choose the single server from which you want to migrate databases in the dropdown.
-
-Once the single server is chosen, the fields such as **Location**, **PostgreSQL version**, and **Server admin login name** are automatically pre-populated. The server admin login name is the admin username that was used to create the single server. Enter the password for the **server admin login name**. The password is required for the migration feature to sign in to the single server and initiate the dump and migration.
-
-You should also see the list of user databases inside the single server that you can pick for migration. You can select up to eight databases that can be migrated in a single migration attempt. If there are more than eight user databases, create multiple migrations using the same experience between the source and target servers.
-
-The final property in the source tab is the migration mode. The migration feature offers online and offline modes of migration. To learn more about the migration modes and their differences, visit this [link](./concepts-single-to-flexible.md).
-
-Once you pick the migration mode, the restrictions associated with the mode are displayed.
-
-After filling out all the fields, click the **Next** button.
-
-### Target tab
--
-This tab displays metadata of the flexible server, like the **Subscription**, **Resource Group**, **Server name**, **Location**, and **PostgreSQL version**. It displays the **server admin login name**, which is the admin username that was used during the creation of the flexible server. Enter the corresponding password for the admin user. The password is required for the migration feature to sign in to the flexible server and perform restore operations.
-
-Choose an option **yes/no** for **Authorize DB overwrite**.
-
-- If you set the option to **Yes**, you give this migration service permission to overwrite existing data when a database that is being migrated to the flexible server is already present.
-- If set to **No**, the migration goes into a waiting state and asks you for permission either to overwrite the data or to cancel the migration.
-
-Click on the **Next** button.
-
-### Networking tab
-
-The content on the Networking tab depends on the networking topology of your source and target servers.
-
-- If both source and target servers use public access, you'll see the message below.
-
-In this case, you need not do anything and can just click on the **Next** button.
-
-- If either the source or target server is configured for private access, the content of the networking tab is different. Let us look at what private access means for single server and flexible server:
-
-- **Single Server Private Access** – **Deny public network access** is set to **Yes** and a private endpoint is configured.
-- **Flexible Server Private Access** – The flexible server is deployed inside a VNet.
-
-If either source or target is configured in private access, then the networking tab looks like the following
--
-All the fields will be automatically populated with subnet details. This is the subnet in which the migration feature will deploy Azure DMS to move data between the source and target.
-
-You can go ahead with the suggested subnet or choose a different subnet. But make sure that the selected subnet can connect to both the source and target servers.
-
-After picking a subnet, click the **Next** button.
-
-### Review + create tab
-
-This tab gives a summary of all the details for creating the migration. Review the details and click on the **Create** button to start the migration.
--
-## Monitoring migrations
-
-After clicking on the **Create** button, you should see a notification in a few seconds saying the migration was successfully created.
--
-You should automatically be redirected to the **Migrations (Preview)** page of the flexible server, which will have a new entry for the recently created migration.
--
-The grid displaying the migrations has various columns including **Name**, **Status**, **Source server name**, **Region**, **Version**, **Database names**, and the **Migration start time**. By default, the grid shows the list of migrations in the decreasing order of migration start time. In other words, recent migrations appear on top of the grid.
-
-You can use the refresh button to refresh the status of the migrations.
-
-You can click on the migration name in the grid to see the details of that migration.
-
-- As soon as the migration is created, the migration moves to the **InProgress** state and the **PerformingPreRequisiteSteps** substate. It can take up to 10 minutes for the migration workflow to move out of this substate, since it takes time to create and deploy DMS, add its IP to the firewall list of the source and target servers, and perform a few maintenance tasks.
-- After the **PerformingPreRequisiteSteps** substate completes, the migration moves to the **MigratingData** substate, where the dump and restore of the databases take place.
-- The time taken for the **MigratingData** substate to complete depends on the size of the databases being migrated.
-- You can click on each of the databases being migrated, and a fan-out blade appears with all migration details, such as table count, incremental inserts, deletes, pending bytes, and so on.
-- For **Offline** mode, the migration moves to the **Succeeded** state as soon as the **MigratingData** substate completes successfully. If there is an issue at the **MigratingData** substate, the migration moves into a **Failed** state.
-- For **Online** mode, the migration moves to the **WaitingForUserAction** state and the **WaitingForCutoverTrigger** substate after the **MigratingData** substate completes successfully.
-
-You can click on the migration name to go into the migration details page, where you should see the **WaitingForCutoverTrigger** substate.
--
-At this stage, the ongoing writes at your source single server are replicated to the target flexible server using the logical decoding feature of PostgreSQL. You should wait until the replication reaches a state where the target is almost in sync with the source. You can monitor the replication lag by clicking on each of the databases being migrated. A fan-out blade opens with several metrics. Look for the value of the **Pending Bytes** metric; it should approach zero over time. Once it drops to a few MB for all the databases, stop any further writes to your single server and wait until the metric reaches 0. Then validate the data and schema on your flexible server to make sure they match the source server exactly.
-
-After completing the above steps, click on the **Cutover** button. You should see the following message:
--
-Click on the **Yes** button to start cutover.
-
-In a few seconds after starting cutover, you should see the following notification
--
-Once the cutover is complete, the migration moves to the **Succeeded** state, and the migration of schema and data from your single server to your flexible server is now complete. You can use the refresh button on the page to check whether the cutover was successful.
-
-After completing the above steps, you can make changes to your application code to point database connection strings to the flexible server and start using it as the primary database server.
-
-Possible migration states include:
-
-- **InProgress**: The migration infrastructure is being set up, or the actual data migration is in progress.
-- **Canceled**: The migration has been canceled or deleted.
-- **Failed**: The migration has failed.
-- **Succeeded**: The migration has succeeded and is complete.
-- **WaitingForUserAction**: The migration is waiting on a user action.
-
-Possible migration substates include:
-
-- **PerformingPreRequisiteSteps**: Infrastructure is being set up and prepped for data migration.
-- **MigratingData**: Data is being migrated.
-- **CompletingMigration**: Migration cutover is in progress.
-- **WaitingForLogicalReplicationSetupRequestOnSourceDB**: Waiting for logical replication enablement.
-- **WaitingForCutoverTrigger**: Migration is ready for cutover.
-- **WaitingForTargetDBOverwriteConfirmation**: Waiting for confirmation on target overwrite, as data is present in the target server being migrated into.
-- **Completed**: Cutover was successful, and migration is complete.
-
-## Cancel migrations
-
-You also have the option to cancel any ongoing migrations. For a migration to be canceled, it must be in **InProgress** or **WaitingForUserAction** state. You cannot cancel a migration that has either already **Succeeded** or **Failed**.
-
-You can select multiple ongoing migrations at once and cancel them.
--
-Note that **cancel migration** just stops any further migration activity on your target server. It does not drop or roll back any changes on your target server that were made by the migration attempts. Make sure to drop the databases involved in a canceled migration on your target server.
-
-## Post migration steps
-
-Make sure the post-migration steps listed [here](./concepts-single-to-flexible.md) are followed for a successful end-to-end migration.
-
-## Next steps
-- [Single Server to Flexible migration concepts](./concepts-single-to-flexible.md)
postgresql How To Setup Aad App Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/how-to-setup-aad-app-portal.md
- Title: "Setup Azure AD app to use with Single to Flexible migration"-
-description: Learn about setting up Azure AD App to be used with Single to Flexible Server migration feature.
---- Previously updated : 05/09/2022--
-# Set up an Azure AD app to use with Single to Flexible server migration
-
-This quickstart article shows you how to set up an Azure Active Directory (Azure AD) app to use with Single to Flexible server migration. It's an important component of the Single to Flexible migration feature. See [Azure Active Directory app](../active-directory/develop/howto-create-service-principal-portal.md) for details. The Azure AD app helps with role-based access control (RBAC), as the migration infrastructure requires access to both the source and target servers and is restricted by the roles assigned to the Azure AD app. Once created, the Azure AD app instance can be used to manage multiple migrations. To get started, create a new Azure Active Directory Enterprise App by doing the following steps:
-
-## Create Azure AD App
-
-1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate our offerings.
-2. Search for Azure Active Directory in the search bar on the top in the portal.
-3. Within the Azure Active Directory portal, under **Manage** on the left, choose **App Registrations**.
-4. Click on **New Registration**
- :::image type="content" source="./media/concepts-single-to-flex/azure-ad-new-registration.png" alt-text="New Registration for Azure Active Directory App." lightbox="./media/concepts-single-to-flex/azure-ad-new-registration.png":::
-
-5. Give the app registration a name, choose an option that suits your needs for account types, and click **Register**.
- :::image type="content" source="./media/concepts-single-to-flex/azure-ad-application-registration.png" alt-text="Azure AD App Name screen." lightbox="./media/concepts-single-to-flex/azure-ad-application-registration.png":::
-
-6. Once the app is created, you can copy the client ID and tenant ID required for later steps in the migration. Next, click on **Add a certificate or secret**.
- :::image type="content" source="./media/concepts-single-to-flex/azure-ad-add-secret-screen.png" alt-text="Add a certificate screen." lightbox="./media/concepts-single-to-flex/azure-ad-add-secret-screen.png":::
-
-7. In the next screen, click on **New client secret**.
- :::image type="content" source="./media/concepts-single-to-flex/azure-ad-add-new-client-secret.png" alt-text="New Client Secret screen." lightbox="./media/concepts-single-to-flex/azure-ad-add-new-client-secret.png":::
-
-8. In the fan-out blade that opens, add a description, and select the drop-down to pick the life span of your Azure Active Directory App. Once all the migrations are complete, the Azure Active Directory App that was created for Role Based Access Control can be deleted. The default option is six months. If you don't need Azure Active Directory App for six months, choose three months and click **Add**.
- :::image type="content" source="./media/concepts-single-to-flex/azure-ad-add-client-secret-description.png" alt-text="Client Secret Description." lightbox="./media/concepts-single-to-flex/azure-ad-add-client-secret-description.png":::
-
-9. In the next screen, copy the **Value** column that has the details of the Azure AD app secret. The secret can be copied only at creation time. If you miss copying the secret, you'll need to delete it and create a new one.
- :::image type="content" source="./media/concepts-single-to-flex/azure-ad-client-secret-value.png" alt-text="Copying client secret." lightbox="./media/concepts-single-to-flex/azure-ad-client-secret-value.png":::
-
-10. Once the Azure AD app is created, you'll need to add contributor privileges for it to the following resources. (A CLI sketch for scripting the app setup follows the table.)
-
- | Resource | Type | Description |
- | - | - | - |
- | Single Server | Required | Source single server you're migrating from. |
- | Flexible Server | Required | Target flexible server you're migrating into. |
- | Azure Resource Group | Required | Resource group for the migration. By default, this is the target flexible server resource group. If you're using a temporary resource group to create the migration infrastructure, the Azure Active Directory App will require contributor privileges to this resource group. |
- | VNET | Required (if used) | If the source or the target happens to have private access, then the Azure Active Directory App will require contributor privileges to corresponding VNet. If you're using public access, you can skip this step. |
--
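-If you'd rather script the app registration, a rough Azure CLI equivalent of the steps above might look like the following. The display name is a hypothetical placeholder, and the exact output fields can vary by CLI version:
-
-```azurecli-interactive
-# Create the app registration and capture its application (client) ID.
-APP_ID=$(az ad app create --display-name pg-single-to-flex-migration --query appId --output tsv)
-
-# Create a service principal for the app so that roles can be assigned to it.
-az ad sp create --id $APP_ID
-
-# Generate a client secret; copy the password from the output, since it is shown only once.
-az ad app credential reset --id $APP_ID
-```
-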
-## Add contributor privileges to an Azure resource
-
-Repeat the steps listed below for the source single server, target flexible server, resource group, and VNet (if used).
-
-1. For the target flexible server, select the target flexible server in the Azure portal. Click **Access control (IAM)** on the top left.
- :::image type="content" source="./media/concepts-single-to-flex/azure-ad-iam-screen.png" alt-text="Access Control I A M screen." lightbox="./media/concepts-single-to-flex/azure-ad-iam-screen.png":::
-
-2. Click **Add** and choose **Add role assignment**.
- :::image type="content" source="./media/concepts-single-to-flex/azure-ad-add-role-assignment.png" alt-text="Add role assignment here." lightbox="./media/concepts-single-to-flex/azure-ad-add-role-assignment.png":::
-
-> [!NOTE]
-> The Add role assignment capability is only enabled for users in the subscription with the **Owner** role. Users with other roles do not have permission to add role assignments.
-
-3. Under the **Role** tab, select **Contributor** and click the **Next** button.
- :::image type="content" source="./media/concepts-single-to-flex/azure-ad-contributor-privileges.png" alt-text="Choosing Contributor Screen." lightbox="./media/concepts-single-to-flex/azure-ad-contributor-privileges.png":::
-
-4. Under the Members tab, keep the default option of **Assign access to** User, group or service principal and click **Select Members**. Search for your Azure Active Directory App and click on **Select**.
- :::image type="content" source="./media/concepts-single-to-flex/azure-ad-review-and-assign.png" alt-text="Review and Assign Screen." lightbox="./media/concepts-single-to-flex/azure-ad-review-and-assign.png":::
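-
-Alternatively, a hedged CLI equivalent for granting the Contributor role to the app on a given resource would look something like the following, reusing the `APP_ID` captured in the earlier sketch. The scope is the full resource ID of the server, resource group, or VNet (a hypothetical flexible server ID is shown):
-
-```azurecli-interactive
-# Assign the Contributor role to the Azure AD app at the scope of a specific resource.
-az role assignment create --assignee $APP_ID --role Contributor --scope "/subscriptions/<subscription-id>/resourceGroups/my-learning-rg/providers/Microsoft.DBforPostgreSQL/flexibleServers/myflexibleserver"
-```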
-
-
-## Next steps
--- [Single Server to Flexible migration concepts](./concepts-single-to-flexible.md)-- [Migrate to Flexible server using Azure portal](./how-to-migrate-single-to-flex-portal.md)-- [Migrate to Flexible server using Azure CLI](./how-to-migrate-single-to-flex-cli.md)
postgresql How To Upgrade Using Dump And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/how-to-upgrade-using-dump-and-restore.md
- Title: Upgrade using dump and restore - Azure Database for PostgreSQL
-description: Describes offline upgrade methods using dump and restore databases to migrate to a higher version Azure Database for PostgreSQL.
----- Previously updated : 11/30/2021--
-# Upgrade your PostgreSQL database using dump and restore
-
->[!NOTE]
-> The concepts explained in this documentation are applicable to both Azure Database for PostgreSQL - Single Server and Azure Database for PostgreSQL - Flexible Server.
-
-You can upgrade your PostgreSQL server deployed in Azure Database for PostgreSQL by migrating your databases to a higher major version server using the following methods.
-* **Offline** method using PostgreSQL [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html) which incurs downtime for migrating the data. This document addresses this method of upgrade/migration.
-* **Online** method using [Database Migration Service](../dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md) (DMS). This method provides a reduced downtime migration, keeps the target database in sync with the source, and lets you choose when to cut over. However, there are a few prerequisites and restrictions to address when using DMS. For details, see the [DMS documentation](../dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md).
-
- The following table provides some recommendations based on database sizes and scenarios.
-
-| **Database/Scenario** | **Dump/restore (Offline)** | **DMS (Online)** |
-| - | :-: | :-: |
-| You have a small database and can afford downtime to upgrade | X | |
-| Small databases (< 10 GB) | X | X |
-| Small-medium DBs (10 GB – 100 GB) | X | X |
-| Large databases (> 100 GB) | | X |
-| Can afford downtime to upgrade (irrespective of the database size) | X | |
-| Can address DMS [pre-requisites](../dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md#prerequisites), including a reboot? | | X |
-| Can avoid DDLs and unlogged tables during the upgrade process? | | X |
-
-This guide provides a few offline migration methodologies and examples to show how you can migrate from your source server to a target server that runs a higher version of PostgreSQL.
-
-> [!NOTE]
-> PostgreSQL dump and restore can be performed in many ways. You may choose to migrate using one of the methods provided in this guide or choose any alternate ways to suit your needs. For detailed dump and restore syntax with additional parameters, see the articles [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html).
--
-## Prerequisites for using dump and restore with Azure Database for PostgreSQL
-
-To step through this how-to-guide, you need:
-
-- A **source** PostgreSQL database server running a lower version of the engine that you want to upgrade.
-- A **target** PostgreSQL database server with the desired major version: [Azure Database for PostgreSQL server - Single Server](quickstart-create-server-database-portal.md) or [Azure Database for PostgreSQL - Flexible Server](./flexible-server/quickstart-create-server-portal.md).
-- A PostgreSQL client system to run the dump and restore commands. It is recommended to use the higher database version. For example, if you are upgrading from PostgreSQL version 9.6 to 11, use a PostgreSQL version 11 client.
-  - It can be a Linux or Windows client that has PostgreSQL installed, with the [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html) command-line utilities.
-  - Alternatively, you can use [Azure Cloud Shell](https://shell.azure.com), or open Azure Cloud Shell from the menu bar at the upper right in the [Azure portal](https://portal.azure.com). You will have to sign in to your account with `az login` before running the dump and restore commands.
-- Your PostgreSQL client should preferably run in the same region as the source and target servers.
-
-## Additional details and considerations
-- You can find the connection string to the source and target databases by clicking "Connection strings" in the portal.
-- You may be running more than one database in your server. You can find the list of databases by connecting to your source server and running `\l`.
-- Create corresponding databases in the target database server, or add the `-C` option to the `pg_dump` command, which creates the databases.
-- You must not upgrade `azure_maintenance` or template databases. If you have made any changes to template databases, you may choose to migrate the changes or make those changes in the target database.
-- Refer to the tables above to determine whether the database is suitable for this mode of migration.
-- If you want to use Azure Cloud Shell, note that the session times out after 20 minutes. If your database size is < 10 GB, you may be able to complete the upgrade without the session timing out. Otherwise, you may have to keep the session open by other means, such as pressing a key once every 10-15 minutes.
-
-## Example database used in this guide
-
-In this guide, the following source and target servers and database names are used to illustrate with examples.
-
- | **Description** | **Value** |
- | - | - |
- | Source server (v9.5) | pg-95.postgres.database.azure.com |
- | Source database | bench5gb |
- | Source database size | 5 GB |
- | Source user name | pg@pg-95 |
- | Target server (v11) | pg-11.postgres.database.azure.com |
- | Target database | bench5gb |
- | Target user name | pg@pg-11 |
-
->[!NOTE]
-> Flexible server supports PostgreSQL version 11 onwards. Also, the flexible server user name does not require `@dbservername`.
-
-## Upgrade your databases using offline migration methods
-You may choose to use one of the methods described in this section for your upgrades. You can use the following tips while performing the tasks.
-- If you are using the same password for the source and the target database, you can set the `PGPASSWORD=yourPassword` environment variable. Then you don't have to provide the password every time you run commands like psql, pg_dump, and pg_restore. Similarly, you can set additional variables like `PGUSER` and `PGSSLMODE`; see [PostgreSQL environment variables](https://www.postgresql.org/docs/11/libpq-envars.html).
-
-
-- If your PostgreSQL server requires TLS/SSL connections (on by default in Azure Database for PostgreSQL servers), set the environment variable `PGSSLMODE=require` so that the pg_restore tool connects with TLS. Without TLS, the error may read `FATAL: SSL connection is required. Please specify SSL options and retry.`
-
-- In the Windows command line, run the command `SET PGSSLMODE=require` before running the pg_restore command. In Linux or Bash, run the command `export PGSSLMODE=require` before running the pg_restore command.
-
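-For example, on a Linux or Bash client you might export the variables once per session (the password value is a placeholder):
-
-```bash
-# libpq environment variables picked up by psql, pg_dump, and pg_restore.
-export PGPASSWORD='yourPassword'
-export PGUSER=myUser
-export PGSSLMODE=require
-```
-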
->[!Important]
-> The steps and methods provided in this document are to give some examples of pg_dump/pg_restore commands and do not represent all possible ways to perform upgrades. It is recommended to test and validate the commands in a test environment before you use them in production.
-
-### Migrate the Roles
-
-Roles (users) are global objects and need to be migrated separately to the new cluster before restoring the database. You can use the `pg_dumpall` binary with the `-r` (`--roles-only`) option to dump them.
-To dump all the roles from the source server:
-
-```azurecli-interactive
-pg_dumpall -r --host=mySourceServer --port=5432 --username=myUser --database=mySourceDB > roles.sql
-```
-
-Edit the `roles.sql` file and remove references to `NOSUPERUSER` and `NOBYPASSRLS` before restoring the content using psql on the target server:
-
-```azurecli-interactive
-psql -f roles.sql --host=myTargetServer --port=5432 --username=myUser --dbname=postgres
-```
-
-The dump script should not be expected to run completely without errors. In particular, because the script will issue CREATE ROLE for every role existing in the source cluster, it is certain to get a "role already exists" error for the bootstrap superuser like azure_pg_admin or azure_superuser. This error is harmless and can be ignored. Use of the `--clean` option is likely to produce additional harmless error messages about non-existent objects, although you can minimize those by adding `--if-exists`.
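-
-To avoid editing `roles.sql` by hand, one hedged option on a Linux client (assuming GNU sed) is to strip those attributes with a one-liner:
-
-```bash
-# Remove NOSUPERUSER and NOBYPASSRLS attributes from the dumped role definitions.
-sed -i 's/NOSUPERUSER//g; s/NOBYPASSRLS//g' roles.sql
-```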
--
-### Method 1: Using pg_dump and psql
-
-This method involves two steps. First is to dump a SQL file from the source server using `pg_dump`. The second step is to import the file to the target server using `psql`. Please see the [Migrate using export and import](howto-migrate-using-export-and-import.md) documentation for details.
-
-### Method 2: Using pg_dump and pg_restore
-
-In this method of upgrade, you first create a dump from the source server using `pg_dump`. Then you restore that dump file to the target server using `pg_restore`. Please see the [Migrate using dump and restore](howto-migrate-using-dump-and-restore.md) documentation for details.
-
-### Method 3: Using streaming the dump data to the target database
-
-If you do not have a PostgreSQL client, or you want to use Azure Cloud Shell, you can use this method. The database dump is streamed directly to the target database server without being stored on the client. Hence, this method can be used with a client that has limited storage, and it can even be run from Azure Cloud Shell.
-
-1. Make sure the database exists in the target server using the `\l` command. If the database does not exist, create the database.
- ```azurecli-interactive
- psql "host=myTargetServer port=5432 dbname=postgres user=myUser password=###### sslmode=mySSLmode"
- ```
- ```SQL
- postgres> \l
- postgres> create database myTargetDB;
- ```
-
-2. Run the dump and restore as a single command line using a pipe.
- ```azurecli-interactive
- pg_dump -Fc --host=mySourceServer --port=5432 --username=myUser --dbname=mySourceDB | pg_restore --no-owner --host=myTargetServer --port=5432 --username=myUser --dbname=myTargetDB
- ```
-
- For example,
-
- ```azurecli-interactive
- pg_dump -Fc --host=pg-95.postgres.database.azure.com --port=5432 --username=pg@pg-95 --dbname=bench5gb | pg_restore --no-owner --host=pg-11.postgres.database.azure.com --port=5432 --username=pg@pg-11 --dbname=bench5gb
- ```
-3. Once the upgrade (migration) process completes, you can test your application with the target server.
-4. Repeat this process for all the databases within the server.
-
- As an example, the following table illustrates the time it took to migrate using the streaming dump method. The sample data is populated using [pgbench](https://www.postgresql.org/docs/10/pgbench.html). As your database can have a different number of objects with varied sizes than the pgbench-generated tables and indexes, it is highly recommended to test the dump and restore of your database to understand the actual time it takes to upgrade your database.
-
-| **Database Size** | **Approx. time taken** |
-| -- | |
-| 1 GB | 1-2 minutes |
-| 5 GB | 8-10 minutes |
-| 10 GB | 15-20 minutes |
-| 50 GB | 1-1.5 hours |
-| 100 GB | 2.5-3 hours|
-
-### Method 4: Using parallel dump and restore
-
-You can consider this method if you have a few larger tables in your database and you want to parallelize the dump and restore process for that database. You also need enough storage in your client system to accommodate the backup dumps. This parallel dump and restore process reduces the time needed to complete the whole migration. For example, the 50 GB pgbench database that took 1-1.5 hours to migrate using Methods 1 and 2 completed in less than 30 minutes using this method.
-
-1. For each database in your source server, create a corresponding database at the target server.
-
- ```azurecli-interactive
- psql "host=myTargetServer port=5432 dbname=postgres user=myuser password=###### sslmode=mySSLmode"
- ```
-
- ```SQL
- postgres> create database myDB;
- ```
-
- For example,
- ```bash
- psql "host=pg-11.postgres.database.azure.com port=5432 dbname=postgres user=pg@pg-11 password=###### sslmode=require"
- psql (12.3 (Ubuntu 12.3-1.pgdg18.04+1), server 13.3)
- SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
- Type "help" for help.
-
- postgres> create database bench5gb;
- postgres> \q
- ```
-
-2. Run the pg_dump command in directory format with the number of jobs set to 4 (the number of tables in the database). With a larger compute tier and more tables, you can increase it to a higher number. pg_dump creates a directory that stores compressed files for each job.
-
- ```azurecli-interactive
- pg_dump -Fd -v --host=sourceServer --port=5432 --username=myUser --dbname=mySourceDB -j 4 -f myDumpDirectory
- ```
- For example,
- ```bash
- pg_dump -Fd -v --host=pg-95.postgres.database.azure.com --port=5432 --username=pg@pg-95 --dbname=bench5gb -j 4 -f dump.dir
- ```
-
-3. Then restore the backup at the target server.
- ```azurecli-interactive
- $ pg_restore -v --no-owner --host=myTargetServer --port=5432 --username=myUser --dbname=myTargetDB -j 4 myDumpDirectory
- ```
- For example,
- ```bash
- $ pg_restore -v --no-owner --host=pg-11.postgres.database.azure.com --port=5432 --username=pg@pg-11 --dbname=bench5gb -j 4 dump.dir
- ```
-
-> [!TIP]
-> The process mentioned in this document can also be used to upgrade your Azure Database for PostgreSQL - Flexible Server, which is in Preview. The main difference is that the connection string for the flexible server target is without the `@dbName`. For example, if the user name is `pg`, the single server's username in the connection string will be `pg@pg-95`, while with flexible server, you can simply use `pg`.
-
-## Post upgrade/migrate
-After the major version upgrade is complete, we recommend running the `ANALYZE` command in each database to refresh the `pg_statistic` table. Otherwise, you may run into performance issues.
-
-```SQL
-postgres=> analyze;
-ANALYZE
-```
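-
-If you'd rather refresh statistics for every database in one go from the client, a hedged alternative is vacuumdb in analyze-only mode (server and user names reused from the example table above):
-
-```bash
-# Runs ANALYZE (no vacuum) across all databases on the target server.
-vacuumdb --analyze-only --all --host=pg-11.postgres.database.azure.com --port=5432 --username=pg@pg-11
-```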
-
-## Next steps
-
-- After you're satisfied with the target database function, you can drop your old database server.
-- For Azure Database for PostgreSQL - Single Server only: if you want to use the same database endpoint as the source server, then after you have deleted your old source database server, you can create a read replica with the old database server name. Once the steady replication state is established, you can stop the replica, which promotes the replica server to an independent server. See [Replication](./concepts-read-replicas.md) for more details.
-
->[!Important]
-> It is highly recommended to test the new upgraded PostgreSQL version before using it directly for production. This includes comparing server parameters between the older version source and the newer version target. Ensure that they are the same, and check on any new parameters that were added in the new version. Differences between versions can be found [here](https://www.postgresql.org/docs/release/).
postgresql Howto Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-alert-on-metric.md
- Title: Configure alerts - Azure portal - Azure Database for PostgreSQL - Single Server
-description: This article describes how to configure and access metric alerts for Azure Database for PostgreSQL - Single Server from the Azure portal.
----- Previously updated : 5/6/2019--
-# Use the Azure portal to set up alerts on metrics for Azure Database for PostgreSQL - Single Server
-
-This article shows you how to set up Azure Database for PostgreSQL alerts using the Azure portal. You can receive an alert based on monitoring metrics for your Azure services.
-
-The alert triggers when the value of a specified metric crosses a threshold you assign. The alert triggers both when the condition is first met, and then afterwards when that condition is no longer being met.
-
-You can configure an alert to do the following actions when it triggers:
-* Send email notifications to the service administrator and co-administrators.
-* Send email to additional emails that you specify.
-* Call a webhook.
-
-You can configure and get information about alert rules using:
-* [Azure portal](../azure-monitor/alerts/alerts-metric.md#create-with-azure-portal)
-* [Azure CLI](../azure-monitor/alerts/alerts-metric.md#with-azure-cli)
-* [Azure Monitor REST API](/rest/api/monitor/metricalerts)
-
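-For example, a hedged Azure CLI equivalent of the storage alert configured in the portal steps below might look like this (the subscription, server, and action group IDs are placeholders):
-
-```azurecli-interactive
-# Fires when average storage percent exceeds 85 over a 30-minute window, evaluated every 5 minutes.
-az monitor metrics alert create --name storage-over-85 --resource-group myresourcegroup \
-    --scopes "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.DBforPostgreSQL/servers/mydemoserver" \
-    --condition "avg storage_percent > 85" --window-size 30m --evaluation-frequency 5m \
-    --action "<action-group-resource-id>"
-```
-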
-## Create an alert rule on a metric from the Azure portal
-1. In the [Azure portal](https://portal.azure.com/), select the Azure Database for PostgreSQL server you want to monitor.
-
-2. Under the **Monitoring** section of the sidebar, select **Alerts** as shown:
-
- :::image type="content" source="./media/howto-alert-on-metric/2-alert-rules.png" alt-text="Select Alert Rules":::
-
-3. Select **Add metric alert** (+ icon).
-
-4. The **Create rule** page opens as shown below. Fill in the required information:
-
- :::image type="content" source="./media/howto-alert-on-metric/4-add-rule-form.png" alt-text="Add metric alert form":::
-
-5. Within the **Condition** section, select **Add condition**.
-
-6. Select a metric from the list of signals to be alerted on. In this example, select "Storage percent".
-
- :::image type="content" source="./media/howto-alert-on-metric/6-configure-signal-logic.png" alt-text="Select metric":::
-
-7. Configure the alert logic including the **Condition** (ex. "Greater than"), **Threshold** (ex. 85 percent), **Time Aggregation**, **Period** of time the metric rule must be satisfied before the alert triggers (ex. "Over the last 30 minutes"), and **Frequency**.
-
- Select **Done** when complete.
-
- :::image type="content" source="./media/howto-alert-on-metric/7-set-threshold-time.png" alt-text="Screenshot that highlights the Alert logic section and the Done button.":::
-
-8. Within the **Action Groups** section, select **Create New** to create a new group to receive notifications on the alert.
-
-9. Fill out the "Add action group" form with a name, short name, subscription, and resource group.
-
-10. Configure an **Email/SMS/Push/Voice** action type.
-
- Choose "Email Azure Resource Manager Role" to select subscription Owners, Contributors, and Readers to receive notifications.
-
- Optionally, provide a valid URI in the **Webhook** field if you want it called when the alert fires.
-
- Select **OK** when completed.
-
- :::image type="content" source="./media/howto-alert-on-metric/10-action-group-type.png" alt-text="Screenshot that shows how to add a new action group.":::
-
-11. Specify an Alert rule name, Description, and Severity.
-
- :::image type="content" source="./media/howto-alert-on-metric/11-name-description-severity.png" alt-text="Action group":::
-
-12. Select **Create alert rule** to create the alert.
-
- Within a few minutes, the alert is active and triggers as previously described.
-
-## Manage your alerts
-Once you have created an alert, you can select it and do the following actions:
-
-* View a graph showing the metric threshold and the actual values from the previous day relevant to this alert.
-* **Edit** or **Delete** the alert rule.
-* **Disable** or **Enable** the alert, if you want to temporarily stop or resume receiving notifications.
-
-## Next steps
-* Learn more about [configuring webhooks in alerts](../azure-monitor/alerts/alerts-webhooks.md).
-* Get an [overview of metrics collection](../azure-monitor/data-platform.md) to make sure your service is available and responsive.
postgresql Howto Auto Grow Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-auto-grow-storage-cli.md
- Title: Auto-grow storage - Azure CLI - Azure Database for PostgreSQL - Single Server
-description: This article describes how you can configure storage auto-grow using the Azure CLI in Azure Database for PostgreSQL - Single Server.
------ Previously updated : 8/7/2019 -
-# Auto-grow Azure Database for PostgreSQL storage - Single Server using the Azure CLI
-This article describes how you can configure an Azure Database for PostgreSQL server storage to grow without impacting the workload.
-
-When a server [reaches the storage limit](./concepts-pricing-tiers.md#reaching-the-storage-limit), it is set to read-only. If storage auto grow is enabled, then for servers with less than 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage is below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. Maximum storage limits as specified [here](./concepts-pricing-tiers.md#storage) apply.
-
-## Prerequisites
-
-- You need an [Azure Database for PostgreSQL server](quickstart-create-server-database-azure-cli.md).
-- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-
-## Enable PostgreSQL server storage auto-grow
-
-Enable server auto-grow storage on an existing server with the following command:
-
-```azurecli-interactive
-az postgres server update --name mydemoserver --resource-group myresourcegroup --auto-grow Enabled
-```
-
-Enable server auto-grow storage while creating a new server with the following command:
-
-```azurecli-interactive
-az postgres server create --resource-group myresourcegroup --name mydemoserver --auto-grow Enabled --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2 --version 9.6
-```
-
-## Next steps
-
-Learn about [how to create alerts on metrics](howto-alert-on-metric.md).
postgresql Howto Auto Grow Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-auto-grow-storage-portal.md
- Title: Auto grow storage - Azure portal - Azure Database for PostgreSQL - Single Server
-description: This article describes how you can configure storage auto-grow using the Azure portal in Azure Database for PostgreSQL - Single Server
----- Previously updated : 5/29/2019-
-# Auto grow storage using the Azure portal in Azure Database for PostgreSQL - Single Server
-This article describes how you can configure an Azure Database for PostgreSQL server storage to grow without impacting the workload.
-
-When a server reaches the allocated storage limit, the server is marked as read-only. However, if you enable storage auto grow, the server storage increases to accommodate the growing data. For servers with less than 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage is below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. Maximum storage limits as specified [here](./concepts-pricing-tiers.md#storage) apply.
-
-## Prerequisites
-To complete this how-to guide, you need:
-- An [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md)
-## Enable storage auto grow
-
-Follow these steps to set PostgreSQL server storage auto grow:
-
-1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for PostgreSQL server.
-
-2. On the PostgreSQL server page, under **Settings**, click **Pricing tier** to open the pricing tier page.
-
-3. In the **Auto-growth** section, select **Yes** to enable storage auto grow.
-
- :::image type="content" source="./media/howto-auto-grow-storage-portal/3-auto-grow.png" alt-text="Azure Database for PostgreSQL - Settings_Pricing_tier - Auto-growth":::
-
-4. Click **OK** to save the changes.
-
-5. A notification will confirm that auto grow was successfully enabled.
-
- :::image type="content" source="./media/howto-auto-grow-storage-portal/5-auto-grow-successful.png" alt-text="Azure Database for PostgreSQL - auto-growth success":::
-
-## Next steps
-
-Learn about [how to create alerts on metrics](howto-alert-on-metric.md).
postgresql Howto Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-auto-grow-storage-powershell.md
- Title: Auto grow storage - Azure PowerShell - Azure Database for PostgreSQL
-description: This article describes how you can enable auto grow storage using PowerShell in Azure Database for PostgreSQL.
----- Previously updated : 06/08/2020 --
-# Auto grow storage in Azure Database for PostgreSQL server using PowerShell
-
-This article describes how you can configure an Azure Database for PostgreSQL server storage to grow
-without impacting the workload.
-
-Storage auto grow prevents your server from
-[reaching the storage limit](./concepts-pricing-tiers.md#reaching-the-storage-limit) and
-becoming read-only. For servers with 100 GB or less of provisioned storage, the size is increased by
-5 GB when the free space is below 10%. For servers with more than 100 GB of provisioned storage, the
-size is increased by 5% when the free space is below 10 GB. Maximum storage limits apply as
-specified in the storage section of the
-[Azure Database for PostgreSQL pricing tiers](./concepts-pricing-tiers.md#storage).
-
-> [!IMPORTANT]
-> Remember that storage can only be scaled up, not down.
-
-## Prerequisites
-
-To complete this how-to guide, you need:
--- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
- [Azure Cloud Shell](https://shell.azure.com/) in the browser
-- An [Azure Database for PostgreSQL server](quickstart-create-postgresql-server-database-using-azure-powershell.md)
-> [!IMPORTANT]
-> While the Az.PostgreSql PowerShell module is in preview, you must install it separately from the Az
-> PowerShell module using the following command: `Install-Module -Name Az.PostgreSql -AllowPrerelease`.
-> Once the Az.PostgreSql PowerShell module is generally available, it becomes part of future Az
-> PowerShell module releases and available natively from within Azure Cloud Shell.
-
-If you choose to use PowerShell locally, connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
-
-## Enable PostgreSQL server storage auto grow
-
-Enable server auto grow storage on an existing server with the following command:
-
-```azurepowershell-interactive
-Update-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -StorageAutogrow Enabled
-```
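-
-To confirm the change, you can read the setting back from the server object (a quick check, assuming the same server and resource group names):
-
-```azurepowershell-interactive
-# Verify that storage auto grow is now enabled
-Get-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
-    Select-Object -Property *Autogrow*
-```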
-
-Enable server auto grow storage while creating a new server with the following command:
-
-```azurepowershell-interactive
-$Password = Read-Host -Prompt 'Please enter your password' -AsSecureString
-New-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -Sku GP_Gen5_2 -StorageAutogrow Enabled -Location westus -AdministratorUsername myadmin -AdministratorLoginPassword $Password
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [How to create and manage read replicas in Azure Database for PostgreSQL using PowerShell](howto-read-replicas-powershell.md).
postgresql Howto Configure Privatelink Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-configure-privatelink-cli.md
- Title: Private Link - Azure CLI - Azure Database for PostgreSQL - Single server
-description: Learn how to configure private link for Azure Database for PostgreSQL- Single server from Azure CLI
-Previously updated: 01/09/2020
-# Create and manage Private Link for Azure Database for PostgreSQL - Single server using CLI
-
-A Private Endpoint is the fundamental building block for private link in Azure. It enables Azure resources, like Virtual Machines (VMs), to communicate privately with private link resources. In this article, you will learn how to use the Azure CLI to create a VM in an Azure Virtual Network and an Azure Database for PostgreSQL Single server with an Azure private endpoint.
-
-> [!NOTE]
-> The private link feature is only available for Azure Database for PostgreSQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers.
-
-## Prerequisites
-
-To step through this how-to guide, you need:
-- An [Azure Database for PostgreSQL server and database](quickstart-create-server-database-azure-cli.md).
-
-If you decide to install and use the Azure CLI locally instead, this how-to guide requires you to use Azure CLI version 2.0.28 or later. To find your installed version, run `az --version`. See [Install Azure CLI](/cli/azure/install-azure-cli) for install or upgrade info.
-
-## Create a resource group
-
-Before you can create any resource, you have to create a resource group to host the Virtual Network. Create a resource group with [az group create](/cli/azure/group). This example creates a resource group named *myResourceGroup* in the *westeurope* location:
-
-```azurecli-interactive
-az group create --name myResourceGroup --location westeurope
-```
-
-## Create a Virtual Network
-Create a Virtual Network with [az network vnet create](/cli/azure/network/vnet). This example creates a default Virtual Network named *myVirtualNetwork* with one subnet named *mySubnet*:
-
-```azurecli-interactive
-az network vnet create \
- --name myVirtualNetwork \
- --resource-group myResourceGroup \
- --subnet-name mySubnet
-```
-
-## Disable subnet private endpoint policies
-Azure deploys resources to a subnet within a virtual network, so you need to create or update the subnet to disable private endpoint [network policies](../private-link/disable-private-endpoint-network-policy.md). Update a subnet configuration named *mySubnet* with [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update):
-
-```azurecli-interactive
-az network vnet subnet update \
- --name mySubnet \
- --resource-group myResourceGroup \
- --vnet-name myVirtualNetwork \
- --disable-private-endpoint-network-policies true
-```
-## Create the VM
-Create a VM with `az vm create`. When prompted, provide a password to be used as the sign-in credentials for the VM. This example creates a VM named *myVm*:
-```azurecli-interactive
-az vm create \
- --resource-group myResourceGroup \
- --name myVm \
- --image Win2019Datacenter
-```
-
- Note the public IP address of the VM. You will use this address to connect to the VM from the internet in the next step.
-
-## Create an Azure Database for PostgreSQL - Single server
-Create an Azure Database for PostgreSQL server with the `az postgres server create` command. Remember that the name of your PostgreSQL server must be unique across Azure, so replace the placeholder value with your own unique server name:
-
-```azurecli-interactive
-# Create a server in the resource group
-az postgres server create \
- --name mydemoserver \
- --resource-group myresourcegroup \
- --location westeurope \
- --admin-user mylogin \
- --admin-password <server_admin_password> \
- --sku-name GP_Gen5_2
-```
-
-## Create the Private Endpoint
-Create a private endpoint for the PostgreSQL server in your Virtual Network:
-
-```azurecli-interactive
-az network private-endpoint create \
- --name myPrivateEndpoint \
- --resource-group myResourceGroup \
- --vnet-name myVirtualNetwork \
- --subnet mySubnet \
- --private-connection-resource-id $(az resource show -g myResourcegroup -n mydemoserver --resource-type "Microsoft.DBforPostgreSQL/servers" --query "id" -o tsv) \
- --group-id postgresqlServer \
- --connection-name myConnection
-```
-
-## Configure the Private DNS Zone
-Create a Private DNS Zone for PostgreSQL server domain and create an association link with the Virtual Network.
-
-```azurecli-interactive
-az network private-dns zone create --resource-group myResourceGroup \
- --name "privatelink.postgres.database.azure.com"
-az network private-dns link vnet create --resource-group myResourceGroup \
- --zone-name "privatelink.postgres.database.azure.com" \
- --name MyDNSLink \
- --virtual-network myVirtualNetwork \
- --registration-enabled false
-
-#Query for the network interface ID
-networkInterfaceId=$(az network private-endpoint show --name myPrivateEndpoint --resource-group myResourceGroup --query 'networkInterfaces[0].id' -o tsv)
-
-az resource show --ids $networkInterfaceId --api-version 2019-04-01 -o json
-# Copy the content for privateIPAddress and FQDN matching the Azure database for PostgreSQL name
-
-#Create DNS records
-az network private-dns record-set a create --name mydemoserver --zone-name privatelink.postgres.database.azure.com --resource-group myResourceGroup
-az network private-dns record-set a add-record --record-set-name mydemoserver --zone-name privatelink.postgres.database.azure.com --resource-group myResourceGroup -a <Private IP Address>
-```
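-
-Rather than copying the private IP address manually, you can extract it with a JMESPath query (a sketch that reuses the `$networkInterfaceId` variable from above):
-
-```azurecli-interactive
-# Pull the private IP address straight from the endpoint's network interface
-privateIp=$(az resource show --ids $networkInterfaceId --api-version 2019-04-01 \
-    --query 'properties.ipConfigurations[0].properties.privateIPAddress' -o tsv)
-echo $privateIp
-```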
-
-> [!NOTE]
-> The FQDN in the customer DNS setting does not resolve to the private IP configured. You will have to set up a DNS zone for the configured FQDN as shown [here](../dns/dns-operations-recordsets-portal.md).
-
-> [!NOTE]
-> In some cases, the Azure Database for PostgreSQL server and the VNet subnet are in different subscriptions. In these cases, you must ensure the following configuration:
-> - Make sure that both subscriptions have the **Microsoft.DBforPostgreSQL** resource provider registered. For more information, refer to [resource providers](../azure-resource-manager/management/resource-providers-and-types.md).
-
-## Connect to a VM from the internet
-
-Connect to the VM *myVm* from the internet as follows:
-
-1. In the portal's search bar, enter *myVm*.
-
-1. Select the **Connect** button. After selecting the **Connect** button, **Connect to virtual machine** opens.
-
-1. Select **Download RDP File**. Azure creates a Remote Desktop Protocol (*.rdp*) file and downloads it to your computer.
-
-1. Open the downloaded *.rdp* file.
-
- 1. If prompted, select **Connect**.
-
- 1. Enter the username and password you specified when creating the VM.
-
- > [!NOTE]
- > You may need to select **More choices** > **Use a different account**, to specify the credentials you entered when you created the VM.
-
-1. Select **OK**.
-
-1. You may receive a certificate warning during the sign-in process. If you receive a certificate warning, select **Yes** or **Continue**.
-
-1. Once the VM desktop appears, minimize it to go back to your local desktop.
-
-## Access the PostgreSQL server privately from the VM
-
-1. In the Remote Desktop of *myVM*, open PowerShell.
-
-2. Enter `nslookup mydemopostgresserver.privatelink.postgres.database.azure.com`.
-
- You'll receive a message similar to this:
-
- ```azurepowershell
- Server: UnKnown
- Address: 168.63.129.16
- Non-authoritative answer:
- Name: mydemopostgresserver.privatelink.postgres.database.azure.com
- Address: 10.1.3.4
- ```
-
-3. Test the private link connection for the PostgreSQL server by using any available client. The following example uses [Azure Data Studio](/sql/azure-data-studio/download).
-
-4. In **New connection**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | Server type| Select **PostgreSQL**.|
- | Server name| Select *mydemopostgresserver.privatelink.postgres.database.azure.com* |
- | User name | Enter the username in the format *username@servername*, which you provided during the PostgreSQL server creation. |
- | Password | Enter the password that you provided during the PostgreSQL server creation. |
- | SSL | Select **Required**. |
- ||
-
-5. Select **Connect**.
-
-6. Browse databases from the left menu.
-
-7. (Optional) Create or query information from the PostgreSQL server.
-
-8. Close the remote desktop connection to myVm.
-
-## Clean up resources
-When no longer needed, you can use `az group delete` to remove the resource group and all the resources it contains:
-
-```azurecli-interactive
-az group delete --name myResourceGroup --yes
-```
-
-## Next steps
-- Learn more about [What is Azure private endpoint](../private-link/private-endpoint-overview.md)
postgresql Howto Configure Privatelink Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-configure-privatelink-portal.md
- Title: Private Link - Azure portal - Azure Database for PostgreSQL - Single server
-description: Learn how to configure private link for Azure Database for PostgreSQL- Single server from Azure portal
-Previously updated: 01/09/2020
-# Create and manage Private Link for Azure Database for PostgreSQL - Single server using Portal
-
-A Private Endpoint is the fundamental building block for private link in Azure. It enables Azure resources, like Virtual Machines (VMs), to communicate privately with private link resources. In this article, you will learn how to use the Azure portal to create a VM in an Azure Virtual Network and an Azure Database for PostgreSQL Single server with an Azure private endpoint.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-> [!NOTE]
-> The private link feature is only available for Azure Database for PostgreSQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers.
-
-## Sign in to Azure
-Sign in to the [Azure portal](https://portal.azure.com).
-
-## Create an Azure VM
-
-In this section, you will create the VM that is used to access your private link resource (a PostgreSQL server in Azure), along with the virtual network and subnet that host it.
-
-### Create the virtual network
-In this section, you will create a Virtual Network and the subnet to host the VM that is used to access your Private Link resource.
-
-1. On the upper-left side of the screen, select **Create a resource** > **Networking** > **Virtual network**.
-2. In **Create virtual network**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | Name | Enter *MyVirtualNetwork*. |
- | Address space | Enter *10.1.0.0/16*. |
- | Subscription | Select your subscription.|
- | Resource group | Select **Create new**, enter *myResourceGroup*, then select **OK**. |
- | Location | Select **West Europe**.|
- | Subnet - Name | Enter *mySubnet*. |
- | Subnet - Address range | Enter *10.1.0.0/24*. |
- |||
-3. Leave the rest as default and select **Create**.
-
-### Create Virtual Machine
-
-1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Compute** > **Virtual Machine**.
-
-2. In **Create a virtual machine - Basics**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | **PROJECT DETAILS** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. You created this in the previous section. |
- | **INSTANCE DETAILS** | |
- | Virtual machine name | Enter *myVm*. |
- | Region | Select **West Europe**. |
- | Availability options | Leave the default **No infrastructure redundancy required**. |
- | Image | Select **Windows Server 2019 Datacenter**. |
- | Size | Leave the default **Standard DS1 v2**. |
- | **ADMINISTRATOR ACCOUNT** | |
- | Username | Enter a username of your choosing. |
- | Password | Enter a password of your choosing. The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/windows/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).|
- | Confirm Password | Reenter password. |
- | **INBOUND PORT RULES** | |
- | Public inbound ports | Leave the default **None**. |
- | **SAVE MONEY** | |
- | Already have a Windows license? | Leave the default **No**. |
- |||
-
-1. Select **Next: Disks**.
-
-1. In **Create a virtual machine - Disks**, leave the defaults and select **Next: Networking**.
-
-1. In **Create a virtual machine - Networking**, select this information:
-
- | Setting | Value |
- | - | -- |
- | Virtual network | Leave the default **MyVirtualNetwork**. |
- | Address space | Leave the default **10.1.0.0/24**.|
- | Subnet | Leave the default **mySubnet (10.1.0.0/24)**.|
- | Public IP | Leave the default **(new) myVm-ip**. |
- | Public inbound ports | Select **Allow selected ports**. |
- | Select inbound ports | Select **HTTP** and **RDP**.|
- |||
-
-1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-
-1. When you see the **Validation passed** message, select **Create**.
-
-> [!NOTE]
-> In some cases the Azure Database for PostgreSQL and the VNet-subnet are in different subscriptions. In these cases you must ensure the following configurations:
-> - Make sure that both subscriptions have the **Microsoft.DBforPostgreSQL** resource provider registered. For more information, see [resource-manager-registration][resource-manager-portal].
-
-## Create an Azure Database for PostgreSQL Single server
-
-In this section, you will create an Azure Database for PostgreSQL server in Azure.
-
-1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Databases** > **Azure Database for PostgreSQL**.
-
-1. In **Azure Database for PostgreSQL deployment option**, select **Single server** and provide this information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. You created this in the previous section.|
- | **Server details** | |
- |Server name | Enter *myserver*. If this name is taken, create a unique name.|
- | Admin username| Enter an administrator name of your choosing. |
- | Password | Enter a password of your choosing. The password must be at least 8 characters long and meet the defined requirements. |
 | Location | Select an Azure region where you want your PostgreSQL server to reside. |
- |Version | Select the database version of the PostgreSQL server that is required.|
- | Compute + Storage| Select the pricing tier that is needed for the server based on the workload. |
- |||
-
-7. Select **OK**.
-8. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-9. When you see the **Validation passed** message, select **Create**.
-
-## Create a private endpoint
-
-In this section, you will add a private endpoint to the PostgreSQL server that you created earlier.
-
-1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Networking** > **Private Link**.
-2. In **Private Link Center - Overview**, on the option to **Build a private connection to a service**, select **Start**.
-
- :::image type="content" source="media/concepts-data-access-and-security-private-link/privatelink-overview.png" alt-text="Private Link overview":::
-
-1. In **Create a private endpoint - Basics**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. You created this in the previous section.|
- | **Instance Details** | |
- | Name | Enter *myPrivateEndpoint*. If this name is taken, create a unique name. |
- |Region|Select **West Europe**.|
- |||
-5. Select **Next: Resource**.
-6. In **Create a private endpoint - Resource**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
 | Connection method | Select **Connect to an Azure resource in my directory**. |
- | Subscription| Select your subscription. |
- | Resource type | Select **Microsoft.DBforPostgreSQL/servers**. |
 | Resource | Select *myserver*. |
 | Target sub-resource | Select *postgresqlServer*. |
- |||
-7. Select **Next: Configuration**.
-8. In **Create a private endpoint - Configuration**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- |**NETWORKING**| |
- | Virtual network| Select *MyVirtualNetwork*. |
- | Subnet | Select *mySubnet*. |
- |**PRIVATE DNS INTEGRATION**||
- |Integrate with private DNS zone |Select **Yes**. |
- |Private DNS Zone |Select *(New)privatelink.postgres.database.azure.com* |
- |||
-
- > [!Note]
- > Use the predefined private DNS zone for your service or provide your preferred DNS zone name. Refer to the [Azure services DNS zone configuration](../private-link/private-endpoint-dns.md) for details.
-
-1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-2. When you see the **Validation passed** message, select **Create**.
-
- :::image type="content" source="media/concepts-data-access-and-security-private-link/show-postgres-private-link.png" alt-text="Private Link created":::
-
- > [!NOTE]
- > The FQDN in the customer DNS setting does not resolve to the private IP configured. You will have to setup a DNS zone for the configured FQDN as shown [here](../dns/dns-operations-recordsets-portal.md).
-
-## Connect to a VM using Remote Desktop (RDP)
-
-After you've created **myVm**, connect to it from the internet as follows:
-
-1. In the portal's search bar, enter *myVm*.
-
-1. Select the **Connect** button. After selecting the **Connect** button, **Connect to virtual machine** opens.
-
-1. Select **Download RDP File**. Azure creates a Remote Desktop Protocol (*.rdp*) file and downloads it to your computer.
-
-1. Open the downloaded *.rdp* file.
-
- 1. If prompted, select **Connect**.
-
- 1. Enter the username and password you specified when creating the VM.
-
- > [!NOTE]
- > You may need to select **More choices** > **Use a different account**, to specify the credentials you entered when you created the VM.
-
-1. Select **OK**.
-
-1. You may receive a certificate warning during the sign-in process. If you receive a certificate warning, select **Yes** or **Continue**.
-
-1. Once the VM desktop appears, minimize it to go back to your local desktop.
-
-## Access the PostgreSQL server privately from the VM
-
-1. In the Remote Desktop of *myVM*, open PowerShell.
-
-2. Enter `nslookup mydemopostgresserver.privatelink.postgres.database.azure.com`.
-
- You'll receive a message similar to this:
- ```azurepowershell
- Server: UnKnown
- Address: 168.63.129.16
- Non-authoritative answer:
- Name: mydemopostgresserver.privatelink.postgres.database.azure.com
- Address: 10.1.3.4
- ```
-
-3. Test the private link connection for the PostgreSQL server by using any available client. The following example uses [Azure Data Studio](/sql/azure-data-studio/download).
-
-4. In **New connection**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | Server type| Select **PostgreSQL**.|
- | Server name| Select *mydemopostgresserver.privatelink.postgres.database.azure.com* |
 | User name | Enter the username in the format *username@servername*, which you provided during the PostgreSQL server creation. |
 | Password | Enter the password that you provided during the PostgreSQL server creation. |
 | SSL | Select **Required**. |
- ||
-
-5. Select **Connect**.
-
-6. Browse databases from the left menu.
-
-7. (Optional) Create or query information from the PostgreSQL server.
-
-8. Close the remote desktop connection to myVm.
-
-## Clean up resources
-When you're done using the private endpoint, PostgreSQL server, and the VM, delete the resource group and all of the resources it contains:
-
-1. Enter *myResourceGroup* in the **Search** box at the top of the portal and select *myResourceGroup* from the search results.
-2. Select **Delete resource group**.
-3. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME** and select **Delete**.
-
-## Next steps
-
-In this how-to, you created a VM on a virtual network, an Azure Database for PostgreSQL - Single server, and a private endpoint for private access. You connected to one VM from the internet and securely communicated to the PostgreSQL server using Private Link. To learn more about private endpoints, see [What is Azure private endpoint](../private-link/private-endpoint-overview.md).
-
-<!-- Link references, to text, Within this same GitHub repo. -->
-[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
postgresql Howto Configure Server Logs In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-configure-server-logs-in-portal.md
- Title: Manage logs - Azure portal - Azure Database for PostgreSQL - Single Server
-description: This article describes how to configure and access the server logs (.log files) in Azure Database for PostgreSQL - Single Server from the Azure portal.
-Previously updated: 5/6/2019
-# Configure and access Azure Database for PostgreSQL - Single Server logs from the Azure portal
-
-You can configure, list, and download the [Azure Database for PostgreSQL logs](concepts-server-logs.md) from the Azure portal.
-
-## Prerequisites
-The steps in this article require that you have [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md).
-
-## Configure logging
-Configure access to the query logs and error logs.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-2. Select your Azure Database for PostgreSQL server.
-
-3. Under the **Monitoring** section in the sidebar, select **Server logs**.
-
- :::image type="content" source="./media/howto-configure-server-logs-in-portal/1-select-server-logs-configure.png" alt-text="Screenshot of Server logs options":::
-
-4. To see the server parameters, select **Click here to enable logs and configure log parameters**.
-
-5. Change the parameters that you need to adjust. All changes you make in this session are highlighted in purple.
-
- After you have changed the parameters, select **Save**. Or, you can discard your changes.
-
- :::image type="content" source="./media/howto-configure-server-logs-in-portal/3-save-discard.png" alt-text="Screenshot of Server Parameters options":::
-
-From the **Server Parameters** page, you can return to the list of logs by closing the page.
-
-## View list and download logs
-After logging begins, you can view a list of available logs, and download individual log files.
-
-1. Open the Azure portal.
-
-2. Select your Azure Database for PostgreSQL server.
-
-3. Under the **Monitoring** section in the sidebar, select **Server logs**. The page shows a list of your log files.
-
- :::image type="content" source="./media/howto-configure-server-logs-in-portal/4-server-logs-list.png" alt-text="Screenshot of Server logs page, with list of logs highlighted":::
-
- > [!TIP]
- > The naming convention of the log is **postgresql-yyyy-mm-dd_hhmmss.log**. The date and time used in the file name is the time when the log was issued. The log files rotate every hour or 100 MB, whichever comes first.
-
-4. If needed, use the search box to quickly narrow down to a specific log, based on date and time. The search is on the name of the log.
-
- :::image type="content" source="./media/howto-configure-server-logs-in-portal/5-search.png" alt-text="Screenshot of Server logs page, with search box and results highlighted":::
-
-5. To download individual log files, select the down-arrow icon next to each log file in the table row.
-
- :::image type="content" source="./media/howto-configure-server-logs-in-portal/6-download.png" alt-text="Screenshot of Server logs page, with down-arrow icon highlighted":::
-
-## Next steps
-- See [Access server logs in CLI](howto-configure-server-logs-using-cli.md) to learn how to download logs programmatically.
-- Learn more about [server logs](concepts-server-logs.md) in Azure Database for PostgreSQL.
-- For more information about the parameter definitions and PostgreSQL logging, see the PostgreSQL documentation on [error reporting and logging](https://www.postgresql.org/docs/current/static/runtime-config-logging.html).
-
postgresql Howto Configure Server Logs Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-configure-server-logs-using-cli.md
- Title: Manage logs - Azure CLI - Azure Database for PostgreSQL - Single Server
-description: This article describes how to configure and access the server logs (.log files) in Azure Database for PostgreSQL - Single Server by using the Azure CLI.
-Previously updated: 5/6/2019
-# Configure and access server logs by using Azure CLI
-You can download the PostgreSQL server error logs by using the command-line interface (Azure CLI). However, access to transaction logs isn't supported.
-
-## Prerequisites
-To step through this how-to guide, you need:
-- [Azure Database for PostgreSQL server](quickstart-create-server-database-azure-cli.md)
-- The [Azure CLI](/cli/azure/install-azure-cli) command-line utility or Azure Cloud Shell in the browser
-
-## Configure logging
-You can configure the server to access query logs and error logs. Error logs can contain auto-vacuum, connection, and checkpoint information.
-1. Turn on logging.
-2. To enable query logging, update **log\_statement** and **log\_min\_duration\_statement**.
-3. Update the retention period.
-
-For more information, see [Customizing server configuration parameters](howto-configure-server-parameters-using-cli.md).
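-
-For example, query logging could be enabled with the configuration commands described in that guide (an illustrative sketch; choose values appropriate for your workload):
-
-```azurecli-interactive
-# Log all statements, plus the duration of any statement running longer than 10 seconds
-az postgres server configuration set --name log_statement --value all --resource-group myresourcegroup --server mydemoserver
-az postgres server configuration set --name log_min_duration_statement --value 10000 --resource-group myresourcegroup --server mydemoserver
-```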
-
-## List logs
-To list the available log files for your server, run the [az postgres server-logs list](/cli/azure/postgres/server-logs) command.
-
-You can list the log files for server **mydemoserver.postgres.database.azure.com** under the resource group **myresourcegroup**. Then direct the list of log files to a text file called **log\_files\_list.txt**.
-```azurecli-interactive
-az postgres server-logs list --resource-group myresourcegroup --server mydemoserver > log_files_list.txt
-```
-## Download logs locally from the server
-With the [az postgres server-logs download](/cli/azure/postgres/server-logs) command, you can download individual log files for your server.
-
-Use the following example to download the specific log file for the server **mydemoserver.postgres.database.azure.com** under the resource group **myresourcegroup** to your local environment.
-```azurecli-interactive
-az postgres server-logs download --name 20170414-mydemoserver-postgresql.log --resource-group myresourcegroup --server mydemoserver
-```
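-
-If you want every available log file, the list output can drive the download in a loop (a sketch, assuming the same server and resource group):
-
-```azurecli-interactive
-# Download all available log files for the server
-for f in $(az postgres server-logs list --resource-group myresourcegroup --server mydemoserver --query "[].name" -o tsv)
-do
-    az postgres server-logs download --name "$f" --resource-group myresourcegroup --server mydemoserver
-done
-```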
-## Next steps
-- To learn more about server logs, see [Server logs in Azure Database for PostgreSQL](concepts-server-logs.md).-- For more information about server parameters, see [Customize server configuration parameters using Azure CLI](howto-configure-server-parameters-using-cli.md).
postgresql Howto Configure Server Parameters Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-configure-server-parameters-using-cli.md
- Title: Configure parameters - Azure Database for PostgreSQL - Single Server
-description: This article describes how to configure Postgres parameters in Azure Database for PostgreSQL - Single Server using the Azure CLI.
-Previously updated: 06/19/2019
-# Customize server configuration parameters for Azure Database for PostgreSQL - Single Server using Azure CLI
-You can list, show, and update configuration parameters for an Azure PostgreSQL server using the Command Line Interface (Azure CLI). A subset of engine configurations is exposed at server-level and can be modified.
-
-## Prerequisites
-To step through this how-to guide, you need:
-- Create an Azure Database for PostgreSQL server and database by following [Create an Azure Database for PostgreSQL](quickstart-create-server-database-azure-cli.md)
-- Install the [Azure CLI](/cli/azure/install-azure-cli) command-line interface on your machine, or use [Azure Cloud Shell](../cloud-shell/overview.md) in the Azure portal from your browser.
-
-## List server configuration parameters for Azure Database for PostgreSQL server
-To list all modifiable parameters in a server and their values, run the [az postgres server configuration list](/cli/azure/postgres/server/configuration) command.
-
-You can list the server configuration parameters for the server **mydemoserver.postgres.database.azure.com** under resource group **myresourcegroup**.
-```azurecli-interactive
-az postgres server configuration list --resource-group myresourcegroup --server mydemoserver
-```
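-
-Because the full list is long, a JMESPath query can narrow it on the client side (an illustrative sketch):
-
-```azurecli-interactive
-# Show only the parameters whose name contains "log"
-az postgres server configuration list --resource-group myresourcegroup --server mydemoserver --query "[?contains(name, 'log')].{name:name, value:value}" -o table
-```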
-## Show server configuration parameter details
-To show details about a particular configuration parameter for a server, run the [az postgres server configuration show](/cli/azure/postgres/server/configuration) command.
-
-This example shows details of the **log\_min\_messages** server configuration parameter for server **mydemoserver.postgres.database.azure.com** under resource group **myresourcegroup.**
-```azurecli-interactive
-az postgres server configuration show --name log_min_messages --resource-group myresourcegroup --server mydemoserver
-```
-## Modify server configuration parameter value
-You can also modify the value of a certain server configuration parameter, which updates the underlying configuration value for the PostgreSQL server engine. To update the configuration, use the [az postgres server configuration set](/cli/azure/postgres/server/configuration) command.
-
-To update the **log\_min\_messages** server configuration parameter of server **mydemoserver.postgres.database.azure.com** under resource group **myresourcegroup.**
-```azurecli-interactive
-az postgres server configuration set --name log_min_messages --resource-group myresourcegroup --server mydemoserver --value INFO
-```
-If you want to reset the value of a configuration parameter, simply leave out the optional `--value` parameter, and the service applies the default value. For the above example, it would look like:
-```azurecli-interactive
-az postgres server configuration set --name log_min_messages --resource-group myresourcegroup --server mydemoserver
-```
-This command resets the **log\_min\_messages** configuration to the default value **WARNING**. For more information on server configuration and permissible values, see the PostgreSQL documentation on [Server Configuration](https://www.postgresql.org/docs/current/runtime-config.html).
-
-## Next steps
-- [Learn how to restart a server](howto-restart-server-cli.md)
-- To configure and access server logs, see [Server Logs in Azure Database for PostgreSQL](concepts-server-logs.md)
postgresql Howto Configure Server Parameters Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-configure-server-parameters-using-portal.md
- Title: Configure server parameters - Azure portal - Azure Database for PostgreSQL - Single Server
-description: This article describes how to configure the Postgres parameters in Azure Database for PostgreSQL through the Azure portal.
-Previously updated: 02/28/2018
-# Configure server parameters in Azure Database for PostgreSQL - Single Server via the Azure portal
-You can list, show, and update configuration parameters for an Azure Database for PostgreSQL server through the Azure portal.
-
-## Prerequisites
-To step through this how-to guide, you need:
-- [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md)
-
-## Viewing and editing parameters
-1. Open the [Azure portal](https://portal.azure.com).
-
-2. Select your Azure Database for PostgreSQL server.
-
-3. Under the **SETTINGS** section, select **Server parameters**. The page shows a list of parameters, their values, and descriptions.
-
-4. Select the **drop down** button to see the possible values for enumerated-type parameters like client_min_messages.
-
-5. Select or hover over the **i** (information) button to see the range of possible values for numeric parameters like cpu_index_tuple_cost.
-
-6. If needed, use the **search box** to narrow down to a specific parameter. The search is on the name and description of the parameters.
-
-7. Change the parameter values you would like to adjust. All changes you make in a session are highlighted in purple. Once you have changed the values, you can select **Save**. Or you can **Discard** your changes.
-
-8. If you have saved new values for the parameters, you can always revert everything back to the default values by selecting **Reset all to default**.
-
-## Next steps
-Learn about:
-- [Overview of server parameters in Azure Database for PostgreSQL](concepts-servers.md)
-- [Configuring parameters using the Azure CLI](howto-configure-server-parameters-using-cli.md)
postgresql Howto Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-configure-server-parameters-using-powershell.md
- Title: Configure server parameters - Azure PowerShell - Azure Database for PostgreSQL
-description: This article describes how to configure the service parameters in Azure Database for PostgreSQL using PowerShell.
-Previously updated: 06/08/2020
-# Customize Azure Database for PostgreSQL server parameters using PowerShell
-
-You can list, show, and update configuration parameters for an Azure Database for PostgreSQL server using
-PowerShell. A subset of engine configurations is exposed at the server-level and can be modified.
-
-## Prerequisites
-
-To complete this how-to guide, you need:
-
-- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
-  [Azure Cloud Shell](https://shell.azure.com/) in the browser
-- An [Azure Database for PostgreSQL server](quickstart-create-postgresql-server-database-using-azure-powershell.md)
-
-> [!IMPORTANT]
-> While the Az.PostgreSql PowerShell module is in preview, you must install it separately from the Az
-> PowerShell module using the following command: `Install-Module -Name Az.PostgreSql -AllowPrerelease`.
-> Once the Az.PostgreSql PowerShell module is generally available, it becomes part of future Az
-> PowerShell module releases and available natively from within Azure Cloud Shell.
-
-If you choose to use PowerShell locally, connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
-
-## List server configuration parameters for Azure Database for PostgreSQL server
-
-To list all modifiable parameters in a server and their values, run the `Get-AzPostgreSqlConfiguration`
-cmdlet.
-
-The following example lists the server configuration parameters for the server **mydemoserver** in
-resource group **myresourcegroup**.
-
-```azurepowershell-interactive
-Get-AzPostgreSqlConfiguration -ResourceGroupName myresourcegroup -ServerName mydemoserver
-```
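-
-To narrow the long list, you can filter it on the client side (a sketch using the same server and resource group names):
-
-```azurepowershell-interactive
-# Show only the parameters whose name contains "log"
-Get-AzPostgreSqlConfiguration -ResourceGroupName myresourcegroup -ServerName mydemoserver |
-    Where-Object Name -Like '*log*' |
-    Select-Object -Property Name, Value
-```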
-
-For the definition of each of the listed parameters, see the PostgreSQL documentation on
-[Server Configuration](https://www.postgresql.org/docs/current/runtime-config.html).
-
-## Show server configuration parameter details
-
-To show details about a particular configuration parameter for a server, run the
-`Get-AzPostgreSqlConfiguration` cmdlet and specify the **Name** parameter.
-
-This example shows details of the **log\_min\_messages** server configuration parameter for server
-**mydemoserver** under resource group **myresourcegroup**.
-
-```azurepowershell-interactive
-Get-AzPostgreSqlConfiguration -Name log_min_messages -ResourceGroupName myresourcegroup -ServerName mydemoserver
-```
-
-## Modify a server configuration parameter value
-
-You can also modify the value of a certain server configuration parameter, which updates the
-underlying configuration value for the PostgreSQL server engine. To update the configuration, use the
-`Update-AzPostgreSqlConfiguration` cmdlet.
-
-To update the **log\_min\_messages** server configuration parameter of server
-**mydemoserver** under resource group **myresourcegroup**:
-
-```azurepowershell-interactive
-Update-AzPostgreSqlConfiguration -Name log_min_messages -ResourceGroupName myresourcegroup -ServerName mydemoserver -Value INFO
-```
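-
-To confirm that the new value took effect, read the parameter back (same names as above):
-
-```azurepowershell-interactive
-# Read the parameter back to confirm the update
-Get-AzPostgreSqlConfiguration -Name log_min_messages -ResourceGroupName myresourcegroup -ServerName mydemoserver |
-    Select-Object -Property Name, Value
-```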
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Auto grow storage in Azure Database for PostgreSQL server using PowerShell](howto-auto-grow-storage-powershell.md).
postgresql Howto Configure Sign In Aad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-configure-sign-in-aad-authentication.md
- Title: Use Azure Active Directory - Azure Database for PostgreSQL - Single Server
-description: Learn about how to set up Azure Active Directory (AAD) for authentication with Azure Database for PostgreSQL - Single Server
-Previously updated: 05/26/2021
-# Use Azure Active Directory for authentication with PostgreSQL
-
-This article walks you through the steps to configure Azure Active Directory access with Azure Database for PostgreSQL, and how to connect using an Azure AD token.
-
-## Setting the Azure AD Admin user
-
-Only Azure AD administrator users can create and enable users for Azure AD-based authentication. We recommend not using the Azure AD administrator for regular database operations, because that role has elevated user permissions (for example, CREATEDB).
-
-To set the Azure AD administrator (you can use a user or a group), follow these steps:
-
-1. In the Azure portal, select the instance of Azure Database for PostgreSQL that you want to enable for Azure AD.
-2. Under Settings, select Active Directory Admin:
-
-![set azure ad administrator][2]
-
-3. Select a valid Azure AD user in the customer tenant to be Azure AD administrator.
-
-> [!IMPORTANT]
-> When setting the administrator, a new user is added to the Azure Database for PostgreSQL server with full administrator permissions.
-> The Azure AD Admin user in Azure Database for PostgreSQL will have the role `azure_ad_admin`.
-> Only one Azure AD admin can be created per PostgreSQL server and selection of another one will overwrite the existing Azure AD admin configured for the server.
-> You can specify an Azure AD group instead of an individual user to have multiple administrators.
-
-When you use an Azure AD group, you'll sign in with the group name for administration purposes.
-
-## Connecting to Azure Database for PostgreSQL using Azure AD
-
-The following high-level diagram summarizes the workflow of using Azure AD authentication with Azure Database for PostgreSQL:
-
-![authentication flow][1]
-
-We've designed the Azure AD integration to work with common PostgreSQL tools like psql, which are not Azure AD aware and only support specifying username and password when connecting to PostgreSQL. We pass the Azure AD token as the password as shown in the picture above.
-
-We've currently tested the following clients:
-
-- psql command line (use the PGPASSWORD variable to pass the token; see step 3 for more information)
-- Azure Data Studio (using the PostgreSQL extension)
-- Other libpq-based clients (for example, common application frameworks and ORMs)
-- pgAdmin (uncheck **Connect now** at server creation; see step 4 for more information)
-
-The steps below describe how a user or application authenticates with Azure AD:
-
-### Prerequisites
-
-You can follow along in Azure Cloud Shell, an Azure VM, or on your local machine. Make sure you have the [Azure CLI installed](/cli/azure/install-azure-cli).
-
-## Authenticate with Azure AD as a single user
-
-### Step 1: Log in to the user's Azure subscription
-
-Start by authenticating with Azure AD using the Azure CLI tool. This step is not required in Azure Cloud Shell.
-
-```azurecli
-az login
-```
-
-The command will launch a browser window to the Azure AD authentication page. It prompts you for your Azure AD user ID and password.
-
-### Step 2: Retrieve Azure AD access token
-
-Invoke the Azure CLI tool to acquire an access token for the Azure AD authenticated user from step 1 to access Azure Database for PostgreSQL.
-
-Example (for Public Cloud):
-
-```azurecli-interactive
-az account get-access-token --resource https://ossrdbms-aad.database.windows.net
-```
-
-The above resource value must be specified exactly as shown. For other clouds, the resource value can be looked up using:
-
-```azurecli-interactive
-az cloud show
-```
-
-For Azure CLI version 2.0.71 and later, the command can be specified in the following more convenient version for all clouds:
-
-```azurecli-interactive
-az account get-access-token --resource-type oss-rdbms
-```
-
-After authentication is successful, Azure AD will return an access token:
-
-```json
-{
- "accessToken": "TOKEN",
- "expiresOn": "...",
- "subscription": "...",
- "tenant": "...",
- "tokenType": "Bearer"
-}
-```
-
-The token is a Base64 string that encodes all the information about the authenticated user, and is targeted to the Azure Database for PostgreSQL service.
-
-### Step 3: Use token as password for logging in with client psql
-
-When connecting, you need to use the access token as the PostgreSQL user password.
-
-When using the `psql` command line client, the access token needs to be passed through the `PGPASSWORD` environment variable, since the access token exceeds the password length that `psql` can accept directly:
-
-Windows Example:
-
-```cmd
-set PGPASSWORD=<copy/pasted TOKEN value from step 2>
-```
-
-```PowerShell
-$env:PGPASSWORD='<copy/pasted TOKEN value from step 2>'
-```
-
-Linux/macOS Example:
-
-```shell
-export PGPASSWORD=<copy/pasted TOKEN value from step 2>
-```
-
-Now you can initiate a connection with Azure Database for PostgreSQL like you normally would:
-
-```shell
-psql "host=mydb.postgres... user=user@tenant.onmicrosoft.com@mydb dbname=postgres sslmode=require"
-```
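-
-On Linux or macOS, the token retrieval and the login can be combined (a sketch that assumes the Azure CLI from step 2 and a hypothetical server name):
-
-```shell
-export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)
-psql "host=mydb.postgres.database.azure.com user=user@tenant.onmicrosoft.com@mydb dbname=postgres sslmode=require"
-```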
-### Step 4: Use the token as a password for logging in with pgAdmin
-
-To connect using an Azure AD token with pgAdmin, follow these steps:
-1. Uncheck the **Connect now** option at server creation.
-2. Enter your server details in the connection tab and save.
-3. From the browser menu, select **Connect** to connect to the Azure Database for PostgreSQL server.
-4. Enter the AD token as the password when prompted.
-
-Important considerations when connecting:
-
-* `user@tenant.onmicrosoft.com` is the name of the Azure AD user
-* Make sure to spell the Azure AD user name exactly as it appears in Azure AD; user and group names are case sensitive.
-* If the name contains spaces, use `\` before each space to escape it.
-* The access token validity is anywhere between 5 and 60 minutes. We recommend you get the access token just before initiating the sign-in to Azure Database for PostgreSQL.
-
-You are now authenticated to your Azure Database for PostgreSQL server using Azure AD authentication.
-
-## Authenticate with Azure AD as a group member
-
-### Step 1: Create Azure AD groups in Azure Database for PostgreSQL
-
-To enable an Azure AD group for access to your database, use the same mechanism as for users, but instead specify the group name:
-
-Example:
-
-```sql
-CREATE ROLE "Prod DB Readonly" WITH LOGIN IN ROLE azure_ad_user;
-```
-When logging in, members of the group will use their personal access tokens but sign in with the group name specified as the username.
-
-### Step 2: Log in to the user's Azure subscription
-
-Authenticate with Azure AD using the Azure CLI tool. This step is not required in Azure Cloud Shell. The user needs to be a member of the Azure AD group.
-
-```azurecli
-az login
-```
-
-### Step 3: Retrieve Azure AD access token
-
-Invoke the Azure CLI tool to acquire an access token for the Azure AD authenticated user from step 2 to access Azure Database for PostgreSQL.
-
-Example (for Public Cloud):
-
-```azurecli-interactive
-az account get-access-token --resource https://ossrdbms-aad.database.windows.net
-```
-
-The above resource value must be specified exactly as shown. For other clouds, the resource value can be looked up using:
-
-```azurecli-interactive
-az cloud show
-```
-
-For Azure CLI version 2.0.71 and later, the command can be specified in the following more convenient version for all clouds:
-
-```azurecli-interactive
-az account get-access-token --resource-type oss-rdbms
-```
-
-After authentication is successful, Azure AD will return an access token:
-
-```json
-{
- "accessToken": "TOKEN",
- "expiresOn": "...",
- "subscription": "...",
- "tenant": "...",
- "tokenType": "Bearer"
-}
-```
-
-### Step 4: Use the token as a password for logging in with psql or pgAdmin (see the steps above for the user connection)
-
-Important considerations when connecting as a group member:
-* groupname@mydb is the name of the Azure AD group you are trying to connect as
-* Always append the server name after the Azure AD user or group name (for example, @mydb)
-* Make sure to spell the Azure AD group name exactly as it appears in Azure AD.
-* Azure AD user and group names are case sensitive
-* When connecting as a group, use only the group name (for example, GroupName@mydb) and not the alias of a group member.
-* If the name contains spaces, use \ before each space to escape it.
-* The access token validity is anywhere between 5 and 60 minutes. We recommend you get the access token just before initiating the sign-in to Azure Database for PostgreSQL.
-
-You are now authenticated to your PostgreSQL server using Azure AD authentication.
-
-## Creating Azure AD users in Azure Database for PostgreSQL
-
-To add an Azure AD user to your Azure Database for PostgreSQL database, perform the following steps after connecting (see the earlier sections on how to connect):
-
-1. First ensure that the Azure AD user `<user>@yourtenant.onmicrosoft.com` is a valid user in your Azure AD tenant.
-2. Sign in to your Azure Database for PostgreSQL instance as the Azure AD Admin user.
-3. Create role `<user>@yourtenant.onmicrosoft.com` in Azure Database for PostgreSQL.
-4. Make `<user>@yourtenant.onmicrosoft.com` a member of the azure_ad_user role. This role must be given only to Azure AD users.
-
-**Example:**
-
-```sql
-CREATE ROLE "user1@yourtenant.onmicrosoft.com" WITH LOGIN IN ROLE azure_ad_user;
-```
-
-> [!NOTE]
-> Authenticating a user through Azure AD does not give the user any permissions to access objects within the Azure Database for PostgreSQL database. You must grant the user the required permissions manually.
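-
-For example, read access to a schema might be granted like this (an illustrative sketch; adjust the objects and privileges to your environment):
-
-```sql
--- Allow the new Azure AD role to read all existing tables in the public schema
-GRANT USAGE ON SCHEMA public TO "user1@yourtenant.onmicrosoft.com";
-GRANT SELECT ON ALL TABLES IN SCHEMA public TO "user1@yourtenant.onmicrosoft.com";
-```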
-
-## Token Validation
-
-Azure AD authentication in Azure Database for PostgreSQL ensures that the user exists in the PostgreSQL server, and it checks the validity of the token by validating the contents of the token. The following token validation steps are performed:
-
-- Token is signed by Azure AD and has not been tampered with
-- Token was issued by Azure AD for the tenant associated with the server
-- Token has not expired
-- Token is for the Azure Database for PostgreSQL resource (and not another Azure resource)
-
-## Migrating existing PostgreSQL users to Azure AD-based authentication
-
-You can enable Azure AD authentication for existing users. There are two cases to consider:
-
-### Case 1: PostgreSQL username matches the Azure AD User Principal Name
-
-In the unlikely case that your existing users already match the Azure AD user names, you can grant the `azure_ad_user` role to them in order to enable them for Azure AD authentication:
-
-```sql
-GRANT azure_ad_user TO "existinguser@yourtenant.onmicrosoft.com";
-```
-
-They will now be able to sign in with Azure AD credentials instead of using their previously configured PostgreSQL user password.
-
-### Case 2: PostgreSQL username is different than the Azure AD User Principal Name
-
-If a PostgreSQL user either does not exist in Azure AD or has a different username, you can use Azure AD groups to authenticate as this PostgreSQL user. You can migrate existing Azure Database for PostgreSQL users to Azure AD by creating an Azure AD group with a name that matches the PostgreSQL user, and then granting role azure_ad_user to the existing PostgreSQL user:
-
-```sql
-GRANT azure_ad_user TO "DBReadUser";
-```
-
-This assumes you have created a group "DBReadUser" in your Azure AD. Users belonging to that group will now be able to sign in to the database as this user.
-
-## Next steps
-
-* Review the overall concepts for [Azure Active Directory authentication with Azure Database for PostgreSQL - Single Server](concepts-aad-authentication.md)
-
-<!--Image references-->
-
-[1]: ./media/concepts-aad-authentication/authentication-flow.png
-[2]: ./media/concepts-aad-authentication/set-aad-admin.png
postgresql Howto Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-connect-with-managed-identity.md
- Title: Connect with Managed Identity - Azure Database for PostgreSQL - Single Server
-description: Learn about how to connect and authenticate using Managed Identity for authentication with Azure Database for PostgreSQL
-Previously updated: 05/19/2020
-# Connect with Managed Identity to Azure Database for PostgreSQL
-
-You can use both system-assigned and user-assigned managed identities to authenticate to Azure Database for PostgreSQL. This article shows you how to use a system-assigned managed identity for an Azure Virtual Machine (VM) to access an Azure Database for PostgreSQL server. Managed Identities are automatically managed by Azure and enable you to authenticate to services that support Azure AD authentication, without needing to insert credentials into your code.
-
-You learn how to:
-- Grant your VM access to an Azure Database for PostgreSQL server
-- Create a user in the database that represents the VM's system-assigned identity
-- Get an access token using the VM identity and use it to query an Azure Database for PostgreSQL server
-- Implement the token retrieval in a C# example application
-
-## Prerequisites
-
-- If you're not familiar with the managed identities for Azure resources feature, see this [overview](../../articles/active-directory/managed-identities-azure-resources/overview.md). If you don't have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue.
-- To do the required resource creation and role management, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../articles/role-based-access-control/role-assignments-portal.md).
-- You need an Azure VM (for example, running Ubuntu Linux) that you'd like to use to access your database using Managed Identity.
-- You need an Azure Database for PostgreSQL database server that has [Azure AD authentication](howto-configure-sign-in-aad-authentication.md) configured.
-- To follow the C# example, first complete the guide [Connect with C#](connect-csharp.md).
-
-## Creating a system-assigned managed identity for your VM
-
-Use the [az vm identity assign](/cli/azure/vm/identity/) command to enable the system-assigned identity on an existing VM:
-
-```azurecli-interactive
-az vm identity assign -g myResourceGroup -n myVm
-```
-
-Retrieve the application ID for the system-assigned managed identity, which you'll need in the next few steps:
-
-```azurecli
-# Get the client ID (application ID) of the system-assigned managed identity
-az ad sp list --display-name vm-name --query [*].appId --out tsv
-```
-
-## Creating a PostgreSQL user for your Managed Identity
-
-Now, connect as the Azure AD administrator user to your PostgreSQL database, and run the following SQL statements, replacing `CLIENT_ID` with the client ID you retrieved for your system-assigned managed identity:
-
-```sql
-SET aad_validate_oids_in_tenant = off;
-CREATE ROLE myuser WITH LOGIN PASSWORD 'CLIENT_ID' IN ROLE azure_ad_user;
-```
-
-The managed identity now has access when authenticating with the username `myuser` (replace with a name of your choice).
-
-## Retrieving the access token from Azure Instance Metadata service
-
-Your application can now retrieve an access token from the Azure Instance Metadata service and use it for authenticating with the database.
-
-This token retrieval is done by making an HTTP request to `http://169.254.169.254/metadata/identity/oauth2/token` and passing the following parameters:
-
-* `api-version` = `2018-02-01`
-* `resource` = `https://ossrdbms-aad.database.windows.net`
-* `client_id` = `CLIENT_ID` (that you retrieved earlier)
-
-You'll get back a JSON result that contains an `access_token` field. This long text value is the managed identity access token, which you should use as the password when connecting to the database.
-
-For testing purposes, you can run the following commands in your shell. Note you need `curl`, `jq`, and the `psql` client installed.
-
-```bash
-# Retrieve the access token
-export PGPASSWORD=`curl -s 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=CLIENT_ID' -H Metadata:true | jq -r .access_token`
-
-# Connect to the database
-psql -h SERVER --user USER@SERVER DBNAME
-```
-
-You're now connected to the database you configured earlier.
-
-## Connecting using Managed Identity in C#
-
-This section shows how to get an access token using the VM's system-assigned managed identity and use it to call Azure Database for PostgreSQL. Azure Database for PostgreSQL natively supports Azure AD authentication, so it can directly accept access tokens obtained using managed identities for Azure resources. When creating a connection to PostgreSQL, you pass the access token in the password field.
-
-Here's a .NET code example of opening a connection to PostgreSQL using an access token. This code must run on the VM to use the system-assigned managed identity to obtain an access token from Azure AD. Replace the values of HOST, USER, and DATABASE.
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using Npgsql;
-using Azure.Identity;
-
-namespace Driver
-{
- class Script
- {
- // Obtain connection string information from the portal for use in the following variables
- private static string Host = "HOST";
- private static string User = "USER";
- private static string Database = "DATABASE";
-
- static async Task Main(string[] args)
- {
- //
- // Get an access token for PostgreSQL.
- //
- Console.Out.WriteLine("Getting access token from Azure AD...");
-
- // Azure AD resource ID for Azure Database for PostgreSQL is https://ossrdbms-aad.database.windows.net/
- string accessToken = null;
-
- try
- {
- // Call managed identities for Azure resources endpoint.
- var sqlServerTokenProvider = new DefaultAzureCredential();
- accessToken = (await sqlServerTokenProvider.GetTokenAsync(
- new Azure.Core.TokenRequestContext(scopes: new string[] { "https://ossrdbms-aad.database.windows.net/.default" }) { })).Token;
-
- }
- catch (Exception e)
- {
- Console.Out.WriteLine("{0} \n\n{1}", e.Message, e.InnerException != null ? e.InnerException.Message : "Acquire token failed");
- System.Environment.Exit(1);
- }
-
- //
- // Open a connection to the PostgreSQL server using the access token.
- //
- string connString =
- String.Format(
- "Server={0}; User Id={1}; Database={2}; Port={3}; Password={4}; SSLMode=Prefer",
- Host,
- User,
- Database,
- 5432,
- accessToken);
-
- using (var conn = new NpgsqlConnection(connString))
- {
- Console.Out.WriteLine("Opening connection using access token...");
- conn.Open();
-
- using (var command = new NpgsqlCommand("SELECT version()", conn))
- {
-
- var reader = command.ExecuteReader();
- while (reader.Read())
- {
- Console.WriteLine("\nConnected!\n\nPostgres version: {0}", reader.GetString(0));
- }
- }
- }
- }
- }
-}
-```
-
-When run, this application will give an output like this:
-
-```
-Getting access token from Azure AD...
-Opening connection using access token...
-
-Connected!
-
-Postgres version: PostgreSQL 11.11, compiled by Visual C++ build 1800, 64-bit
-```
-
-## Next steps
-
-* Review the overall concepts for [Azure Active Directory authentication with Azure Database for PostgreSQL](concepts-aad-authentication.md)
postgresql Howto Connection String Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-connection-string-powershell.md
- Title: Generate a connection string with PowerShell - Azure Database for PostgreSQL
-description: This article provides an Azure PowerShell example to generate a connection string for connecting to Azure Database for PostgreSQL.
- Previously updated : 8/6/2020
-# How to generate an Azure Database for PostgreSQL connection string with PowerShell
-
-This article demonstrates how to generate a connection string for an Azure Database for PostgreSQL
-server. You can use a connection string to connect to an Azure Database for PostgreSQL from many
-different applications.
-
-## Requirements
-
-This article uses the resources created in the following guide as a starting point:
-
-* [Quickstart: Create an Azure Database for PostgreSQL server using PowerShell](quickstart-create-postgresql-server-database-using-azure-powershell.md)
-
-## Get the connection string
-
-The `Get-AzPostgreSqlConnectionString` cmdlet is used to generate a connection string for connecting
-applications to Azure Database for PostgreSQL. The following example returns the connection string
-for a PHP client from **mydemoserver**.
-
-```azurepowershell-interactive
-Get-AzPostgreSqlConnectionString -Client PHP -Name mydemoserver -ResourceGroupName myresourcegroup
-```
-
-```Output
-host=mydemoserver.postgres.database.azure.com port=5432 dbname={your_database} user=myadmin@mydemoserver password={your_password} sslmode=require
-```
-
-Valid values for the `Client` parameter include:
-
-* ADO&#46;NET
-* JDBC
-* Node.js
-* PHP
-* Python
-* Ruby
-* WebApp
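-
-If you prefer the Azure CLI over PowerShell, `az postgres server show-connection-string` produces similar strings; a sketch (the exact parameter set may vary by CLI version):
-
-```azurecli-interactive
-az postgres server show-connection-string --server-name mydemoserver --database-name {your_database} --admin-user myadmin --admin-password {your_password}
-```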
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Customize Azure Database for PostgreSQL server parameters using PowerShell](howto-configure-server-parameters-using-powershell.md)
postgresql Howto Create Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-create-manage-server-portal.md
- Title: Manage Azure Database for PostgreSQL - Azure portal
-description: Learn how to manage an Azure Database for PostgreSQL server from the Azure portal.
- Previously updated : 11/20/2019
-# Manage an Azure Database for PostgreSQL server using the Azure portal
-
-This article shows you how to manage your Azure Database for PostgreSQL servers. Management tasks include compute and storage scaling, admin password reset, and viewing server details.
-
-## Sign in
-
-Sign in to the [Azure portal](https://portal.azure.com).
-
-## Create a server
-
-Visit the [quickstart](quickstart-create-server-database-portal.md) to learn how to create and get started with an Azure Database for PostgreSQL server.
-
-## Scale compute and storage
-
-After server creation, you can scale between the General Purpose and Memory Optimized tiers as your needs change. You can also scale compute and memory by increasing or decreasing vCores. Storage can be scaled up (however, you cannot scale storage down).
-
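-The same scaling operations can also be scripted. A sketch with the Azure CLI, assuming a server named `mydemoserver` in resource group `myresourcegroup` (choose a `--sku-name` and storage size that match your target tier):
-
-```azurecli-interactive
-# Scale to General Purpose, Gen 5, 4 vCores.
-az postgres server update --resource-group myresourcegroup --name mydemoserver --sku-name GP_Gen5_4
-
-# Grow storage to 100 GiB (102400 MB); remember that storage cannot be scaled back down.
-az postgres server update --resource-group myresourcegroup --name mydemoserver --storage-size 102400
-```
-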
-### Scale between General Purpose and Memory Optimized tiers
-
-You can scale from General Purpose to Memory Optimized and vice-versa. Changing to and from the Basic tier after server creation is not supported.
-
-1. Select your server in the Azure portal. Select **Pricing tier**, located in the **Settings** section.
-
-2. Select **General Purpose** or **Memory Optimized**, depending on what you are scaling to.
-
- :::image type="content" source="./media/howto-create-manage-server-portal/change-pricing-tier.png" alt-text="Screenshot of Azure portal to choose Basic, General Purpose, or Memory Optimized tier in Azure Database for PostgreSQL":::
-
- > [!NOTE]
- > Changing tiers causes a server restart.
-
-3. Select **OK** to save changes.
-
-### Scale vCores up or down
-
-1. Select your server in the Azure portal. Select **Pricing tier**, located in the **Settings** section.
-
-2. Change the **vCore** setting by moving the slider to your desired value.
-
- :::image type="content" source="./media/howto-create-manage-server-portal/scaling-compute.png" alt-text="Screenshot of Azure portal to choose vCore option in Azure Database for PostgreSQL":::
-
- > [!NOTE]
- > Scaling vCores causes a server restart.
-
-3. Select **OK** to save changes.
-
-### Scale storage up
-
-1. Select your server in the Azure portal. Select **Pricing tier**, located in the **Settings** section.
-
-2. Change the **Storage** setting by moving the slider up to your desired value.
-
- :::image type="content" source="./media/howto-create-manage-server-portal/scaling-storage.png" alt-text="Screenshot of Azure portal to choose Storage scale in Azure Database for PostgreSQL":::
-
- > [!NOTE]
- > Storage cannot be scaled down.
-
-3. Select **OK** to save changes.
-
-## Update admin password
-
-You can change the administrator role's password using the Azure portal.
-
-1. Select your server in the Azure portal. In the **Overview** window select **Reset password**.
-
- :::image type="content" source="./media/howto-create-manage-server-portal/overview-reset-password.png" alt-text="Screenshot of Azure portal to reset the password in Azure Database for PostgreSQL":::
-
-2. Enter a new password and confirm the password. The textbox will prompt you about password complexity requirements.
-
- :::image type="content" source="./media/howto-create-manage-server-portal/reset-password.png" alt-text="Screenshot of Azure portal to reset your password and save in Azure Database for PostgreSQL":::
-
-3. Select **OK** to save the new password.
-
-## Delete a server
-
-You can delete your server if you no longer need it.
-
-1. Select your server in the Azure portal. In the **Overview** window select **Delete**.
-
- :::image type="content" source="./media/howto-create-manage-server-portal/overview-delete.png" alt-text="Screenshot of Azure portal to Delete the server in Azure Database for PostgreSQL":::
-
-2. Type the name of the server into the input box to confirm that this is the server you want to delete.
-
- :::image type="content" source="./media/howto-create-manage-server-portal/confirm-delete.png" alt-text="Screenshot of Azure portal to confirm the server delete in Azure Database for PostgreSQL":::
-
- > [!NOTE]
- > Deleting a server is irreversible.
-
-3. Select **Delete**.
-
-## Next steps
-
-- Learn about [backups and server restore](howto-restore-server-portal.md)
-- Learn about [tuning and monitoring options in Azure Database for PostgreSQL](concepts-monitoring.md)
postgresql Howto Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-create-users.md
- Title: Create users - Azure Database for PostgreSQL - Single Server
-description: This article describes how you can create new user accounts to interact with an Azure Database for PostgreSQL - Single Server.
- Previously updated : 09/22/2019
-# Create users in Azure Database for PostgreSQL - Single Server
-
-This article describes how you can create users within an Azure Database for PostgreSQL server.
-
-If you would like to learn about how to create and manage Azure subscription users and their privileges, you can visit the [Azure role-based access control (Azure RBAC) article](../role-based-access-control/built-in-roles.md) or review [how to customize roles](../role-based-access-control/custom-roles.md).
-
-## The server admin account
-
-When you first created your Azure Database for PostgreSQL, you provided a server admin user name and password. For more information, you can follow the [Quickstart](quickstart-create-server-database-portal.md) to see the step-by-step approach. Since the server admin user name is a custom name, you can locate the chosen server admin user name from the Azure portal.
-
-The Azure Database for PostgreSQL server is created with the 3 default roles defined. You can see these roles by running the command: `SELECT rolname FROM pg_roles;`
-- azure_pg_admin
-- azure_superuser
-- your server admin user
-
-Your server admin user is a member of the azure_pg_admin role. However, the server admin account is not part of the azure_superuser role. Since this service is a managed PaaS service, only Microsoft is part of the superuser role.
-
-The PostgreSQL engine uses privileges to control access to database objects, as discussed in the [PostgreSQL product documentation](https://www.postgresql.org/docs/current/static/sql-createrole.html). In Azure Database for PostgreSQL, the server admin user is granted these privileges:
- LOGIN, NOSUPERUSER, INHERIT, CREATEDB, CREATEROLE, REPLICATION
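-
-You can confirm these role attributes yourself with a quick `psql` query (the server and admin names here are hypothetical placeholders):
-
-```bash
-# Inspect the attributes of the role you are connected as.
-psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin@mydemoserver sslmode=require" \
-  -c "SELECT rolname, rolcanlogin, rolcreatedb, rolcreaterole, rolreplication FROM pg_roles WHERE rolname = current_user;"
-```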
-
-The server admin user account can be used to create additional users and add those users to the azure_pg_admin role. Also, the server admin account can be used to create less privileged users and roles that have access to individual databases and schemas.
-
-## How to create additional admin users in Azure Database for PostgreSQL
-
-1. Get the connection information and admin user name.
- To connect to your database server, you need the full server name and admin sign-in credentials. You can easily find the server name and sign-in information from the server **Overview** page or the **Properties** page in the Azure portal.
-
-2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as pgAdmin or psql.
-   If you are unsure of how to connect, see [the quickstart](./quickstart-create-server-database-portal.md).
-
-3. Edit and run the following SQL code. Replace the placeholder value `<new_user>` with your new user name, and replace the placeholder password with your own strong password.
-
- ```sql
- CREATE ROLE <new_user> WITH LOGIN NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION PASSWORD '<StrongPassword!>';
-
- GRANT azure_pg_admin TO <new_user>;
- ```
-
-## How to create database users in Azure Database for PostgreSQL
-
-1. Get the connection information and admin user name.
- To connect to your database server, you need the full server name and admin sign-in credentials. You can easily find the server name and sign-in information from the server **Overview** page or the **Properties** page in the Azure portal.
-
-2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as pgAdmin or psql.
-
-3. Edit and run the following SQL code. Replace the placeholder value `<db_user>` with your intended new user name, and placeholder value `<newdb>` with your own database name. Replace the placeholder password with your own strong password.
-
-   As an example, this SQL code creates a new database, then creates a new user in the PostgreSQL service and grants that user connect privileges to the new database.
-
- ```sql
- CREATE DATABASE <newdb>;
-
- CREATE ROLE <db_user> WITH LOGIN NOSUPERUSER INHERIT CREATEDB NOCREATEROLE NOREPLICATION PASSWORD '<StrongPassword!>';
-
- GRANT CONNECT ON DATABASE <newdb> TO <db_user>;
- ```
-
-4. Using an admin account, you may need to grant additional privileges to secure the objects in the database. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/ddl-priv.html) for further details on database roles and privileges. For example:
-
- ```sql
- GRANT ALL PRIVILEGES ON DATABASE <newdb> TO <db_user>;
- ```
-
-   If a user (role) creates a table, the table belongs to that user. If another user needs access to the table, you must grant privileges to the other user at the table level.
-
- For example:
-
- ```sql
- GRANT SELECT ON ALL TABLES IN SCHEMA <schema_name> TO <db_user>;
- ```
-
-5. Log in to your server, specifying the designated database, using the new user name and password. This example shows the psql command line. With this command, you are prompted for the password for the user name. Substitute your own server name, database name, and user name.
-
- ```shell
- psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=db_user@mydemoserver --dbname=newdb
- ```
-
-## Next steps
-
-Open the firewall for the IP addresses of the new users' machines to enable them to connect:
-[Create and manage Azure Database for PostgreSQL firewall rules by using the Azure portal](howto-manage-firewall-using-portal.md) or [Azure CLI](howto-manage-firewall-using-cli.md).
-
-For more information regarding user account management, see PostgreSQL product documentation for [Database Roles and Privileges](https://www.postgresql.org/docs/current/static/user-manag.html), [GRANT Syntax](https://www.postgresql.org/docs/current/static/sql-grant.html), and [Privileges](https://www.postgresql.org/docs/current/static/ddl-priv.html).
postgresql Howto Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-data-encryption-cli.md
- Title: Data encryption - Azure CLI - for Azure Database for PostgreSQL - Single server
-description: Learn how to set up and manage data encryption for your Azure Database for PostgreSQL Single server by using the Azure CLI.
- Previously updated : 03/30/2020
-# Data encryption for Azure Database for PostgreSQL Single server by using the Azure CLI
-
-Learn how to use the Azure CLI to set up and manage data encryption for your Azure Database for PostgreSQL Single server.
-
-## Prerequisites for Azure CLI
-
-* You must have an Azure subscription and be an administrator on that subscription.
-* Create a key vault and a key to use for a customer-managed key. Also enable purge protection and soft delete on the key vault.
-
- ```azurecli-interactive
- az keyvault create -g <resource_group> -n <vault_name> --enable-soft-delete true --enable-purge-protection true
- ```
-
-* In the created Azure Key Vault, create the key that will be used for the data encryption of the Azure Database for PostgreSQL Single server.
-
- ```azurecli-interactive
- az keyvault key create --name <key_name> -p software --vault-name <vault_name>
- ```
-
-* In order to use an existing key vault, it must have the following properties to use as a customer-managed key:
- * [Soft delete](../key-vault/general/soft-delete-overview.md)
-
- ```azurecli-interactive
-    az resource update --id $(az keyvault show --name <key_vault_name> -o tsv | awk '{print $1}') --set properties.enableSoftDelete=true
- ```
-
- * [Purge protected](../key-vault/general/soft-delete-overview.md#purge-protection)
-
- ```azurecli-interactive
- az keyvault update --name <key_vault_name> --resource-group <resource_group_name> --enable-purge-protection true
- ```
-
-* The key must have the following attributes to use as a customer-managed key:
- * No expiration date
- * Not disabled
-  * Perform **get**, **wrap**, and **unwrap** operations (a quick CLI check is sketched below)
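-
-To verify those attributes on an existing key, a quick check like the following may help (placeholder names assumed):
-
-```azurecli-interactive
-# Expect "enabled": true and a null "expires" attribute on the key.
-az keyvault key show --vault-name <vault_name> --name <key_name> --query attributes
-```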
-
-## Set the right permissions for key operations
-
-1. There are two ways of getting the managed identity for your Azure Database for PostgreSQL Single server.
-
-   ### Create a new Azure Database for PostgreSQL server with a managed identity.
-
- ```azurecli-interactive
- az postgres server create --name <server_name> -g <resource_group> --location <location> --storage-size <size> -u <user> -p <pwd> --backup-retention <7> --sku-name <sku name> --geo-redundant-backup <Enabled/Disabled> --assign-identity
- ```
-
-   ### Update an existing Azure Database for PostgreSQL server to get a managed identity.
-
- ```azurecli-interactive
- az postgres server update --resource-group <resource_group> --name <server_name> --assign-identity
- ```
-
-2. Set the **Key permissions** (**Get**, **Wrap**, **Unwrap**) for the **Principal**, which is the name of the PostgreSQL Single server.
-
- ```azurecli-interactive
-    az keyvault set-policy --name <vault_name> -g <resource_group> --key-permissions get unwrapKey wrapKey --object-id <principal id of the server>
- ```
-
-## Set data encryption for Azure Database for PostgreSQL Single server
-
-1. Enable Data encryption for the Azure Database for PostgreSQL Single server using the key created in the Azure Key Vault.
-
- ```azurecli-interactive
- az postgres server key create --name <server_name> -g <resource_group> --kid <key_url>
- ```
-
-   Key url: `https://YourVaultName.vault.azure.net/keys/YourKeyName/01234567890123456789012345678901`
-
-## Using Data encryption for restore or replica servers
-
-After Azure Database for PostgreSQL Single server is encrypted with a customer's managed key stored in Key Vault, any newly created copy of the server is also encrypted. You can make this new copy either through a local or geo-restore operation, or through a replica (local/cross-region) operation. So for an encrypted PostgreSQL Single server, you can use the following steps to create an encrypted restored server.
-
-### Creating a restored/replica server
-
-* [Create a restore server](howto-restore-server-cli.md)
-* [Create a read replica server](howto-read-replicas-cli.md)
-
-### Once the server is restored, revalidate data encryption on the restored server
-
-* Assign identity for the replica server
-```azurecli-interactive
-az postgres server update --name <server name> -g <resource_group> --assign-identity
-```
-
-* Get the existing key that has to be used for the restored/replica server
-
-```azurecli-interactive
-az postgres server key list --name '<server_name>' -g '<resource_group_name>'
-```
-
-* Set the policy for the new identity for the restored/replica server
-
-```azurecli-interactive
-az keyvault set-policy --name <keyvault> -g <resource_group> --key-permissions get unwrapKey wrapKey --object-id <principal id of the server returned in step 1>
-```
-
-* Re-validate the restored/replica server with the encryption key
-
-```azurecli-interactive
-az postgres server key create --name <server name> -g <resource_group> --kid <key url>
-```
-
-## Additional capability for the key being used for the Azure Database for PostgreSQL Single server
-
-### Get the Key used
-
-```azurecli-interactive
-az postgres server key show --name <server name> -g <resource_group> --kid <key url>
-```
-
-Key url: `https://YourVaultName.vault.azure.net/keys/YourKeyName/01234567890123456789012345678901`
-
-### List the Key used
-
-```azurecli-interactive
-az postgres server key list --name <server name> -g <resource_group>
-```
-
-### Drop the key being used
-
-```azurecli-interactive
-az postgres server key delete -g <resource_group> --kid <key url>
-```
-
-## Using an Azure Resource Manager template to enable data encryption
-
-Apart from the Azure portal, you can also enable data encryption on your Azure Database for PostgreSQL single server using Azure Resource Manager templates for new and existing servers.
-
-### For a new server
-
-Use one of the pre-created Azure Resource Manager templates to provision the server with data encryption enabled:
-[Example with Data encryption](https://github.com/Azure/azure-postgresql/tree/master/arm-templates/ExampleWithDataEncryption)
-
-This Azure Resource Manager template creates an Azure Database for PostgreSQL Single server and uses the **KeyVault** and **Key** passed as parameters to enable data encryption on the server.
-
-### For an existing server
-Additionally, you can use Azure Resource Manager templates to enable data encryption on your existing Azure Database for PostgreSQL Single servers.
-
-* Pass the Resource ID of the Azure Key Vault key that you copied earlier under the `Uri` property in the properties object.
-
-* Use *2020-01-01-preview* as the API version.
-
-```json
-{
- "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "location": {
- "type": "string"
- },
- "serverName": {
- "type": "string"
- },
- "keyVaultName": {
- "type": "string",
- "metadata": {
- "description": "Key vault name where the key to use is stored"
- }
- },
- "keyVaultResourceGroupName": {
- "type": "string",
- "metadata": {
- "description": "Key vault resource group name where it is stored"
- }
- },
- "keyName": {
- "type": "string",
- "metadata": {
- "description": "Key name in the key vault to use as encryption protector"
- }
- },
- "keyVersion": {
- "type": "string",
- "metadata": {
- "description": "Version of the key in the key vault to use as encryption protector"
- }
- }
- },
- "variables": {
- "serverKeyName": "[concat(parameters('keyVaultName'), '_', parameters('keyName'), '_', parameters('keyVersion'))]"
- },
- "resources": [
- {
- "type": "Microsoft.DBforPostgreSQL/servers",
- "apiVersion": "2017-12-01",
- "kind": "",
- "location": "[parameters('location')]",
- "identity": {
- "type": "SystemAssigned"
- },
- "name": "[parameters('serverName')]",
- "properties": {
- }
- },
- {
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-05-01",
- "name": "addAccessPolicy",
- "resourceGroup": "[parameters('keyVaultResourceGroupName')]",
- "dependsOn": [
- "[resourceId('Microsoft.DBforPostgreSQL/servers', parameters('serverName'))]"
- ],
- "properties": {
- "mode": "Incremental",
- "template": {
- "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "resources": [
- {
- "type": "Microsoft.KeyVault/vaults/accessPolicies",
- "name": "[concat(parameters('keyVaultName'), '/add')]",
- "apiVersion": "2018-02-14-preview",
- "properties": {
- "accessPolicies": [
- {
- "tenantId": "[subscription().tenantId]",
- "objectId": "[reference(resourceId('Microsoft.DBforPostgreSQL/servers/', parameters('serverName')), '2017-12-01', 'Full').identity.principalId]",
- "permissions": {
- "keys": [
- "get",
- "wrapKey",
- "unwrapKey"
- ]
- }
- }
- ]
- }
- }
- ]
- }
- }
- },
- {
- "name": "[concat(parameters('serverName'), '/', variables('serverKeyName'))]",
- "type": "Microsoft.DBforPostgreSQL/servers/keys",
- "apiVersion": "2020-01-01-preview",
- "dependsOn": [
- "addAccessPolicy",
- "[resourceId('Microsoft.DBforPostgreSQL/servers', parameters('serverName'))]"
- ],
- "properties": {
- "serverKeyType": "AzureKeyVault",
- "uri": "[concat(reference(resourceId(parameters('keyVaultResourceGroupName'), 'Microsoft.KeyVault/vaults/', parameters('keyVaultName')), '2018-02-14-preview', 'Full').properties.vaultUri, 'keys/', parameters('keyName'), '/', parameters('keyVersion'))]"
- }
- }
- ]
-}
-```
-
-## Next steps
-
- To learn more about data encryption, see [Azure Database for PostgreSQL Single server data encryption with customer-managed key](concepts-data-encryption-postgresql.md).
postgresql Howto Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-data-encryption-portal.md
- Title: Data encryption - Azure portal - for Azure Database for PostgreSQL - Single server
-description: Learn how to set up and manage data encryption for your Azure Database for PostgreSQL Single server by using the Azure portal.
- Previously updated : 01/13/2020
-
-# Data encryption for Azure Database for PostgreSQL Single server by using the Azure portal
-
-Learn how to use the Azure portal to set up and manage data encryption for your Azure Database for PostgreSQL Single server.
-
-## Prerequisites for Azure CLI
-
-* You must have an Azure subscription and be an administrator on that subscription.
-* In Azure Key Vault, create a key vault and key to use for a customer-managed key.
-* The key vault must have the following properties to use as a customer-managed key:
- * [Soft delete](../key-vault/general/soft-delete-overview.md)
-
- ```azurecli-interactive
-    az resource update --id $(az keyvault show --name <key_vault_name> -o tsv | awk '{print $1}') --set properties.enableSoftDelete=true
- ```
-
- * [Purge protected](../key-vault/general/soft-delete-overview.md#purge-protection)
-
- ```azurecli-interactive
- az keyvault update --name <key_vault_name> --resource-group <resource_group_name> --enable-purge-protection true
- ```
-
-* The key must have the following attributes to use as a customer-managed key:
- * No expiration date
- * Not disabled
- * Able to perform get, wrap key, and unwrap key operations
-
-## Set the right permissions for key operations
-
-1. In Key Vault, select **Access policies** > **Add Access Policy**.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-access-policy-overview.png" alt-text="Screenshot of Key Vault, with Access policies and Add Access Policy highlighted":::
-
-2. Select **Key permissions**, and select **Get**, **Wrap**, **Unwrap**, and the **Principal**, which is the name of the PostgreSQL server. If your server principal can't be found in the list of existing principals, you need to register it. You're prompted to register your server principal when you attempt to set up data encryption for the first time, and it fails.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/access-policy-wrap-unwrap.png" alt-text="Access policy overview":::
-
-3. Select **Save**.
-
-## Set data encryption for Azure Database for PostgreSQL Single server
-
-1. In Azure Database for PostgreSQL, select **Data encryption** to set up the customer-managed key.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/data-encryption-overview.png" alt-text="Screenshot of Azure Database for PostgreSQL, with Data encryption highlighted":::
-
-2. You can either select a key vault and key pair, or enter a key identifier.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/setting-data-encryption.png" alt-text="Screenshot of Azure Database for PostgreSQL, with data encryption options highlighted":::
-
-3. Select **Save**.
-
-4. To ensure all files (including temp files) are fully encrypted, restart the server.
-
-## Using Data encryption for restore or replica servers
-
-After Azure Database for PostgreSQL Single server is encrypted with a customer's managed key stored in Key Vault, any newly created copy of the server is also encrypted. You can make this new copy either through a local or geo-restore operation, or through a replica (local/cross-region) operation. So for an encrypted PostgreSQL server, you can use the following steps to create an encrypted restored server.
-
-1. On your server, select **Overview** > **Restore**.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-restore.png" alt-text="Screenshot of Azure Database for PostgreSQL, with Overview and Restore highlighted":::
-
- Or for a replication-enabled server, under the **Settings** heading, select **Replication**.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/postgresql-replica.png" alt-text="Screenshot of Azure Database for PostgreSQL, with Replication highlighted":::
-
-2. After the restore operation is complete, the new server created is encrypted with the primary server's key. However, the features and options on the server are disabled, and the server is inaccessible. This prevents any data manipulation, because the new server's identity hasn't yet been given permission to access the key vault.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-restore-data-encryption.png" alt-text="Screenshot of Azure Database for PostgreSQL, with Inaccessible status highlighted":::
-
-3. To make the server accessible, revalidate the key on the restored server. Select **Data Encryption** > **Revalidate key**.
-
- > [!NOTE]
-   > The first attempt to revalidate will fail, because the new server's service principal needs to be given access to the key vault. To generate the service principal, select **Revalidate key**, which will show an error but generate the service principal. Thereafter, refer to [these steps](#set-the-right-permissions-for-key-operations) earlier in this article.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-revalidate-data-encryption.png" alt-text="Screenshot of Azure Database for PostgreSQL, with revalidation step highlighted":::
-
-   You will have to grant the new server access to the key vault. For more information, see [Enable Azure RBAC permissions on Key Vault](../key-vault/general/rbac-guide.md?tabs=azure-cli#enable-azure-rbac-permissions-on-key-vault).
-
-4. After registering the service principal, revalidate the key again, and the server resumes its normal functionality.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/restore-successful.png" alt-text="Screenshot of Azure Database for PostgreSQL, showing restored functionality":::
-
-## Next steps
-
- To learn more about data encryption, see [Azure Database for PostgreSQL Single server data encryption with customer-managed key](concepts-data-encryption-postgresql.md).
postgresql Howto Data Encryption Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-data-encryption-troubleshoot.md
- Title: Troubleshoot data encryption - Azure Database for PostgreSQL - Single Server
-description: Learn how to troubleshoot the data encryption on your Azure Database for PostgreSQL - Single Server
- Previously updated : 02/13/2020
-# Troubleshoot data encryption in Azure Database for PostgreSQL - Single Server
-
-This article helps you identify and resolve common issues that can occur in the single-server deployment of Azure Database for PostgreSQL when configured with data encryption using a customer-managed key.
-
-## Introduction
-
-When you configure data encryption to use a customer-managed key in Azure Key Vault, the server requires continuous access to the key. If the server loses access to the customer-managed key in Azure Key Vault, it will deny all connections, return the appropriate error message, and change its state to ***Inaccessible*** in the Azure portal.
-
-If you no longer need an inaccessible Azure Database for PostgreSQL server, you can delete it to stop incurring costs. No other actions on the server are permitted until access to the key vault has been restored and the server is available. It's also not possible to change the data encryption option from `Yes` (customer-managed) to `No` (service-managed) on an inaccessible server when it's encrypted with a customer-managed key. You'll have to revalidate the key manually before the server is accessible again. This action is necessary to protect the data from unauthorized access while permissions to the customer-managed key are revoked.
-
-## Common errors causing server to become inaccessible
-
-The following misconfigurations cause most issues with data encryption that use Azure Key Vault keys:
--- The key vault is unavailable or doesn't exist:
- - The key vault was accidentally deleted.
- - An intermittent network error causes the key vault to be unavailable.
--- You don't have permissions to access the key vault or the key doesn't exist:
- - The key expired or was accidentally deleted or disabled.
- - The managed identity of the Azure Database for PostgreSQL instance was accidentally deleted.
- - The managed identity of the Azure Database for PostgreSQL instance has insufficient key permissions. For example, the permissions don't include Get, Wrap, and Unwrap.
- - The managed identity permissions to the Azure Database for PostgreSQL instance were revoked or deleted.
-
-## Identify and resolve common errors
-
-### Errors on the key vault
-
-#### Disabled key vault
-
-- `AzureKeyVaultKeyDisabledMessage`
-- **Explanation**: The operation couldn't be completed on the server because the Azure Key Vault key is disabled.
-
-#### Missing key vault permissions
-
-- `AzureKeyVaultMissingPermissionsMessage`
-- **Explanation**: The server doesn't have the required Get, Wrap, and Unwrap permissions to Azure Key Vault. Grant any missing permissions to the service principal with ID.
-
-### Mitigation
-
-- Confirm that the customer-managed key is present in the key vault.
-- Identify the key vault, then go to the key vault in the Azure portal.
-- Ensure that the key URI identifies a key that is present.
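-
-As a starting point, a pair of CLI calls can confirm which key the server references and whether that key still resolves (a sketch; substitute your own names):
-
-```azurecli-interactive
-# Which key URI is the server configured with?
-az postgres server key list --name <server_name> -g <resource_group>
-
-# Does the key still exist, and is it enabled?
-az keyvault key show --id <key_url>
-```
-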
-## Next steps
-
-[Use the Azure portal to set up data encryption with a customer-managed key on Azure Database for PostgreSQL](howto-data-encryption-portal.md)
postgresql Howto Deny Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-deny-public-network-access.md
- Title: Deny Public Network Access - Azure portal - Azure Database for PostgreSQL - Single server
-description: Learn how to configure Deny Public Network Access using Azure portal for your Azure Database for PostgreSQL Single server
- Previously updated : 03/10/2020
-# Deny Public Network Access in Azure Database for PostgreSQL Single server using Azure portal
-
-This article describes how you can configure an Azure Database for PostgreSQL Single server to deny all public configurations and allow only connections through private endpoints to further enhance the network security.
-
-## Prerequisites
-
-To complete this how-to guide, you need:
-
-* An [Azure Database for PostgreSQL Single server](quickstart-create-server-database-portal.md) with General Purpose or Memory Optimized pricing tier.
-
-## Set Deny Public Network Access
-
-Follow these steps to set PostgreSQL Single server Deny Public Network Access:
-
-1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for PostgreSQL Single server.
-
-1. On the PostgreSQL Single server page, under **Settings**, click **Connection security** to open the connection security configuration page.
-
-1. In **Deny Public Network Access**, select **Yes** to enable deny public access for your PostgreSQL Single server.
-
- :::image type="content" source="./media/howto-deny-public-network-access/deny-public-network-access.PNG" alt-text="Azure Database for PostgreSQL Single server Deny network access":::
-
-1. Click **Save** to save the changes.
-
-1. A notification will confirm that the connection security setting was successfully enabled.
-
- :::image type="content" source="./media/howto-deny-public-network-access/deny-public-network-access-success.png" alt-text="Azure Database for PostgreSQL Single server Deny network access success":::
-
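-The same setting can be scripted. A sketch with the Azure CLI, assuming your CLI version exposes the `--public-network-access` parameter for single server:
-
-```azurecli-interactive
-az postgres server update --resource-group myresourcegroup --name mydemoserver --public-network-access Disabled
-```
-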
-## Next steps
-
-Learn about [how to create alerts on metrics](howto-alert-on-metric.md).
postgresql Howto Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-double-encryption.md
- Title: Infrastructure double encryption - Azure portal - Azure Database for PostgreSQL
-description: Learn how to set up and manage Infrastructure double encryption for your Azure Database for PostgreSQL.
- Previously updated : 03/14/2021
-# Infrastructure double encryption for Azure Database for PostgreSQL
-
-Learn how to set up and manage infrastructure double encryption for your Azure Database for PostgreSQL.
-
-## Prerequisites
-
-* You must have an Azure subscription and be an administrator on that subscription.
-
-## Create an Azure Database for PostgreSQL server with Infrastructure Double encryption - Portal
-
-Follow these steps to create an Azure Database for PostgreSQL server with Infrastructure double encryption from Azure portal:
-
-1. Select **Create a resource** (+) in the upper-left corner of the portal.
-
-2. Select **Databases** > **Azure Database for PostgreSQL**. You can also enter PostgreSQL in the search box to find the service. Select the **Single server** deployment option.
-
- :::image type="content" source="./media/quickstart-create-database-portal/1-create-database.png" alt-text="The Azure Database for PostgreSQL in menu":::
-
-3. Provide the basic information of the server. Select **Additional settings** and enable the **Infrastructure double encryption** checkbox to set the parameter.
-
- :::image type="content" source="./media/howto-infrastructure-double-encryption/infrastructure-encryption-selected.png" alt-text="Azure Database for PostgreSQL selections":::
-
-4. Select **Review + create** to provision the server.
-
- :::image type="content" source="./media/howto-infrastructure-double-encryption/infrastructure-encryption-summary.png" alt-text="Azure Database for PostgreSQL summary":::
-
-5. Once the server is created, you can validate the infrastructure double encryption by checking the status in the **Data encryption** server blade.
-
-   :::image type="content" source="./media/howto-infrastructure-double-encryption/infrastructure-encryption-validation.png" alt-text="Azure Database for PostgreSQL validation":::
-
-## Create an Azure Database for PostgreSQL server with Infrastructure Double encryption - CLI
-
-Follow these steps to create an Azure Database for PostgreSQL server with Infrastructure double encryption from CLI:
-
-This example creates a resource group named `myresourcegroup` in the `westus` location.
-
-```azurecli-interactive
-az group create --name myresourcegroup --location westus
-```
-The following example creates a PostgreSQL 11 server in West US named `mydemoserver` in your resource group `myresourcegroup` with server admin login `myadmin`. This is a **Gen 4** **General Purpose** server with **2 vCores**. This will also enable infrastructure double encryption for the server. Substitute the `<server_admin_password>` with your own value.
-
-```azurecli-interactive
-az postgres server create --resource-group myresourcegroup --name mydemoserver --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen4_2 --version 11 --infrastructure-encryption Enabled
-```
-
-## Next steps
-
-To learn more about data encryption, see [Azure Database for PostgreSQL data Infrastructure double encryption](concepts-Infrastructure-double-encryption.md).
-
postgresql Howto Manage Firewall Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-manage-firewall-using-portal.md
- Title: Manage firewall rules - Azure portal - Azure Database for PostgreSQL - Single Server
-description: Create and manage firewall rules for Azure Database for PostgreSQL - Single Server using the Azure portal
- Previously updated : 5/6/2019
-# Create and manage firewall rules for Azure Database for PostgreSQL - Single Server using the Azure portal
-Server-level firewall rules can be used to manage access to an Azure Database for PostgreSQL Server from a specified IP address or range of IP addresses.
-
-Virtual Network (VNet) rules can also be used to secure access to your server. Learn more about [creating and managing Virtual Network service endpoints and rules using the Azure portal](howto-manage-vnet-using-portal.md).
-
-## Prerequisites
-To step through this how-to guide, you need:
-- A server: [Create an Azure Database for PostgreSQL](quickstart-create-server-database-portal.md)
-
-## Create a server-level firewall rule in the Azure portal
-1. On the PostgreSQL server page, under Settings heading, click **Connection security** to open the Connection security page for the Azure Database for PostgreSQL.
-
- :::image type="content" source="./media/howto-manage-firewall-using-portal/1-connection-security.png" alt-text="Azure portal - click Connection Security":::
-
-2. Click **Add client IP** on the toolbar. This automatically creates a firewall rule with the public IP address of your computer, as perceived by the Azure system.
-
- :::image type="content" source="./media/howto-manage-firewall-using-portal/2-add-my-ip.png" alt-text="Azure portal - click Add My IP":::
-
-3. Verify your IP address before saving the configuration. In some situations, the IP address observed by Azure portal differs from the IP address used when accessing the internet and Azure servers. Therefore, you may need to change the Start IP and End IP to make the rule function as expected.
- Use a search engine or other online tool to check your own IP address. For example, search for "what is my IP."
-
- :::image type="content" source="./media/howto-manage-firewall-using-portal/3-what-is-my-ip.png" alt-text="Bing search for What is my IP":::
-
-4. Add additional address ranges. In the firewall rules for the Azure Database for PostgreSQL, you can specify a single IP address, or a range of addresses. If you want to limit the rule to a single IP address, type the same address in the field for Start IP and End IP. Opening the firewall enables administrators, users, and applications to access any database on the PostgreSQL server to which they have valid credentials.
-
- :::image type="content" source="./media/howto-manage-firewall-using-portal/4-specify-addresses.png" alt-text="Azure portal - firewall rules":::
-
-5. Click **Save** on the toolbar to save this server-level firewall rule. Wait for the confirmation that the update to the firewall rules was successful.
-
- :::image type="content" source="./media/howto-manage-firewall-using-portal/5-save-firewall-rule.png" alt-text="Azure portal - click Save":::
-
-## Connecting from Azure
-To allow applications from Azure to connect to your Azure Database for PostgreSQL server, Azure connections must be enabled. For example, you might host an Azure Web Apps application, run an application in an Azure VM, or connect from an Azure Data Factory data management gateway. The resources do not need to be in the same Virtual Network (VNet) or Resource Group for the firewall rule to enable those connections. When an application from Azure attempts to connect to your database server, the firewall verifies that Azure connections are allowed. There are a couple of methods to enable these types of connections. A firewall setting with starting and ending address equal to 0.0.0.0 indicates these connections are allowed. Alternatively, you can set the **Allow access to Azure services** option to **ON** in the portal from the **Connection security** pane and hit **Save**. If the connection attempt is not allowed, the request does not reach the Azure Database for PostgreSQL server.
-
-> [!IMPORTANT]
-> This option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
->
-
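-For the 0.0.0.0 convention described above, an equivalent CLI sketch (the server and rule names here are hypothetical):
-
-```azurecli-interactive
-# A start and end address of 0.0.0.0 allows connections from Azure services.
-az postgres server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver \
-  --name AllowAllAzureIps --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
-```
-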
-## Manage existing server-level firewall rules through the Azure portal
-Repeat the steps to manage the firewall rules.
-* To add the current computer, click the button to + **Add My IP**. Click **Save** to save the changes.
-* To add additional IP addresses, type in the Rule Name, Start IP Address, and End IP Address. Click **Save** to save the changes.
-* To modify an existing rule, click any of the fields in the rule and modify. Click **Save** to save the changes.
-* To delete an existing rule, click the ellipsis […] and click **Delete** to remove the rule. Click **Save** to save the changes.
-
-## Next steps
-
-- Similarly, you can script to [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](howto-manage-firewall-using-cli.md).
-- Further secure access to your server by [creating and managing Virtual Network service endpoints and rules using the Azure portal](howto-manage-vnet-using-portal.md).
-- For help in connecting to an Azure Database for PostgreSQL server, see [Connection libraries for Azure Database for PostgreSQL](concepts-connection-libraries.md).
postgresql Howto Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-manage-vnet-using-cli.md
- Title: Use virtual network rules - Azure CLI - Azure Database for PostgreSQL - Single Server
-description: This article describes how to create and manage VNet service endpoints and rules for Azure Database for PostgreSQL using Azure CLI command line.
- Previously updated : 01/26/2022
-# Create and manage VNet service endpoints for Azure Database for PostgreSQL - Single Server using Azure CLI
-
-Virtual Network (VNet) services endpoints and rules extend the private address space of a Virtual Network to your Azure Database for PostgreSQL server. Using convenient Azure CLI commands, you can create, update, delete, list, and show VNet service endpoints and rules to manage your server. For an overview of Azure Database for PostgreSQL VNet service endpoints, including limitations, see [Azure Database for PostgreSQL Server VNet service endpoints](concepts-data-access-and-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for PostgreSQL.
---
-> [!NOTE]
-> Support for VNet service endpoints is only for General Purpose and Memory Optimized servers. In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, please create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for PostgreSQL server.
-
-## Configure VNet service endpoints
-
-The [az network vnet](/cli/azure/network/vnet) commands are used to configure virtual networks. Service endpoints can be configured on virtual networks independently, by a user with write access to the virtual network.
-
-To secure Azure service resources to a VNet, the user must have permission to "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/" for the subnets being added. This permission is included in the built-in service administrator roles by default, and can be modified by creating custom roles.
-
-Learn more about [built-in roles](../role-based-access-control/built-in-roles.md) and assigning specific permissions to [custom roles](../role-based-access-control/custom-roles.md).
-
-VNets and Azure service resources can be in the same or different subscriptions. If the VNet and Azure service resources are in different subscriptions, the resources should be under the same Active Directory (AD) tenant. Ensure that both the subscriptions have the **Microsoft.Sql** resource provider registered. For more information, see [resource-manager-registration][resource-manager-portal].
-
-> [!IMPORTANT]
-> It is highly recommended to read this article about service endpoint configurations and considerations before running the sample script below, or configuring service endpoints. **Virtual Network service endpoint:** A [Virtual Network service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) is a subnet whose property values include one or more formal Azure service type names. VNet services endpoints use the service type name **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure SQL Database, Azure Database for PostgreSQL and MySQL services. It is important to note when applying the **Microsoft.Sql** service tag to a VNet service endpoint it configures service endpoint traffic for all Azure Database services, including Azure SQL Database, Azure Database for PostgreSQL and Azure Database for MySQL servers on the subnet.
-
-## Sample script
--
-### Run the script
--
-## Clean up deployment
--
- ```azurecli
- echo "Cleaning up resources by removing the resource group..."
- az group delete --name $resourceGroup -y
- ```
-
-<!-- Link references, to text, Within this same GitHub repo. -->
-[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
postgresql Howto Manage Vnet Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-manage-vnet-using-portal.md
- Title: Use virtual network rules - Azure portal - Azure Database for PostgreSQL - Single Server
-description: Create and manage VNet service endpoints and rules Azure Database for PostgreSQL - Single Server using the Azure portal
- Previously updated : 5/6/2019
-# Create and manage VNet service endpoints and VNet rules in Azure Database for PostgreSQL - Single Server by using the Azure portal
-Virtual Network (VNet) services endpoints and rules extend the private address space of a Virtual Network to your Azure Database for PostgreSQL server. For an overview of Azure Database for PostgreSQL VNet service endpoints, including limitations, see [Azure Database for PostgreSQL Server VNet service endpoints](concepts-data-access-and-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for PostgreSQL.
-
-> [!NOTE]
-> Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
-> In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, please create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for PostgreSQL server.
--
-## Create a VNet rule and enable service endpoints in the Azure portal
-
-1. On the PostgreSQL server page, under the Settings heading, click **Connection Security** to open the Connection Security pane for Azure Database for PostgreSQL.
-
-2. Ensure that the Allow access to Azure services control is set to **OFF**.
-
-> [!Important]
-> If you leave the control set to ON, your Azure PostgreSQL Database server accepts communication from any subnet. Leaving the control set to ON might be excessive access from a security point of view. The Microsoft Azure Virtual Network service endpoint feature, in coordination with the virtual network rule feature of Azure Database for PostgreSQL, together can reduce your security surface area.
-
-3. Next, click on **+ Adding existing virtual network**. If you do not have an existing VNet you can click **+ Create new virtual network** to create one. See [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md)
-
- :::image type="content" source="./media/howto-manage-vnet-using-portal/1-connection-security.png" alt-text="Azure portal - click Connection security":::
-
-4. Enter a VNet rule name, select the subscription, Virtual network and Subnet name and then click **Enable**. This automatically enables VNet service endpoints on the subnet using the **Microsoft.SQL** service tag.
-
- :::image type="content" source="./media/howto-manage-vnet-using-portal/2-configure-vnet.png" alt-text="Azure portal - configure VNet":::
-
- The account must have the necessary permissions to create a virtual network and service endpoint.
-
- Service endpoints can be configured on virtual networks independently, by a user with write access to the virtual network.
-
- To secure Azure service resources to a VNet, the user must have permission to "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/" for the subnets being added. This permission is included in the built-in service administrator roles, by default and can be modified by creating custom roles.
-
- Learn more about [built-in roles](../role-based-access-control/built-in-roles.md) and assigning specific permissions to [custom roles](../role-based-access-control/custom-roles.md).
-
-   VNets and Azure service resources can be in the same or different subscriptions. If the VNet and Azure service resources are in different subscriptions, the resources should be under the same Active Directory (AD) tenant. Ensure that both the subscriptions have the **Microsoft.Sql** resource provider registered. For more information, see [resource-manager-registration][resource-manager-portal].
-
- > [!IMPORTANT]
- > It is highly recommended to read this article about service endpoint configurations and considerations before configuring service endpoints. **Virtual Network service endpoint:** A [Virtual Network service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) is a subnet whose property values include one or more formal Azure service type names. VNet services endpoints use the service type name **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure SQL Database, Azure Database for PostgreSQL and MySQL services. It is important to note when applying the **Microsoft.Sql** service tag to a VNet service endpoint it configures service endpoint traffic for all Azure Database services, including Azure SQL Database, Azure Database for PostgreSQL and Azure Database for MySQL servers on the subnet.
- >
-
-5. Once enabled, click **OK** and you will see that VNet service endpoints are enabled along with a VNet rule.
-
- :::image type="content" source="./media/howto-manage-vnet-using-portal/3-vnet-service-endpoints-enabled-vnet-rule-created.png" alt-text="VNet service endpoints enabled and VNet rule created":::
-
-## Next steps
-
-- Similarly, you can script to [Enable VNet service endpoints and create a VNET rule for Azure Database for PostgreSQL using Azure CLI](howto-manage-vnet-using-cli.md).
-- For help in connecting to an Azure Database for PostgreSQL server, see [Connection libraries for Azure Database for PostgreSQL](./concepts-connection-libraries.md).
-
-<!-- Link references, to text, Within this same GitHub repo. -->
-[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
postgresql Howto Migrate From Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-migrate-from-oracle.md
- Title: "Oracle to Azure Database for PostgreSQL: Migration guide"
-description: This guide helps you to migrate your Oracle schema to Azure Database for PostgreSQL.
- Previously updated : 03/18/2021
-# Migrate Oracle to Azure Database for PostgreSQL
-
-This guide helps you to migrate your Oracle schema to Azure Database for PostgreSQL.
-
-For detailed and comprehensive migration guidance, see the [Migration guide resources](https://github.com/microsoft/OrcasNinjaTeam/blob/master/Oracle%20to%20PostgreSQL%20Migration%20Guide/Oracle%20to%20Azure%20Database%20for%20PostgreSQL%20Migration%20Guide.pdf).
-
-## Prerequisites
-
-To migrate your Oracle schema to Azure Database for PostgreSQL, you need to:
-- Verify your source environment is supported.
-- Download the latest version of [ora2pg](https://ora2pg.darold.net/).
-- Have the latest version of the [DBD module](https://www.cpan.org/modules/by-module/DBD/).
-
-## Overview
-
-PostgreSQL is one of the world's most advanced open-source databases. This article describes how to use the free ora2pg tool to migrate an Oracle database to PostgreSQL. You can use ora2pg to migrate an Oracle database or MySQL database to a PostgreSQL-compatible schema.
-
-The ora2pg tool connects to your Oracle database, scans it automatically, and extracts its structure or data. Then ora2pg generates SQL scripts that you can load into your PostgreSQL database. You can use ora2pg for tasks such as reverse-engineering an Oracle database, migrating a huge enterprise database, or simply replicating some Oracle data into a PostgreSQL database. The tool is easy to use and requires no Oracle database knowledge besides the ability to provide the parameters needed to connect to the Oracle database.
-
-> [!NOTE]
-> For more information about using the latest version of ora2pg, see the [ora2pg documentation](https://ora2pg.darold.net/documentation.html).
-
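-As a concrete sketch of that workflow (the flags come from the ora2pg documentation; the paths and project name are hypothetical):
-
-```bash
-# Scaffold a migration project with the standard directory layout and a template ora2pg.conf.
-ora2pg --project_base /migration --init_project oracle_to_pg
-
-# Export table definitions to a SQL script using that configuration.
-ora2pg -t TABLE -o tables.sql -b /migration/oracle_to_pg/schema/tables -c /migration/oracle_to_pg/config/ora2pg.conf
-```
-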
-### Typical ora2pg migration architecture
-
-![Screenshot of the ora2pg migration architecture.](media/howto-migrate-from-oracle/ora2pg-migration-architecture.png)
-
-After you provision the VM and Azure Database for PostgreSQL, you need two configurations to enable connectivity between them: **Allow access to Azure services** and **Enforce SSL Connection**:
-- **Connection Security** blade > **Allow access to Azure services** > **ON**
-- **Connection Security** blade > **SSL Settings** > **Enforce SSL Connection** > **DISABLED**
-
-### Recommendations
-- To improve the performance of the assessment or export operations on the Oracle server, collect statistics:
- ```
- BEGIN
-   -- Replace <schema_name> with the schema you're assessing
-   DBMS_STATS.GATHER_SCHEMA_STATS('<schema_name>');
-   DBMS_STATS.GATHER_DATABASE_STATS;
-   DBMS_STATS.GATHER_DICTIONARY_STATS;
- END;
- /
- ```
-- Export data by using the `COPY` command instead of `INSERT`.
-- Avoid exporting tables with their foreign keys (FKs), constraints, and indexes. These elements slow down the process of importing data into PostgreSQL.
-- Create materialized views by using the *no data clause*, and refresh the views later.
-- If possible, use unique indexes in materialized views. These indexes can speed up the refresh when you use the syntax `REFRESH MATERIALIZED VIEW CONCURRENTLY`, as shown in the sketch after this list.
-
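-Here's a minimal sketch of the materialized-view pattern, assuming a hypothetical table `table1` with a unique `id` column:
-
-```
-CREATE MATERIALIZED VIEW mv_table1 AS
-SELECT * FROM table1
-WITH NO DATA;
-
--- A unique index is required for REFRESH ... CONCURRENTLY
-CREATE UNIQUE INDEX mv_table1_id ON mv_table1 (id);
-
--- The first refresh populates the view; later refreshes can run concurrently
-REFRESH MATERIALIZED VIEW mv_table1;
-REFRESH MATERIALIZED VIEW CONCURRENTLY mv_table1;
-```
-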
-## Pre-migration
-
-After you verify that your source environment is supported and that you've addressed any prerequisites, you're ready to start the premigration stage. To begin:
-
-1. **Discover**: Inventory the databases that you need to migrate.
-2. **Assess**: Assess those databases for potential migration issues or blockers.
-3. **Convert**: Resolve any items you uncovered.
-
-For heterogeneous migrations such as Oracle to Azure Database for PostgreSQL, this stage also involves making the source database schemas compatible with the target environment.
-
-### Discover
-
-The goal of the discovery phase is to identify existing data sources and details about the features that are being used. This phase helps you better understand and plan for the migration. The process involves scanning the network to identify all your organization's Oracle instances together with the version and features in use.
-
-Microsoft pre-assessment scripts for Oracle run against the Oracle database. The pre-assessment scripts query the Oracle metadata. The scripts provide:
-- A database inventory, including counts of objects by schema, type, and status.
-- A rough estimate of the raw data in each schema, based on statistics.
-- The size of tables in each schema.
-- The number of code lines per package, function, procedure, and so on.
-
-Download the related scripts from [GitHub](https://github.com/microsoft/DataMigrationTeam/tree/master/Whitepapers).
-
-### Assess
-
-After you inventory the Oracle databases, you'll have an idea of the database size and potential challenges. The next step is to run the assessment.
-
-Estimating the cost of a migration from Oracle to PostgreSQL isn't easy. To assess the migration cost, ora2pg checks all database objects, functions, and stored procedures for objects and PL/SQL code that it can't automatically convert.
-
-The ora2pg tool has a content analysis mode that inspects the Oracle database to generate a text report. The report describes what the Oracle database contains and what can't be exported.
-
-To activate the *analysis and report* mode, use the export type `SHOW_REPORT`, as shown in the following command:
-
-```
-ora2pg -t SHOW_REPORT
-```
-
-The ora2pg tool can convert SQL and PL/SQL code from Oracle syntax to PostgreSQL. So after the database is analyzed, ora2pg can estimate the code difficulties and the time necessary to migrate a full database.
-
-To estimate the migration cost in human-days, ora2pg allows you to use a configuration directive called `ESTIMATE_COST`. You can also enable this directive at a command prompt:
-
-```
-ora2pg -t SHOW_REPORT --estimate_cost
-```
-
-The default migration unit represents around five minutes for a PostgreSQL expert. If this migration is your first, you can increase the default migration unit by using the configuration directive `COST_UNIT_VALUE` or the `--cost_unit_value` command-line option.
-
-The last line of the report shows the total estimated migration cost in human-days. The estimate is derived from the number of migration units estimated for each object.
-
-In the following code example, you see some assessment variations:
-* Tables assessment
-* Columns assessment
-* Schema assessment that uses a default cost unit of 5 minutes
-* Schema assessment that uses a cost unit of 10 minutes
-
-```
-ora2pg -t SHOW_TABLE -c c:\ora2pg\ora2pg_hr.conf > c:\ts303\hr_migration\reports\tables.txt
-ora2pg -t SHOW_COLUMN -c c:\ora2pg\ora2pg_hr.conf > c:\ts303\hr_migration\reports\columns.txt
-ora2pg -t SHOW_REPORT -c c:\ora2pg\ora2pg_hr.conf --dump_as_html --estimate_cost > c:\ts303\hr_migration\reports\report.html
-ora2pg -t SHOW_REPORT -c c:\ora2pg\ora2pg_hr.conf --cost_unit_value 10 --dump_as_html --estimate_cost > c:\ts303\hr_migration\reports\report2.html
-```
-
-Here's how to interpret a schema assessment result such as migration level B-5:
-
-* Migration levels:
-
- * A - Migration that can be run automatically
-
- * B - Migration with code rewrite and a human-days cost up to 5 days
-
- * C - Migration with code rewrite and a human-days cost over 5 days
-
-* Technical levels:
-
- * 1 = Trivial: No stored functions and no triggers
-
- * 2 = Easy: No stored functions, but triggers; no manual rewriting
-
- * 3 = Simple: Stored functions and/or triggers; no manual rewriting
-
- * 4 = Manual: No stored functions, but triggers or views with code rewriting
-
- * 5 = Difficult: Stored functions and/or triggers with code rewriting
-
-The assessment consists of:
-* A letter (A, B, or C) to specify whether the migration needs manual rewriting.
-
-* A number from 1 to 5 to indicate the technical difficulty.
-
-Another option, `--human_days_limit`, specifies the limit of human-days. Above this limit, the migration level is set to C to indicate that the migration needs a large amount of work, full project management, and migration support. The default is 10 human-days. You can use the configuration directive `HUMAN_DAYS_LIMIT` to change this default value permanently.
-
-This schema assessment was developed to help users decide which database to migrate first and which teams to mobilize.
-
-### Convert
-
-
-In this step of the migration, the Oracle code and DDL scripts are converted or translated to PostgreSQL. The ora2pg tool exports the Oracle objects in a PostgreSQL format automatically. Some of the generated objects can't be compiled in the PostgreSQL database without manual changes.
-
-To understand which elements need manual intervention, first compile the files generated by ora2pg against the PostgreSQL database. Check the log, and then make any necessary changes until the schema structure is compatible with PostgreSQL syntax.
--
-#### Create a migration template
-
-We recommend using the migration template that ora2pg provides. When you use the options `--project_base` and `--init_project`, ora2pg creates a project template with a work tree, a configuration file, and a script to export all objects from the Oracle database. For more information, see the [ora2pg documentation](https://ora2pg.darold.net/documentation.html).
-
-Use the following command:
-
-```
-ora2pg --project_base /app/migration/ --init_project test_project
-```
-
-Here's the example output:
-
-```
-ora2pg --project_base /app/migration/ --init_project test_project
- Creating project test_project.
- /app/migration/test_project/
- schema/
- dblinks/
- directories/
- functions/
- grants/
- mviews/
- packages/
- partitions/
- procedures/
- sequences/
- synonyms/
- tables/
- tablespaces/
- triggers/
- types/
- views/
- sources/
- functions/
- mviews/
- packages/
- partitions/
- procedures/
- triggers/
- types/
- views/
- data/
- config/
- reports/
-
- Generating generic configuration file
- Creating script export_schema.sh to automate all exports.
- Creating script import_all.sh to automate all imports.
-```
-
-The `sources/` directory contains the Oracle code. The `schema/` directory contains the code ported to PostgreSQL. And the `reports/` directory contains the HTML reports and the migration cost assessment.
--
-After the project structure is created, ora2pg generates a generic configuration file. Define the Oracle database connection and the relevant configuration parameters in this config file; a minimal sketch follows. For more information about the config file, see the [ora2pg documentation](https://ora2pg.darold.net/documentation.html).
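-
-Here's a minimal sketch of the connection-related directives in *ora2pg.conf* (all values are placeholders; check the ora2pg documentation for the full directive list):
-
-```
-# Oracle connection (placeholder values)
-ORACLE_HOME /usr/lib/oracle/19.6/client64
-ORACLE_DSN dbi:Oracle:host=my_oracle_host;sid=ORCL;port=1521
-ORACLE_USER migration_user
-ORACLE_PWD my_password
-
-# Limit the export to a single schema
-SCHEMA hr
-```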
--
-#### Export Oracle objects
-
-Next, export the Oracle objects as PostgreSQL objects by running the file *export_schema.sh*.
-
-```
-cd /app/migration/mig_project
-./export_schema.sh
-```
-
-Alternatively, you can run the export commands manually, as in the following Windows batch example.
-
-```
-SET namespace=/app/migration/mig_project
-
-ora2pg -p -t DBLINK -o dblink.sql -b %namespace%/schema/dblinks -c %namespace%/config/ora2pg.conf
-ora2pg -p -t DIRECTORY -o directory.sql -b %namespace%/schema/directories -c %namespace%/config/ora2pg.conf
-ora2pg -p -t FUNCTION -o functions2.sql -b %namespace%/schema/functions -c %namespace%/config/ora2pg.conf
-ora2pg -p -t GRANT -o grants.sql -b %namespace%/schema/grants -c %namespace%/config/ora2pg.conf
-ora2pg -p -t MVIEW -o mview.sql -b %namespace%/schema/mviews -c %namespace%/config/ora2pg.conf
-ora2pg -p -t PACKAGE -o packages.sql -b %namespace%/schema/packages -c %namespace%/config/ora2pg.conf
-ora2pg -p -t PARTITION -o partitions.sql -b %namespace%/schema/partitions -c %namespace%/config/ora2pg.conf
-ora2pg -p -t PROCEDURE -o procs.sql -b %namespace%/schema/procedures -c %namespace%/config/ora2pg.conf
-ora2pg -p -t SEQUENCE -o sequences.sql -b %namespace%/schema/sequences -c %namespace%/config/ora2pg.conf
-ora2pg -p -t SYNONYM -o synonym.sql -b %namespace%/schema/synonyms -c %namespace%/config/ora2pg.conf
-ora2pg -p -t TABLE -o table.sql -b %namespace%/schema/tables -c %namespace%/config/ora2pg.conf
-ora2pg -p -t TABLESPACE -o tablespaces.sql -b %namespace%/schema/tablespaces -c %namespace%/config/ora2pg.conf
-ora2pg -p -t TRIGGER -o triggers.sql -b %namespace%/schema/triggers -c %namespace%/config/ora2pg.conf
-ora2pg -p -t TYPE -o types.sql -b %namespace%/schema/types -c %namespace%/config/ora2pg.conf
-ora2pg -p -t VIEW -o views.sql -b %namespace%/schema/views -c %namespace%/config/ora2pg.conf
-```
-
-To extract the data, use the following command.
-
-```
-ora2pg -t COPY -o data.sql -b %namespace%/data -c %namespace%/config/ora2pg.conf
-```
-
-#### Compile files
-
-Finally, compile all files against the Azure Database for PostgreSQL server. You can load the generated DDL files manually or use the second script *import_all.sh* to import those files interactively.
-
-```
-psql -f %namespace%\schema\sequences\sequence.sql -h server1-server.postgres.database.azure.com -p 5432 -U username@server1-server -d database -L %namespace%\schema\sequences\create_sequences.log
-
-psql -f %namespace%\schema\tables\table.sql -h server1-server.postgres.database.azure.com -p 5432 -U username@server1-server -d database -L %namespace%\schema\tables\create_table.log
-```
-
-Here's the data import command:
-
-```
-psql -f %namespace%\data\table1.sql -h server1-server.postgres.database.azure.com -p 5432 -U username@server1-server -d database -L %namespace%\data\table1.log
-
-psql -f %namespace%\data\table2.sql -h server1-server.postgres.database.azure.com -p 5432 -U username@server1-server -d database -L %namespace%\data\table2.log
-```
-
-While the files are being compiled, check the logs and correct any syntax that ora2pg couldn't convert on its own.
-
-For more information, see [Oracle to Azure Database for PostgreSQL migration workarounds](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20Azure%20Database%20for%20PostgreSQL%20Migration%20Workarounds.pdf).
-
-## Migrate
-
-After you have the necessary prerequisites and you've completed the premigration steps, you can start the schema and data migration.
-
-### Migrate schema and data
-
-When you've made the necessary fixes, a stable build of the database is ready to deploy. Run the `psql` import commands, pointing to the files that contain the modified code. This task compiles the database objects against the PostgreSQL database and imports the data.
-
-In this step, you can implement a level of parallelism when you import the data, as shown in the sketch that follows.
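-
-For example, if the exported table files are independent, a simple approach is to run several `psql` sessions at the same time. Here's a sketch, assuming a Bash shell and the server and file names used earlier in this article:
-
-```
-psql -f data/table1.sql -h server1-server.postgres.database.azure.com -p 5432 -U username@server1-server -d database &
-psql -f data/table2.sql -h server1-server.postgres.database.azure.com -p 5432 -U username@server1-server -d database &
-wait
-```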
-
-### Sync data and cut over
-
-In online (minimal-downtime) migrations, the migration source continues to change. It drifts from the target in terms of data and schema after the one-time migration.
-
-During the *Data sync* phase, ensure that all changes in the source are captured and applied to the target in near real time. After you verify that all changes are applied, you can cut over from the source to the target environment.
-
-To do an online migration, contact AskAzureDBforPostgreSQL@service.microsoft.com for support.
-
-In a *delta/incremental* migration that uses ora2pg, for each table, use a query that filters (*cuts*) by date, time, or another parameter. Then finish the migration by using a second query that migrates the remaining data.
-
-In the source data table, migrate all the historical data first. Here's an example:
-
-```
-select * from table1 where filter_data < '01/01/2019';
-```
-
-You can query the changes since the initial migration by running a command like this one:
-
-```
-select * from table1 where filter_data >= '01/01/2019';
-```
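-
-If you drive the delta export through ora2pg, the ora2pg documentation describes a `WHERE` configuration directive that can apply such a filter per table. Here's a sketch (verify the exact syntax against your ora2pg version):
-
-```
-# In ora2pg.conf: export only historical rows for table1
-WHERE table1[filter_data < '01/01/2019']
-```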
-
-In this case, we recommend that you enhance validation by checking data parity on both sides, the source and the target.
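-
-A basic parity check is to run the same aggregates on the source and the target and compare the results. Here's a sketch; adapt the column names to your tables:
-
-```
-select count(*), max(filter_data) from table1;
-```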
-
-## Post-migration
-
-After the *Migration* stage, complete the post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
-
-### Remediate applications
-
-After the data is migrated to the target environment, all the applications that formerly consumed the source need to start consuming the target. The setup sometimes requires changes to the applications.
-
-### Test
-
-After the data is migrated to the target, run tests against the databases to verify that the applications work well with the target. Make sure the source and target are properly migrated by running the manual data validation scripts against the Oracle source and PostgreSQL target databases.
-
-Ideally, if the source and target databases have a networking path, ora2pg should be used for data validation. You can use the `TEST` action to ensure that all objects from the Oracle database have been created in PostgreSQL.
-
-Run this command:
-
-```
-ora2pg -t TEST -c config/ora2pg.conf > migration_diff.txt
-```
-
-### Optimize
-
-The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness. In this phase, you also address performance issues with the workload.
-
-## Migration assets
-
-For more information about this migration scenario, see the following resources. They support real-world migration project engagement.
-
-| Resource | Description |
-| -- | |
-| [Oracle to Azure PostgreSQL migration cookbook](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20Azure%20PostgreSQL%20Migration%20Cookbook.pdf) | This document helps architects, consultants, database administrators, and related roles quickly migrate workloads from Oracle to Azure Database for PostgreSQL by using ora2pg. |
-| [Oracle to Azure PostgreSQL migration workarounds](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20Azure%20Database%20for%20PostgreSQL%20Migration%20Workarounds.pdf) | This document helps architects, consultants, database administrators, and related roles quickly fix or work around issues while migrating workloads from Oracle to Azure Database for PostgreSQL. |
-| [Steps to install ora2pg on Windows or Linux](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Steps%20to%20Install%20ora2pg%20on%20Windows%20and%20Linux.pdf) | This document provides a quick installation guide for migrating schema and data from Oracle to Azure Database for PostgreSQL by using ora2pg on Windows or Linux. For more information, see the [ora2pg documentation](http://ora2pg.darold.net/documentation.html). |
-
-The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to the Microsoft Azure data platform.
-
-## More support
-
-For migration help beyond the scope of ora2pg tooling, contact [@Ask Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com).
-
-## Next steps
-
-For a matrix of services and tools for database and data migration and for specialty tasks, see [Services and tools for data migration](../dms/dms-tools-matrix.md).
-
-Documentation:
-- [Azure Database for PostgreSQL documentation](./index.yml)
-- [ora2pg documentation](https://ora2pg.darold.net/documentation.html)
-- [PostgreSQL website](https://www.postgresql.org/)
-- [Autonomous transaction support in PostgreSQL](http://blog.dalibo.com/2016/08/19/Autonoumous_transactions_support_in_PostgreSQL.html)
postgresql Howto Migrate Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-migrate-online.md
- Title: Minimal-downtime migration to Azure Database for PostgreSQL - Single Server
-description: This article describes how to perform a minimal-downtime migration of a PostgreSQL database to Azure Database for PostgreSQL - Single Server by using the Azure Database Migration Service.
- Previously updated: 5/6/2019
-# Minimal-downtime migration to Azure Database for PostgreSQL - Single Server
-
-You can perform PostgreSQL migrations to Azure Database for PostgreSQL with minimal downtime by using the newly introduced **continuous sync capability** for the [Azure Database Migration Service](https://aka.ms/get-dms) (DMS). This functionality limits the amount of downtime that is incurred by the application.
-
-## Overview
-Azure DMS performs an initial load of your on-premises database to Azure Database for PostgreSQL, and then continuously syncs any new transactions to Azure while the application remains running. After the data catches up on the target Azure side, you stop the application for a brief moment (the minimum downtime), wait for the last batch of data (generated between the time you stop the application and the moment it's effectively unavailable to take new traffic) to catch up on the target, and then update your connection string to point to Azure. When you're finished, your application is live on Azure!
--
-## Next steps
-- View the video [App Modernization with Microsoft Azure](https://medius.studios.ms/Embed/Video/BRK2102?sid=BRK2102), which contains a demo showing how to migrate PostgreSQL apps to Azure Database for PostgreSQL.
-- See the tutorial [Migrate PostgreSQL to Azure Database for PostgreSQL online using DMS](../dms/tutorial-postgresql-azure-postgresql-online.md).
postgresql Howto Migrate Using Dump And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-migrate-using-dump-and-restore.md
- Title: Dump and restore - Azure Database for PostgreSQL - Single Server
-description: You can extract a PostgreSQL database into a dump file. Then, you can restore from a file created by pg_dump in Azure Database for PostgreSQL Single Server.
- Previously updated: 09/22/2020
-# Migrate your PostgreSQL database by using dump and restore
-
-You can use [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) to extract a PostgreSQL database into a dump file. Then use [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html) to restore the PostgreSQL database from an archive file created by `pg_dump`.
-
-## Prerequisites
-
-To step through this how-to guide, you need:
-- An [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md), including firewall rules to allow access.
-- [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html) command-line utilities installed.
-
-## Create a dump file that contains the data to be loaded
-
-To back up an existing PostgreSQL database on-premises or in a VM, run the following command:
-
-```bash
-pg_dump -Fc -v --host=<host> --username=<name> --dbname=<database name> -f <database>.dump
-```
-For example, if you have a local server and a database called **testdb** in it, run:
-
-```bash
-pg_dump -Fc -v --host=localhost --username=masterlogin --dbname=testdb -f testdb.dump
-```
-
-## Restore the data into the target database
-
-After you've created the target database, you can use the `pg_restore` command and the `--dbname` parameter to restore the data into the target database from the dump file.
-
-```bash
-pg_restore -v --no-owner --host=<server name> --port=<port> --username=<user-name> --dbname=<target database name> <database>.dump
-```
-
-Including the `--no-owner` parameter causes all objects created during the restore to be owned by the user specified with `--username`. For more information, see the [PostgreSQL documentation](https://www.postgresql.org/docs/9.6/static/app-pgrestore.html).
-
-> [!NOTE]
-> On Azure Database for PostgreSQL servers, TLS/SSL connections are on by default. If your PostgreSQL server requires TLS/SSL connections but `pg_restore` isn't connecting with TLS, set the environment variable `PGSSLMODE=require` so that the pg_restore tool connects with TLS. Without TLS, the error might read: "FATAL: SSL connection is required. Please specify SSL options and retry." At the Windows command line, run `SET PGSSLMODE=require` before running the `pg_restore` command. In Linux or Bash, run `export PGSSLMODE=require` before running the `pg_restore` command.
->
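-
-For example, in Bash, a TLS-enforced restore that combines the note above with the Single Server example shown below looks like this:
-
-```bash
-export PGSSLMODE=require
-pg_restore -v --no-owner --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin@mydemoserver --dbname=mypgsqldb testdb.dump
-```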
-
-In this example, restore the data from the dump file **testdb.dump** into the database **mypgsqldb**, on target server **mydemoserver.postgres.database.azure.com**.
-
-Here's an example of how to use `pg_restore` for Single Server:
-
-```bash
-pg_restore -v --no-owner --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin@mydemoserver --dbname=mypgsqldb testdb.dump
-```
-
-Here's an example of how to use `pg_restore` for Flexible Server:
-
-```bash
-pg_restore -v --no-owner --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin --dbname=mypgsqldb testdb.dump
-```
-
-## Optimize the migration process
-
-One way to migrate your existing PostgreSQL database to Azure Database for PostgreSQL is to back up the database on the source and restore it in Azure. To minimize the time required to complete the migration, consider using the following parameters with the backup and restore commands.
-
-> [!NOTE]
-> For detailed syntax information, see [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html).
->
-
-### For the backup
-
-Take the backup with the `-Fc` switch, so that you can perform the restore in parallel to speed it up. For example:
-
-```bash
-pg_dump -h my-source-server-name -U source-server-username -Fc -d source-databasename -f Z:\Data\Backups\my-database-backup.dump
-```
-
-### For the restore
-- Move the backup file to an Azure VM in the same region as the Azure Database for PostgreSQL server you are migrating to. Perform the `pg_restore` from that VM to reduce network latency. Create the VM with [accelerated networking](../virtual-network/create-vm-accelerated-networking-powershell.md) enabled.
-- Open the dump file to verify that the create index statements are after the insert of the data. If that isn't the case, move the create index statements after the data is inserted. This should already be done by default, but it's a good idea to confirm.
-- Restore with the switches `-Fc` and `-j` (with a number) to parallelize the restore. The number you specify is the number of cores on the target server. You can also set it to twice the number of cores of the target server to see the impact.
-
- Here's an example of how to use `pg_restore` for Single Server:
-
- ```bash
- pg_restore -h my-target-server.postgres.database.azure.com -U azure-postgres-username@my-target-server -Fc -j 4 -d my-target-databasename Z:\Data\Backups\my-database-backup.dump
- ```
-
- Here's an example of how to use `pg_restore` for Flexible Server:
-
- ```bash
- pg_restore -h my-target-server.postgres.database.azure.com -U azure-postgres-username -Fc -j 4 -d my-target-databasename Z:\Data\Backups\my-database-backup.dump
- ```
-- You can also edit the dump file: add the command `set synchronous_commit = off;` at the beginning and the command `set synchronous_commit = on;` at the end (see the sketch after this list). Not turning it back on at the end, before the apps change the data, might result in subsequent data loss.
-- On the target Azure Database for PostgreSQL server, consider doing the following before the restore:
-
- - Turn off query performance tracking. These statistics aren't needed during the migration. You can do this by setting `pg_stat_statements.track`, `pg_qs.query_capture_mode`, and `pgms_wait_sampling.query_capture_mode` to `NONE`.
-
- - Use a high compute and high memory SKU, like 32 vCore Memory Optimized, to speed up the migration. You can easily scale back down to your preferred SKU after the restore is complete. The higher the SKU, the more parallelism you can achieve by increasing the corresponding `-j` parameter in the `pg_restore` command.
-
- - More IOPS on the target server might improve the restore performance. You can provision more IOPS by increasing the server's storage size. This setting isn't reversible, but consider whether a higher IOPS would benefit your actual workload in the future.
-
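-Here's a sketch of the dump-file edit mentioned above. It assumes GNU `sed` and a plain-format (SQL) dump named *my-database-backup.sql*; a custom-format (`-Fc`) dump is binary and can't be edited as text:
-
-```bash
-# Turn synchronous commit off for the duration of the load
-sed -i '1i set synchronous_commit = off;' my-database-backup.sql
-# Turn it back on at the end, before applications change data
-echo 'set synchronous_commit = on;' >> my-database-backup.sql
-```
-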
-Remember to test and validate these commands in a test environment before you use them in production.
-
-## Next steps
-- To migrate a PostgreSQL database by using export and import, see [Migrate your PostgreSQL database using export and import](howto-migrate-using-export-and-import.md).
-- For more information about migrating databases to Azure Database for PostgreSQL, see the [Database Migration Guide](/data-migration/).
postgresql Howto Migrate Using Export And Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-migrate-using-export-and-import.md
- Title: Migrate a database - Azure Database for PostgreSQL - Single Server
-description: Describes how to extract a PostgreSQL database into a script file and import the data into the target database from that file.
- Previously updated: 09/22/2020
-# Migrate your PostgreSQL database using export and import
-
-You can use [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) to extract a PostgreSQL database into a script file and [psql](https://www.postgresql.org/docs/current/static/app-psql.html) to import the data into the target database from that file.
-
-## Prerequisites
-To step through this how-to guide, you need:
-- An [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md) with firewall rules to allow access, and a database in it.
-- The [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) command-line utility installed.
-- The [psql](https://www.postgresql.org/docs/current/static/app-psql.html) command-line utility installed.
-
-Follow these steps to export and import your PostgreSQL database.
-
-## Create a script file using pg_dump that contains the data to be loaded
-To export your existing PostgreSQL database on-premises or in a VM to a sql script file, run the following command in your existing environment:
-
-```bash
-pg_dump --host=<host> --username=<name> --dbname=<database name> --file=<database>.sql
-```
-For example, if you have a local server and a database called **testdb** in it:
-```bash
-pg_dump --host=localhost --username=masterlogin --dbname=testdb --file=testdb.sql
-```
-
-## Import the data on target Azure Database for PostgreSQL
-You can use the `psql` command-line utility with the `--dbname` parameter (`-d`) to import the data into the Azure Database for PostgreSQL server and load data from the SQL file.
-
-```bash
-psql --file=<database>.sql --host=<server name> --port=5432 --username=<user> --dbname=<target database name>
-```
-This example uses the `psql` utility and the script file named **testdb.sql** from the previous step to import data into the database **mypgsqldb** on the target server **mydemoserver.postgres.database.azure.com**.
-
-For **Single Server**, use this command:
-```bash
-psql --file=testdb.sql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin@mydemoserver --dbname=mypgsqldb
-```
-
-For **Flexible Server**, use this command:
-```bash
-psql --file=testdb.sql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin --dbname=mypgsqldb
-```
---
-## Next steps
-- To migrate a PostgreSQL database using dump and restore, see [Migrate your PostgreSQL database using dump and restore](howto-migrate-using-dump-and-restore.md).
-- For more information about migrating databases to Azure Database for PostgreSQL, see the [Database Migration Guide](/data-migration/).
postgresql Howto Move Regions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-move-regions-portal.md
- Title: Move Azure regions - Azure portal - Azure Database for PostgreSQL - Single Server
-description: Move an Azure Database for PostgreSQL server from one Azure region to another using a read replica and the Azure portal.
- Previously updated: 06/29/2020
-#Customer intent: As an Azure service administrator, I want to move my service resources to another Azure region
--
-# Move an Azure Database for PostgreSQL - Single Server to another region by using the Azure portal
-
-There are various scenarios for moving an existing Azure Database for PostgreSQL server from one region to another. For example, you might want to move a production server to another region as part of your disaster recovery planning.
-
-You can use an Azure Database for PostgreSQL [cross-region read replica](concepts-read-replicas.md#cross-region-replication) to complete the move to another region. To do so, first create a read replica in the target region. Next, stop replication to the read replica server to make it a standalone server that accepts both read and write traffic.
-
-> [!NOTE]
-> This article focuses on moving your server to a different region. If you want to move your server to a different resource group or subscription, refer to the [move](../azure-resource-manager/management/move-resource-group-and-subscription.md) article.
-
-## Prerequisites
-- The cross-region read replica feature is only available for Azure Database for PostgreSQL - Single Server in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers.
-- Make sure that your Azure Database for PostgreSQL source server is in the Azure region that you want to move from.
-
-## Prepare to move
-
-To prepare the source server for replication using the Azure portal, use the following steps:
-
-1. Sign into the [Azure portal](https://portal.azure.com/).
-1. Select the existing Azure Database for PostgreSQL server that you want to use as the source server. This action opens the **Overview** page.
-1. From the server's menu, select **Replication**. If Azure replication support is set to at least **Replica**, you can create read replicas.
-1. If Azure replication support is not set to at least **Replica**, set it. Select **Save**.
-1. Restart the server to apply the change by selecting **Yes**.
-1. You will receive two Azure portal notifications once the operation is complete. There is one notification for updating the server parameter. There is another notification for the server restart that follows immediately.
-1. Refresh the Azure portal page to update the Replication toolbar. You can now create read replicas for this server.
-
-To create a cross-region read replica server in the target region using the Azure portal, use the following steps:
-
-1. Select the existing Azure Database for PostgreSQL server that you want to use as the source server.
-1. Select **Replication** from the menu, under **SETTINGS**.
-1. Select **Add Replica**.
-1. Enter a name for the replica server.
-1. Select the location for the replica server. The default location is the same as the primary server's. Verify that you've selected the target location where you want the replica to be deployed.
-1. Select **OK** to confirm creation of the replica. During replica creation, data is copied from the source server to the replica. Creation may take several minutes or more, in proportion to the size of the source server.
-
->[!NOTE]
-> When you create a replica, it doesn't inherit the firewall rules and VNet service endpoints of the primary server. These rules must be set up independently for the replica.
-
-## Move
-
-> [!IMPORTANT]
-> The standalone server can't be made into a replica again.
-> Before you stop replication on a read replica, ensure the replica has all the data that you require.
-
-To stop replication to the replica from the Azure portal, use the following steps:
-
-1. Once the replica has been created, locate and select your Azure Database for PostgreSQL source server.
-1. Select **Replication** from the menu, under **SETTINGS**.
-1. Select the replica server.
-1. Select **Stop replication**.
-1. Confirm you want to stop replication by clicking **OK**.
-
-## Clean up source server
-
-You may want to delete the source Azure Database for PostgreSQL server. To do so, use the following steps:
-
-1. Once the replica has been created, locate and select your Azure Database for PostgreSQL source server.
-1. In the **Overview** window, select **Delete**.
-1. Type in the name of the source server to confirm you want to delete.
-1. Select **Delete**.
-
-## Next steps
-
-In this tutorial, you moved an Azure Database for PostgreSQL server from one region to another by using the Azure portal and then cleaned up the unneeded source resources.
-- Learn more about [read replicas](concepts-read-replicas.md)
-- Learn more about [managing read replicas in the Azure portal](howto-read-replicas-portal.md)
-- Learn more about [business continuity](concepts-business-continuity.md) options
postgresql Howto Optimize Query Stats Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-optimize-query-stats-collection.md
- Title: Optimize query stats collection - Azure Database for PostgreSQL - Single Server
-description: This article describes how you can optimize query stats collection on an Azure Database for PostgreSQL - Single Server
- Previously updated: 5/6/2019
-# Optimize query statistics collection on an Azure Database for PostgreSQL - Single Server
-This article describes how to optimize query statistics collection on an Azure Database for PostgreSQL server.
-
-## Use pg_stat_statements
-**Pg_stat_statements** is a PostgreSQL extension that's enabled by default in Azure Database for PostgreSQL. The extension provides a means to track execution statistics for all SQL statements executed by a server. This module hooks into every query execution and comes with a non-trivial performance cost. Enabling **pg_stat_statements** forces query text writes to files on disk.
-
-If you have unique queries with long query text or you don't actively monitor **pg_stat_statements**, disable **pg_stat_statements** for best performance. To do so, change the setting to `pg_stat_statements.track = NONE`.
-
-Some customer workloads have seen up to a 50 percent performance improvement when **pg_stat_statements** is disabled. The tradeoff you make when you disable pg_stat_statements is the inability to troubleshoot performance issues.
-
-To set `pg_stat_statements.track = NONE`:
-- In the Azure portal, go to the [PostgreSQL resource management page and select the server parameters blade](howto-configure-server-parameters-using-portal.md).
- :::image type="content" source="./media/howto-optimize-query-stats-collection/pg_stats_statements_portal.png" alt-text="PostgreSQL server parameter blade":::
-- Use the [Azure CLI](howto-configure-server-parameters-using-cli.md) command `az postgres server configuration set`, as shown in the sketch below.
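-
-For example (a sketch using the placeholder resource names from this article):
-
-```azurecli-interactive
-az postgres server configuration set --name pg_stat_statements.track --resource-group myresourcegroup --server mydemoserver --value NONE
-```
-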
-## Use the Query Store
-The [Query Store](concepts-query-store.md) feature in Azure Database for PostgreSQL provides a more effective method to track query statistics. We recommend this feature as an alternative to using *pg_stat_statements*.
-
-## Next steps
-Consider setting `pg_stat_statements.track = NONE` in the [Azure portal](howto-configure-server-parameters-using-portal.md) or by using the [Azure CLI](howto-configure-server-parameters-using-cli.md).
-
-For more information, see:
-- [Query Store usage scenarios](concepts-query-store-scenarios.md)
-- [Query Store best practices](concepts-query-store-best-practices.md)
postgresql Howto Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-read-replicas-cli.md
- Title: Manage read replicas - Azure CLI, REST API - Azure Database for PostgreSQL - Single Server
-description: Learn how to manage read replicas in Azure Database for PostgreSQL - Single Server from the Azure CLI and REST API
- Previously updated: 12/17/2020
-# Create and manage read replicas from the Azure CLI, REST API
-
-In this article, you learn how to create and manage read replicas in Azure Database for PostgreSQL by using the Azure CLI and REST API. To learn more about read replicas, see the [overview](concepts-read-replicas.md).
-
-## Azure replication support
-[Read replicas](concepts-read-replicas.md) and [logical decoding](concepts-logical.md) both depend on the Postgres write ahead log (WAL) for information. These two features need different levels of logging from Postgres. Logical decoding needs a higher level of logging than read replicas.
-
-To configure the right level of logging, use the Azure replication support parameter. Azure replication support has three setting options:
-
-* **Off** - Puts the least information in the WAL. This setting is not available on most Azure Database for PostgreSQL servers.
-* **Replica** - More verbose than **Off**. This is the minimum level of logging needed for [read replicas](concepts-read-replicas.md) to work. This setting is the default on most servers.
-* **Logical** - More verbose than **Replica**. This is the minimum level of logging for logical decoding to work. Read replicas also work at this setting.
--
-> [!NOTE]
-> When deploying read replicas for persistent, write-intensive primary workloads, the replication lag could continue to grow and might never catch up with the primary. This can also increase storage usage at the primary, because the WAL files aren't deleted until they're received at the replica.
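-
-To gauge how far behind a replica is, you can run a query like the following on the replica (a sketch using built-in PostgreSQL functions):
-
-```sql
--- Approximate replay delay; returns NULL on a primary and can overstate lag when the primary is idle
-SELECT now() - pg_last_xact_replay_timestamp() AS replay_delay;
-```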
-
-## Azure CLI
-You can create and manage read replicas using the Azure CLI.
-
-### Prerequisites
-- [Install Azure CLI 2.0](/cli/azure/install-azure-cli)
-- An [Azure Database for PostgreSQL server](quickstart-create-server-up-azure-cli.md) to be the primary server.
-
-### Prepare the primary server
-
-1. Check the primary server's `azure.replication_support` value. It should be at least REPLICA for read replicas to work.
-
- ```azurecli-interactive
- az postgres server configuration show --resource-group myresourcegroup --server-name mydemoserver --name azure.replication_support
- ```
-
-2. If `azure.replication_support` is not at least REPLICA, set it.
-
- ```azurecli-interactive
- az postgres server configuration set --resource-group myresourcegroup --server-name mydemoserver --name azure.replication_support --value REPLICA
- ```
-
-3. Restart the server to apply the change.
-
- ```azurecli-interactive
- az postgres server restart --name mydemoserver --resource-group myresourcegroup
- ```
-
-### Create a read replica
-
-The [az postgres server replica create](/cli/azure/postgres/server/replica#az-postgres-server-replica-create) command requires the following parameters:
-
-| Setting | Example value | Description |
-| | | |
-| resource-group | myresourcegroup | The resource group where the replica server will be created. |
-| name | mydemoserver-replica | The name of the new replica server that is created. |
-| source-server | mydemoserver | The name or resource ID of the existing primary server to replicate from. Use the resource ID if you want the replica and master's resource groups to be different. |
-
-In the CLI example below, the replica is created in the same region as the master.
-
-```azurecli-interactive
-az postgres server replica create --name mydemoserver-replica --source-server mydemoserver --resource-group myresourcegroup
-```
-
-To create a cross-region read replica, use the `--location` parameter. The CLI example below creates the replica in West US.
-
-```azurecli-interactive
-az postgres server replica create --name mydemoserver-replica --source-server mydemoserver --resource-group myresourcegroup --location westus
-```
-
-> [!NOTE]
-> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
-
-If you haven't set the `azure.replication_support` parameter to **REPLICA** on a General Purpose or Memory Optimized primary server and restarted the server, you receive an error. Complete those two steps before you create a replica.
-
-> [!IMPORTANT]
-> Review the [considerations section of the Read Replica overview](concepts-read-replicas.md#considerations).
->
-> Before a primary server setting is updated to a new value, update the replica setting to an equal or greater value. This action helps the replica keep up with any changes made to the master.
-
-### List replicas
-You can view the list of replicas of a primary server by using [az postgres server replica list](/cli/azure/postgres/server/replica#az-postgres-server-replica-list) command.
-
-```azurecli-interactive
-az postgres server replica list --server-name mydemoserver --resource-group myresourcegroup
-```
-
-### Stop replication to a replica server
-You can stop replication between a primary server and a read replica by using [az postgres server replica stop](/cli/azure/postgres/server/replica#az-postgres-server-replica-stop) command.
-
-After you stop replication between a primary server and a read replica, it can't be undone. The read replica becomes a standalone server that supports both reads and writes. The standalone server can't be made into a replica again.
-
-```azurecli-interactive
-az postgres server replica stop --name mydemoserver-replica --resource-group myresourcegroup
-```
-
-### Delete a primary or replica server
-To delete a primary or replica server, you use the [az postgres server delete](/cli/azure/postgres/server#az-postgres-server-delete) command.
-
-When you delete a primary server, replication to all read replicas is stopped. The read replicas become standalone servers that now support both reads and writes.
-
-```azurecli-interactive
-az postgres server delete --name myserver --resource-group myresourcegroup
-```
-
-## REST API
-You can create and manage read replicas using the [Azure REST API](/rest/api/azure/).
-
-### Prepare the primary server
-
-1. Check the primary server's `azure.replication_support` value. It should be at least REPLICA for read replicas to work.
-
- ```http
- GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/servers/{masterServerName}/configurations/azure.replication_support?api-version=2017-12-01
- ```
-
-2. If `azure.replication_support` is not at least REPLICA, set it.
-
- ```http
- PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/servers/{masterServerName}/configurations/azure.replication_support?api-version=2017-12-01
- ```
-
- ```JSON
- {
- "properties": {
- "value": "replica"
- }
- }
- ```
-
-3. [Restart the server](/rest/api/postgresql/singleserver/servers/restart) to apply the change.
-
- ```http
- POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/servers/{masterServerName}/restart?api-version=2017-12-01
- ```
-
-### Create a read replica
-You can create a read replica by using the [create API](/rest/api/postgresql/singleserver/servers/create):
-
-```http
-PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/servers/{replicaName}?api-version=2017-12-01
-```
-
-```json
-{
- "location": "southeastasia",
- "properties": {
- "createMode": "Replica",
- "sourceServerId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/servers/{masterServerName}"
- }
-}
-```
-
-> [!NOTE]
-> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
-
-If you haven't set the `azure.replication_support` parameter to **REPLICA** on a General Purpose or Memory Optimized primary server and restarted the server, you receive an error. Complete those two steps before you create a replica.
-
-A replica is created by using the same compute and storage settings as the master. After a replica is created, several settings can be changed independently from the primary server: compute generation, vCores, storage, and backup retention period. The pricing tier can also be changed independently, except to or from the Basic tier.
--
-> [!IMPORTANT]
-> Before a primary server setting is updated to a new value, update the replica setting to an equal or greater value. This action helps the replica keep up with any changes made to the master.
-
-### List replicas
-You can view the list of replicas of a primary server using the [replica list API](/rest/api/postgresql/singleserver/replicas/listbyserver):
-
-```http
-GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/servers/{masterServerName}/Replicas?api-version=2017-12-01
-```
-
-### Stop replication to a replica server
-You can stop replication between a primary server and a read replica by using the [update API](/rest/api/postgresql/singleserver/servers/update).
-
-After you stop replication between a primary server and a read replica, it can't be undone. The read replica becomes a standalone server that supports both reads and writes. The standalone server can't be made into a replica again.
-
-```http
-PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/servers/{replicaServerName}?api-version=2017-12-01
-```
-
-```json
-{
- "properties": {
- "replicationRole":"None"
- }
-}
-```
-
-### Delete a primary or replica server
-To delete a primary or replica server, you use the [delete API](/rest/api/postgresql/singleserver/servers/delete):
-
-When you delete a primary server, replication to all read replicas is stopped. The read replicas become standalone servers that now support both reads and writes.
-
-```http
-DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/servers/{serverName}?api-version=2017-12-01
-```
-
-## Next steps
-* Learn more about [read replicas in Azure Database for PostgreSQL](concepts-read-replicas.md).
-* Learn how to [create and manage read replicas in the Azure portal](howto-read-replicas-portal.md).
postgresql Howto Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-read-replicas-portal.md
- Title: Manage read replicas - Azure portal - Azure Database for PostgreSQL - Single Server
-description: Learn how to manage read replicas Azure Database for PostgreSQL - Single Server from the Azure portal.
- Previously updated: 11/05/2020
-# Create and manage read replicas in Azure Database for PostgreSQL - Single Server from the Azure portal
-
-In this article, you learn how to create and manage read replicas in Azure Database for PostgreSQL from the Azure portal. To learn more about read replicas, see the [overview](concepts-read-replicas.md).
--
-## Prerequisites
-An [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md) to be the primary server.
-
-## Azure replication support
-
-[Read replicas](concepts-read-replicas.md) and [logical decoding](concepts-logical.md) both depend on the Postgres write ahead log (WAL) for information. These two features need different levels of logging from Postgres. Logical decoding needs a higher level of logging than read replicas.
-
-To configure the right level of logging, use the Azure replication support parameter. Azure replication support has three setting options:
-
-* **Off** - Puts the least information in the WAL. This setting is not available on most Azure Database for PostgreSQL servers.
-* **Replica** - More verbose than **Off**. This is the minimum level of logging needed for [read replicas](concepts-read-replicas.md) to work. This setting is the default on most servers.
-* **Logical** - More verbose than **Replica**. This is the minimum level of logging for logical decoding to work. Read replicas also work at this setting.
--
-> [!NOTE]
-> When deploying read replicas for persistent, write-intensive primary workloads, the replication lag could continue to grow and might never catch up with the primary. This can also increase storage usage at the primary, because the WAL files aren't deleted until they're received at the replica.
-
-## Prepare the primary server
-
-1. In the Azure portal, select an existing Azure Database for PostgreSQL server to use as the primary server.
-
-2. From the server's menu, select **Replication**. If Azure replication support is set to at least **Replica**, you can create read replicas.
-
-3. If Azure replication support is not set to at least **Replica**, set it. Select **Save**.
-
- :::image type="content" source="./media/howto-read-replicas-portal/set-replica-save.png" alt-text="Azure Database for PostgreSQL - Replication - Set replica and save":::
-
-4. Restart the server to apply the change by selecting **Yes**.
-
- :::image type="content" source="./media/howto-read-replicas-portal/confirm-restart.png" alt-text="Azure Database for PostgreSQL - Replication - Confirm restart":::
-
-5. You will receive two Azure portal notifications once the operation is complete. There is one notification for updating the server parameter. There is another notification for the server restart that follows immediately.
-
- :::image type="content" source="./media/howto-read-replicas-portal/success-notifications.png" alt-text="Success notifications":::
-
-6. Refresh the Azure portal page to update the Replication toolbar. You can now create read replicas for this server.
-
-
-## Create a read replica
-To create a read replica, follow these steps:
-
-1. Select an existing Azure Database for PostgreSQL server to use as the primary server.
-
-2. On the server sidebar, under **SETTINGS**, select **Replication**.
-
-3. Select **Add Replica**.
-
- :::image type="content" source="./media/howto-read-replicas-portal/add-replica.png" alt-text="Add a replica":::
-
-4. Enter a name for the read replica.
-
- :::image type="content" source="./media/howto-read-replicas-portal/name-replica.png" alt-text="Name the replica":::
-
-5. Select a location for the replica. The default location is the same as the primary server's.
-
- :::image type="content" source="./media/howto-read-replicas-portal/location-replica.png" alt-text="Select a location":::
-
- > [!NOTE]
- > To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
-
-6. Select **OK** to confirm the creation of the replica.
-
-After the read replica is created, it can be viewed from the **Replication** window:
-
-
-
-> [!IMPORTANT]
-> Review the [considerations section of the Read Replica overview](concepts-read-replicas.md#considerations).
->
-> Before a primary server setting is updated to a new value, update the replica setting to an equal or greater value. This action helps the replica keep up with any changes made to the master.
-
-## Stop replication
-You can stop replication between a primary server and a read replica.
-
-> [!IMPORTANT]
-> After you stop replication between a primary server and a read replica, it can't be undone. The read replica becomes a standalone server that supports both reads and writes. The standalone server can't be made into a replica again.
-
-To stop replication between a primary server and a read replica from the Azure portal, follow these steps:
-
-1. In the Azure portal, select your primary Azure Database for PostgreSQL server.
-
-2. On the server menu, under **SETTINGS**, select **Replication**.
-
-3. Select the replica server for which to stop replication.
-
- :::image type="content" source="./media/howto-read-replicas-portal/select-replica.png" alt-text="Select the replica":::
-
-4. Select **Stop replication**.
-
- :::image type="content" source="./media/howto-read-replicas-portal/select-stop-replication.png" alt-text="Select stop replication":::
-
-5. Select **OK** to stop replication.
-
- :::image type="content" source="./media/howto-read-replicas-portal/confirm-stop-replication.png" alt-text="Confirm to stop replication":::
-
-
-## Delete a primary server
-To delete a primary server, you use the same steps as to delete a standalone Azure Database for PostgreSQL server.
-
-> [!IMPORTANT]
-> When you delete a primary server, replication to all read replicas is stopped. The read replicas become standalone servers that now support both reads and writes.
-
-To delete a server from the Azure portal, follow these steps:
-
-1. In the Azure portal, select your primary Azure Database for PostgreSQL server.
-
-2. Open the **Overview** page for the server. Select **Delete**.
-
- :::image type="content" source="./media/howto-read-replicas-portal/delete-server.png" alt-text="On the server Overview page, select to delete the primary server":::
-
-3. Enter the name of the primary server to delete. Select **Delete** to confirm deletion of the primary server.
-
- :::image type="content" source="./media/howto-read-replicas-portal/confirm-delete.png" alt-text="Confirm to delete the primary server":::
-
-
-## Delete a replica
-You can delete a read replica similar to how you delete a primary server.
-- In the Azure portal, open the **Overview** page for the read replica. Select **Delete**.
- :::image type="content" source="./media/howto-read-replicas-portal/delete-replica.png" alt-text="On the replica Overview page, select to delete the replica":::
-
-You can also delete the read replica from the **Replication** window by following these steps:
-
-1. In the Azure portal, select your primary Azure Database for PostgreSQL server.
-
-2. On the server menu, under **SETTINGS**, select **Replication**.
-
-3. Select the read replica to delete.
-
- :::image type="content" source="./media/howto-read-replicas-portal/select-replica.png" alt-text="Select the replica to delete":::
-
-4. Select **Delete replica**.
-
- :::image type="content" source="./media/howto-read-replicas-portal/select-delete-replica.png" alt-text="Select delete replica":::
-
-5. Enter the name of the replica to delete. Select **Delete** to confirm deletion of the replica.
-
- :::image type="content" source="./media/howto-read-replicas-portal/confirm-delete-replica.png" alt-text="Confirm to delete te replica":::
-
-
-## Monitor a replica
-Two metrics are available to monitor read replicas.
-
-### Max Lag Across Replicas metric
-The **Max Lag Across Replicas** metric shows the lag in bytes between the primary server and the most-lagging replica.
-
-1. In the Azure portal, select the primary Azure Database for PostgreSQL server.
-
-2. Select **Metrics**. In the **Metrics** window, select **Max Lag Across Replicas**.
-
- :::image type="content" source="./media/howto-read-replicas-portal/select-max-lag.png" alt-text="Monitor the max lag across replicas":::
-
-3. For your **Aggregation**, select **Max**.
--
-### Replica Lag metric
-The **Replica Lag** metric shows the time since the last replayed transaction on a replica. If there are no transactions occurring on your master, the metric reflects this time lag.
-
-1. In the Azure portal, select the Azure Database for PostgreSQL read replica.
-
-2. Select **Metrics**. In the **Metrics** window, select **Replica Lag**.
-
- :::image type="content" source="./media/howto-read-replicas-portal/select-replica-lag.png" alt-text="Monitor the replica lag":::
-
-3. For your **Aggregation**, select **Max**.
-
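-Likewise, the replica-side lag can be queried with the Azure CLI. A hedged sketch, assuming the metric identifier `pg_replica_log_delay_in_seconds` (the name behind **Replica Lag**):
-
-```azurecli-interactive
-# Query the replay lag in seconds on the read replica
-az monitor metrics list \
-    --resource "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.DBforPostgreSQL/servers/mydemoreplicaserver" \
-    --metric pg_replica_log_delay_in_seconds \
-    --aggregation Maximum
-```
-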
-## Next steps
-* Learn more about [read replicas in Azure Database for PostgreSQL](concepts-read-replicas.md).
-* Learn how to [create and manage read replicas in the Azure CLI and REST API](howto-read-replicas-cli.md).
postgresql Howto Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-read-replicas-powershell.md
- Title: Manage read replicas - Azure PowerShell - Azure Database for PostgreSQL
-description: Learn how to set up and manage read replicas in Azure Database for PostgreSQL using PowerShell.
- Previously updated : 06/08/2020
-# How to create and manage read replicas in Azure Database for PostgreSQL using PowerShell
-
-In this article, you learn how to create and manage read replicas in the Azure Database for PostgreSQL
-service using PowerShell. To learn more about read replicas, see the
-[overview](concepts-read-replicas.md).
-
-## Azure PowerShell
-
-You can create and manage read replicas using PowerShell.
-
-## Prerequisites
-
-To complete this how-to guide, you need:
--- The [Az PowerShell module](/powershell/azure/install-az-ps) installed
- locally or [Azure Cloud Shell](https://shell.azure.com/) in the browser
-- An [Azure Database for PostgreSQL server](quickstart-create-postgresql-server-database-using-azure-powershell.md)
-
-> [!IMPORTANT]
-> While the Az.PostgreSql PowerShell module is in preview, you must install it separately from the Az
-> PowerShell module using the following command: `Install-Module -Name Az.PostgreSql -AllowPrerelease`.
-> Once the Az.PostgreSql PowerShell module is generally available, it becomes part of future Az
-> PowerShell module releases and available natively from within Azure Cloud Shell.
-
-If you choose to use PowerShell locally, connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
--
-> [!IMPORTANT]
-> The read replica feature is only available for Azure Database for PostgreSQL servers in the General
-> Purpose or Memory Optimized pricing tiers. Ensure the primary server is in one of these pricing
-> tiers.
-
-### Create a read replica
-
-A read replica server can be created using the following command:
-
-```azurepowershell-interactive
-Get-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
- New-AzPostgreSqlReplica -Name mydemoreplicaserver -ResourceGroupName myresourcegroup
-```
-
-The `New-AzPostgreSqlReplica` command requires the following parameters:
-
-| Setting | Example value | Description  |
-| --- | --- | --- |
-| ResourceGroupName |  myresourcegroup |  The resource group where the replica server is created.  |
-| Name | mydemoreplicaserver | The name of the new replica server that is created. |
-
-To create a cross-region read replica, use the **Location** parameter. The following example creates
-a replica in the **West US** region.
-
-```azurepowershell-interactive
-Get-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
- New-AzPostgreSqlReplica -Name mydemoreplicaserver -ResourceGroupName myresourcegroup -Location westus
-```
-
-To learn more about which regions you can create a replica in, visit the
-[read replica concepts article](concepts-read-replicas.md).
-
-By default, read replicas are created with the same server configuration as the primary unless the
-**Sku** parameter is specified.
-
-> [!NOTE]
-> It is recommended that the replica server's configuration be kept at values equal to or greater
-> than the primary's to ensure the replica is able to keep up with the primary.
-
-### List replicas for a primary server
-
-To view all replicas for a given primary server, run the following command:
-
-```azurepowershell-interactive
-Get-AzPostgreSQLReplica -ResourceGroupName myresourcegroup -ServerName mydemoserver
-```
-
-The `Get-AzPostgreSQLReplica` command requires the following parameters:
-
-| Setting | Example value | Description  |
-| --- | --- | --- |
-| ResourceGroupName |  myresourcegroup |  The resource group that the replica servers belong to.  |
-| ServerName | mydemoserver | The name or ID of the primary server. |
-
-### Stop a replica server
-
-Stopping a read replica server promotes it to an independent server. To stop replication, run the `Update-AzPostgreSqlServer` cmdlet and set the ReplicationRole value to `None`.
-
-```azurepowershell-interactive
-Update-AzPostgreSqlServer -Name mydemoreplicaserver -ResourceGroupName myresourcegroup -ReplicationRole None
-```
-
-### Delete a replica server
-
-Deleting a read replica server can be done by running the `Remove-AzPostgreSqlServer` cmdlet.
-
-```azurepowershell-interactive
-Remove-AzPostgreSqlServer -Name mydemoreplicaserver -ResourceGroupName myresourcegroup
-```
-
-### Delete a primary server
-
-> [!IMPORTANT]
-> Deleting a primary server stops replication to all replica servers and deletes the primary server
-> itself. Replica servers become standalone servers that now support both reads and writes.
-
-To delete a primary server, you can run the `Remove-AzPostgreSqlServer` cmdlet.
-
-```azurepowershell-interactive
-Remove-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Restart Azure Database for PostgreSQL server using PowerShell](howto-restart-server-powershell.md)
postgresql Howto Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-restart-server-cli.md
- Title: Restart server - Azure CLI - Azure Database for PostgreSQL - Single Server
-description: This article describes how you can restart an Azure Database for PostgreSQL - Single Server using the Azure CLI
- Previously updated : 5/6/2019
-# Restart Azure Database for PostgreSQL - Single Server using the Azure CLI
-This topic describes how you can restart an Azure Database for PostgreSQL server. You may need to restart your server for maintenance reasons, which causes a short outage as the server performs the operation.
-
-The server restart will be blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores.
-
-> [!NOTE]
-> The time required to complete a restart depends on the PostgreSQL recovery process. To decrease the restart time, we recommend you minimize the amount of activity occurring on the server prior to the restart. You may also want to increase the checkpoint frequency and tune checkpoint-related parameter values, including `max_wal_size`. It is also recommended to run the `CHECKPOINT` command prior to restarting the server.
-
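-For example, a checkpoint can be issued from any PostgreSQL client, such as `psql`, before initiating the restart:
-
-```sql
--- Flush all dirty buffers to disk so post-restart recovery has less WAL to replay
-CHECKPOINT;
-```
-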
-## Prerequisites
-To complete this how-to guide:
-- Create an [Azure Database for PostgreSQL server](quickstart-create-server-up-azure-cli.md).
-- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-
-## Restart the server
-
-Restart the server with the following command:
-
-```azurecli-interactive
-az postgres server restart --name mydemoserver --resource-group myresourcegroup
-```
-
-## Next steps
-
-Learn about [how to set parameters in Azure Database for PostgreSQL](howto-configure-server-parameters-using-cli.md)
postgresql Howto Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-restart-server-portal.md
- Title: Restart server - Azure portal - Azure Database for PostgreSQL - Single Server
-description: This article describes how you can restart an Azure Database for PostgreSQL - Single Server using the Azure portal.
- Previously updated : 12/20/2020
-# Restart Azure Database for PostgreSQL - Single Server using the Azure portal
-This topic describes how you can restart an Azure Database for PostgreSQL server. You may need to restart your server for maintenance reasons, which causes a short outage as the server performs the operation.
-
-The server restart will be blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores.
-
-> [!NOTE]
-> The time required to complete a restart depends on the PostgreSQL recovery process. To decrease the restart time, we recommend you minimize the amount of activity occurring on the server prior to the restart. You may also want to increase the checkpoint frequency and tune checkpoint-related parameter values, including `max_wal_size`. It is also recommended to run the `CHECKPOINT` command prior to restarting the server to enable a faster recovery time; if the `CHECKPOINT` command is not performed before the restart, recovery may take longer.
-
-## Prerequisites
-To complete this how-to guide, you need:
-- An [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md)
-
-## Perform server restart
-
-The following steps restart the PostgreSQL server:
-
-1. In the [Azure portal](https://portal.azure.com/), select your Azure Database for PostgreSQL server.
-
-2. In the toolbar of the server's **Overview** page, click **Restart**.
-
- :::image type="content" source="./media/howto-restart-server-portal/2-server.png" alt-text="Azure Database for PostgreSQL - Overview - Restart button":::
-
-3. Click **Yes** to confirm restarting the server.
-
- :::image type="content" source="./media/howto-restart-server-portal/3-restart-confirm.png" alt-text="Azure Database for PostgreSQL - Restart confirm":::
-
-4. Observe that the server status changes to "Restarting".
-
- :::image type="content" source="./media/howto-restart-server-portal/4-restarting-status.png" alt-text="Azure Database for PostgreSQL - Restart status":::
-
-5. Confirm server restart is successful.
-
- :::image type="content" source="./media/howto-restart-server-portal/5-restart-success.png" alt-text="Azure Database for PostgreSQL - Restart success":::
-
-## Next steps
-
-Learn about [how to set parameters in Azure Database for PostgreSQL](howto-configure-server-parameters-using-portal.md)
postgresql Howto Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-restart-server-powershell.md
- Title: Restart server - Azure PowerShell - Azure Database for PostgreSQL
-description: This article describes how you can restart an Azure Database for PostgreSQL server using PowerShell.
- Previously updated : 06/08/2020
-# Restart Azure Database for PostgreSQL server using PowerShell
-
-This topic describes how you can restart an Azure Database for PostgreSQL server. You may need to restart
-your server for maintenance reasons, which causes a short outage during the operation.
-
-The server restart is blocked if the service is busy. For example, the service may be processing a
-previously requested operation such as scaling vCores.
-
-> [!NOTE]
-> The time required to complete a restart depends on the PostgreSQL recovery process. To decrease the restart time, we recommend you minimize the amount of activity occurring on the server prior to the restart. You may also want to increase the checkpoint frequency and tune checkpoint-related parameter values, including `max_wal_size`. It is also recommended to run the `CHECKPOINT` command prior to restarting the server.
-
-## Prerequisites
-
-To complete this how-to guide, you need:
--- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
- [Azure Cloud Shell](https://shell.azure.com/) in the browser
-- An [Azure Database for PostgreSQL server](quickstart-create-postgresql-server-database-using-azure-powershell.md)
-
-> [!IMPORTANT]
-> While the Az.PostgreSql PowerShell module is in preview, you must install it separately from the Az
-> PowerShell module using the following command: `Install-Module -Name Az.PostgreSql -AllowPrerelease`.
-> Once the Az.PostgreSql PowerShell module is generally available, it becomes part of future Az
-> PowerShell module releases and available natively from within Azure Cloud Shell.
-
-If you choose to use PowerShell locally, connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
--
-## Restart the server
-
-Restart the server with the following command:
-
-```azurepowershell-interactive
-Restart-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Create an Azure Database for PostgreSQL server using PowerShell](quickstart-create-postgresql-server-database-using-azure-powershell.md)
postgresql Howto Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-restore-dropped-server.md
- Title: Restore a dropped Azure Database for PostgreSQL server
-description: This article describes how to restore a dropped server in Azure Database for PostgreSQL using the Azure portal.
- Previously updated : 04/26/2021
-# Restore a dropped Azure Database for PostgreSQL server
-
-When a server is dropped, the database server backup is retained for up to five days in the service. The database backup can be accessed and restored only from the Azure subscription where the server originally resided. Follow the steps below to recover a dropped PostgreSQL server resource within five days of deletion. These steps work only if the backup for the server is still available and has not been deleted from the system.
-
-## Prerequisites
-To restore a dropped Azure Database for PostgreSQL server, you need the following:
-- The Azure subscription name hosting the original server
-- The location where the server was created
-
-## Steps to restore
-
-1. Browse to the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_ActivityLog/ActivityLogBlade). Select the **Azure Monitor** service, then select **Activity Log**.
-
-2. In the Activity Log, select **Add filter** as shown, and set the following filters:
-
- - **Subscription** = Your Subscription hosting the deleted server
- - **Resource Type** = Azure Database for PostgreSQL servers (Microsoft.DBforPostgreSQL/servers)
- - **Operation** = Delete PostgreSQL Server (Microsoft.DBforPostgreSQL/servers/delete)
-
- ![Activity log filtered for delete PostgreSQL server operation](./media/howto-restore-dropped-server/activity-log-azure.png)
-
-3. Select the **Delete PostgreSQL Server** event, then select the **JSON tab**. Copy the `resourceId` and `submissionTimestamp` attributes in JSON output. The resourceId is in the following format: `/subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/TargetResourceGroup/providers/Microsoft.DBforPostgreSQL/servers/deletedserver`.
--
- 1. Browse to the PostgreSQL [Create Server REST API Page](/rest/api/postgresql/singleserver/servers/create) and select the **Try It** tab highlighted in green. Sign in with your Azure account.
-
- 2. Provide the **resourceGroupName**, **serverName** (deleted server name), **subscriptionId** properties, based on the resourceId attribute JSON value captured in the preceding step 3. The api-version property is pre-populated and can be left as-is, as shown in the following image.
-
- ![Create server using REST API](./media/howto-restore-dropped-server/create-server-from-rest-api-azure.png)
-
- 3. Scroll down to the **Request Body** section and paste the following, replacing the "Dropped Server Location" (for example, CentralUS or EastUS), "submissionTimestamp", and "resourceId" values. For "restorePointInTime", specify a value of "submissionTimestamp" minus **15 minutes** to ensure the command does not error out.
-
- ```json
- {
- "location": "Dropped Server Location",
- "properties":
- {
- "restorePointInTime": "submissionTimestamp - 15 minutes",
- "createMode": "PointInTimeRestore",
- "sourceServerId": "resourceId"
- }
- }
- ```
-
- For example, if the current time is 2020-11-02T23:59:59.0000000Z, we recommend a restore point in time at least 15 minutes earlier, such as 2020-11-02T23:44:59.0000000Z. See the following example, and ensure that you change the three parameters (location, restorePointInTime, sourceServerId) to match your restore requirements.
-
- ```json
- {
- "location": "EastUS",
- "properties":
- {
- "restorePointInTime": "2020-11-02T23:44:59.0000000Z",
- "createMode": "PointInTimeRestore",
- "sourceServerId": "/subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/SourceResourceGroup/providers/Microsoft.DBforPostgreSQL/servers/sourceserver"
- }
- }
- ```
-
- > [!Important]
- > There is a time limit of five days after the server was dropped. After five days, an error is expected since the backup file cannot be found.
-
-4. If you see Response Code 201 or 202, the restore request is successfully submitted.
-
- The server creation can take time depending on the database size and compute resources provisioned on the original server. The restore status can be monitored from the Activity log by filtering for:
- - **Subscription** = Your Subscription
- - **Resource Type** = Azure Database for PostgreSQL servers (Microsoft.DBforPostgreSQL/servers)
- - **Operation** = Update PostgreSQL Server Create
-
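-If you prefer scripting over the portal's **Try It** experience, the same restore request can be submitted with the Azure CLI. A hedged sketch using `az rest`, assuming the resourceId and timestamp captured from the activity log and the single server API version 2017-12-01:
-
-```azurecli-interactive
-# Submit the PUT request that recreates the dropped server from its backup
-az rest --method put \
-    --url "https://management.azure.com/subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/TargetResourceGroup/providers/Microsoft.DBforPostgreSQL/servers/deletedserver?api-version=2017-12-01" \
-    --body '{"location": "EastUS", "properties": {"restorePointInTime": "2020-11-02T23:44:59.0000000Z", "createMode": "PointInTimeRestore", "sourceServerId": "/subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/TargetResourceGroup/providers/Microsoft.DBforPostgreSQL/servers/deletedserver"}}'
-```
-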
-## Next steps
-- If you are trying to restore a server within five days and still receive an error after accurately following the steps discussed earlier, open a support incident for assistance. If you are trying to restore a dropped server after five days, an error is expected since the backup file cannot be found. Do not open a support ticket in this scenario; the support team cannot provide any assistance if the backup is deleted from the system.
-- To prevent accidental deletion of servers, we highly recommend using [Resource Locks](https://techcommunity.microsoft.com/t5/azure-database-for-PostgreSQL/preventing-the-disaster-of-accidental-deletion-for-your-PostgreSQL/ba-p/825222).
postgresql Howto Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-restore-server-cli.md
- Title: Backup and restore - Azure CLI - Azure Database for PostgreSQL - Single Server
-description: Learn how to set backup configurations and restore a server in Azure Database for PostgreSQL - Single Server by using the Azure CLI.
- Previously updated : 10/25/2019
-# How to back up and restore a server in Azure Database for PostgreSQL - Single Server using the Azure CLI
-
-Azure Database for PostgreSQL servers are backed up periodically to enable restore features. Using this feature, you can restore the server and all its databases to an earlier point-in-time, on a new server.
-
-## Prerequisites
-To complete this how-to guide:
-- You need an [Azure Database for PostgreSQL server and database](quickstart-create-server-database-azure-cli.md).
-
-## Set backup configuration
-
-You make the choice between configuring your server for either locally redundant backups or geographically redundant backups at server creation.
-
-> [!NOTE]
-> After a server is created, the kind of redundancy it has, geographically redundant vs locally redundant, can't be switched.
->
-
-While creating a server via the `az postgres server create` command, the `--geo-redundant-backup` parameter decides your backup redundancy option. If `Enabled`, geo-redundant backups are taken. If `Disabled`, locally redundant backups are taken.
-
-The backup retention period is set by the parameter `--backup-retention`.
-
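-For example, a minimal sketch of creating a server with geo-redundant backups and a 10-day retention (the admin credentials, location, and SKU below are illustrative):
-
-```azurecli-interactive
-az postgres server create --name mydemoserver --resource-group myresourcegroup \
-    --location westus --admin-user myadmin --admin-password <server_admin_password> \
-    --sku-name GP_Gen5_2 --geo-redundant-backup Enabled --backup-retention 10
-```
-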
-For more information about setting these values during create, see the [Azure Database for PostgreSQL server CLI Quickstart](quickstart-create-server-database-azure-cli.md).
-
-The backup retention period of a server can be changed as follows:
-
-```azurecli-interactive
-az postgres server update --name mydemoserver --resource-group myresourcegroup --backup-retention 10
-```
-
-The preceding example changes the backup retention period of mydemoserver to 10 days.
-
-The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. Point-in-time restore is described further in the next section.
-
-## Server point-in-time restore
-You can restore the server to a previous point in time. The restored data is copied to a new server, and the existing server is left as is. For example, if a table is accidentally dropped at noon today, you can restore to the time just before noon. Then, you can retrieve the missing table and data from the restored copy of the server.
-
-To restore the server, use the Azure CLI [az postgres server restore](/cli/azure/postgres/server) command.
-
-### Run the restore command
-
-To restore the server, at the Azure CLI command prompt, enter the following command:
-
-```azurecli-interactive
-az postgres server restore --resource-group myresourcegroup --name mydemoserver-restored --restore-point-in-time 2018-03-13T13:59:00Z --source-server mydemoserver
-```
-
-The `az postgres server restore` command requires the following parameters:
-
-| Setting | Suggested value | Description  |
-| --- | --- | --- |
-| resource-group |  myresourcegroup |  The resource group where the source server exists.  |
-| name | mydemoserver-restored | The name of the new server that is created by the restore command. |
-| restore-point-in-time | 2018-03-13T13:59:00Z | Select a point in time to restore to. This date and time must be within the source server's backup retention period. Use the ISO8601 date and time format. For example, you can use your own local time zone, such as `2018-03-13T05:59:00-08:00`. You can also use the UTC Zulu format, for example, `2018-03-13T13:59:00Z`. |
-| source-server | mydemoserver | The name or ID of the source server to restore from. |
-
-When you restore a server to an earlier point in time, a new server is created. The original server and its databases from the specified point in time are copied to the new server.
-
-The location and pricing tier values for the restored server remain the same as the original server.
-
-After the restore process finishes, locate the new server and verify that the data is restored as expected. The new server has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's **Overview** page.
-
-The new server created during a restore does not have the firewall rules or VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server.
-
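-For example, a firewall rule can be recreated on the restored server with the Azure CLI (the rule name and IP address below are illustrative):
-
-```azurecli-interactive
-az postgres server firewall-rule create --resource-group myresourcegroup \
-    --server-name mydemoserver-restored --name AllowMyClientIP \
-    --start-ip-address 203.0.113.5 --end-ip-address 203.0.113.5
-```
-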
-## Geo restore
-If you configured your server for geographically redundant backups, a new server can be created from the backup of that existing server. This new server can be created in any region where Azure Database for PostgreSQL is available.
-
-To create a server using a geo redundant backup, use the Azure CLI `az postgres server georestore` command.
-
-> [!NOTE]
-> When a server is first created it may not be immediately available for geo restore. It may take a few hours for the necessary metadata to be populated.
->
-
-To geo restore the server, at the Azure CLI command prompt, enter the following command:
-
-```azurecli-interactive
-az postgres server georestore --resource-group myresourcegroup --name mydemoserver-georestored --source-server mydemoserver --location eastus --sku-name GP_Gen4_8
-```
-This command creates a new server called *mydemoserver-georestored* in East US that belongs to *myresourcegroup*. It is a General Purpose, Gen 4 server with 8 vCores. The server is created from the geo-redundant backup of *mydemoserver*, which is also in the resource group *myresourcegroup*.
-
-If you want to create the new server in a different resource group from the existing server, then in the `--source-server` parameter you would qualify the server name as in the following example:
-
-```azurecli-interactive
-az postgres server georestore --resource-group newresourcegroup --name mydemoserver-georestored --source-server "/subscriptions/$<subscription ID>/resourceGroups/$<resource group ID>/providers/Microsoft.DBforPostgreSQL/servers/mydemoserver" --location eastus --sku-name GP_Gen4_8
-
-```
-
-The `az postgres server georestore` command requires the following parameters:
-
-| Setting | Suggested value | Description  |
-| --- | --- | --- |
-|resource-group| myresourcegroup | The name of the resource group the new server will belong to.|
-|name | mydemoserver-georestored | The name of the new server. |
-|source-server | mydemoserver | The name of the existing server whose geo redundant backups are used. |
-|location | eastus | The location of the new server. |
-|sku-name| GP_Gen4_8 | This parameter sets the pricing tier, compute generation, and number of vCores of the new server. GP_Gen4_8 maps to a General Purpose, Gen 4 server with 8 vCores.|
-
-When you create a new server by geo restore, it inherits the same storage size and pricing tier as the source server. These values cannot be changed during creation. After the new server is created, its storage size can be scaled up.
-
-After the restore process finishes, locate the new server and verify that the data is restored as expected. The new server has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's **Overview** page.
-
-The new server created during a restore does not have the firewall rules or VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server.
-
-## Next steps
-- Learn more about the service's [backups](concepts-backup.md)
-- Learn about [replicas](concepts-read-replicas.md)
-- Learn more about [business continuity](concepts-business-continuity.md) options
postgresql Howto Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-restore-server-portal.md
- Title: Backup and restore - Azure portal - Azure Database for PostgreSQL - Single Server
-description: This article describes how to restore a server in Azure Database for PostgreSQL - Single Server using the Azure portal.
- Previously updated : 6/30/2020
-# How to back up and restore a server in Azure Database for PostgreSQL - Single Server using the Azure portal
-
-## Backup happens automatically
-Azure Database for PostgreSQL servers are backed up periodically to enable restore features. Using this feature, you can restore the server and all its databases to an earlier point-in-time, on a new server.
-
-## Set backup configuration
-
-You make the choice between configuring your server for either locally redundant backups or geographically redundant backups at server creation, in the **Pricing Tier** window.
-
-> [!NOTE]
-> After a server is created, the kind of redundancy it has, geographically redundant vs locally redundant, can't be switched.
->
-
-While creating a server through the Azure portal, the **Pricing Tier** window is where you select either **Locally Redundant** or **Geographically Redundant** backups for your server. This window is also where you select the **Backup Retention Period** - how long (in number of days) you want the server backups stored for.
-
- :::image type="content" source="./media/howto-restore-server-portal/pricing-tier.png" alt-text="Pricing Tier - Choose Backup Redundancy":::
-
-For more information about setting these values during create, see the [Azure Database for PostgreSQL server quickstart](quickstart-create-server-database-portal.md).
-
-The backup retention period of a server can be changed through the following steps:
-1. Sign into the [Azure portal](https://portal.azure.com/).
-2. Select your Azure Database for PostgreSQL server. This action opens the **Overview** page.
-3. Select **Pricing Tier** from the menu, under **SETTINGS**. Using the slider you can change the **Backup Retention Period** to your preference between 7 and 35 days.
-In the screenshot below it has been increased to 34 days.
-
-4. Click **OK** to confirm the change.
-
-The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. Point-in-time restore is described further in the following section.
-
-## Point-in-time restore
-Azure Database for PostgreSQL allows you to restore the server back to a point-in-time, creating a new copy of the server. You can use this new server to recover your data, or have your client applications point to this new server.
-
-For example, if a table was accidentally dropped at noon today, you could restore to the time just before noon and retrieve the missing table and data from that new copy of the server. Point-in-time restore is at the server level, not at the database level.
-
-The following steps restore the sample server to a point-in-time:
-1. In the Azure portal, select your Azure Database for PostgreSQL server.
-
-2. In the toolbar of the server's **Overview** page, select **Restore**.
-
- :::image type="content" source="./media/howto-restore-server-portal/2-server.png" alt-text="Azure Database for PostgreSQL - Overview - Restore button":::
-
-3. Fill out the Restore form with the required information:
-
- :::image type="content" source="./media/howto-restore-server-portal/3-restore.png" alt-text="Azure Database for PostgreSQL - Restore information":::
- - **Restore point**: Select the point-in-time you want to restore to.
- - **Target server**: Provide a name for the new server.
- - **Location**: You cannot select the region. By default, it is the same as the source server.
- - **Pricing tier**: You cannot change these parameters when doing a point-in-time restore. They are the same as the source server.
-
-4. Click **OK** to restore the server to the selected point-in-time.
-
-5. Once the restore finishes, locate the new server that is created to verify the data was restored as expected.
-
-The new server created by point-in-time restore has the same server admin login name and password that was valid for the existing server at the point-in-time chosen. You can change the password from the new server's **Overview** page.
-
-The new server created during a restore does not have the firewall rules or VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server.
-
-If your source PostgreSQL server is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption-postgresql.md) for additional considerations.
-
-## Geo restore
-
-If you configured your server for geographically redundant backups, a new server can be created from the backup of that existing server. This new server can be created in any region where Azure Database for PostgreSQL is available.
-
-1. Select the **Create a resource** button (+) in the upper-left corner of the portal. Select **Databases** > **Azure Database for PostgreSQL**.
-
- :::image type="content" source="./media/howto-restore-server-portal/1-navigate-to-postgres.png" alt-text="Navigate to Azure Database for PostgreSQL.":::
-
-2. Select the **Single server** deployment option.
-
- :::image type="content" source="./media/howto-restore-server-portal/2-select-deployment-option.png" alt-text="Select Azure Database for PostgreSQL - Single server deployment option.":::
-
-3. Provide the subscription, resource group, and name of the new server.
-
-4. Select **Backup** as the **Data source**. This action loads a dropdown that provides a list of servers that have geo redundant backups enabled.
-
- :::image type="content" source="./media/howto-restore-server-portal/4-geo-restore.png" alt-text="Select data source.":::
-
- > [!NOTE]
- > When a server is first created it may not be immediately available for geo restore. It may take a few hours for the necessary metadata to be populated.
- >
-
-5. Select the **Backup** dropdown.
-
- :::image type="content" source="./media/howto-restore-server-portal/5-geo-restore-backup.png" alt-text="Select backup dropdown.":::
-
-6. Select the source server to restore from.
-
- :::image type="content" source="./media/howto-restore-server-portal/6-select-backup.png" alt-text="Select backup.":::
-
-7. The server will default to values for number of **vCores**, **Backup Retention Period**, **Backup Redundancy Option**, **Engine version**, and **Admin credentials**. Select **Continue**.
-
- :::image type="content" source="./media/howto-restore-server-portal/7-accept-backup.png" alt-text="Continue with backup.":::
-
-8. Fill out the rest of the form with your preferences. You can select any **Location**.
-
- After selecting the location, you can select **Configure server** to update the **Compute Generation** (if available in the region you have chosen), number of **vCores**, **Backup Retention Period**, and **Backup Redundancy Option**. Changing **Pricing Tier** (Basic, General Purpose, or Memory Optimized) or **Storage** size during restore is not supported.
-
- :::image type="content" source="./media/howto-restore-server-portal/8-create.png" alt-text="Fill form.":::
-
-9. Select **Review + create** to review your selections.
-
-10. Select **Create** to provision the server. This operation may take a few minutes.
-
-The new server created by geo restore has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's **Overview** page.
-
-The new server created during a restore does not have the firewall rules or VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server.
-
-If your source PostgreSQL server is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption-postgresql.md) for additional considerations.
--
-## Next steps
-- Learn more about the service's [backups](concepts-backup.md).
-- Learn more about [business continuity](concepts-business-continuity.md) options.
postgresql Howto Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-restore-server-powershell.md
- Title: Backup and restore - Azure PowerShell - Azure Database for PostgreSQL
-description: Learn how to backup and restore a server in Azure Database for PostgreSQL by using Azure PowerShell.
- Previously updated : 06/08/2020
-# How to back up and restore an Azure Database for PostgreSQL server using PowerShell
-
-Azure Database for PostgreSQL servers are backed up periodically to enable restore features. Using
-this feature you may restore the server and all its databases to an earlier point-in-time, on a new
-server.
-
-## Prerequisites
-
-To complete this how-to guide, you need:
--- The [Az PowerShell module](/powershell/azure/install-az-ps) installed
- locally or [Azure Cloud Shell](https://shell.azure.com/) in the browser
-- An [Azure Database for PostgreSQL server](quickstart-create-postgresql-server-database-using-azure-powershell.md)
-
-> [!IMPORTANT]
-> While the Az.PostgreSql PowerShell module is in preview, you must install it separately from the Az
-> PowerShell module using the following command: `Install-Module -Name Az.PostgreSql -AllowPrerelease`.
-> Once the Az.PostgreSql PowerShell module is generally available, it becomes part of future Az
-> PowerShell module releases and available natively from within Azure Cloud Shell.
-
-If you choose to use PowerShell locally, connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
--
-## Set backup configuration
-
-At server creation, you make the choice between configuring your server for either locally redundant
-or geographically redundant backups.
-
-> [!NOTE]
-> After a server is created, the kind of redundancy it has, geographically redundant vs locally
-> redundant, can't be changed.
-
-While creating a server via the `New-AzPostgreSqlServer` command, the **GeoRedundantBackup**
-parameter decides your backup redundancy option. If **Enabled**, geo-redundant backups are taken. If
-**Disabled**, locally redundant backups are taken.
-
-The backup retention period is set by the **BackupRetentionDay** parameter.
-
-For more information about setting these values during server creation, see
-[Create an Azure Database for PostgreSQL server using PowerShell](quickstart-create-postgresql-server-database-using-azure-powershell.md).
-
-The backup retention period of a server can be changed as follows:
-
-```azurepowershell-interactive
-Update-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -BackupRetentionDay 10
-```
-
-The preceding example changes the backup retention period of mydemoserver to 10 days.
-
-The backup retention period governs how far back a point-in-time restore can be retrieved, since
-it's based on available backups. Point-in-time restore is described further in the next section.
-
-## Server point-in-time restore
-
-You can restore the server to a previous point-in-time. The restored data is copied to a new server,
-and the existing server is left unchanged. For example, if a table is accidentally dropped, you can
-restore to the time just before the drop occurred. Then, you can retrieve the missing table and data from
-the restored copy of the server.
-
-To restore the server, use the `Restore-AzPostgreSqlServer` PowerShell cmdlet.
-
-### Run the restore command
-
-To restore the server, run the following example from PowerShell.
-
-```azurepowershell-interactive
-$restorePointInTime = (Get-Date).AddMinutes(-10)
-Get-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
- Restore-AzPostgreSqlServer -Name mydemoserver-restored -ResourceGroupName myresourcegroup -RestorePointInTime $restorePointInTime -UsePointInTimeRestore
-```
-
-The **PointInTimeRestore** parameter set of the `Restore-AzPostgreSqlServer` cmdlet requires the
-following parameters:
-
-| Setting | Suggested value | Description  |
-| --- | --- | --- |
-| ResourceGroupName |  myresourcegroup |  The resource group where the source server exists.  |
-| Name | mydemoserver-restored | The name of the new server that is created by the restore command. |
-| RestorePointInTime | 2020-03-13T13:59:00Z | Select a point in time to restore. This date and time must be within the source server's backup retention period. Use the ISO8601 date and time format. For example, you can use your own local time zone, such as **2020-03-13T05:59:00-08:00**. You can also use the UTC Zulu format, for example, **2020-03-13T13:59:00Z**. |
-| UsePointInTimeRestore | `<SwitchParameter>` | Use point-in-time mode to restore. |
-
-When you restore a server to an earlier point-in-time, a new server is created. The original server
-and its databases from the specified point-in-time are copied to the new server.
-
-The location and pricing tier values for the restored server remain the same as the original server.
-
-After the restore process finishes, locate the new server and verify that the data is restored as
-expected. The new server has the same server admin login name and password that was valid for the
-existing server at the time the restore was started. The password can be changed from the new
-server's **Overview** page.
-
-The new server created during a restore does not have the VNet service endpoints that existed on the
-original server. These rules must be set up separately for the new server. Firewall rules from the
-original server are restored.
-
-## Geo restore
-
-If you configured your server for geographically redundant backups, a new server can be created from
-the backup of the existing server. This new server can be created in any region where Azure Database
-for PostgreSQL is available.
-
-To create a server using a geo redundant backup, use the `Restore-AzPostgreSqlServer` command with the
-**UseGeoRestore** parameter.
-
-> [!NOTE]
-> When a server is first created it may not be immediately available for geo restore. It may take a
-> few hours for the necessary metadata to be populated.
-
-To geo restore the server, run the following example from PowerShell:
-
-```azurepowershell-interactive
-Get-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
- Restore-AzPostgreSqlServer -Name mydemoserver-georestored -ResourceGroupName myresourcegroup -Location eastus -Sku GP_Gen5_8 -UseGeoRestore
-```
-
-This example creates a new server called **mydemoserver-georestored** in the East US region that
-belongs to **myresourcegroup**. It is a General Purpose, Gen 5 server with 8 vCores. The server is
-created from the geo-redundant backup of **mydemoserver**, also in the resource group
-**myresourcegroup**.
-
-To create the new server in a different resource group from the existing server, specify the new
-resource group name using the **ResourceGroupName** parameter as shown in the following example:
-
-```azurepowershell-interactive
-Get-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
- Restore-AzPostgreSqlServer -Name mydemoserver-georestored -ResourceGroupName newresourcegroup -Location eastus -Sku GP_Gen5_8 -UseGeoRestore
-```
-
-The **GeoRestore** parameter set of the `Restore-AzPostgreSqlServer` cmdlet requires the following
-parameters:
-
-| Setting | Suggested value | Description  |
-| --- | --- | --- |
-|ResourceGroupName | myresourcegroup | The name of the resource group the new server belongs to.|
-|Name | mydemoserver-georestored | The name of the new server. |
-|Location | eastus | The location of the new server. |
-|UseGeoRestore | `<SwitchParameter>` | Use geo mode to restore. |
-
-When you create a new server using geo restore, it inherits the same storage size and pricing tier as
-the source server unless the **Sku** parameter is specified.
-
-After the restore process finishes, locate the new server and verify that the data is restored as
-expected. The new server has the same server admin login name and password that was valid for the
-existing server at the time the restore was started. The password can be changed from the new
-server's **Overview** page.
-
-The new server created during a restore does not have the VNet service endpoints that existed on the
-original server. These rules must be set up separately for this new server. Firewall rules from the
-original server are restored.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [How to generate an Azure Database for PostgreSQL connection string with PowerShell](howto-connection-string-powershell.md)
postgresql Howto Tls Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-tls-configurations.md
- Title: TLS configuration - Azure portal - Azure Database for PostgreSQL - Single server
-description: Learn how to set TLS configuration using Azure portal for your Azure Database for PostgreSQL Single server
- Previously updated : 06/02/2020
-# Configuring TLS settings in Azure Database for PostgreSQL - Single Server using Azure portal
-
-This article describes how you can configure an Azure Database for PostgreSQL server to enforce a minimum TLS version for connections and to reject all connections that use a lower TLS version, thereby enhancing network security.
-
-You can set the minimum TLS version required to connect to your Azure Database for PostgreSQL server. For example, setting the minimum TLS version to 1.0 means your server allows connections from clients using TLS 1.0, 1.1, and 1.2+. Setting the minimum TLS version to 1.2+ means you only allow connections from clients using TLS 1.2, and all connections with TLS 1.0 and TLS 1.1 are rejected.
-
-## Prerequisites
-
-To complete this how-to guide, you need:
-
-* An [Azure Database for PostgreSQL](quickstart-create-server-database-portal.md)
-
-## Set TLS configurations for Azure Database for PostgreSQL - Single server
-
-Follow these steps to set PostgreSQL minimum TLS version:
-
-1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for PostgreSQL.
-
-1. On the Azure Database for PostgreSQL - Single server page, under **Settings**, click **Connection security** to open the connection security configuration page.
-
-1. In **Minimum TLS version**, select **1.2** to deny connections with TLS version less than TLS 1.2 for your PostgreSQL Single server.
-
- :::image type="content" source="./media/howto-tls-configurations/setting-tls-value.png" alt-text="Azure Database for PostgreSQL Single - server TLS configuration":::
-
-1. Click **Save** to save the changes.
-
-1. A notification will confirm that the connection security setting was successfully saved.
-
- :::image type="content" source="./media/howto-tls-configurations/setting-tls-value-success.png" alt-text="Azure Database for PostgreSQL - Single server TLS configuration success":::
-
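-The same setting can also be applied from the Azure CLI. A hedged sketch, assuming the `--minimal-tls-version` parameter of `az postgres server update`:
-
-```azurecli-interactive
-# Require TLS 1.2 or later for all client connections
-az postgres server update --name mydemoserver --resource-group myresourcegroup --minimal-tls-version TLS1_2
-```
-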
-## Next steps
-
-Learn about [how to create alerts on metrics](howto-alert-on-metric.md)
postgresql Howto Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-troubleshoot-common-connection-issues.md
- Title: Troubleshoot connections - Azure Database for PostgreSQL - Single Server
-description: Learn how to troubleshoot connection issues to Azure Database for PostgreSQL - Single Server.
- Previously updated : 5/6/2019
-# Troubleshoot connection issues to Azure Database for PostgreSQL - Single Server
-
-Connection problems may be caused by various things, including:
-
-* Firewall settings
-* Connection time-out
-* Incorrect login information
-* Maximum limit reached on some Azure Database for PostgreSQL resources
-* Issues with the infrastructure of the service
-* Maintenance being performed in the service
-* The compute allocation of the server is changed by scaling the number of vCores or moving to a different service tier
-
-Generally, connection issues to Azure Database for PostgreSQL can be classified as follows:
-
-* Transient errors (short-lived or intermittent)
-* Persistent or non-transient errors (errors that regularly recur)
-
-## Troubleshoot transient errors
-
-Transient errors occur when maintenance is performed, the system encounters an error with the hardware or software, or you change the vCores or service tier of your server. The Azure Database for PostgreSQL service has built-in high availability and is designed to mitigate these types of problems automatically. However, your application loses its connection to the server for a short period of time, typically less than 60 seconds. Some events can occasionally take longer to mitigate, such as when a large transaction causes a long-running recovery.
-
-### Steps to resolve transient connectivity issues
-
-1. Check the [Microsoft Azure Service Dashboard](https://azure.microsoft.com/status) for any known outages that occurred during the time in which the errors were reported by the application.
-2. Applications that connect to a cloud service such as Azure Database for PostgreSQL should expect transient errors and implement retry logic to handle these errors instead of surfacing these as application errors to users. Review [Handling of transient connectivity errors for Azure Database for PostgreSQL](concepts-connectivity.md) for best practices and design guidelines for handling transient errors.
-3. As a server approaches its resource limits, errors can appear to be a transient connectivity issue. See [Limitations in Azure Database for PostgreSQL](concepts-limits.md).
-4. If connectivity problems continue, if the duration for which your application encounters the error exceeds 60 seconds, or if you see multiple occurrences of the error in a given day, file an Azure support request by selecting **Get Support** on the [Azure Support](https://azure.microsoft.com/support/options) site.
-
-## Troubleshoot persistent errors
-
-If the application persistently fails to connect to Azure Database for PostgreSQL, it usually indicates an issue with one of the following:
-
-* Server firewall configuration: Make sure that the Azure Database for PostgreSQL server firewall is configured to allow connections from your client, including proxy servers and gateways.
-* Client firewall configuration: The firewall on your client must allow connections to your database server. The IP address and port of the server that you can't connect to must be allowed, as must application names such as PostgreSQL in some firewalls.
-* User error: You might have mistyped connection parameters, such as the server name in the connection string or a missing *\@servername* suffix in the user name.
-* If you see the error _Server isn't configured to allow ipv6 connections_, note that the Basic tier doesn't support VNet service endpoints. You have to remove the Microsoft.Sql endpoint from the subnet that is attempting to connect to the Basic server.
-* If you see the connection error _sslmode value "***" invalid when SSL support is not compiled in_, it means your PostgreSQL client doesn't support SSL. Most probably, the client-side libpq hasn't been compiled with the "--with-openssl" flag. Try connecting with a PostgreSQL client that has SSL support.
-
-### Steps to resolve persistent connectivity issues
-
-1. Set up [firewall rules](howto-manage-firewall-using-portal.md) to allow the client IP address. For temporary testing purposes only, set up a firewall rule using 0.0.0.0 as the starting IP address and using 255.255.255.255 as the ending IP address. This will open the server to all IP addresses. If this resolves your connectivity issue, remove this rule and create a firewall rule for an appropriately limited IP address or address range.
-2. On all firewalls between the client and the internet, make sure that port 5432 is open for outbound connections.
-3. Verify your connection string and other connection settings.
-4. Check the service health in the dashboard. If you think there's a regional outage, see [Overview of business continuity with Azure Database for PostgreSQL](concepts-business-continuity.md) for steps to recover to a new region.
-
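-A quick way to verify the connection string and firewall settings together is to connect with `psql` directly (the server, user, and database names below are illustrative; note the *user@servername* format for the user name):
-
-```bash
-psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin@mydemoserver sslmode=require"
-```
-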
-## Next steps
-
-* [Handling of transient connectivity errors for Azure Database for PostgreSQL](concepts-connectivity.md)
postgresql Overview Postgres Choose Server Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/overview-postgres-choose-server-options.md
- Title: Choose the right PostgreSQL server option in Azure
-description: Provides guidelines for choosing the right PostgreSQL server option for your deployments.
- Previously updated : 12/01/2021
-# Choose the right PostgreSQL server option in Azure
-
-With Azure, your PostgreSQL Server workloads can run in a hosted virtual machine infrastructure as a service (IaaS) or as a hosted platform as a service (PaaS). PaaS has multiple deployment options, each with multiple service tiers. When you choose between IaaS and PaaS, you must decide if you want to manage your database, apply patches, and make backups, or if you want to delegate these operations to Azure.
-
-When making your decision, consider the following three options in PaaS, or alternatively running on Azure VMs (IaaS):
-- [Azure Database for PostgreSQL Single Server](./overview-single-server.md)
-- [Azure Database for PostgreSQL Flexible Server](./flexible-server/overview.md)
-- [Azure Database for PostgreSQL Hyperscale (Citus)](./hyperscale/overview.md)
-
-The **PostgreSQL on Azure VMs** option falls into the industry category of IaaS. With this service, you can run PostgreSQL Server inside a fully managed virtual machine on the Azure cloud platform. All recent versions and editions of PostgreSQL can be installed on an IaaS virtual machine. In the most significant difference from Azure Database for PostgreSQL, PostgreSQL on Azure VMs offers control over the database engine. However, this control comes at the cost of the responsibility to manage the VMs and many database administration (DBA) tasks. These tasks include maintaining and patching database servers, database recovery, and high-availability design.
-
-The main differences between these options are listed in the following table:
-
-| **Attribute** | **Postgres on Azure VMs** | **PostgreSQL as PaaS** |
-| -- | -- | -- |
-| **Availability SLA** |- [Virtual Machine SLA](https://azure.microsoft.com/support/legal/sla/virtual-machines/v1_9/) | - [Single Server, Flexible Server, and Hyperscale (Citus) SLA](https://azure.microsoft.com/support/legal/sla/postgresql/v1_2/)|
-| **OS and PostgreSQL patching** | - Customer managed | - Single Server – Automatic <br> - Flexible Server – Automatic with optional customer managed window <br> - Hyperscale (Citus) – Automatic |
-| **High availability** | - Customers architect, implement, test, and maintain high availability. Capabilities might include clustering, replication etc. | - Single Server: built-in <br> - Flexible Server: built-in <br> - Hyperscale (Citus): built with standby |
-| **Zone Redundancy** | - Azure VMs can be set up to run in different availability zones. For an on-premises solution, customers must create, manage, and maintain their own secondary data center. | - Single Server: No <br> - Flexible Server: Yes <br> - Hyperscale (Citus): No |
-| **Hybrid Scenario** | - Customer managed |- Single Server: Read-replica <br> - Flexible Server: Not available during Preview <br> - Hyperscale (Citus): No |
-| **Backup and Restore** | - Customer Managed | - Single Server: built-in with user configuration for local and geo <br> - Flexible Server: built-in with user configuration on zone-redundant storage <br> - Hyperscale (Citus): built-in |
-| **Monitoring Database Operations** | - Customer Managed | - Single Server, Flexible Server, and Hyperscale (Citus): All offer customers the ability to set alerts on the database operation and act upon reaching thresholds. |
-| **Advanced Threat Protection** | - Customers must build this protection for themselves. |- Single Server: Yes <br> - Flexible Server: Not available during Preview <br> - Hyperscale (Citus): No |
-| **Disaster Recovery** | - Customer Managed | - Single Server: Geo redundant backup and geo read-replica <br> - Flexible Server: Not available during Preview <br> - Hyperscale (Citus): No |
-| **Intelligent Performance** | - Customer Managed | - Single Server: Yes <br> - Flexible Server: Not available during Preview <br> - Hyperscale (Citus): No |
-
-## Total cost of ownership (TCO)
-
-TCO is often the primary consideration that determines the best solution for hosting your databases. This is true whether you're a startup with little cash or a team in an established company that operates under tight budget constraints. This section describes billing and licensing basics in Azure as they apply to Azure Database for PostgreSQL and PostgreSQL on Azure VMs.
-
-## Billing
-
-Azure Database for PostgreSQL is currently available as a service in several tiers with different prices for resources. All resources are billed hourly at a fixed rate. For the latest information on the currently supported service tiers, compute sizes, and storage amounts, see the [pricing page](https://azure.microsoft.com/pricing/details/postgresql/server/). You can dynamically adjust service tiers and compute sizes to match your application's varied throughput needs. You're billed for outgoing Internet traffic at regular [data transfer rates](https://azure.microsoft.com/pricing/details/data-transfers/).
-
-With Azure Database for PostgreSQL, Microsoft automatically configures, patches, and upgrades the database software. These automated actions reduce your administration costs. Also, Azure Database for PostgreSQL has [automated backup](concepts-backup.md) capabilities. These capabilities help you achieve significant cost savings, especially when you have a large number of databases. In contrast, with PostgreSQL on Azure VMs you can choose and run any PostgreSQL version. However, you need to pay for the provisioned VM, the storage cost associated with the data, backups, monitoring data, and log storage, and the costs for the specific PostgreSQL license type used (if any).
-
-Azure Database for PostgreSQL Single Server provides built-in high availability at the zonal-level (within an AZ) for any kind of node-level interruption while still maintaining the [SLA guarantee](https://azure.microsoft.com/support/legal/sla/postgresql/v1_2/) for the service. Flexible Server provides [uptime SLAs](https://azure.microsoft.com/support/legal/sla/postgresql/v1_2/) with and without zone-redundant configuration. However, for database high availability within VMs, you use the high availability options like [Streaming Replication](https://www.postgresql.org/docs/12/warm-standby.html#STREAMING-REPLICATION) that are available on a PostgreSQL database. Using a supported high availability option doesn't provide an additional SLA. But it does let you achieve greater than 99.99% database availability at additional cost and administrative overhead.
-
-For more information on pricing, see the following articles:
- [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/)
- [Virtual machine pricing](https://azure.microsoft.com/pricing/details/virtual-machines/)
- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/)
-## Administration
-
-For many businesses, the decision to transition to a cloud service is as much about offloading complexity of administration as it is about cost.
-
-With IaaS, Microsoft:
- Administers the underlying infrastructure.
- Provides automated patching for underlying hardware and OS.
-With PaaS, Microsoft:
- Administers the underlying infrastructure.
- Provides automated patching for underlying hardware, OS, and database engine.
- Manages high availability of the database.
- Automatically performs backups and replicates all data to provide disaster recovery.
- Encrypts the data at rest and in motion by default.
- Monitors your server and provides features for query performance insights and performance recommendations.
-With Azure Database for PostgreSQL, you can continue to administer your database. But you no longer need to manage the database engine, the operating system, or the hardware. Examples of items you can continue to administer include:
- Databases
- Sign-in
- Index tuning
- Query tuning
- Auditing
- Security
-Additionally, configuring high availability to another data center requires minimal to no configuration or administration.
With PostgreSQL on Azure VMs, you have full control over the operating system and the PostgreSQL server instance configuration. With a VM, you decide when to update or upgrade the operating system and database software and what patches to apply. You also decide when to install any additional software such as an antivirus application. Some automated features are provided to greatly simplify patching, backup, and high availability. You can control the size of the VM, the number of disks, and their storage configurations. For more information, see [Virtual machine and cloud service sizes for Azure](../virtual-machines/sizes.md).
-## Time to move to Azure PostgreSQL Service (PaaS)
- Azure Database for PostgreSQL is the right solution for cloud-designed applications when developer productivity and fast time to market for new solutions are critical. With DBA-like programmatic functionality, the service is suitable for cloud architects and developers because it lowers the need to manage the underlying operating system and database.

- When you want to avoid the time and expense of acquiring new on-premises hardware, PostgreSQL on Azure VMs is the right solution for applications that require granular control and customization of the PostgreSQL engine not supported by the service, or that require access to the underlying OS.
-## Next steps
- See [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
- Get started by creating your first server.
postgresql Overview Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/overview-single-server.md
- Title: Azure Database for PostgreSQL Single Server
-description: Provides an overview of Azure Database for PostgreSQL Single Server.
Previously updated: 11/30/2021
-# Azure Database for PostgreSQL Single Server
-
-[Azure Database for PostgreSQL](./overview.md) powered by the PostgreSQL community edition is available in three deployment modes:
- Single Server
- Flexible Server
- Hyperscale (Citus)
-This article provides an overview and introduction to core concepts of the single server deployment model. To learn about the other deployment modes, see the [flexible server overview](./flexible-server/overview.md) and the [Hyperscale (Citus) overview](hyperscale/overview.md).
-
-## Overview
-
-Single Server is a fully managed database service with minimal requirements for customization of the database. The single server platform is designed to handle most database management functions such as patching, backups, high availability, and security with minimal user configuration and control. The architecture is optimized to provide 99.99% availability in a single availability zone. It supports the community versions of PostgreSQL 9.6, 10, and 11. The service is generally available today in a wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
-
-Single servers are best suited for cloud native applications designed to handle automated patching without the need for granular control over the patching schedule or custom PostgreSQL configuration settings.
-
-## High availability
-
-The single server deployment model is optimized for built-in high availability and elasticity at reduced cost. The architecture separates compute and storage. The database engine runs on a proprietary compute container, while data files reside on Azure storage. The storage maintains three locally redundant synchronous copies of the database files, ensuring data durability.
-
-During planned or unplanned failover events, if the server goes down, the service maintains high availability of the servers using the following automated procedure:
-
-1. A new compute container is provisioned.
-2. The storage with data files is mapped to the new container.
-3. The PostgreSQL database engine is brought online on the new compute container.
-4. The gateway service ensures transparent failover, so no application-side changes are required.
-
- :::image type="content" source="./media/overview/overview-azure-postgres-single-server.png" alt-text="Azure Database for PostgreSQL Single Server":::
-
-The typical failover time ranges from 60 to 120 seconds. The cloud native design of the single server service allows it to support a 99.99% availability SLA while eliminating the cost of a passive hot standby.
-
-Azure's industry leading 99.99% availability service level agreement (SLA), powered by a global network of Microsoft-managed datacenters, helps keep your applications running 24/7.
-
-## Automated patching
-
-The service performs automated patching of the underlying hardware, OS, and database engine. The patching includes security and software updates. For the PostgreSQL engine, minor version upgrades are automatic and included as part of the patching cycle. No user action or configuration is required for patching. The patching frequency is managed by the service based on the criticality of the payload. In general, the service follows a monthly release schedule as part of continuous integration and release. Users can subscribe to the [planned maintenance notification]() to receive a notification of upcoming maintenance 72 hours before the event.
-
-## Automatic backups
-
-The single server service automatically creates server backups and stores them in user-configured locally redundant (LRS) or geo-redundant storage. Backups can be used to restore your server to any point in time within the backup retention period. The default backup retention period is seven days. The retention can be optionally configured up to 35 days. All backups are encrypted using AES 256-bit encryption. See [Backups](./concepts-backup.md) for details.
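
For example, a point-in-time restore always creates a new server from the backups; a sketch with placeholder names and timestamp:

```azurecli-interactive
# Sketch: restore mydemoserver to a new server from a point in time
# within the retention window (names and timestamp are placeholders).
az postgres server restore \
  --resource-group myresourcegroup \
  --name mydemoserver-restored \
  --source-server mydemoserver \
  --restore-point-in-time "2022-05-01T13:10:00Z"
```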
-
-## Adjust performance and scale within seconds
-
-The single server service is available in three SKU tiers: Basic, General Purpose, and Memory Optimized. The Basic tier is best suited for low-cost development and low-concurrency workloads. The General Purpose and Memory Optimized tiers are better suited for production workloads requiring high concurrency, scale, and predictable performance. You can build your first app on a small database for a few dollars a month, and then adjust the scale to meet the needs of your solution. The storage scaling is online and supports storage auto-growth. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you consume. See [Pricing tiers](./concepts-pricing-tiers.md) for details.
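
For instance, scaling compute and storage is a single update operation; the values below are placeholders:

```azurecli-interactive
# Sketch: move to 4 vCores in the General Purpose tier and grow storage to 100 GB.
az postgres server update \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --sku-name GP_Gen5_4 \
  --storage-size 102400
```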
-
-## Enterprise grade security, compliance, and governance
-
-The single server service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at rest. Data, including backups and temporary files created while running queries, are encrypted. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys can be system managed (default) or [customer managed](). The service encrypts data in motion with transport layer security (SSL/TLS) enforced by default. The service supports TLS versions 1.2, 1.1, and 1.0, with the ability to enforce a [minimum TLS version]().
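
As a sketch (assuming the Azure CLI and placeholder names), enforcing a minimum TLS version looks like this:

```azurecli-interactive
# Sketch: keep SSL enforcement on and require TLS 1.2 as the minimum protocol version.
az postgres server update \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --ssl-enforcement Enabled \
  --minimal-tls-version TLS1_2
```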
-
-The service allows private access to the servers using Private Link and provides the Advanced Threat Protection feature. Advanced Threat Protection detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases.
-
-In addition to native authentication, the single server service supports Azure Active Directory authentication. Azure AD authentication is a mechanism for connecting to the PostgreSQL servers using identities defined and managed in Azure AD. With Azure AD authentication, you can manage database user identities and other Azure services in a central location, which simplifies and centralizes access control.
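
For illustration, a client can obtain an Azure AD access token and present it as the password when connecting; the server and user names below are hypothetical:

```azurecli-interactive
# Sketch: fetch an Azure AD token for Azure Database for PostgreSQL
# and use it as the psql password.
export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)
psql "host=mydemoserver.postgres.database.azure.com user=aaduser@mydemoserver dbname=postgres sslmode=require"
```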
-
-[Audit logging]() (in preview) is available to track all database-level activity.
-
-The single server service is compliant with industry-leading certifications such as FedRAMP, HIPAA, and PCI DSS. Visit the [Azure Trust Center]() for information about Azure's platform security.
-
-For more information about Azure Database for PostgreSQL security features, see the [security overview]().
-
-## Monitoring and alerting
-
-The single server service is equipped with built-in performance monitoring and alerting features. All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics. The service allows configuring slow query logs and comes with a differentiated [Query store](./concepts-query-store.md) feature. Query Store simplifies performance troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Using these tools, you can quickly optimize your workloads, and configure your server for best performance. See [Monitoring](./concepts-monitoring.md) for details.
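
As an example of configuring an alert on a metric (resource names are placeholders), you might use the Azure CLI:

```azurecli-interactive
# Sketch: alert when average CPU exceeds 80 percent.
serverId=$(az postgres server show --resource-group myresourcegroup --name mydemoserver --query id --output tsv)
az monitor metrics alert create \
  --name cpu-over-80 \
  --resource-group myresourcegroup \
  --scopes $serverId \
  --condition "avg cpu_percent > 80" \
  --description "Average CPU above 80 percent"
```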
-
-## Migration
-
-The service runs the community version of PostgreSQL. This allows full application compatibility and minimizes the refactoring cost of migrating existing applications developed on the PostgreSQL engine to the single server service. Migration to the single server can be performed using one of the following options:
- **Dump and Restore** – For offline migrations, where users can afford some downtime, dump and restore using community tools like `pg_dump` and `pg_restore` can provide the fastest way to migrate (see the sketch after this list). See [Migrate using dump and restore](./howto-migrate-using-dump-and-restore.md) for details.
- **Azure Database Migration Service** – For seamless and simplified migrations to single server with minimal downtime, Azure Database Migration Service can be used. See [DMS via portal](../dms/tutorial-postgresql-azure-postgresql-online-portal.md) and [DMS via CLI](../dms/tutorial-postgresql-azure-postgresql-online.md).
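
A minimal dump-and-restore sketch, assuming placeholder host and database names and that the target database already exists on the single server:

```bash
# Sketch: dump from the source in custom format, then restore to the single server.
pg_dump -Fc --verbose --host=source-host --username=sourceuser --dbname=mydb --file=mydb.dump
pg_restore --verbose --no-owner \
  --host=mydemoserver.postgres.database.azure.com --port=5432 \
  --username=myadmin@mydemoserver --dbname=mydb mydb.dump
```
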
-## Frequently Asked Questions
-
- **Will Flexible Server replace Single Server, or will Single Server be retired soon?**
-
-We continue to support Single Server and encourage you to adopt Flexible Server, which has richer capabilities such as zone-resilient HA, predictable performance, maximum control, a custom maintenance window, cost optimization controls, and a simplified developer experience suitable for your enterprise workloads. If we decide to retire any service, feature, API, or SKU, you will receive advance notice, including a migration or transition path. Learn more about Microsoft Lifecycle policies [here](/lifecycle/faq/general-lifecycle).
--
-## Contacts
-
-For any questions or suggestions you might have about working with Azure Database for PostgreSQL, send an email to the Azure Database for PostgreSQL Team ([@Ask Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com)). This email address is not a technical support alias.
-
-In addition, consider the following points of contact as appropriate:
- To contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
- To fix an issue with your account, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
- To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/d365community/forum/c5e32b97-ee24-ec11-b6e6-000d3a4f0da0).
-## Next steps
-
-Now that you've read an introduction to Azure Database for PostgreSQL single server deployment mode, you're ready to:
- Create your first server.
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/overview.md
- Title: What is Azure Database for PostgreSQL
-description: Provides an overview of Azure Database for PostgreSQL relational database service in the context of flexible server.
Previously updated: 01/24/2022
-adobe-target: true
--
-# What is Azure Database for PostgreSQL?
-
-Azure Database for PostgreSQL is a relational database service in the Microsoft cloud based on the [PostgreSQL open source relational database](https://www.postgresql.org/). Azure Database for PostgreSQL delivers:
- Built-in high availability.
- Data protection using automatic backups and point-in-time restore for up to 35 days.
- Automated maintenance for underlying hardware, operating system, and database engine to keep the service secure and up to date.
- Predictable performance, using inclusive pay-as-you-go pricing.
- Elastic scaling within seconds.
- Enterprise-grade security and industry-leading compliance to protect sensitive data at rest and in motion.
- Monitoring and automation to simplify management and monitoring for large-scale deployments.
- Industry-leading support experience.
- :::image type="content" source="./media/overview/overview-what-is-azure-postgres.png" alt-text="Azure Database for PostgreSQL":::
-
-These capabilities require almost no administration, and all are provided at no additional cost. They allow you to focus on rapid application development and accelerating your time to market rather than allocating precious time and resources to managing virtual machines and infrastructure. In addition, you can continue to develop your application with the open-source tools and platform of your choice to deliver with the speed and efficiency your business demands, all without having to learn new skills.
-
-## Deployment models
-
-Azure Database for PostgreSQL powered by the PostgreSQL community edition is available in three deployment modes:
- Single Server
- Flexible Server
- Hyperscale (Citus)
-### Azure Database for PostgreSQL - Single Server
-
-Azure Database for PostgreSQL Single Server is a fully managed database service with minimal requirements for customization of the database. The single server platform is designed to handle most database management functions such as patching, backups, high availability, and security with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability in a single availability zone. It supports the community versions of PostgreSQL 9.5, 9.6, 10, and 11. The service is generally available today in a wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
-
-The Single Server deployment option offers three pricing tiers: Basic, General Purpose, and Memory Optimized. Each tier offers different resource capabilities to support your database workloads. You can build your first app on a small database for a few dollars a month, and then adjust the scale to meet the needs of your solution. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you need, and only when you need them. See [Pricing tiers](./concepts-pricing-tiers.md) for details.
-
-Single servers are best suited for cloud native applications designed to handle automated patching without the need for granular control over the patching schedule or custom PostgreSQL configuration settings.
-
-For a detailed overview of the single server deployment mode, see the [single server overview](./overview-single-server.md).
-
-### Azure Database for PostgreSQL - Flexible Server
-
-Azure Database for PostgreSQL Flexible Server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. In general, the service provides more flexibility and customization based on user requirements. The flexible server architecture allows users to opt for high availability within a single availability zone or across multiple availability zones. Flexible Server provides better cost optimization controls with the ability to stop/start the server and a burstable compute tier, ideal for workloads that don't need full compute capacity continuously. The service currently supports the community versions of PostgreSQL 11, 12, and 13, with plans to add newer versions soon. The service is generally available today in a wide variety of Azure regions.
-
Flexible servers are best suited for:

- Application development requiring better control and customizations
- Cost optimization controls with the ability to stop/start the server
- Zone-redundant high availability
- Managed maintenance windows
-
-For a detailed overview of flexible server deployment mode, see [flexible server overview](./flexible-server/overview.md).
-
-### Azure Database for PostgreSQL – Hyperscale (Citus)
-
-The Hyperscale (Citus) option horizontally scales queries across multiple machines using sharding. Its query engine parallelizes incoming SQL queries across these servers for faster responses on large datasets. It serves applications that require greater scale and performance, generally workloads that are approaching, or already exceed, 100 GB of data.
-
-The Hyperscale (Citus) deployment option delivers:
- Horizontal scaling across multiple machines using sharding
- Query parallelization across these servers for faster responses on large datasets
- Excellent support for multi-tenant applications, real-time operational analytics, and high-throughput transactional workloads
-
-Applications built for PostgreSQL can run distributed queries on Hyperscale (Citus) with standard [connection libraries](./concepts-connection-libraries.md) and minimal changes.
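
For example (connection values are placeholders), sharding a table by a tenant column uses the standard Citus function:

```bash
# Sketch: distribute the events table across worker nodes by tenant_id.
psql "host=mygroup-c.postgres.database.azure.com port=5432 dbname=citus user=citus sslmode=require" \
  -c "SELECT create_distributed_table('events', 'tenant_id');"
```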
-
-## Next steps
-
-Learn more about the three deployment modes for Azure Database for PostgreSQL and choose the right options based on your needs.
- [Single Server](./overview-single-server.md)
- [Flexible Server](./flexible-server/overview.md)
- [Hyperscale (Citus)](hyperscale/overview.md)
postgresql Partners Migration Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/partners-migration-postgresql.md
-
Title: Azure Database for PostgreSQL migration partners
-description: Lists of third-party migration partners with solutions that support Azure Database for PostgreSQL.
Previously updated: 08/07/2018
-# Azure Database for PostgreSQL migration partners
-To broadly support your Azure Database for PostgreSQL solution, you can choose from a wide variety of industry-leading partners and tools. This article highlights Microsoft partners with migration solutions that support Azure Database for PostgreSQL.
-
-## Migration partners
-| Partner | Description | Links | Videos |
-| | | | |
-| ![SNP Technologies][1] |**SNP Technologies**<br>SNP Technologies is a cloud-only service provider, building secure and reliable solutions for businesses of the future. The company believes in generating real value for your business. From thought to execution, SNP Technologies shares a common purpose with clients, to turn their investment into an advantage.|[Website][snp_website]<br>[Twitter][snp_twitter]<br>[Contact][snp_contact] | |
-| ![Pragmatic Works][3] |**Pragmatic Works**<br>Pragmatic Works is a training and consulting company with deep expertise in data management and performance, Business Intelligence, Big Data, Power BI, and Azure. They focus on data optimization and improving the efficiency of SQL Server and cloud management.|[Website][pragmatic-works_website]<br>[Twitter][pragmatic-works_twitter]<br>[YouTube][pragmatic-works_youtube]<br>[Contact][pragmatic-works_contact] | |
-| ![Infosys][4] |**Infosys**<br>Infosys is a global leader in the latest digital services and consulting. With over three decades of experience managing the systems of global enterprises, Infosys expertly steers clients through their digital journey by enabling organizations with an AI-powered core. Doing so helps prioritize the execution of change. Infosys also provides businesses with agile digital at scale to deliver unprecedented levels of performance and customer delight.|[Website][infosys_website]<br>[Twitter][infosys_twitter]<br>[YouTube][infosys_youtube]<br>[Contact][infosys_contact] | |
-| ![credativ][5] |**credativ**<br>credativ is an independent consulting and services company. Since 1999, they have offered comprehensive services and technical support for the implementation and operation of Open Source software in business applications. Their comprehensive range of services includes strategic consulting, sound technical advice, qualified training, and personalized support up to 24 hours per day for all your IT needs.|[Marketplace][credativ_marketplace]<br>[Website][credativ_website]<br>[Twitter][credative_twitter]<br>[YouTube][credativ_youtube]<br>[Contact][credativ_contact] | |
-| ![Pactera][6] |**Pactera**<br>Pactera is a global company offering consulting, digital, technology, and operations services to the world's leading enterprises. From their roots in engineering to the latest in digital transformation, they give customers a competitive edge. Their proven methodologies and tools ensure your data is secure, authentic, and accurate.|[Website][pactera_website]<br>[Twitter][pactera_twitter]<br>[Contact][pactera_contact] | |
-
-## Next steps
-To learn more about some of Microsoft's other partners, see the [Microsoft Partner site](https://partner.microsoft.com/).
-
-<!--Image references-->
-[1]: ./media/partner-migration-postgresql/SNP_Logo.png
-[2]: ./media/partner-migration-postgresql/DB_Best_logo.png
-[3]: ./media/partner-migration-postgresql/PW-logo-text-CMYK1000.png
-[4]: ./media/partner-migration-postgresql/InfosysLogo.png
-[5]: ./media/partner-migration-postgresql/credativ_round_logo2.png
-[6]: ./media/partner-migration-postgresql/Pactera_logo_small2.png
-
-<!--Website links -->
-[snp_website]:https://www.snp.com//
-[pragmatic-works_website]:https://pragmaticworks.com//
-[infosys_website]:https://www.infosys.com/
-[credativ_website]:https://www.credativ.com/postgresql-competence-center/microsoft-azure
-[pactera_website]:https://en.pactera.com/
-
-<!--Get Started Links-->
-<!--Datasheet Links-->
-<!--Marketplace Links -->
-[credativ_marketplace]:https://azuremarketplace.microsoft.com/de-de/marketplace/apps?search=credativ&page=1
-
-<!--Press links-->
-
-<!--YouTube links-->
-[pragmatic-works_youtube]:https://www.youtube.com/user/PragmaticWorks
-[infosys_youtube]:https://www.youtube.com/user/Infosys
-[credativ_youtube]:https://www.youtube.com/channel/UCnSnr6_TcILUQQvAwlYFc8A
-
-<!--Twitter links-->
-[snp_twitter]:https://twitter.com/snptechnologies
-[pragmatic-works_twitter]:https://twitter.com/PragmaticWorks
-[infosys_twitter]:https://twitter.com/infosys
-[credative_twitter]:https://twitter.com/credativ
-[pactera_twitter]:https://twitter.com/Pactera?s=17
-
-<!--Contact links-->
-[snp_contact]:mailto:sachin@snp.com
-[pragmatic-works_contact]:mailto:marketing@pragmaticworks.com
-[infosys_contact]:https://www.infosys.com/contact/
-[credativ_contact]:mailto:info@credativ.com
-[pactera_contact]:mailto:shushi.gaur@pactera.com
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/policy-reference.md
- Title: Built-in policy definitions for Azure Database for PostgreSQL
-description: Lists Azure Policy built-in policy definitions for Azure Database for PostgreSQL. These built-in policy definitions provide common approaches to managing your Azure resources.
Previously updated: 05/11/2022
-# Azure Policy built-in definitions for Azure Database for PostgreSQL
-
-This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy
-definitions for Azure Database for PostgreSQL. For additional Azure Policy built-ins for other
-services, see
-[Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
-
-The name of each built-in policy definition links to the policy definition in the Azure portal. Use
-the link in the **Version** column to view the source on the
-[Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
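
If you prefer the command line, a sketch for enumerating these built-ins with the Azure CLI:

```azurecli-interactive
# Sketch: list built-in policy definitions whose display name mentions PostgreSQL.
az policy definition list \
  --query "[?policyType=='BuiltIn' && contains(displayName, 'PostgreSQL')].displayName" \
  --output table
```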
-
-## Azure Database for PostgreSQL
--
-## Next steps
- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
- Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).
- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
postgresql Quickstart Create Postgresql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/quickstart-create-postgresql-server-database-using-arm-template.md
- Title: 'Quickstart: Create an Azure DB for PostgreSQL - ARM template'
-description: In this quickstart, learn how to create an Azure Database for PostgreSQL single server by using an Azure Resource Manager template.
Previously updated: 02/11/2021
-# Quickstart: Use an ARM template to create an Azure Database for PostgreSQL - single server
-
-Azure Database for PostgreSQL is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. In this quickstart, you use an Azure Resource Manager template (ARM template) to create an Azure Database for PostgreSQL - single server in the Azure portal, PowerShell, or Azure CLI.
--
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-
-[:::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.dbforpostgresql%2Fmanaged-postgresql-with-vnet%2Fazuredeploy.json)
-
-## Prerequisites
-
-# [Portal](#tab/azure-portal)
-
-An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
-
-# [PowerShell](#tab/PowerShell)
-
-* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
-* If you want to run the code locally, [Azure PowerShell](/powershell/azure/).
-
-# [CLI](#tab/CLI)
-
-* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
-* If you want to run the code locally, [Azure CLI](/cli/azure/).
---
-## Review the template
-
-You create an Azure Database for PostgreSQL server with a configured set of compute and storage resources. To learn more, see [Pricing tiers in Azure Database for PostgreSQL - Single Server](concepts-pricing-tiers.md). You create the server within an [Azure resource group](../azure-resource-manager/management/overview.md).
-
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/managed-postgresql-with-vnet/).
--
-The template defines five Azure resources:
-
-* [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
-* [**Microsoft.Network/virtualNetworks/subnets**](/azure/templates/microsoft.network/virtualnetworks/subnets)
-* [**Microsoft.DBforPostgreSQL/servers**](/azure/templates/microsoft.dbforpostgresql/servers)
-* [**Microsoft.DBforPostgreSQL/servers/virtualNetworkRules**](/azure/templates/microsoft.dbforpostgresql/servers/virtualnetworkrules)
-* [**Microsoft.DBforPostgreSQL/servers/firewallRules**](/azure/templates/microsoft.dbforpostgresql/servers/firewallrules)
-
-More Azure Database for PostgreSQL template samples can be found in [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Dbforpostgresql&pageNumber=1&sort=Popular).
-
-## Deploy the template
-
-# [Portal](#tab/azure-portal)
-
-Select the following link to deploy the Azure Database for PostgreSQL server template in the Azure portal:
-
-[:::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.dbforpostgresql%2Fmanaged-postgresql-with-vnet%2Fazuredeploy.json)
-
-On the **Deploy Azure Database for PostgreSQL with VNet** page:
-
-1. For **Resource group**, select **Create new**, enter a name for the new resource group, and select **OK**.
-
-2. If you created a new resource group, select a **Location** for the resource group and the new server.
-
-3. Enter a **Server Name**, **Administrator Login**, and **Administrator Login Password**.
-
- :::image type="content" source="./media/quickstart-create-postgresql-server-database-using-arm-template/deploy-azure-database-for-postgresql-with-vnet.png" alt-text="Deploy Azure Database for PostgreSQL with VNet window, Azure quickstart template, Azure portal":::
-
-4. Change the other default settings if you want:
-
- * **Subscription**: the Azure subscription you want to use for the server.
- * **Sku Capacity**: the vCore capacity, which can be *2* (the default), *4*, *8*, *16*, *32*, or *64*.
- * **Sku Name**: the SKU tier prefix, SKU family, and SKU capacity, joined by underscores, such as *B_Gen5_1*, *GP_Gen5_2* (the default), or *MO_Gen5_32*.
- * **Sku Size MB**: the storage size, in megabytes, of the Azure Database for PostgreSQL server (default *51200*).
- * **Sku Tier**: the deployment tier, such as *Basic*, *GeneralPurpose* (the default), or *MemoryOptimized*.
- * **Sku Family**: *Gen4* or *Gen5* (the default), which indicates hardware generation for server deployment.
- * **PostgreSQL Version**: the version of PostgreSQL server to deploy, such as *9.5*, *9.6*, *10*, or *11* (the default).
- * **Backup Retention Days**: the desired period for geo-redundant backup retention, in days (default *7*).
- * **Geo Redundant Backup**: *Enabled* or *Disabled* (the default), depending on geo-disaster recovery (Geo-DR) requirements.
- * **Virtual Network Name**: the name of the virtual network (default *azure_postgresql_vnet*).
- * **Subnet Name**: the name of the subnet (default *azure_postgresql_subnet*).
- * **Virtual Network Rule Name**: the name of the virtual network rule allowing the subnet (default *AllowSubnet*).
- * **Vnet Address Prefix**: the address prefix for the virtual network (default *10.0.0.0/16*).
- * **Subnet Prefix**: the address prefix for the subnet (default *10.0.0.0/16*).
-
-5. Read the terms and conditions, and then select **I agree to the terms and conditions stated above**.
-
-6. Select **Purchase**.
-
-# [PowerShell](#tab/PowerShell)
-
-Use the following interactive code to create a new Azure Database for PostgreSQL server using the template. The code prompts you for the new server name, the name and location of a new resource group, and an administrator account name and password.
-
-To run the code in Azure Cloud Shell, select **Try it** at the upper corner of any code block.
-
-```azurepowershell-interactive
-$serverName = Read-Host -Prompt "Enter a name for the new Azure Database for PostgreSQL server"
-$resourceGroupName = Read-Host -Prompt "Enter a name for the new resource group where the server will exist"
-$location = Read-Host -Prompt "Enter an Azure region (for example, centralus) for the resource group"
-$adminUser = Read-Host -Prompt "Enter the Azure Database for PostgreSQL server's administrator account name"
-$adminPassword = Read-Host -Prompt "Enter the administrator password" -AsSecureString
-
-New-AzResourceGroup -Name $resourceGroupName -Location $location # Use this command when you need to create a new resource group for your deployment
-New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName `
- -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.dbforpostgresql/managed-postgresql-with-vnet/azuredeploy.json `
- -serverName $serverName `
- -administratorLogin $adminUser `
- -administratorLoginPassword $adminPassword
-
-Read-Host -Prompt "Press [ENTER] to continue: "
-```
-
-# [CLI](#tab/CLI)
-
-Use the following interactive code to create a new Azure Database for PostgreSQL server using the template. The code prompts you for the new server name, the name and location of a new resource group, and an administrator account name and password.
-
-To run the code in Azure Cloud Shell, select **Try it** at the upper corner of any code block.
-
-```azurecli-interactive
-read -p "Enter a name for the new Azure Database for PostgreSQL server:" serverName &&
-read -p "Enter a name for the new resource group where the server will exist:" resourceGroupName &&
-read -p "Enter an Azure region (for example, centralus) for the resource group:" location &&
-read -p "Enter the Azure Database for PostgreSQL server's administrator account name:" adminUser &&
-read -p "Enter the administrator password:" adminPassword &&
-params='serverName='$serverName' administratorLogin='$adminUser' administratorLoginPassword='$adminPassword &&
-az group create --name $resourceGroupName --location $location &&
-az deployment group create --resource-group $resourceGroupName --parameters $params --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.dbforpostgresql/managed-postgresql-with-vnet/azuredeploy.json &&
-read -p "Press [ENTER] to continue: "
-```
---
-## Review deployed resources
-
-# [Portal](#tab/azure-portal)
-
-Follow these steps to see an overview of your new Azure Database for PostgreSQL server:
-
-1. In the [Azure portal](https://portal.azure.com), search for and select **Azure Database for PostgreSQL servers**.
-
-2. In the database list, select your new server. The **Overview** page for your new Azure Database for PostgreSQL server appears.
-
-# [PowerShell](#tab/PowerShell)
-
-Run the following interactive code to view details about your Azure Database for PostgreSQL server. You'll have to enter the name of the new server.
-
-```azurepowershell-interactive
-$serverName = Read-Host -Prompt "Enter the name of your Azure Database for PostgreSQL server"
-Get-AzResource -ResourceType "Microsoft.DbForPostgreSQL/servers" -Name $serverName | ft
-Read-Host -Prompt "Press [ENTER] to continue: "
-```
-
-# [CLI](#tab/CLI)
-
-Run the following interactive code to view details about your Azure Database for PostgreSQL server. You'll have to enter the name and the resource group of the new server.
-
-```azurecli-interactive
-read -p "Enter your Azure Database for PostgreSQL server name: " serverName &&
-read -p "Enter the resource group where the Azure Database for PostgreSQL server exists: " resourcegroupName &&
-az resource show --resource-group $resourcegroupName --name $serverName --resource-type "Microsoft.DbForPostgreSQL/servers" &&
-read -p "Press [ENTER] to continue: "
-```
---
-## Exporting ARM template from the portal
-You can [export an ARM template](../azure-resource-manager/templates/export-template-portal.md) from the Azure portal. There are two ways to export a template:
- [Export from resource group or resource](../azure-resource-manager/templates/export-template-portal.md#export-template-from-a-resource). This option generates a new template from existing resources. The exported template is a "snapshot" of the current state of the resource group. You can export an entire resource group or specific resources within that resource group.
- [Export before deployment or from history](../azure-resource-manager/templates/export-template-portal.md#download-template-before-deployment). This option retrieves an exact copy of a template used for deployment.
-When exporting the template, in the `"properties": { }` section of the PostgreSQL server resource, you'll notice that `administratorLogin` and `administratorLoginPassword` are not included, for security reasons. You **must** add these parameters to your template before deploying it, or the deployment will fail.
-
```json
-"resources": [
- {
- "type": "Microsoft.DBforPostgreSQL/servers",
- "apiVersion": "2017-12-01",
- "name": "[parameters('servers_name')]",
- "location": "southcentralus",
- "sku": {
- "name": "B_Gen5_1",
- "tier": "Basic",
- "family": "Gen5",
- "capacity": 1
- },
- "properties": {
- "administratorLogin": "[parameters('administratorLogin')]",
- "administratorLoginPassword": "[parameters('administratorLoginPassword')]",
-```
---
-## Clean up resources
-
-When it's no longer needed, delete the resource group, which deletes the resources in the resource group.
-
-# [Portal](#tab/azure-portal)
-
-1. In the [Azure portal](https://portal.azure.com), search for and select **Resource groups**.
-
-2. In the resource group list, choose the name of your resource group.
-
-3. In the **Overview** page of your resource group, select **Delete resource group**.
-
-4. In the confirmation dialog box, type the name of your resource group, and then select **Delete**.
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
-Remove-AzResourceGroup -Name $resourceGroupName
-Read-Host -Prompt "Press [ENTER] to continue: "
-```
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-read -p "Enter the Resource Group name: " resourceGroupName &&
-az group delete --name $resourceGroupName &&
-read -p "Press [ENTER] to continue: "
-```
---
-## Next steps
-
-For a step-by-step tutorial that guides you through the process of creating a template, see:
-
-> [!div class="nextstepaction"]
-> [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
postgresql Quickstart Create Postgresql Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/quickstart-create-postgresql-server-database-using-azure-powershell.md
- Title: 'Quickstart: Create server - Azure PowerShell - Azure Database for PostgreSQL - Single Server'
-description: Quickstart guide to create an Azure Database for PostgreSQL - Single Server using Azure PowerShell.
Previously updated: 06/08/2020
-# Quickstart: Create an Azure Database for PostgreSQL - Single Server using PowerShell
-
-This quickstart describes how to use PowerShell to create an Azure Database for PostgreSQL server in an
-Azure resource group. You can use PowerShell to create and manage Azure resources interactively or
-in scripts.
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
-
-If you choose to use PowerShell locally, this article requires that you install the Az PowerShell
-module and connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount)
-cmdlet. For more information about installing the Az PowerShell module, see
-[Install Azure PowerShell](/powershell/azure/install-az-ps).
-
-> [!IMPORTANT]
-> While the Az.PostgreSql PowerShell module is in preview, you must install it separately from the Az
-> PowerShell module using the following command: `Install-Module -Name Az.PostgreSql -AllowPrerelease`.
-> Once the Az.PostgreSql PowerShell module is generally available, it becomes part of future Az
-> PowerShell module releases and available natively from within Azure Cloud Shell.
-
-If this is your first time using the Azure Database for PostgreSQL service, you must register the
-**Microsoft.DBforPostgreSQL** resource provider.
-
-```azurepowershell-interactive
-Register-AzResourceProvider -ProviderNamespace Microsoft.DBforPostgreSQL
-```
--
-If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources
-should be billed. Select a specific subscription ID using the
-[Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
-
-```azurepowershell-interactive
-Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
-```
-
-## Create a resource group
-
-Create an
-[Azure resource group](../azure-resource-manager/management/overview.md)
-using the
-[New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup)
-cmdlet. A resource group is a logical container in which Azure resources are deployed and managed as
-a group.
-
-The following example creates a resource group named **myresourcegroup** in the **West US** region.
-
-```azurepowershell-interactive
-New-AzResourceGroup -Name myresourcegroup -Location westus
-```
-
-## Create an Azure Database for PostgreSQL server
-
-Create an Azure Database for PostgreSQL server with the `New-AzPostgreSqlServer` cmdlet. A server
-can manage multiple databases. Typically, a separate database is used for each project or for each
-user.
-
-The following table contains a list of commonly used parameters and sample values for the
-`New-AzPostgreSqlServer` cmdlet.
-
-| **Setting** | **Sample value** | **Description** |
-| -- | - | - |
-| Name | mydemoserver | Choose a globally unique name in Azure that identifies your Azure Database for PostgreSQL server. The server name can only contain letters, numbers, and the hyphen (-) character. Any uppercase characters that are specified are automatically converted to lowercase during the creation process. It must contain from 3 to 63 characters. |
-| ResourceGroupName | myresourcegroup | Provide the name of the Azure resource group. |
-| Sku | GP_Gen5_2 | The name of the SKU. Follows the convention **pricing-tier\_compute-generation\_vCores** in shorthand. For more information about the Sku parameter, see the information following this table. |
-| BackupRetentionDay | 7 | How long a backup should be retained. Unit is days. Range is 7-35. |
-| GeoRedundantBackup | Enabled | Whether geo-redundant backups should be enabled for this server or not. This value cannot be enabled for servers in the basic pricing tier and it cannot be changed after the server is created. Allowed values: Enabled, Disabled. |
-| Location | westus | The Azure region for the server. |
-| SslEnforcement | Enabled | Whether SSL should be enabled or not for this server. Allowed values: Enabled, Disabled. |
-| StorageInMb | 51200 | The storage capacity of the server (unit is megabytes). Valid StorageInMb is a minimum of 5120 MB and increases in 1024 MB increments. For more information about storage size limits, see [Azure Database for PostgreSQL pricing tiers](./concepts-pricing-tiers.md). |
-| Version | 9.6 | The PostgreSQL major version. |
-| AdministratorUserName | myadmin | The username for the administrator login. It cannot be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**. |
-| AdministratorLoginPassword | `<securestring>` | The password of the administrator user in the form of a secure string. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters. |
-
-The **Sku** parameter value follows the convention **pricing-tier\_compute-generation\_vCores** as
-shown in the following examples.
- `-Sku B_Gen5_1` maps to Basic, Gen 5, and 1 vCore. This option is the smallest SKU available.
- `-Sku GP_Gen5_32` maps to General Purpose, Gen 5, and 32 vCores.
- `-Sku MO_Gen5_2` maps to Memory Optimized, Gen 5, and 2 vCores.
-For information about valid **Sku** values by region and for tiers, see
-[Azure Database for PostgreSQL pricing tiers](./concepts-pricing-tiers.md).
-
-The following example creates a PostgreSQL server in the **West US** region named **mydemoserver**
-in the **myresourcegroup** resource group with a server admin login of **myadmin**. It is a Gen 5
-server in the general-purpose pricing tier with 2 vCores and geo-redundant backups enabled. Document
-the password used in the first line of the example as this is the password for the PostgreSQL server
-admin account.
-
-> [!TIP]
-> A server name maps to a DNS name and must be globally unique in Azure.
-
-```azurepowershell-interactive
-$Password = Read-Host -Prompt 'Please enter your password' -AsSecureString
-New-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -Sku GP_Gen5_2 -GeoRedundantBackup Enabled -Location westus -AdministratorUsername myadmin -AdministratorLoginPassword $Password
-```
-
-Consider using the basic pricing tier if light compute and I/O are adequate for your workload.
-
-> [!IMPORTANT]
-> Servers created in the basic pricing tier cannot be later scaled to general-purpose or memory-
-> optimized and cannot be geo-replicated.
-
-## Configure a firewall rule
-
-Create an Azure Database for PostgreSQL server-level firewall rule using the
-`New-AzPostgreSqlFirewallRule` cmdlet. A server-level firewall rule allows an external application,
-such as the `psql` command-line tool or PostgreSQL Workbench to connect to your server through the
-Azure Database for PostgreSQL service firewall.
-
-The following example creates a firewall rule named **AllowMyIP** that allows connections from a
-specific IP address, 192.168.0.1. Substitute an IP address or range of IP addresses that correspond
-to the location that you are connecting from.
-
-```azurepowershell-interactive
-New-AzPostgreSqlFirewallRule -Name AllowMyIP -ResourceGroupName myresourcegroup -ServerName mydemoserver -StartIPAddress 192.168.0.1 -EndIPAddress 192.168.0.1
-```
-
-> [!NOTE]
-> Connections to Azure Database for PostgreSQL communicate over port 5432. If you try to connect from
-> within a corporate network, outbound traffic over port 5432 might not be allowed. In this
-> scenario, you can only connect to the server if your IT department opens port 5432.
-
-## Get the connection information
-
-To connect to your server, you need to provide host information and access credentials. Use the
-following example to determine the connection information. Make a note of the values for
-**FullyQualifiedDomainName** and **AdministratorLogin**.
-
-```azurepowershell-interactive
-Get-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
- Select-Object -Property FullyQualifiedDomainName, AdministratorLogin
-```
-
-```Output
-FullyQualifiedDomainName AdministratorLogin
---------------------------------------- ------------------
-mydemoserver.postgres.database.azure.com myadmin
-```
-
-## Connect to PostgreSQL database using psql
-
-If your client computer has PostgreSQL installed, you can use a local instance of
-[psql](https://www.postgresql.org/docs/current/static/app-psql.html) to connect to an Azure
-PostgreSQL server. You can also access a pre-installed version of the `psql` command-line tool in
-Azure Cloud Shell by selecting the **Try It** button on a code sample in this article. Other ways to
-access Azure Cloud Shell are to select the **>_** button on the upper-right toolbar in the Azure
-portal or by visiting [shell.azure.com](https://shell.azure.com/).
-
-1. Connect to your Azure PostgreSQL server using the `psql` command-line utility.
-
- ```azurepowershell-interactive
- psql --host=<servername> --port=<port> --username=<user@servername> --dbname=<dbname>
- ```
-
- For example, the following command connects to the default database called **postgres** on your
- PostgreSQL server `mydemoserver.postgres.database.azure.com` using access credentials. Enter
- the `<server_admin_password>` you chose when prompted for password.
-
- ```azurepowershell-interactive
- psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=myadmin@mydemoserver --dbname=postgres
- ```
-
- > [!TIP]
- > If you prefer to use a URL path to connect to Postgres, URL encode the @ sign in the username
- > with `%40`. For example the connection string for psql would be,
- > `psql postgresql://myadmin%40mydemoserver@mydemoserver.postgres.database.azure.com:5432/postgres`
-
-1. Once you are connected to the server, create a blank database at the prompt.
-
- ```sql
- CREATE DATABASE mypgsqldb;
- ```
-
-1. At the prompt, execute the following command to switch connection to the newly created database **mypgsqldb**:
-
- ```sql
- \c mypgsqldb
- ```
-
-## Connect to the PostgreSQL Server using pgAdmin
-
-pgAdmin is an open-source tool used with PostgreSQL. You can install pgAdmin from the
-[pgAdmin website](https://www.pgadmin.org/). The pgAdmin version you're using may be different from
-what is used in this Quickstart. Read the pgAdmin documentation if you need additional guidance.
-
-1. Open the pgAdmin application on your client computer.
-
-1. From the toolbar go to **Object**, hover over **Create**, and select **Server**.
-
-1. In the **Create - Server** dialog box, on the **General** tab, enter a unique friendly name for
- the server, such as **mydemoserver**.
-
- :::image type="content" source="./media/quickstart-create-postgresql-server-database-using-azure-powershell/9-pgadmin-create-server.png" alt-text="The General tab":::
-
-1. In the **Create - Server** dialog box, on the **Connection** tab, fill in the settings table.
-
- :::image type="content" source="./media/quickstart-create-postgresql-server-database-using-azure-powershell/10-pgadmin-create-server.png" alt-text="The Connection tab":::
-
- pgAdmin parameter |Value|Description
---|---|---
- Host name/address | Server name | The server name value that you used when you created the Azure Database for PostgreSQL server earlier. Our example server is **mydemoserver.postgres.database.azure.com.** Use the fully qualified domain name (**\*.postgres.database.azure.com**) as shown in the example. If you don't remember your server name, follow the steps in the previous section to get the connection information.
- Port | 5432 | The port to use when you connect to the Azure Database for PostgreSQL server.
- Maintenance database | *postgres* | The default system-generated database name.
- Username | Server admin login name | The server admin login username that you supplied when you created the Azure Database for PostgreSQL server earlier. If you don't remember the username, follow the steps in the previous section to get the connection information. The format is *username\@servername*.
- Password | Your admin password | The password you chose when you created the server earlier in this Quickstart.
- Role | Leave blank | There's no need to provide a role name at this point. Leave the field blank.
- SSL mode | *Require* | You can set the TLS/SSL mode in pgAdmin's SSL tab. By default, all Azure Database for PostgreSQL servers are created with TLS enforcing turned on. To turn off TLS enforcing, see [Configure Enforcement of TLS](./concepts-ssl-connection-security.md#configure-enforcement-of-tls).
-
-1. Select **Save**.
-
-1. In the **Browser** pane on the left, expand the **Servers** node. Select your server, for
- example, **mydemoserver**. Click to connect to it.
-
-1. Expand the server node, and then expand **Databases** under it. The list should include your
- existing *postgres* database and any other databases you've created. You can create multiple
- databases per server with Azure Database for PostgreSQL.
-
-1. Right-click **Databases**, choose the **Create** menu, and then select **Database**.
-
-1. Type a database name of your choice in the **Database** field, such as **mypgsqldb2**.
-
-1. Select the **Owner** for the database from the list box. Choose your server admin login name,
- such as the example, **myadmin**.
-
- :::image type="content" source="./media/quickstart-create-postgresql-server-database-using-azure-powershell/11-pgadmin-database.png" alt-text="Create a database in pgAdmin":::
-
-1. Select **Save** to create a new blank database.
-
-1. In the **Browser** pane, you can see the database that you created in the list of databases under
- your server name.
-
-## Clean up resources
-
-If the resources created in this quickstart aren't needed for another quickstart or tutorial, you
-can delete them by running the following example.
-
-> [!CAUTION]
-> The following example deletes the specified resource group and all resources contained within it.
-> If resources outside the scope of this quickstart exist in the specified resource group, they will
-> also be deleted.
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myresourcegroup
-```
-
-To delete only the server created in this quickstart without deleting the resource group, use the
-`Remove-AzPostgreSqlServer` cmdlet.
-
-```azurepowershell-interactive
-Remove-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Design an Azure Database for PostgreSQL using PowerShell](tutorial-design-database-using-powershell.md)
postgresql Quickstart Create Postgresql Server Database Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/quickstart-create-postgresql-server-database-using-bicep.md
- Title: 'Quickstart: Create an Azure DB for PostgreSQL - Bicep'
-description: In this quickstart, learn how to create an Azure Database for PostgreSQL single server using Bicep.
Previously updated: 04/29/2022
-# Quickstart: Use Bicep to create an Azure Database for PostgreSQL - single server
-
-Azure Database for PostgreSQL is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. In this quickstart, you use Bicep to create an Azure Database for PostgreSQL - single server in Azure CLI or PowerShell.
--
-## Prerequisites
-
-You'll need an Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
-
-# [CLI](#tab/CLI)
-
-* If you want to run the code locally, [Azure CLI](/cli/azure/).
-
-# [PowerShell](#tab/PowerShell)
-
-* If you want to run the code locally, [Azure PowerShell](/powershell/azure/).
---
-## Review the Bicep file
-
-You create an Azure Database for PostgreSQL server with a configured set of compute and storage resources. To learn more, see [Pricing tiers in Azure Database for PostgreSQL - Single Server](concepts-pricing-tiers.md). You create the server within an [Azure resource group](../azure-resource-manager/management/overview.md).
-
-The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/managed-postgresql-with-vnet/).
--
-The Bicep file defines five Azure resources:
-
-* [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
-* [**Microsoft.Network/virtualNetworks/subnets**](/azure/templates/microsoft.network/virtualnetworks/subnets)
-* [**Microsoft.DBforPostgreSQL/servers**](/azure/templates/microsoft.dbforpostgresql/servers)
-* [**Microsoft.DBforPostgreSQL/servers/virtualNetworkRules**](/azure/templates/microsoft.dbforpostgresql/servers/virtualnetworkrules)
-* [**Microsoft.DBforPostgreSQL/servers/firewallRules**](/azure/templates/microsoft.dbforpostgresql/servers/firewallrules)
-
-## Deploy the Bicep file
-
-1. Save the Bicep file as **main.bicep** to your local computer.
-1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
-
- # [CLI](#tab/CLI)
-
- ```azurecli
- az group create --name exampleRG --location eastus
- az deployment group create --resource-group exampleRG --template-file main.bicep --parameters serverName=<server-name> administratorLogin=<admin-login>
- ```
-
- # [PowerShell](#tab/PowerShell)
-
- ```azurepowershell
- New-AzResourceGroup -Name exampleRG -Location eastus
- New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -serverName "<server-name>" -administratorLogin "<admin-login>"
- ```
-
-
-
- > [!NOTE]
- > Replace **\<server-name\>** with the name of the server for Azure database for PostgreSQL. Replace **\<admin-login\>** with the database administrator name, which has a minimum length of one character. You'll also be prompted to enter **administratorLoginPassword**, which has a minimum length of eight characters.
-
- When the deployment finishes, you should see a message indicating the deployment succeeded.
-
-## Review deployed resources
-
-Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-az resource list --resource-group exampleRG
-```
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-Get-AzResource -ResourceGroupName exampleRG
-```
---
-## Clean up resources
-
-When it's no longer needed, delete the resource group, which deletes the resources in the resource group.
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-az group delete --name exampleRG
-```
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name exampleRG
-```
---
-## Next steps
-
-For a step-by-step tutorial that guides you through the process of creating a Bicep file, see:
-
-> [!div class="nextstepaction"]
-> [Quickstart: Create Bicep files with Visual Studio Code](../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md)
postgresql Quickstart Create Server Database Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/quickstart-create-server-database-azure-cli.md
- Title: 'Quickstart: Create server - Azure CLI - Azure Database for PostgreSQL - single server'
-description: In this quickstart guide, you'll create an Azure Database for PostgreSQL server by using the Azure CLI.
------ Previously updated : 01/26/2022 --
-# Quickstart: Create an Azure Database for PostgreSQL server by using the Azure CLI
-
-This quickstart shows how to use [Azure CLI](/cli/azure/get-started-with-azure-cli) commands in [Azure Cloud Shell](https://shell.azure.com) to create a single Azure Database for PostgreSQL server in five minutes.
-
-> [!TIP]
-> Consider using the simpler [az postgres up](/cli/azure/postgres#az-postgres-up) Azure CLI command. Try out the [quickstart](./quickstart-create-server-up-azure-cli.md).
----
-## Set parameter values
-
-The following values are used in subsequent commands to create the database and required resources. Server names need to be globally unique across all of Azure, so the Bash `$RANDOM` variable is used to create the server name.
-
-Change the location as appropriate for your environment. Replace `0.0.0.0` with the IP address range to match your specific environment. Use the public IP address of the computer you're using to restrict access to the server to only your IP address.
--
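-A minimal sketch of the kind of variable setup this section relies on; `$resourceGroup` and `$server` match how later commands in this article reference them, and the remaining names and all values are illustrative placeholders:
-
-```azurecli
-# Placeholder values; adjust them for your environment
-resourceGroup="myresourcegroup"
-location="eastus"
-server="mydemoserver-$RANDOM" # $RANDOM helps make the name globally unique
-login="myadmin"
-password="<secure-password>"
-ip="0.0.0.0"                  # replace with your client IP address
-```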
-## Create a resource group
-
-Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *myResourceGroup* in the *eastus* location:
--
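-A sketch of the elided command, using the variables set earlier:
-
-```azurecli
-az group create --name $resourceGroup --location $location
-```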
-## Create a server
-
-Create a server with the [az postgres server create](/cli/azure/postgres/server#az-postgres-server-create) command.
--
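-A sketch of the elided command; the SKU value shown here is illustrative, not prescriptive:
-
-```azurecli
-az postgres server create --resource-group $resourceGroup --name $server --location $location --admin-user $login --admin-password $password --sku-name GP_Gen5_2
-```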
-> [!NOTE]
->
->- The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain 3 to 63 characters. For more information, see [Azure Database for PostgreSQL Naming Rules](../azure-resource-manager/management/resource-name-rules.md#microsoftdbforpostgresql).
->- The user name for the admin user can't be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**.
->- The password must contain 8 to 128 characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters.
->- For information about SKUs, see [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
-
->[!IMPORTANT]
->
->- The default PostgreSQL version on your server is 9.6. To see all the versions supported, see [Supported PostgreSQL major versions](./concepts-supported-versions.md).
->- SSL is enabled by default on your server. For more information on SSL, see [Configure SSL connectivity](./concepts-ssl-connection-security.md).
-
-## Configure a server-based firewall rule
-
-Create a firewall rule with the [az postgres server firewall-rule create](/cli/azure/postgres/server/firewall-rule) command to give your local environment access to connect to the server.
--
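-A sketch of the elided command, using the `$ip` placeholder defined earlier as both the start and end of the allowed range:
-
-```azurecli
-az postgres server firewall-rule create --resource-group $resourceGroup --server-name $server --name AllowMyIP --start-ip-address $ip --end-ip-address $ip
-```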
-> [!TIP]
-> If you don't know your IP address, go to [WhatIsMyIPAddress.com](https://whatismyipaddress.com/) to get it.
-
-> [!NOTE]
-> To avoid connectivity issues, make sure your network's firewall allows port 5432. Azure Database for PostgreSQL servers use that port.
-
-## List server-based firewall rules
-
-To list the existing server firewall rules, run the [az postgres server firewall-rule list](/cli/azure/postgres/server/firewall-rule) command.
--
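-For example (a sketch using this article's variable names):
-
-```azurecli
-az postgres server firewall-rule list --resource-group $resourceGroup --server-name $server
-```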
-The output lists the firewall rules, if any, in JSON format by default. You can use the `--output table` switch for a more readable table format.
-
-## Get the connection information
-
-To connect to your server, provide host information and access credentials.
-
-```azurecli
-az postgres server show --resource-group $resourceGroup --name $server
-```
-
-Make a note of the **administratorLogin** and **fullyQualifiedDomainName** values.
-
-## Connect to the Azure Database for PostgreSQL server by using psql
-
-The [psql](https://www.postgresql.org/docs/current/static/app-psql.html) client is a popular choice for connecting to PostgreSQL servers. You can connect to your server by using `psql` with [Azure Cloud Shell](../cloud-shell/overview.md). You can also use `psql` on your local environment if you have it available. An empty database, **postgres**, is automatically created with a new PostgreSQL server. You can use that database to connect with `psql`, as shown in the following code.
-
-```bash
-psql --host=<server_name>.postgres.database.azure.com --port=5432 --username=<admin_user>@<server_name> --dbname=postgres
-```
-
-> [!TIP]
-> If you prefer to use a URL path to connect to Postgres, URL encode the @ sign in the username with `%40`. For example, the connection string for psql would be:
->
-> ```bash
-> psql postgresql://<admin_user>%40<server_name>@<server_name>.postgres.database.azure.com:5432/postgres
-> ```
-
-## Clean up resources
-
-Unless you have an ongoing need for these resources, use the following [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group and all resources associated with it. Some of these resources may take a while to create, as well as to delete.
-
-```azurecli
-az group delete --name $resourceGroup
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Design your first Azure Database for PostgreSQL using the Azure CLI](tutorial-design-database-using-azure-cli.md)
postgresql Quickstart Create Server Database Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/quickstart-create-server-database-portal.md
- Title: 'Quickstart: Create server - Azure portal - Azure Database for PostgreSQL - single server'
-description: In this quickstart guide, you'll create and manage an Azure Database for PostgreSQL server by using the Azure portal.
------ Previously updated : 10/18/2020--
-# Quickstart: Create an Azure Database for PostgreSQL server by using the Azure portal
-
-Azure Database for PostgreSQL is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. This quickstart shows you how to create a single Azure Database for PostgreSQL server and connect to it.
-
-## Prerequisites
-An Azure subscription is required. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
-
-## Create an Azure Database for PostgreSQL server
-Go to the [Azure portal](https://portal.azure.com/) to create an Azure Database for PostgreSQL Single Server database. Search for and select *Azure Database for PostgreSQL servers*.
-
->[!div class="mx-imgBorder"]
-> :::image type="content" source="./media/quickstart-create-database-portal/search-postgres.png" alt-text="Find Azure Database for PostgreSQL.":::
-
-1. Select **Add**.
-
-2. On the **Create an Azure Database for PostgreSQL** page, select **Single server**.
-
- >[!div class="mx-imgBorder"]
- > :::image type="content" source="./media/quickstart-create-database-portal/select-single-server.png" alt-text="Select single server":::
-
-3. Fill out the **Basics** form with the following information.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/quickstart-create-database-portal/create-basics.png" alt-text="Screenshot that shows the Basics tab for creating a single server.":::
-
- |Setting|Suggested value|Description|
- |:|:|:|
- |Subscription|your subscription name|Select the desired Azure subscription.|
- |Resource group|*myresourcegroup*| A new or an existing resource group from your subscription.|
- |Server name |*mydemoserver*|A unique name that identifies your Azure Database for PostgreSQL server. The domain name *postgres.database.azure.com* is appended to the server name that you provide. The server can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain 3 to 63 characters.|
- |Data source | None | Select **None** to create a new server from scratch. Select **Backup** only if you're restoring from a geo-backup of an existing server.|
- |Admin username |*myadmin*| Enter your server admin username. It can't start with **pg_** and these values are not allowed: **azure_superuser**, **azure_pg_admin**, **admin**, **administrator**, **root**, **guest**, or **public**.|
- |Password |your password| A new password for the server admin user. It must contain 8 to 128 characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (for example, !, $, #, %).|
- |Location|your desired location| Select a location from the dropdown list.|
- |Version|The latest major version| The latest PostgreSQL major version, unless you have specific requirements otherwise.|
- |Compute + storage | *use the defaults*| The default pricing tier is **General Purpose** with **4 vCores** and **100 GB** storage. Backup retention is set to **7 days** with **Geographically Redundant** backup option.<br/>Learn about the [pricing](https://azure.microsoft.com/pricing/details/postgresql/server/) and update the defaults if needed.|
--
- > [!NOTE]
- > Consider using the Basic pricing tier if light compute and I/O are adequate for your workload. Note that servers created in the Basic pricing tier can't later be scaled to General Purpose or Memory Optimized.
-
-4. Select **Review + create** to review your selections. Select **Create** to provision the server. This operation might take a few minutes.
- > [!NOTE]
- > An empty database, **postgres**, is created. You'll also find an **azure_maintenance** database that's used to separate the managed service processes from user actions. You can't access the **azure_maintenance** database.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/quickstart-create-database-portal/deployment-success.png" alt-text="success deployment.":::
-
-[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
-
-## Configure a firewall rule
-By default, the server that you create is not publicly accessible. You need to grant access to your IP address. Go to your server resource in the Azure portal and select **Connection security** from the left-side menu for your server resource. If you're not sure how to find your resource, see [Open resources](../azure-resource-manager/management/manage-resources-portal.md#open-resources).
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/quickstart-create-database-portal/add-current-ip-firewall.png" alt-text="Screenshot that shows firewall rules for connection security.":::
-
-Select **Add current client IP address**, and then select **Save**. You can add more IP addresses or provide an IP range to connect to your server from those IP addresses. For more information, see [Firewall rules in Azure Database for PostgreSQL](./concepts-firewall-rules.md).
-
-> [!NOTE]
-> To avoid connectivity issues, check if your network allows outbound traffic over port 5432. Azure Database for PostgreSQL uses that port.
-
-[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
-
-## Connect to the server with psql
-
-You can use [psql](http://postgresguide.com/utilities/psql.html) or [pgAdmin](https://www.pgadmin.org/docs/pgadmin4/latest/connecting.html), which are popular PostgreSQL clients. For this quickstart, we'll connect by using psql in [Azure Cloud Shell](../cloud-shell/overview.md) within the Azure portal.
-
-1. Make a note of your server name, server admin login name, password, and subscription ID for your newly created server from the **Overview** section of your server.
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/quickstart-create-database-portal/overview-new.png" alt-text="get connection information.":::
--
-2. Open Azure Cloud Shell in the portal by selecting the icon on the upper-left side.
-
- > [!NOTE]
-    > If you're opening Cloud Shell for the first time, you'll see a prompt to create a resource group and a storage account. This is a one-time step; the storage account is automatically attached for all future sessions.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="media/quickstart-create-database-portal/use-in-cloud-shell.png" alt-text="Screenshot that shows server information and the icon for opening Azure Cloud Shell.":::
-
-3. Run the following command in the Azure Cloud Shell terminal. Replace values with your actual server name and admin user login name. Use the empty database **postgres** with admin user in this format: `<admin-username>@<servername>`.
-
- ```azurecli-interactive
- psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=myadmin@mydemoserver --dbname=postgres
- ```
-
- Here's how the experience looks in the Cloud Shell terminal:
-
- ```bash
- Requesting a Cloud Shell.Succeeded.
- Connecting terminal...
-
- Welcome to Azure Cloud Shell
-
- Type "az" to use Azure CLI
- Type "help" to learn about Cloud Shell
-
- user@Azure:~$psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=myadmin@mydemoserver --dbname=postgres
- Password for user myadmin@mydemoserver.postgres.database.azure.com:
- psql (12.2 (Ubuntu 12.2-2.pgdg16.04+1), server 11.6)
- SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
- Type "help" for help.
-
- postgres=>
- ```
-4. In the same Azure Cloud Shell terminal, create a database called **guest**.
-
- ```bash
- postgres=> CREATE DATABASE guest;
- ```
-
-5. Switch connections to the newly created **guest** database.
-
- ```bash
- \c guest
- ```
-6. Type `\q`, and then select the Enter key to close psql.
-
-[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
-
-## Clean up resources
-You've successfully created an Azure Database for PostgreSQL server in a resource group. If you don't expect to need these resources in the future, you can delete them by deleting either the resource group or the PostgreSQL server.
-
-To delete the resource group:
-
-1. In the Azure portal, search for and select **Resource groups**.
-2. In the resource group list, choose the name of your resource group.
-3. On the **Overview** page of your resource group, select **Delete resource group**.
-4. In the confirmation dialog box, enter the name of your resource group, and then select **Delete**.
-
-To delete the server, select the **Delete** button on the **Overview** page of your server:
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="media/quickstart-create-database-portal/12-delete.png" alt-text="Screenshot that shows the button for deleting a server.":::
-
-[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Migrate your database using export and import](./howto-migrate-using-export-and-import.md) <br/>
-
-> [!div class="nextstepaction"]
-> [Design a database](./tutorial-design-database-using-azure-portal.md#create-tables-in-the-database)
-
-[Cannot find what you are looking for? Let us know.](https://aka.ms/postgres-doc-feedback)
postgresql Quickstart Create Server Up Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/quickstart-create-server-up-azure-cli.md
- Title: 'Quickstart: Create server - az postgres up - Azure Database for PostgreSQL - Single Server'
-description: Quickstart guide to create Azure Database for PostgreSQL - Single Server using Azure CLI (command-line interface) up command.
------ Previously updated : 01/25/2022-
-# Quickstart: Use the az postgres up command to create an Azure Database for PostgreSQL - Single Server
-
-Azure Database for PostgreSQL is a managed service that enables you to run, manage, and scale highly available PostgreSQL databases in the cloud. The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart shows you how to use the [az postgres up](/cli/azure/postgres#az-postgres-up) command to create an Azure Database for PostgreSQL server using the Azure CLI. In addition to creating the server, the `az postgres up` command creates a sample database and a root user in the database, opens the firewall to Azure services, and creates default firewall rules for the client computer. These defaults help to expedite the development process.
--
-## Create an Azure Database for PostgreSQL server
---
-Install the [db-up](/cli/azure/postgres#az-postgres-up) extension. If an error is returned, ensure you have installed the latest version of the Azure CLI. See [Install Azure CLI](/cli/azure/install-azure-cli).
-
-```azurecli
-az extension add --name db-up
-```
-
-Create an Azure Database for PostgreSQL server using the following command:
-
-```azurecli
-az postgres up
-```
-
-The server is created with the following default values (unless you manually override them):
-
-**Setting** | **Default value** | **Description**
-||
-server-name | System generated | A unique name that identifies your Azure Database for PostgreSQL server.
-resource-group | System generated | A new Azure resource group.
-sku-name | GP_Gen5_2 | The name of the sku. Follows the convention {pricing tier}\_{compute generation}\_{vCores} in shorthand. The default is a General Purpose Gen5 server with 2 vCores. See our [pricing page](https://azure.microsoft.com/pricing/details/postgresql/) for more information about the tiers.
-backup-retention | 7 | How long a backup is retained. Unit is days.
-geo-redundant-backup | Disabled | Whether geo-redundant backups should be enabled for this server or not.
-location | westus2 | The Azure location for the server.
-ssl-enforcement | Disabled | Whether TLS/SSL should be enabled or not for this server.
-storage-size | 5120 | The storage capacity of the server (unit is megabytes).
-version | 10 | The PostgreSQL major version.
-admin-user | System generated | The username for the administrator.
-admin-password | System generated | The password of the administrator user.
-
-> [!NOTE]
-> For more information about the `az postgres up` command and its additional parameters, see the [Azure CLI documentation](/cli/azure/postgres#az-postgres-up).
-
-Once your server is created, it comes with the following settings:
--- A firewall rule called "devbox" is created. The Azure CLI attempts to detect the IP address of the machine the `az postgres up` command is run from and allows that IP address.-- "Allow access to Azure services" is set to ON. This setting configures the server's firewall to accept connections from all Azure resources, including resources not in your subscription.-- An empty database named "sampledb" is created-- A new user named "root" with privileges to "sampledb" is created-
-> [!NOTE]
-> Azure Database for PostgreSQL communicates over port 5432. When connecting from within a corporate network, outbound traffic over port 5432 may not be allowed by your network's firewall. Have your IT department open port 5432 to connect to your server.
-
-## Get the connection information
-
-After the `az postgres up` command is completed, a list of connection strings for popular programming languages is returned to you. These connection strings are pre-configured with the specific attributes of your newly created Azure Database for PostgreSQL server.
-
-You can use the [az postgres show-connection-string](/cli/azure/postgres#az-postgres-show-connection-string) command to list these connection strings again.
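-For example, a sketch of such a call (check `az postgres show-connection-string --help` for the exact parameter names supported by your version of the db-up extension):
-
-```azurecli
-az postgres show-connection-string --server-name <server-name> --database-name sampledb --admin-user root --admin-password <password>
-```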
-
-## Clean up resources
-
-Clean up all resources you created in the quickstart using the following command. This command deletes the Azure Database for PostgreSQL server and the resource group.
-
-```azurecli
-az postgres down --delete-group
-```
-
-If you would just like to delete the newly created server, you can run the [az postgres down](/cli/azure/postgres#az-postgres-down) command.
-
-```azurecli
-az postgres down
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Migrate your database using Export and Import](./howto-migrate-using-export-and-import.md)
postgresql Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/sample-scripts-azure-cli.md
- Title: Azure CLI samples - Azure Database for PostgreSQL - Single Server | Microsoft Docs
-description: This article lists several Azure CLI code samples available for interacting with Azure Database for PostgreSQL - Single Server.
------ Previously updated : 09/17/2021-
-# Azure CLI samples for Azure Database for PostgreSQL - Single Server
-
-The following table includes links to sample Azure CLI scripts for Azure Database for PostgreSQL.
-
-| Sample link | Description |
-|||
-|**Create a server**||
-| [Create a server and firewall rule](scripts/sample-create-server-and-firewall-rule.md) | Azure CLI script that creates an Azure Database for PostgreSQL server and configures a server-level firewall rule. |
-| **Create server with vNet rules**||
-| [Create a server with vNet rules](scripts/sample-create-server-with-vnet-rule.md) | Azure CLI that creates an Azure Database for PostgreSQL server with a service endpoint on a virtual network and configures a vNet rule. |
-|**Scale a server**||
-| [Scale a server](scripts/sample-scale-server-up-or-down.md) | Azure CLI script that scales an Azure Database for PostgreSQL server up or down to allow for changing performance needs. |
-|**Change server configurations**||
-| [Change server configurations](./scripts/sample-change-server-configuration.md) | Azure CLI script that change configurations options of an Azure Database for PostgreSQL server. |
-|**Restore a server**||
-| [Restore a server](./scripts/sample-point-in-time-restore.md) | Azure CLI script that restores an Azure Database for PostgreSQL server to a previous point in time. |
-|**Download server logs**||
-| [Enable and download server logs](./scripts/sample-server-logs.md) | Azure CLI script that enables and downloads server logs of an Azure Database for PostgreSQL server. |
-|||
-
postgresql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/security-controls-policy.md
- Title: Azure Policy Regulatory Compliance controls for Azure Database for PostgreSQL
-description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for PostgreSQL. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
------ Previously updated : 05/10/2022--
-# Azure Policy Regulatory Compliance controls for Azure Database for PostgreSQL
-
-[Regulatory Compliance in Azure Policy](../governance/policy/concepts/regulatory-compliance.md)
-provides Microsoft created and managed initiative definitions, known as _built-ins_, for the
-**compliance domains** and **security controls** related to different compliance standards. This
-page lists the **compliance domains** and **security controls** for Azure Database for PostgreSQL.
-You can assign the built-ins for a **security control** individually to help make your Azure
-resources compliant with the specific standard.
---
-## Next steps
-- Learn more about [Azure Policy Regulatory Compliance](../governance/policy/concepts/regulatory-compliance.md).
-- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
postgresql Application Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/application-best-practices.md
+
+ Title: App development best practices - Azure Database for PostgreSQL
+description: Learn about best practices for building an app by using Azure Database for PostgreSQL.
++++++ Last updated : 12/10/2020++
+# Best practices for building an application with Azure Database for PostgreSQL
+
+Here are some best practices to help you build a cloud-ready application by using Azure Database for PostgreSQL. These best practices can reduce development time for your app.
+
+## Configuration of application and database resources
+
+### Keep the application and database in the same region
+Make sure all your dependencies are in the same region when deploying your application in Azure. Spreading instances across regions or availability zones creates network latency, which might affect the overall performance of your application.
+
+### Keep your PostgreSQL server secure
+Configure your PostgreSQL server to be [secure](./concepts-security.md) and not accessible publicly. Use one of these options to secure your server:
+- [Firewall rules](./concepts-firewall-rules.md)
+- [Virtual networks](./concepts-data-access-and-security-vnet.md)
+- [Azure Private Link](./concepts-data-access-and-security-private-link.md)
+
+For security, you must always connect to your PostgreSQL server over SSL and configure your PostgreSQL server and your application to use TLS 1.2. See [How to configure SSL/TLS](./concepts-ssl-connection-security.md).
+
+### Tune your server parameters
+For read-heavy workloads, tuning memory-related server parameters such as `work_mem` and `temp_buffers` can help optimize performance. To calculate the values required for these parameters, look at the total per-connection memory values and the base memory. The sum of the per-connection memory parameters, combined with the base memory, accounts for the total memory of the server.
+
+### Use environment variables for connection information
+Don't save your database credentials in your application code. Depending on the front-end application, follow the guidance to set up environment variables. For App Service, see [how to configure app settings](../../app-service/configure-common.md#configure-app-settings), and for Azure Kubernetes Service, see [how to use Kubernetes secrets](https://kubernetes.io/docs/concepts/configuration/secret/).
+
+## Performance and resiliency
+Here are a few tools and practices that you can use to help debug performance issues with your application.
+
+### Use Connection Pooling
+With connection pooling, a fixed set of connections is established at startup and maintained. This also helps reduce the memory fragmentation on the server that is caused by dynamic new connections established on the database server. Connection pooling can be configured on the application side if the app framework or database driver supports it. If that isn't supported, the other recommended option is to use a proxy connection pooler service like [PgBouncer](https://pgbouncer.github.io/) or [Pgpool](https://pgpool.net/mediawiki/index.php/Main_Page) running outside the application and connecting to the database server. Both PgBouncer and Pgpool are community-based tools that work with Azure Database for PostgreSQL.
+
+### Retry logic to handle transient errors
+Your application might experience transient errors where connections to the database are dropped or lost intermittently. In such situations, the server is typically up and running again after one to two retries over 5 to 10 seconds. A good practice is to wait 5 seconds before your first retry, and then increase the wait between each subsequent retry gradually, up to 60 seconds. Limit the maximum number of retries, after which your application considers the operation failed, so you can then investigate further. See [How to troubleshoot connection errors](./concepts-connectivity.md) to learn more.
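+As a minimal illustration of this pattern, the following Bash sketch retries a `psql` connection with a growing wait; the server and user names are placeholders:
+
+```bash
+delay=5
+for attempt in 1 2 3 4 5; do
+    # Try the operation; leave the loop on success
+    psql "host=<server>.postgres.database.azure.com port=5432 dbname=postgres user=<admin>@<server>" -c "SELECT 1;" && break
+    echo "Attempt $attempt failed; retrying in $delay seconds..."
+    sleep $delay
+    # Grow the wait gradually, capped at 60 seconds
+    delay=$(( delay * 2 )); [ $delay -gt 60 ] && delay=60
+done
+```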
+
+### Enable read replication to mitigate failovers
+You can use [read replicas](./concepts-read-replicas.md) for failover scenarios. When you're using read replicas, no automated failover between source and replica servers occurs. You'll notice a lag between the source and the replica because the replication is asynchronous. Replication lag can be influenced by many factors, like the size of the workload running on the source server and the latency between datacenters. In most cases, replica lag ranges from a few seconds to a couple of minutes.
++
+## Database deployment
+
+### Configure CI/CD deployment pipeline
+Occasionally, you need to deploy changes to your database. In such cases, you can use continuous integration (CI) through [GitHub actions](https://github.com/Azure/postgresql/blob/master/README.md) for your PostgreSQL server to update the database by running a custom script against it.
+
+### Define manual database deployment process
+During manual database deployment, follow these steps to minimize downtime or reduce the risk of failed deployment:
+
+- Create a copy of the production database on a new database by using pg_dump (a sketch follows this list).
+- Update the new database with your new schema changes or updates needed for your database.
+- Put the production database in a read-only state. You should not have write operations on the production database until deployment is completed.
+- Test your application with the newly updated database from step 1.
+- Deploy your application changes and make sure the application is now using the new database that has the latest updates.
+- Keep the old production database so that you can roll back the changes. You can then evaluate to either delete the old production database or export it on Azure Storage if needed.
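+A minimal sketch of the copy step, assuming hypothetical database names `proddb` and `proddb_staging`:
+
+```bash
+# Dump the production database to a file
+pg_dump -h <server>.postgres.database.azure.com -p 5432 -U <admin>@<server> -d proddb -f proddb.sql
+
+# Create the new database and load the dump into it
+psql -h <server>.postgres.database.azure.com -p 5432 -U <admin>@<server> -d postgres -c "CREATE DATABASE proddb_staging;"
+psql -h <server>.postgres.database.azure.com -p 5432 -U <admin>@<server> -d proddb_staging -f proddb.sql
+```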
+
+> [!NOTE]
+> If the application is like an e-commerce app and you can't put it in a read-only state, deploy the changes directly on the production database after making a backup. These changes should occur during off-peak hours with low traffic to the app to minimize the impact, because some users might experience failed requests. Make sure your application code also handles any failed requests.
+
+## Database schema and queries
+Here are few tips to keep in mind when you build your database schema and your queries.
+
+### Use BIGINT or UUID for Primary Keys
+Custom applications and some frameworks may use `INT` instead of `BIGINT` for primary keys. When you use `INT`, you run the risk that the values in your database exceed the storage capacity of the `INT` data type. Making this change in an existing production application can be time consuming and cost more development time. Another option is to use a [UUID](https://www.postgresql.org/docs/current/datatype-uuid.html) for primary keys. This identifier uses an auto-generated 128-bit string, for example `a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11`. Learn more about [PostgreSQL data types](https://www.postgresql.org/docs/current/datatype.html).
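+As an illustration, hypothetical tables showing each approach (`gen_random_uuid()` requires the pgcrypto extension on PostgreSQL versions earlier than 13):
+
+```SQL
+-- BIGINT surrogate key that won't overflow at INT's ~2.1 billion limit
+CREATE TABLE orders (
+    id BIGSERIAL PRIMARY KEY,
+    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
+);
+
+-- UUID primary key as the alternative
+CREATE EXTENSION IF NOT EXISTS pgcrypto;
+CREATE TABLE sessions (
+    id UUID PRIMARY KEY DEFAULT gen_random_uuid()
+);
+```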
+
+### Use indexes
+
+There are many types of [indexes](https://www.postgresql.org/docs/current/indexes.html) in Postgres, which can be used in different ways. Using an index helps the server find and retrieve specific rows much faster than it could without one. But indexes also add overhead to the database server, so avoid creating more indexes than you need.
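+For example, a simple B-tree index on a column that your queries frequently filter on (illustrative names):
+
+```SQL
+-- Speeds up queries such as: SELECT * FROM orders WHERE customer_id = 42;
+CREATE INDEX idx_orders_customer_id ON orders (customer_id);
+```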
+
+### Use autovacuum
+You can optimize your server with autovacuum on an Azure Database for PostgreSQL server. PostgreSQL uses multiversion concurrency control to allow greater database concurrency: every update results in an insert of a new row version, and deleted records are soft-marked as dead tuples to be purged later. To carry out these tasks, PostgreSQL runs a vacuum job. If you don't vacuum from time to time, the dead tuples that accumulate can result in:
+
+- Data bloat, such as larger databases and tables.
+- Larger suboptimal indexes.
+- Increased I/O.
+
+Learn more about [how to optimize with autovacuum](how-to-optimize-autovacuum.md).
+
+### Use pg_stat_statements
+The pg_stat_statements extension is enabled by default in Azure Database for PostgreSQL. The extension provides a means to track execution statistics for all SQL statements executed by a server. See [how to use pg_stat_statements](how-to-optimize-query-stats-collection.md).
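+For example, a query like the following lists the most expensive statements by total execution time (on PostgreSQL 13 and later the column is named `total_exec_time` rather than `total_time`):
+
+```SQL
+SELECT query, calls, total_time, rows
+FROM pg_stat_statements
+ORDER BY total_time DESC
+LIMIT 5;
+```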
++
+### Use the Query Store
+The [Query Store](./concepts-query-store.md) feature in Azure Database for PostgreSQL provides a more effective method to track query statistics. We recommend this feature as an alternative to using pg_stat_statements.
+
+### Optimize bulk inserts and use transient data
+If you have workload operations that involve transient data or that insert large datasets in bulk, consider using unlogged tables. PostgreSQL uses write-ahead logging (WAL), which provides atomicity and durability by default (atomicity, consistency, isolation, and durability make up the ACID properties). Unlogged tables skip the write-ahead log, which makes writing to them considerably faster, at the cost of crash safety. See [how to optimize bulk inserts](how-to-optimize-bulk-inserts.md).
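+A minimal sketch with a hypothetical staging table:
+
+```SQL
+-- Skips write-ahead logging: bulk loads are much faster, but the table
+-- is truncated after a crash, so use it only for re-creatable data
+CREATE UNLOGGED TABLE staging_events (
+    event_id BIGINT,
+    payload  TEXT
+);
+```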
+
+## Next steps
+[Postgres Guide](http://postgresguide.com/)
postgresql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concept-reserved-pricing.md
+
+ Title: Reserved compute pricing - Azure Database for PostgreSQL
+description: Prepay for Azure Database for PostgreSQL compute resources with reserved capacity
++++++ Last updated : 10/06/2021++
+# Prepay for Azure Database for PostgreSQL compute resources with reserved capacity
++
+Azure Database for PostgreSQL now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Database for PostgreSQL reserved capacity, you make an upfront commitment on a PostgreSQL server for a one-year or three-year period to get a significant discount on the compute costs. To purchase Azure Database for PostgreSQL reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term.
+
+## How does the instance reservation work?
+You don't need to assign the reservation to specific Azure Database for PostgreSQL servers. Already running Azure Database for PostgreSQL servers (or ones that are newly deployed) automatically get the benefit of reserved pricing. By purchasing a reservation, you're prepaying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure Database for PostgreSQL compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. A reservation doesn't cover software, networking, or storage charges associated with the PostgreSQL database servers. At the end of the reservation term, the billing benefit expires, and the Azure Database for PostgreSQL servers are billed at the pay-as-you-go price. Reservations don't auto-renew. For pricing information, see the [Azure Database for PostgreSQL reserved capacity offering](https://azure.microsoft.com/pricing/details/postgresql/).
+
+> [!IMPORTANT]
+> Reserved capacity pricing is available for the Azure Database for PostgreSQL in [Single server](./overview.md#azure-database-for-postgresqlsingle-server), [Flexible Server](../flexible-server/overview.md), and [Hyperscale Citus](./overview.md#azure-database-for-postgresql--hyperscale-citus) deployment options. For information about RI pricing on Hyperscale (Citus), see [this page](../hyperscale/concepts-reserved-pricing.md).
+
+You can buy Azure Database for PostgreSQL reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](../../cost-management-billing/reservations/prepare-buy-reservation.md). To buy the reserved capacity:
+
+* You must be in the owner role for at least one Enterprise or individual subscription with pay-as-you-go rates.
+* For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin on the subscription.
+* For the Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Azure Database for PostgreSQL reserved capacity.
+
+For details on how enterprise customers and pay-as-you-go customers are charged for reservation purchases, see [understand Azure reservation usage for your Enterprise enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md) and [understand Azure reservation usage for your Pay-As-You-Go subscription](../../cost-management-billing/reservations/understand-reserved-instance-usage.md).
+
+## Reservation exchanges and refunds
+
+You can exchange a reservation for another reservation of the same type. You can also exchange a reservation for Azure Database for PostgreSQL - Single Server with one for Flexible Server. It's also possible to refund a reservation if you no longer need it. You can use the Azure portal to exchange or refund a reservation. For more information, see [Self-service exchanges and refunds for Azure Reservations](../../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md).
+
+## Reservation discount
+
+You may save up to 65% on compute costs with reserved instances. To find the discount for your case, visit the [Reservation blade on the Azure portal](https://aka.ms/reservations) and check the savings per pricing tier and per region. Reserved instances help you manage your workloads, budget, and forecast better with an upfront payment for a one-year or three-year term. You can also exchange or cancel reservations as business needs change.
+
+## Determine the right server size before purchase
+
+The size of the reservation should be based on the total amount of compute used by the existing or soon-to-be-deployed servers within a specific region that use the same performance tier and hardware generation.
+
+For example, let's suppose that you're running one general purpose Gen5 32-vCore PostgreSQL database and two memory-optimized Gen5 16-vCore PostgreSQL databases. Further, let's suppose that you plan to deploy an additional general purpose Gen5 8-vCore database server and one memory-optimized Gen5 32-vCore database server within the next month, and that you know you'll need these resources for at least one year. In this case, you should purchase a 40 (32 + 8) vCore, one-year reservation for single database general purpose Gen5 and a 64 (2 x 16 + 32) vCore, one-year reservation for single database memory optimized Gen5.
++
+## Buy Azure Database for PostgreSQL reserved capacity
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Select **All services** > **Reservations**.
+3. Select **Add** and then in the Purchase reservations pane, select **Azure Database for PostgreSQL** to purchase a new reservation for your PostgreSQL databases.
+4. Fill in the required fields. Existing or new databases that match the attributes you select qualify to get the reserved capacity discount. The actual number of your Azure Database for PostgreSQL servers that get the discount depends on the scope and quantity selected.
++++
+The following table describes required fields.
+
+| Field | Description |
+| : | :- |
+| Subscription | The subscription used to pay for the Azure Database for PostgreSQL reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for PostgreSQL reserved capacity reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.
+| Scope | The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select: </br></br> **Shared**, the vCore reservation discount is applied to Azure Database for PostgreSQL servers running in any subscriptions within your billing context. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For Pay-As-You-Go customers, the shared scope is all Pay-As-You-Go subscriptions created by the account administrator.</br></br>**Management group**, the reservation discount is applied to Azure Database for PostgreSQL servers running in any subscriptions that are a part of both the management group and billing scope.</br></br> **Single subscription**, the vCore reservation discount is applied to Azure Database for PostgreSQL servers in this subscription. </br></br> **Single resource group**, the reservation discount is applied to Azure Database for PostgreSQL servers in the selected subscription and the selected resource group within that subscription.
+| Region | The Azure region that's covered by the Azure Database for PostgreSQL reserved capacity reservation.
+| Deployment Type | The Azure Database for PostgreSQL resource type that you want to buy the reservation for.
+| Performance Tier | The service tier for the Azure Database for PostgreSQL servers.
+| Term | One year or three years.
+| Quantity | The amount of compute resources being purchased within the Azure Database for PostgreSQL reserved capacity reservation. The quantity is the number of vCores in the selected Azure region and performance tier that are being reserved and will get the billing discount. For example, if you're running or planning to run Azure Database for PostgreSQL servers with a total compute capacity of Gen5 16 vCores in the East US region, you would specify the quantity as 16 to maximize the benefit for all servers.
+
+## Reserved instances API support
+
+Use Azure APIs to programmatically get information for your organization about Azure service or software reservations. For example, use the APIs to:
+
+- Find reservations to buy
+- Buy a reservation
+- View purchased reservations
+- View and manage reservation access
+- Split or merge reservations
+- Change the scope of reservations
+
+For more information, see [APIs for Azure reservation automation](../../cost-management-billing/reservations/reservation-apis.md).
+
+## vCore size flexibility
+
+vCore size flexibility helps you scale up or down within a performance tier and region, without losing the reserved capacity benefit. If you scale to higher vCores than your reserved capacity, you will be billed for the excess vCores using pay-as-you-go pricing.
+
+## How to view reserved instance purchase details
+
+You can view your reserved instance purchase details via the [Reservations menu on the left side of the Azure portal](https://aka.ms/reservations). For more information, see [How a reservation discount is applied to Azure Database for PostgreSQL](../../cost-management-billing/reservations/understand-reservation-charges-postgresql.md).
+
+## Reserved instance expiration
+
+You'll receive email notifications, the first one 30 days before the reservation expires and another at expiration. After the reservation expires, deployed servers continue to run and are billed at the pay-as-you-go rate. For more information, see [Reserved Instances for Azure Database for PostgreSQL](../../cost-management-billing/reservations/understand-reservation-charges-postgresql.md).
+
+## Need help? Contact us
+
+If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Next steps
+
+The vCore reservation discount is applied automatically to the number of Azure Database for PostgreSQL servers that match the Azure Database for PostgreSQL reserved capacity reservation scope and attributes. You can update the scope of the Azure Database for PostgreSQL reserved capacity reservation through the Azure portal, PowerShell, the CLI, or the API.
+
+To learn more about Azure Reservations, see the following articles:
+
+* [What are Azure Reservations](../../cost-management-billing/reservations/save-compute-costs-reservations.md)?
+* [Manage Azure Reservations](../../cost-management-billing/reservations/manage-reserved-vm-instance.md)
+* [Understand Azure Reservations discount](../../cost-management-billing/reservations/understand-reservation-charges.md)
+* [Understand reservation usage for your Pay-As-You-Go subscription](../../cost-management-billing/reservations/understand-reservation-charges-postgresql.md)
+* [Understand reservation usage for your Enterprise enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
+* [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
postgresql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-aks.md
+
+ Title: Connect to Azure Kubernetes Service - Azure Database for PostgreSQL - Single Server
+description: Learn about connecting Azure Kubernetes Service (AKS) with Azure Database for PostgreSQL - Single Server
++++++ Last updated : 07/14/2020++
+# Connecting Azure Kubernetes Service and Azure Database for PostgreSQL - Single Server
+
+Azure Kubernetes Service (AKS) provides a managed Kubernetes cluster you can use in Azure. Below are some options to consider when using AKS and Azure Database for PostgreSQL together to create an application.
+
+## Accelerated networking
+Use underlying VMs that have accelerated networking enabled in your AKS cluster. When accelerated networking is enabled on a VM, there's lower latency, reduced jitter, and decreased CPU utilization on the VM. Learn more about how accelerated networking works, the supported OS versions, and supported VM instances for [Linux](../../virtual-network/create-vm-accelerated-networking-cli.md).
+
+AKS has supported accelerated networking on those supported VM instances since November 2018. Accelerated networking is enabled by default on new AKS clusters that use those VMs.
+
+You can confirm whether your AKS cluster has accelerated networking:
+1. Go to the Azure portal and select your AKS cluster.
+2. Select the Properties tab.
+3. Copy the name of the **Infrastructure Resource Group**.
+4. Use the portal search bar to locate and open the infrastructure resource group.
+5. Select a VM in that resource group.
+6. Go to the VM's **Networking** tab.
+7. Confirm whether **Accelerated networking** is 'Enabled.'
+
+Or through the Azure CLI using the following two commands:
+```azurecli
+az aks show --resource-group myResourceGroup --name myAKSCluster --query "nodeResourceGroup"
+```
+The output is the generated resource group that AKS creates, which contains the network interfaces. Take the "nodeResourceGroup" name and use it in the next command. **EnableAcceleratedNetworking** will be either true or false:
+```azurecli
+az network nic list --resource-group nodeResourceGroup -o table
+```
+
+## Connection pooling
+A connection pooler minimizes the cost and time associated with creating and closing new connections to the database. The pool is a collection of connections that can be reused.
+
+There are multiple connection poolers you can use with PostgreSQL. One of these is [PgBouncer](https://pgbouncer.github.io/). In the Microsoft Container Registry, we provide a lightweight containerized PgBouncer that can be used in a sidecar to pool connections from AKS to Azure Database for PostgreSQL. Visit the [docker hub page](https://hub.docker.com/r/microsoft/azureossdb-tools-pgbouncer/) to learn how to access and use this image.
+
+## Next steps
+
+Create an AKS cluster [using the Azure CLI](/azure/aks/learn/quick-kubernetes-deploy-cli), [using Azure PowerShell](/azure/aks/learn/quick-kubernetes-deploy-powershell), or [using the Azure portal](/azure/aks/learn/quick-kubernetes-deploy-portal).
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-audit.md
+
+ Title: Audit logging in Azure Database for PostgreSQL - Single Server
+description: Concepts for pgAudit audit logging in Azure Database for PostgreSQL - Single Server.
+++++ Last updated : 01/28/2020++
+# Audit logging in Azure Database for PostgreSQL - Single Server
+
+Audit logging of database activities in Azure Database for PostgreSQL - Single Server is available through the PostgreSQL Audit extension, [pgAudit](https://www.pgaudit.org/). The pgAudit extension provides detailed session and object audit logging.
+
+> [!NOTE]
+> The pgAudit extension is in preview on Azure Database for PostgreSQL. It can be enabled on general purpose and memory-optimized servers only.
+
+If you want Azure resource-level logs for operations like compute and storage scaling, see [Overview of Azure platform logs](../../azure-monitor/essentials/platform-logs-overview.md).
+
+## Usage considerations
+
+By default, pgAudit log statements are emitted along with your regular log statements by using the Postgres standard logging facility. In Azure Database for PostgreSQL, these .log files can be downloaded through the Azure portal or the Azure CLI. The maximum storage for the collection of files is 1 GB. Each file is available for a maximum of seven days. The default is three days. This service is a short-term storage option.
+
+Alternatively, you can configure all logs to be sent to the Azure Monitor Logs store for later analytics in Log Analytics. If you enable Monitor resource logging, your logs are automatically sent in JSON format to Azure Storage, Azure Event Hubs, or Monitor Logs, depending on your choice.
+
+Enabling pgAudit generates a large volume of logging on a server, which affects performance and log storage. We recommend that you use Monitor Logs, which offers longer-term storage options and analysis and alerting features. Turn off standard logging to reduce the performance impact of additional logging:
+
+ 1. Set the parameter `logging_collector` to **OFF**.
+ 1. Restart the server to apply this change.
+
+To learn how to set up logging to Storage, Event Hubs, or Monitor Logs, see the resource logs section of [Logs in Azure Database for PostgreSQL - Single Server](concepts-server-logs.md).
+
+## Install pgAudit
+
+To install pgAudit, you need to include it in the server's shared preloaded libraries. A change to the Postgres `shared_preload_libraries` parameter requires a server restart to take effect. You can change parameters by using the [portal](how-to-configure-server-parameters-using-portal.md), the [CLI](how-to-configure-server-parameters-using-cli.md), or the [REST API](/rest/api/postgresql/singleserver/configurations/createorupdate).
+
+To use the [portal](https://portal.azure.com):
+
+ 1. Select your Azure Database for PostgreSQL server.
+ 1. On the left, under **Settings**, select **Server parameters**.
+ 1. Search for **shared_preload_libraries**.
+ 1. Select **PGAUDIT**.
+
+ :::image type="content" source="./media/concepts-audit/share-preload-parameter.png" alt-text="Screenshot that shows Azure Database for PostgreSQL enabling shared_preload_libraries for PGAUDIT.":::
+
+ 1. Restart the server to apply the change.
+ 1. Check that `pgaudit` is loaded in `shared_preload_libraries` by executing the following query in psql:
+
+ ```SQL
+ show shared_preload_libraries;
+ ```
+    You should see `pgaudit` in the list of libraries that the query returns.
+
+ 1. Connect to your server by using a client like psql, and enable the pgAudit extension:
+
+ ```SQL
+ CREATE EXTENSION pgaudit;
+ ```
+
+> [!TIP]
+> If you see an error, confirm that you restarted your server after you saved `shared_preload_libraries`.
+
+## pgAudit settings
+
+By using pgAudit, you can configure session or object audit logging. [Session audit logging](https://github.com/pgaudit/pgaudit/blob/master/README.md#session-audit-logging) emits detailed logs of executed statements. [Object audit logging](https://github.com/pgaudit/pgaudit/blob/master/README.md#object-audit-logging) is audit scoped to specific relations. You can choose to set up one or both types of logging.
+
+> [!NOTE]
+> The pgAudit settings are specified globally and can't be specified at a database or role level.
+
+After you [install pgAudit](#install-pgaudit), you can configure its parameters to start logging.
+
+To configure pgAudit, in the [portal](https://portal.azure.com):
+
+ 1. Select your Azure Database for PostgreSQL server.
+ 1. On the left, under **Settings**, select **Server parameters**.
+ 1. Search for the **pgaudit** parameters.
+ 1. Select appropriate settings parameters to edit. For example, to start logging, set **pgaudit.log** to **WRITE**.
+
+ :::image type="content" source="./media/concepts-audit/pgaudit-config.png" alt-text="Screenshot that shows Azure Database for PostgreSQL configuring logging with pgAudit.":::
+ 1. Select **Save** to save your changes.
+
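+If you prefer the Azure CLI, a sketch of the equivalent change (the server and resource group names are placeholders):
+
+```azurecli
+az postgres server configuration set --resource-group myresourcegroup --server-name mydemoserver --name pgaudit.log --value WRITE
+```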
+The [pgAudit documentation](https://github.com/pgaudit/pgaudit/blob/master/README.md#settings) provides the definition of each parameter. Test the parameters first, and confirm that you're getting the expected behavior. For example:
+
+- When the setting **pgaudit.log_client** is turned on, it redirects logs to a client process like psql instead of being written to a file. In general, leave this setting disabled.
+- The parameter **pgaudit.log_level** is only enabled when **pgaudit.log_client** is on.
+
+> [!NOTE]
+> In Azure Database for PostgreSQL, **pgaudit.log** can't be set by using a minus-sign shortcut (`-`) as described in the pgAudit documentation. All required statement classes, such as READ and WRITE, should be individually specified.
+
+### Audit log format
+
+Each audit entry is indicated by `AUDIT:` near the beginning of the log line. The format of the rest of the entry is detailed in the [pgAudit documentation](https://github.com/pgaudit/pgaudit/blob/master/README.md#format).
+
+If you need any other fields to satisfy your audit requirements, use the Postgres parameter `log_line_prefix`. The string `log_line_prefix` is output at the beginning of every Postgres log line. For example, the following `log_line_prefix` setting provides timestamp, username, database name, and process ID:
+
+```
+t=%m u=%u db=%d pid=[%p]:
+```
+
+To learn more about `log_line_prefix`, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-LINE-PREFIX).
+
+### Get started
+
+To quickly get started, set **pgaudit.log** to **WRITE**. Then open your logs to review the output.
+
+## View audit logs
+
+If you're using .log files, your audit logs are included in the same file as your PostgreSQL error logs. You can download log files from the [portal](how-to-configure-server-logs-in-portal.md) or the [CLI](how-to-configure-server-logs-using-cli.md).
+
+If you're using Azure resource logging, the way you access the logs depends on which endpoint you choose. For Storage, see [Azure resource logs](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage). For Event Hubs, also see [Azure resource logs](../../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs).
+
+For Monitor Logs, the logs are sent to the workspace you selected. The Postgres logs use the `AzureDiagnostics` collection mode, so they can be queried from the `AzureDiagnostics` table, as shown. To learn more about querying and alerting, see [Log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md).
+
+Use this query to get started. You can configure alerts based on queries.
+
+Search for all Postgres logs for a particular server in the last day:
+
+```
+AzureDiagnostics
+| where LogicalServerName_s == "myservername"
+| where TimeGenerated > ago(1d)
+| where Message contains "AUDIT:"
+```
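+
+As a hedged variation on the query above, the following sketch counts audit entries per hour, which can serve as the basis for an alert rule:
+
+```
+AzureDiagnostics
+| where LogicalServerName_s == "myservername"
+| where TimeGenerated > ago(1d)
+| where Message contains "AUDIT:"
+| summarize AuditEntries = count() by bin(TimeGenerated, 1h)
+```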
+
+## Next steps
+
+- [Learn about logging in Azure Database for PostgreSQL](concepts-server-logs.md).
+- Learn how to set parameters by using the [Azure portal](how-to-configure-server-parameters-using-portal.md), the [Azure CLI](how-to-configure-server-parameters-using-cli.md), or the [REST API](/rest/api/postgresql/singleserver/configurations/createorupdate).
postgresql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-azure-ad-authentication.md
+
+ Title: Active Directory authentication - Azure Database for PostgreSQL - Single Server
+description: Learn about the concepts of Azure Active Directory for authentication with Azure Database for PostgreSQL - Single Server
+ Last updated: 07/23/2020
+# Use Azure Active Directory for authenticating with PostgreSQL
+
+Microsoft Azure Active Directory (Azure AD) authentication is a mechanism of connecting to Azure Database for PostgreSQL using identities defined in Azure AD.
+With Azure AD authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management.
+
+Benefits of using Azure AD include:
+
+- Authentication of users across Azure services in a uniform way
+- Management of password policies and password rotation in a single place
+- Support for multiple forms of authentication by Azure Active Directory, which can eliminate the need to store passwords
+- Management of database permissions through external (Azure AD) groups
+- Use of PostgreSQL database roles to authenticate identities at the database level
+- Support for token-based authentication for applications connecting to Azure Database for PostgreSQL
+
+To configure and use Azure Active Directory authentication, use the following process:
+
+1. Create and populate Azure Active Directory with user identities as needed.
+2. Optionally associate or change the Active Directory currently associated with your Azure subscription.
+3. Create an Azure AD administrator for the Azure Database for PostgreSQL server.
+4. Create database users in your database mapped to Azure AD identities.
+5. Connect to your database by retrieving a token for an Azure AD identity and logging in.
+
+> [!NOTE]
+> To learn how to create and populate Azure AD, and then configure Azure AD with Azure Database for PostgreSQL, see [Configure and sign in with Azure AD for Azure Database for PostgreSQL](how-to-configure-sign-in-azure-ad-authentication.md).
+
+## Architecture
+
+The following high-level diagram summarizes how authentication works using Azure AD authentication with Azure Database for PostgreSQL. The arrows indicate communication pathways.
+
+![authentication flow][1]
+
+## Administrator structure
+
+When using Azure AD authentication, there are two administrator accounts for the PostgreSQL server: the original PostgreSQL administrator and the Azure AD administrator. Only the administrator based on an Azure AD account can create the first Azure AD contained database user in a user database. The Azure AD administrator login can be an Azure AD user or an Azure AD group. When the administrator is a group account, it can be used by any group member, enabling multiple Azure AD administrators for the PostgreSQL server. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the PostgreSQL server. Only one Azure AD administrator (a user or group) can be configured at any time.
+
+![admin structure][2]
+
+## Permissions
+
+To create new users that can authenticate with Azure AD, you must have the `azure_ad_admin` role in the database. This role is assigned by configuring the Azure AD Administrator account for a specific Azure Database for PostgreSQL server.
+
+To create a new Azure AD database user, you must connect as the Azure AD administrator. This is demonstrated in [Configure and sign in with Azure AD for Azure Database for PostgreSQL](how-to-configure-sign-in-azure-ad-authentication.md).
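+
+As a minimal sketch (hypothetical user name), the statement the Azure AD administrator runs follows this pattern:
+
+```sql
+-- Run while connected as the Azure AD administrator; the user name is hypothetical
+CREATE USER "user1@yourtenant.onmicrosoft.com" IN ROLE azure_ad_user;
+```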
+
+Azure AD authentication is only possible if an Azure AD admin was created for Azure Database for PostgreSQL. If the Azure AD admin is removed from the server, existing Azure AD users created previously can no longer connect to the database using their Azure AD credentials.
+
+## Connecting using Azure AD identities
+
+Azure Active Directory authentication supports the following methods of connecting to a database using Azure AD identities:
+
+- Azure Active Directory Password
+- Azure Active Directory Integrated
+- Azure Active Directory Universal with MFA
+- Using Active Directory Application certificates or client secrets
+- [Managed Identity](how-to-connect-with-managed-identity.md)
+
+Once you have authenticated against the Active Directory, you then retrieve a token. This token is your password for logging in.
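+
+As an illustrative, hedged sketch (hypothetical server, tenant, and user names), the token-based flow with the Azure CLI and psql looks roughly like this:
+
+```console
+# Acquire an access token for the Azure Database for PostgreSQL resource
+az account get-access-token --resource https://ossrdbms-aad.database.windows.net --query accessToken --output tsv
+
+# Use the token as the password; single-server logins take the form user@tenant@servername
+PGPASSWORD=<token-from-previous-command> psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=user1@yourtenant.onmicrosoft.com@mydemoserver sslmode=require"
+```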
+
+Management operations, such as adding new users, are only supported for Azure AD user roles at this point.
+
+> [!NOTE]
+> For more details on how to connect with an Active Directory token, see [Configure and sign in with Azure AD for Azure Database for PostgreSQL](how-to-configure-sign-in-azure-ad-authentication.md).
+
+## Additional considerations
+
+- To enhance manageability, we recommend you provision a dedicated Azure AD group as an administrator.
+- Only one Azure AD administrator (a user or group) can be configured for an Azure Database for PostgreSQL server at any time.
+- Only an Azure AD administrator for PostgreSQL can initially connect to the Azure Database for PostgreSQL using an Azure Active Directory account. The Active Directory administrator can configure subsequent Azure AD database users.
+- If a user is deleted from Azure AD, that user will no longer be able to authenticate with Azure AD, and therefore it will no longer be possible to acquire an access token for that user. In this case, although the matching role will still be in the database, it will not be possible to connect to the server with that role.
+> [!NOTE]
+> A deleted Azure AD user can still sign in until the token expires (up to 60 minutes from token issuance). If you also remove the user from Azure Database for PostgreSQL, this access is revoked immediately.
+- If the Azure AD admin is removed from the server, the server will no longer be associated with an Azure AD tenant, and therefore all Azure AD logins will be disabled for the server. Adding a new Azure AD admin from the same tenant will reenable Azure AD logins.
+Azure Database for PostgreSQL matches access tokens to the Azure Database for PostgreSQL role using the user's unique Azure AD user ID, as opposed to using the username. This means that if an Azure AD user is deleted in Azure AD and a new user is created with the same name, Azure Database for PostgreSQL considers that a different user. Therefore, if a user is deleted from Azure AD and then a new user with the same name is added, the new user will not be able to connect with the existing role. To allow that, the Azure Database for PostgreSQL Azure AD admin must revoke and then grant the role `azure_ad_user` to the user to refresh the Azure AD user ID.
+
+## Next steps
+
+- To learn how to create and populate Azure AD, and then configure Azure AD with Azure Database for PostgreSQL, see [Configure and sign in with Azure AD for Azure Database for PostgreSQL](how-to-configure-sign-in-azure-ad-authentication.md).
+- For an overview of logins, users, and database roles in Azure Database for PostgreSQL, see [Create users in Azure Database for PostgreSQL - Single Server](how-to-create-users.md).
+
+<!--Image references-->
+
+[1]: ./media/concepts-azure-ad-authentication/authentication-flow.png
+[2]: ./media/concepts-azure-ad-authentication/admin-structure.png
postgresql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-azure-advisor-recommendations.md
+
+ Title: Azure Advisor for PostgreSQL
+description: Learn about Azure Advisor recommendations for PostgreSQL.
+ Last updated: 04/08/2021
+# Azure Advisor for PostgreSQL
+Learn about how Azure Advisor is applied to Azure Database for PostgreSQL and get answers to common questions.
+## What is Azure Advisor for PostgreSQL?
+The Azure Advisor system uses telemetry to issue performance and reliability recommendations for your PostgreSQL database.
+Advisor recommendations are split among our PostgreSQL database offerings:
+* Azure Database for PostgreSQL - Single Server
+* Azure Database for PostgreSQL - Flexible Server
+* Azure Database for PostgreSQL - Hyperscale (Citus)
+
+Some recommendations are common to multiple product offerings, while other recommendations are based on product-specific optimizations.
+## Where can I view my recommendations?
+Recommendations are available from the **Overview** navigation sidebar in the Azure portal. A preview will appear as a banner notification, and details can be viewed in the **Notifications** section located just below the resource usage graphs.
++
+## Recommendation types
+Azure Database for PostgreSQL prioritizes the following types of recommendations:
+* **Performance**: To improve the speed of your PostgreSQL server. This includes CPU usage, memory pressure, connection pooling, disk utilization, and product-specific server parameters. For more information, see [Advisor Performance recommendations](../../advisor/advisor-performance-recommendations.md).
+* **Reliability**: To ensure and improve the continuity of your business-critical databases. This includes storage limits, connection limits, and hyperscale data distribution recommendations. For more information, see [Advisor Reliability recommendations](../../advisor/advisor-high-availability-recommendations.md).
+* **Cost**: To optimize and reduce your overall Azure spending. This includes server right-sizing recommendations. For more information, see [Advisor Cost recommendations](../../advisor/advisor-cost-recommendations.md).
+
+## Understanding your recommendations
+* **Daily schedule**: For Azure PostgreSQL databases, we check server telemetry and issue recommendations on a daily schedule. If you make a change to your server configuration, existing recommendations will remain visible until we re-examine telemetry on the following day.
+* **Performance history**: Some of our recommendations are based on performance history. These recommendations will only appear after a server has been operating with the same configuration for 7 days. This allows us to detect patterns of heavy usage (e.g. high CPU activity or high connection volume) over a sustained time period. If you provision a new server or change to a new vCore configuration, these recommendations will be paused temporarily. This prevents legacy telemetry from triggering recommendations on a newly reconfigured server. However, this also means that performance history-based recommendations may not be identified immediately.
+
+## Next steps
+For more information, see [Azure Advisor Overview](../../advisor/advisor-overview.md).
postgresql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-backup.md
+
+ Title: Backup and restore - Azure Database for PostgreSQL - Single Server
+description: Learn about automatic backups and restoring your Azure Database for PostgreSQL server - Single Server.
+ Last updated: 11/08/2021
+# Backup and restore in Azure Database for PostgreSQL - Single Server
+
+Azure Database for PostgreSQL automatically creates server backups and stores them in user-configured locally redundant or geo-redundant storage. Backups can be used to restore your server to a point in time. Backup and restore are an essential part of any business continuity strategy because they protect your data from accidental corruption or deletion.
+
+## Backups
+
+Azure Database for PostgreSQL takes backups of the data files and the transaction log. Depending on the supported maximum storage size, we either take full and differential backups (4-TB max storage servers) or snapshot backups (up to 16-TB max storage servers). These backups allow you to restore a server to any point-in-time within your configured backup retention period. The default backup retention period is seven days. You can optionally configure it up to 35 days. All backups are encrypted using AES 256-bit encryption.
+
+These backup files cannot be exported. The backups can only be used for restore operations in Azure Database for PostgreSQL. You can use [pg_dump](how-to-migrate-using-dump-and-restore.md) to copy a database.
+
+### Backup frequency
+
+#### Servers with up to 4-TB storage
+
+For servers that support up to 4-TB maximum storage, full backups occur once every week. Differential backups occur twice a day. Transaction log backups occur every five minutes.
++
+#### Servers with up to 16-TB storage
+
+In a subset of [Azure regions](./concepts-pricing-tiers.md#storage), all newly provisioned servers can support up to 16-TB storage. Backups on these large storage servers are snapshot-based. The first full snapshot backup is scheduled immediately after a server is created. That first full snapshot backup is retained as the server's base backup. Subsequent snapshot backups are differential backups only. Differential snapshot backups do not occur on a fixed schedule. In a day, three differential snapshot backups are performed. Transaction log backups occur every five minutes.
+
+> [!NOTE]
+> Automatic backups are performed for [replica servers](./concepts-read-replicas.md) that are configured with up to 4-TB storage.
+
+### Backup retention
+
+Backups are retained based on the backup retention period setting on the server. You can select a retention period of 7 to 35 days. The default retention period is 7 days. You can set the retention period during server creation or later by updating the backup configuration using [Azure portal](./how-to-restore-server-portal.md#set-backup-configuration) or [Azure CLI](./how-to-restore-server-cli.md#set-backup-configuration).
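+
+For example, the following hedged Azure CLI sketch (hypothetical names) sets the retention period on an existing server to 10 days:
+
+```azurecli
+az postgres server update \
+    --resource-group myresourcegroup \
+    --name mydemoserver \
+    --backup-retention 10
+```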
+
+The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. The backup retention period can also be treated as a recovery window from a restore perspective. All backups required to perform a point-in-time restore within the backup retention period are retained in backup storage. For example, if the backup retention period is set to 7 days, the recovery window is the last 7 days. In this scenario, all the backups required to restore the server in the last 7 days are retained. With a backup retention window of seven days:
+- Servers with up to 4-TB storage will retain up to 2 full database backups, all the differential backups, and transaction log backups performed since the earliest full database backup.
+- Servers with up to 16-TB storage will retain the full database snapshot, all the differential snapshots, and transaction log backups in the last 8 days.
+
+### Backup redundancy options
+
+Azure Database for PostgreSQL provides the flexibility to choose between locally redundant or geo-redundant backup storage in the General Purpose and Memory Optimized tiers. When the backups are stored in geo-redundant backup storage, they are not only stored within the region in which your server is hosted, but are also replicated to a [paired data center](../../availability-zones/cross-region-replication-azure.md). This provides better protection and ability to restore your server in a different region in the event of a disaster. The Basic tier only offers locally redundant backup storage.
+
+> [!IMPORTANT]
+> Configuring locally redundant or geo-redundant storage for backup is only allowed during server create. Once the server is provisioned, you cannot change the backup storage redundancy option.
+
+### Backup storage cost
+
+Azure Database for PostgreSQL provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any additional backup storage used is charged in GB per month. For example, if you have provisioned a server with 250 GB of storage, you have 250 GB of additional storage available for server backups at no additional charge. Storage consumed for backups more than 250 GB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/postgresql/).
+
+You can use the [Backup Storage used](concepts-monitoring.md) metric in Azure Monitor available in the Azure portal to monitor the backup storage consumed by a server. The Backup Storage used metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained earlier. Heavy transactional activity on the server can cause backup storage usage to increase irrespective of the total database size. For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.
+
+The primary means of controlling the backup storage cost is by setting the appropriate backup retention period and choosing the right backup redundancy options to meet your desired recovery goals. You can select a retention period from a range of 7 to 35 days. General Purpose and Memory Optimized servers can choose to have geo-redundant storage for backups.
+
+## Restore
+
+In Azure Database for PostgreSQL, performing a restore creates a new server from the original server's backups.
+
+There are two types of restore available:
+
+- **Point-in-time restore** is available with either backup redundancy option and creates a new server in the same region as your original server.
+- **Geo-restore** is available only if you configured your server for geo-redundant storage and it allows you to restore your server to a different region.
+
+The estimated time of recovery depends on several factors, including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time. The recovery time also varies depending on the time of the last data backup and the amount of recovery that needs to be performed. It is usually less than 12 hours.
+
+> [!NOTE]
+> If your source PostgreSQL server is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption-postgresql.md) for additional considerations.
+
+> [!NOTE]
+> If you want to restore a deleted PostgreSQL server, follow the procedure documented [here](how-to-restore-dropped-server.md).
+
+### Point-in-time restore
+
+Independent of your backup redundancy option, you can perform a restore to any point in time within your backup retention period. A new server is created in the same Azure region as the original server. It is created with the original server's configuration for the pricing tier, compute generation, number of vCores, storage size, backup retention period, and backup redundancy option.
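+
+As a hedged sketch (hypothetical names and timestamp), a point-in-time restore from the Azure CLI looks like this:
+
+```azurecli
+# Restore to a point in time as a new server; the source server is unaffected
+az postgres server restore \
+    --resource-group myresourcegroup \
+    --name mydemoserver-restored \
+    --source-server mydemoserver \
+    --restore-point-in-time "2021-11-08T13:10:00Z"
+```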
+
+Point-in-time restore is useful in multiple scenarios. For example, when a user accidentally deletes data, drops an important table or database, or if an application accidentally overwrites good data with bad data due to an application defect.
+
+You may need to wait for the next transaction log backup to be taken before you can restore to a point in time within the last five minutes.
+
+If you want to restore a dropped table:
+1. Restore the source server by using the point-in-time method.
+2. Dump the table using `pg_dump` from the restored server (see the sketch after this list).
+3. Rename the source table on the original server.
+4. Import the table using the psql command line on the original server (also shown below).
+5. Optionally, delete the restored server.
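+
+A hedged sketch of steps 2 and 4 (hypothetical server, database, and table names):
+
+```console
+# Dump the single table from the restored server
+pg_dump -h mydemoserver-restored.postgres.database.azure.com -U myadmin@mydemoserver-restored -t public.mytable mydb > mytable.sql
+
+# Import the table into the original server
+psql -h mydemoserver.postgres.database.azure.com -U myadmin@mydemoserver -d mydb -f mytable.sql
+```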
+
+>[!Note]
+> It is recommended not to create multiple restores for the same server at the same time.
+
+### Geo-restore
+
+You can restore a server to another Azure region where the service is available if you have configured your server for geo-redundant backups. Servers that support up to 4 TB of storage can be restored to the geo-paired region, or to any region that supports up to 16 TB of storage. For servers that support up to 16 TB of storage, geo-backups can be restored in any region that supports 16-TB servers as well. Review [Azure Database for PostgreSQL pricing tiers](concepts-pricing-tiers.md) for the list of supported regions.
+
+Geo-restore is the default recovery option when your server is unavailable because of an incident in the region where the server is hosted. If a large-scale incident in a region results in unavailability of your database application, you can restore a server from the geo-redundant backups to a server in any other region. There is a delay between when a backup is taken and when it is replicated to a different region. This delay can be up to an hour, so, if a disaster occurs, there can be up to one hour of data loss.
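+
+As a hedged sketch (hypothetical names and target region), a geo-restore from the Azure CLI looks like this:
+
+```azurecli
+# Restore the geo-redundant backup of a server into another region as a new server
+az postgres server georestore \
+    --resource-group myresourcegroup \
+    --name mydemoserver-restored \
+    --source-server mydemoserver \
+    --location westus2
+```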
+
+During geo-restore, the server configurations that can be changed include compute generation, vCore, backup retention period, and backup redundancy options. Changing pricing tier (Basic, General Purpose, or Memory Optimized) or storage size is not supported.
+
+> [!NOTE]
+> If your source server uses infrastructure double encryption, for restoring the server, there are limitations including available regions. Please see the [infrastructure double encryption](concepts-infrastructure-double-encryption.md) for more details.
+
+### Perform post-restore tasks
+
+After a restore from either recovery mechanism, you should perform the following tasks to get your users and applications back up and running:
+
+- To access the restored server, since it has a different name than the original server, change the server name to the restored server name and the user name to `username@new-restored-server-name` in your connection string.
+- If the new server is meant to replace the original server, redirect clients and client applications to the new server.
+- Ensure appropriate server-level firewall and VNet rules are in place for users to connect. These rules are not copied over from the original server.
+- Ensure appropriate logins and database-level permissions are in place.
+- Configure alerts, as appropriate.
+
+## Next steps
+
+- Learn how to restore using [the Azure portal](how-to-restore-server-portal.md).
+- Learn how to restore using [the Azure CLI](how-to-restore-server-cli.md).
+- To learn more about business continuity, see the [business continuity overview](concepts-business-continuity.md).
postgresql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-business-continuity.md
+
+ Title: Business continuity - Azure Database for PostgreSQL - Single Server
+description: This article describes business continuity (point in time restore, data center outage, geo-restore, replicas) when using Azure Database for PostgreSQL.
+ Last updated: 08/07/2020
+# Overview of business continuity with Azure Database for PostgreSQL - Single Server
+
+This overview describes the capabilities that Azure Database for PostgreSQL provides for business continuity and disaster recovery. Learn about options for recovering from disruptive events that could cause data loss or cause your database and application to become unavailable. Learn what to do when a user or application error affects data integrity, an Azure region has an outage, or your application requires maintenance.
+
+## Features that you can use to provide business continuity
+
+As you develop your business continuity plan, you need to understand the maximum acceptable time before the application fully recovers after the disruptive event - this is your Recovery Time Objective (RTO). You also need to understand the maximum amount of recent data updates (time interval) the application can tolerate losing when recovering after the disruptive event - this is your Recovery Point Objective (RPO).
+
+Azure Database for PostgreSQL provides business continuity features that include geo-redundant backups with the ability to initiate geo-restore, and deploying read replicas in a different region. Each has different characteristics for the recovery time and the potential data loss. With the [geo-restore](concepts-backup.md) feature, a new server is created using the backup data that is replicated from another region. The overall time it takes to restore and recover depends on the size of the database and the amount of logs to recover. The overall time to establish the server varies from a few minutes to a few hours. With [read replicas](concepts-read-replicas.md), transaction logs from the primary are asynchronously streamed to the replica. In the event of a primary database outage due to a zone-level or a region-level fault, failing over to the replica provides a shorter RTO and reduced data loss.
+
+> [!NOTE]
+> The lag between the primary and the replica depends on the latency between the sites, the amount of data to be transmitted and most importantly on the write workload of the primary server. Heavy write workloads can generate significant lag.
+>
+> Because of the asynchronous nature of replication used for read replicas, they **should not** be considered a high availability (HA) solution, since higher lag can mean higher RTO and RPO. Read replicas can act as an HA alternative only for workloads where the lag remains small through both peak and off-peak times. Otherwise, read replicas are intended for true read scaling of read-heavy workloads and for disaster recovery (DR) scenarios.
+
+The following table compares RTO and RPO in a **typical workload** scenario:
+
+| **Capability** | **Basic** | **General Purpose** | **Memory optimized** |
+| :-: | :-: | :-: | :-: |
+| Point in Time Restore from backup | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min| Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min |
+| Geo-restore from geo-replicated backups | Not supported | RTO - Varies <br/>RPO < 1 h | RTO - Varies <br/>RPO < 1 h |
+| Read replicas | RTO - Minutes* <br/>RPO < 5 min* | RTO - Minutes* <br/>RPO < 5 min*| RTO - Minutes* <br/>RPO < 5 min*|
+
+ \* RTO and RPO **can be much higher** in some cases depending on various factors including latency between sites, the amount of data to be transmitted, and importantly primary database write workload.
+
+## Recover a server after a user or application error
+
+You can use the service's backups to recover a server from various disruptive events. A user may accidentally delete some data, inadvertently drop an important table, or even drop an entire database. An application might accidentally overwrite good data with bad data due to an application defect, and so on.
+
+You can perform a **point-in-time-restore** to create a copy of your server to a known good point in time. This point in time must be within the backup retention period you have configured for your server. After the data is restored to the new server, you can either replace the original server with the newly restored server or copy the needed data from the restored server into the original server.
+
+We recommend that you use [Azure resource lock](../../azure-resource-manager/management/lock-resources.md) to help prevent accidental deletion of your server. If you accidentally deleted your server, you might be able to restore it if the deletion happened within the last 5 days. For more information, see [Restore a dropped Azure Database for PostgreSQL server](how-to-restore-dropped-server.md).
+
+## Recover from an Azure data center outage
+
+Although rare, an Azure data center can have an outage. When an outage occurs, it causes a business disruption that might only last a few minutes, but could last for hours.
+
+One option is to wait for your server to come back online when the data center outage is over. This works for applications that can afford to have the server offline for some period of time, for example a development environment. When a data center has an outage, you do not know how long the outage might last, so this option only works if you don't need your server for a while.
+
+## Geo-restore
+
+The geo-restore feature restores the server using geo-redundant backups. The backups are hosted in your server's [paired region](../../availability-zones/cross-region-replication-azure.md). You can restore from these backups to any other region. The geo-restore creates a new server with the data from the backups. Learn more about geo-restore from the [backup and restore concepts article](concepts-backup.md).
+
+> [!IMPORTANT]
+> Geo-restore is only possible if you provisioned the server with geo-redundant backup storage. If you wish to switch from locally redundant to geo-redundant backups for an existing server, you must take a dump using pg_dump of your existing server and restore it to a newly created server configured with geo-redundant backups.
+
+## Cross-region read replicas
+You can use cross region read replicas to enhance your business continuity and disaster recovery planning. Read replicas are updated asynchronously using PostgreSQL's physical replication technology, and may lag the primary. Learn more about read replicas, available regions, and how to fail over from the [read replicas concepts article](concepts-read-replicas.md).
+
+## FAQ
+### Where does Azure Database for PostgreSQL store customer data?
+By default, Azure Database for PostgreSQL doesn't move or store customer data out of the region it is deployed in. However, customers can optionally choose to enable [geo-redundant backups](concepts-backup.md#backup-redundancy-options) or create a [cross-region read replica](concepts-read-replicas.md#cross-region-replication) for storing data in another region.
++
+## Next steps
+- Learn more about the [automated backups in Azure Database for PostgreSQL](concepts-backup.md).
+- Learn how to restore using [the Azure portal](how-to-restore-server-portal.md) or [the Azure CLI](how-to-restore-server-cli.md).
+- Learn about [read replicas in Azure Database for PostgreSQL](concepts-read-replicas.md).
postgresql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-certificate-rotation.md
+
+ Title: Certificate rotation for Azure Database for PostgreSQL Single server
+description: Learn about the upcoming changes of root certificate changes that will affect Azure Database for PostgreSQL Single server
+ Last updated: 09/02/2020
+# Understanding the changes in the Root CA change for Azure Database for PostgreSQL Single server
+
+Azure Database for PostgreSQL Single Server successfully completed the root certificate change on **February 15, 2021 (02/15/2021)** as part of standard maintenance and security best practices. This article gives you more details about the changes, the resources affected, and the steps needed to ensure that your application maintains connectivity to your database server.
+
+## Why is a root certificate update required?
+
+Azure Database for PostgreSQL users can only use the predefined certificate to connect to their PostgreSQL server, which is located [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem). However, the [Certificate Authority (CA) Browser forum](https://cabforum.org/) recently published reports of multiple certificates issued by CA vendors to be non-compliant.
+
+As per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs, and signed by CA certificates from those compliant CAs. Since Azure Database for PostgreSQL used one of these non-compliant certificates, we needed to rotate the certificate to the compliant version to minimize the potential threat to your PostgreSQL servers.
+
+The new certificate is rolled out and in effect starting February 15, 2021 (02/15/2021).
+
+## What change was performed on February 15, 2021 (02/15/2021)?
+
+On February 15, 2021, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) was replaced with a **compliant version** of the same [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) to ensure existing customers do not need to change anything and there is no impact to their connections to the server. During this change, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) was **not replaced** with [DigiCertGlobalRootG2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and that change is deferred to allow more time for customers to make the change.
+
+## Do I need to make any changes on my client to maintain connectivity?
+
+There is no change required on the client side. If you followed our previous recommendation below, you will be able to continue to connect as long as the **BaltimoreCyberTrustRoot certificate is not removed** from the combined CA certificate. **We recommend that you do not remove the BaltimoreCyberTrustRoot from your combined CA certificate until further notice, to maintain connectivity.**
+
+### Previous Recommendation
+
+* Download the BaltimoreCyberTrustRoot and DigiCertGlobalRootG2 root CA certificates from the links below:
+ * https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem
+ * https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem
+
+* Generate a combined CA certificate store that includes both the **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** certificates.
+ * For Java (PostgreSQL JDBC) users using DefaultJavaSSLFactory, execute:
+
+ ```console
+ keytool -importcert -alias PostgreSQLServerCACert -file D:\BaltimoreCyberTrustRoot.crt.pem -keystore truststore -storepass password -noprompt
+ ```
+
+ ```console
+ keytool -importcert -alias PostgreSQLServerCACert2 -file D:\DigiCertGlobalRootG2.crt.pem -keystore truststore -storepass password -noprompt
+ ```
+
+ Then replace the original keystore file with the newly generated one:
+ * `System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file");`
+ * `System.setProperty("javax.net.ssl.trustStorePassword","password");`
+
+ * For .NET (Npgsql) users on Windows, make sure **Baltimore CyberTrust Root** and **DigiCert Global Root G2** both exist in Windows Certificate Store, Trusted Root Certification Authorities. If any certificates do not exist, import the missing certificate.
+
+ ![Azure Database for PostgreSQL .net cert](media/overview/netconnecter-cert.png)
+
+ * For .NET (Npgsql) users on Linux using SSL_CERT_DIR, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in the directory indicated by SSL_CERT_DIR. If any certificates do not exist, create the missing certificate file.
+
+ * For other PostgreSQL client users, you can merge the two CA certificate files in the following format (a shell sketch for producing the combined file follows this list):
+
+   -----BEGIN CERTIFICATE-----
+   (Root CA1: BaltimoreCyberTrustRoot.crt.pem)
+   -----END CERTIFICATE-----
+   -----BEGIN CERTIFICATE-----
+   (Root CA2: DigiCertGlobalRootG2.crt.pem)
+   -----END CERTIFICATE-----
+
+* Replace the original root CA pem file with the combined root CA file and restart your application/client.
+* In the future, after the new certificate is deployed on the server side, you can change your CA pem file to DigiCertGlobalRootG2.crt.pem.
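+
+On Linux or macOS, one hedged way to produce the combined file (hypothetical file names) is simple concatenation:
+
+```console
+cat BaltimoreCyberTrustRoot.crt.pem DigiCertGlobalRootG2.crt.pem > combined-root-ca.pem
+```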
+
+> [!NOTE]
+> Please do not drop or alter the **Baltimore certificate** until the certificate change is made. We will send a communication once the change is done, after which it is safe for you to drop the Baltimore certificate.
+
+## Why was the BaltimoreCyberTrustRoot certificate not replaced with DigiCertGlobalRootG2 during this change on February 15, 2021?
+
+We evaluated customer readiness for this change and realized many customers were looking for additional lead time to manage it. In the interest of providing more lead time to customers for readiness, we have decided to defer the certificate change to DigiCertGlobalRootG2 for at least a year, providing sufficient lead time to customers and end users.
+
+Our recommendation is to use the aforementioned steps to create a combined certificate and connect to your server, but not to remove the BaltimoreCyberTrustRoot certificate until we send a communication to remove it.
+
+## What if we removed the BaltimoreCyberTrustRoot certificate?
+
+You will start to see connectivity errors while connecting to your Azure Database for PostgreSQL server. You will need to configure SSL with the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate again to maintain connectivity.
++
+## Frequently asked questions
+
+### 1. If I am not using SSL/TLS, do I still need to update the root CA?
+No action is required if you are not using SSL/TLS.
+
+### 2. If I am using SSL/TLS, do I need to restart my database server to update the root CA?
+No, you do not need to restart the database server to start using the new certificate. This is a client-side change and the incoming client connections need to use the new certificate to ensure that they can connect to the database server.
+
+### 3. How do I know if I'm using SSL/TLS with root certificate verification?
+
+You can identify whether your connections verify the root certificate by reviewing your connection string.
+- If your connection string includes `sslmode=verify-ca` or `sslmode=verify-full`, you need to update the certificate.
+- If your connection string includes `sslmode=disable`, `sslmode=allow`, `sslmode=prefer`, or `sslmode=require`, you do not need to update certificates.
+- If your connection string does not specify sslmode, you do not need to update certificates.
+
+If you are using a client that abstracts the connection string away, review the client's documentation to understand whether it verifies certificates. To understand PostgreSQL sslmode, review the [SSL mode descriptions](https://www.postgresql.org/docs/11/libpq-ssl.html#ssl-mode-descriptions) in the PostgreSQL documentation.
+
+### 4. What is the impact if using App Service with Azure Database for PostgreSQL?
+For Azure App Service connecting to Azure Database for PostgreSQL, there are two possible scenarios, depending on how you are using SSL with your application.
+* This new certificate has been added to App Service at platform level. If you are using the SSL certificates included on App Service platform in your application, then no action is needed.
+* If you are explicitly including the path to SSL cert file in your code, then you would need to download the new cert and update the code to use the new cert. A good example of this scenario is when you use custom containers in App Service as shared in the [App Service documentation](../../app-service/tutorial-multi-container-app.md#configure-database-variables-in-wordpress)
+
+### 5. What is the impact if using Azure Kubernetes Services (AKS) with Azure Database for PostgreSQL?
+If you are trying to connect to Azure Database for PostgreSQL using Azure Kubernetes Services (AKS), it is similar to accessing it from a dedicated customer host environment. Refer to the steps [here](../../aks/ingress-own-tls.md).
+
+### 6. What is the impact if using Azure Data Factory to connect to Azure Database for PostgreSQL?
+For connectors using the Azure Integration Runtime, the connector leverages certificates in the Windows Certificate Store in the Azure-hosted environment. These certificates are already compatible with the newly applied certificates, and therefore no action is needed.
+
+For connectors using the Self-hosted Integration Runtime where you explicitly include the path to the SSL cert file in your connection string, you will need to download the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and update the connection string to use it.
+
+### 7. Do I need to plan a database server maintenance downtime for this change?
+No. Since the change here is only on the client side to connect to the database server, there is no maintenance downtime needed for the database server for this change.
+
+### 8. If I create a new server after February 15, 2021 (02/15/2021), will I be impacted?
+For servers created after February 15, 2021 (02/15/2021), you will continue to use the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) for your applications to connect using SSL.
+
+### 9. How often does Microsoft update their certificates or what is the expiry policy?
+The certificates used by Azure Database for PostgreSQL are provided by trusted Certificate Authorities (CAs), so support for these certificates is tied to their support by the CA. The [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate is scheduled to expire in 2025, so Microsoft will need to perform a certificate change before the expiry. Also, if there are unforeseen bugs in these predefined certificates, Microsoft will need to rotate the certificate as soon as possible, similar to the change performed on February 15, 2021, to ensure the service is secure and compliant at all times.
+
+### 10. If I am using read replicas, do I need to perform this update only on the primary server or the read replicas?
+Since this update is a client-side change, if clients read data from the replica server, you will need to apply the changes for those clients as well.
+
+### 11. Is there a server-side query to verify whether SSL is being used?
+To verify whether you are using an SSL connection to connect to the server, see [SSL verification](concepts-ssl-connection-security.md#applications-that-require-certificate-verification-for-tls-connectivity).
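+
+As a hedged sketch, the following query joins `pg_stat_ssl` with `pg_stat_activity` to show which current connections use SSL:
+
+```sql
+-- Lists each connection and whether it is using SSL
+SELECT pg_stat_activity.datname, pg_stat_activity.usename, pg_stat_ssl.ssl, pg_stat_activity.client_addr
+FROM pg_stat_ssl
+JOIN pg_stat_activity ON pg_stat_ssl.pid = pg_stat_activity.pid;
+```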
+
+### 12. Is there an action needed if I already have the DigiCertGlobalRootG2 in my certificate file?
+No. There is no action needed if your certificate file already has the **DigiCertGlobalRootG2**.
+
+### 13. What if I am using the docker image of the PgBouncer sidecar provided by Microsoft?
+A new docker image that supports both [**Baltimore**](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) and [**DigiCert**](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) is published [here](https://hub.docker.com/_/microsoft-azure-oss-db-tools-pgbouncer-sidecar) (latest tag). You can pull this new image to avoid any interruption in connectivity starting February 15, 2021.
+
+### 14. What if I have further questions?
+If you have questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com). If you have a support plan and you need technical help, [contact us](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com).
postgresql Concepts Connection Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connection-libraries.md
+
+ Title: Connection libraries - Azure Database for PostgreSQL - Single Server
+description: This article describes several libraries and drivers that you can use when coding applications to connect and query Azure Database for PostgreSQL - Single Server.
+ Last updated: 5/6/2019
+# Connection libraries for Azure Database for PostgreSQL - Single Server
+This article lists libraries and drivers that developers can use to develop applications to connect to and query Azure Database for PostgreSQL.
+
+## Client interfaces
+Most language client libraries used to connect to PostgreSQL server are external projects and are distributed independently. The libraries listed are supported on the Windows, Linux, and Mac platforms, for connecting to Azure Database for PostgreSQL. Several quickstart examples are listed in the Next steps section.
+
+| **Language** | **Client interface** | **Additional information** | **Download** |
+|--|-|-|--|
+| Python | [psycopg](http://initd.org/psycopg/) | DB API 2.0-compliant | [Download](http://initd.org/psycopg/download/) |
+| PHP | [php-pgsql](https://secure.php.net/manual/en/book.pgsql.php) | Database extension | [Install](https://secure.php.net/manual/en/pgsql.installation.php) |
+| Node.js | [Pg npm package](https://www.npmjs.com/package/pg) | Pure JavaScript non-blocking client | [Install](https://www.npmjs.com/package/pg) |
+| Java | [JDBC](https://jdbc.postgresql.org/) | Type 4 JDBC driver | [Download](https://jdbc.postgresql.org/download.html)  |
+| Ruby | [Pg gem](https://deveiate.org/code/pg/) | Ruby Interface | [Download](https://rubygems.org/downloads/pg-0.20.0.gem) |
+| Go | [Package pq](https://godoc.org/github.com/lib/pq) | Pure Go postgres driver | [Install](https://github.com/lib/pq/blob/master/README.md) |
+| C\#/ .NET | [Npgsql](https://www.npgsql.org/) | ADO.NET Data Provider | [Download](https://dotnet.microsoft.com/download) |
+| ODBC | [psqlODBC](https://odbc.postgresql.org/) | ODBC Driver | [Download](https://www.postgresql.org/ftp/odbc/versions/) |
+| C | [libpq](https://www.postgresql.org/docs/9.6/static/libpq.html) | Primary C language interface | Included |
+| C++ | [libpqxx](http://pqxx.org/) | New-style C++ interface | [Download](http://pqxx.org/download/software/) |
+
+## Next steps
+Read these quickstarts on how to connect to and query Azure Database for PostgreSQL by using your language of choice:
+
+[Python](./connect-python.md) | [Node.JS](./connect-nodejs.md) | [Java](./connect-java.md) | [Ruby](./connect-ruby.md) | [PHP](./connect-php.md) | [.NET (C#)](./connect-csharp.md) | [Go](./connect-go.md)
postgresql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connectivity-architecture.md
+
+ Title: Connectivity architecture - Azure Database for PostgreSQL - Single Server
+description: Describes the connectivity architecture of your Azure Database for PostgreSQL - Single Server.
+ Last updated: 10/15/2021
+# Connectivity architecture in Azure Database for PostgreSQL
+This article explains the Azure Database for PostgreSQL connectivity architecture as well as how the traffic is directed to your Azure Database for PostgreSQL database instance from clients both within and outside Azure.
+
+## Connectivity architecture
+Connection to your Azure Database for PostgreSQL is established through a gateway that is responsible for routing incoming connections to the physical location of your server in our clusters. The following diagram illustrates the traffic flow.
+++
+As a client connects to the database, the connection string to the server resolves to the gateway IP address. The gateway listens on the IP address on port 5432. Inside the database cluster, traffic is forwarded to the appropriate Azure Database for PostgreSQL server. Therefore, in order to connect to your server, such as from corporate networks, it is necessary to open up the **client-side firewall to allow outbound traffic to be able to reach our gateways**. Below you can find a complete list of the IP addresses used by our gateways per region.
+
+## Azure Database for PostgreSQL gateway IP addresses
+
+The gateway service is hosted on a group of stateless compute nodes sitting behind an IP address, which your client reaches first when trying to connect to an Azure Database for PostgreSQL server.
+
+As part of ongoing service maintenance, we will periodically refresh compute hardware hosting the gateways to ensure we provide the most secure and performant connectivity experience. When the gateway hardware is refreshed, a new ring of compute nodes is built out first. This new ring serves the traffic for all the newly created Azure Database for PostgreSQL servers, and it will have a different IP address from older gateway rings in the same region to differentiate the traffic. The older gateway hardware continues serving existing servers but is planned for decommissioning in the future. Before a gateway's hardware is decommissioned, customers running their servers and connecting to older gateway rings will be notified via email and in the Azure portal three months before decommissioning. The decommissioning of gateways can impact the connectivity to your servers if:
+
+* You hard-code the gateway IP addresses in the connection string of your application. This is **not recommended**. You should use the fully qualified domain name (FQDN) of your server, in the format `<servername>.postgres.database.azure.com`, in the connection string for your application.
+* You do not update the newer gateway IP addresses in the client-side firewall to allow outbound traffic to be able to reach our new gateway rings.
+
+The following table lists the gateway IP addresses of the Azure Database for PostgreSQL gateway for all data regions. The most up-to-date gateway IP addresses for each region are maintained in this table. The columns represent the following:
+
+* **Gateway IP addresses:** This column lists the current IP addresses of the gateways hosted on the latest generation of hardware. If you are provisioning a new server, we recommend that you open the client-side firewall to allow outbound traffic for the IP addresses listed in this column.
+* **Gateway IP addresses (decommissioning):** This column lists the IP addresses of the gateways hosted on an older generation of hardware that is being decommissioned right now. If you are provisioning a new server, you can ignore these IP addresses. If you have an existing server, continue to retain the outbound rule in the firewall for these IP addresses, as we have not decommissioned them yet. If you drop the firewall rules for these IP addresses, you may get connectivity errors. Instead, you are expected to proactively add the new IP addresses listed in the Gateway IP addresses column to the outbound firewall rule as soon as you receive the notification for decommissioning. This will ensure that when your server is migrated to the latest gateway hardware, there are no interruptions in connectivity to your server.
+* **Gateway IP addresses (decommissioned):** This column lists the IP addresses of the gateway rings that are decommissioned and no longer in operation. You can safely remove these IP addresses from your outbound firewall rule.
++
+| **Region name** | **Gateway IP addresses** |**Gateway IP addresses (decommissioning)** | **Gateway IP addresses (decommissioned)** |
+|:-|:-|:-|:|
+| Australia Central| 20.36.105.0 | | |
+| Australia Central2 | 20.36.113.0 | | |
+| Australia East | 13.75.149.87, 40.79.161.1 | | |
+| Australia South East |13.77.48.10, 13.77.49.32, 13.73.109.251 | | |
+| Brazil South |191.233.201.8, 191.233.200.16 | | 104.41.11.5|
+| Canada Central |40.85.224.249, 52.228.35.221 | | |
+| Canada East | 40.86.226.166, 52.242.30.154 | | |
+| Central US | 23.99.160.139, 52.182.136.37, 52.182.136.38 | 13.67.215.62 | |
+| China East | 139.219.130.35 | | |
+| China East 2 | 40.73.82.1, 52.130.120.89 | | |
+| China East 3 | 52.131.155.192 | | |
+| China North | 139.219.15.17 | | |
+| China North 2 | 40.73.50.0 | | |
+| China North 3 | 52.131.27.192 | | |
+| East Asia | 13.75.33.20, 52.175.33.150, 13.75.33.21 | | |
+| East US |40.71.8.203, 40.71.83.113 |40.121.158.30|191.238.6.43 |
+| East US 2 | 40.70.144.38, 52.167.105.38 | 52.177.185.181 | |
+| France Central | 40.79.137.0, 40.79.129.1 | | |
+| France South | 40.79.177.0 | | |
+| Germany Central | 51.4.144.100 | | |
+| Germany North | 51.116.56.0 | | |
+| Germany North East | 51.5.144.179 | | |
+| Germany West Central | 51.116.152.0 | | |
+| India Central | 104.211.96.159 | | |
+| India South | 104.211.224.146 | | |
+| India West | 104.211.160.80 | | |
+| Japan East | 40.79.192.23, 40.79.184.8 | 13.78.61.196 | |
+| Japan West | 104.214.148.156, 40.74.96.6, 40.74.96.7 | 104.214.148.156 | |
+| Korea Central | 52.231.17.13 | 52.231.32.42 | |
+| Korea South | 52.231.145.3 | 52.231.151.97 | |
+| North Central US | 52.162.104.35, 52.162.104.36 | 23.96.178.199 | |
+| North Europe | 52.138.224.6, 52.138.224.7 | 40.113.93.91 |191.235.193.75 |
+| South Africa North | 102.133.152.0 | | |
+| South Africa West | 102.133.24.0 | | |
+| South Central US |104.214.16.39, 20.45.120.0 |13.66.62.124 |23.98.162.75 |
+| South East Asia | 40.78.233.2, 23.98.80.12 | 104.43.15.0 | |
+| Switzerland North | 51.107.56.0 | | |
+| Switzerland West | 51.107.152.0 | | |
+| UAE Central | 20.37.72.64 | | |
+| UAE North | 65.52.248.0 | | |
+| UK South | 51.140.184.11, 51.140.144.32, 51.105.64.0 | | |
+| UK West | 51.141.8.11 | | |
+| West Central US | 13.78.145.25, 52.161.100.158 | | |
+| West Europe |13.69.105.208, 104.40.169.187 | 40.68.37.158 | 191.237.232.75 |
+| West US |13.86.216.212, 13.86.217.212 |104.42.238.205 | 23.99.34.75|
+| West US 2 | 13.66.226.202, 13.66.136.192,13.66.136.195 | | |
+| West US 3 | 20.150.184.2 | | |
+
+## Frequently asked questions
+
+### What do you need to know about this planned maintenance?
+This is a DNS change only, which makes it transparent to clients. While the IP address for the FQDN is changed in the DNS server, the local DNS cache will be refreshed within 5 minutes, and this is done automatically by the operating system. After the local DNS refresh, all new connections will connect to the new IP address, and all existing connections will remain connected to the old IP address with no interruption until the old IP addresses are fully decommissioned. The old IP address will take roughly three to four weeks to be decommissioned; therefore, this change should have no effect on client applications.
+
+### What are we decommissioning?
+Only gateway nodes will be decommissioned. When users connect to their servers, the first stop of the connection is the gateway node, before the connection is forwarded to the server. We are decommissioning old gateway rings, not tenant rings where the server is running. Refer to the [connectivity architecture](#connectivity-architecture) section for more clarification.
+
+### How can you validate if your connections are going to old gateway nodes or new gateway nodes?
+Ping your server's FQDN, for example ``ping xxx.postgres.database.azure.com``. If the returned IP address is one of the IPs listed under Gateway IP addresses (decommissioning) in the table above, it means your connection is going through the old gateway. Conversely, if the returned IP address is one of the IPs listed under Gateway IP addresses, it means your connection is going through the new gateway.
+
+You may also test by using [PSPing](/sysinternals/downloads/psping) or TCPPing to reach the database server from your client application over port 5432, and ensure that the returned IP address isn't one of the decommissioning IP addresses.
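+
+For example, a hedged PSPing check against a hypothetical server name:
+
+```console
+psping mydemoserver.postgres.database.azure.com:5432
+```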
+
+### How do I know when the maintenance is over and will I get another notification when old IP addresses are decommissioned?
+You will receive an email to inform you when we start the maintenance work. The maintenance can take up to one month, depending on the number of servers we need to migrate in all regions. Please prepare your client to connect to the database server using the FQDN or using the new IP address from the table above.
+
+### What do I do if my client applications are still connecting to the old gateway server?
+This indicates that your applications connect to the server using a static IP address instead of the FQDN. Review your connection strings, connection pooling settings, AKS settings, or even the source code.
+
+### Is there any impact for my application connections?
+This maintenance is just a DNS change, so it is transparent to the client. Once the DNS cache is refreshed in the client (done automatically by the operating system), all new connections will connect to the new IP address, and all existing connections will keep working until the old IP address is fully decommissioned, which is usually several weeks later. Retry logic is not required for this case, but it is good practice for the application to have retry logic configured. Either use the FQDN to connect to the database server, or allowlist the new gateway IP addresses in your application connection string.
+This maintenance operation will not drop existing connections. It only makes new connection requests go to the new gateway ring.
+
+### Can I request for a specific time window for the maintenance?
+As the migration should be transparent, with no impact to customers' connectivity, we expect there will be no issue for the majority of users. Review your application proactively and ensure that you either use the FQDN to connect to the database server or allowlist the new gateway IP addresses in your application connection string.
+
+### I am using private link. Will my connections get affected?
+No. This is a gateway hardware decommission and has no relation to private link or private IP addresses. It only affects the public IP addresses listed under the decommissioning IP addresses.
+
+## Next steps
+
+* [Create and manage Azure Database for PostgreSQL firewall rules using the Azure portal](./how-to-manage-firewall-using-portal.md)
+* [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](quickstart-create-server-database-azure-cli.md#configure-a-server-based-firewall-rule)
postgresql Concepts Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connectivity.md
+
+ Title: Handle transient connectivity errors - Azure Database for PostgreSQL - Single Server
+description: Learn how to handle transient connectivity errors for Azure Database for PostgreSQL - Single Server.
+ Last updated: 5/6/2019
+# Handling transient connectivity errors for Azure Database for PostgreSQL - Single Server
+
+This article describes how to handle transient errors connecting to Azure Database for PostgreSQL.
+
+## Transient errors
+
+A transient error, also known as a transient fault, is an error that resolves itself. Most typically, these errors manifest as a dropped connection to the database server, or as an inability to open new connections. Transient errors can occur, for example, when hardware or network failures happen, or when a new version of a PaaS service is being rolled out. Most of these events are automatically mitigated by the system in less than 60 seconds. A best practice for designing and developing applications in the cloud is to expect transient errors: assume they can happen in any component at any time, and have the appropriate logic in place to handle them.
+
+## Handling transient errors
+
+Transient errors should be handled using retry logic. Situations that must be considered:
+
+* An error occurs when you try to open a connection
+* An idle connection is dropped on the server side. When you try to issue a command, it can't be executed
+* An active connection that currently is executing a command is dropped.
+
+The first and second cases are fairly straightforward to handle: try to open the connection again. When you succeed, the transient error has been mitigated by the system and you can use your Azure Database for PostgreSQL again. We recommend waiting before retrying the connection, and backing off if the initial retries fail, so the system can use all available resources to overcome the error situation. A good pattern to follow, sketched in code after this list, is:
+
+* Wait 5 seconds before your first retry.
+* For each subsequent retry, increase the wait exponentially, up to 60 seconds.
+* Set a maximum number of retries, at which point your application considers the operation failed.
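+
+Here's a minimal sketch of that pattern using the `psycopg2` driver; the connection string, retry count, and wait times are illustrative assumptions rather than fixed requirements:
+
+````python
+import time
+import psycopg2
+
+def connect_with_retry(conn_string, max_retries=5):
+    """Open a connection, retrying transient failures with exponential backoff."""
+    wait = 5  # seconds before the first retry, per the pattern above
+    for attempt in range(max_retries):
+        try:
+            return psycopg2.connect(conn_string)
+        except psycopg2.OperationalError:
+            if attempt == max_retries - 1:
+                raise  # give up; the application treats the operation as failed
+            time.sleep(wait)
+            wait = min(wait * 2, 60)  # back off exponentially, capped at 60 seconds
+````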
+
+When a connection with an active transaction fails, it is more difficult to handle the recovery correctly. There are two cases: if the transaction was read-only in nature, it is safe to reopen the connection and retry the transaction. If, however, the transaction was also writing to the database, you must determine whether the transaction was rolled back or whether it succeeded before the transient error happened; you might just not have received the commit acknowledgment from the database server.
+
+One way of doing this is to generate a unique ID on the client that is used for all the retries. You pass this unique ID as part of the transaction to the server and store it in a column with a unique constraint. This way you can safely retry the transaction: it succeeds if the previous transaction was rolled back and the client-generated unique ID does not yet exist in the system, and it fails with a duplicate key violation if the unique ID was previously stored because the previous transaction completed successfully. A sketch of this pattern follows.
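+
+In this minimal sketch, the `orders` table, its columns, and the connection string are hypothetical; the unique constraint on `client_request_id` is what makes the retry safe:
+
+````python
+import uuid
+import psycopg2
+
+# Generated once and reused across every retry of this logical operation.
+request_id = str(uuid.uuid4())
+
+def record_order(conn_string, amount):
+    try:
+        # The connection context manager commits on success, rolls back on error.
+        with psycopg2.connect(conn_string) as conn, conn.cursor() as cur:
+            cur.execute(
+                "INSERT INTO orders (client_request_id, amount) VALUES (%s, %s)",
+                (request_id, amount),
+            )
+    except psycopg2.errors.UniqueViolation:
+        # The duplicate key proves the earlier attempt already committed.
+        pass
+````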
+
+When your program communicates with Azure Database for PostgreSQL through third-party middleware, ask the vendor whether the middleware contains retry logic for transient errors.
+
+Make sure to test your retry logic. For example, try to execute your code while scaling up or down the compute resources of your Azure Database for PostgreSQL server. Your application should handle the brief downtime that is encountered during this operation without any problems.
+
+## Next steps
+
+* [Troubleshoot connection issues to Azure Database for PostgreSQL](how-to-troubleshoot-common-connection-issues.md)
postgresql Concepts Data Access And Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-data-access-and-security-private-link.md
+
+ Title: Private Link - Azure Database for PostgreSQL - Single server
+description: Learn how Private link works for Azure Database for PostgreSQL - Single server.
+ Last updated: 03/10/2020
+# Private Link for Azure Database for PostgreSQL-Single server
+
+Private Link allows you to create private endpoints for Azure Database for PostgreSQL - Single server to bring it inside your Virtual Network (VNet). The private endpoint exposes a private IP within a subnet that you can use to connect to your database server just like any other resource in the VNet.
+
+For a list of PaaS services that support Private Link functionality, review the Private Link [documentation](../../private-link/index.yml). A private endpoint is a private IP address within a specific [VNet](../../virtual-network/virtual-networks-overview.md) and subnet.
+
+> [!NOTE]
+> The private link feature is only available for Azure Database for PostgreSQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers.
+
+## Data exfiltration prevention
+
+Data exfiltration in Azure Database for PostgreSQL Single server is when an authorized user, such as a database admin, is able to extract data from one system and move it to another location or system outside the organization. For example, the user moves the data to a storage account owned by a third party.
+
+Consider a scenario with a user running pgAdmin inside an Azure Virtual Machine (VM) that connects to an Azure Database for PostgreSQL Single server provisioned in West US. The example below shows how to limit access with public endpoints on Azure Database for PostgreSQL Single server using network access controls.
+
+* Disable all Azure service traffic to Azure Database for PostgreSQL Single server via the public endpoint by setting *Allow Azure Services* to OFF. Ensure no IP addresses or ranges are allowed to access the server either via [firewall rules](./concepts-firewall-rules.md) or [virtual network service endpoints](./concepts-data-access-and-security-vnet.md).
+
+* Only allow traffic to the Azure Database for PostgreSQL Single server using the private IP address of the VM. For more information, see the articles on [Service Endpoint](concepts-data-access-and-security-vnet.md) and [VNet firewall rules](how-to-manage-vnet-using-portal.md).
+
+* On the Azure VM, narrow the scope of outgoing connections by using Network Security Groups (NSGs) and service tags as follows (a scripted sketch follows this list):
+
+ * Specify an NSG rule to allow traffic for *Service Tag = SQL.WestUS* - only allowing connections to Azure Database for PostgreSQL Single server in West US
+ * Specify an NSG rule (with a higher priority value, so it is evaluated after the allow rule) to deny traffic for *Service Tag = SQL* - denying connections to PostgreSQL Database in all regions
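+
+As a sketch only, the two rules above might be scripted with the Azure SDK for Python; the subscription, resource group, and NSG names are placeholders, and rules with lower priority values are evaluated first:
+
+````python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.network import NetworkManagementClient
+from azure.mgmt.network.models import SecurityRule
+
+client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
+
+# Evaluated first (lower priority value): allow PostgreSQL traffic to West US only.
+allow_westus = SecurityRule(
+    protocol="Tcp", access="Allow", direction="Outbound", priority=100,
+    source_address_prefix="*", source_port_range="*",
+    destination_address_prefix="Sql.WestUS", destination_port_range="5432",
+)
+
+# Evaluated second: deny PostgreSQL traffic to the SQL service tag in all regions.
+deny_all_sql = SecurityRule(
+    protocol="Tcp", access="Deny", direction="Outbound", priority=110,
+    source_address_prefix="*", source_port_range="*",
+    destination_address_prefix="Sql", destination_port_range="5432",
+)
+
+client.security_rules.begin_create_or_update(
+    "<resource-group>", "<nsg-name>", "allow-sql-westus", allow_westus).result()
+client.security_rules.begin_create_or_update(
+    "<resource-group>", "<nsg-name>", "deny-sql-all", deny_all_sql).result()
+````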
+
+At the end of this setup, the Azure VM can connect only to Azure Database for PostgreSQL Single server in the West US region. However, the connectivity isn't restricted to a single Azure Database for PostgreSQL Single server. The VM can still connect to any Azure Database for PostgreSQL Single server in the West US region, including the databases that aren't part of the subscription. While we've reduced the scope of data exfiltration in the above scenario to a specific region, we haven't eliminated it altogether.
+
+With Private Link, you can now set up network access controls like NSGs to restrict access to the private endpoint. Individual Azure PaaS resources are then mapped to specific private endpoints. A malicious insider can only access the mapped PaaS resource (for example an Azure Database for PostgreSQL Single server) and no other resource.
+
+## On-premises connectivity over private peering
+
+When you connect to the public endpoint from on-premises machines, your IP address needs to be added to the IP-based firewall using a Server-level firewall rule. While this model works well for allowing access to individual machines for dev or test workloads, it's difficult to manage in a production environment.
+
+With Private Link, you can enable cross-premises access to the private endpoint using [Express Route](https://azure.microsoft.com/services/expressroute/) (ER), private peering, or a [VPN tunnel](../../vpn-gateway/index.yml). You can subsequently disable all access via the public endpoint and not use the IP-based firewall.
+
+> [!NOTE]
+> In some cases, the Azure Database for PostgreSQL and the VNet-subnet are in different subscriptions. In these cases, you must ensure the following configuration:
+> - Make sure that both subscriptions have the **Microsoft.DBforPostgreSQL** resource provider registered. For more information, refer to [resource-manager-registration][resource-manager-portal].
+
+## Configure Private Link for Azure Database for PostgreSQL Single server
+
+### Creation Process
+
+Private endpoints are required to enable Private Link. This can be done using the following how-to guides.
+
+* [Azure portal](./how-to-configure-privatelink-portal.md)
+* [CLI](./how-to-configure-privatelink-cli.md)
+
+### Approval Process
+Once the network admin creates the private endpoint (PE), the PostgreSQL admin can manage the private endpoint connection (PEC) to Azure Database for PostgreSQL. This separation of duties between the network admin and the DBA is helpful for managing Azure Database for PostgreSQL connectivity.
+
+* Navigate to the Azure Database for PostgreSQL server resource in the Azure portal.
+ * Select **Private endpoint connections** in the left pane to see a list of all private endpoint connections (PECs) and the corresponding private endpoints (PEs).
+
+* Select an individual PEC from the list.
+
+* The PostgreSQL server admin can choose to approve or reject a PEC and optionally add a short text response.
+
+* After approval or rejection, the list reflects the appropriate state along with the response text.
+
+## Use cases of Private Link for Azure Database for PostgreSQL
+
+Clients can connect to the private endpoint from the same VNet, [peered VNet](../../virtual-network/virtual-network-peering-overview.md) in same region or across regions, or via [VNet-to-VNet connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) across regions. Additionally, clients can connect from on-premises using ExpressRoute, private peering, or VPN tunneling. Below is a simplified diagram showing the common use cases.
+
+### Connecting from an Azure VM in Peered Virtual Network (VNet)
+Configure [VNet peering](../../virtual-network/tutorial-connect-virtual-networks-powershell.md) to establish connectivity to the Azure Database for PostgreSQL - Single server from an Azure VM in a peered VNet.
+
+### Connecting from an Azure VM in VNet-to-VNet environment
+Configure [VNet-to-VNet VPN gateway connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) to establish connectivity to an Azure Database for PostgreSQL - Single server from an Azure VM in a different region or subscription.
+
+### Connecting from an on-premises environment over VPN
+To establish connectivity from an on-premises environment to the Azure Database for PostgreSQL - Single server, choose and implement one of the following options:
+
+* [Point-to-Site connection](../../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md)
+* [Site-to-Site VPN connection](../../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md)
+* [ExpressRoute circuit](../../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md)
+
+## Private Link combined with firewall rules
+
+The following situations and outcomes are possible when you use Private Link in combination with firewall rules:
+
+* If you don't configure any firewall rules, then by default, no traffic will be able to access the Azure Database for PostgreSQL Single server.
+
+* If you configure public traffic or a service endpoint and you create private endpoints, then different types of incoming traffic are authorized by the corresponding type of firewall rule.
+
+* If you don't configure any public traffic or service endpoint and you create private endpoints, then the Azure Database for PostgreSQL Single server is accessible only through the private endpoints. If you don't configure public traffic or a service endpoint, after all approved private endpoints are rejected or deleted, no traffic will be able to access the Azure Database for PostgreSQL Single server.
+
+## Deny public access for Azure Database for PostgreSQL Single server
+
+If you want to rely only on private endpoints for accessing your Azure Database for PostgreSQL Single server, you can disable all public endpoints ([firewall rules](concepts-firewall-rules.md) and [VNet service endpoints](concepts-data-access-and-security-vnet.md)) by setting the **Deny Public Network Access** configuration on the database server.
+
+When this setting is set to *YES*, only connections via private endpoints are allowed to your Azure Database for PostgreSQL. When this setting is set to *NO*, clients can connect to your Azure Database for PostgreSQL based on your firewall or VNet service endpoint settings. Additionally, once public network access is denied, customers cannot add or update existing firewall rules and VNet service endpoint rules.
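+
+As a sketch, this setting maps to the `publicNetworkAccess` server property in Azure Resource Manager; the following Python example disables public access, assuming the 2017-12-01 API version, with all resource names as placeholders:
+
+````python
+import requests
+from azure.identity import DefaultAzureCredential
+
+# Placeholders - substitute your own subscription, resource group, and server.
+subscription = "<subscription-id>"
+resource_group = "<resource-group>"
+server = "<server-name>"
+
+url = (
+    "https://management.azure.com"
+    f"/subscriptions/{subscription}/resourceGroups/{resource_group}"
+    f"/providers/Microsoft.DBforPostgreSQL/servers/{server}"
+    "?api-version=2017-12-01"
+)
+
+# Acquire an ARM token and PATCH the single property we want to change.
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
+response = requests.patch(
+    url,
+    headers={"Authorization": f"Bearer {token}"},
+    json={"properties": {"publicNetworkAccess": "Disabled"}},
+)
+response.raise_for_status()
+````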
+
+> [!Note]
+> This feature is available in all Azure regions where Azure Database for PostgreSQL - Single server supports General Purpose and Memory Optimized pricing tiers.
+>
+> This setting does not have any impact on the SSL and TLS configurations for your Azure Database for PostgreSQL Single server.
+
+To learn how to set the **Deny Public Network Access** for your Azure Database for PostgreSQL Single server from Azure portal, refer to [How to configure Deny Public Network Access](how-to-deny-public-network-access.md).
+
+## Next steps
+
+To learn more about Azure Database for PostgreSQL Single server security features, see the following articles:
+
+* To configure a firewall for Azure Database for PostgreSQL Single server, see [Firewall support](./concepts-firewall-rules.md).
+
+* To learn how to configure a virtual network service endpoint for your Azure Database for PostgreSQL Single server, see [Configure access from virtual networks](./concepts-data-access-and-security-vnet.md).
+
+* For an overview of Azure Database for PostgreSQL Single server connectivity, see [Azure Database for PostgreSQL Connectivity Architecture](./concepts-connectivity-architecture.md).
+
+<!-- Link references, to text, Within this same GitHub repo. -->
+[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
postgresql Concepts Data Access And Security Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-data-access-and-security-vnet.md
+
+ Title: Virtual network rules - Azure Database for PostgreSQL - Single Server
+description: Learn how to use virtual network (vnet) service endpoints to connect to Azure Database for PostgreSQL - Single Server.
+ Last updated: 07/17/2020
+# Use Virtual Network service endpoints and rules for Azure Database for PostgreSQL - Single Server
+
+*Virtual network rules* are one firewall security feature that controls whether your Azure Database for PostgreSQL server accepts communications that are sent from particular subnets in virtual networks. This article explains why the virtual network rule feature is sometimes your best option for securely allowing communication to your Azure Database for PostgreSQL server.
+
+To create a virtual network rule, there must first be a [virtual network][vm-virtual-network-overview] (VNet) and a [virtual network service endpoint][vm-virtual-network-service-endpoints-overview-649d] for the rule to reference. The following picture illustrates how a Virtual Network service endpoint works with Azure Database for PostgreSQL:
+
+> [!NOTE]
+> This feature is available in all regions of Azure public cloud where Azure Database for PostgreSQL is deployed for General Purpose and Memory Optimized servers.
+> In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, please create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for PostgreSQL server.
+
+You can also consider using [Private Link](concepts-data-access-and-security-private-link.md) for connections. Private Link provides a private IP address in your VNet for the Azure Database for PostgreSQL server.
+
+<a name="anch-terminology-and-description-82f"></a>
+## Terminology and description
+
+**Virtual network:** You can have virtual networks associated with your Azure subscription.
+
+**Subnet:** A virtual network contains **subnets**. Any Azure virtual machine (VM) within the VNet is assigned to a subnet. A subnet can contain multiple VMs or other compute nodes. Compute nodes that are outside of your virtual network cannot access your virtual network unless you configure your security to allow access.
+
+**Virtual Network service endpoint:** A [Virtual Network service endpoint][vm-virtual-network-service-endpoints-overview-649d] is a subnet whose property values include one or more formal Azure service type names. In this article, we are interested in the type name of **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure Database for PostgreSQL and MySQL services. It is important to note that applying the **Microsoft.Sql** service tag to a VNet service endpoint configures service endpoint traffic for all Azure Database services, including Azure SQL Database, Azure Database for PostgreSQL, and Azure Database for MySQL.
+
+**Virtual network rule:** A virtual network rule for your Azure Database for PostgreSQL server is a subnet that is listed in the access control list (ACL) of your Azure Database for PostgreSQL server. To be in the ACL for your Azure Database for PostgreSQL server, the subnet must contain the **Microsoft.Sql** type name.
+
+A virtual network rule tells your Azure Database for PostgreSQL server to accept communications from every node that is on the subnet.
+
+<a name="anch-details-about-vnet-rules-38q"></a>
+
+## Benefits of a virtual network rule
+
+Until you take action, the VMs in your subnet(s) cannot communicate with your Azure Database for PostgreSQL server. One action that establishes the communication is the creation of a virtual network rule. The rationale for choosing the VNet rule approach requires a compare-and-contrast discussion involving the competing security options offered by the firewall.
+
+### Allow access to Azure services
+
+The Connection security pane has an **ON/OFF** button that is labeled **Allow access to Azure services**. The **ON** setting allows communications from all Azure IP addresses and all Azure subnets. These Azure IPs or subnets might not be owned by you. This **ON** setting is probably more open than you want your Azure Database for PostgreSQL to be. The virtual network rule feature offers much finer granular control.
+
+### IP rules
+
+The Azure Database for PostgreSQL firewall allows you to specify IP address ranges from which communications are accepted into the Azure Database for PostgreSQL Database. This approach is fine for stable IP addresses that are outside the Azure private network. But many nodes inside the Azure private network are configured with *dynamic* IP addresses. Dynamic IP addresses might change, such as when your VM is restarted. It would be folly to specify a dynamic IP address in a firewall rule in a production environment.
+
+You can salvage the IP option by obtaining a *static* IP address for your VM. For details, see [Configure private IP addresses for a virtual machine by using the Azure portal][vm-configure-private-ip-addresses-for-a-virtual-machine-using-the-azure-portal-321w].
+
+However, the static IP approach can become difficult to manage, and it is costly when done at scale. Virtual network rules are easier to establish and to manage.
+
+<a name="anch-details-about-vnet-rules-38q"></a>
+
+## Details about virtual network rules
+
+This section describes several details about virtual network rules.
+
+### Only one geographic region
+
+Each Virtual Network service endpoint applies to only one Azure region. The endpoint does not enable other regions to accept communication from the subnet.
+
+Any virtual network rule is limited to the region that its underlying endpoint applies to.
+
+### Server-level, not database-level
+
+Each virtual network rule applies to your whole Azure Database for PostgreSQL server, not just to one particular database on the server. In other words, a virtual network rule applies at the server level, not at the database level.
+
+#### Security administration roles
+
+There is a separation of security roles in the administration of Virtual Network service endpoints. Action is required from each of the following roles:
+
+- **Network Admin:** Turn on the endpoint.
+- **Database Admin:** Update the access control list (ACL) to add the given subnet to the Azure Database for PostgreSQL server.
+
+*Azure RBAC alternative:*
+
+The roles of Network Admin and Database Admin have more capabilities than are needed to manage virtual network rules. Only a subset of their capabilities is needed.
+
+You have the option of using [Azure role-based access control (Azure RBAC)][rbac-what-is-813s] in Azure to create a single custom role that has only the necessary subset of capabilities. The custom role could be used instead of involving either the Network Admin or the Database Admin. The surface area of your security exposure is lower if you add a user to a custom role, versus adding the user to the other two major administrator roles.
+
+> [!NOTE]
+> In some cases, the Azure Database for PostgreSQL and the VNet-subnet are in different subscriptions. In these cases, you must ensure the following configurations:
+> - Both subscriptions must be in the same Azure Active Directory tenant.
+> - The user has the required permissions to initiate operations, such as enabling service endpoints and adding a VNet-subnet to the given server.
+> - Make sure that both subscriptions have the **Microsoft.Sql** and **Microsoft.DBforPostgreSQL** resource providers registered. For more information, refer to [resource-manager-registration][resource-manager-portal].
+
+## Limitations
+
+For Azure Database for PostgreSQL, the virtual network rules feature has the following limitations:
+
+- A Web App can be mapped to a private IP in a VNet/subnet. Even if service endpoints are turned ON from the given VNet/subnet, connections from the Web App to the server will have an Azure public IP source, not a VNet/subnet source. To enable connectivity from a Web App to a server that has VNet firewall rules, you must enable **Allow access to Azure services** on the server.
+
+- In the firewall for your Azure Database for PostgreSQL, each virtual network rule references a subnet. All these referenced subnets must be hosted in the same geographic region that hosts the Azure Database for PostgreSQL.
+
+- Each Azure Database for PostgreSQL server can have up to 128 ACL entries for any given virtual network.
+
+- Virtual network rules apply only to Azure Resource Manager virtual networks, not to [classic deployment model][arm-deployment-model-568f] networks.
+
+- Turning ON virtual network service endpoints to Azure Database for PostgreSQL using the **Microsoft.Sql** service tag also enables the endpoints for all Azure Database services, including Azure SQL Database, Azure Database for PostgreSQL, and Azure Database for MySQL.
+
+- Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
+
+- If **Microsoft.Sql** is enabled in a subnet, it indicates that you only want to use VNet rules to connect. [Non-VNet firewall rules](concepts-firewall-rules.md) of resources in that subnet will not work.
+
+- On the firewall, IP address ranges do apply to the following networking items, but virtual network rules do not:
+ - [Site-to-Site (S2S) virtual private network (VPN)][vpn-gateway-indexmd-608y]
+ - On-premises via [ExpressRoute][expressroute-indexmd-744v]
+
+## ExpressRoute
+
+If your network is connected to the Azure network through the use of [ExpressRoute][expressroute-indexmd-744v], each circuit is configured with two public IP addresses at the Microsoft edge. The two IP addresses are used to connect to Microsoft services, such as Azure Storage, by using Azure public peering.
+
+To allow communication from your circuit to Azure Database for PostgreSQL, you must create IP network rules for the public IP addresses of your circuits. In order to find the public IP addresses of your ExpressRoute circuit, open a support ticket with ExpressRoute by using the Azure portal.
+
+## Adding a VNet firewall rule to your server without turning on VNet service endpoints
+
+Merely setting a VNet firewall rule does not help secure the server to the VNet. You must also turn VNet service endpoints **On** for the security to take effect. When you turn service endpoints **On**, your VNet-subnet experiences downtime until it completes the transition from **Off** to **On**. This is especially true in the context of large VNets. You can use the **IgnoreMissingServiceEndpoint** flag to reduce or eliminate the downtime during transition.
+
+You can set the **IgnoreMissingServiceEndpoint** flag by using the Azure CLI or portal.
+
+## Related articles
+- [Azure virtual networks][vm-virtual-network-overview]
+- [Azure virtual network service endpoints][vm-virtual-network-service-endpoints-overview-649d]
+
+## Next steps
+For articles on creating VNet rules, see:
+- [Create and manage Azure Database for PostgreSQL VNet rules using the Azure portal](how-to-manage-vnet-using-portal.md)
+- [Create and manage Azure Database for PostgreSQL VNet rules using Azure CLI](how-to-manage-vnet-using-cli.md)
+
+<!-- Link references, to text, Within this same GitHub repo. -->
+[arm-deployment-model-568f]: ../../azure-resource-manager/management/deployment-models.md
+
+[vm-virtual-network-overview]: ../../virtual-network/virtual-networks-overview.md
+
+[vm-virtual-network-service-endpoints-overview-649d]: ../../virtual-network/virtual-network-service-endpoints-overview.md
+
+[vm-configure-private-ip-addresses-for-a-virtual-machine-using-the-azure-portal-321w]: ../../virtual-network/ip-services/virtual-networks-static-private-ip-arm-pportal.md
+
+[rbac-what-is-813s]: ../../role-based-access-control/overview.md
+
+[vpn-gateway-indexmd-608y]: ../../vpn-gateway/index.yml
+
+[expressroute-indexmd-744v]: ../../expressroute/index.yml
+
+[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
postgresql Concepts Data Encryption Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-data-encryption-postgresql.md
+
+ Title: Data encryption with customer-managed key - Azure Database for PostgreSQL - Single server
+description: Azure Database for PostgreSQL Single server data encryption with a customer-managed key enables you to Bring Your Own Key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data.
+ Last updated: 01/13/2020
+# Azure Database for PostgreSQL Single server data encryption with a customer-managed key
+
+Azure PostgreSQL leverages [Azure Storage encryption](../../storage/common/storage-service-encryption.md) to encrypt data at rest by default using Microsoft-managed keys. For Azure PostgreSQL users, it is very similar to Transparent Data Encryption (TDE) in other databases such as SQL Server. Many organizations require full control over access to the data using a customer-managed key. Data encryption with customer-managed keys for Azure Database for PostgreSQL Single server enables you to bring your own key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you are responsible for, and in full control of, a key's lifecycle, key usage permissions, and auditing of operations on keys.
+
+Data encryption with customer-managed keys for Azure Database for PostgreSQL Single server is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault](../../key-vault/general/security-features.md) instance. The key encryption key (KEK) and data encryption key (DEK) are described in more detail later in this article.
+
+Key Vault is a cloud-based, external key management system. It's highly available and provides scalable, secure storage for RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs). It doesn't allow direct access to a stored key, but does provide encryption and decryption services to authorized entities. Key Vault can generate the key, import it, or [have it transferred from an on-premises HSM device](../../key-vault/keys/hsm-protected-keys.md).
+
+> [!NOTE]
+> This feature is available in all Azure regions where Azure Database for PostgreSQL Single server supports "General Purpose" and "Memory Optimized" pricing tiers. For other limitations, refer to the [limitation](concepts-data-encryption-postgresql.md#limitations) section.
+
+## Benefits
+
+Data encryption with customer-managed keys for Azure Database for PostgreSQL Single server provides the following benefits:
+
+* You fully control data access: removing the key makes the database inaccessible.
+* Full control over the key lifecycle, including rotation of the key to align with corporate policies.
+* Central management and organization of keys in Azure Key Vault.
+* Enabling encryption does not have any additional performance impact, with or without a customer-managed key (CMK), because PostgreSQL relies on the Azure storage layer for data encryption in both scenarios. The only difference is that when a CMK is used, the **Azure Storage encryption key**, which performs the actual data encryption, is itself encrypted using the CMK.
+* Ability to implement separation of duties between security officers, DBAs, and system administrators.
+
+## Terminology and description
+
+**Data encryption key (DEK)**: A symmetric AES 256 key used to encrypt a partition or block of data. Encrypting each block of data with a different key makes cryptanalysis attacks more difficult. Access to DEKs is needed by the resource provider or application instance that is encrypting and decrypting a specific block. When you replace a DEK with a new key, only the data in its associated block must be re-encrypted with the new key.
+
+**Key encryption key (KEK)**: An encryption key used to encrypt the DEKs. A KEK that never leaves Key Vault allows the DEKs themselves to be encrypted and controlled. The entity that has access to the KEK might be different from the entity that requires the DEK. Because the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which all DEKs can be deleted, by deleting the KEK.
+
+The DEKs, encrypted with the KEKs, are stored separately. Only an entity with access to the KEK can decrypt these DEKs. For more information, see [Security in encryption at rest](../../security/fundamentals/encryption-atrest.md).
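+
+To make the wrap/unwrap relationship concrete, here's an illustrative local sketch using the Python `cryptography` package. It only models the concept (the service performs these steps for you), and the RSA key below merely stands in for the KEK held in Key Vault:
+
+````python
+import os
+from cryptography.hazmat.primitives import hashes
+from cryptography.hazmat.primitives.asymmetric import rsa, padding
+from cryptography.hazmat.primitives.ciphers.aead import AESGCM
+
+# KEK: an RSA 2048 key pair standing in for the key held in Key Vault.
+kek = rsa.generate_private_key(public_exponent=65537, key_size=2048)
+
+# DEK: a symmetric AES 256 key that encrypts the actual block of data.
+dek = AESGCM.generate_key(bit_length=256)
+nonce = os.urandom(12)
+ciphertext = AESGCM(dek).encrypt(nonce, b"sensitive row data", None)
+
+# Wrap the DEK with the KEK's public key; only the KEK holder can unwrap it.
+oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
+                    algorithm=hashes.SHA256(), label=None)
+wrapped_dek = kek.public_key().encrypt(dek, oaep)
+
+# Unwrapping requires the KEK - revoke or delete it, and the wrapped DEK
+# (and therefore the data) becomes unreadable.
+recovered_dek = kek.decrypt(wrapped_dek, oaep)
+assert AESGCM(recovered_dek).decrypt(nonce, ciphertext, None) == b"sensitive row data"
+````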
+
+## How data encryption with a customer-managed key works
+
+For a PostgreSQL server to use customer-managed keys stored in Key Vault for encryption of the DEK, a Key Vault administrator gives the following access rights to the server:
+
+* **get**: For retrieving the public part and properties of the key in the key vault.
+* **wrapKey**: To be able to encrypt the DEK. The encrypted DEK is stored in the Azure Database for PostgreSQL.
+* **unwrapKey**: To be able to decrypt the DEK. Azure Database for PostgreSQL needs the decrypted DEK to encrypt/decrypt the data.
+
+The key vault administrator can also [enable logging of Key Vault audit events](../../azure-monitor/insights/key-vault-insights-overview.md), so they can be audited later.
+
+When the server is configured to use the customer-managed key stored in the key vault, the server sends the DEK to the key vault for encryption. Key Vault returns the encrypted DEK, which is stored in the user database. Similarly, when needed, the server sends the protected DEK to the key vault for decryption. Auditors can use Azure Monitor to review Key Vault audit event logs, if logging is enabled.
+
+## Requirements for configuring data encryption for Azure Database for PostgreSQL Single server
+
+The following are requirements for configuring Key Vault:
+
+* Key Vault and Azure Database for PostgreSQL Single server must belong to the same Azure Active Directory (Azure AD) tenant. Cross-tenant Key Vault and server interactions aren't supported. Moving the Key Vault resource afterwards requires you to reconfigure the data encryption.
+* The key vault must have 'Days to retain deleted vaults' set to 90 days. If the existing key vault has been configured with a lower number, you will need to create a new key vault, because this setting cannot be modified after creation.
+* Enable the soft-delete feature on the key vault, to protect from data loss if an accidental key (or key vault) deletion happens. Soft-deleted resources are retained for 90 days, unless the user recovers or purges them in the meantime. The recover and purge actions have their own permissions associated in a Key Vault access policy. The soft-delete feature is off by default, but you can enable it through PowerShell or the Azure CLI (note that you can't enable it through the Azure portal).
+* Enable purge protection to enforce a mandatory retention period for deleted vaults and vault objects.
+* Grant the Azure Database for PostgreSQL Single server access to the key vault with the get, wrapKey, and unwrapKey permissions by using its unique managed identity. In the Azure portal, the unique 'Service' identity is automatically created when data encryption is enabled on the PostgreSQL Single server. See [Data encryption for Azure Database for PostgreSQL Single server by using the Azure portal](how-to-data-encryption-portal.md) for detailed, step-by-step instructions when you're using the Azure portal.
+
+The following are requirements for configuring the customer-managed key:
+
+* The customer-managed key to be used for encrypting the DEK can only be an asymmetric RSA 2048 key.
+* The key activation date (if set) must be a date and time in the past. The expiration date (if set) must be a future date and time.
+* The key must be in the *Enabled* state.
+* If you're [importing an existing key](/rest/api/keyvault/keys/import-key/import-key) into the key vault, make sure to provide it in the supported file formats (`.pfx`, `.byok`, `.backup`).
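+
+As a sketch, a key meeting the requirements above can be created with the `azure-keyvault-keys` library; the vault URL and key name below are placeholders:
+
+````python
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.keys import KeyClient
+
+# Placeholder vault URL - replace with your own key vault.
+client = KeyClient(vault_url="https://<your-vault>.vault.azure.net",
+                   credential=DefaultAzureCredential())
+
+# Create an enabled RSA 2048 key, as the requirements above call for.
+key = client.create_rsa_key("pgsql-data-encryption-key", size=2048)
+print(key.name, key.key_type, key.properties.enabled)
+````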
+
+## Recommendations
+
+When you're using data encryption by using a customer-managed key, here are recommendations for configuring Key Vault:
+
+* Set a resource lock on Key Vault to control who can delete this critical resource and prevent accidental or unauthorized deletion.
+* Enable auditing and reporting on all encryption keys. Key Vault provides logs that are easy to inject into other security information and event management tools. Azure Monitor Log Analytics is one example of a service that's already integrated.
+* Ensure that Key Vault and Azure Database for PostgreSQL Single server reside in the same region, to ensure faster access for DEK wrap and unwrap operations.
+* Lock down the Azure Key Vault to only **private endpoint and selected networks** and allow only *trusted Microsoft services* to access the resources.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/keyvault-trusted-service.png" alt-text="trusted-service-with-AKV":::
+
+Here are recommendations for configuring a customer-managed key:
+
+* Keep a copy of the customer-managed key in a secure place, or escrow it to the escrow service.
+
+* If Key Vault generates the key, create a key backup before using the key for the first time. You can only restore the backup to Key Vault. For more information about the backup command, see [Backup-AzKeyVaultKey](/powershell/module/az.keyVault/backup-azkeyVaultkey).
+
+## Inaccessible customer-managed key condition
+
+When you configure data encryption with a customer-managed key in Key Vault, continuous access to this key is required for the server to stay online. If the server loses access to the customer-managed key in Key Vault, the server begins denying all connections within 10 minutes. The server issues a corresponding error message and changes the server state to *Inaccessible*. Some of the reasons why the server can reach this state are:
+
+* If you create a Point In Time Restore server for your Azure Database for PostgreSQL Single server that has data encryption enabled, the newly created server will be in an *Inaccessible* state. You can fix the server state through the [Azure portal](how-to-data-encryption-portal.md#using-data-encryption-for-restore-or-replica-servers) or [CLI](how-to-data-encryption-cli.md#using-data-encryption-for-restore-or-replica-servers).
+* If you create a read replica for your Azure Database for PostgreSQL Single server that has data encryption enabled, the replica server will be in an *Inaccessible* state. You can fix the server state through the [Azure portal](how-to-data-encryption-portal.md#using-data-encryption-for-restore-or-replica-servers) or [CLI](how-to-data-encryption-cli.md#using-data-encryption-for-restore-or-replica-servers).
+* If you delete the key vault, the Azure Database for PostgreSQL Single server will be unable to access the key and will move to an *Inaccessible* state. Recover the [Key Vault](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the server *Available*.
+* If you delete the key from the key vault, the Azure Database for PostgreSQL Single server will be unable to access the key and will move to an *Inaccessible* state. Recover the [Key](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the server *Available*.
+* If the key stored in the Azure Key Vault expires, the key becomes invalid and the Azure Database for PostgreSQL Single server transitions into an *Inaccessible* state. Extend the key expiry date using the [CLI](/cli/azure/keyvault/key#az-keyvault-key-set-attributes) and then revalidate the data encryption to make the server *Available*.
+
+### Accidental key access revocation from Key Vault
+
+It might happen that someone with sufficient access rights to Key Vault accidentally disables server access to the key by:
+
+* Revoking the key vault's get, wrapKey, and unwrapKey permissions from the server.
+* Deleting the key.
+* Deleting the key vault.
+* Changing the key vault's firewall rules.
+
+* Deleting the managed identity of the server in Azure AD.
+
+## Monitor the customer-managed key in Key Vault
+
+To monitor the database state, and to enable alerting for the loss of transparent data encryption protector access, configure the following Azure features:
+
+* [Azure Resource Health](../../service-health/resource-health-overview.md): An inaccessible database that has lost access to the customer key shows as "Inaccessible" after the first connection to the database has been denied.
+* [Activity log](../../service-health/alerts-activity-log-service-notifications-portal.md): When access to the customer key in the customer-managed Key Vault fails, entries are added to the activity log. You can reinstate access as soon as possible, if you create alerts for these events.
+
+* [Action groups](../../azure-monitor/alerts/action-groups.md): Define these groups to send you notifications and alerts based on your preferences.
+
+## Restore and replicate with a customer-managed key in Key Vault
+
+After Azure Database for PostgreSQL Single server is encrypted with a customer-managed key stored in Key Vault, any newly created copy of the server is also encrypted. You can make this new copy either through a local or geo-restore operation, or through read replicas. However, the copy can be changed to reflect a new customer-managed key for encryption. When the customer-managed key is changed, old backups of the server start using the latest key.
+
+To avoid issues while setting up customer-managed data encryption during restore or read replica creation, it's important to follow these steps on the primary and restored/replica servers:
+
+* Initiate the restore or read replica creation process from the primary Azure Database for PostgreSQL Single server.
+* Keep the newly created server (restored/replica) in an inaccessible state, because its unique identity hasn't yet been given permissions to Key Vault.
+* On the restored/replica server, revalidate the customer-managed key in the data encryption settings. This ensures that the newly created server is given wrap and unwrap permissions to the key stored in Key Vault.
+
+## Limitations
+
+For Azure Database for PostgreSQL, the support for encryption of data at rest using a customer-managed key (CMK) has a few limitations:
+
+* Support for this functionality is limited to **General Purpose** and **Memory Optimized** pricing tiers.
+* This feature is only supported in regions and on servers that support storage up to 16 TB. For the list of Azure regions supporting storage up to 16 TB, refer to the storage section in the documentation [here](concepts-pricing-tiers.md#storage)
+
+ > [!NOTE]
+ > - For all new PostgreSQL servers created in the regions listed above, support for encryption with customer-managed keys is **available**. A Point In Time Restored (PITR) server or read replica will not qualify, though in theory they are 'new'.
+ > - To validate whether your provisioned server supports up to 16 TB, go to the pricing tier blade in the portal and see the maximum storage size supported by your provisioned server. If you can move the slider only up to 4 TB, your server may not support encryption with customer-managed keys. However, the data is encrypted using service-managed keys at all times. Please reach out to AskAzureDBforPostgreSQL@service.microsoft.com if you have any questions.
+
+* Encryption is only supported with an RSA 2048 cryptographic key.
+
+## Next steps
+
+Learn how to [set up data encryption with a customer-managed key for your Azure database for PostgreSQL Single server by using the Azure portal](how-to-data-encryption-portal.md).
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-extensions.md
+
+ Title: Extensions - Azure Database for PostgreSQL - Single Server
+description: Learn about the available PostgreSQL extensions in Azure Database for PostgreSQL - Single Server
+ Last updated: 03/25/2021
+# PostgreSQL extensions in Azure Database for PostgreSQL - Single Server
+PostgreSQL provides the ability to extend the functionality of your database using extensions. Extensions bundle multiple related SQL objects together in a single package that can be loaded or removed from your database with a single command. After being loaded in the database, extensions function like built-in features.
+
+## How to use PostgreSQL extensions
+PostgreSQL extensions must be installed in your database before you can use them. To install a particular extension, run the [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) command from the psql tool to load the packaged objects into your database.
+
+Azure Database for PostgreSQL supports a subset of key extensions, as listed below. This information is also available by running `SELECT * FROM pg_available_extensions;`. Extensions beyond the ones listed are not supported. You cannot create your own extension in Azure Database for PostgreSQL.
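+
+For example, a minimal sketch that installs an extension and lists what's available might look like the following; the connection string is a placeholder, and `pg_stat_statements` is just one of the supported extensions listed below:
+
+````python
+import psycopg2
+
+# Placeholder connection string; single server uses the user@servername format.
+conn = psycopg2.connect(
+    "host=<server>.postgres.database.azure.com dbname=mydb "
+    "user=myadmin@<server> password=<password> sslmode=require"
+)
+with conn, conn.cursor() as cur:
+    # Load a supported extension into the current database.
+    cur.execute("CREATE EXTENSION IF NOT EXISTS pg_stat_statements;")
+    # See which extensions are available on this server.
+    cur.execute("SELECT name, default_version FROM pg_available_extensions ORDER BY name;")
+    for name, version in cur.fetchall():
+        print(name, version)
+conn.close()
+````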
+
+## Postgres 11 extensions
+
+The following extensions are available in Azure Database for PostgreSQL servers which have Postgres version 11.
+
+> [!div class="mx-tableFixed"]
+> | **Extension**| **Extension version** | **Description** |
+> ||||
+> |[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.5.1 | Used to parse an address into constituent elements. |
+> |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.5.1 | Address Standardizer US dataset example|
+> |[btree_gin](https://www.postgresql.org/docs/11/btree-gin.html) | 1.3 | support for indexing common datatypes in GIN|
+> |[btree_gist](https://www.postgresql.org/docs/11/btree-gist.html) | 1.5 | support for indexing common datatypes in GiST|
+> |[citext](https://www.postgresql.org/docs/11/citext.html) | 1.5 | data type for case-insensitive character strings|
+> |[cube](https://www.postgresql.org/docs/11/cube.html) | 1.4 | data type for multidimensional cubes|
+> |[dblink](https://www.postgresql.org/docs/11/dblink.html) | 1.2 | connect to other PostgreSQL databases from within a database|
+> |[dict_int](https://www.postgresql.org/docs/11/dict-int.html) | 1.0 | text search dictionary template for integers|
+> |[earthdistance](https://www.postgresql.org/docs/11/earthdistance.html) | 1.1 | calculate great-circle distances on the surface of the Earth|
+> |[fuzzystrmatch](https://www.postgresql.org/docs/11/fuzzystrmatch.html) | 1.1 | determine similarities and distance between strings|
+> |[hstore](https://www.postgresql.org/docs/11/hstore.html) | 1.5 | data type for storing sets of (key, value) pairs|
+> |[hypopg](https://hypopg.readthedocs.io/en/latest/) | 1.1.2 | Hypothetical indexes for PostgreSQL|
+> |[intarray](https://www.postgresql.org/docs/11/intarray.html) | 1.2 | functions, operators, and index support for 1-D arrays of integers|
+> |[isn](https://www.postgresql.org/docs/11/isn.html) | 1.2 | data types for international product numbering standards|
+> |[ltree](https://www.postgresql.org/docs/11/ltree.html) | 1.1 | data type for hierarchical tree-like structures|
+> |[orafce](https://github.com/orafce/orafce) | 3.7 | Functions and operators that emulate a subset of functions and packages from commercial RDBMS|
+> |[pgaudit](https://www.pgaudit.org/) | 1.3.1 | provides auditing functionality|
+> |[pgcrypto](https://www.postgresql.org/docs/11/pgcrypto.html) | 1.3 | cryptographic functions|
+> |[pgrouting](https://pgrouting.org/) | 2.6.2 | pgRouting Extension|
+> |[pgrowlocks](https://www.postgresql.org/docs/11/pgrowlocks.html) | 1.2 | show row-level locking information|
+> |[pgstattuple](https://www.postgresql.org/docs/11/pgstattuple.html) | 1.5 | show tuple-level statistics|
+> |[pg_buffercache](https://www.postgresql.org/docs/11/pgbuffercache.html) | 1.3 | examine the shared buffer cache|
+> |[pg_partman](https://github.com/pgpartman/pg_partman) | 4.0.0 | Extension to manage partitioned tables by time or ID|
+> |[pg_prewarm](https://www.postgresql.org/docs/11/pgprewarm.html) | 1.2 | prewarm relation data|
+> |[pg_stat_statements](https://www.postgresql.org/docs/11/pgstatstatements.html) | 1.6 | track execution statistics of all SQL statements executed|
+> |[pg_trgm](https://www.postgresql.org/docs/11/pgtrgm.html) | 1.4 | text similarity measurement and index searching based on trigrams|
+> |[plpgsql](https://www.postgresql.org/docs/11/plpgsql.html) | 1.0 | PL/pgSQL procedural language|
+> |[plv8](https://plv8.github.io/) | 2.3.11 | PL/JavaScript (v8) trusted procedural language|
+> |[postgis](https://www.postgis.net/) | 2.5.1 | PostGIS geometry, geography, and raster spatial types and functions|
+> |[postgis_sfcgal](https://www.postgis.net/) | 2.5.1 | PostGIS SFCGAL functions|
+> |[postgis_tiger_geocoder](https://www.postgis.net/) | 2.5.1 | PostGIS tiger geocoder and reverse geocoder|
+> |[postgis_topology](https://postgis.net/docs/Topology.html) | 2.5.1 | PostGIS topology spatial types and functions|
+> |[postgres_fdw](https://www.postgresql.org/docs/11/postgres-fdw.html) | 1.0 | foreign-data wrapper for remote PostgreSQL servers|
+> |[tablefunc](https://www.postgresql.org/docs/11/tablefunc.html) | 1.0 | functions that manipulate whole tables, including crosstab|
+> |[timescaledb](https://docs.timescale.com/timescaledb/latest/) |1.7.4 | Enables scalable inserts and complex queries for time-series data|
+> |[unaccent](https://www.postgresql.org/docs/11/unaccent.html) | 1.1 | text search dictionary that removes accents|
+> |[uuid-ossp](https://www.postgresql.org/docs/11/uuid-ossp.html) | 1.1 | generate universally unique identifiers (UUIDs)|
+
+## Postgres 10 extensions
+
+The following extensions are available in Azure Database for PostgreSQL servers which have Postgres version 10.
+
+> [!div class="mx-tableFixed"]
+> | **Extension**| **Extension version** | **Description** |
+> ||||
+> |[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.5.1 | Used to parse an address into constituent elements. |
+> |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.5.1 | Address Standardizer US dataset example|
+> |[btree_gin](https://www.postgresql.org/docs/10/btree-gin.html) | 1.3 | support for indexing common datatypes in GIN|
+> |[btree_gist](https://www.postgresql.org/docs/10/btree-gist.html) | 1.5 | support for indexing common datatypes in GiST|
+> |[chkpass](https://www.postgresql.org/docs/10/chkpass.html) | 1.0 | data type for auto-encrypted passwords|
+> |[citext](https://www.postgresql.org/docs/10/citext.html) | 1.4 | data type for case-insensitive character strings|
+> |[cube](https://www.postgresql.org/docs/10/cube.html) | 1.2 | data type for multidimensional cubes|
+> |[dblink](https://www.postgresql.org/docs/10/dblink.html) | 1.2 | connect to other PostgreSQL databases from within a database|
+> |[dict_int](https://www.postgresql.org/docs/10/dict-int.html) | 1.0 | text search dictionary template for integers|
+> |[earthdistance](https://www.postgresql.org/docs/10/earthdistance.html) | 1.1 | calculate great-circle distances on the surface of the Earth|
+> |[fuzzystrmatch](https://www.postgresql.org/docs/10/fuzzystrmatch.html) | 1.1 | determine similarities and distance between strings|
+> |[hstore](https://www.postgresql.org/docs/10/hstore.html) | 1.4 | data type for storing sets of (key, value) pairs|
+> |[hypopg](https://hypopg.readthedocs.io/en/latest/) | 1.1.1 | Hypothetical indexes for PostgreSQL|
+> |[intarray](https://www.postgresql.org/docs/10/intarray.html) | 1.2 | functions, operators, and index support for 1-D arrays of integers|
+> |[isn](https://www.postgresql.org/docs/10/isn.html) | 1.1 | data types for international product numbering standards|
+> |[ltree](https://www.postgresql.org/docs/10/ltree.html) | 1.1 | data type for hierarchical tree-like structures|
+> |[orafce](https://github.com/orafce/orafce) | 3.7 | Functions and operators that emulate a subset of functions and packages from commercial RDBMS|
+> |[pgaudit](https://www.pgaudit.org/) | 1.2 | provides auditing functionality|
+> |[pgcrypto](https://www.postgresql.org/docs/10/pgcrypto.html) | 1.3 | cryptographic functions|
+> |[pgrouting](https://pgrouting.org/) | 2.5.2 | pgRouting Extension|
+> |[pgrowlocks](https://www.postgresql.org/docs/10/pgrowlocks.html) | 1.2 | show row-level locking information|
+> |[pgstattuple](https://www.postgresql.org/docs/10/pgstattuple.html) | 1.5 | show tuple-level statistics|
+> |[pg_buffercache](https://www.postgresql.org/docs/10/pgbuffercache.html) | 1.3 | examine the shared buffer cache|
+> |[pg_partman](https://github.com/pgpartman/pg_partman) | 2.6.3 | Extension to manage partitioned tables by time or ID|
+> |[pg_prewarm](https://www.postgresql.org/docs/10/pgprewarm.html) | 1.1 | prewarm relation data|
+> |[pg_stat_statements](https://www.postgresql.org/docs/10/pgstatstatements.html) | 1.6 | track execution statistics of all SQL statements executed|
+> |[pg_trgm](https://www.postgresql.org/docs/10/pgtrgm.html) | 1.3 | text similarity measurement and index searching based on trigrams|
+> |[plpgsql](https://www.postgresql.org/docs/10/plpgsql.html) | 1.0 | PL/pgSQL procedural language|
+> |[plv8](https://plv8.github.io/) | 2.1.0 | PL/JavaScript (v8) trusted procedural language|
+> |[postgis](https://www.postgis.net/) | 2.4.3 | PostGIS geometry, geography, and raster spatial types and functions|
+> |[postgis_sfcgal](https://www.postgis.net/) | 2.4.3 | PostGIS SFCGAL functions|
+> |[postgis_tiger_geocoder](https://www.postgis.net/) | 2.4.3 | PostGIS tiger geocoder and reverse geocoder|
+> |[postgis_topology](https://postgis.net/docs/Topology.html) | 2.4.3 | PostGIS topology spatial types and functions|
+> |[postgres_fdw](https://www.postgresql.org/docs/10/postgres-fdw.html) | 1.0 | foreign-data wrapper for remote PostgreSQL servers|
+> |[tablefunc](https://www.postgresql.org/docs/10/tablefunc.html) | 1.0 | functions that manipulate whole tables, including crosstab|
+> |[timescaledb](https://docs.timescale.com/timescaledb/latest/) | 1.7.4 | Enables scalable inserts and complex queries for time-series data|
+> |[unaccent](https://www.postgresql.org/docs/10/unaccent.html) | 1.1 | text search dictionary that removes accents|
+> |[uuid-ossp](https://www.postgresql.org/docs/10/uuid-ossp.html) | 1.1 | generate universally unique identifiers (UUIDs)|
+
+## Postgres 9.6 extensions
+
+The following extensions are available in Azure Database for PostgreSQL servers which have Postgres version 9.6.
+
+> [!div class="mx-tableFixed"]
+> | **Extension**| **Extension version** | **Description** |
+> ||||
+> |[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.3.2 | Used to parse an address into constituent elements. |
+> |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.3.2 | Address Standardizer US dataset example|
+> |[btree_gin](https://www.postgresql.org/docs/9.6/btree-gin.html) | 1.0 | support for indexing common datatypes in GIN|
+> |[btree_gist](https://www.postgresql.org/docs/9.6/btree-gist.html) | 1.2 | support for indexing common datatypes in GiST|
+> |[chkpass](https://www.postgresql.org/docs/9.6/chkpass.html) | 1.0 | data type for auto-encrypted passwords|
+> |[citext](https://www.postgresql.org/docs/9.6/citext.html) | 1.3 | data type for case-insensitive character strings|
+> |[cube](https://www.postgresql.org/docs/9.6/cube.html) | 1.2 | data type for multidimensional cubes|
+> |[dblink](https://www.postgresql.org/docs/9.6/dblink.html) | 1.2 | connect to other PostgreSQL databases from within a database|
+> |[dict_int](https://www.postgresql.org/docs/9.6/dict-int.html) | 1.0 | text search dictionary template for integers|
+> |[earthdistance](https://www.postgresql.org/docs/9.6/earthdistance.html) | 1.1 | calculate great-circle distances on the surface of the Earth|
+> |[fuzzystrmatch](https://www.postgresql.org/docs/9.6/fuzzystrmatch.html) | 1.1 | determine similarities and distance between strings|
+> |[hstore](https://www.postgresql.org/docs/9.6/hstore.html) | 1.4 | data type for storing sets of (key, value) pairs|
+> |[hypopg](https://hypopg.readthedocs.io/en/latest/) | 1.1.1 | Hypothetical indexes for PostgreSQL|
+> |[intarray](https://www.postgresql.org/docs/9.6/intarray.html) | 1.2 | functions, operators, and index support for 1-D arrays of integers|
+> |[isn](https://www.postgresql.org/docs/9.6/isn.html) | 1.1 | data types for international product numbering standards|
+> |[ltree](https://www.postgresql.org/docs/9.6/ltree.html) | 1.1 | data type for hierarchical tree-like structures|
+> |[orafce](https://github.com/orafce/orafce) | 3.7 | Functions and operators that emulate a subset of functions and packages from commercial RDBMS|
+> |[pgaudit](https://www.pgaudit.org/) | 1.1.2 | provides auditing functionality|
+> |[pgcrypto](https://www.postgresql.org/docs/9.6/pgcrypto.html) | 1.3 | cryptographic functions|
+> |[pgrouting](https://pgrouting.org/) | 2.3.2 | pgRouting Extension|
+> |[pgrowlocks](https://www.postgresql.org/docs/9.6/pgrowlocks.html) | 1.2 | show row-level locking information|
+> |[pgstattuple](https://www.postgresql.org/docs/9.6/pgstattuple.html) | 1.4 | show tuple-level statistics|
+> |[pg_buffercache](https://www.postgresql.org/docs/9.6/pgbuffercache.html) | 1.2 | examine the shared buffer cache|
+> |[pg_partman](https://github.com/pgpartman/pg_partman) | 2.6.3 | Extension to manage partitioned tables by time or ID|
+> |[pg_prewarm](https://www.postgresql.org/docs/9.6/pgprewarm.html) | 1.1 | prewarm relation data|
+> |[pg_stat_statements](https://www.postgresql.org/docs/9.6/pgstatstatements.html) | 1.4 | track execution statistics of all SQL statements executed|
+> |[pg_trgm](https://www.postgresql.org/docs/9.6/pgtrgm.html) | 1.3 | text similarity measurement and index searching based on trigrams|
+> |[plpgsql](https://www.postgresql.org/docs/9.6/plpgsql.html) | 1.0 | PL/pgSQL procedural language|
+> |[plv8](https://plv8.github.io/) | 2.1.0 | PL/JavaScript (v8) trusted procedural language|
+> |[postgis](https://www.postgis.net/) | 2.3.2 | PostGIS geometry, geography, and raster spatial types and functions|
+> |[postgis_sfcgal](https://www.postgis.net/) | 2.3.2 | PostGIS SFCGAL functions|
+> |[postgis_tiger_geocoder](https://www.postgis.net/) | 2.3.2 | PostGIS tiger geocoder and reverse geocoder|
+> |[postgis_topology](https://postgis.net/docs/Topology.html) | 2.3.2 | PostGIS topology spatial types and functions|
+> |[postgres_fdw](https://www.postgresql.org/docs/9.6/postgres-fdw.html) | 1.0 | foreign-data wrapper for remote PostgreSQL servers|
+> |[tablefunc](https://www.postgresql.org/docs/9.6/tablefunc.html) | 1.0 | functions that manipulate whole tables, including crosstab|
+> |[timescaledb](https://docs.timescale.com/timescaledb/latest/) | 1.7.4 | Enables scalable inserts and complex queries for time-series data|
+> |[unaccent](https://www.postgresql.org/docs/9.6/unaccent.html) | 1.1 | text search dictionary that removes accents|
+> |[uuid-ossp](https://www.postgresql.org/docs/9.6/uuid-ossp.html) | 1.1 | generate universally unique identifiers (UUIDs)|
+
+## Postgres 9.5 extensions
+
+>[!NOTE]
+> PostgreSQL version 9.5 has been retired.
+
+The following extensions are available in Azure Database for PostgreSQL servers which have Postgres version 9.5.
+
+> [!div class="mx-tableFixed"]
+> | **Extension**| **Extension version** | **Description** |
+> ||||
+> |[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.3.0 | Used to parse an address into constituent elements. |
+> |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.3.0 | Address Standardizer US dataset example|
+> |[btree_gin](https://www.postgresql.org/docs/9.5/btree-gin.html) | 1.0 | support for indexing common datatypes in GIN|
+> |[btree_gist](https://www.postgresql.org/docs/9.5/btree-gist.html) | 1.1 | support for indexing common datatypes in GiST|
+> |[chkpass](https://www.postgresql.org/docs/9.5/chkpass.html) | 1.0 | data type for auto-encrypted passwords|
+> |[citext](https://www.postgresql.org/docs/9.5/citext.html) | 1.1 | data type for case-insensitive character strings|
+> |[cube](https://www.postgresql.org/docs/9.5/cube.html) | 1.0 | data type for multidimensional cubes|
+> |[dblink](https://www.postgresql.org/docs/9.5/dblink.html) | 1.1 | connect to other PostgreSQL databases from within a database|
+> |[dict_int](https://www.postgresql.org/docs/9.5/dict-int.html) | 1.0 | text search dictionary template for integers|
+> |[earthdistance](https://www.postgresql.org/docs/9.5/earthdistance.html) | 1.0 | calculate great-circle distances on the surface of the Earth|
+> |[fuzzystrmatch](https://www.postgresql.org/docs/9.5/fuzzystrmatch.html) | 1.0 | determine similarities and distance between strings|
+> |[hstore](https://www.postgresql.org/docs/9.5/hstore.html) | 1.3 | data type for storing sets of (key, value) pairs|
+> |[hypopg](https://hypopg.readthedocs.io/en/latest/) | 1.1.1 | Hypothetical indexes for PostgreSQL|
+> |[intarray](https://www.postgresql.org/docs/9.5/intarray.html) | 1.0 | functions, operators, and index support for 1-D arrays of integers|
+> |[isn](https://www.postgresql.org/docs/9.5/isn.html) | 1.0 | data types for international product numbering standards|
+> |[ltree](https://www.postgresql.org/docs/9.5/ltree.html) | 1.0 | data type for hierarchical tree-like structures|
+> |[orafce](https://github.com/orafce/orafce) | 3.7 | Functions and operators that emulate a subset of functions and packages from commercial RDBMS|
+> |[pgaudit](https://www.pgaudit.org/) | 1.0.7 | provides auditing functionality|
+> |[pgcrypto](https://www.postgresql.org/docs/9.5/pgcrypto.html) | 1.2 | cryptographic functions|
+> |[pgrouting](https://pgrouting.org/) | 2.3.0 | pgRouting Extension|
+> |[pgrowlocks](https://www.postgresql.org/docs/9.5/pgrowlocks.html) | 1.1 | show row-level locking information|
+> |[pgstattuple](https://www.postgresql.org/docs/9.5/pgstattuple.html) | 1.3 | show tuple-level statistics|
+> |[pg_buffercache](https://www.postgresql.org/docs/9.5/pgbuffercache.html) | 1.1 | examine the shared buffer cache|
+> |[pg_partman](https://github.com/pgpartman/pg_partman) | 2.6.3 | Extension to manage partitioned tables by time or ID|
+> |[pg_prewarm](https://www.postgresql.org/docs/9.5/pgprewarm.html) | 1.0 | prewarm relation data|
+> |[pg_stat_statements](https://www.postgresql.org/docs/9.5/pgstatstatements.html) | 1.3 | track execution statistics of all SQL statements executed|
+> |[pg_trgm](https://www.postgresql.org/docs/9.5/pgtrgm.html) | 1.1 | text similarity measurement and index searching based on trigrams|
+> |[plpgsql](https://www.postgresql.org/docs/9.5/plpgsql.html) | 1.0 | PL/pgSQL procedural language|
+> |[postgis](https://www.postgis.net/) | 2.3.0 | PostGIS geometry, geography, and raster spatial types and functions|
+> |[postgis_sfcgal](https://www.postgis.net/) | 2.3.0 | PostGIS SFCGAL functions|
+> |[postgis_tiger_geocoder](https://www.postgis.net/) | 2.3.0 | PostGIS tiger geocoder and reverse geocoder|
+> |[postgis_topology](https://postgis.net/docs/Topology.html) | 2.3.0 | PostGIS topology spatial types and functions|
+> |[postgres_fdw](https://www.postgresql.org/docs/9.5/postgres-fdw.html) | 1.0 | foreign-data wrapper for remote PostgreSQL servers|
+> |[tablefunc](https://www.postgresql.org/docs/9.5/tablefunc.html) | 1.0 | functions that manipulate whole tables, including crosstab|
+> |[unaccent](https://www.postgresql.org/docs/9.5/unaccent.html) | 1.0 | text search dictionary that removes accents|
+> |[uuid-ossp](https://www.postgresql.org/docs/9.5/uuid-ossp.html) | 1.0 | generate universally unique identifiers (UUIDs)|
++
+## pg_stat_statements
+The [pg_stat_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) is preloaded on every Azure Database for PostgreSQL server to provide you with a means of tracking execution statistics of SQL statements.
+The setting `pg_stat_statements.track`, which controls what statements are counted by the extension, defaults to `top`, meaning all statements issued directly by clients are tracked. The two other tracking levels are `none` and `all`. This setting is configurable as a server parameter through the [Azure portal](./how-to-configure-server-parameters-using-portal.md) or the [Azure CLI](./how-to-configure-server-parameters-using-cli.md).
+
+There is a tradeoff between the query execution information pg_stat_statements provides and the impact on server performance, as it logs each SQL statement. If you are not actively using the pg_stat_statements extension, we recommend that you set `pg_stat_statements.track` to `none`. Note that some third-party monitoring services may rely on pg_stat_statements to deliver query performance insights, so confirm whether that applies to you before changing the setting.
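+
+For example, the following query lists the five statements that consumed the most cumulative execution time (a minimal sketch; on the PostgreSQL versions Single Server supports, the timing columns are named `total_time` and `mean_time`):
+
+```sql
+-- Top 5 statements by cumulative execution time
+SELECT query, calls, total_time, mean_time
+FROM pg_stat_statements
+ORDER BY total_time DESC
+LIMIT 5;
+```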
+
+## dblink and postgres_fdw
+[dblink](https://www.postgresql.org/docs/current/contrib-dblink-function.html) and [postgres_fdw](https://www.postgresql.org/docs/current/postgres-fdw.html) allow you to connect from one PostgreSQL server to another, or to another database in the same server. The receiving server needs to allow connections from the sending server through its firewall. When you use these extensions to connect between Azure Database for PostgreSQL servers, you can do this by setting **Allow access to Azure services** to ON. You also need this setting if you want to use the extensions to loop back to the same server. The **Allow access to Azure services** setting can be found in the Azure portal page for the Postgres server, under **Connection security**. Turning **Allow access to Azure services** ON puts all Azure IPs on the allow list.
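+
+For illustration, here is a minimal dblink sketch; the host name, credentials, and table definition are placeholders you would replace with your own:
+
+```sql
+CREATE EXTENSION IF NOT EXISTS dblink;
+
+-- Query a table on a remote Azure Database for PostgreSQL server
+SELECT *
+FROM dblink(
+       'host=myserver.postgres.database.azure.com user=myadmin@myserver password=mypassword dbname=postgres sslmode=require',
+       'SELECT id, item FROM my_table')
+     AS remote_table(id varchar(40), item varchar(40));
+```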
+
+> [!NOTE]
+> Currently, outbound connections from Azure Database for PostgreSQL via foreign data wrapper extensions such as postgres_fdw are not supported, except for connections to other Azure Database for PostgreSQL servers in the same Azure region.
+
+## uuid
+If you are planning to use `uuid_generate_v4()` from the [uuid-ossp extension](https://www.postgresql.org/docs/current/uuid-ossp.html), consider comparing with `gen_random_uuid()` from the [pgcrypto extension](https://www.postgresql.org/docs/current/pgcrypto.html) for performance benefits.
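+
+A quick way to compare the two, assuming both extensions are allow-listed on your server:
+
+```sql
+CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
+CREATE EXTENSION IF NOT EXISTS pgcrypto;
+
+SELECT uuid_generate_v4(); -- uuid-ossp
+SELECT gen_random_uuid();  -- pgcrypto, often the faster option
+```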
+
+## pgAudit
+The [pgAudit extension](https://github.com/pgaudit/pgaudit/blob/master/README.md) provides session and object audit logging. To learn how to use this extension in Azure Database for PostgreSQL, visit the [auditing concepts article](concepts-audit.md).
+
+## pg_prewarm
+The pg_prewarm extension loads relational data into cache. Prewarming your caches means that your queries have better response times on their first run after a restart. In Postgres 10 and below, prewarming is done manually using the [prewarm function](https://www.postgresql.org/docs/10/pgprewarm.html).
+
+In Postgres 11 and above, you can configure prewarming to happen [automatically](https://www.postgresql.org/docs/current/pgprewarm.html). You need to include pg_prewarm in your `shared_preload_libraries` parameter's list and restart the server to apply the change. Parameters can be set from the [Azure portal](how-to-configure-server-parameters-using-portal.md), [CLI](how-to-configure-server-parameters-using-cli.md), REST API, or ARM template.
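+
+For example, to warm the cache for a single table manually (a sketch; `my_table` is a placeholder for one of your own tables):
+
+```sql
+CREATE EXTENSION IF NOT EXISTS pg_prewarm;
+
+-- Load the table's blocks into the shared buffer cache
+SELECT pg_prewarm('my_table');
+```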
+
+## TimescaleDB
+TimescaleDB is a time-series database that is packaged as an extension for PostgreSQL. TimescaleDB provides time-oriented analytical functions and optimizations, and scales Postgres for time-series workloads.
+
+[Learn more about TimescaleDB](https://docs.timescale.com/timescaledb/latest/), a registered trademark of [Timescale, Inc.](https://www.timescale.com/). Azure Database for PostgreSQL provides the TimescaleDB [Apache-2 edition](https://www.timescale.com/legal/licenses).
+
+### Installing TimescaleDB
+To install TimescaleDB, you need to include it in the server's shared preload libraries. A change to Postgres's `shared_preload_libraries` parameter requires a **server restart** to take effect. You can change parameters using the [Azure portal](how-to-configure-server-parameters-using-portal.md) or the [Azure CLI](how-to-configure-server-parameters-using-cli.md).
+
+Using the [Azure portal](https://portal.azure.com/):
+
+1. Select your Azure Database for PostgreSQL server.
+
+2. On the sidebar, select **Server Parameters**.
+
+3. Search for the `shared_preload_libraries` parameter.
+
+4. Select **TimescaleDB**.
+
+5. Select **Save** to preserve your changes. You get a notification once the change is saved.
+
+6. After the notification, **restart** the server to apply these changes. To learn how to restart a server, see [Restart an Azure Database for PostgreSQL server](how-to-restart-server-portal.md).
++
+You can now enable TimescaleDB in your Postgres database. Connect to the database and issue the following command:
+```sql
+CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
+```
+> [!TIP]
+> If you see an error, confirm that you [restarted your server](how-to-restart-server-portal.md) after saving shared_preload_libraries.
+
+You can now create a TimescaleDB hypertable [from scratch](https://docs.timescale.com/getting-started/creating-hypertables) or migrate [existing time-series data in PostgreSQL](https://docs.timescale.com/getting-started/migrating-data).
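+
+As a minimal sketch, creating a hypertable looks like this (the table and column names are placeholders):
+
+```sql
+CREATE TABLE conditions (
+    time        TIMESTAMPTZ NOT NULL,
+    location    TEXT        NOT NULL,
+    temperature DOUBLE PRECISION
+);
+
+-- Convert the plain table into a hypertable partitioned on the time column
+SELECT create_hypertable('conditions', 'time');
+```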
+
+### Restoring a Timescale database using pg_dump and pg_restore
+To restore a Timescale database using pg_dump and pg_restore, you need to run two helper procedures in the destination database: `timescaledb_pre_restore()` and `timescaledb_post_restore()`.
+
+First prepare the destination database:
+
+```SQL
+--create the new database where you'll perform the restore
+CREATE DATABASE tutorial;
+\c tutorial --connect to the database
+CREATE EXTENSION timescaledb;
+
+SELECT timescaledb_pre_restore();
+```
+
+Now you can run pg_dump on the original database and then do pg_restore. After the restore, be sure to run the following command in the restored database:
+
+```SQL
+SELECT timescaledb_post_restore();
+```
+For more details on this restore method with a Timescale-enabled database, see the [Timescale documentation](https://docs.timescale.com/timescaledb/latest/how-to-guides/backup-and-restore/pg-dump-and-restore/#restore-your-entire-database-from-backup).
++
+### Restoring a Timescale database using timescaledb-backup
+
+While running the `SELECT timescaledb_post_restore()` procedure listed above, you may get a permission denied error when updating the timescaledb.restoring flag. This is due to the limited ALTER DATABASE permission in cloud PaaS database services. In this case, you can use the `timescaledb-backup` tool as an alternative way to back up and restore a Timescale database. Timescaledb-backup is a program that makes dumping and restoring a TimescaleDB database simpler, less error-prone, and more performant.
+To do so, follow these steps:
+1. Install the tools as detailed [here](https://github.com/timescale/timescaledb-backup#installing-timescaledb-backup)
+2. Create the target Azure Database for PostgreSQL server and database
+3. Enable the Timescale extension as shown above
+4. Grant the azure_pg_admin [role](https://www.postgresql.org/docs/11/database-roles.html) to the user that will be used by [ts-restore](https://github.com/timescale/timescaledb-backup#using-ts-restore)
+5. Run [ts-restore](https://github.com/timescale/timescaledb-backup#using-ts-restore) to restore the database
+
+More details on these utilities can be found [here](https://github.com/timescale/timescaledb-backup).
+> [!NOTE]
+> When using `timescaledb-backup` utilities to restore to Azure, note that database user names for non-flexible Azure Database for PostgreSQL must use the `<user@db-name>` format, so you need to replace the `@` character with the `%40` encoding.
+
+## Next steps
+If you don't see an extension that you'd like to use, let us know. Vote for existing requests or create new feedback requests in our [feedback forum](https://feedback.azure.com/d365community/forum/c5e32b97-ee24-ec11-b6e6-000d3a4f0da0).
+
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-firewall-rules.md
+
+ Title: Firewall rules - Azure Database for PostgreSQL - Single Server
+description: This article describes how to use firewall rules to connect to Azure Database for PostgreSQL - Single Server.
+Last updated: 07/17/2020
+# Firewall rules in Azure Database for PostgreSQL - Single Server
+Azure Database for PostgreSQL server is secure by default, preventing all access to your database server until you specify which IP hosts are allowed to access it. The firewall grants access to the server based on the originating IP address of each request.
+To configure your firewall, you create firewall rules that specify ranges of acceptable IP addresses. You can create firewall rules at the server level.
+
+**Firewall rules:** These rules enable clients to access your entire Azure Database for PostgreSQL Server, that is, all the databases within the same logical server. Server-level firewall rules can be configured by using the Azure portal or using Azure CLI commands. To create server-level firewall rules, you must be the subscription owner or a subscription contributor.
+
+## Firewall overview
+All access to your Azure Database for PostgreSQL server is blocked by the firewall by default. To access your server from another computer/client or application, you need to specify one or more server-level firewall rules to enable access to your server. Use the firewall rules to specify allowed public IP address ranges. Access to the Azure portal website itself is not impacted by the firewall rules.
+Connection attempts from the internet and Azure must first pass through the firewall before they can reach your PostgreSQL database.
+
+## Connecting from the Internet
+Server-level firewall rules apply to all databases on the same Azure Database for PostgreSQL server.
+If the source IP address of the request is within one of the ranges specified in the server-level firewall rules, the connection is granted; otherwise, it is rejected. For example, if your application connects with the JDBC driver for PostgreSQL, you may encounter the following error when the firewall blocks the connection:
+> java.util.concurrent.ExecutionException: java.lang.RuntimeException:
+> org.postgresql.util.PSQLException: FATAL: no pg\_hba.conf entry for host "123.45.67.890", user "adminuser", database "postgresql", SSL
+
+## Connecting from Azure
+It is recommended that you find the outgoing IP address of any application or service and explicitly allow access to those individual IP addresses or ranges. For example, you can find the outgoing IP address of an Azure App Service or use a public IP tied to a virtual machine or other resource (see below for info on connecting with a virtual machine's private IP over service endpoints).
+
+If a fixed outgoing IP address isn't available for your Azure service, you can consider enabling connections from all Azure datacenter IP addresses. This setting can be enabled from the Azure portal by setting the **Allow access to Azure services** option to **ON** from the **Connection security** pane and selecting **Save**. From the Azure CLI, a firewall rule setting with the starting and ending address equal to 0.0.0.0 does the equivalent. If the connection attempt is rejected by firewall rules, it does not reach the Azure Database for PostgreSQL server.
+
+> [!IMPORTANT]
+> The **Allow access to Azure services** option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
+>
++
+## Connecting from a VNet
+To connect securely to your Azure Database for PostgreSQL server from a VNet, consider using [VNet service endpoints](./concepts-data-access-and-security-vnet.md).
+
+## Programmatically managing firewall rules
+In addition to the Azure portal, firewall rules can be managed programmatically using the Azure CLI.
+See also [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](quickstart-create-server-database-azure-cli.md#configure-a-server-based-firewall-rule).
+
+## Troubleshooting firewall issues
+Consider the following points when access to the Microsoft Azure Database for PostgreSQL Server service does not behave as you expect:
+
+* **Changes to the allow list have not taken effect yet:** There may be as much as a five-minute delay for changes to the Azure Database for PostgreSQL Server firewall configuration to take effect.
+
+* **The login is not authorized or an incorrect password was used:** If a login does not have permissions on the Azure Database for PostgreSQL server or the password used is incorrect, the connection to the Azure Database for PostgreSQL server is denied. Creating a firewall setting only provides clients with an opportunity to attempt connecting to your server; each client must still provide the necessary security credentials.
+
+ For example, using a JDBC client, the following error may appear.
+ > java.util.concurrent.ExecutionException: java.lang.RuntimeException: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "yourusername"
+
+* **Dynamic IP address:** If you have an Internet connection with dynamic IP addressing and you are having trouble getting through the firewall, you could try one of the following solutions:
+
+ * Ask your Internet Service Provider (ISP) for the IP address range assigned to your client computers that access the Azure Database for PostgreSQL Server, and then add the IP address range as a firewall rule.
+
+ * Get static IP addressing instead for your client computers, and then add the static IP address as a firewall rule.
+
+* **Server's IP appears to be public:** Connections to the Azure Database for PostgreSQL server are routed through a publicly accessible Azure gateway. However, the actual server IP is protected by the firewall. For more information, visit the [connectivity architecture article](concepts-connectivity-architecture.md).
+
+* **Cannot connect from Azure resource with allowed IP:** Check whether the **Microsoft.Sql** service endpoint is enabled for the subnet you are connecting from. If **Microsoft.Sql** is enabled, it indicates that you only want to use [VNet service endpoint rules](concepts-data-access-and-security-vnet.md) on that subnet.
+
+ For example, you may see the following error if you are connecting from an Azure VM in a subnet that has **Microsoft.Sql** enabled but has no corresponding VNet rule:
+ `FATAL: Client from Azure Virtual Networks is not allowed to access the server`
+
+* **Firewall rule is not available for IPv6 format:** The firewall rules must be in IPv4 format. If you specify firewall rules in IPv6 format, you will see a validation error.
++
+## Next steps
+* [Create and manage Azure Database for PostgreSQL firewall rules using the Azure portal](how-to-manage-firewall-using-portal.md)
+* [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](quickstart-create-server-database-azure-cli.md#configure-a-server-based-firewall-rule)
+* [VNet service endpoints in Azure Database for PostgreSQL](./concepts-data-access-and-security-vnet.md)
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-high-availability.md
+
+ Title: High availability - Azure Database for PostgreSQL - Single Server
+description: This article provides information on high availability in Azure Database for PostgreSQL - Single Server
+Last updated: 6/15/2020
+# High availability in Azure Database for PostgreSQL – Single Server
+The Azure Database for PostgreSQL – Single Server service provides a guaranteed high level of availability with the financially backed service level agreement (SLA) of [99.99%](https://azure.microsoft.com/support/legal/sla/postgresql) uptime. Azure Database for PostgreSQL provides high availability during planned events such as user-initiated compute scaling operations, and also when unplanned events such as underlying hardware, software, or network failures occur. Azure Database for PostgreSQL can quickly recover from most critical circumstances, ensuring virtually no application downtime when using this service.
+
+Azure Database for PostgreSQL is suitable for running mission critical databases that require high uptime. Built on Azure architecture, the service has inherent high availability, redundancy, and resiliency capabilities to mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components.
+
+## Components in Azure Database for PostgreSQL – Single Server
+
+| **Component** | **Description**|
+| | -- |
+| <b>PostgreSQL Database Server | Azure Database for PostgreSQL provides security, isolation, resource safeguards, and fast restart capability for database servers. These capabilities enable operations such as scaling and database server recovery after an outage to happen in seconds. <br/> Data modifications in the database server typically occur in the context of a database transaction. All database changes are recorded synchronously in the form of write ahead logs (WAL) on Azure Storage – which is attached to the database server. During the database [checkpoint](https://www.postgresql.org/docs/11/sql-checkpoint.html) process, data pages from the database server memory are also flushed to the storage. |
+| <b>Remote Storage | All PostgreSQL physical data files and WAL files are stored on Azure Storage, which is architected to store three copies of data within a region to ensure data redundancy, availability, and reliability. The storage layer is also independent of the database server. It can be detached from a failed database server and reattached to a new database server within a few seconds. Also, Azure Storage continuously monitors for any storage faults. If a block corruption is detected, it is automatically fixed by instantiating a new storage copy. |
+| <b>Gateway | The Gateway acts as a database proxy, routing all client connections to the database server. |
+
+## Planned downtime mitigation
+Azure Database for PostgreSQL is architected to provide high availability during planned downtime operations.
++
+1. Scale up and down PostgreSQL database servers in seconds
+2. Gateway that acts as a proxy to route client connections to the proper database server
+3. Scaling up of storage can be performed without any downtime. Remote storage enables fast detach/re-attach after the failover.
+
+Here are some planned maintenance scenarios:
+
+| **Scenario** | **Description**|
+| | -- |
+| <b>Compute scale up/down | When the user performs a compute scale up/down operation, a new database server is provisioned using the scaled compute configuration. In the old database server, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, and then it is shut down. The storage is then detached from the old database server and attached to the new database server. When the client application retries the connection, or tries to make a new connection, the Gateway directs the connection request to the new database server.|
+| <b>Scaling Up Storage | Scaling up the storage is an online operation and does not interrupt the database server.|
+| <b>New Software Deployment (Azure) | New features rollout or bug fixes automatically happen as part of the service's planned maintenance. For more information, refer to the [documentation](./concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
+| <b>Minor version upgrades | Azure Database for PostgreSQL automatically patches database servers to the minor version determined by Azure. It happens as part of the service's planned maintenance. This incurs a short downtime of a few seconds, and the database server is automatically restarted with the new minor version. For more information, refer to the [documentation](./concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
++
+## Unplanned downtime mitigation
+
+Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in seconds. The remote storage is automatically attached to the new database server. The PostgreSQL engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While unplanned downtime cannot be avoided, Azure Database for PostgreSQL mitigates the downtime by automatically performing recovery operations at both the database server and storage layers without requiring human intervention.
+++
+1. Azure PostgreSQL servers with fast-scaling capabilities.
+2. Gateway that acts as a proxy to route client connections to the proper database server
+3. Azure storage with three copies for reliability, availability, and redundancy.
+4. Remote storage also enables fast detach/re-attach after the server failover.
+
+### Unplanned downtime: failure scenarios and service recovery
+Here are some failure scenarios and how Azure Database for PostgreSQL automatically recovers:
+
+| **Scenario** | **Automatic recovery** |
+| - | - |
+| <B>Database server failure | If the database server is down because of some underlying hardware fault, active connections are dropped, and any inflight transactions are aborted. A new database server is automatically deployed, and the remote data storage is attached to the new database server. After the database recovery is complete, clients can connect to the new database server through the Gateway. <br /> <br /> The recovery time (RTO) is dependent on various factors including the activity at the time of the fault, such as a large transaction, and the amount of recovery to be performed during the database server startup process. <br /> <br /> Applications using the PostgreSQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. When the application retries, the Gateway transparently redirects the connection to the newly created database server. |
+| <B>Storage failure | Applications do not see any impact from any storage-related issues such as a disk failure or a physical block corruption. Because the data is stored in three copies, the data is served by the surviving storage copies. Block corruptions are automatically corrected. If a copy of data is lost, a new copy of the data is automatically created. |
+| <B>Compute failure | Compute failures are rare events. If a compute failure occurs, a new compute container is provisioned and the storage with the data files is mapped to it. The PostgreSQL database engine is then brought online on the new container, and the Gateway service ensures a transparent failover without any need for application changes. Note also that the compute layer has built-in Availability Zone resiliency, and a new compute container is spun up in a different Availability Zone if an AZ-level compute failure occurs. |
+
+Here are some failure scenarios that require user action to recover:
+
+| **Scenario** | **Recovery plan** |
+| - | - |
+| <b> Region failure | Failure of a region is a rare event. However, if you need protection from a region failure, you can configure one or more read replicas in other regions for disaster recovery (DR). (See [this article](./how-to-read-replicas-portal.md) about creating and managing read replicas for details). In the event of a region-level failure, you can manually promote the read replica configured on the other region to be your production database server. |
+| <b> Availability zone failure | Failure of an Availability zone is also a rare event. However, if you need protection from an Availability zone failure, you can configure one or more read replicas or consider using our [Flexible Server](../flexible-server/concepts-high-availability.md) offering, which provides zone-redundant high availability. |
+| <b> Logical/user errors | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](./concepts-backup.md) (PITR), by restoring and recovering the data until the time just before the error had occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [pg_dump](https://www.postgresql.org/docs/11/app-pgdump.html), and then use [pg_restore](https://www.postgresql.org/docs/11/app-pgrestore.html) to restore those tables into your database. |
+++
+## Summary
+
+Azure Database for PostgreSQL provides fast restart capability of database servers, redundant storage, and efficient routing from the Gateway. For additional data protection, you can configure backups to be geo-replicated, and also deploy one or more read replicas in other regions. With inherent high availability capabilities, Azure Database for PostgreSQL protects your databases from most common outages, and offers an industry-leading, financially backed [99.99% uptime SLA](https://azure.microsoft.com/support/legal/sla/postgresql). All these availability and reliability capabilities enable Azure to be the ideal platform to run your mission-critical applications.
+
+## Next steps
+- Learn about [Azure regions](../../availability-zones/az-overview.md)
+- Learn about [handling transient connectivity errors](concepts-connectivity.md)
+- Learn how to [replicate your data with read replicas](how-to-read-replicas-portal.md)
postgresql Concepts Infrastructure Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-infrastructure-double-encryption.md
+
+ Title: Infrastructure double encryption - Azure Database for PostgreSQL
+description: Learn about using Infrastructure double encryption to add a second layer of encryption with service-managed keys.
+Last updated: 6/30/2020
+# Azure Database for PostgreSQL Infrastructure double encryption
+
+Azure Database for PostgreSQL uses storage [encryption of data at rest](concepts-security.md#at-rest) with Microsoft-managed keys. Data, including backups, is encrypted on disk; this encryption is always on and can't be disabled. The encryption uses a FIPS 140-2 validated cryptographic module and an AES 256-bit cipher for Azure Storage encryption.
+
+Infrastructure double encryption adds a second layer of encryption using service-managed keys. It uses a FIPS 140-2 validated cryptographic module, but with a different encryption algorithm. This provides an additional layer of protection for your data at rest. The key used in Infrastructure double encryption is also managed by the Azure Database for PostgreSQL service. Infrastructure double encryption is not enabled by default, since the additional layer of encryption can have a performance impact.
+
+> [!NOTE]
+> This feature is only supported for "General Purpose" and "Memory Optimized" pricing tiers in Azure Database for PostgreSQL.
+
+Infrastructure Layer encryption has the benefit of being implemented at the layer closest to the storage device or network wires. Azure Database for PostgreSQL implements the two layers of encryption using service-managed keys. Although still technically in the service layer, it is very close to hardware that stores the data at rest. You can still optionally enable data encryption at rest using [customer managed key](concepts-data-encryption-postgresql.md) for the provisioned PostgreSQL server.
+
+Implementation at the infrastructure layer also supports a diversity of keys. Infrastructure must be aware of different clusters of machines and networks. As such, different keys are used to minimize the blast radius of infrastructure attacks and a variety of hardware and network failures.
+
+> [!NOTE]
+> Using Infrastructure double encryption will have performance impact on the Azure Database for PostgreSQL server due to the additional encryption process.
+
+## Benefits
+
+Infrastructure double encryption for Azure Database for PostgreSQL provides the following benefits:
+
+1. **Additional diversity of crypto implementation** - The planned move to hardware-based encryption will further diversify the implementations by providing a hardware-based implementation in addition to the software-based implementation.
+2. **Implementation errors** - Two layers of encryption at the infrastructure layer protect against errors in caching or memory management in higher layers that could expose plaintext data. Additionally, the two layers also protect against errors in the implementation of the encryption in general.
+
+The combination of these provides strong protection against common threats and weaknesses used to attack cryptography.
+
+## Supported scenarios with infrastructure double encryption
+
+The encryption capabilities that are provided by Azure Database for PostgreSQL can be used together. Below is a summary of the various scenarios that you can use:
+
+| Scenario | Default encryption | Infrastructure double encryption | Data encryption using customer-managed keys |
+|:|::|:--:|:--:|
+| 1 | *Yes* | *No* | *No* |
+| 2 | *Yes* | *Yes* | *No* |
+| 3 | *Yes* | *No* | *Yes* |
+| 4 | *Yes* | *Yes* | *Yes* |
+| | | | |
+
+> [!Important]
+> - Scenarios 2 and 4 will have a performance impact on the Azure Database for PostgreSQL server due to the additional layer of infrastructure encryption.
+> - Configuring Infrastructure double encryption for Azure Database for PostgreSQL is only allowed during server create. Once the server is provisioned, you cannot change the storage encryption. However, you can still enable Data encryption using customer-managed keys for the server created with/without infrastructure double encryption.
+
+## Limitations
+
+For Azure Database for PostgreSQL, the support for infrastructure double encryption using service-managed key has the following limitations:
+
+* Support for this functionality is limited to **General Purpose** and **Memory Optimized** pricing tiers.
+* This feature is only supported in regions and servers that support storage up to 16 TB. For the list of Azure regions supporting storage up to 16 TB, refer to the [storage documentation](concepts-pricing-tiers.md#storage).
+
+ > [!NOTE]
+ > - All **new** PostgreSQL servers created in the regions listed above also support data encryption with customer-managed keys. In this case, servers created through point-in-time restore (PITR) or read replicas do not qualify as "new".
+ > - To validate if your provisioned server supports up to 16 TB, you can go to the pricing tier blade in the portal and see if the storage slider can be moved up to 16 TB. If you can only move the slider up to 4 TB, your server may not support encryption with customer-managed keys; however, the data is encrypted using service-managed keys at all times. Please reach out to AskAzureDBforPostgreSQL@service.microsoft.com if you have any questions.
+
+## Next steps
+
+Learn how to [set up Infrastructure double encryption for Azure database for PostgreSQL](how-to-double-encryption.md).
postgresql Concepts Known Issues Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-known-issues-limitations.md
+
+ Title: Known issues and limitations for Azure Database for PostgreSQL - Single Server and Flexible Server
+description: Lists the known issues that customers should be aware of.
+Last updated: 11/30/2021
+# Azure Database for PostgreSQL - Known issues and limitations
+
+This page provides a list of known issues in Azure Database for PostgreSQL that could impact your application. It also lists any mitigation and recommendations to work around the issue.
+
+## Intelligent Performance - Query Store
+
+Applicable to Azure Database for PostgreSQL - Single Server.
+
+| Applicable | Cause | Remediation|
+| -- | | - |
+| PostgreSQL 9.6, 10, 11 | Turning on the server parameter `pg_qs.replace_parameter_placeholders` might lead to a server shutdown in some rare scenarios. | In the Azure portal, under the Server Parameters section, set the `pg_qs.replace_parameter_placeholders` parameter to `OFF` and save. |
++
+## Next steps
+- See Query Store [best practices](./concepts-query-store-best-practices.md)
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-limits.md
+
+ Title: Limits - Azure Database for PostgreSQL - Single Server
+description: This article describes limits in Azure Database for PostgreSQL - Single Server, such as number of connection and storage engine options.
+Last updated: 01/28/2020
+# Limits in Azure Database for PostgreSQL - Single Server
+The following sections describe capacity and functional limits in the database service. If you'd like to learn about resource (compute, memory, storage) tiers, see the [pricing tiers](concepts-pricing-tiers.md) article.
++
+## Maximum connections
+The maximum number of connections per pricing tier and vCore count is shown below. The Azure system requires five connections to monitor the Azure Database for PostgreSQL server.
+
+|**Pricing Tier**| **vCore(s)**| **Max Connections** | **Max User Connections** |
+|||||
+|Basic| 1| 55 | 50|
+|Basic| 2| 105 | 100|
+|General Purpose| 2| 150| 145|
+|General Purpose| 4| 250| 245|
+|General Purpose| 8| 480| 475|
+|General Purpose| 16| 950| 945|
+|General Purpose| 32| 1500| 1495|
+|General Purpose| 64| 1900| 1895|
+|Memory Optimized| 2| 300| 295|
+|Memory Optimized| 4| 500| 495|
+|Memory Optimized| 8| 960| 955|
+|Memory Optimized| 16| 1900| 1895|
+|Memory Optimized| 32| 1987| 1982|
+
+When connections exceed the limit, you may receive the following error:
+> FATAL: sorry, too many clients already
+
+> [!IMPORTANT]
+> For the best experience, we recommend that you use a connection pooler like pgBouncer to efficiently manage connections.
+
+A PostgreSQL connection, even idle, can occupy about 10 MB of memory. Also, creating new connections takes time. Most applications request many short-lived connections, which compounds this situation. The result is fewer resources available for your actual workload, leading to decreased performance. A connection pooler that decreases idle connections and reuses existing connections will help avoid this. To learn more, visit our [blog post](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717).
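+
+To see how close you are to the limit, you can count the current connections by state using the standard `pg_stat_activity` view (idle connections still consume memory):
+
+```sql
+-- Current connections grouped by state
+SELECT state, count(*)
+FROM pg_stat_activity
+GROUP BY state;
+```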
+
+## Functional limitations
+### Scale operations
+- Dynamic scaling to and from the Basic pricing tiers is currently not supported.
+- Decreasing server storage size is currently not supported.
+
+### Server version upgrades
+- Automated migration between major database engine versions is currently not supported. If you would like to upgrade to the next major version, perform a [dump and restore](./how-to-migrate-using-dump-and-restore.md) to a server that was created with the new engine version.
+
+> Note that prior to PostgreSQL version 10, the [PostgreSQL versioning policy](https://www.postgresql.org/support/versioning/) considered a _major version_ upgrade to be an increase in the first _or_ second number (for example, 9.5 to 9.6 was considered a _major_ version upgrade).
+> As of version 10, only a change in the first number is considered a major version upgrade (for example, 10.0 to 10.1 is a _minor_ version upgrade, and 10 to 11 is a _major_ version upgrade).
+
+### VNet service endpoints
+- Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
+
+### Restoring a server
+- When using the PITR feature, the new server is created with the same pricing tier configurations as the server it is based on.
+- The new server created during a restore does not have the firewall rules that existed on the original server. Firewall rules need to be set up separately for this new server.
+- Restoring a deleted server is not supported.
+
+### UTF-8 characters on Windows
+- In some scenarios, UTF-8 characters are not supported fully in open source PostgreSQL on Windows, which affects Azure Database for PostgreSQL. Please see the thread on [Bug #15476 in the postgresql-archive](https://www.postgresql.org/message-id/2101.1541220270%40sss.pgh.pa.us) for more information.
+
+### GSS error
+If you see an error related to **GSS**, you are likely using a newer client/driver version which Azure Postgres Single Server does not yet fully support. This error is known to affect [JDBC driver versions 42.2.15 and 42.2.16](https://github.com/pgjdbc/pgjdbc/issues/1868).
+ - We plan to complete the update by the end of November. Consider using a working driver version in the meantime.
+ - Or, consider disabling the GSS request. Use a connection parameter like `gssEncMode=disable`.
+
+### Storage size reduction
+Storage size cannot be reduced. You have to create a new server with the desired storage size, perform a manual [dump and restore](./how-to-migrate-using-dump-and-restore.md), and migrate your database(s) to the new server.
+
+## Next steps
+- Understand [what's available in each pricing tier](concepts-pricing-tiers.md)
+- Learn about [Supported PostgreSQL Database Versions](concepts-supported-versions.md)
+- Review [how to back up and restore a server in Azure Database for PostgreSQL using the Azure portal](how-to-restore-server-portal.md)
postgresql Concepts Logical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-logical.md
+
+ Title: Logical decoding - Azure Database for PostgreSQL - Single Server
+description: Describes logical decoding and wal2json for change data capture in Azure Database for PostgreSQL - Single Server
+Last updated: 12/09/2020
+# Logical decoding
+
+[Logical decoding in PostgreSQL](https://www.postgresql.org/docs/current/logicaldecoding.html) allows you to stream data changes to external consumers. Logical decoding is popularly used for event streaming and change data capture scenarios.
+
+Logical decoding uses an output plugin to convert Postgres's write ahead log (WAL) into a readable format. Azure Database for PostgreSQL provides the output plugins [wal2json](https://github.com/eulerto/wal2json), [test_decoding](https://www.postgresql.org/docs/current/test-decoding.html), and pgoutput. pgoutput has been available in PostgreSQL since version 10.
+
+For an overview of how Postgres logical decoding works, [visit our blog](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/change-data-capture-in-postgres-how-to-use-logical-decoding-and/ba-p/1396421).
+
+> [!NOTE]
+> Logical replication using PostgreSQL publication/subscription is not supported with Azure Database for PostgreSQL - Single Server.
++
+## Set up your server
+Logical decoding and [read replicas](concepts-read-replicas.md) both depend on the Postgres write ahead log (WAL) for information. These two features need different levels of logging from Postgres. Logical decoding needs a higher level of logging than read replicas.
+
+To configure the right level of logging, use the Azure replication support parameter. Azure replication support has three setting options:
+
+* **Off** - Puts the least information in the WAL. This setting is not available on most Azure Database for PostgreSQL servers.
+* **Replica** - More verbose than **Off**. This is the minimum level of logging needed for [read replicas](concepts-read-replicas.md) to work. This setting is the default on most servers.
+* **Logical** - More verbose than **Replica**. This is the minimum level of logging for logical decoding to work. Read replicas also work at this setting.
++
+### Using Azure CLI
+
+1. Set azure.replication_support to `logical`.
+ ```azurecli-interactive
+ az postgres server configuration set --resource-group mygroup --server-name myserver --name azure.replication_support --value logical
+ ```
+
+2. Restart the server to apply the change.
+ ```azurecli-interactive
+ az postgres server restart --resource-group mygroup --name myserver
+ ```
+3. If you are running Postgres 9.5 or 9.6, and use public network access, add the firewall rule to include the public IP address of the client from where you will run the logical replication. The firewall rule name must include **_replrule**. For example, *test_replrule*. To create a new firewall rule on the server, run the [az postgres server firewall-rule create](/cli/azure/postgres/server/firewall-rule) command.
+
+### Using Azure portal
+
+1. Set Azure replication support to **logical**. Select **Save**.
+
+ :::image type="content" source="./media/concepts-logical/replication-support.png" alt-text="Azure Database for PostgreSQL - Replication - Azure replication support":::
+
+2. Restart the server to apply the change by selecting **Yes**.
+
+ :::image type="content" source="./media/concepts-logical/confirm-restart.png" alt-text="Azure Database for PostgreSQL - Replication - Confirm restart":::
+
+3. If you are running Postgres 9.5 or 9.6, and use public network access, add the firewall rule to include the public IP address of the client from where you will run the logical replication. The firewall rule name must include **_replrule**. For example, *test_replrule*. Then click **Save**.
+
+ :::image type="content" source="./media/concepts-logical/client-replrule-firewall.png" alt-text="Azure Database for PostgreSQL - Replication - Add firewall rule":::
+
+## Start logical decoding
+
+Logical decoding can be consumed via streaming protocol or SQL interface. Both methods use [replication slots](https://www.postgresql.org/docs/current/logicaldecoding-explanation.html#LOGICALDECODING-REPLICATION-SLOTS). A slot represents a stream of changes from a single database.
+
+Using a replication slot requires Postgres's replication privileges. At this time, the replication privilege is only available for the server's admin user.
+
+### Streaming protocol
+Consuming changes using the streaming protocol is often preferable. You can create your own consumer / connector, or use a tool like [Debezium](https://debezium.io/).
+
+Visit the wal2json documentation for [an example using the streaming protocol with pg_recvlogical](https://github.com/eulerto/wal2json#pg_recvlogical).
++
+### SQL interface
+In the example below, we use the SQL interface with the wal2json plugin.
+
+1. Create a slot.
+ ```SQL
+ SELECT * FROM pg_create_logical_replication_slot('test_slot', 'wal2json');
+ ```
+
+2. Issue SQL commands. For example:
+ ```SQL
+ CREATE TABLE a_table (
+ id varchar(40) NOT NULL,
+ item varchar(40),
+ PRIMARY KEY (id)
+ );
+
+ INSERT INTO a_table (id, item) VALUES ('id1', 'item1');
+ DELETE FROM a_table WHERE id='id1';
+ ```
+
+3. Consume the changes.
+ ```SQL
+ SELECT data FROM pg_logical_slot_get_changes('test_slot', NULL, NULL, 'pretty-print', '1');
+ ```
+
+ The output will look like:
+ ```
+ {
+ "change": [
+ ]
+ }
+ {
+ "change": [
+ {
+ "kind": "insert",
+ "schema": "public",
+ "table": "a_table",
+ "columnnames": ["id", "item"],
+ "columntypes": ["character varying(40)", "character varying(40)"],
+ "columnvalues": ["id1", "item1"]
+ }
+ ]
+ }
+ {
+ "change": [
+ {
+ "kind": "delete",
+ "schema": "public",
+ "table": "a_table",
+ "oldkeys": {
+ "keynames": ["id"],
+ "keytypes": ["character varying(40)"],
+ "keyvalues": ["id1"]
+ }
+ }
+ ]
+ }
+ ```
+
+4. Drop the slot once you are done using it.
+ ```SQL
+ SELECT pg_drop_replication_slot('test_slot');
+ ```
++
+## Monitoring slots
+
+You must monitor logical decoding. Any unused replication slot must be dropped. Slots hold on to Postgres WAL logs and relevant system catalogs until changes have been read by a consumer. If your consumer fails or has not been properly configured, the unconsumed logs will pile up and fill your storage. Also, unconsumed logs increase the risk of transaction ID wraparound. Both situations can cause the server to become unavailable. Therefore, it is critical that logical replication slots are consumed continuously. If a logical replication slot is no longer used, drop it immediately.
+
+The 'active' column in the pg_replication_slots view will indicate whether there is a consumer connected to a slot.
+```SQL
+SELECT * FROM pg_replication_slots;
+```
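+
+To estimate how much WAL each slot is retaining, you can compare the current WAL position with the slot's restart position (a sketch assuming PostgreSQL 10 or later; on 9.x the equivalent functions are pg_current_xlog_location and pg_xlog_location_diff):
+
+```sql
+-- Bytes of WAL retained by each replication slot
+SELECT slot_name,
+       active,
+       pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) AS retained_bytes
+FROM pg_replication_slots;
+```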
+
+[Set alerts](how-to-alert-on-metric.md) on *Storage used* and *Max lag across replicas* metrics to notify you when the values increase past normal thresholds.
+
+> [!IMPORTANT]
+> You must drop unused replication slots. Failing to do so can lead to server unavailability.
+
+## How to drop a slot
+If you are not actively consuming a replication slot, you should drop it.
+
+To drop a replication slot called `test_slot` using SQL:
+```SQL
+SELECT pg_drop_replication_slot('test_slot');
+```
+
+> [!IMPORTANT]
+> If you stop using logical decoding, change azure.replication_support back to `replica` or `off`. The WAL details retained by `logical` are more verbose, and should be disabled when logical decoding is not in use.
+
+
+## Next steps
+
+* Visit the Postgres documentation to [learn more about logical decoding](https://www.postgresql.org/docs/current/logicaldecoding-explanation.html).
+* Reach out to [our team](mailto:AskAzureDBforPostgreSQL@service.microsoft.com) if you have questions about logical decoding.
+* Learn more about [read replicas](concepts-read-replicas.md).
+
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-monitoring.md
+
+ Title: Monitor and tune - Azure Database for PostgreSQL - Single Server
+description: This article describes monitoring and tuning features in Azure Database for PostgreSQL - Single Server.
+Last updated: 10/21/2020
+# Monitor and tune Azure Database for PostgreSQL - Single Server
+Monitoring data about your servers helps you troubleshoot and optimize for your workload. Azure Database for PostgreSQL provides various monitoring options to provide insight into the behavior of your server.
+
+## Metrics
+Azure Database for PostgreSQL provides various metrics that give insight into the behavior of the resources supporting the PostgreSQL server. Each metric is emitted at a one-minute frequency, and has up to [93 days of history](../../azure-monitor/essentials/data-platform-metrics.md#retention-of-metrics). You can configure alerts on the metrics. For step by step guidance, see [How to set up alerts](how-to-alert-on-metric.md). Other tasks include setting up automated actions, performing advanced analytics, and archiving history. For more information, see the [Azure Metrics Overview](../../azure-monitor/data-platform.md).
+
+### List of metrics
+These metrics are available for Azure Database for PostgreSQL:
+
+|Metric|Metric Display Name|Unit|Description|
+|||||
+|cpu_percent|CPU percent|Percent|The percentage of CPU in use.|
+|memory_percent|Memory percent|Percent|The percentage of memory in use.|
+|io_consumption_percent|IO percent|Percent|The percentage of IO in use. (Not applicable for Basic tier servers.)|
+|storage_percent|Storage percentage|Percent|The percentage of storage used out of the server's maximum.|
+|storage_used|Storage used|Bytes|The amount of storage in use. The storage used by the service may include the database files, transaction logs, and the server logs.|
+|storage_limit|Storage limit|Bytes|The maximum storage for this server.|
+|serverlog_storage_percent|Server Log storage percent|Percent|The percentage of server log storage used out of the server's maximum server log storage.|
+|serverlog_storage_usage|Server Log storage used|Bytes|The amount of server log storage in use.|
+|serverlog_storage_limit|Server Log storage limit|Bytes|The maximum server log storage for this server.|
+|active_connections|Active Connections|Count|The number of active connections to the server.|
+|connections_failed|Failed Connections|Count|The number of established connections that failed.|
+|network_bytes_egress|Network Out|Bytes|Network Out across active connections.|
+|network_bytes_ingress|Network In|Bytes|Network In across active connections.|
+|backup_storage_used|Backup Storage Used|Bytes|The amount of backup storage used. This metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained in the [concepts article](concepts-backup.md). For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.|
+|pg_replica_log_delay_in_bytes|Max Lag Across Replicas|Bytes|The lag in bytes between the primary and the most-lagging replica. This metric is available on the primary server only.|
+|pg_replica_log_delay_in_seconds|Replica Lag|Seconds|The time since the last replayed transaction. This metric is available for replica servers only.|
+
+## Server logs
+You can enable logging on your server. These resource logs can be sent to [Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md), Event Hubs, and a Storage Account. To learn more about logging, visit the [server logs](concepts-server-logs.md) page.
+
+## Query Store
+[Query Store](concepts-query-store.md) keeps track of query performance over time including query runtime statistics and wait events. The feature persists query runtime performance information in a system database named **azure_sys** under the query_store schema. You can control the collection and storage of data via various configuration knobs.
+
+## Query Performance Insight
+[Query Performance Insight](concepts-query-performance-insight.md) works in conjunction with Query Store to provide visualizations accessible from the Azure portal. These charts enable you to identify key queries that impact performance. Query Performance Insight is accessible from the **Support + troubleshooting** section of your Azure Database for PostgreSQL server's portal page.
+
+## Performance Recommendations
+The [Performance Recommendations](concepts-performance-recommendations.md) feature identifies opportunities to improve workload performance. Performance Recommendations provides you with recommendations for creating new indexes that have the potential to improve the performance of your workloads. To produce index recommendations, the feature takes into consideration various database characteristics, including its schema and the workload as reported by Query Store. After implementing any performance recommendation, customers should test performance to evaluate the impact of those changes.
+
+## Planned maintenance notification
+
+[Planned maintenance notifications](./concepts-planned-maintenance-notification.md) allow you to receive alerts for upcoming planned maintenance to your Azure Database for PostgreSQL - Single Server. These notifications are integrated with [Service Health's](../../service-health/overview.md) planned maintenance and allow you to view all scheduled maintenance for your subscriptions in one place. It also helps to scale the notification to the right audiences for different resource groups, as you may have different contacts responsible for different resources. You will receive the notification about the upcoming maintenance 72 hours before the event.
+
+Learn more about how to set up notifications in the [planned maintenance notifications](./concepts-planned-maintenance-notification.md) document.
+
+## Next steps
+- See [how to set up alerts](how-to-alert-on-metric.md) for guidance on creating an alert on a metric.
+- For more information on how to access and export metrics using the Azure portal, REST API, or CLI, see the [Azure Metrics Overview](../../azure-monitor/data-platform.md).
+- Read our blog on [best practices for monitoring your server](https://azure.microsoft.com/blog/best-practices-for-alerting-on-metrics-with-azure-database-for-postgresql-monitoring/).
+- Learn more about [planned maintenance notifications](./concepts-planned-maintenance-notification.md) in Azure Database for PostgreSQL - Single Server.
postgresql Concepts Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-performance-recommendations.md
+
+ Title: Performance Recommendations - Azure Database for PostgreSQL - Single Server
+description: This article describes the Performance Recommendation feature in Azure Database for PostgreSQL - Single Server.
+Last updated: 08/21/2019
+# Performance Recommendations in Azure Database for PostgreSQL - Single Server
+
+**Applies to:** Azure Database for PostgreSQL - Single Server versions 9.6, 10, 11
+
+The Performance Recommendations feature analyzes your databases to create customized suggestions for improved performance. To produce the recommendations, the analysis looks at various database characteristics, including the schema. Enable [Query Store](concepts-query-store.md) on your server to fully utilize the Performance Recommendations feature. After implementing any performance recommendation, you should test performance to evaluate the impact of those changes.
+
+## Permissions
+**Owner** or **Contributor** permissions are required to run an analysis using the Performance Recommendations feature.
+
+## Performance recommendations
+The [Performance Recommendations](concepts-performance-recommendations.md) feature analyzes workloads across your server to identify indexes with the potential to improve performance.
+
+Open **Performance Recommendations** from the **Intelligent Performance** section of the menu bar on the Azure portal page for your PostgreSQL server.
+
+Select **Analyze** and choose a database to begin the analysis. Depending on your workload, the analysis may take several minutes to complete. Once the analysis is done, there will be a notification in the portal. Analysis performs a deep examination of your database, so we recommend that you run it during off-peak periods.
+
+The **Recommendations** window will show a list of recommendations if any were found.
+
+Recommendations are not automatically applied. To apply the recommendation, copy the query text and run it from your client of choice. Remember to test and monitor to evaluate the recommendation.
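+
+For example, a Create Index recommendation produces query text similar to the following sketch; the index, table, and column names here are purely illustrative:
+
+```sql
+-- Hypothetical index recommendation; review it, then run it from psql or another client.
+CREATE INDEX idx_orders_customer_id ON public.orders (customer_id);
+```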
+
+## Recommendation types
+
+Currently, two types of recommendations are supported: *Create Index* and *Drop Index*.
+
+### Create Index recommendations
+*Create Index* recommendations suggest new indexes to speed up the most frequently run or time-consuming queries in the workload. This recommendation type requires [Query Store](concepts-query-store.md) to be enabled. Query Store collects query information and provides the detailed query runtime and frequency statistics that the analysis uses to make the recommendation.
+
+### Drop Index recommendations
+Besides detecting missing indexes, Azure Database for PostgreSQL analyzes the performance of existing indexes. If an index is either rarely used or redundant, the analyzer recommends dropping it.
+
+## Considerations
+* Performance Recommendations is not available for [read replicas](concepts-read-replicas.md).
+## Next steps
+- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for PostgreSQL.
+
postgresql Concepts Planned Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-planned-maintenance-notification.md
+
+ Title: Planned maintenance notification - Azure Database for PostgreSQL - Single Server
+description: This article describes the Planned maintenance notification feature in Azure Database for PostgreSQL - Single Server
+Last updated: 2/17/2022
+# Planned maintenance notification in Azure Database for PostgreSQL - Single Server
+
+Learn how to prepare for planned maintenance events on your Azure Database for PostgreSQL.
+
+## What is a planned maintenance?
+
+The Azure Database for PostgreSQL service performs automated patching of the underlying hardware, OS, and database engine. The patch includes new service features, security updates, and software updates. For the PostgreSQL engine, minor version upgrades are automatic and included as part of the patching cycle. No user action or configuration is required for patching. Patches are tested extensively and rolled out using safe deployment practices.
+
+A planned maintenance is a maintenance window when these service updates are deployed to servers in a given Azure region. During planned maintenance, a notification event is created to inform customers when the service update is deployed in the Azure region hosting their servers. The minimum duration between two planned maintenance windows is 30 days. You receive a notification of the next maintenance window 72 hours in advance.
+
+## Planned maintenance - duration and customer impact
+
+A planned maintenance for a given Azure region is typically expected to complete within 15 hours. This window also includes buffer time to execute a rollback plan if necessary. Because Azure Database for PostgreSQL servers run in containers, database server restarts typically take 60-120 seconds to complete, but there is no deterministic way to know when within this 15-hour window your server will be impacted. The entire planned maintenance event, including each server restart, is carefully monitored by the engineering team. The server failover time depends on database recovery, which can cause the database to take longer to come online if there is heavy transactional activity on the server at the time of failover. To avoid longer restart times, avoid long-running transactions (such as bulk loads) during planned maintenance events.
+
+In summary, while the planned maintenance event runs for up to 15 hours, the impact on an individual server generally lasts about 60 seconds, depending on the transactional activity on the server. A notification is sent 72 calendar hours before planned maintenance starts, and another one while maintenance is in progress for a given region.
+
+## How can I get notified of planned maintenance?
+
+You can utilize the planned maintenance notifications feature to receive alerts for an upcoming planned maintenance event. You will receive the notification about the upcoming maintenance 72 calendar hours before the event and another one while maintenance is in-progress for a given region.
+
+### Planned maintenance notification
+
+**Planned maintenance notifications** allow you to receive alerts for upcoming planned maintenance event to your Azure Database for PostgreSQL. These notifications are integrated with [Service Health's](../../service-health/overview.md) planned maintenance and allow you to view all scheduled maintenance for your subscriptions in one place. It also helps to scale the notification to the right audiences for different resource groups, as you may have different contacts responsible for different resources. You will receive the notification about the upcoming maintenance 72 calendar hours before the event.
+
+We make every attempt to provide 72 hours' notice for all **planned maintenance notification** events. However, in cases of critical or security patches, notifications might be sent closer to the event or be omitted.
+
+You can either check the planned maintenance notification on the Azure portal or configure alerts to receive a notification.
+
+### Check planned maintenance notification from Azure portal
+
+1. In the [Azure portal](https://portal.azure.com), select **Service Health**.
+2. Select the **Planned Maintenance** tab.
+3. Select the **Subscription**, **Region**, and **Service** for which you want to check the planned maintenance notification.
+
+### To receive planned maintenance notification
+
+1. In the [portal](https://portal.azure.com), select **Service Health**.
+2. In the **Alerts** section, select **Health alerts**.
+3. Select **+ Add service health alert** and fill in the fields.
+4. Fill out the required fields.
+5. For **Event type**, select **Planned maintenance** or **Select all**.
+6. In **Action groups**, define how you would like to receive the alert (email, trigger a logic app, and so on).
+7. Ensure **Enable rule upon creation** is set to **Yes**.
+8. Select **Create alert rule** to complete the alert.
+
+For detailed steps on how to create **service health alerts**, refer to [Create activity log alerts on service notifications](../../service-health/alerts-activity-log-service-notifications-portal.md).
+
+## Can I cancel or postpone planned maintenance?
+
+Maintenance is needed to keep your server secure, stable, and up-to-date. The planned maintenance event cannot be canceled or postponed. Once the notification is sent to a given Azure region, the patching schedule cannot be changed for any individual server in that region. The patch is rolled out for the entire region at once. Azure Database for PostgreSQL - Single Server is designed for cloud-native applications that don't require granular control or customization of the service. If you need the ability to schedule maintenance for your servers, consider [Flexible Server](../flexible-server/overview.md).
+
+## Are all the Azure regions patched at the same time?
+
+No. Azure regions are patched during region-specific deployment windows, which generally stretch from 5 PM to 8 AM local time the next day in a given Azure region. Geo-paired Azure regions are patched on different days. For high availability and business continuity of database servers, leveraging [cross region read replicas](./concepts-read-replicas.md#cross-region-replication) is recommended.
+
+## Retry logic
+
+A transient error, also known as a transient fault, is an error that will resolve itself. [Transient errors](./concepts-connectivity.md#transient-errors) can occur during maintenance. Most of these events are automatically mitigated by the system in less than 60 seconds. Transient errors should be handled using [retry logic](./concepts-connectivity.md#handling-transient-errors).
+
+## Next steps
+
+- For any questions or suggestions you might have about working with Azure Database for PostgreSQL, send an email to the Azure Database for PostgreSQL Team at AskAzureDBforPostgreSQL@service.microsoft.com
+- See [How to set up alerts](how-to-alert-on-metric.md) for guidance on creating an alert on a metric.
+- [Troubleshoot connection issues to Azure Database for PostgreSQL - Single Server](how-to-troubleshoot-common-connection-issues.md)
+- [Handle transient errors and connect efficiently to Azure Database for PostgreSQL - Single Server](concepts-connectivity.md)
postgresql Concepts Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-pricing-tiers.md
+
+ Title: Pricing tiers - Azure Database for PostgreSQL - Single Server
+description: This article describes the compute and storage options in Azure Database for PostgreSQL - Single Server.
+Last updated: 10/14/2020
+# Pricing tiers in Azure Database for PostgreSQL - Single Server
+
+You can create an Azure Database for PostgreSQL server in one of three different pricing tiers: Basic, General Purpose, and Memory Optimized. The pricing tiers are differentiated by the amount of compute in vCores that can be provisioned, memory per vCore, and the storage technology used to store the data. All resources are provisioned at the PostgreSQL server level. A server can have one or many databases.
+
+| Resource / Tier | **Basic** | **General Purpose** | **Memory Optimized** |
+|:|:-|:--|:|
+| Compute generation | Gen 4, Gen 5 | Gen 4, Gen 5 | Gen 5 |
+| vCores | 1, 2 | 2, 4, 8, 16, 32, 64 |2, 4, 8, 16, 32 |
+| Memory per vCore | 2 GB | 5 GB | 10 GB |
+| Storage size | 5 GB to 1 TB | 5 GB to 16 TB | 5 GB to 16 TB |
+| Database backup retention period | 7 to 35 days | 7 to 35 days | 7 to 35 days |
+
+To choose a pricing tier, use the following table as a starting point.
+
+| Pricing tier | Target workloads |
+|:-|:--|
+| Basic | Workloads that require light compute and I/O performance. Examples include servers used for development or testing or small-scale infrequently used applications. |
+| General Purpose | Most business workloads that require balanced compute and memory with scalable I/O throughput. Examples include servers for hosting web and mobile apps and other enterprise applications.|
+| Memory Optimized | High-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency. Examples include servers for processing real-time data and high-performance transactional or analytical apps.|
+
+After you create a server, the number of vCores, hardware generation, and pricing tier (except to and from Basic) can be changed up or down within seconds. You also can independently adjust the amount of storage up and the backup retention period up or down with no application downtime. You can't change the backup storage type after a server is created. For more information, see the [Scale resources](#scale-resources) section.
+
+## Compute generations and vCores
+
+Compute resources are provided as vCores, which represent the logical CPU of the underlying hardware. China East 1, China North 1, US DoD Central, and US DoD East utilize Gen 4 logical CPUs that are based on Intel E5-2673 v3 (Haswell) 2.4-GHz processors. All other regions utilize Gen 5 logical CPUs that are based on Intel E5-2673 v4 (Broadwell) 2.3-GHz processors.
+
+## Storage
+
+The storage you provision is the amount of storage capacity available to your Azure Database for PostgreSQL server. The storage is used for the database files, temporary files, transaction logs, and the PostgreSQL server logs. The total amount of storage you provision also defines the I/O capacity available to your server.
+
+| Storage attributes | **Basic** | **General Purpose** | **Memory Optimized** |
+|:|:-|:--|:|
+| Storage type | Basic Storage | General Purpose Storage | General Purpose Storage |
+| Storage size | 5 GB to 1 TB | 5 GB to 16 TB | 5 GB to 16 TB |
+| Storage increment size | 1 GB | 1 GB | 1 GB |
+| IOPS | Variable |3 IOPS/GB<br/>Min 100 IOPS<br/>Max 20,000 IOPS | 3 IOPS/GB<br/>Min 100 IOPS<br/>Max 20,000 IOPS |
+
+> [!NOTE]
+> Storage up to 16 TB and 20,000 IOPS is supported in the following regions: Australia East, Australia South East, Brazil South, Canada Central, Canada East, Central US, China East 2, China North 2, East Asia, East US, East US 1, East US 2, France Central, India Central, India South, Japan East, Japan West, Korea Central, Korea South, North Central US, North Europe, South Central US, Southeast Asia, Switzerland North, Switzerland West, US Gov East, US Gov SouthCentral, US Gov SouthWest, UK South, UK West, West Europe, West Central US, West US, and West US 2.
+>
+> All other regions support up to 4 TB of storage and 6,000 IOPS.
+>
+
+You can add additional storage capacity during and after the creation of the server, and allow the system to grow storage automatically based on the storage consumption of your workload.
+
+>[!NOTE]
+> Storage can only be scaled up, not down.
+
+The Basic tier does not provide an IOPS guarantee. In the General Purpose and Memory Optimized pricing tiers, the IOPS scale with the provisioned storage size in a 3:1 ratio.
+
+You can monitor your I/O consumption in the Azure portal or by using Azure CLI commands. The relevant metrics to monitor are [storage limit, storage percentage, storage used, and IO percent](concepts-monitoring.md).
+
+### Reaching the storage limit
+
+Servers with 100 GB or less of provisioned storage are marked read-only if the free storage is less than 512 MB or 5% of the provisioned storage size. Servers with more than 100 GB of provisioned storage are marked read-only when the free storage is less than 5 GB.
+
+For example, if you have provisioned 110 GB of storage, and the actual utilization goes over 105 GB, the server is marked read-only. Alternatively, if you have provisioned 5 GB of storage, the server is marked read-only when the free storage falls below 512 MB.
+
+When the server is set to read-only, all existing sessions are disconnected and uncommitted transactions are rolled back. Any subsequent write operations and transaction commits fail. All subsequent read queries will work uninterrupted.
+
+You can either increase the amount of provisioned storage for your server or start a new session in read-write mode and drop data to reclaim free storage. Running `SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE;` sets the current session to read-write mode. To avoid data corruption, do not perform any write operations while the server is still in read-only status.
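+
+As a minimal sketch, the following session switches to read-write mode and then drops data to reclaim space; the table name is hypothetical:
+
+```sql
+-- Switch this session to read-write mode on a server that has been marked read-only.
+SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE;
+
+-- Reclaim space by dropping data you no longer need (hypothetical table).
+DROP TABLE IF EXISTS staging_events;
+```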
+
+We recommend that you turn on storage auto-grow or set up an alert to notify you when your server storage is approaching the threshold, so you can avoid getting into the read-only state. For more information, see the documentation on [how to set up an alert](how-to-alert-on-metric.md).
+
+### Storage auto-grow
+
+Storage auto-grow prevents your server from running out of storage and becoming read-only. If storage auto-grow is enabled, the storage grows automatically without impacting the workload. For servers with 100 GB or less of provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage drops below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space drops below the greater of 10 GB or 5% of the provisioned storage size. The maximum storage limits specified above apply.
+
+For example, if you have provisioned 1000 GB of storage, and the actual utilization goes over 950 GB, the server storage size is increased to 1050 GB. Alternatively, if you have provisioned 10 GB of storage, the storage size is increased to 15 GB when less than 1 GB of storage is free.
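+
+One way to enable auto-grow on an existing server is the Azure CLI, as in this sketch (the resource group and server name are assumptions):
+
+```azurecli-interactive
+# Enable storage auto-grow on an existing Single Server (names are assumptions).
+az postgres server update --resource-group myresourcegroup --name mydemoserver --auto-grow Enabled
+```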
+
+Remember that storage can only be scaled up, not down.
+
+## Backup storage
+
+Azure Database for PostgreSQL provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any backup storage you use in excess of this amount is charged in GB per month. For example, if you provision a server with 250 GB of storage, you'll have 250 GB of additional storage available for server backups at no charge. Storage for backups in excess of the 250 GB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/postgresql/). To understand the factors that influence backup storage usage, and how to monitor and control backup storage cost, refer to the [backup documentation](concepts-backup.md).
+
+## Scale resources
+
+After you create your server, you can independently change the vCores, the hardware generation, the pricing tier (except to and from Basic), the amount of storage, and the backup retention period. You can't change the backup storage type after a server is created. The number of vCores can be scaled up or down. The backup retention period can be scaled up or down from 7 to 35 days. The storage size can only be increased. Scaling of the resources can be done either through the portal or Azure CLI. For an example of scaling by using Azure CLI, see [Monitor and scale an Azure Database for PostgreSQL server by using Azure CLI](../scripts/sample-scale-server-up-or-down.md).
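+
+As an illustration, the following Azure CLI sketch scales a server to 8 vCores on Gen 5 General Purpose; the resource group, server name, and target SKU are assumptions:
+
+```azurecli-interactive
+# Scale the server's compute by changing its SKU (names and SKU are assumptions).
+az postgres server update --resource-group myresourcegroup --name mydemoserver --sku-name GP_Gen5_8
+```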
+
+> [!NOTE]
+> The storage size can only be increased. You cannot go back to a smaller storage size after the increase.
+
+When you change the number of vCores, the hardware generation, or the pricing tier, a copy of the original server is created with the new compute allocation. After the new server is up and running, connections are switched over to the new server. While the system switches over to the new server, no new connections can be established, and all uncommitted transactions are rolled back. This window varies, but in most cases is less than a minute.
+
+Scaling storage and changing the backup retention period are true online operations. There is no downtime, and your application isn't affected. As IOPS scale with the size of the provisioned storage, you can increase the IOPS available to your server by scaling up storage.
+
+## Pricing
+
+For the most up-to-date pricing information, see the service [pricing page](https://azure.microsoft.com/pricing/details/PostgreSQL/). To see the cost for the configuration you want, the [Azure portal](https://portal.azure.com/#create/Microsoft.PostgreSQLServer) shows the monthly cost on the **Pricing tier** tab based on the options you select. If you don't have an Azure subscription, you can use the Azure pricing calculator to get an estimated price. On the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) website, select **Add items**, expand the **Databases** category, and choose **Azure Database for PostgreSQL** to customize the options.
+
+## Next steps
+
+- Learn how to [create a PostgreSQL server in the portal](tutorial-design-database-using-azure-portal.md).
+- Learn about [service limits](concepts-limits.md).
+- Learn how to [scale out with read replicas](how-to-read-replicas-portal.md).
postgresql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-query-performance-insight.md
+
+ Title: Query Performance Insight - Azure Database for PostgreSQL - Single Server
+description: This article describes the Query Performance Insight feature in Azure Database for PostgreSQL - Single Server.
+Last updated: 08/21/2019
+# Query Performance Insight
+
+**Applies to:** Azure Database for PostgreSQL - Single Server versions 9.6, 10, 11
+
+Query Performance Insight helps you to quickly identify what your longest running queries are, how they change over time, and what waits are affecting them.
+
+## Prerequisites
+For Query Performance Insight to function, data must exist in the [Query Store](concepts-query-store.md).
+
+## Viewing performance insights
+The [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal will surface visualizations on key information from Query Store.
+
+In the portal page of your Azure Database for PostgreSQL server, select **Query Performance Insight** under the **Intelligent Performance** section of the menu bar. A **Query Text is no longer supported** message is shown; however, the query text can still be viewed by connecting to **azure_sys** and querying `query_store.query_texts_view`.
+
+The **Long running queries** tab shows the top five queries by average duration per execution, aggregated in 15-minute intervals. You can view more queries by selecting from the **Number of Queries** drop-down. The chart colors may change for a specific Query ID when you do this.
+
+You can click and drag in the chart to narrow down to a specific time window. Alternatively, use the zoom in and out icons to view a smaller or larger period of time respectively.
+
+The table below the chart gives more details about the long-running queries in that time window.
+
+Select the **Wait Statistics** tab to view the corresponding visualizations on waits in the server.
+
+## Considerations
+* Query Performance Insight is not available for [read replicas](concepts-read-replicas.md).
+
+## Next steps
+- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for PostgreSQL.
+
postgresql Concepts Query Store Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-query-store-best-practices.md
+
+ Title: Query Store best practices in Azure Database for PostgreSQL - Single Server
+description: This article describes best practices for the Query Store in Azure Database for PostgreSQL - Single Server.
+Last updated: 5/6/2019
+# Best practices for Query Store
+
+**Applies to:** Azure Database for PostgreSQL - Single Server versions 9.6, 10, 11
+
+This article outlines best practices for using Query Store in Azure Database for PostgreSQL.
+
+## Set the optimal query capture mode
+Let Query Store capture the data that matters to you.
+
+|**pg_qs.query_capture_mode** | **Scenario**|
+|||
+|_All_ |Analyze your workload thoroughly in terms of all queries and their execution frequencies and other statistics. Identify new queries in your workload. Detect if ad hoc queries are used to identify opportunities for user or auto parameterization. _All_ comes with an increased resource consumption cost. |
+|_Top_ |Focus your attention on top queries - those issued by clients.|
+|_None_ |You've already captured a query set and time window that you want to investigate, and you want to eliminate the distractions that other queries may introduce. _None_ is suitable for testing and benchmarking environments. _None_ should be used with caution as you might miss the opportunity to track and optimize important new queries. You can't recover data on those past time windows. |
+
+Query Store also includes a store for wait statistics. There is an additional capture mode setting that governs wait statistics: **pgms_wait_sampling.query_capture_mode** can be set to _none_ or _all_.
+
+> [!NOTE]
+> **pg_qs.query_capture_mode** supersedes **pgms_wait_sampling.query_capture_mode**. If pg_qs.query_capture_mode is _none_, the pgms_wait_sampling.query_capture_mode setting has no effect.
+
+## Keep the data you need
+The **pg_qs.retention_period_in_days** parameter specifies, in days, the data retention period for Query Store. Older query and statistics data is deleted. By default, Query Store is configured to retain the data for 7 days. Avoid keeping historical data you do not plan to use. Increase the value if you need to keep data longer.
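+
+For example, to keep 14 days of history, you can set the parameter with the Azure CLI (the resource group and server name are assumptions):
+
+```azurecli-interactive
+# Retain 14 days of Query Store data (server names are assumptions).
+az postgres server configuration set --name pg_qs.retention_period_in_days --resource-group myresourcegroup --server mydemoserver --value 14
+```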
+
+## Set the frequency of wait stats sampling
+The **pgms_wait_sampling.history_period** parameter specifies how often (in milliseconds) wait events are sampled. The shorter the period, the more frequent the sampling. More information is retrieved, but that comes at the cost of greater resource consumption. Increase this period if the server is under load or you don't need the granularity.
+
+## Get quick insights into Query Store
+You can use [Query Performance Insight](concepts-query-performance-insight.md) in the Azure portal to get quick insights into the data in Query Store. The visualizations surface the longest running queries and longest wait events over time.
+
+## Next steps
+- Learn how to get or set parameters using the [Azure portal](how-to-configure-server-parameters-using-portal.md) or the [Azure CLI](how-to-configure-server-parameters-using-cli.md).
postgresql Concepts Query Store Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-query-store-scenarios.md
+
+ Title: Query Store scenarios - Azure Database for PostgreSQL - Single Server
+description: This article describes some scenarios for the Query Store in Azure Database for PostgreSQL - Single Server.
+Last updated: 5/6/2019
+# Usage scenarios for Query Store
+
+**Applies to:** Azure Database for PostgreSQL - Single Server versions 9.6, 10, 11
+
+You can use Query Store in a wide variety of scenarios in which tracking and maintaining predictable workload performance is critical. Consider the following examples:
+- Identifying and tuning top expensive queries
+- A/B testing
+- Keeping performance stable during upgrades
+- Identifying and improving ad hoc workloads
+
+## Identify and tune expensive queries
+
+### Identify longest running queries
+Use the [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal to quickly identify the longest running queries. These queries typically tend to consume a significant amount of resources. Optimizing your longest running queries can improve performance by freeing up resources for use by other queries running on your system.
+
+### Target queries with performance deltas
+Query Store slices the performance data into time windows, so you can track a query's performance over time. This helps you identify exactly which queries are contributing to an increase in overall time spent. As a result, you can do targeted troubleshooting of your workload.
+
+### Tuning expensive queries
+When you identify a query with suboptimal performance, the action you take depends on the nature of the problem:
+- Use [Performance Recommendations](concepts-performance-recommendations.md) to determine whether there are any suggested indexes. If yes, create the index, and then use Query Store to evaluate query performance after creating the index.
+- Make sure that the statistics are up-to-date for the underlying tables used by the query, as shown in the sketch after this list.
+- Consider rewriting expensive queries. For example, take advantage of query parameterization and reduce the use of dynamic SQL. Implement optimal logic when reading data, such as applying data filtering on the database side rather than the application side.
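+
+Here is that statistics refresh as a minimal sketch; the table name is illustrative:
+
+```sql
+-- Refresh the planner statistics for a single table so the optimizer
+-- has current row counts and value distributions to work with.
+ANALYZE public.orders;
+```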
+
+## A/B testing
+Use Query Store to compare workload performance before and after an application change you plan to introduce. The following are examples of scenarios for using Query Store to assess the impact of an environment or application change on workload performance:
+- Rolling out a new version of an application.
+- Adding additional resources to the server.
+- Creating missing indexes on tables referenced by expensive queries.
+
+In any of these scenarios, apply the following workflow:
+1. Run your workload with Query Store before the planned change to generate a performance baseline.
+2. Apply the application change(s) at a controlled moment in time.
+3. Continue running the workload long enough to generate a performance picture of the system after the change.
+4. Compare results from before and after the change.
+5. Decide whether to keep the change or roll it back.
+
+## Identify and improve ad hoc workloads
+Some workloads do not have dominant queries that you can tune to improve overall application performance. Those workloads are typically characterized by a relatively large number of unique queries, each of them consuming a portion of system resources. Each unique query is executed infrequently, so individually their runtime consumption is not critical. On the other hand, given that the application is generating new queries all the time, a significant portion of system resources is spent on query compilation, which is not optimal. Usually, this situation happens if your application generates queries (instead of using stored procedures or parameterized queries) or if it relies on object-relational mapping frameworks that generate queries by default.
+
+If you are in control of the application code, you may consider rewriting the data access layer to use stored procedures or parameterized queries. However, this situation can be also improved without application changes by forcing query parameterization for the entire database (all queries) or for the individual query templates with the same query hash.
+
+## Next steps
+- Learn more about the [best practices for using Query Store](concepts-query-store-best-practices.md)
postgresql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-query-store.md
+
+ Title: Query Store - Azure Database for PostgreSQL - Single Server
+description: This article describes the Query Store feature in Azure Database for PostgreSQL - Single Server.
+Last updated: 07/01/2020
+# Monitor performance with the Query Store
+
+**Applies to:** Azure Database for PostgreSQL - Single Server versions 9.6 and above
+
+The Query Store feature in Azure Database for PostgreSQL provides a way to track query performance over time. Query Store simplifies performance troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Query Store automatically captures a history of queries and runtime statistics, and it retains them for your review. It separates data by time windows so that you can see database usage patterns. Data for all users, databases, and queries is stored in a database named **azure_sys** in the Azure Database for PostgreSQL instance.
+
+> [!IMPORTANT]
+> Do not modify the **azure_sys** database or its schemas. Doing so will prevent Query Store and related performance features from functioning correctly.
+
+## Enabling Query Store
+Query Store is an opt-in feature, so it isn't active by default on a server. The store is enabled or disabled globally for all the databases on a given server and cannot be turned on or off per database.
+
+### Enable Query Store using the Azure portal
+1. Sign in to the Azure portal and select your Azure Database for PostgreSQL server.
+2. Select **Server Parameters** in the **Settings** section of the menu.
+3. Search for the `pg_qs.query_capture_mode` parameter.
+4. Set the value to `TOP` and **Save**.
+
+To enable wait statistics in your Query Store:
+1. Search for the `pgms_wait_sampling.query_capture_mode` parameter.
+1. Set the value to `ALL` and **Save**.
+
+Alternatively you can set these parameters using the Azure CLI.
+```azurecli-interactive
+az postgres server configuration set --name pg_qs.query_capture_mode --resource-group myresourcegroup --server mydemoserver --value TOP
+az postgres server configuration set --name pgms_wait_sampling.query_capture_mode --resource-group myresourcegroup --server mydemoserver --value ALL
+```
+
+Allow up to 20 minutes for the first batch of data to persist in the azure_sys database.
+
+## Information in Query Store
+Query Store has two stores:
+- A runtime stats store for persisting the query execution statistics information.
+- A wait stats store for persisting wait statistics information.
+
+Common scenarios for using Query Store include:
+- Determining the number of times a query was executed in a given time window
+- Comparing the average execution time of a query across time windows to see large deltas
+- Identifying longest running queries in the past X hours
+- Identifying top N queries that are waiting on resources
+- Understanding wait nature for a particular query
+
+To minimize space usage, the runtime execution statistics in the runtime stats store are aggregated over a fixed, configurable time window. The information in these stores is visible by querying the query store views.
+
+## Access Query Store information
+
+Query Store data is stored in the azure_sys database on your Postgres server.
+
+The following query returns information about queries in Query Store:
+```sql
+SELECT * FROM query_store.qs_view;
+```
+
+Or this query for wait stats:
+```sql
+SELECT * FROM query_store.pgms_wait_sampling_view;
+```
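+
+For example, the following sketch, built on the qs_view columns documented later in this article, surfaces the five queries with the highest average duration over the last 24 hours:
+
+```sql
+-- Top 5 queries by average execution time in the last 24 hours.
+SELECT query_id,
+       SUM(calls) AS executions,
+       ROUND(AVG(mean_time)::numeric, 2) AS avg_ms
+FROM query_store.qs_view
+WHERE start_time >= now() - interval '24 hours'
+GROUP BY query_id
+ORDER BY avg_ms DESC
+LIMIT 5;
+```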
+
+## Finding wait queries
+Wait event types combine different wait events into buckets by similarity. Query Store provides the wait event type, specific wait event name, and the query in question. Being able to correlate this wait information with the query runtime statistics means you can gain a deeper understanding of what contributes to query performance characteristics.
+
+Here are some examples of how you can gain more insights into your workload using the wait statistics in Query Store:
+
+| **Observation** | **Action** |
+|||
+|High Lock waits | Check the query texts for the affected queries and identify the target entities. Look in Query Store for other queries modifying the same entity that are executed frequently and/or have a high duration. After identifying these queries, consider changing the application logic to improve concurrency, or use a less restrictive isolation level.|
+| High Buffer IO waits | Find the queries with a high number of physical reads in Query Store. If they match the queries with high IO waits, consider introducing an index on the underlying entity, in order to do seeks instead of scans. This would minimize the IO overhead of the queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations for this server that would optimize the queries.|
+| High Memory waits | Find the top memory consuming queries in Query Store. These queries are probably delaying further progress of the affected queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations that would optimize these queries.|
+
+## Configuration options
+When Query Store is enabled it saves data in 15-minute aggregation windows, up to 500 distinct queries per window.
+
+The following options are available for configuring Query Store parameters.
+
+| **Parameter** | **Description** | **Default** | **Range**|
+|||||
+| pg_qs.query_capture_mode | Sets which statements are tracked. | none | none, top, all |
+| pg_qs.max_query_text_length | Sets the maximum query length that can be saved. Longer queries will be truncated. | 6000 | 100 - 10K |
+| pg_qs.retention_period_in_days | Sets the retention period. | 7 | 1 - 30 |
+| pg_qs.track_utility | Sets whether utility commands are tracked | on | on, off |
+
+The following options apply specifically to wait statistics.
+
+| **Parameter** | **Description** | **Default** | **Range**|
+|||||
+| pgms_wait_sampling.query_capture_mode | Sets which statements are tracked for wait stats. | none | none, all|
+| pgms_wait_sampling.history_period | Sets the frequency, in milliseconds, at which wait events are sampled. | 100 | 1-600000 |
+
+> [!NOTE]
+> **pg_qs.query_capture_mode** supersedes **pgms_wait_sampling.query_capture_mode**. If pg_qs.query_capture_mode is NONE, the pgms_wait_sampling.query_capture_mode setting has no effect.
+
+Use the [Azure portal](how-to-configure-server-parameters-using-portal.md) or [Azure CLI](how-to-configure-server-parameters-using-cli.md) to get or set a different value for a parameter.
+
+## Views and functions
+View and manage Query Store using the following views and functions. Anyone in the PostgreSQL public role can use these views to see the data in Query Store. These views are only available in the **azure_sys** database.
+
+Queries are normalized by looking at their structure after removing literals and constants. If two queries are identical except for literal values, they will have the same hash.
+
+### query_store.qs_view
+This view returns the query runtime statistics data in Query Store. There is one row for each distinct database ID, user ID, and query ID per time bucket.
+
+|**Name** |**Type** | **References** | **Description**|
+|||||
+|runtime_stats_entry_id |bigint | | ID from the runtime_stats_entries table|
+|user_id |oid |pg_authid.oid |OID of user who executed the statement|
+|db_id |oid |pg_database.oid |OID of database in which the statement was executed|
+|query_id |bigint || Internal hash code, computed from the statement's parse tree|
+|query_sql_text |Varchar(10000) || Text of a representative statement. Different queries with the same structure are clustered together; this text is the text for the first of the queries in the cluster.|
+|plan_id |bigint | |ID of the plan corresponding to this query, not available yet|
+|start_time |timestamp || Queries are aggregated by time buckets - the time span of a bucket is 15 minutes by default. This is the start time corresponding to the time bucket for this entry.|
+|end_time |timestamp || End time corresponding to the time bucket for this entry.|
+|calls |bigint || Number of times the query executed|
+|total_time |double precision || Total query execution time, in milliseconds|
+|min_time |double precision || Minimum query execution time, in milliseconds|
+|max_time |double precision || Maximum query execution time, in milliseconds|
+|mean_time |double precision || Mean query execution time, in milliseconds|
+|stddev_time| double precision || Standard deviation of the query execution time, in milliseconds |
+|rows |bigint || Total number of rows retrieved or affected by the statement|
+|shared_blks_hit| bigint || Total number of shared block cache hits by the statement|
+|shared_blks_read| bigint || Total number of shared blocks read by the statement|
+|shared_blks_dirtied| bigint || Total number of shared blocks dirtied by the statement |
+|shared_blks_written| bigint || Total number of shared blocks written by the statement|
+|local_blks_hit| bigint || Total number of local block cache hits by the statement|
+|local_blks_read| bigint || Total number of local blocks read by the statement|
+|local_blks_dirtied| bigint || Total number of local blocks dirtied by the statement|
+|local_blks_written| bigint || Total number of local blocks written by the statement|
+|temp_blks_read |bigint || Total number of temp blocks read by the statement|
+|temp_blks_written| bigint || Total number of temp blocks written by the statement|
+|blk_read_time |double precision || Total time the statement spent reading blocks, in milliseconds (if track_io_timing is enabled, otherwise zero)|
+|blk_write_time |double precision || Total time the statement spent writing blocks, in milliseconds (if track_io_timing is enabled, otherwise zero)|
+
+### query_store.query_texts_view
+This view returns query text data in Query Store. There is one row for each distinct query_text.
+
+| **Name** | **Type** | **Description** |
+|--|--|--|
+| query_text_id | bigint | ID for the query_texts table |
+| query_sql_text | Varchar(10000) | Text of a representative statement. Different queries with the same structure are clustered together; this text is the text for the first of the queries in the cluster. |
+
+### query_store.pgms_wait_sampling_view
+This view returns wait events data in Query Store. There is one row for each distinct database ID, user ID, query ID, and event.
+
+| **Name** | **Type** | **References** | **Description** |
+|--|--|--|--|
+| user_id | oid | pg_authid.oid | OID of user who executed the statement |
+| db_id | oid | pg_database.oid | OID of database in which the statement was executed |
+| query_id | bigint | | Internal hash code, computed from the statement's parse tree |
+| event_type | text | | The type of event for which the backend is waiting |
+| event | text | | The wait event name if backend is currently waiting |
+| calls | Integer | | Number of the same event captured |
+
+### Functions
+
+`query_store.qs_reset() returns void`
+
+`qs_reset` discards all statistics gathered so far by Query Store. This function can only be executed by the server admin role.
+
+`query_store.staging_data_reset() returns void`
+
+`staging_data_reset` discards all statistics gathered in memory by Query Store (that is, data in memory that has not yet been flushed to the database). This function can only be executed by the server admin role.
+
+## Azure Monitor
+Azure Database for PostgreSQL is integrated with [Azure Monitor diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md). Diagnostic settings allow you to send your Postgres logs in JSON format to [Azure Monitor Logs](../../azure-monitor/logs/log-query-overview.md) for analytics and alerting, Event Hubs for streaming, and Azure Storage for archiving.
+
+>[!IMPORTANT]
+> This diagnostic feature is only available in the General Purpose and Memory Optimized pricing tiers.
+
+### Configure diagnostic settings
+You can enable diagnostic settings for your Postgres server using the Azure portal, CLI, REST API, and PowerShell. The log categories to configure are **QueryStoreRuntimeStatistics** and **QueryStoreWaitStatistics**.
+
+To enable resource logs using the Azure portal:
+
+1. In the portal, go to Diagnostic Settings in the navigation menu of your Postgres server.
+2. Select Add Diagnostic Setting.
+3. Name this setting.
+4. Select your preferred endpoint (storage account, event hub, log analytics).
+5. Select the log types **QueryStoreRuntimeStatistics** and **QueryStoreWaitStatistics**.
+6. Save your setting.
+
+To enable this setting using PowerShell, CLI, or REST API, visit the [diagnostic settings article](../../azure-monitor/essentials/diagnostic-settings.md).
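+
+For reference, an equivalent Azure CLI call might look like the following sketch; the resource IDs and setting name are placeholders:
+
+```azurecli-interactive
+# Send both Query Store log categories to a Log Analytics workspace (IDs are placeholders).
+az monitor diagnostic-settings create \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.DBforPostgreSQL/servers/mydemoserver" \
+  --name querystore-diagnostics \
+  --workspace "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/myworkspace" \
+  --logs '[{"category":"QueryStoreRuntimeStatistics","enabled":true},{"category":"QueryStoreWaitStatistics","enabled":true}]'
+```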
+
+### JSON log format
+The following tables describe the fields for the two log types. Depending on the output endpoint you choose, the fields included and the order in which they appear may vary.
+
+#### QueryStoreRuntimeStatistics
+|**Field** | **Description** |
+|||
+| TimeGenerated [UTC] | Time stamp when the log was recorded in UTC |
+| ResourceId | Postgres server's Azure resource URI |
+| Category | `QueryStoreRuntimeStatistics` |
+| OperationName | `QueryStoreRuntimeStatisticsEvent` |
+| LogicalServerName_s | Postgres server name |
+| runtime_stats_entry_id_s | ID from the runtime_stats_entries table |
+| user_id_s | OID of user who executed the statement |
+| db_id_s | OID of database in which the statement was executed |
+| query_id_s | Internal hash code, computed from the statement's parse tree |
+| end_time_s | End time corresponding to the time bucket for this entry |
+| calls_s | Number of times the query executed |
+| total_time_s | Total query execution time, in milliseconds |
+| min_time_s | Minimum query execution time, in milliseconds |
+| max_time_s | Maximum query execution time, in milliseconds |
+| mean_time_s | Mean query execution time, in milliseconds |
+| ResourceGroup | The resource group |
+| SubscriptionId | Your subscription ID |
+| ResourceProvider | `Microsoft.DBForPostgreSQL` |
+| Resource | Postgres server name |
+| ResourceType | `Servers` |
+
+#### QueryStoreWaitStatistics
+|**Field** | **Description** |
+|||
+| TimeGenerated [UTC] | Time stamp when the log was recorded in UTC |
+| ResourceId | Postgres server's Azure resource URI |
+| Category | `QueryStoreWaitStatistics` |
+| OperationName | `QueryStoreWaitEvent` |
+| user_id_s | OID of user who executed the statement |
+| db_id_s | OID of database in which the statement was executed |
+| query_id_s | Internal hash code of the query |
+| calls_s | Number of the same event captured |
+| event_type_s | The type of event for which the backend is waiting |
+| event_s | The wait event name if the backend is currently waiting |
+| start_time_t | Event start time |
+| end_time_s | Event end time |
+| LogicalServerName_s | Postgres server name |
+| ResourceGroup | The resource group |
+| SubscriptionId | Your subscription ID |
+| ResourceProvider | `Microsoft.DBForPostgreSQL` |
+| Resource | Postgres server name |
+| ResourceType | `Servers` |
+
+## Limitations and known issues
+- If a PostgreSQL server has the parameter `default_transaction_read_only` set to on, Query Store cannot capture data.
+- Query Store functionality can be interrupted if it encounters long Unicode queries (>= 6000 bytes).
+- [Read replicas](concepts-read-replicas.md) replicate Query Store data from the primary server. This means that a read replica's Query Store does not provide statistics about queries run on the read replica.
+
+## Next steps
+- Learn more about [scenarios where Query Store can be especially helpful](concepts-query-store-scenarios.md).
+- Learn more about [best practices for using Query Store](concepts-query-store-best-practices.md).
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-read-replicas.md
+
+ Title: Read replicas - Azure Database for PostgreSQL - Single Server
+description: This article describes the read replica feature in Azure Database for PostgreSQL - Single Server.
+Last updated: 05/29/2021
+# Read replicas in Azure Database for PostgreSQL - Single Server
+
+The read replica feature allows you to replicate data from an Azure Database for PostgreSQL server to a read-only server. Replicas are updated **asynchronously** with the PostgreSQL engine native physical replication technology. You can replicate from the primary server to up to five replicas.
+
+Replicas are new servers that you manage similarly to regular Azure Database for PostgreSQL servers. For each read replica, you're billed for the provisioned compute in vCores and storage in GB/month.
+
+Learn how to [create and manage replicas](how-to-read-replicas-portal.md).
+
+## When to use a read replica
+The read replica feature helps to improve the performance and scale of read-intensive workloads. Read workloads can be isolated to the replicas, while write workloads can be directed to the primary. Read replicas can also be deployed in a different region and can be promoted to a read/write server for disaster recovery.
+
+A common scenario is to have BI and analytical workloads use the read replica as the data source for reporting.
+
+Because replicas are read-only, they don't directly reduce write-capacity burdens on the primary.
+
+### Considerations
+The feature is meant for scenarios where the lag is acceptable and meant for offloading queries. It isn't meant for synchronous replication scenarios where the replica data is expected to be up-to-date. There will be a measurable delay between the primary and the replica. This can be in minutes or even hours depending on the workload and the latency between the primary and the replica. The data on the replica eventually becomes consistent with the data on the primary. Use this feature for workloads that can accommodate this delay.
+
+> [!NOTE]
+> For most workloads, read replicas offer near-real-time updates from the primary. However, with persistently heavy write-intensive primary workloads, the replication lag can continue to grow and may never catch up with the primary. This may also increase storage usage at the primary, as the WAL files are not deleted until they have been received at the replica. If this situation persists, deleting and recreating the read replica after the write-intensive workload completes is an option to bring the replica back to a good state with respect to lag.
+> Asynchronous read replicas are not suitable for such heavy write workloads. When evaluating read replicas for your application, monitor the lag on the replica for a full application workload cycle through its peak and off-peak times to assess the possible lag and the expected RTO/RPO at various points of the workload cycle.
+
+> [!NOTE]
+> Automatic backups are performed for replica servers that are configured with up to 4 TB of storage.
+
+## Cross-region replication
+You can create a read replica in a different region from your primary server. Cross-region replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users.
+
+>[!NOTE]
+> Basic tier servers only support same-region replication.
+
+You can have a primary server in any [Azure Database for PostgreSQL region](https://azure.microsoft.com/global-infrastructure/services/?products=postgresql). A primary server can have a replica in its paired region or the universal replica regions. The picture below shows which replica regions are available depending on your primary region.
+
+[ :::image type="content" source="media/concepts-read-replica/read-replica-regions.png" alt-text="Read replica regions":::](media/concepts-read-replica/read-replica-regions.png#lightbox)
+
+### Universal replica regions
+You can always create a read replica in any of the following regions, regardless of where your primary server is located. These are the universal replica regions:
+
+Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, East Asia, East US, East US 2, Japan East, Japan West, Korea Central, Korea South, North Central US, North Europe, South Central US, Southeast Asia, UK South, UK West, West Europe, West US, West US 2, West Central US.
+
+### Paired regions
+In addition to the universal replica regions, you can create a read replica in the Azure paired region of your primary server. If you don't know your region's pair, you can learn more from the [Azure Paired Regions article](../../availability-zones/cross-region-replication-azure.md).
+
+If you are using cross-region replicas for disaster recovery planning, we recommend you create the replica in the paired region instead of one of the other regions. Paired regions avoid simultaneous updates and prioritize physical isolation and data residency.
+
+There are limitations to consider:
+
+* Uni-directional pairs: Some Azure regions are paired in one direction only. These regions include West India and Brazil South. For example, a primary server in West India can create a replica in South India, but a primary server in South India cannot create a replica in West India. This is because West India's secondary region is South India, but South India's secondary region is not West India.
+
+## Create a replica
+When you start the create replica workflow, a blank Azure Database for PostgreSQL server is created. The new server is filled with the data that was on the primary server. The creation time depends on the amount of data on the primary and the time since the last weekly full backup. The time can range from a few minutes to several hours.
+
+Every replica is enabled for storage [auto-grow](concepts-pricing-tiers.md#storage-auto-grow). The auto-grow feature allows the replica to keep up with the data replicated to it, and prevents a break in replication caused by out-of-storage errors.
+
+The read replica feature uses PostgreSQL physical replication, not logical replication. Streaming replication by using replication slots is the default operation mode. When necessary, log shipping is used to catch up.
+
+Learn how to [create a read replica in the Azure portal](how-to-read-replicas-portal.md).
+
+If your source PostgreSQL server is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption-postgresql.md) for additional considerations.
+
+## Connect to a replica
+When you create a replica, it doesn't inherit the firewall rules or VNet service endpoint of the primary server. These rules must be set up independently for the replica.
+
+The replica inherits the admin account from the primary server. All user accounts on the primary server are replicated to the read replicas. You can only connect to a read replica by using the user accounts that are available on the primary server.
+
+You can connect to the replica by using its hostname and a valid user account, as you would on a regular Azure Database for PostgreSQL server. For a server named **myreplica** with the admin username **myadmin**, you can connect to the replica by using psql:
+
+```bash
+psql -h myreplica.postgres.database.azure.com -U myadmin@myreplica -d postgres
+```
+
+At the prompt, enter the password for the user account.
+
+## Monitor replication
+Azure Database for PostgreSQL provides two metrics for monitoring replication. The two metrics are **Max Lag Across Replicas** and **Replica Lag**. To learn how to view these metrics, see the **Monitor a replica** section of the [read replica how-to article](how-to-read-replicas-portal.md).
+
+The **Max Lag Across Replicas** metric shows the lag in bytes between the primary and the most-lagging replica. This metric is applicable and available on the primary server only, and will be available only if at least one of the read replicas is connected to the primary and the primary is in streaming replication mode. The lag information does not show details when the replica is in the process of catching up with the primary using the archived logs of the primary in a file-shipping replication mode.
+
+The **Replica Lag** metric shows the time since the last replayed transaction. If no transactions are occurring on your primary server, the metric reflects this idle time as lag. This metric is applicable and available for replica servers only. Replica Lag is calculated from the `pg_stat_wal_receiver` view:
+
+```SQL
+SELECT EXTRACT (EPOCH FROM now() - pg_last_xact_replay_timestamp());
+```
+
+Set an alert to inform you when the replica lag reaches a value that isn't acceptable for your workload.
+
+For additional insight, query the primary server directly to get the replication lag in bytes on all replicas.
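+
+As a sketch, assuming PostgreSQL 10 or later (where the WAL functions are named `pg_current_wal_lsn` and `pg_wal_lsn_diff`) and an admin role that can read `pg_stat_replication`, you could run a query like the following against the primary with psql; the server and user names are hypothetical:
+
+```bash
+# Reports one row per connected replica with its replication lag in bytes.
+psql -h mydemoserver.postgres.database.azure.com -U myadmin@mydemoserver -d postgres \
+  -c "SELECT application_name, pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes FROM pg_stat_replication;"
+```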
+
+> [!NOTE]
+> If a primary server or read replica restarts, the time it takes to restart and catch up is reflected in the Replica Lag metric.
+
+## Stop replication / Promote replica
+You can stop the replication between a primary and a replica at any time. The stop action causes the replica to restart and promotes the replica to be an independent, standalone read-writeable server. The data in the standalone server is the data that was available on the replica server at the time replication was stopped. Any subsequent updates at the primary are not propagated to the replica. However, the replica server may have accumulated logs that are not yet applied. As part of the restart process, the replica applies all the pending logs before accepting client connections.
+
+>[!NOTE]
+> Resetting the admin password on a replica server is currently not supported. Additionally, updating the admin password along with the promote replica operation in the same request is also not supported. If you wish to do this, you must first promote the replica server, then update the password on the newly promoted server separately.
+
+### Considerations
+- Before you stop replication on a read replica, check for the replication lag to ensure the replica has all the data that you require.
+- Because the read replica has to apply all pending logs before it can be made a standalone server, RTO can be higher for write-heavy workloads when replication is stopped, as there could be a significant delay on the replica. Please pay attention to this when planning to promote a replica.
+- The promoted replica server cannot be made into a replica again.
+- If you promote a replica to be the primary server, you cannot establish replication back to the old primary server. If you want to go back to the old primary region, you can either establish a new replica server with a new name, or delete the old primary and create a replica using the old primary name.
+- If you have multiple read replicas, and if you promote one of them to be your primary server, other replica servers are still connected to the old primary. You may have to recreate replicas from the new, promoted server.
+
+When you stop replication, the replica loses all links to its previous primary and other replicas.
+
+Learn how to [stop replication to a replica](how-to-read-replicas-portal.md).
+
+## Failover to replica
+
+In the event of a primary server failure, the service does **not** automatically fail over to the read replica.
+
+Since replication is asynchronous, there could be a considerable lag between the primary and the replica. The amount of lag is influenced by a number of factors such as the type of workload running on the primary server and the latency between the primary and the replica server. In typical cases with a nominal write workload, replica lag is expected to be between a few seconds and a few minutes. However, in cases where the primary runs a very write-intensive workload and the replica is not catching up fast enough, the lag can be much higher. You can track the replication lag for each replica using the metric *Replica Lag*. This metric shows the time since the last replayed transaction at the replica. We recommend that you identify the average lag by observing the replica lag over a period of time. You can set an alert on replica lag, so that if it goes outside your expected range, you will be notified to take action.
+
+> [!Tip]
+> If you failover to the replica, the lag at the time you delink the replica from the primary will indicate how much data is lost.
+
+Once you have decided you want to failover to a replica,
+
+1. Stop replication to the replica<br/>
+ This step is necessary to make the replica server a standalone server that can accept writes. As part of this process, the replica server will restart and be delinked from the primary. Once you initiate stop replication, the backend process typically takes a few minutes to apply any residual logs that were not yet applied and to open the database as a read-writeable server. See the [stop replication](#stop-replication--promote-replica) section of this article to understand the implications of this action.
+
+2. Point your application to the (former) replica<br/>
+ Each server has a unique connection string. Update your application connection string to point to the (former) replica instead of the primary.
+
+Once your application is successfully processing reads and writes, you have completed the failover. The amount of downtime your application experiences will depend on when you detect an issue and complete steps 1 and 2 above.
+
+### Disaster recovery
+
+When there is a major disaster event, such as availability zone-level or regional failures, you can perform a disaster recovery operation by promoting your read replica. In the Azure portal, navigate to the read replica server, select the **Replication** tab, and stop the replica to promote it to an independent server. Alternatively, you can use the [Azure CLI](/cli/azure/postgres/server/replica#az-postgres-server-replica-stop) to stop and promote the replica server.
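+
+A minimal sketch of the CLI approach follows; the replica and resource group names are hypothetical:
+
+```azurecli
+# Stop replication, which promotes the replica to a standalone read-writeable server.
+az postgres server replica stop \
+  --name mydemoreplica \
+  --resource-group myresourcegroup
+```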
+
+## Considerations
+
+This section summarizes considerations about the read replica feature.
+
+### Prerequisites
+Read replicas and [logical decoding](concepts-logical.md) both depend on the Postgres write ahead log (WAL) for information. These two features need different levels of logging from Postgres. Logical decoding needs a higher level of logging than read replicas.
+
+To configure the right level of logging, use the Azure replication support parameter. Azure replication support has three setting options:
+
+* **Off** - Puts the least information in the WAL. This setting is not available on most Azure Database for PostgreSQL servers.
+* **Replica** - More verbose than **Off**. This is the minimum level of logging needed for [read replicas](concepts-read-replicas.md) to work. This setting is the default on most servers.
+* **Logical** - More verbose than **Replica**. This is the minimum level of logging for logical decoding to work. Read replicas also work at this setting.
+
+### New replicas
+A read replica is created as a new Azure Database for PostgreSQL server. An existing server can't be made into a replica. You can't create a replica of another read replica.
+
+### Replica configuration
+A replica is created by using the same compute and storage settings as the primary. After a replica is created, several settings can be changed, including storage and the backup retention period.
+
+Firewall rules, virtual network rules, and parameter settings are not inherited from the primary server to the replica when the replica is created or afterwards.
+
+### Scaling
+Scaling vCores or between General Purpose and Memory Optimized:
+* PostgreSQL requires the `max_connections` setting on a secondary server to be [greater than or equal to the setting on the primary](https://www.postgresql.org/docs/current/hot-standby.html); otherwise, the secondary will not start.
+* In Azure Database for PostgreSQL, the maximum allowed connections for each server is fixed to the compute SKU since connections occupy memory. You can learn more about the [mapping between max_connections and compute SKUs](concepts-limits.md).
+* **Scaling up**: First scale up a replica's compute, then scale up the primary. This order will prevent errors from violating the `max_connections` requirement. A sketch of this order using the Azure CLI follows this list.
+* **Scaling down**: First scale down the primary's compute, then scale down the replica. If you try to scale the replica lower than the primary, there will be an error since this violates the `max_connections` requirement.
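+
+As a sketch of the scale-up order with the Azure CLI (server names and target SKU are hypothetical):
+
+```azurecli
+# Scale the replica's compute first, then the primary, to satisfy the max_connections requirement.
+az postgres server update --resource-group myresourcegroup --name mydemoreplica --sku-name GP_Gen5_8
+az postgres server update --resource-group myresourcegroup --name mydemoserver --sku-name GP_Gen5_8
+```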
+
+Scaling storage:
+* All replicas have storage auto-grow enabled to prevent replication issues from a storage-full replica. This setting cannot be disabled.
+* You can also manually scale up storage, as you would do on any other server.
+
+### Basic tier
+Basic tier servers only support same-region replication.
+
+### max_prepared_transactions
+[PostgreSQL requires](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAX-PREPARED-TRANSACTIONS) the value of the `max_prepared_transactions` parameter on the read replica to be greater than or equal to the primary value; otherwise, the replica won't start. If you want to change `max_prepared_transactions` on the primary, first change it on the replicas.
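+
+For example, you might change the parameter with the Azure CLI, updating the replica before the primary; the server names and value below are hypothetical:
+
+```azurecli
+# Change the replica first, then the primary.
+az postgres server configuration set --resource-group myresourcegroup --server-name mydemoreplica --name max_prepared_transactions --value 200
+az postgres server configuration set --resource-group myresourcegroup --server-name mydemoserver --name max_prepared_transactions --value 200
+```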
+
+### Stopped replicas
+If you stop replication between a primary server and a read replica, the replica restarts to apply the change. The stopped replica becomes a standalone server that accepts both reads and writes. The standalone server can't be made into a replica again.
+
+### Deleted primary and standalone servers
+When a primary server is deleted, all of its read replicas become standalone servers. The replicas are restarted to reflect this change.
+
+## Next steps
+* Learn how to [create and manage read replicas in the Azure portal](how-to-read-replicas-portal.md).
+* Learn how to [create and manage read replicas in the Azure CLI and REST API](how-to-read-replicas-cli.md).
postgresql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-security.md
+
+ Title: Security in Azure Database for PostgreSQL - Single Server
+description: An overview of the security features in Azure Database for PostgreSQL - Single Server.
+ Last updated : 11/22/2019
+# Security in Azure Database for PostgreSQL - Single Server
+
+There are multiple layers of security that are available to protect the data on your Azure Database for PostgreSQL server. This article outlines those security options.
+
+## Information protection and encryption
+
+### In-transit
+Azure Database for PostgreSQL secures your data by encrypting data in-transit with Transport Layer Security. Encryption (SSL/TLS) is enforced by default.
+
+### At-rest
+The Azure Database for PostgreSQL service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, is encrypted on disk, including the temporary files created while running queries. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. Storage encryption is always on and can't be disabled.
+
+## Network security
+Connections to an Azure Database for PostgreSQL server are first routed through a regional gateway. The gateway has a publicly accessible IP, while the server IP addresses are protected. For more information about the gateway, visit the [connectivity architecture article](concepts-connectivity-architecture.md).
+
+A newly created Azure Database for PostgreSQL server has a firewall that blocks all external connections. Though connections reach the gateway, they are not allowed to connect to the server.
+
+### IP firewall rules
+IP firewall rules grant access to servers based on the originating IP address of each request. See the [firewall rules overview](concepts-firewall-rules.md) for more information.
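+
+A sketch of creating a rule with the Azure CLI follows; the server, rule name, and IP address are hypothetical:
+
+```azurecli
+# Allow a single client IP address to reach the server.
+az postgres server firewall-rule create \
+  --resource-group myresourcegroup \
+  --server-name mydemoserver \
+  --name AllowMyClientIP \
+  --start-ip-address 203.0.113.4 \
+  --end-ip-address 203.0.113.4
+```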
+
+### Virtual network firewall rules
+Virtual network service endpoints extend your virtual network connectivity over the Azure backbone. Using virtual network rules you can enable your Azure Database for PostgreSQL server to allow connections from selected subnets in a virtual network. For more information, see the [virtual network service endpoint overview](concepts-data-access-and-security-vnet.md).
+
+### Private IP
+Private Link allows you to connect to your Azure Database for PostgreSQL Single server in Azure via a private endpoint. Azure Private Link essentially brings Azure services inside your private Virtual Network (VNet). The PaaS resources can be accessed using the private IP address just like any other resource in the VNet. For more information, see the [private link overview](concepts-data-access-and-security-private-link.md).
+
+## Access management
+
+While creating the Azure Database for PostgreSQL server, you provide credentials for an administrator role. This administrator role can be used to create additional [PostgreSQL roles](https://www.postgresql.org/docs/current/user-manag.html).
+
+You can also connect to the server using [Azure Active Directory authentication](concepts-azure-ad-authentication.md).
+
+## Threat protection
+
+You can opt in to [Advanced Threat Protection](/azure/defender-for-cloud/defender-for-databases-introduction) which detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit servers.
+
+[Audit logging](concepts-audit.md) is available to track activity in your databases.
+
+## Migrating from Oracle
+
+Oracle supports Transparent Data Encryption (TDE) to encrypt table and tablespace data. In Azure Database for PostgreSQL, the data is automatically encrypted at various layers. See the "At-rest" section in this page and also refer to various Security topics, including [customer managed keys](./concepts-data-encryption-postgresql.md) and [Infrastructure double encryption](./concepts-infrastructure-double-encryption.md). You may also consider using the [pgcrypto](https://www.postgresql.org/docs/11/pgcrypto.html) extension, which is supported in [Azure Database for PostgreSQL](./concepts-extensions.md).
+
+## Next steps
+- Enable firewall rules for [IPs](concepts-firewall-rules.md) or [virtual networks](concepts-data-access-and-security-vnet.md)
+- Learn about [Azure Active Directory authentication](concepts-azure-ad-authentication.md) in Azure Database for PostgreSQL
postgresql Concepts Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-server-logs.md
+
+ Title: Logs - Azure Database for PostgreSQL - Single Server
+description: Describes logging configuration, storage and analysis in Azure Database for PostgreSQL - Single Server
+ Last updated : 06/25/2020
+# Logs in Azure Database for PostgreSQL - Single Server
+
+Azure Database for PostgreSQL allows you to configure and access Postgres's standard logs. The logs can be used to identify, troubleshoot, and repair configuration errors and suboptimal performance. Logging information you can configure and access includes errors, query information, autovacuum records, connections, and checkpoints. (Access to transaction logs is not available).
+
+Audit logging is made available through a PostgreSQL extension, pgaudit. To learn more, visit the [auditing concepts](concepts-audit.md) article.
+
+## Configure logging
+You can configure Postgres standard logging on your server using the logging server parameters. On each Azure Database for PostgreSQL server, `log_checkpoints` and `log_connections` are on by default. There are additional parameters you can adjust to suit your logging needs.
+
+To learn more about Postgres log parameters, visit the [When To Log](https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN) and [What To Log](https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT) sections of the Postgres documentation. Most, but not all, Postgres logging parameters are available to configure in Azure Database for PostgreSQL.
+
+To learn how to configure parameters in Azure Database for PostgreSQL, see the [portal documentation](how-to-configure-server-parameters-using-portal.md) or the [CLI documentation](how-to-configure-server-parameters-using-cli.md).
+
+> [!NOTE]
+> Configuring a high volume of logs, for example statement logging, can add significant performance overhead.
+
+## Access .log files
+The default log format in Azure Database for PostgreSQL is .log. A sample line from this log looks like:
+
+```
+2019-10-14 17:00:03 UTC-5d773cc3.3c-LOG: connection received: host=101.0.0.6 port=34331 pid=16216
+```
+
+Azure Database for PostgreSQL provides a short-term storage location for the .log files. A new file begins every 1 hour or 100 MB, whichever comes first. Logs are appended to the current file as they are emitted from Postgres.
+
+You can set the retention period for this short-term log storage using the `log_retention_period` parameter. The default value is 3 days; the maximum value is 7 days. The short-term storage location can hold up to 1 GB of log files. After 1 GB, the oldest files, regardless of retention period, will be deleted to make room for new logs.
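+
+A minimal sketch of changing the retention period with the Azure CLI (the server names are hypothetical):
+
+```azurecli
+# Keep short-term .log files for the maximum of 7 days.
+az postgres server configuration set \
+  --resource-group myresourcegroup \
+  --server-name mydemoserver \
+  --name log_retention_period \
+  --value 7
+```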
+
+For longer-term retention of logs and log analysis, you can download the .log files and move them to a third-party service. You can download the files using the [Azure portal](how-to-configure-server-logs-in-portal.md) or the [Azure CLI](how-to-configure-server-logs-using-cli.md). Alternatively, you can configure Azure Monitor diagnostic settings, which automatically emit your logs (in JSON format) to longer-term locations. Learn more about this option in the section below.
+
+You can stop generating .log files by setting the parameter `logging_collector` to OFF. Turning off .log file generation is recommended if you are using Azure Monitor diagnostic settings. This configuration will reduce the performance impact of additional logging.
+> [!NOTE]
+> Restart the server to apply this change.
+
+## Resource logs
+
+Azure Database for PostgreSQL is integrated with Azure Monitor diagnostic settings. Diagnostic settings allows you to send your Postgres logs in JSON format to Azure Monitor Logs for analytics and alerting, Event Hubs for streaming, and Azure Storage for archiving.
+
+> [!IMPORTANT]
+> This diagnostic feature for server logs is only available in the General Purpose and Memory Optimized [pricing tiers](concepts-pricing-tiers.md).
+
+### Configure diagnostic settings
+
+You can enable diagnostic settings for your Postgres server using the Azure portal, CLI, REST API, and PowerShell. The log category to select is **PostgreSQLLogs**. (There are other logs you can configure if you are using [Query Store](concepts-query-store.md).)
+
+To enable resource logs using the Azure portal:
+
+ 1. In the portal, go to *Diagnostic Settings* in the navigation menu of your Postgres server.
+ 2. Select *Add Diagnostic Setting*.
+ 3. Name this setting.
+ 4. Select your preferred endpoint (storage account, event hub, log analytics).
+ 5. Select the log type **PostgreSQLLogs**.
+ 6. Save your setting.
+
+To enable resource logs using PowerShell, CLI, or REST API, visit the [diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md) article.
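+
+As a sketch using the Azure CLI (the setting name is arbitrary, and the workspace resource ID is a placeholder you would substitute):
+
+```azurecli
+# Send PostgreSQLLogs to a Log Analytics workspace.
+az monitor diagnostic-settings create \
+  --name send-pg-logs \
+  --resource $(az postgres server show --resource-group myresourcegroup --name mydemoserver --query id --output tsv) \
+  --workspace <log-analytics-workspace-resource-id> \
+  --logs '[{"category":"PostgreSQLLogs","enabled":true}]'
+```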
+
+### Access resource logs
+
+The way you access the logs depends on which endpoint you choose. For Azure Storage, see the [logs storage account](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) article. For Event Hubs, see the [stream Azure logs](../../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs) article.
+
+For Azure Monitor Logs, logs are sent to the workspace you selected. The Postgres logs use the **AzureDiagnostics** collection mode, so they can be queried from the AzureDiagnostics table. The fields in the table are described below. Learn more about querying and alerting in the [Azure Monitor Logs query](../../azure-monitor/logs/log-query-overview.md) overview.
+
+The following are queries you can try to get started. You can configure alerts based on queries.
+
+Search for all Postgres logs for a particular server in the last day:
+```kusto
+AzureDiagnostics
+| where LogicalServerName_s == "myservername"
+| where Category == "PostgreSQLLogs"
+| where TimeGenerated > ago(1d)
+```
+
+Search for all non-localhost connection attempts:
+```kusto
+AzureDiagnostics
+| where Message contains "connection received" and Message !contains "host=127.0.0.1"
+| where Category == "PostgreSQLLogs" and TimeGenerated > ago(6h)
+```
+The query above will show results over the last 6 hours for any Postgres server logging in this workspace.
+
+### Log format
+
+The following table describes the fields for the **PostgreSQLLogs** type. Depending on the output endpoint you choose, the fields included and the order in which they appear may vary.
+
+|**Field** | **Description** |
+| --- | --- |
+| TenantId | Your tenant ID |
+| SourceSystem | `Azure` |
+| TimeGenerated [UTC] | Time stamp when the log was recorded in UTC |
+| Type | Type of the log. Always `AzureDiagnostics` |
+| SubscriptionId | GUID for the subscription that the server belongs to |
+| ResourceGroup | Name of the resource group the server belongs to |
+| ResourceProvider | Name of the resource provider. Always `MICROSOFT.DBFORPOSTGRESQL` |
+| ResourceType | `Servers` |
+| ResourceId | Resource URI |
+| Resource | Name of the server |
+| Category | `PostgreSQLLogs` |
+| OperationName | `LogEvent` |
+| errorLevel | Logging level, example: LOG, ERROR, NOTICE |
+| Message | Primary log message |
+| Domain | Server version, example: postgres-10 |
+| Detail | Secondary log message (if applicable) |
+| ColumnName | Name of the column (if applicable) |
+| SchemaName | Name of the schema (if applicable) |
+| DatatypeName | Name of the datatype (if applicable) |
+| LogicalServerName | Name of the server |
+| _ResourceId | Resource URI |
+| Prefix | Log line's prefix |
+
+## Next steps
+- Learn more about accessing logs from the [Azure portal](how-to-configure-server-logs-in-portal.md) or [Azure CLI](how-to-configure-server-logs-using-cli.md).
+- Learn more about [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+- Learn more about [audit logs](concepts-audit.md)
postgresql Concepts Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-servers.md
+
+ Title: Servers - Azure Database for PostgreSQL - Single Server
+description: This article provides considerations and guidelines for configuring and managing Azure Database for PostgreSQL - Single Server.
+ Last updated : 5/6/2019
+# Azure Database for PostgreSQL - Single Server
+This article provides considerations and guidelines for working with Azure Database for PostgreSQL - Single Server.
+
+## What is an Azure Database for PostgreSQL server?
+A server in the Azure Database for PostgreSQL - Single Server deployment option is a central administrative point for multiple databases. It is the same PostgreSQL server construct that you may be familiar with in the on-premises world. Specifically, the PostgreSQL service is managed, provides performance guarantees, and exposes access and features at the server level.
+
+An Azure Database for PostgreSQL server:
+
+- Is created within an Azure subscription.
+- Is the parent resource for databases.
+- Provides a namespace for databases.
+- Is a container with strong lifetime semantics - delete a server and it deletes the contained databases.
+- Collocates resources in a region.
+- Provides a connection endpoint for server and database access.
+- Provides the scope for management policies that apply to its databases: login, firewall, users, roles, configurations, etc.
+- Is available in multiple versions. For more information, see [supported PostgreSQL database versions](concepts-supported-versions.md).
+- Is extensible by users. For more information, see [PostgreSQL extensions](concepts-extensions.md).
+
+Within an Azure Database for PostgreSQL server, you can create one or multiple databases. You can opt to create a single database per server to utilize all the resources, or create multiple databases to share the resources. The pricing is structured per-server, based on the configuration of pricing tier, vCores, and storage (GB). For more information, see [Pricing tiers](./concepts-pricing-tiers.md).
+
+## How do I connect and authenticate to an Azure Database for PostgreSQL server?
+The following elements help ensure safe access to your database:
+
+|Security concept|Description|
+|:--|:--|
+| **Authentication and authorization** | Azure Database for PostgreSQL server supports native PostgreSQL authentication. You can connect and authenticate to a server with the server's admin login. |
+| **Protocol** | The service supports a message-based protocol used by PostgreSQL. |
+| **TCP/IP** | The protocol is supported over TCP/IP, and over Unix-domain sockets. |
+| **Firewall** | To help protect your data, a firewall rule prevents all access to your server and to its databases, until you specify which computers have permission. See [Azure Database for PostgreSQL Server firewall rules](concepts-firewall-rules.md). |
+
+## Managing your server
+You can manage Azure Database for PostgreSQL servers by using the [Azure portal](https://portal.azure.com) or the [Azure CLI](/cli/azure/postgres).
+
+While creating a server, you set up the credentials for your admin user. The admin user is the highest privilege user you have on the server. It belongs to the role `azure_pg_admin`. This role does not have full superuser permissions.
+
+The PostgreSQL superuser attribute is assigned to the `azure_superuser` role, which belongs to the managed service. You do not have access to this role.
+
+An Azure Database for PostgreSQL server has default databases:
+- **postgres** - A default database you can connect to once your server is created.
+- **azure_maintenance** - This database is used to separate the processes that provide the managed service from user actions. You do not have access to this database.
+- **azure_sys** - A database for the Query Store. This database does not accumulate data when Query Store is off; this is the default setting. For more information, see the [Query Store overview](concepts-query-store.md).
+
+## Server parameters
+The PostgreSQL server parameters determine the configuration of the server. In Azure Database for PostgreSQL, the list of parameters can be viewed and edited using the Azure portal or the Azure CLI.
+
+As a managed service for Postgres, the configurable parameters in Azure Database for PostgreSQL are a subset of the parameters in a local Postgres instance (For more information on Postgres parameters, see the [PostgreSQL documentation](https://www.postgresql.org/docs/9.6/static/runtime-config.html)). Your Azure Database for PostgreSQL server is enabled with default values for each parameter on creation. Some parameters that would require a server restart or superuser access for changes to take effect cannot be configured by the user.
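+
+A sketch of viewing parameters with the Azure CLI (the server names are hypothetical):
+
+```azurecli
+# List the server parameters and their current values.
+az postgres server configuration list \
+  --resource-group myresourcegroup \
+  --server-name mydemoserver \
+  --output table
+```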
+
+## Next steps
+- For an overview of the service, see [Azure Database for PostgreSQL Overview](overview.md).
+- For information about specific resource quotas and limitations based on your **service tier**, see [Service tiers](concepts-pricing-tiers.md).
+- For information on connecting to the service, see [Connection libraries for Azure Database for PostgreSQL](concepts-connection-libraries.md).
+- View and edit server parameters through [Azure portal](how-to-configure-server-parameters-using-portal.md) or [Azure CLI](how-to-configure-server-parameters-using-cli.md).
postgresql Concepts Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-ssl-connection-security.md
+
+ Title: SSL/TLS - Azure Database for PostgreSQL - Single Server
+description: Instructions and information on how to configure TLS connectivity for Azure Database for PostgreSQL - Single Server.
+ Last updated : 07/08/2020
+# Configure TLS connectivity in Azure Database for PostgreSQL - Single Server
+
+Azure Database for PostgreSQL prefers connecting your client applications to the PostgreSQL service using Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL). Enforcing TLS connections between your database server and your client applications helps protect against "man-in-the-middle" attacks by encrypting the data stream between the server and your application.
+
+By default, the PostgreSQL database service is configured to require TLS connection. You can choose to disable requiring TLS if your client application does not support TLS connectivity.
+
+>[!NOTE]
+> Based on feedback from customers, we have extended the root certificate deprecation for our existing Baltimore Root CA until February 15, 2021 (02/15/2021).
+
+> [!IMPORTANT]
+> The SSL root certificate is set to expire starting February 15, 2021 (02/15/2021). Please update your application to use the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem). To learn more, see [planned certificate updates](concepts-certificate-rotation.md).
+
+## Enforcing TLS connections
+
+For all Azure Database for PostgreSQL servers provisioned through the Azure portal and CLI, enforcement of TLS connections is enabled by default.
+
+Likewise, connection strings that are pre-defined in the "Connection Strings" settings under your server in the Azure portal include the required parameters for common languages to connect to your database server using TLS. The TLS parameter varies based on the connector, for example "ssl=true" or "sslmode=require" or "sslmode=required" and other variations.
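+
+For example, a psql connection string that requires TLS might look like the following; the server and user names are hypothetical:
+
+```bash
+# sslmode=require enforces TLS without verifying the server certificate.
+psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=mylogin@mydemoserver sslmode=require"
+```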
+
+## Configure Enforcement of TLS
+
+You can optionally disable enforcing TLS connectivity. Microsoft Azure recommends always enabling the **Enforce SSL connection** setting for enhanced security.
+
+### Using the Azure portal
+
+Visit your Azure Database for PostgreSQL server and click **Connection security**. Use the toggle button to enable or disable the **Enforce SSL connection** setting. Then, click **Save**.
+
+You can confirm the setting by viewing the **Overview** page to see the **SSL enforce status** indicator.
+
+### Using Azure CLI
+
+You can enable or disable the **ssl-enforcement** parameter using `Enabled` or `Disabled` values respectively in Azure CLI.
+
+```azurecli
+az postgres server update --resource-group myresourcegroup --name mydemoserver --ssl-enforcement Enabled
+```
+
+## Ensure your application or framework supports TLS connections
+
+Some application frameworks that use PostgreSQL for their database services do not enable TLS by default during installation. If your PostgreSQL server enforces TLS connections but the application is not configured for TLS, the application may fail to connect to your database server. Consult your application's documentation to learn how to enable TLS connections.
+
+## Applications that require certificate verification for TLS connectivity
+
+In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file to connect securely. The certificate to connect to an Azure Database for PostgreSQL server is located at https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem. Download the certificate file and save it to your preferred location.
+
+See the following links for certificates for servers in sovereign clouds: [Azure Government](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem), [Azure China](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt).
+
+### Connect using psql
+
+The following example shows how to connect to your PostgreSQL server using the psql command-line utility. Use the `sslmode=verify-full` connection string setting to enforce TLS/SSL certificate verification. Pass the local certificate file path to the `sslrootcert` parameter.
+
+The following command is an example of the psql connection string:
+
+```shell
+psql "sslmode=verify-full sslrootcert=BaltimoreCyberTrustRoot.crt host=mydemoserver.postgres.database.azure.com dbname=postgres user=myusern@mydemoserver"
+```
+
+> [!TIP]
+> Confirm that the value passed to `sslrootcert` matches the file path for the certificate you saved.
+
+## TLS enforcement in Azure Database for PostgreSQL Single server
+
+Azure Database for PostgreSQL - Single server supports encryption for clients connecting to your database server using Transport Layer Security (TLS). TLS is an industry standard protocol that ensures secure network connections between your database server and client applications, allowing you to adhere to compliance requirements.
+
+### TLS settings
+
+Azure Database for PostgreSQL single server provides the ability to enforce the TLS version for the client connections. To enforce the TLS version, use the **Minimum TLS version** option setting. The following values are allowed for this option setting:
+
+| Minimum TLS setting | Client TLS version supported |
+|:--------------------|-----------------------------:|
+| TLSEnforcementDisabled (default) | No TLS required |
+| TLS1_0 | TLS 1.0, TLS 1.1, TLS 1.2 and higher |
+| TLS1_1 | TLS 1.1, TLS 1.2 and higher |
+| TLS1_2 | TLS version 1.2 and higher |
+
+For example, setting the minimum TLS version to TLS 1.0 means your server will allow connections from clients using TLS 1.0, 1.1, and 1.2 and higher. Alternatively, setting it to 1.2 means that you only allow connections from clients using TLS 1.2 and higher, and all connections with TLS 1.0 and TLS 1.1 will be rejected.
+
+> [!Note]
+> By default, Azure Database for PostgreSQL does not enforce a minimum TLS version (the setting `TLSEnforcementDisabled`).
+>
+> Once you enforce a minimum TLS version, you cannot later disable minimum version enforcement.
+
+To learn how to set the TLS setting for your Azure Database for PostgreSQL Single server, refer to [How to configure TLS setting](how-to-tls-configurations.md).
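+
+As a sketch, assuming your Azure CLI version supports the `--minimal-tls-version` parameter for single server (the server names are hypothetical):
+
+```azurecli
+# Require TLS 1.2 or higher for client connections.
+az postgres server update \
+  --resource-group myresourcegroup \
+  --name mydemoserver \
+  --minimal-tls-version TLS1_2
+```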
+
+## Cipher support by Azure Database for PostgreSQL Single server
+
+As part of the SSL/TLS communication, the cipher suites are validated and only supported cipher suites are allowed to communicate to the database server. The cipher suite validation is controlled in the [gateway layer](concepts-connectivity-architecture.md#connectivity-architecture) and not explicitly on the node itself. If a client's cipher suite doesn't match one of the suites listed below, incoming client connections will be rejected.
+
+### Supported cipher suites
+
+* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
+* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
+
+## Next steps
+
+Review various application connectivity options in [Connection libraries for Azure Database for PostgreSQL](concepts-connection-libraries.md).
+
+- Learn how to [configure TLS](how-to-tls-configurations.md)
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-supported-versions.md
+
+ Title: Supported versions - Azure Database for PostgreSQL - Single Server
+description: Describes the supported Postgres major and minor versions in Azure Database for PostgreSQL - Single Server.
+ Last updated : 03/10/2022
+adobe-target: true
+
+# Supported PostgreSQL major versions
+
+Please see [Azure Database for PostgreSQL versioning policy](concepts-version-policy.md) for support policy details.
+
+Azure Database for PostgreSQL currently supports the following major versions:
+
+## PostgreSQL version 11
+The current minor release is 11.12. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/11/static/release-11-12.html) to learn more about improvements and fixes in this minor release.
+
+## PostgreSQL version 10
+The current minor release is 10.17. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/10/static/release-10-17.html) to learn more about improvements and fixes in this minor release.
+
+## PostgreSQL version 9.6 (retired)
+Aligning with the Postgres community's [versioning policy](https://www.postgresql.org/support/versioning/), Azure Database for PostgreSQL has retired PostgreSQL version 9.6 as of November 11, 2021. See [Azure Database for PostgreSQL versioning policy](concepts-version-policy.md) for more details and restrictions. If you're running this major version, upgrade to a higher version, preferably to PostgreSQL 11, at your earliest convenience.
+
+## PostgreSQL version 9.5 (retired)
+Aligning with the Postgres community's [versioning policy](https://www.postgresql.org/support/versioning/), Azure Database for PostgreSQL has retired PostgreSQL version 9.5 as of February 11, 2021. See [Azure Database for PostgreSQL versioning policy](concepts-version-policy.md) for more details and restrictions. If you're running this major version, upgrade to a higher version, preferably to PostgreSQL 11, at your earliest convenience.
+
+## Managing upgrades
+The PostgreSQL project regularly issues minor releases to fix reported bugs. Azure Database for PostgreSQL automatically patches servers with minor releases during the service's monthly deployments.
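+
+To check which minor version your server is currently running, you could run a quick query with psql; the server and user names are hypothetical:
+
+```bash
+# Prints the engine version, for example 11.12.
+psql -h mydemoserver.postgres.database.azure.com -U mylogin@mydemoserver -d postgres -c "SHOW server_version;"
+```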
+
+Automatic in-place upgrades for major versions are not supported. To upgrade to a higher major version, you can:
+ * Use one of the methods documented in [major version upgrades using dump and restore](./how-to-upgrade-using-dump-and-restore.md).
+ * Use [pg_dump and pg_restore](./how-to-migrate-using-dump-and-restore.md) to move a database to a server created with the new engine version.
+ * Use [Azure Database Migration service](../../dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md) for doing online upgrades.
+
+### Version syntax
+Before PostgreSQL version 10, the [PostgreSQL versioning policy](https://www.postgresql.org/support/versioning/) considered a _major version_ upgrade to be an increase in the first _or_ second number. For example, 9.5 to 9.6 was considered a _major_ version upgrade. As of version 10, only a change in the first number is considered a major version upgrade. For example, 10.0 to 10.1 is a _minor_ release upgrade. Version 10 to 11 is a _major_ version upgrade.
+
+## Next steps
+For information on supported PostgreSQL extensions, see [the extensions document](concepts-extensions.md).
postgresql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-version-policy.md
+
+ Title: Versioning policy - Azure Database for PostgreSQL - Single Server and Flexible Server (Preview)
+description: Describes the policy around Postgres major and minor versions in Azure Database for PostgreSQL - Single Server.
+ Last updated : 12/14/2021
+# Azure Database for PostgreSQL versioning policy
+
+This page describes the Azure Database for PostgreSQL versioning policy, and is applicable to these deployment modes:
+
+* Single Server
+* Flexible Server
+* Hyperscale (Citus)
+
+## Supported PostgreSQL versions
+
+Azure Database for PostgreSQL supports the following database versions.
+
+| Version | Single Server | Flexible Server | Hyperscale (Citus) |
+| --- | :-: | :-: | :-: |
+| PostgreSQL 14 | | | X |
+| PostgreSQL 13 | | X | X |
+| PostgreSQL 12 | | X | X |
+| PostgreSQL 11 | X | X | X |
+| PostgreSQL 10 | X | | |
+| *PostgreSQL 9.6 (retired)* | See [policy](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql) | | |
+| *PostgreSQL 9.5 (retired)* | See [policy](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql) | | |
+
+## Major version support
+Each major version of PostgreSQL will be supported by Azure Database for PostgreSQL from the date on which Azure begins supporting the version until the version is retired by the PostgreSQL community, as provided in the [PostgreSQL community versioning policy](https://www.postgresql.org/support/versioning/).
+
+## Minor version support
+Azure Database for PostgreSQL automatically performs minor version upgrades to the Azure preferred PostgreSQL version as part of periodic maintenance.
+
+## Major version retirement policy
+The table below provides the retirement details for PostgreSQL major versions. The dates follow the [PostgreSQL community versioning policy](https://www.postgresql.org/support/versioning/).
+
+| Version | What's New | Azure support start date | Retirement date|
+| --- | --- | --- | --- |
+| [PostgreSQL 9.5 (retired)](https://www.postgresql.org/about/news/postgresql-132-126-1111-1016-9621-and-9525-released-2165/)| [Features](https://www.postgresql.org/docs/9.5/release-9-5.html) | April 18, 2018 | February 11, 2021
+| [PostgreSQL 9.6 (retired)](https://www.postgresql.org/about/news/postgresql-96-released-1703/) | [Features](https://wiki.postgresql.org/wiki/NewIn96) | April 18, 2018 | November 11, 2021
+| [PostgreSQL 10](https://www.postgresql.org/about/news/postgresql-10-released-1786/) | [Features](https://wiki.postgresql.org/wiki/New_in_postgres_10) | June 4, 2018 | November 10, 2022
+| [PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/) | [Features](https://www.postgresql.org/docs/11/release-11.html) | July 24, 2019 | November 9, 2023
+| [PostgreSQL 12](https://www.postgresql.org/about/news/postgresql-12-released-1976/) | [Features](https://www.postgresql.org/docs/12/release-12.html) | Sept 22, 2020 | November 14, 2024
+| [PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/) | [Features](https://www.postgresql.org/docs/13/release-13.html) | May 25, 2021 | November 13, 2025
+| [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | October 1, 2021 | November 12, 2026
+
+## Retired PostgreSQL engine versions not supported in Azure Database for PostgreSQL
+
+You may continue to run the retired version in Azure Database for PostgreSQL. However, please note the following restrictions after the retirement date for each PostgreSQL database version:
+- As the community will not be releasing any further bug fixes or security fixes, Azure Database for PostgreSQL will not patch the retired database engine for any bugs or security issues or otherwise take security measures with regard to the retired database engine. You may experience security vulnerabilities or other issues as a result. However, Azure will continue to perform periodic maintenance and patching for the host, OS, containers, and any other service-related components.
+- If any support issue you may experience relates to the PostgreSQL engine itself, as the community no longer provides the patches, we may not be able to provide you with support. In such cases, you will have to upgrade your database to one of the supported versions.
+- You will not be able to create new database servers for the retired version. However, you will be able to perform point-in-time recoveries and create read replicas for your existing servers.
+- New service capabilities developed by Azure Database for PostgreSQL may only be available to supported database server versions.
+- Uptime SLAs will apply solely to Azure Database for PostgreSQL service-related issues and not to any downtime caused by database engine-related bugs.
+- In the extreme event of a serious threat to the service caused by the PostgreSQL database engine vulnerability identified in the retired database version, Azure may choose to stop your database server to secure the service. In such a case, you will be notified to upgrade the server before bringing the server online.
+
+## PostgreSQL version syntax
+Before PostgreSQL version 10, the [PostgreSQL versioning policy](https://www.postgresql.org/support/versioning/) considered a _major version_ upgrade to be an increase in the first _or_ second number. For example, 9.5 to 9.6 was considered a _major_ version upgrade. As of version 10, only a change in the first number is considered a major version upgrade. For example, 10.0 to 10.1 is a _minor_ release upgrade. Version 10 to 11 is a _major_ version upgrade.
+
+## Next steps
+- See Azure Database for PostgreSQL - Single Server [supported versions](./concepts-supported-versions.md)
+- See Azure Database for PostgreSQL - Flexible Server [supported versions](../flexible-server/concepts-supported-versions.md)
+- See Azure Database for PostgreSQL - Hyperscale (Citus) [supported versions](../hyperscale/reference-versions.md)
postgresql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-csharp.md
+
+ Title: 'Quickstart: Connect with C# - Azure Database for PostgreSQL - Single Server'
+description: "This quickstart provides a C# (.NET) code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server."
+ms.devlang: csharp
+ Last updated : 10/18/2020
+# Quickstart: Use .NET (C#) to connect and query data in Azure Database for PostgreSQL - Single Server
+
+This quickstart demonstrates how to connect to an Azure Database for PostgreSQL using a C# application. It shows how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you are familiar with developing using C#, and that you are new to working with Azure Database for PostgreSQL.
+
+## Prerequisites
+For this quickstart you need:
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- An Azure Database for PostgreSQL single server. Create one using the [Azure portal](./quickstart-create-server-database-portal.md) or the [Azure CLI](./quickstart-create-server-database-azure-cli.md) if you do not have one.
+- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.
+
+ |Action| Connectivity method|How-to guide|
+ |: |: |: |
+ | **Configure firewall rules** | Public | [Portal](./how-to-manage-firewall-using-portal.md) <br/> [CLI](quickstart-create-server-database-azure-cli.md#configure-a-server-based-firewall-rule)|
+ | **Configure Service Endpoint** | Public | [Portal](./how-to-manage-vnet-using-portal.md) <br/> [CLI](./how-to-manage-vnet-using-cli.md)|
+ | **Configure private link** | Private | [Portal](./how-to-configure-privatelink-portal.md) <br/> [CLI](./how-to-configure-privatelink-cli.md) |
+
+- Install the [.NET SDK](https://dotnet.microsoft.com/download) for your platform (Windows, Ubuntu Linux, or macOS).
+- Install [Visual Studio](https://www.visualstudio.com/downloads/) to build your project.
+- Install the [Npgsql](https://www.nuget.org/packages/Npgsql/) NuGet package in Visual Studio. A command-line alternative is sketched below.
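+
+If you prefer the command line to Visual Studio, a minimal project setup might look like the following (the project name is arbitrary):
+
+```bash
+# Create a console project and add the Npgsql driver.
+dotnet new console -n Driver
+cd Driver
+dotnet add package Npgsql
+```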
+
+## Get connection information
+Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials.
+
+1. Log in to the [Azure portal](https://portal.azure.com/).
+2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
+3. Click the server name.
+4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
+ :::image type="content" source="./media/connect-csharp/1-connection-string.png" alt-text="Azure Database for PostgreSQL server name":::
+
+## Step 1: Connect and insert data
+Use the following code to connect and load the data using **CREATE TABLE** and **INSERT INTO** SQL statements. The code uses the NpgsqlCommand class with these methods:
+- [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to the PostgreSQL database.
+- [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) to set the CommandText property.
+- [ExecuteNonQuery()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteNonQuery) to run the database commands.
+
+> [!IMPORTANT]
+> Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
+
+```csharp
+using System;
+using Npgsql;
+
+namespace Driver
+{
+ public class AzurePostgresCreate
+ {
+ // Obtain connection string information from the portal
+ //
+ private static string Host = "mydemoserver.postgres.database.azure.com";
+ private static string User = "mylogin@mydemoserver";
+ private static string DBname = "mypgsqldb";
+ private static string Password = "<server_admin_password>";
+ private static string Port = "5432";
+
+ static void Main(string[] args)
+ {
+ // Build connection string using parameters from portal
+ //
+ string connString =
+ String.Format(
+ "Server={0};Username={1};Database={2};Port={3};Password={4};SSLMode=Prefer",
+ Host,
+ User,
+ DBname,
+ Port,
+ Password);
+
+ using (var conn = new NpgsqlConnection(connString))
+
+ {
+ Console.Out.WriteLine("Opening connection");
+ conn.Open();
+
+ using (var command = new NpgsqlCommand("DROP TABLE IF EXISTS inventory", conn))
+ {
+ command.ExecuteNonQuery();
+ Console.Out.WriteLine("Finished dropping table (if existed)");
+
+ }
+
+ using (var command = new NpgsqlCommand("CREATE TABLE inventory(id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER)", conn))
+ {
+ command.ExecuteNonQuery();
+ Console.Out.WriteLine("Finished creating table");
+ }
+
+ using (var command = new NpgsqlCommand("INSERT INTO inventory (name, quantity) VALUES (@n1, @q1), (@n2, @q2), (@n3, @q3)", conn))
+ {
+ command.Parameters.AddWithValue("n1", "banana");
+ command.Parameters.AddWithValue("q1", 150);
+ command.Parameters.AddWithValue("n2", "orange");
+ command.Parameters.AddWithValue("q2", 154);
+ command.Parameters.AddWithValue("n3", "apple");
+ command.Parameters.AddWithValue("q3", 100);
+
+ int nRows = command.ExecuteNonQuery();
+ Console.Out.WriteLine(String.Format("Number of rows inserted={0}", nRows));
+ }
+ }
+
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
+[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
+
+## Step 2: Read data
+Use the following code to connect and read the data using a **SELECT** SQL statement. The code uses the NpgsqlCommand class with these methods:
+- [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to PostgreSQL.
+- [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) and [ExecuteReader()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteReader) to run the database commands.
+- [Read()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlDataReader.html#Npgsql_NpgsqlDataReader_Read) to advance to the record in the results.
+- [GetInt32()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlDataReader.html#Npgsql_NpgsqlDataReader_GetInt32_System_Int32_) and [GetString()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlDataReader.html#Npgsql_NpgsqlDataReader_GetString_System_Int32_) to parse the values in the record.
+
+> [!IMPORTANT]
+> Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
+
+```csharp
+using System;
+using Npgsql;
+
+namespace Driver
+{
+ public class AzurePostgresRead
+ {
+ // Obtain connection string information from the portal
+ //
+ private static string Host = "mydemoserver.postgres.database.azure.com";
+ private static string User = "mylogin@mydemoserver";
+ private static string DBname = "mypgsqldb";
+ private static string Password = "<server_admin_password>";
+ private static string Port = "5432";
+
+ static void Main(string[] args)
+ {
+ // Build connection string using parameters from portal
+ //
+ string connString =
+ String.Format(
+ "Server={0}; User Id={1}; Database={2}; Port={3}; Password={4};SSLMode=Prefer",
+ Host,
+ User,
+ DBname,
+ Port,
+ Password);
+
+ using (var conn = new NpgsqlConnection(connString))
+ {
+
+ Console.Out.WriteLine("Opening connection");
+ conn.Open();
+
+ using (var command = new NpgsqlCommand("SELECT * FROM inventory", conn))
+ {
+
+ var reader = command.ExecuteReader();
+ while (reader.Read())
+ {
+ Console.WriteLine(
+ string.Format(
+ "Reading from table=({0}, {1}, {2})",
+ reader.GetInt32(0).ToString(),
+ reader.GetString(1),
+ reader.GetInt32(2).ToString()
+ )
+ );
+ }
+ reader.Close();
+ }
+ }
+
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
+[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
+
+## Step 3: Update data
+Use the following code to connect and update the data using an **UPDATE** SQL statement. The code uses the NpgsqlCommand class with these methods:
+- [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to PostgreSQL.
+- [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) to set the CommandText property.
+- [ExecuteNonQuery()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteNonQuery) to run the database commands.
+
+> [!IMPORTANT]
+> Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
+
+```csharp
+using System;
+using Npgsql;
+
+namespace Driver
+{
+ public class AzurePostgresUpdate
+ {
+ // Obtain connection string information from the portal
+ //
+ private static string Host = "mydemoserver.postgres.database.azure.com";
+ private static string User = "mylogin@mydemoserver";
+ private static string DBname = "mypgsqldb";
+ private static string Password = "<server_admin_password>";
+ private static string Port = "5432";
+
+ static void Main(string[] args)
+ {
+ // Build connection string using parameters from portal
+ //
+ string connString =
+ String.Format(
+ "Server={0}; User Id={1}; Database={2}; Port={3}; Password={4};SSLMode=Prefer",
+ Host,
+ User,
+ DBname,
+ Port,
+ Password);
+
+ using (var conn = new NpgsqlConnection(connString))
+ {
+
+ Console.Out.WriteLine("Opening connection");
+ conn.Open();
+
+ using (var command = new NpgsqlCommand("UPDATE inventory SET quantity = @q WHERE name = @n", conn))
+ {
+ command.Parameters.AddWithValue("n", "banana");
+ command.Parameters.AddWithValue("q", 200);
+ int nRows = command.ExecuteNonQuery();
+ Console.Out.WriteLine(String.Format("Number of rows updated={0}", nRows));
+ }
+ }
+
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+
+```
+
+[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
+
+## Step 4: Delete data
+Use the following code to connect and delete data using a **DELETE** SQL statement.
+
+The code uses NpgsqlCommand class with method [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to the PostgreSQL database. Then, the code uses the [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) method, sets the CommandText property, and calls the method [ExecuteNonQuery()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteNonQuery) to run the database commands.
+
+> [!IMPORTANT]
+> Replace the Host, DBname, User, and Password parameters with the values that you specified when you created the server and database.
+
+```csharp
+using System;
+using Npgsql;
+
+namespace Driver
+{
+ public class AzurePostgresDelete
+ {
+ // Obtain connection string information from the portal
+ //
+ private static string Host = "mydemoserver.postgres.database.azure.com";
+ private static string User = "mylogin@mydemoserver";
+ private static string DBname = "mypgsqldb";
+ private static string Password = "<server_admin_password>";
+ private static string Port = "5432";
+
+ static void Main(string[] args)
+ {
+ // Build connection string using parameters from portal
+ //
+ string connString =
+ String.Format(
+ "Server={0}; User Id={1}; Database={2}; Port={3}; Password={4};SSLMode=Prefer",
+ Host,
+ User,
+ DBname,
+ Port,
+ Password);
+
+ using (var conn = new NpgsqlConnection(connString))
+ {
+ Console.Out.WriteLine("Opening connection");
+ conn.Open();
+
+ using (var command = new NpgsqlCommand("DELETE FROM inventory WHERE name = @n", conn))
+ {
+ command.Parameters.AddWithValue("n", "orange");
+
+ int nRows = command.ExecuteNonQuery();
+ Console.Out.WriteLine(String.Format("Number of rows deleted={0}", nRows));
+ }
+ }
+
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+
+```
+
+## Clean up resources
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Manage Azure Database for PostgreSQL server using Portal](./how-to-create-manage-server-portal.md)<br/>
+
+> [!div class="nextstepaction"]
+> [Manage Azure Database for PostgreSQL server using CLI](./how-to-manage-server-cli.md)
+
+[Cannot find what you are looking for? Let us know.](https://aka.ms/postgres-doc-feedback)
postgresql Connect Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-go.md
+
+ Title: 'Quickstart: Connect with Go - Azure Database for PostgreSQL - Single Server'
+description: This quickstart provides a Go programming language sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server.
+ms.devlang: golang
+Last updated: 5/6/2019
+# Quickstart: Use Go language to connect and query data in Azure Database for PostgreSQL - Single Server
+
+This quickstart demonstrates how to connect to an Azure Database for PostgreSQL using code written in the [Go](https://go.dev/) language (golang). It shows how to use SQL statements to query, insert, update, and delete data in the database. This article assumes you are familiar with development using Go, but that you are new to working with Azure Database for PostgreSQL.
+
+## Prerequisites
+This quickstart uses the resources created in either of these guides as a starting point:
+- [Create DB - Portal](quickstart-create-server-database-portal.md)
+- [Create DB - Azure CLI](quickstart-create-server-database-azure-cli.md)
+
+## Install Go and pq connector
+Install [Go](https://go.dev/doc/install) and the [Pure Go Postgres driver (pq)](https://github.com/lib/pq) on your own machine. Depending on your platform, follow the appropriate steps:
+
+### Windows
+1. [Download](https://go.dev/dl/) and install Go for Microsoft Windows according to the [installation instructions](https://go.dev/doc/install).
+2. Launch the command prompt from the start menu.
+3. Make a folder for your project, such as `mkdir %USERPROFILE%\go\src\postgresqlgo`.
+4. Change directory into the project folder, such as `cd %USERPROFILE%\go\src\postgresqlgo`.
+5. Set the environment variable for GOPATH to point to the source code directory. `set GOPATH=%USERPROFILE%\go`.
+6. Install the [Pure Go Postgres driver (pq)](https://github.com/lib/pq) by running the `go get github.com/lib/pq` command.
+
+ In summary, install Go, then run these commands in the command prompt:
+ ```cmd
+ mkdir %USERPROFILE%\go\src\postgresqlgo
+ cd %USERPROFILE%\go\src\postgresqlgo
+ set GOPATH=%USERPROFILE%\go
+ go get github.com/lib/pq
+ ```
+
+### Linux (Ubuntu)
+1. Launch the Bash shell.
+2. Install Go by running `sudo apt-get install golang-go`.
+3. Make a folder for your project in your home directory, such as `mkdir -p ~/go/src/postgresqlgo/`.
+4. Change directory into the folder, such as `cd ~/go/src/postgresqlgo/`.
+5. Set the GOPATH environment variable to point to a valid source directory, such as your current home directory's go folder. At the bash shell, run `export GOPATH=~/go` to add the go directory as the GOPATH for the current shell session.
+6. Install the [Pure Go Postgres driver (pq)](https://github.com/lib/pq) by running the `go get github.com/lib/pq` command.
+
+ In summary, run these bash commands:
+ ```bash
+ sudo apt-get install golang-go
+ mkdir -p ~/go/src/postgresqlgo/
+ cd ~/go/src/postgresqlgo/
+ export GOPATH=~/go/
+ go get github.com/lib/pq
+ ```
+
+### Apple macOS
+1. Download and install Go according to the [installation instructions](https://go.dev/doc/install) matching your platform.
+2. Launch the Bash shell.
+3. Make a folder for your project in your home directory, such as `mkdir -p ~/go/src/postgresqlgo/`.
+4. Change directory into the folder, such as `cd ~/go/src/postgresqlgo/`.
+5. Set the GOPATH environment variable to point to a valid source directory, such as your current home directory's go folder. At the bash shell, run `export GOPATH=~/go` to add the go directory as the GOPATH for the current shell session.
+6. Install the [Pure Go Postgres driver (pq)](https://github.com/lib/pq) by running the `go get github.com/lib/pq` command.
+
+ In summary, install Go, then run these bash commands:
+ ```bash
+ mkdir -p ~/go/src/postgresqlgo/
+ cd ~/go/src/postgresqlgo/
+ export GOPATH=~/go/
+ go get github.com/lib/pq
+ ```
+
+## Get connection information
+Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials.
+
+1. Log in to the [Azure portal](https://portal.azure.com/).
+2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
+3. Click the server name.
+4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
+ :::image type="content" source="./media/connect-go/1-connection-string.png" alt-text="Azure Database for PostgreSQL server name":::
+
+## Build and run Go code
+1. To write Golang code, you can use a plain text editor, such as Notepad in Microsoft Windows, [vi](https://manpages.ubuntu.com/manpages/xenial/man1/nvi.1.html#contenttoc5) or [Nano](https://www.nano-editor.org/) in Ubuntu, or TextEdit in macOS. If you prefer a richer Integrated Development Environment (IDE), try [GoLand](https://www.jetbrains.com/go/) by JetBrains, [Visual Studio Code](https://code.visualstudio.com/) by Microsoft, or [Atom](https://atom.io/).
+2. Paste the Golang code from the following sections into text files, and save into your project folder with file extension \*.go, such as Windows path `%USERPROFILE%\go\src\postgresqlgo\createtable.go` or Linux path `~/go/src/postgresqlgo/createtable.go`.
+3. Locate the `HOST`, `DATABASE`, `USER`, and `PASSWORD` constants in the code, and replace the example values with your own values.
+4. Launch the command prompt or bash shell. Change directory into your project folder. For example, on Windows `cd %USERPROFILE%\go\src\postgresqlgo\`. On Linux `cd ~/go/src/postgresqlgo/`. Some of the IDE environments mentioned offer debug and runtime capabilities without requiring shell commands.
+5. Run the code by typing the command `go run createtable.go` to compile and run the application.
+6. Alternatively, to build the code into a native application, run `go build createtable.go`, then launch `createtable.exe` (on Windows) or `./createtable` (on Linux or macOS) to run the application, as summarized below.
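+
+   In summary, assuming the source file from step 2 is named `createtable.go`, the commands look like this (Linux/macOS shell shown; use the Windows equivalents from the earlier steps):
+   ```bash
+   cd ~/go/src/postgresqlgo/
+   # Compile and run in one step:
+   go run createtable.go
+   # Or build a native binary first, then run it:
+   go build createtable.go
+   ./createtable
+   ```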
+
+## Connect and create a table
+Use the following code to connect and create a table by using a **CREATE TABLE** SQL statement, followed by **INSERT INTO** SQL statements to add rows into the table.
+
+The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [pq package](https://godoc.org/github.com/lib/pq) as a driver to communicate with the PostgreSQL server, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line.
+
+The code calls the [sql.Open()](https://godoc.org/github.com/lib/pq#Open) method to connect to the Azure Database for PostgreSQL database, and checks the connection using the [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping) method. A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method several times to run several SQL commands. Each time, a custom checkError() method checks whether an error occurred and panics to exit if one did.
+
+Replace the `HOST`, `DATABASE`, `USER`, and `PASSWORD` parameters with your own values.
+
+```go
+package main
+
+import (
+ "database/sql"
+ "fmt"
+ _ "github.com/lib/pq"
+)
+
+const (
+ // Initialize connection constants.
+ HOST = "mydemoserver.postgres.database.azure.com"
+ DATABASE = "mypgsqldb"
+ USER = "mylogin@mydemoserver"
+ PASSWORD = "<server_admin_password>"
+)
+
+func checkError(err error) {
+ if err != nil {
+ panic(err)
+ }
+}
+
+func main() {
+ // Initialize connection string.
+ var connectionString string = fmt.Sprintf("host=%s user=%s password=%s dbname=%s sslmode=require", HOST, USER, PASSWORD, DATABASE)
+
+ // Initialize connection object.
+ db, err := sql.Open("postgres", connectionString)
+ checkError(err)
+
+ err = db.Ping()
+ checkError(err)
+ fmt.Println("Successfully created connection to database")
+
+ // Drop previous table of same name if one exists.
+ _, err = db.Exec("DROP TABLE IF EXISTS inventory;")
+ checkError(err)
+ fmt.Println("Finished dropping table (if existed)")
+
+ // Create table.
+ _, err = db.Exec("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);")
+ checkError(err)
+ fmt.Println("Finished creating table")
+
+ // Insert some data into table.
+ sql_statement := "INSERT INTO inventory (name, quantity) VALUES ($1, $2);"
+ _, err = db.Exec(sql_statement, "banana", 150)
+ checkError(err)
+ _, err = db.Exec(sql_statement, "orange", 154)
+ checkError(err)
+ _, err = db.Exec(sql_statement, "apple", 100)
+ checkError(err)
+ fmt.Println("Inserted 3 rows of data")
+}
+```
+
+## Read data
+Use the following code to connect and read the data using a **SELECT** SQL statement.
+
+The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [pq package](https://godoc.org/github.com/lib/pq) as a driver to communicate with the PostgreSQL server, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line.
+
+The code calls the [sql.Open()](https://godoc.org/github.com/lib/pq#Open) method to connect to the Azure Database for PostgreSQL database, and checks the connection using the [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping) method. A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The select query is run by calling the [db.Query()](https://go.dev/pkg/database/sql/#DB.Query) method, and the resulting rows are kept in a variable of type [rows](https://go.dev/pkg/database/sql/#Rows). The code reads the column data values in the current row using the [rows.Scan()](https://go.dev/pkg/database/sql/#Rows.Scan) method and loops over the rows using the [rows.Next()](https://go.dev/pkg/database/sql/#Rows.Next) iterator until no more rows exist. Each row's column values are printed to the console. Each time, a custom checkError() method checks whether an error occurred and panics to exit if one did.
+
+Replace the `HOST`, `DATABASE`, `USER`, and `PASSWORD` parameters with your own values.
+
+```go
+package main
+
+import (
+ "database/sql"
+ "fmt"
+ _ "github.com/lib/pq"
+)
+
+const (
+ // Initialize connection constants.
+ HOST = "mydemoserver.postgres.database.azure.com"
+ DATABASE = "mypgsqldb"
+ USER = "mylogin@mydemoserver"
+ PASSWORD = "<server_admin_password>"
+)
+
+func checkError(err error) {
+ if err != nil {
+ panic(err)
+ }
+}
+
+func main() {
+
+ // Initialize connection string.
+ var connectionString string = fmt.Sprintf("host=%s user=%s password=%s dbname=%s sslmode=require", HOST, USER, PASSWORD, DATABASE)
+
+ // Initialize connection object.
+ db, err := sql.Open("postgres", connectionString)
+ checkError(err)
+
+ err = db.Ping()
+ checkError(err)
+ fmt.Println("Successfully created connection to database")
+
+ // Read rows from table.
+ var id int
+ var name string
+ var quantity int
+
+ sql_statement := "SELECT * from inventory;"
+ rows, err := db.Query(sql_statement)
+ checkError(err)
+ defer rows.Close()
+
+ for rows.Next() {
+ switch err := rows.Scan(&id, &name, &quantity); err {
+ case sql.ErrNoRows:
+ fmt.Println("No rows were returned")
+ case nil:
+ fmt.Printf("Data row = (%d, %s, %d)\n", id, name, quantity)
+ default:
+ checkError(err)
+ }
+ }
+}
+```
+
+## Update data
+Use the following code to connect and update the data using an **UPDATE** SQL statement.
+
+The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [pq package](https://godoc.org/github.com/lib/pq) as a driver to communicate with the PostgreSQL server, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line.
+
+The code calls the [sql.Open()](https://godoc.org/github.com/lib/pq#Open) method to connect to the Azure Database for PostgreSQL database, and checks the connection using the [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping) method. A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method to run the SQL statement that updates the table. A custom checkError() method checks whether an error occurred and panics to exit if one did.
+
+Replace the `HOST`, `DATABASE`, `USER`, and `PASSWORD` parameters with your own values.
+```go
+package main
+
+import (
+ "database/sql"
+ _ "github.com/lib/pq"
+ "fmt"
+)
+
+const (
+ // Initialize connection constants.
+ HOST = "mydemoserver.postgres.database.azure.com"
+ DATABASE = "mypgsqldb"
+ USER = "mylogin@mydemoserver"
+ PASSWORD = "<server_admin_password>"
+)
+
+func checkError(err error) {
+ if err != nil {
+ panic(err)
+ }
+}
+
+func main() {
+
+ // Initialize connection string.
+ var connectionString string =
+ fmt.Sprintf("host=%s user=%s password=%s dbname=%s sslmode=require", HOST, USER, PASSWORD, DATABASE)
+
+ // Initialize connection object.
+ db, err := sql.Open("postgres", connectionString)
+ checkError(err)
+
+ err = db.Ping()
+ checkError(err)
+ fmt.Println("Successfully created connection to database")
+
+ // Modify some data in table.
+ sql_statement := "UPDATE inventory SET quantity = $2 WHERE name = $1;"
+ _, err = db.Exec(sql_statement, "banana", 200)
+ checkError(err)
+ fmt.Println("Updated 1 row of data")
+}
+```
+
+## Delete data
+Use the following code to connect and delete the data using a **DELETE** SQL statement.
+
+The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [pq package](https://godoc.org/github.com/lib/pq) as a driver to communicate with the PostgreSQL server, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line.
+
+The code calls the [sql.Open()](https://godoc.org/github.com/lib/pq#Open) method to connect to the Azure Database for PostgreSQL database, and checks the connection using the [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping) method. A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method to run the SQL statement that deletes a row from the table. A custom checkError() method checks whether an error occurred and panics to exit if one did.
+
+Replace the `HOST`, `DATABASE`, `USER`, and `PASSWORD` parameters with your own values.
+```go
+package main
+
+import (
+ "database/sql"
+ _ "github.com/lib/pq"
+ "fmt"
+)
+
+const (
+ // Initialize connection constants.
+ HOST = "mydemoserver.postgres.database.azure.com"
+ DATABASE = "mypgsqldb"
+ USER = "mylogin@mydemoserver"
+ PASSWORD = "<server_admin_password>"
+)
+
+func checkError(err error) {
+ if err != nil {
+ panic(err)
+ }
+}
+
+func main() {
+
+ // Initialize connection string.
+ var connectionString string =
+ fmt.Sprintf("host=%s user=%s password=%s dbname=%s sslmode=require", HOST, USER, PASSWORD, DATABASE)
+
+ // Initialize connection object.
+ db, err := sql.Open("postgres", connectionString)
+ checkError(err)
+
+ err = db.Ping()
+ checkError(err)
+ fmt.Println("Successfully created connection to database")
+
+ // Delete some data from table.
+ sql_statement := "DELETE FROM inventory WHERE name = $1;"
+ _, err = db.Exec(sql_statement, "orange")
+ checkError(err)
+ fmt.Println("Deleted 1 row of data")
+}
+```
+
+## Clean up resources
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Migrate your database using Export and Import](./how-to-migrate-using-export-and-import.md)
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-java.md
+
+ Title: 'Quickstart: Use Java and JDBC with Azure Database for PostgreSQL'
+description: In this quickstart, you learn how to use Java and JDBC with an Azure Database for PostgreSQL.
+ms.devlang: java
+Last updated: 08/17/2020
+# Quickstart: Use Java and JDBC with Azure Database for PostgreSQL
+
+This topic demonstrates creating a sample application that uses Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to store and retrieve information in [Azure Database for PostgreSQL](./index.yml).
+
+JDBC is the standard Java API to connect to traditional relational databases.
+
+## Prerequisites
+
+- An Azure account. If you don't have one, [get a free trial](https://azure.microsoft.com/free/).
+- [Azure Cloud Shell](../../cloud-shell/quickstart.md) or [Azure CLI](/cli/azure/install-azure-cli). We recommend Azure Cloud Shell so you'll be logged in automatically and have access to all the tools you'll need.
+- A supported [Java Development Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8 (included in Azure Cloud Shell).
+- The [Apache Maven](https://maven.apache.org/) build tool.
+
+## Prepare the working environment
+
+We are going to use environment variables to limit typing mistakes, and to make it easier for you to customize the following configuration for your specific needs.
+
+Set up those environment variables by using the following commands:
+
+```bash
+AZ_RESOURCE_GROUP=database-workshop
+AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
+AZ_LOCATION=<YOUR_AZURE_REGION>
+AZ_POSTGRESQL_USERNAME=demo
+AZ_POSTGRESQL_PASSWORD=<YOUR_POSTGRESQL_PASSWORD>
+AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
+```
+
+Replace the placeholders with the following values, which are used throughout this article:
+
+- `<YOUR_DATABASE_NAME>`: The name of your PostgreSQL server. It should be unique across Azure.
+- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can have the full list of available regions by entering `az account list-locations`.
+- `<YOUR_POSTGRESQL_PASSWORD>`: The password of your PostgreSQL database server. That password should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).
+- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Java application. One convenient way to find it is to point your browser to [whatismyip.akamai.com](http://whatismyip.akamai.com/). An illustrative example follows this list.
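+
+For example (illustrative values only; choose your own unique server name, region, password, and IP address):
+
+```bash
+AZ_RESOURCE_GROUP=database-workshop
+AZ_DATABASE_NAME=mydemoserver12345        # hypothetical name; must be unique across Azure
+AZ_LOCATION=eastus
+AZ_POSTGRESQL_USERNAME=demo
+AZ_POSTGRESQL_PASSWORD='P@ssw0rd-1234'    # illustrative only; use a strong password of your own
+AZ_LOCAL_IP_ADDRESS=203.0.113.42          # example documentation address; use your real public IP
+```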
+
+Next, create a resource group by using the following command:
+
+```azurecli
+az group create \
+ --name $AZ_RESOURCE_GROUP \
+ --location $AZ_LOCATION \
+ | jq
+```
+
+> [!NOTE]
+> We use the `jq` utility to display JSON data and make it more readable. This utility is installed by default on [Azure Cloud Shell](https://shell.azure.com/). If you don't like that utility, you can safely remove the `| jq` part of all the commands we'll use.
+
+## Create an Azure Database for PostgreSQL instance
+
+The first thing we'll create is a managed PostgreSQL server.
+
+> [!NOTE]
+> You can read more detailed information about creating PostgreSQL servers in [Create an Azure Database for PostgreSQL server by using the Azure portal](./quickstart-create-server-database-portal.md).
+
+In [Azure Cloud Shell](https://shell.azure.com/), run the following command:
+
+```azurecli
+az postgres server create \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --name $AZ_DATABASE_NAME \
+ --location $AZ_LOCATION \
+ --sku-name B_Gen5_1 \
+ --storage-size 5120 \
+ --admin-user $AZ_POSTGRESQL_USERNAME \
+ --admin-password $AZ_POSTGRESQL_PASSWORD \
+ | jq
+```
+
+This command creates a small PostgreSQL server.
+
+### Configure a firewall rule for your PostgreSQL server
+
+Azure Database for PostgreSQL instances are secured by default. They have a firewall that doesn't allow any incoming connection. To be able to use your database, you need to add a firewall rule that will allow the local IP address to access the database server.
+
+Because you configured your local IP address at the beginning of this article, you can open the server's firewall by running the following command:
+
+```azurecli
+az postgres server firewall-rule create \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --name $AZ_DATABASE_NAME-database-allow-local-ip \
+ --server $AZ_DATABASE_NAME \
+ --start-ip-address $AZ_LOCAL_IP_ADDRESS \
+ --end-ip-address $AZ_LOCAL_IP_ADDRESS \
+ | jq
+```
+
+### Configure a PostgreSQL database
+
+The PostgreSQL server that you created earlier is empty. It doesn't have any database that you can use with the Java application. Create a new database called `demo` by using the following command:
+
+```azurecli
+az postgres db create \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --name demo \
+ --server-name $AZ_DATABASE_NAME \
+ | jq
+```
+
+### Create a new Java project
+
+Using your favorite IDE, create a new Java project, and add a `pom.xml` file in its root directory:
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+ <groupId>com.example</groupId>
+ <artifactId>demo</artifactId>
+ <version>0.0.1-SNAPSHOT</version>
+ <name>demo</name>
+
+ <properties>
+ <java.version>1.8</java.version>
+ <maven.compiler.source>1.8</maven.compiler.source>
+ <maven.compiler.target>1.8</maven.compiler.target>
+ </properties>
+
+ <dependencies>
+ <dependency>
+ <groupId>org.postgresql</groupId>
+ <artifactId>postgresql</artifactId>
+ <version>42.2.12</version>
+ </dependency>
+ </dependencies>
+</project>
+```
+
+This file is an [Apache Maven](https://maven.apache.org/) project descriptor (*pom.xml*) that configures our project to use:
+
+- Java 8
+- A recent PostgreSQL driver for Java
+
+### Prepare a configuration file to connect to Azure Database for PostgreSQL
+
+Create a *src/main/resources/application.properties* file, and add:
+
+```properties
+url=jdbc:postgresql://$AZ_DATABASE_NAME.postgres.database.azure.com:5432/demo?ssl=true&sslmode=require
+user=demo@$AZ_DATABASE_NAME
+password=$AZ_POSTGRESQL_PASSWORD
+```
+
+- Replace the two `$AZ_DATABASE_NAME` variables with the value that you configured at the beginning of this article.
+- Replace the `$AZ_POSTGRESQL_PASSWORD` variable with the value that you configured at the beginning of this article.
+
+> [!NOTE]
+> We append `?ssl=true&sslmode=require` to the configuration property `url`, to tell the JDBC driver to use TLS ([Transport Layer Security](https://en.wikipedia.org/wiki/Transport_Layer_Security)) when connecting to the database. It is mandatory to use TLS with Azure Database for PostgreSQL, and it is a good security practice.
+
+### Create an SQL file to generate the database schema
+
+We will use a *src/main/resources/schema.sql* file to create the database schema. Create that file with the following content:
+
+```sql
+DROP TABLE IF EXISTS todo;
+CREATE TABLE todo (id SERIAL PRIMARY KEY, description VARCHAR(255), details VARCHAR(4096), done BOOLEAN);
+```
+
+## Code the application
+
+### Connect to the database
+
+Next, add the Java code that will use JDBC to store and retrieve data from your PostgreSQL server.
+
+Create a *src/main/java/DemoApplication.java* file that contains:
+
+```java
+package com.example.demo;
+
+import java.sql.*;
+import java.util.*;
+import java.util.logging.Logger;
+
+public class DemoApplication {
+
+ private static final Logger log;
+
+ static {
+ System.setProperty("java.util.logging.SimpleFormatter.format", "[%4$-7s] %5$s %n");
+        log = Logger.getLogger(DemoApplication.class.getName());
+ }
+
+ public static void main(String[] args) throws Exception {
+ log.info("Loading application properties");
+ Properties properties = new Properties();
+ properties.load(DemoApplication.class.getClassLoader().getResourceAsStream("application.properties"));
+
+ log.info("Connecting to the database");
+ Connection connection = DriverManager.getConnection(properties.getProperty("url"), properties);
+ log.info("Database connection test: " + connection.getCatalog());
+
+ log.info("Create database schema");
+ Scanner scanner = new Scanner(DemoApplication.class.getClassLoader().getResourceAsStream("schema.sql"));
+ Statement statement = connection.createStatement();
+ while (scanner.hasNextLine()) {
+ statement.execute(scanner.nextLine());
+ }
+
+ /*
+ Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
+ insertData(todo, connection);
+ todo = readData(connection);
+ todo.setDetails("congratulations, you have updated data!");
+ updateData(todo, connection);
+ deleteData(todo, connection);
+ */
+
+ log.info("Closing database connection");
+ connection.close();
+ }
+}
+```
+
+This Java code will use the *application.properties* and the *schema.sql* files that we created earlier, in order to connect to the PostgreSQL server and create a schema that will store our data.
+
+In this file, you can see that we commented out the method calls that insert, read, update, and delete data: we will code those methods in the rest of this article, and you will be able to uncomment them one after the other.
+
+> [!NOTE]
+> The database credentials are stored in the *user* and *password* properties of the *application.properties* file. Those credentials are used when executing `DriverManager.getConnection(properties.getProperty("url"), properties);`, as the properties file is passed as an argument.
+
+You can now execute this main class with your favorite tool:
+
+- Using your IDE, you should be able to right-click on the *DemoApplication* class and execute it.
+- Using Maven, you can run the application by executing: `mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication"`.
+
+The application should connect to the Azure Database for PostgreSQL, create a database schema, and then close the connection, as you should see in the console logs:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Closing database connection
+```
+
+### Create a domain class
+
+Create a new `Todo` Java class, next to the `DemoApplication` class, and add the following code:
+
+```java
+package com.example.demo;
+
+public class Todo {
+
+ private Long id;
+ private String description;
+ private String details;
+ private boolean done;
+
+ public Todo() {
+ }
+
+ public Todo(Long id, String description, String details, boolean done) {
+ this.id = id;
+ this.description = description;
+ this.details = details;
+ this.done = done;
+ }
+
+ public Long getId() {
+ return id;
+ }
+
+ public void setId(Long id) {
+ this.id = id;
+ }
+
+ public String getDescription() {
+ return description;
+ }
+
+ public void setDescription(String description) {
+ this.description = description;
+ }
+
+ public String getDetails() {
+ return details;
+ }
+
+ public void setDetails(String details) {
+ this.details = details;
+ }
+
+ public boolean isDone() {
+ return done;
+ }
+
+ public void setDone(boolean done) {
+ this.done = done;
+ }
+
+ @Override
+ public String toString() {
+ return "Todo{" +
+ "id=" + id +
+ ", description='" + description + '\'' +
+ ", details='" + details + '\'' +
+ ", done=" + done +
+ '}';
+ }
+}
+```
+
+This class is a domain model mapped to the `todo` table that you created when executing the *schema.sql* script.
+
+### Insert data into Azure Database for PostgreSQL
+
+In the *src/main/java/DemoApplication.java* file, after the main method, add the following method to insert data into the database:
+
+```java
+private static void insertData(Todo todo, Connection connection) throws SQLException {
+ log.info("Insert data");
+ PreparedStatement insertStatement = connection
+ .prepareStatement("INSERT INTO todo (id, description, details, done) VALUES (?, ?, ?, ?);");
+
+ insertStatement.setLong(1, todo.getId());
+ insertStatement.setString(2, todo.getDescription());
+ insertStatement.setString(3, todo.getDetails());
+ insertStatement.setBoolean(4, todo.isDone());
+ insertStatement.executeUpdate();
+}
+```
+
+You can now uncomment the two following lines in the `main` method:
+
+```java
+Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
+insertData(todo, connection);
+```
+
+Executing the main class should now produce the following output:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Insert data
+[INFO ] Closing database connection
+```
+
+### Reading data from Azure Database for PostgreSQL
+
+Let's read the data previously inserted, to validate that our code works correctly.
+
+In the *src/main/java/DemoApplication.java* file, after the `insertData` method, add the following method to read data from the database:
+
+```java
+private static Todo readData(Connection connection) throws SQLException {
+ log.info("Read data");
+ PreparedStatement readStatement = connection.prepareStatement("SELECT * FROM todo;");
+ ResultSet resultSet = readStatement.executeQuery();
+ if (!resultSet.next()) {
+ log.info("There is no data in the database!");
+ return null;
+ }
+ Todo todo = new Todo();
+ todo.setId(resultSet.getLong("id"));
+ todo.setDescription(resultSet.getString("description"));
+ todo.setDetails(resultSet.getString("details"));
+ todo.setDone(resultSet.getBoolean("done"));
+ log.info("Data read from the database: " + todo.toString());
+ return todo;
+}
+```
+
+You can now uncomment the following line in the `main` method:
+
+```java
+todo = readData(connection);
+```
+
+Executing the main class should now produce the following output:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
+[INFO ] Closing database connection
+```
+
+### Updating data in Azure Database for PostgreSQL
+
+Let's update the data we previously inserted.
+
+Still in the *src/main/java/DemoApplication.java* file, after the `readData` method, add the following method to update data inside the database:
+
+```java
+private static void updateData(Todo todo, Connection connection) throws SQLException {
+ log.info("Update data");
+ PreparedStatement updateStatement = connection
+ .prepareStatement("UPDATE todo SET description = ?, details = ?, done = ? WHERE id = ?;");
+
+ updateStatement.setString(1, todo.getDescription());
+ updateStatement.setString(2, todo.getDetails());
+ updateStatement.setBoolean(3, todo.isDone());
+ updateStatement.setLong(4, todo.getId());
+ updateStatement.executeUpdate();
+ readData(connection);
+}
+```
+
+You can now uncomment the two following lines in the `main` method:
+
+```java
+todo.setDetails("congratulations, you have updated data!");
+updateData(todo, connection);
+```
+
+Executing the main class should now produce the following output:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
+[INFO ] Update data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true}
+[INFO ] Closing database connection
+```
+
+### Deleting data in Azure Database for PostgreSQL
+
+Finally, let's delete the data we previously inserted.
+
+Still in the *src/main/java/DemoApplication.java* file, after the `updateData` method, add the following method to delete data inside the database:
+
+```java
+private static void deleteData(Todo todo, Connection connection) throws SQLException {
+ log.info("Delete data");
+ PreparedStatement deleteStatement = connection.prepareStatement("DELETE FROM todo WHERE id = ?;");
+ deleteStatement.setLong(1, todo.getId());
+ deleteStatement.executeUpdate();
+ readData(connection);
+}
+```
+
+You can now uncomment the following line in the `main` method:
+
+```java
+deleteData(todo, connection);
+```
+
+Executing the main class should now produce the following output:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
+[INFO ] Update data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true}
+[INFO ] Delete data
+[INFO ] Read data
+[INFO ] There is no data in the database!
+[INFO ] Closing database connection
+```
+
+## Clean up resources
+
+Congratulations! You've created a Java application that uses JDBC to store and retrieve data from Azure Database for PostgreSQL.
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Migrate your database using Export and Import](./how-to-migrate-using-export-and-import.md)
postgresql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-nodejs.md
+
+ Title: 'Quickstart: Use Node.js to connect to Azure Database for PostgreSQL - Single Server'
+description: This quickstart provides a Node.js code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server.
+ms.devlang: javascript
+Last updated: 5/6/2019
+# Quickstart: Use Node.js to connect and query data in Azure Database for PostgreSQL - Single Server
+
+In this quickstart, you connect to an Azure Database for PostgreSQL using a Node.js application. It shows how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you are familiar with developing using Node.js, and are new to working with Azure Database for PostgreSQL.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+
+- Completion of [Quickstart: Create an Azure Database for PostgreSQL server in the Azure portal](quickstart-create-server-database-portal.md) or [Quickstart: Create an Azure Database for PostgreSQL using the Azure CLI](quickstart-create-server-database-azure-cli.md).
+
+- [Node.js](https://nodejs.org)
+
+## Install pg client
+Install [pg](https://www.npmjs.com/package/pg), which is a PostgreSQL client for Node.js.
+
+To do so, run the node package manager (npm) for JavaScript from your command line to install the pg client.
+```bash
+npm install pg
+```
+
+Verify the installation by listing the packages installed.
+```bash
+npm list
+```
+
+## Get connection information
+Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials.
+
+1. In the [Azure portal](https://portal.azure.com/), search for and select the server you have created (such as **mydemoserver**).
+
+1. From the server's **Overview** panel, make a note of the **Server name** and **Admin username**. If you forget your password, you can also reset the password from this panel.
+
+ :::image type="content" source="./media/connect-nodejs/server-details-azure-database-postgresql.png" alt-text="Azure Database for PostgreSQL connection string":::
+
+## Running the JavaScript code in Node.js
+You can launch Node.js from the Bash shell, Terminal, or Windows Command Prompt by typing `node`, and then run the example JavaScript code interactively by copying and pasting it at the prompt. Alternatively, you can save the JavaScript code into a text file and run it by launching `node filename.js` with the file name as a parameter, as in the example below.
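+
+For example, assuming you save the first code sample below as `createtable.js` (a file name chosen here for illustration), you run it like this:
+
+```bash
+node createtable.js
+```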
+
+## Connect, create table, and insert data
+Use the following code to connect and load the data using **CREATE TABLE** and **INSERT INTO** SQL statements.
+The [pg.Client](https://github.com/brianc/node-postgres/wiki/Client) object is used to interface with the PostgreSQL server. The [pg.Client.connect()](https://github.com/brianc/node-postgres/wiki/Client#method-connect) function is used to establish the connection to the server. The [pg.Client.query()](https://github.com/brianc/node-postgres/wiki/Query) function is used to execute the SQL query against the PostgreSQL database.
+
+Replace the host, dbname, user, and password parameters with the values that you specified when you created the server and database.
+
+```javascript
+const pg = require('pg');
+
+const config = {
+ host: '<your-db-server-name>.postgres.database.azure.com',
+ // Do not hard code your username and password.
+ // Consider using Node environment variables.
+ user: '<your-db-username>',
+ password: '<your-password>',
+ database: '<name-of-database>',
+ port: 5432,
+ ssl: true
+};
+
+const client = new pg.Client(config);
+
+client.connect(err => {
+ if (err) throw err;
+ else {
+ queryDatabase();
+ }
+});
+
+function queryDatabase() {
+ const query = `
+ DROP TABLE IF EXISTS inventory;
+ CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);
+ INSERT INTO inventory (name, quantity) VALUES ('banana', 150);
+ INSERT INTO inventory (name, quantity) VALUES ('orange', 154);
+ INSERT INTO inventory (name, quantity) VALUES ('apple', 100);
+ `;
+
+ client
+ .query(query)
+ .then(() => {
+ console.log('Table created successfully!');
+ client.end(console.log('Closed client connection'));
+ })
+ .catch(err => console.log(err))
+ .then(() => {
+ console.log('Finished execution, exiting now');
+ process.exit();
+ });
+}
+```
+
+## Read data
+Use the following code to connect and read the data using a **SELECT** SQL statement. The [pg.Client](https://github.com/brianc/node-postgres/wiki/Client) object is used to interface with the PostgreSQL server. The [pg.Client.connect()](https://github.com/brianc/node-postgres/wiki/Client#method-connect) function is used to establish the connection to the server. The [pg.Client.query()](https://github.com/brianc/node-postgres/wiki/Query) function is used to execute the SQL query against the PostgreSQL database.
+
+Replace the host, dbname, user, and password parameters with the values that you specified when you created the server and database.
+
+```javascript
+const pg = require('pg');
+
+const config = {
+ host: '<your-db-server-name>.postgres.database.azure.com',
+ // Do not hard code your username and password.
+ // Consider using Node environment variables.
+ user: '<your-db-username>',
+ password: '<your-password>',
+ database: '<name-of-database>',
+ port: 5432,
+ ssl: true
+};
+
+const client = new pg.Client(config);
+
+client.connect(err => {
+ if (err) throw err;
+ else { queryDatabase(); }
+});
+
+function queryDatabase() {
+
+ console.log(`Running query to PostgreSQL server: ${config.host}`);
+
+ const query = 'SELECT * FROM inventory;';
+
+ client.query(query)
+ .then(res => {
+ const rows = res.rows;
+
+ rows.map(row => {
+ console.log(`Read: ${JSON.stringify(row)}`);
+ });
+
+ process.exit();
+ })
+ .catch(err => {
+ console.log(err);
+ });
+}
+```
+
+## Update data
+Use the following code to connect and update the data using an **UPDATE** SQL statement. The [pg.Client](https://github.com/brianc/node-postgres/wiki/Client) object is used to interface with the PostgreSQL server. The [pg.Client.connect()](https://github.com/brianc/node-postgres/wiki/Client#method-connect) function is used to establish the connection to the server. The [pg.Client.query()](https://github.com/brianc/node-postgres/wiki/Query) function is used to execute the SQL query against the PostgreSQL database.
+
+Replace the host, dbname, user, and password parameters with the values that you specified when you created the server and database.
+
+```javascript
+const pg = require('pg');
+
+const config = {
+ host: '<your-db-server-name>.postgres.database.azure.com',
+ // Do not hard code your username and password.
+ // Consider using Node environment variables.
+ user: '<your-db-username>',
+ password: '<your-password>',
+ database: '<name-of-database>',
+ port: 5432,
+ ssl: true
+};
+
+const client = new pg.Client(config);
+
+client.connect(err => {
+ if (err) throw err;
+ else {
+ queryDatabase();
+ }
+});
+
+function queryDatabase() {
+ const query = `
+ UPDATE inventory
+ SET quantity= 1000 WHERE name='banana';
+ `;
+
+ client
+ .query(query)
+ .then(result => {
+ console.log('Update completed');
+ console.log(`Rows affected: ${result.rowCount}`);
+ })
+ .catch(err => {
+ console.log(err);
+ throw err;
+ });
+}
+```
+
+## Delete data
+Use the following code to connect and delete the data using a **DELETE** SQL statement. The [pg.Client](https://github.com/brianc/node-postgres/wiki/Client) object is used to interface with the PostgreSQL server. The [pg.Client.connect()](https://github.com/brianc/node-postgres/wiki/Client#method-connect) function is used to establish the connection to the server. The [pg.Client.query()](https://github.com/brianc/node-postgres/wiki/Query) function is used to execute the SQL query against the PostgreSQL database.
+
+Replace the host, dbname, user, and password parameters with the values that you specified when you created the server and database.
+
+```javascript
+const pg = require('pg');
+
+const config = {
+ host: '<your-db-server-name>.postgres.database.azure.com',
+ // Do not hard code your username and password.
+ // Consider using Node environment variables.
+ user: '<your-db-username>',
+ password: '<your-password>',
+ database: '<name-of-database>',
+ port: 5432,
+ ssl: true
+};
+
+const client = new pg.Client(config);
+
+client.connect(err => {
+ if (err) {
+ throw err;
+ } else {
+ queryDatabase();
+ }
+});
+
+function queryDatabase() {
+ const query = `
+ DELETE FROM inventory
+ WHERE name = 'apple';
+ `;
+
+ client
+ .query(query)
+ .then(result => {
+ console.log('Delete completed');
+ console.log(`Rows affected: ${result.rowCount}`);
+ })
+ .catch(err => {
+ console.log(err);
+ throw err;
+ });
+}
+```
+
+## Clean up resources
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Migrate your database using Export and Import](./how-to-migrate-using-export-and-import.md)
postgresql Connect Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-php.md
+
+ Title: 'Quickstart: Connect with PHP - Azure Database for PostgreSQL - Single Server'
+description: This quickstart provides a PHP code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server.
+ms.devlang: php
+Last updated: 2/28/2018
+# Quickstart: Use PHP to connect and query data in Azure Database for PostgreSQL - Single Server
+
+This quickstart demonstrates how to connect to an Azure Database for PostgreSQL using a [PHP](https://secure.php.net/manual/intro-whatis.php) application. It shows how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you are familiar with developing using PHP, and are new to working with Azure Database for PostgreSQL.
+
+## Prerequisites
+This quickstart uses the resources created in either of these guides as a starting point:
+- [Create DB - Portal](quickstart-create-server-database-portal.md)
+- [Create DB - Azure CLI](quickstart-create-server-database-azure-cli.md)
+
+## Install PHP
+Install PHP on your own server, or create an Azure [web app](../../app-service/overview.md) that includes PHP.
+
+### Windows
+- Download [PHP 7.1.4 non-thread safe (x64) version](https://windows.php.net/download#php-7.1)
+- Install PHP and refer to the [PHP manual](https://secure.php.net/manual/install.windows.php) for further configuration
+- The code uses the **pgsql** extension (ext/php_pgsql.dll) that is included in the PHP installation.
+- Enable the **pgsql** extension by editing the php.ini configuration file, typically located at `C:\Program Files\PHP\v7.1\php.ini`. The configuration file should contain a line with the text `extension=php_pgsql.dll`. If it is not shown, add the text and save the file. If the text is present but commented out with a semicolon prefix, uncomment it by removing the semicolon.
+
+### Linux (Ubuntu)
+- Download [PHP 7.1.4 non-thread safe (x64) version](https://secure.php.net/downloads.php)
+- Install PHP and refer to the [PHP manual](https://secure.php.net/manual/install.unix.php) for further configuration
+- The code uses the **pgsql** extension (php_pgsql.so). Install it by running `sudo apt-get install php-pgsql`.
+- Enable the **pgsql** extension by editing the `/etc/php/7.0/mods-available/pgsql.ini` configuration file. The configuration file should contain a line with the text `extension=php_pgsql.so`. If it is not shown, add the text and save the file. If the text is present but commented out with a semicolon prefix, uncomment it by removing the semicolon. You can verify the result as shown below.
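+
+  As a quick check (a generic PHP command, not specific to this quickstart), list the loaded modules and look for pgsql:
+
+  ```bash
+  php -m | grep -i pgsql
+  ```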
+
+### MacOS
+- Download [PHP 7.1.4 version](https://secure.php.net/downloads.php)
+- Install PHP and refer to the [PHP manual](https://secure.php.net/manual/install.macosx.php) for further configuration
+
+## Get connection information
+Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials.
+
+1. Log in to the [Azure portal](https://portal.azure.com/).
+2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
+3. Click the server name.
+4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
+ :::image type="content" source="./media/connect-php/1-connection-string.png" alt-text="Azure Database for PostgreSQL server name":::
+
+## Connect and create a table
+Use the following code to connect and create a table by using a **CREATE TABLE** SQL statement, followed by **INSERT INTO** SQL statements to add rows into the table.
+
+The code calls the [pg_connect()](https://secure.php.net/manual/en/function.pg-connect.php) method to connect to Azure Database for PostgreSQL. Then it calls the
+[pg_query()](https://secure.php.net/manual/en/function.pg-query.php) method several times to run several commands, and [pg_last_error()](https://secure.php.net/manual/en/function.pg-last-error.php) to check the details each time an error occurs. Then it calls the [pg_close()](https://secure.php.net/manual/en/function.pg-close.php) method to close the connection.
+
+Replace the `$host`, `$database`, `$user`, and `$password` parameters with your own values.
+
+```php
+<?php
+ // Initialize connection variables.
+ $host = "mydemoserver.postgres.database.azure.com";
+ $database = "mypgsqldb";
+ $user = "mylogin@mydemoserver";
+ $password = "<server_admin_password>";
+
+ // Initialize connection object.
+ $connection = pg_connect("host=$host dbname=$database user=$user password=$password")
+ or die("Failed to create connection to database: ". pg_last_error(). "<br/>");
+ print "Successfully created connection to database.<br/>";
+
+ // Drop previous table of same name if one exists.
+ $query = "DROP TABLE IF EXISTS inventory;";
+ pg_query($connection, $query)
+ or die("Encountered an error when executing given sql statement: ". pg_last_error(). "<br/>");
+ print "Finished dropping table (if existed).<br/>";
+
+ // Create table.
+ $query = "CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);";
+ pg_query($connection, $query)
+ or die("Encountered an error when executing given sql statement: ". pg_last_error(). "<br/>");
+ print "Finished creating table.<br/>";
+
+ // Insert some data into table.
+ $name = '\'banana\'';
+ $quantity = 150;
+ $query = "INSERT INTO inventory (name, quantity) VALUES ($name, $quantity);";
+ pg_query($connection, $query)
+ or die("Encountered an error when executing given sql statement: ". pg_last_error(). "<br/>");
+
+ $name = '\'orange\'';
+ $quantity = 154;
+ $query = "INSERT INTO inventory (name, quantity) VALUES ($name, $quantity);";
+ pg_query($connection, $query)
+ or die("Encountered an error when executing given sql statement: ". pg_last_error(). "<br/>");
+
+ $name = '\'apple\'';
+ $quantity = 100;
+ $query = "INSERT INTO inventory (name, quantity) VALUES ($name, $quantity);";
+ pg_query($connection, $query)
+        or die("Encountered an error when executing given sql statement: ". pg_last_error(). "<br/>");
+
+ print "Inserted 3 rows of data.<br/>";
+
+ // Closing connection
+ pg_close($connection);
+?>
+```
+
+## Read data
+Use the following code to connect and read the data using a **SELECT** SQL statement.
+
+The code calls the [pg_connect()](https://secure.php.net/manual/en/function.pg-connect.php) method to connect to Azure Database for PostgreSQL. Then it calls the [pg_query()](https://secure.php.net/manual/en/function.pg-query.php) method to run the SELECT command, keeping the results in a result set, and [pg_last_error()](https://secure.php.net/manual/en/function.pg-last-error.php) to check the details if an error occurred. To read the result set, the [pg_fetch_row()](https://secure.php.net/manual/en/function.pg-fetch-row.php) method is called in a loop, once per row, and the row data is retrieved in an array `$row`, with one data value per column in each array position. To free the result set, the [pg_free_result()](https://secure.php.net/manual/en/function.pg-free-result.php) method is called. Then it calls the [pg_close()](https://secure.php.net/manual/en/function.pg-close.php) method to close the connection.
+
+Replace the `$host`, `$database`, `$user`, and `$password` parameters with your own values.
+
+```php
+<?php
+ // Initialize connection variables.
+ $host = "mydemoserver.postgres.database.azure.com";
+ $database = "mypgsqldb";
+ $user = "mylogin@mydemoserver";
+ $password = "<server_admin_password>";
+
+ // Initialize connection object.
+ $connection = pg_connect("host=$host dbname=$database user=$user password=$password")
+ or die("Failed to create connection to database: ". pg_last_error(). "<br/>");
+
+ print "Successfully created connection to database. <br/>";
+
+ // Perform some SQL queries over the connection.
+ $query = "SELECT * from inventory";
+ $result_set = pg_query($connection, $query)
+ or die("Encountered an error when executing given sql statement: ". pg_last_error(). "<br/>");
+ while ($row = pg_fetch_row($result_set))
+ {
+ print "Data row = ($row[0], $row[1], $row[2]). <br/>";
+ }
+
+ // Free result_set
+ pg_free_result($result_set);
+
+ // Closing connection
+ pg_close($connection);
+?>
+```
+
+## Update data
+Use the following code to connect and update the data using an **UPDATE** SQL statement.
+
+The code calls the [pg_connect()](https://secure.php.net/manual/en/function.pg-connect.php) method to connect to Azure Database for PostgreSQL. Then it calls the [pg_query()](https://secure.php.net/manual/en/function.pg-query.php) method to run a command, and [pg_last_error()](https://secure.php.net/manual/en/function.pg-last-error.php) to check the details if an error occurred. Then it calls the [pg_close()](https://secure.php.net/manual/en/function.pg-close.php) method to close the connection.
+
+Replace the `$host`, `$database`, `$user`, and `$password` parameters with your own values.
+
+```php
+<?php
+ // Initialize connection variables.
+ $host = "mydemoserver.postgres.database.azure.com";
+ $database = "mypgsqldb";
+ $user = "mylogin@mydemoserver";
+ $password = "<server_admin_password>";
+
+ // Initialize connection object.
+ $connection = pg_connect("host=$host dbname=$database user=$user password=$password")
+ or die("Failed to create connection to database: ". pg_last_error(). ".<br/>");
+
+ print "Successfully created connection to database. <br/>";
+
+ // Modify some data in table.
+ $new_quantity = 200;
+ $name = '\'banana\'';
+ $query = "UPDATE inventory SET quantity = $new_quantity WHERE name = $name;";
+ pg_query($connection, $query)
+ or die("Encountered an error when executing given sql statement: ". pg_last_error(). ".<br/>");
+ print "Updated 1 row of data. </br>";
+
+ // Closing connection
+ pg_close($connection);
+?>
+```
+
+## Delete data
+Use the following code to connect and delete the data using a **DELETE** SQL statement.
+
+The code calls the [pg_connect()](https://secure.php.net/manual/en/function.pg-connect.php) method to connect to Azure Database for PostgreSQL. Then it calls the [pg_query()](https://secure.php.net/manual/en/function.pg-query.php) method to run a command, and [pg_last_error()](https://secure.php.net/manual/en/function.pg-last-error.php) to check the details if an error occurred. Then it calls the [pg_close()](https://secure.php.net/manual/en/function.pg-close.php) method to close the connection.
+
+Replace the `$host`, `$database`, `$user`, and `$password` parameters with your own values.
+
+```php
+<?php
+ // Initialize connection variables.
+ $host = "mydemoserver.postgres.database.azure.com";
+ $database = "mypgsqldb";
+ $user = "mylogin@mydemoserver";
+ $password = "<server_admin_password>";
+
+ // Initialize connection object.
+ $connection = pg_connect("host=$host dbname=$database user=$user password=$password")
+ or die("Failed to create connection to database: ". pg_last_error(). ". </br>");
+
+ print "Successfully created connection to database. <br/>";
+
+ // Delete some data from table.
+ $name = '\'orange\'';
+ $query = "DELETE FROM inventory WHERE name = $name;";
+ pg_query($connection, $query)
+ or die("Encountered an error when executing given sql statement: ". pg_last_error(). ". <br/>");
+ print "Deleted 1 row of data. <br/>";
+
+ // Closing connection
+ pg_close($connection);
+?>
+```
+
+## Clean up resources
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Migrate your database using Export and Import](./how-to-migrate-using-export-and-import.md)
postgresql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-python.md
+
+Title: 'Quickstart: Connect with Python - Azure Database for PostgreSQL - Single Server'
+description: This quickstart provides Python code samples that you can use to connect and query data from Azure Database for PostgreSQL - Single Server.
+ms.devlang: python
+Last updated: 10/28/2020
+# Quickstart: Use Python to connect and query data in Azure Database for PostgreSQL - Single Server
+
+In this quickstart, you will learn how to connect to a database on Azure Database for PostgreSQL Single Server and run SQL statements to query it using Python on macOS, Ubuntu Linux, or Windows.
+
+> [!TIP]
+> If you are looking to build a Django application with PostgreSQL, check out the [Deploy a Django web app with PostgreSQL](../../app-service/tutorial-python-postgresql-app.md) tutorial.
+
+## Prerequisites
+For this quickstart you need:
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- An Azure Database for PostgreSQL single server. Create one using the [Azure portal](./quickstart-create-server-database-portal.md) or the [Azure CLI](./quickstart-create-server-database-azure-cli.md) if you do not have one.
+- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.
+
+ |Action| Connectivity method|How-to guide|
+ |:---|:---|:---|
+ | **Configure firewall rules** | Public | [Portal](./how-to-manage-firewall-using-portal.md) <br/> [CLI](./quickstart-create-server-database-azure-cli.md#configure-a-server-based-firewall-rule)|
+ | **Configure Service Endpoint** | Public | [Portal](./how-to-manage-vnet-using-portal.md) <br/> [CLI](./how-to-manage-vnet-using-cli.md)|
+ | **Configure private link** | Private | [Portal](./how-to-configure-privatelink-portal.md) <br/> [CLI](./how-to-configure-privatelink-cli.md) |
+
+- [Python](https://www.python.org/downloads/) 2.7 or 3.6+.
+
+- Latest [pip](https://pip.pypa.io/en/stable/installing/) package installer.
+- Install [psycopg2](https://pypi.python.org/pypi/psycopg2-binary/) using `pip install psycopg2-binary` in a terminal or command prompt window. For more information, see [how to install `psycopg2`](https://www.psycopg.org/docs/install.html).
+
+## Get database connection information
+Connecting to an Azure Database for PostgreSQL database requires the fully qualified server name and login credentials. You can get this information from the Azure portal.
+
+1. In the [Azure portal](https://portal.azure.com/), search for and select your Azure Database for PostgreSQL server name.
+1. On the server's **Overview** page, copy the fully qualified **Server name** and the **Admin username**. The fully qualified **Server name** is always of the form *\<my-server-name>.postgres.database.azure.com*, and the **Admin username** is always of the form *\<my-admin-username>@\<my-server-name>*.
+
+ You also need your admin password. If you forget it, you can reset it from this page.
+
+ :::image type="content" source="./media/connect-python/1-connection-string.png" alt-text="Azure Database for PostgreSQL server name":::
+
+> [!IMPORTANT]
+> Replace the following values:
+> - `<server-name>` and `<admin-username>` with the values you copied from the Azure portal.
+> - `<admin-password>` with your server password.
+> - `<database-name>` with the name of your database. A default database named *postgres* was automatically created when you created your server; you can rename it or [create a new database](https://www.postgresql.org/docs/current/sql-createdatabase.html) by using SQL commands.
+
+## Step 1: Connect and insert data
+The following code example connects to your Azure Database for PostgreSQL database using
+- the [psycopg2.connect](https://www.psycopg.org/docs/connection.html) function, and loads data with a SQL **INSERT** statement.
+- the [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) function, which executes the SQL query against the database.
+
+```Python
+import psycopg2
+
+# Update connection string information
+host = "<server-name>"
+dbname = "<database-name>"
+user = "<admin-username>"
+password = "<admin-password>"
+sslmode = "require"
+
+# Construct connection string
+conn_string = "host={0} user={1} dbname={2} password={3} sslmode={4}".format(host, user, dbname, password, sslmode)
+conn = psycopg2.connect(conn_string)
+print("Connection established")
+
+cursor = conn.cursor()
+
+# Drop previous table of same name if one exists
+cursor.execute("DROP TABLE IF EXISTS inventory;")
+print("Finished dropping table (if existed)")
+
+# Create a table
+cursor.execute("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);")
+print("Finished creating table")
+
+# Insert some data into the table
+cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("banana", 150))
+cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("orange", 154))
+cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("apple", 100))
+print("Inserted 3 rows of data")
+
+# Clean up
+conn.commit()
+cursor.close()
+conn.close()
+```
+
+When the code runs successfully, it produces output similar to the following, matching the `print` statements in the code:
+
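+```output
+Connection established
+Finished dropping table (if existed)
+Finished creating table
+Inserted 3 rows of data
+```
+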
+[Having issues? Let us know](https://aka.ms/postgres-doc-feedback)
+
+## Step 2: Read data
+The following code example reuses the connection from step 1 and uses
+- [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with a SQL **SELECT** statement to read data.
+- [cursor.fetchall()](https://www.psycopg.org/docs/cursor.html#cursor.fetchall) to retrieve all rows in the result set so that you can iterate over them.
+
+```Python
+
+# Fetch all rows from table
+cursor.execute("SELECT * FROM inventory;")
+rows = cursor.fetchall()
+
+# Print all rows
+for row in rows:
+ print("Data row = (%s, %s, %s)" %(str(row[0]), str(row[1]), str(row[2])))
+```
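+
+Because steps 2 through 4 reuse the connection and cursor from step 1, you can also iterate the cursor directly instead of loading every row into memory with `fetchall()`. A minimal sketch under that assumption:
+
+```Python
+# Iterate the cursor directly; psycopg2 cursors are iterable.
+cursor.execute("SELECT * FROM inventory;")
+for row in cursor:
+    print("Data row = (%s, %s, %s)" % (str(row[0]), str(row[1]), str(row[2])))
+```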
+[Having issues? Let us know](https://aka.ms/postgres-doc-feedback)
+
+## Step 3: Update data
+The following code example uses [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with the SQL **UPDATE** statement to update data.
+
+```Python
+
+# Update a data row in the table
+cursor.execute("UPDATE inventory SET quantity = %s WHERE name = %s;", (200, "banana"))
+print("Updated 1 row of data")
+
+```
+[Having issues? Let us know](https://aka.ms/postgres-doc-feedback)
+
+## Step 4: Delete data
+The following code example runs [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with the SQL **DELETE** statement to delete an inventory item that you previously inserted.
+
+```Python
+
+# Delete data row from table
+cursor.execute("DELETE FROM inventory WHERE name = %s;", ("orange",))
+print("Deleted 1 row of data")
+
+```
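+
+The **UPDATE** and **DELETE** statements in steps 3 and 4 run inside a transaction that is not persisted until you commit it. A minimal sketch of the final cleanup, assuming the connection and cursor from step 1 are still open:
+
+```Python
+# Persist the changes made by the UPDATE and DELETE statements, then clean up.
+conn.commit()
+cursor.close()
+conn.close()
+```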
+
+[Having issues? Let us know](https://aka.ms/postgres-doc-feedback)
+
+## Clean up resources
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Manage Azure Database for PostgreSQL server using Portal](./how-to-create-manage-server-portal.md)<br/>
+
+> [!div class="nextstepaction"]
+> [Manage Azure Database for PostgreSQL server using CLI](./how-to-manage-server-cli.md)<br/>
+
+[Cannot find what you are looking for? Let us know.](https://aka.ms/postgres-doc-feedback)
postgresql Connect Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-ruby.md
+
+Title: 'Quickstart: Connect with Ruby - Azure Database for PostgreSQL - Single Server'
+description: This quickstart provides a Ruby code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server.
+ms.devlang: ruby
+Last updated: 5/6/2019
+# Quickstart: Use Ruby to connect and query data in Azure Database for PostgreSQL - Single Server
+
+This quickstart demonstrates how to connect to an Azure Database for PostgreSQL using a [Ruby](https://www.ruby-lang.org) application. It shows how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you are familiar with developing using Ruby, and are new to working with Azure Database for PostgreSQL.
+
+## Prerequisites
+This quickstart uses the resources created in either of these guides as a starting point:
+- [Create DB - Portal](quickstart-create-server-database-portal.md)
+- [Create DB - Azure CLI](quickstart-create-server-database-azure-cli.md)
+
+You also need to have installed:
+- [Ruby](https://www.ruby-lang.org/en/downloads/)
+- [Ruby pg](https://rubygems.org/gems/pg/), the PostgreSQL module for Ruby
+
+## Get connection information
+Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials.
+
+1. Log in to the [Azure portal](https://portal.azure.com/).
+2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
+3. Click the server name.
+4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
+ :::image type="content" source="./media/connect-ruby/1-connection-string.png" alt-text="Azure Database for PostgreSQL server name":::
+
+> [!NOTE]
+> The `@` symbol in the Azure Database for PostgreSQL username has been URL encoded as `%40` in all the connection strings.
+
+## Connect and create a table
+Use the following code to connect and create a table using a **CREATE TABLE** SQL statement, followed by **INSERT INTO** SQL statements to add rows into the table.
+
+The code uses a ```PG::Connection``` object with constructor ```new``` to connect to Azure Database for PostgreSQL. Then it calls method ```exec()``` to run the DROP, CREATE TABLE, and INSERT INTO commands. The code checks for errors using the ```PG::Error``` class. Then it calls method ```close()``` to close the connection before terminating. See the [Ruby pg reference documentation](https://rubygems.org/gems/pg) for more information on these classes and methods.
+
+Replace the `host`, `database`, `user`, and `password` strings with your own values.
+
+```ruby
+require 'pg'
+
+begin
+ # Initialize connection variables.
+ host = String('mydemoserver.postgres.database.azure.com')
+ database = String('postgres')
+ user = String('mylogin%40mydemoserver')
+ password = String('<server_admin_password>')
+
+ # Initialize connection object.
+ connection = PG::Connection.new(:host => host, :user => user, :dbname => database, :port => '5432', :password => password)
+ puts 'Successfully created connection to database'
+
+ # Drop previous table of same name if one exists
+ connection.exec('DROP TABLE IF EXISTS inventory;')
+ puts 'Finished dropping table (if existed).'
+
+  # Create the table.
+ connection.exec('CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);')
+ puts 'Finished creating table.'
+
+ # Insert some data into table.
+ connection.exec("INSERT INTO inventory VALUES(1, 'banana', 150)")
+ connection.exec("INSERT INTO inventory VALUES(2, 'orange', 154)")
+ connection.exec("INSERT INTO inventory VALUES(3, 'apple', 100)")
+ puts 'Inserted 3 rows of data.'
+
+rescue PG::Error => e
+ puts e.message
+
+ensure
+ connection.close if connection
+end
+```
+
+## Read data
+Use the following code to connect and read the data using a **SELECT** SQL statement.
+
+The code uses a ```PG::Connection``` object with constructor ```new``` to connect to Azure Database for PostgreSQL. Then it calls method ```exec()``` to run the SELECT command, keeping the results in a result set. The result set collection is iterated over using the `resultSet.each do` loop, keeping the current row values in the `row` variable. The code checks for errors using the ```PG::Error``` class. Then it calls method ```close()``` to close the connection before terminating. See the [Ruby pg reference documentation](https://rubygems.org/gems/pg) for more information on these classes and methods.
+
+Replace the `host`, `database`, `user`, and `password` strings with your own values.
+
+```ruby
+require 'pg'
+
+begin
+ # Initialize connection variables.
+ host = String('mydemoserver.postgres.database.azure.com')
+ database = String('postgres')
+ user = String('mylogin%40mydemoserver')
+ password = String('<server_admin_password>')
+
+ # Initialize connection object.
+ connection = PG::Connection.new(:host => host, :user => user, :dbname => database, :port => '5432', :password => password)
+ puts 'Successfully created connection to database.'
+
+ resultSet = connection.exec('SELECT * from inventory;')
+ resultSet.each do |row|
+ puts 'Data row = (%s, %s, %s)' % [row['id'], row['name'], row['quantity']]
+ end
+
+rescue PG::Error => e
+ puts e.message
+
+ensure
+ connection.close if connection
+end
+```
+
+## Update data
+Use the following code to connect and update the data using an **UPDATE** SQL statement.
+
+The code uses a ```PG::Connection``` object with constructor ```new``` to connect to Azure Database for PostgreSQL. Then it calls method ```exec()``` to run the UPDATE command. The code checks for errors using the ```PG::Error``` class. Then it calls method ```close()``` to close the connection before terminating. See [Ruby Pg reference documentation](https://rubygems.org/gems/pg) for more information on these classes and methods.
+
+Replace the `host`, `database`, `user`, and `password` strings with your own values.
+
+```ruby
+require 'pg'
+
+begin
+ # Initialize connection variables.
+ host = String('mydemoserver.postgres.database.azure.com')
+ database = String('postgres')
+ user = String('mylogin%40mydemoserver')
+ password = String('<server_admin_password>')
+
+ # Initialize connection object.
+ connection = PG::Connection.new(:host => host, :user => user, :dbname => database, :port => '5432', :password => password)
+ puts 'Successfully created connection to database.'
+
+ # Modify some data in table.
+ connection.exec('UPDATE inventory SET quantity = %d WHERE name = %s;' % [200, '\'banana\''])
+ puts 'Updated 1 row of data.'
+
+rescue PG::Error => e
+ puts e.message
+
+ensure
+ connection.close if connection
+end
+```
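+
+The `%` string formatting above requires quoting the name value by hand. The pg gem also supports server-side bind parameters through `exec_params`, which avoids manual escaping; here is a minimal sketch (the values are illustrative, and the same approach applies to the DELETE example below):
+
+```ruby
+# Bind values as parameters instead of interpolating them into the SQL string.
+connection.exec_params('UPDATE inventory SET quantity = $1 WHERE name = $2;', [200, 'banana'])
+puts 'Updated 1 row of data.'
+```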
+
+## Delete data
+Use the following code to connect and delete data using a **DELETE** SQL statement.
+
+The code uses a ```PG::Connection``` object with constructor ```new``` to connect to Azure Database for PostgreSQL. Then it calls method ```exec()``` to run the DELETE command. The code checks for errors using the ```PG::Error``` class. Then it calls method ```close()``` to close the connection before terminating.
+
+Replace the `host`, `database`, `user`, and `password` strings with your own values.
+
+```ruby
+require 'pg'
+
+begin
+ # Initialize connection variables.
+ host = String('mydemoserver.postgres.database.azure.com')
+ database = String('postgres')
+ user = String('mylogin%40mydemoserver')
+ password = String('<server_admin_password>')
+
+ # Initialize connection object.
+ connection = PG::Connection.new(:host => host, :user => user, :dbname => database, :port => '5432', :password => password)
+ puts 'Successfully created connection to database.'
+
+  # Delete some data from table.
+ connection.exec('DELETE FROM inventory WHERE name = %s;' % ['\'orange\''])
+ puts 'Deleted 1 row of data.'
+
+rescue PG::Error => e
+ puts e.message
+
+ensure
+ connection.close if connection
+end
+```
+
+## Clean up resources
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Migrate your database using Export and Import](./how-to-migrate-using-export-and-import.md) <br/>
+> [!div class="nextstepaction"]
+> [Ruby Pg reference documentation](https://rubygems.org/gems/pg)
postgresql Connect Rust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-rust.md
+
+Title: 'Quickstart: Connect with Rust - Azure Database for PostgreSQL - Single Server'
+description: This quickstart provides Rust code samples that you can use to connect and query data from Azure Database for PostgreSQL - Single Server.
+ms.devlang: rust
+Last updated: 03/26/2021
+# Quickstart: Use Rust to connect and query data in Azure Database for PostgreSQL - Single Server
+
+In this article, you will learn how to use the [PostgreSQL driver for Rust](https://github.com/sfackler/rust-postgres) to interact with Azure Database for PostgreSQL by exploring CRUD (create, read, update, delete) operations implemented in the sample code. Finally, you can run the application locally to see it in action.
+
+## Prerequisites
+For this quickstart you need:
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- A recent version of [Rust](https://www.rust-lang.org/tools/install) installed.
+- An Azure Database for PostgreSQL single server - create one using [Azure portal](./quickstart-create-server-database-portal.md) <br/> or [Azure CLI](./quickstart-create-server-database-azure-cli.md).
+- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.
+
+ |Action| Connectivity method|How-to guide|
+ |:---|:---|:---|
+ | **Configure firewall rules** | Public | [Portal](./how-to-manage-firewall-using-portal.md) <br/> [CLI](./quickstart-create-server-database-azure-cli.md#configure-a-server-based-firewall-rule)|
+ | **Configure Service Endpoint** | Public | [Portal](./how-to-manage-vnet-using-portal.md) <br/> [CLI](./how-to-manage-vnet-using-cli.md)|
+ | **Configure private link** | Private | [Portal](./how-to-configure-privatelink-portal.md) <br/> [CLI](./how-to-configure-privatelink-cli.md) |
+
+- [Git](https://git-scm.com/downloads) installed.
+
+## Get database connection information
+Connecting to an Azure Database for PostgreSQL database requires the fully qualified server name and login credentials. You can get this information from the Azure portal.
+
+1. In the [Azure portal](https://portal.azure.com/), search for and select your Azure Database for PostgreSQL server name.
+1. On the server's **Overview** page, copy the fully qualified **Server name** and the **Admin username**. The fully qualified **Server name** is always of the form *\<my-server-name>.postgres.database.azure.com*, and the **Admin username** is always of the form *\<my-admin-username>@\<my-server-name>*.
+
+## Review the code (optional)
+
+If you're interested in learning how the code works, you can review the following snippets. Otherwise, feel free to skip ahead to [Run the application](#run-the-application).
+
+### Connect
+
+The `main` function starts by connecting to Azure Database for PostgreSQL. It depends on the following environment variables for connection information: `POSTGRES_HOST`, `POSTGRES_USER`, `POSTGRES_PASSWORD`, and `POSTGRES_DBNAME`. By default, the PostgreSQL database service is configured to require `TLS` connections. You can choose to disable requiring `TLS` if your client application does not support `TLS` connectivity. For details, see [Configure TLS connectivity in Azure Database for PostgreSQL - Single Server](./concepts-ssl-connection-security.md).
+
+The sample application in this article uses TLS with the [postgres-openssl crate](https://crates.io/crates/postgres-openssl/). The [postgres::Client::connect](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.connect) function initiates the connection, and the program exits if the connection fails.
+
+```rust
+fn main() {
+ let pg_host = std::env::var("POSTGRES_HOST").expect("missing environment variable POSTGRES_HOST");
+ let pg_user = std::env::var("POSTGRES_USER").expect("missing environment variable POSTGRES_USER");
+ let pg_password = std::env::var("POSTGRES_PASSWORD").expect("missing environment variable POSTGRES_PASSWORD");
+ let pg_dbname = std::env::var("POSTGRES_DBNAME").unwrap_or("postgres".to_string());
+
+ let builder = SslConnector::builder(SslMethod::tls()).unwrap();
+ let tls_connector = MakeTlsConnector::new(builder.build());
+
+ let url = format!(
+ "host={} port=5432 user={} password={} dbname={} sslmode=require",
+ pg_host, pg_user, pg_password, pg_dbname
+ );
+ let mut pg_client = postgres::Client::connect(&url, tls_connector).expect("failed to connect to postgres");
+...
+}
+```
+
+### Drop and create table
+
+The sample application uses a simple `inventory` table to demonstrate the CRUD (create, read, update, delete) operations.
+
+```sql
+CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);
+```
+
+The `drop_create_table` function initially tries to `DROP` the `inventory` table before creating a new one. This makes it easier for learning and experimentation, as you always start with a known (clean) state. The [execute](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.execute) method is used for the create and drop operations.
+
+```rust
+const CREATE_QUERY: &str =
+ "CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);";
+
+const DROP_TABLE: &str = "DROP TABLE inventory";
+
+fn drop_create_table(pg_client: &mut postgres::Client) {
+ let res = pg_client.execute(DROP_TABLE, &[]);
+ match res {
+ Ok(_) => println!("dropped table"),
+ Err(e) => println!("failed to drop table {}", e),
+ }
+ pg_client
+ .execute(CREATE_QUERY, &[])
+ .expect("failed to create 'inventory' table");
+}
+```
+
+### Insert data
+
+`insert_data` adds entries to the `inventory` table. It creates a [prepared statement](https://docs.rs/postgres/0.19.0/postgres/struct.Statement.html) with the [prepare](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.prepare) function.
+
+```rust
+const INSERT_QUERY: &str = "INSERT INTO inventory (name, quantity) VALUES ($1, $2) RETURNING id;";
+
+fn insert_data(pg_client: &mut postgres::Client) {
+
+ let prep_stmt = pg_client
+ .prepare(&INSERT_QUERY)
+ .expect("failed to create prepared statement");
+
+ let row = pg_client
+ .query_one(&prep_stmt, &[&"item-1", &42])
+ .expect("insert failed");
+
+ let id: i32 = row.get(0);
+ println!("inserted item with id {}", id);
+...
+}
+```
+
+Also note the [prepare_typed](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.prepare_typed) method, which allows the types of the query parameters to be explicitly specified.
+
+```rust
+...
+let typed_prep_stmt = pg_client
+ .prepare_typed(&INSERT_QUERY, &[Type::VARCHAR, Type::INT4])
+ .expect("failed to create prepared statement");
+
+let row = pg_client
+ .query_one(&typed_prep_stmt, &[&"item-2", &43])
+ .expect("insert failed");
+
+let id: i32 = row.get(0);
+println!("inserted item with id {}", id);
+...
+```
+
+Finally, a `for` loop is used to add `item-3`, `item-4`, and `item-5`, with a randomly generated quantity for each.
+
+```rust
+...
+ for n in 3..=5 {
+ let row = pg_client
+ .query_one(
+ &typed_prep_stmt,
+ &[
+ &("item-".to_owned() + &n.to_string()),
+ &rand::thread_rng().gen_range(10..=50),
+ ],
+ )
+ .expect("insert failed");
+
+ let id: i32 = row.get(0);
+ println!("inserted item with id {} ", id);
+ }
+...
+```
+
+### Query data
+
+The `query_data` function demonstrates how to retrieve data from the `inventory` table. The [query_one](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.query_one) method is used to get an item by its `id`.
+
+```rust
+const SELECT_ALL_QUERY: &str = "SELECT * FROM inventory;";
+const SELECT_BY_ID: &str = "SELECT name, quantity FROM inventory where id=$1;";
+
+fn query_data(pg_client: &mut postgres::Client) {
+
+ let prep_stmt = pg_client
+ .prepare_typed(&SELECT_BY_ID, &[Type::INT4])
+ .expect("failed to create prepared statement");
+
+ let item_id = 1;
+
+ let c = pg_client
+ .query_one(&prep_stmt, &[&item_id])
+ .expect("failed to query item");
+
+ let name: String = c.get(0);
+ let quantity: i32 = c.get(1);
+ println!("quantity for item {} = {}", name, quantity);
+...
+}
+```
+
+All rows in the inventory table are fetched using a `select * from` query with the [query](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.query) method. The returned rows are iterated over to extract the value for each column using [get](https://docs.rs/postgres/0.19.0/postgres/row/struct.Row.html#method.get).
+
+> [!TIP]
+> Note how `get` makes it possible to specify the column either by its numeric index in the row, or by its column name.
+
+```rust
+...
+ let items = pg_client
+ .query(SELECT_ALL_QUERY, &[])
+ .expect("select all failed");
+
+ println!("listing items...");
+
+ for item in items {
+ let id: i32 = item.get("id");
+ let name: String = item.get("name");
+ let quantity: i32 = item.get("quantity");
+ println!(
+ "item info: id = {}, name = {}, quantity = {} ",
+ id, name, quantity
+ );
+ }
+...
+```
+
+### Update data
+
+The `update_data` function randomly updates the quantity for all the items. Since the `insert_data` function added `5` rows, the `for` loop accounts for the same: `for id in 1..=5`.
+
+> [!TIP]
+> Note that we use `query` instead of `execute` since we intend to get back the `id` and the newly generated `quantity` (using [RETURNING clause](https://www.postgresql.org/docs/current/dml-returning.html)).
+
+```rust
+const UPDATE_QUERY: &str = "UPDATE inventory SET quantity = $1 WHERE name = $2 RETURNING quantity;";
+
+fn update_data(pg_client: &mut postgres::Client) {
+ let stmt = pg_client
+ .prepare_typed(&UPDATE_QUERY, &[Type::INT4, Type::VARCHAR])
+ .expect("failed to create prepared statement");
+
+ for id in 1..=5 {
+ let row = pg_client
+ .query_one(
+ &stmt,
+ &[
+ &rand::thread_rng().gen_range(10..=50),
+ &("item-".to_owned() + &id.to_string()),
+ ],
+ )
+ .expect("update failed");
+
+ let quantity: i32 = row.get("quantity");
+ println!("updated item id {} to quantity = {}", id, quantity);
+ }
+}
+```
+
+### Delete data
+
+Finally, the `delete` function demonstrates how to remove an item from the `inventory` table by its `id`. The `id` is chosen randomly - it's a random integer between `1` and `5` (inclusive), since the `insert_data` function added `5` rows to start with.
+
+> [!TIP]
+> Note that we use `query` instead of `execute` since we intend to get back the info about the item we just deleted (using [RETURNING clause](https://www.postgresql.org/docs/current/dml-returning.html)).
+
+```rust
+const DELETE_QUERY: &str = "DELETE FROM inventory WHERE id = $1 RETURNING id, name, quantity;";
+
+fn delete(pg_client: &mut postgres::Client) {
+ let stmt = pg_client
+ .prepare_typed(&DELETE_QUERY, &[Type::INT4])
+ .expect("failed to create prepared statement");
+
+ let item = pg_client
+ .query_one(&stmt, &[&rand::thread_rng().gen_range(1..=5)])
+ .expect("delete failed");
+
+ let id: i32 = item.get(0);
+ let name: String = item.get(1);
+ let quantity: i32 = item.get(2);
+ println!(
+ "deleted item info: id = {}, name = {}, quantity = {} ",
+ id, name, quantity
+ );
+}
+```
+
+## Run the application
+
+1. To begin with, run the following command to clone the sample repository:
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-postgresql-rust-quickstart.git
+ ```
+
+2. Set the required environment variables with the values you copied from the Azure portal:
+
+ ```bash
+ export POSTGRES_HOST=<server name e.g. my-server.postgres.database.azure.com>
+ export POSTGRES_USER=<admin username e.g. my-admin-user@my-server>
+ export POSTGRES_PASSWORD=<admin password>
+ export POSTGRES_DBNAME=<database name. it is optional and defaults to postgres>
+ ```
+
+3. To run the application, change into the directory where you cloned it and execute `cargo run`:
+
+ ```bash
+ cd azure-postgresql-rust-quickstart
+ cargo run
+ ```
+
+ You should see an output similar to this:
+
+ ```bash
+ dropped 'inventory' table
+ inserted item with id 1
+ inserted item with id 2
+ inserted item with id 3
+ inserted item with id 4
+ inserted item with id 5
+ quantity for item item-1 = 42
+ listing items...
+ item info: id = 1, name = item-1, quantity = 42
+ item info: id = 2, name = item-2, quantity = 43
+ item info: id = 3, name = item-3, quantity = 11
+ item info: id = 4, name = item-4, quantity = 32
+ item info: id = 5, name = item-5, quantity = 24
+ updated item id 1 to quantity = 27
+ updated item id 2 to quantity = 14
+ updated item id 3 to quantity = 31
+ updated item id 4 to quantity = 16
+ updated item id 5 to quantity = 10
+ deleted item info: id = 4, name = item-4, quantity = 16
+ ```
+
+4. To confirm, you can also connect to Azure Database for PostgreSQL [using psql](./quickstart-create-server-database-portal.md#connect-to-the-server-with-psql) and run queries against the database, for example:
+
+ ```sql
+ select * from inventory;
+ ```
+
+[Having issues? Let us know](https://aka.ms/postgres-doc-feedback)
+
+## Clean up resources
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Manage Azure Database for PostgreSQL server using Portal](./how-to-create-manage-server-portal.md)<br/>
+
+> [!div class="nextstepaction"]
+> [Manage Azure Database for PostgreSQL server using CLI](./how-to-manage-server-cli.md)<br/>
+
+[Cannot find what you are looking for? Let us know.](https://aka.ms/postgres-doc-feedback)
postgresql How To Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-alert-on-metric.md
+
+Title: Configure alerts - Azure portal - Azure Database for PostgreSQL - Single Server
+description: This article describes how to configure and access metric alerts for Azure Database for PostgreSQL - Single Server from the Azure portal.
+Last updated: 5/6/2019
+# Use the Azure portal to set up alerts on metrics for Azure Database for PostgreSQL - Single Server
+
+This article shows you how to set up Azure Database for PostgreSQL alerts using the Azure portal. You can receive an alert based on monitoring metrics for your Azure services.
+
+The alert triggers both when the condition is first met and again later when that condition is no longer met.
+
+You can configure an alert to do the following actions when it triggers:
+* Send email notifications to the service administrator and co-administrators.
+* Send email to additional emails that you specify.
+* Call a webhook.
+
+You can configure and get information about alert rules using:
+* [Azure portal](../../azure-monitor/alerts/alerts-metric.md#create-with-azure-portal)
+* [Azure CLI](../../azure-monitor/alerts/alerts-metric.md#with-azure-cli) (see the example command after this list)
+* [Azure Monitor REST API](/rest/api/monitor/metricalerts)
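+
+For example, the following Azure CLI command creates a metric alert rule. It's a minimal sketch: the server name, resource group, action group ID, and the 85 percent storage threshold are illustrative values.
+
+```azurecli
+az monitor metrics alert create \
+  --name storage-percent-alert \
+  --resource-group myresourcegroup \
+  --scopes $(az postgres server show --name mydemoserver --resource-group myresourcegroup --query id -o tsv) \
+  --condition "avg storage_percent > 85" \
+  --description "Storage is above 85 percent" \
+  --action <action-group-id>
+```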
+
+## Create an alert rule on a metric from the Azure portal
+1. In the [Azure portal](https://portal.azure.com/), select the Azure Database for PostgreSQL server you want to monitor.
+
+2. Under the **Monitoring** section of the sidebar, select **Alerts** as shown:
+
+ :::image type="content" source="./media/how-to-alert-on-metric/2-alert-rules.png" alt-text="Select Alert Rules":::
+
+3. Select **Add metric alert** (+ icon).
+
+4. The **Create rule** page opens as shown below. Fill in the required information:
+
+ :::image type="content" source="./media/how-to-alert-on-metric/4-add-rule-form.png" alt-text="Add metric alert form":::
+
+5. Within the **Condition** section, select **Add condition**.
+
+6. Select a metric from the list of signals to be alerted on. In this example, select "Storage percent".
+
+ :::image type="content" source="./media/how-to-alert-on-metric/6-configure-signal-logic.png" alt-text="Select metric":::
+
+7. Configure the alert logic including the **Condition** (ex. "Greater than"), **Threshold** (ex. 85 percent), **Time Aggregation**, **Period** of time the metric rule must be satisfied before the alert triggers (ex. "Over the last 30 minutes"), and **Frequency**.
+
+ Select **Done** when complete.
+
+ :::image type="content" source="./media/how-to-alert-on-metric/7-set-threshold-time.png" alt-text="Screenshot that highlights the Alert logic section and the Done button.":::
+
+8. Within the **Action Groups** section, select **Create New** to create a new group to receive notifications on the alert.
+
+9. Fill out the "Add action group" form with a name, short name, subscription, and resource group.
+
+10. Configure an **Email/SMS/Push/Voice** action type.
+
+ Choose "Email Azure Resource Manager Role" to select subscription Owners, Contributors, and Readers to receive notifications.
+
+ Optionally, provide a valid URI in the **Webhook** field if you want it called when the alert fires.
+
+ Select **OK** when completed.
+
+ :::image type="content" source="./media/how-to-alert-on-metric/10-action-group-type.png" alt-text="Screenshot that shows how to add a new action group.":::
+
+11. Specify an Alert rule name, Description, and Severity.
+
+ :::image type="content" source="./media/how-to-alert-on-metric/11-name-description-severity.png" alt-text="Action group":::
+
+12. Select **Create alert rule** to create the alert.
+
+ Within a few minutes, the alert is active and triggers as previously described.
+
+## Manage your alerts
+Once you have created an alert, you can select it and do the following actions:
+
+* View a graph showing the metric threshold and the actual values from the previous day relevant to this alert.
+* **Edit** or **Delete** the alert rule.
+* **Disable** or **Enable** the alert, if you want to temporarily stop or resume receiving notifications.
+
+## Next steps
+* Learn more about [configuring webhooks in alerts](../../azure-monitor/alerts/alerts-webhooks.md).
+* Get an [overview of metrics collection](../../azure-monitor/data-platform.md) to make sure your service is available and responsive.
postgresql How To Auto Grow Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-auto-grow-storage-cli.md
+
+Title: Auto-grow storage - Azure CLI - Azure Database for PostgreSQL - Single Server
+description: This article describes how you can configure storage auto-grow using the Azure CLI in Azure Database for PostgreSQL - Single Server.
+Last updated: 8/7/2019
+# Auto-grow Azure Database for PostgreSQL storage - Single Server using the Azure CLI
+This article describes how you can configure an Azure Database for PostgreSQL server storage to grow without impacting the workload.
+
+A server that [reaches the storage limit](./concepts-pricing-tiers.md#reaching-the-storage-limit) is set to read-only. When storage auto grow is enabled, for servers with less than 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage is below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB. The maximum storage limits specified [here](./concepts-pricing-tiers.md#storage) apply.
+
+## Prerequisites
+
+- You need an [Azure Database for PostgreSQL server](quickstart-create-server-database-azure-cli.md).
+
+- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Enable PostgreSQL server storage auto-grow
+
+Enable server auto-grow storage on an existing server with the following command:
+
+```azurecli-interactive
+az postgres server update --name mydemoserver --resource-group myresourcegroup --auto-grow Enabled
+```
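+
+To confirm the setting, you can query the server. This is a sketch that assumes the setting is surfaced as `storageProfile.storageAutogrow` in the command output:
+
+```azurecli-interactive
+az postgres server show --resource-group myresourcegroup --name mydemoserver --query "storageProfile.storageAutogrow"
+```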
+
+Enable server auto-grow storage while creating a new server with the following command:
+
+```azurecli-interactive
+az postgres server create --resource-group myresourcegroup --name mydemoserver --auto-grow Enabled --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2 --version 9.6
+```
+
+## Next steps
+
+Learn about [how to create alerts on metrics](how-to-alert-on-metric.md).
postgresql How To Auto Grow Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-auto-grow-storage-portal.md
+
+Title: Auto grow storage - Azure portal - Azure Database for PostgreSQL - Single Server
+description: This article describes how you can configure storage auto-grow using the Azure portal in Azure Database for PostgreSQL - Single Server.
+Last updated: 5/29/2019
+# Auto grow storage using the Azure portal in Azure Database for PostgreSQL - Single Server
+This article describes how you can configure an Azure Database for PostgreSQL server storage to grow without impacting the workload.
+
+When a server reaches the allocated storage limit, the server is marked as read-only. However, if you enable storage auto grow, the server storage increases to accommodate the growing data. For servers with less than 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage is below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB. The maximum storage limits specified [here](./concepts-pricing-tiers.md#storage) apply.
+
+## Prerequisites
+To complete this how-to guide, you need:
+- An [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md)
+
+## Enable storage auto grow
+
+Follow these steps to set PostgreSQL server storage auto grow:
+
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for PostgreSQL server.
+
+2. On the PostgreSQL server page, under **Settings**, click **Pricing tier** to open the pricing tier page.
+
+3. In the **Auto-growth** section, select **Yes** to enable storage auto grow.
+
+ :::image type="content" source="./media/how-to-auto-grow-storage-portal/3-auto-grow.png" alt-text="Azure Database for PostgreSQL - Settings_Pricing_tier - Auto-growth":::
+
+4. Click **OK** to save the changes.
+
+5. A notification will confirm that auto grow was successfully enabled.
+
+ :::image type="content" source="./media/how-to-auto-grow-storage-portal/5-auto-grow-successful.png" alt-text="Azure Database for PostgreSQL - auto-growth success":::
+
+## Next steps
+
+Learn about [how to create alerts on metrics](how-to-alert-on-metric.md).
postgresql How To Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-auto-grow-storage-powershell.md
+
+Title: Auto grow storage - Azure PowerShell - Azure Database for PostgreSQL
+description: This article describes how you can enable auto grow storage using PowerShell in Azure Database for PostgreSQL.
+Last updated: 06/08/2020
+# Auto grow storage in Azure Database for PostgreSQL server using PowerShell
+
+This article describes how you can configure an Azure Database for PostgreSQL server storage to grow
+without impacting the workload.
+
+Storage auto grow prevents your server from
+[reaching the storage limit](./concepts-pricing-tiers.md#reaching-the-storage-limit) and
+becoming read-only. For servers with 100 GB or less of provisioned storage, the size is increased by
+5 GB when the free space is below 10%. For servers with more than 100 GB of provisioned storage, the
+size is increased by 5% when the free space is below 10 GB. Maximum storage limits apply as
+specified in the storage section of the
+[Azure Database for PostgreSQL pricing tiers](./concepts-pricing-tiers.md#storage).
+
+> [!IMPORTANT]
+> Remember that storage can only be scaled up, not down.
+
+## Prerequisites
+
+To complete this how-to guide, you need:
+
+- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+ [Azure Cloud Shell](https://shell.azure.com/) in the browser
+- An [Azure Database for PostgreSQL server](quickstart-create-postgresql-server-database-using-azure-powershell.md)
+
+> [!IMPORTANT]
+> While the Az.PostgreSql PowerShell module is in preview, you must install it separately from the Az
+> PowerShell module using the following command: `Install-Module -Name Az.PostgreSql -AllowPrerelease`.
+> Once the Az.PostgreSql PowerShell module is generally available, it becomes part of future Az
+> PowerShell module releases and available natively from within Azure Cloud Shell.
+
+If you choose to use PowerShell locally, connect to your Azure account using the
+[Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
+
+## Enable PostgreSQL server storage auto grow
+
+Enable server auto grow storage on an existing server with the following command:
+
+```azurepowershell-interactive
+Update-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -StorageAutogrow Enabled
+```
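+
+To confirm the change, you can list the properties of the server object returned by `Get-AzPostgreSqlServer` and inspect the storage auto grow setting; a minimal sketch:
+
+```azurepowershell-interactive
+Get-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup | Format-List
+```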
+
+Enable server auto grow storage while creating a new server with the following command:
+
+```azurepowershell-interactive
+$Password = Read-Host -Prompt 'Please enter your password' -AsSecureString
+New-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -Sku GP_Gen5_2 -StorageAutogrow Enabled -Location westus -AdministratorUsername myadmin -AdministratorLoginPassword $Password
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to create and manage read replicas in Azure Database for PostgreSQL using PowerShell](how-to-read-replicas-powershell.md).
postgresql How To Configure Privatelink Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-privatelink-cli.md
+
+Title: Private Link - Azure CLI - Azure Database for PostgreSQL - Single server
+description: Learn how to configure private link for Azure Database for PostgreSQL - Single server from Azure CLI
+Last updated: 01/09/2020
+# Create and manage Private Link for Azure Database for PostgreSQL - Single server using CLI
+
+A Private Endpoint is the fundamental building block for private link in Azure. It enables Azure resources, like Virtual Machines (VMs), to communicate privately with private link resources. In this article, you will learn how to use the Azure CLI to create a VM in an Azure Virtual Network and an Azure Database for PostgreSQL Single server with an Azure private endpoint.
+
+> [!NOTE]
+> The private link feature is only available for Azure Database for PostgreSQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers.
+
+## Prerequisites
+
+To step through this how-to guide, you need:
+
+- An [Azure Database for PostgreSQL server and database](quickstart-create-server-database-azure-cli.md).
+
+If you decide to install and use Azure CLI locally instead, this quickstart requires you to use Azure CLI version 2.0.28 or later. To find your installed version, run `az --version`. See [Install Azure CLI](/cli/azure/install-azure-cli) for install or upgrade info.
+
+## Create a resource group
+
+Before you can create any resource, you have to create a resource group to host the Virtual Network. Create a resource group with [az group create](/cli/azure/group). This example creates a resource group named *myResourceGroup* in the *westeurope* location:
+
+```azurecli-interactive
+az group create --name myResourceGroup --location westeurope
+```
+
+## Create a Virtual Network
+Create a Virtual Network with [az network vnet create](/cli/azure/network/vnet). This example creates a default Virtual Network named *myVirtualNetwork* with one subnet named *mySubnet*:
+
+```azurecli-interactive
+az network vnet create \
+ --name myVirtualNetwork \
+ --resource-group myResourceGroup \
+ --subnet-name mySubnet
+```
+
+## Disable subnet private endpoint policies
+Azure deploys resources to a subnet within a virtual network, so you need to create or update the subnet to disable private endpoint [network policies](../../private-link/disable-private-endpoint-network-policy.md). Update a subnet configuration named *mySubnet* with [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update):
+
+```azurecli-interactive
+az network vnet subnet update \
+ --name mySubnet \
+ --resource-group myResourceGroup \
+ --vnet-name myVirtualNetwork \
+ --disable-private-endpoint-network-policies true
+```
+## Create the VM
+Create a VM with az vm create. When prompted, provide a password to be used as the sign-in credentials for the VM. This example creates a VM named *myVm*:
+```azurecli-interactive
+az vm create \
+ --resource-group myResourceGroup \
+ --name myVm \
+ --image Win2019Datacenter
+```
+
+ Note the public IP address of the VM. You will use this address to connect to the VM from the internet in the next step.
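+
+If you need to look up the address later, you can query it with `az vm show`; for example:
+
+```azurecli-interactive
+az vm show \
+  --resource-group myResourceGroup \
+  --name myVm \
+  --show-details \
+  --query publicIps \
+  --output tsv
+```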
+
+## Create an Azure Database for PostgreSQL - Single server
+Create an Azure Database for PostgreSQL server with the az postgres server create command. The name of your PostgreSQL server must be unique across Azure, so replace the placeholder value with your own unique server name:
+
+```azurecli-interactive
+# Create a server in the resource group
+az postgres server create \
+--name mydemoserver \
+--resource-group myresourcegroup \
+--location westeurope \
+--admin-user mylogin \
+--admin-password <server_admin_password> \
+--sku-name GP_Gen5_2
+```
+
+## Create the Private Endpoint
+Create a private endpoint for the PostgreSQL server in your Virtual Network:
+
+```azurecli-interactive
+az network private-endpoint create \
+ --name myPrivateEndpoint \
+ --resource-group myResourceGroup \
+ --vnet-name myVirtualNetwork \
+ --subnet mySubnet \
+ --private-connection-resource-id $(az resource show -g myResourcegroup -n mydemoserver --resource-type "Microsoft.DBforPostgreSQL/servers" --query "id" -o tsv) \
+ --group-id postgresqlServer \
+ --connection-name myConnection
+ ```
+
+## Configure the Private DNS Zone
+Create a Private DNS Zone for PostgreSQL server domain and create an association link with the Virtual Network.
+
+```azurecli-interactive
+az network private-dns zone create --resource-group myResourceGroup \
+ --name "privatelink.postgres.database.azure.com"
+az network private-dns link vnet create --resource-group myResourceGroup \
+   --zone-name "privatelink.postgres.database.azure.com" \
+ --name MyDNSLink \
+ --virtual-network myVirtualNetwork \
+ --registration-enabled false
+
+#Query for the network interface ID
+networkInterfaceId=$(az network private-endpoint show --name myPrivateEndpoint --resource-group myResourceGroup --query 'networkInterfaces[0].id' -o tsv)
+
+az resource show --ids $networkInterfaceId --api-version 2019-04-01 -o json
+# Copy the content for privateIPAddress and FQDN matching the Azure database for PostgreSQL name
+
+#Create DNS records
+az network private-dns record-set a create --name myserver --zone-name privatelink.postgres.database.azure.com --resource-group myResourceGroup
+az network private-dns record-set a add-record --record-set-name myserver --zone-name privatelink.postgres.database.azure.com --resource-group myResourceGroup -a <Private IP Address>
+```
+
+> [!NOTE]
+> The FQDN in the customer DNS setting does not resolve to the private IP configured. You will have to set up a DNS zone for the configured FQDN as shown [here](../../dns/dns-operations-recordsets-portal.md).
+
+> [!NOTE]
+> In some cases, the Azure Database for PostgreSQL and the VNet subnet are in different subscriptions. In these cases, you must ensure the following configuration:
+> - Make sure that both subscriptions have the **Microsoft.DBforPostgreSQL** resource provider registered. For more information, refer to [resource providers](../../azure-resource-manager/management/resource-providers-and-types.md).
+
+## Connect to a VM from the internet
+
+Connect to the VM *myVm* from the internet as follows:
+
+1. In the portal's search bar, enter *myVm*.
+
+1. Select the **Connect** button. After selecting the **Connect** button, **Connect to virtual machine** opens.
+
+1. Select **Download RDP File**. Azure creates a Remote Desktop Protocol (*.rdp*) file and downloads it to your computer.
+
+1. Open the downloaded *.rdp* file.
+
+ 1. If prompted, select **Connect**.
+
+ 1. Enter the username and password you specified when creating the VM.
+
+ > [!NOTE]
+ > You may need to select **More choices** > **Use a different account**, to specify the credentials you entered when you created the VM.
+
+1. Select **OK**.
+
+1. You may receive a certificate warning during the sign-in process. If you receive a certificate warning, select **Yes** or **Continue**.
+
+1. Once the VM desktop appears, minimize it to go back to your local desktop.
+
+## Access the PostgreSQL server privately from the VM
+
+1. In the Remote Desktop of *myVM*, open PowerShell.
+
+2. Enter `nslookup mydemopostgresserver.privatelink.postgres.database.azure.com`.
+
+ You'll receive a message similar to this:
+
+ ```azurepowershell
+ Server: UnKnown
+ Address: 168.63.129.16
+ Non-authoritative answer:
+ Name: mydemopostgresserver.privatelink.postgres.database.azure.com
+ Address: 10.1.3.4
+ ```
+
+3. Test the private link connection for the PostgreSQL server using any available client. The following example uses [Azure Data Studio](/sql/azure-data-studio/download) to do the operation.
+
+4. In **New connection**, enter or select this information:
+
+ | Setting | Value |
+ | - | -- |
+ | Server type| Select **PostgreSQL**.|
+ | Server name| Select *mydemopostgresserver.privatelink.postgres.database.azure.com* |
+    | User name | Enter the username in the format *username@servername*, which you provided during the PostgreSQL server creation. |
+ |Password |Enter a password provided during the PostgreSQL server creation. |
+ |SSL|Select **Required**.|
+ ||
+
+5. Select **Connect**.
+
+6. Browse databases from the left menu.
+
+7. (Optional) Create or query information from the PostgreSQL server.
+
+8. Close the remote desktop connection to myVm.
+
+## Clean up resources
+When no longer needed, you can use az group delete to remove the resource group and all the resources it contains:
+
+```azurecli-interactive
+az group delete --name myResourceGroup --yes
+```
+
+## Next steps
+- Learn more about [What is Azure private endpoint](../../private-link/private-endpoint-overview.md)
postgresql How To Configure Privatelink Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-privatelink-portal.md
+
+Title: Private Link - Azure portal - Azure Database for PostgreSQL - Single server
+description: Learn how to configure private link for Azure Database for PostgreSQL - Single server from Azure portal
+Last updated: 01/09/2020
+# Create and manage Private Link for Azure Database for PostgreSQL - Single server using Portal
+
+A Private Endpoint is the fundamental building block for private link in Azure. It enables Azure resources, like Virtual Machines (VMs), to communicate privately with private link resources. In this article, you will learn how to use the Azure portal to create a VM in an Azure Virtual Network and an Azure Database for PostgreSQL Single server with an Azure private endpoint.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+> [!NOTE]
+> The private link feature is only available for Azure Database for PostgreSQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers.
+
+## Sign in to Azure
+Sign in to the [Azure portal](https://portal.azure.com).
+
+## Create an Azure VM
+
+In this section, you will create a virtual network and the subnet to host the VM that is used to access your Private Link resource (a PostgreSQL server in Azure).
+
+### Create the virtual network
+In this section, you will create a Virtual Network and the subnet to host the VM that is used to access your Private Link resource.
+
+1. On the upper-left side of the screen, select **Create a resource** > **Networking** > **Virtual network**.
+2. In **Create virtual network**, enter or select this information:
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter *MyVirtualNetwork*. |
+ | Address space | Enter *10.1.0.0/16*. |
+ | Subscription | Select your subscription.|
+ | Resource group | Select **Create new**, enter *myResourceGroup*, then select **OK**. |
+ | Location | Select **West Europe**.|
+ | Subnet - Name | Enter *mySubnet*. |
+ | Subnet - Address range | Enter *10.1.0.0/24*. |
+ |||
+3. Leave the rest as default and select **Create**.
+
+### Create Virtual Machine
+
+1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Compute** > **Virtual Machine**.
+
+2. In **Create a virtual machine - Basics**, enter or select this information:
+
+ | Setting | Value |
+ | - | -- |
+ | **PROJECT DETAILS** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroup**. You created this in the previous section. |
+ | **INSTANCE DETAILS** | |
+ | Virtual machine name | Enter *myVm*. |
+ | Region | Select **West Europe**. |
+ | Availability options | Leave the default **No infrastructure redundancy required**. |
+ | Image | Select **Windows Server 2019 Datacenter**. |
+ | Size | Leave the default **Standard DS1 v2**. |
+ | **ADMINISTRATOR ACCOUNT** | |
+ | Username | Enter a username of your choosing. |
+ | Password | Enter a password of your choosing. The password must be at least 12 characters long and meet the [defined complexity requirements](../../virtual-machines/windows/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).|
+ | Confirm Password | Reenter password. |
+ | **INBOUND PORT RULES** | |
+ | Public inbound ports | Leave the default **None**. |
+ | **SAVE MONEY** | |
+ | Already have a Windows license? | Leave the default **No**. |
+ |||
+
+1. Select **Next: Disks**.
+
+1. In **Create a virtual machine - Disks**, leave the defaults and select **Next: Networking**.
+
+1. In **Create a virtual machine - Networking**, select this information:
+
+ | Setting | Value |
+ | - | -- |
+ | Virtual network | Leave the default **MyVirtualNetwork**. |
+ | Address space | Leave the default **10.1.0.0/24**.|
+ | Subnet | Leave the default **mySubnet (10.1.0.0/24)**.|
+ | Public IP | Leave the default **(new) myVm-ip**. |
+ | Public inbound ports | Select **Allow selected ports**. |
+ | Select inbound ports | Select **HTTP** and **RDP**.|
+ |||
+
+1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
+
+1. When you see the **Validation passed** message, select **Create**.
+
+> [!NOTE]
+> In some cases the Azure Database for PostgreSQL and the VNet-subnet are in different subscriptions. In these cases you must ensure the following configurations:
+> - Make sure that both subscriptions have the **Microsoft.DBforPostgreSQL** resource provider registered. For more information, refer to [resource-manager-registration][resource-manager-portal].
+
+## Create an Azure Database for PostgreSQL Single server
+
+In this section, you will create an Azure Database for PostgreSQL server in Azure.
+
+1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Databases** > **Azure Database for PostgreSQL**.
+
+1. In **Azure Database for PostgreSQL deployment option**, select **Single server** and provide this information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroup**. You created this in the previous section.|
+ | **Server details** | |
+ |Server name | Enter *myserver*. If this name is taken, create a unique name.|
+ | Admin username| Enter an administrator name of your choosing. |
+ | Password | Enter a password of your choosing. The password must be at least 8 characters long and meet the defined requirements. |
+ | Location | Select an Azure region where you want your PostgreSQL server to reside. |
+ |Version | Select the database version of the PostgreSQL server that is required.|
+ | Compute + Storage| Select the pricing tier that is needed for the server based on the workload. |
+ |||
+
+7. Select **OK**.
+8. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
+9. When you see the **Validation passed** message, select **Create**.
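+
+The server can also be created with the Azure CLI. A minimal sketch, assuming the General Purpose tier with 2 vCores and PostgreSQL version 11; substitute the tier and version you selected above.
+
+```azurecli-interactive
+# Sketch: create the Single Server instance. The server name must be globally unique.
+az postgres server create \
+  --resource-group myResourceGroup \
+  --name myserver \
+  --location westeurope \
+  --admin-user myadmin \
+  --admin-password <password> \
+  --sku-name GP_Gen5_2 \
+  --version 11
+```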
+
+## Create a private endpoint
+
+In this section, you will add a private endpoint to the PostgreSQL server you created in the previous section.
+
+1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Networking** > **Private Link**.
+2. In **Private Link Center - Overview**, on the option to **Build a private connection to a service**, select **Start**.
+
+ :::image type="content" source="media/concepts-data-access-and-security-private-link/private-link-overview.png" alt-text="Private Link overview":::
+
+1. In **Create a private endpoint - Basics**, enter or select this information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroup**. You created this in the previous section.|
+ | **Instance Details** | |
+ | Name | Enter *myPrivateEndpoint*. If this name is taken, create a unique name. |
+ |Region|Select **West Europe**.|
+ |||
+5. Select **Next: Resource**.
+6. In **Create a private endpoint - Resource**, enter or select this information:
+
+ | Setting | Value |
+ | - | -- |
+ |Connection method | Select **Connect to an Azure resource in my directory**.|
+ | Subscription| Select your subscription. |
+ | Resource type | Select **Microsoft.DBforPostgreSQL/servers**. |
+ | Resource |Select *myserver*.|
+ |Target sub-resource |Select *postgresqlServer*.|
+ |||
+7. Select **Next: Configuration**.
+8. In **Create a private endpoint - Configuration**, enter or select this information:
+
+ | Setting | Value |
+ | - | -- |
+ |**NETWORKING**| |
+ | Virtual network| Select *MyVirtualNetwork*. |
+ | Subnet | Select *mySubnet*. |
+ |**PRIVATE DNS INTEGRATION**||
+ |Integrate with private DNS zone |Select **Yes**. |
+ |Private DNS Zone |Select *(New)privatelink.postgres.database.azure.com* |
+ |||
+
+ > [!Note]
+ > Use the predefined private DNS zone for your service or provide your preferred DNS zone name. Refer to the [Azure services DNS zone configuration](../../private-link/private-endpoint-dns.md) for details.
+
+1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
+2. When you see the **Validation passed** message, select **Create**.
+
+ :::image type="content" source="media/concepts-data-access-and-security-private-link/show-postgres-private-link-1.png" alt-text="Private Link created":::
+
+ > [!NOTE]
+> The FQDN in the customer DNS setting does not resolve to the private IP configured. You will have to set up a DNS zone for the configured FQDN as shown [here](../../dns/dns-operations-recordsets-portal.md).
+
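+If you integrated with the private DNS zone shown above, name resolution is handled for you. Otherwise, you can add the record manually. The following sketch assumes the zone *privatelink.postgres.database.azure.com* already exists and is linked to the virtual network, and uses the private IP from the example later in this article.
+
+```azurecli-interactive
+# Sketch: map the server's privatelink FQDN to the private endpoint's IP address.
+az network private-dns record-set a add-record \
+  --resource-group myResourceGroup \
+  --zone-name privatelink.postgres.database.azure.com \
+  --record-set-name mydemopostgresserver \
+  --ipv4-address 10.1.3.4
+```
+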
+## Connect to a VM using Remote Desktop (RDP)
+
+After you've created **myVm**, connect to it from the internet as follows:
+
+1. In the portal's search bar, enter *myVm*.
+
+1. Select the **Connect** button. After you select it, **Connect to virtual machine** opens.
+
+1. Select **Download RDP File**. Azure creates a Remote Desktop Protocol (*.rdp*) file and downloads it to your computer.
+
+1. Open the downloaded *.rdp* file.
+
+ 1. If prompted, select **Connect**.
+
+ 1. Enter the username and password you specified when creating the VM.
+
+ > [!NOTE]
> You may need to select **More choices** > **Use a different account** to specify the credentials you entered when you created the VM.
+
+1. Select **OK**.
+
+1. You may receive a certificate warning during the sign-in process. If you receive a certificate warning, select **Yes** or **Continue**.
+
+1. Once the VM desktop appears, minimize it to go back to your local desktop.
+
+## Access the PostgreSQL server privately from the VM
+
+1. In the Remote Desktop of *myVM*, open PowerShell.
+
+2. Enter `nslookup mydemopostgresserver.privatelink.postgres.database.azure.com`.
+
+ You'll receive a message similar to this:
+ ```output
+ Server: UnKnown
+ Address: 168.63.129.16
+ Non-authoritative answer:
+ Name: mydemopostgresserver.privatelink.postgres.database.azure.com
+ Address: 10.1.3.4
+ ```
+
+3. Test the private link connection for the PostgreSQL server using any available client. The example below uses [Azure Data Studio](/sql/azure-data-studio/download).
+
+4. In **New connection**, enter or select this information:
+
+ | Setting | Value |
+ | - | -- |
+ | Server type| Select **PostgreSQL**.|
+ | Server name| Enter *mydemopostgresserver.privatelink.postgres.database.azure.com*. |
+ | User name | Enter the user name in the form *username@servername*, as provided during the PostgreSQL server creation. |
+ |Password |Enter the password provided during the PostgreSQL server creation. |
+ |SSL|Select **Required**.|
+ ||
+
+5. Select **Connect**.
+
+6. Browse databases from the left menu.
+
+7. (Optional) Create or query information from the PostgreSQL server.
+
+8. Close the remote desktop connection to *myVm*.
+
+## Clean up resources
+When you're done using the private endpoint, PostgreSQL server, and the VM, delete the resource group and all of the resources it contains:
+
+1. Enter *myResourceGroup* in the **Search** box at the top of the portal and select *myResourceGroup* from the search results.
+2. Select **Delete resource group**.
+3. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME** and select **Delete**.
+
+## Next steps
+
+In this how-to, you created a VM on a virtual network, an Azure Database for PostgreSQL - Single server, and a private endpoint for private access. You connected to one VM from the internet and securely communicated to the PostgreSQL server using Private Link. To learn more about private endpoints, see [What is Azure private endpoint](../../private-link/private-endpoint-overview.md).
+
+<!-- Link references, to text, Within this same GitHub repo. -->
+[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
postgresql How To Configure Server Logs In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-server-logs-in-portal.md
+
+ Title: Manage logs - Azure portal - Azure Database for PostgreSQL - Single Server
+description: This article describes how to configure and access the server logs (.log files) in Azure Database for PostgreSQL - Single Server from the Azure portal.
+Last updated: 5/6/2019
+# Configure and access Azure Database for PostgreSQL - Single Server logs from the Azure portal
+
+You can configure, list, and download the [Azure Database for PostgreSQL logs](concepts-server-logs.md) from the Azure portal.
+
+## Prerequisites
+The steps in this article require that you have [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md).
+
+## Configure logging
+Configure access to the query logs and error logs.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+2. Select your Azure Database for PostgreSQL server.
+
+3. Under the **Monitoring** section in the sidebar, select **Server logs**.
+
+ :::image type="content" source="./media/how-to-configure-server-logs-in-portal/1-select-server-logs-configure.png" alt-text="Screenshot of Server logs options":::
+
+4. To see the server parameters, select **Click here to enable logs and configure log parameters**.
+
+5. Change the parameters that you need to adjust. All changes you make in this session are highlighted in purple.
+
+ After you have changed the parameters, select **Save**. Or, you can discard your changes.
+
+ :::image type="content" source="./media/how-to-configure-server-logs-in-portal/3-save-discard.png" alt-text="Screenshot of Server Parameters options":::
+
+From the **Server Parameters** page, you can return to the list of logs by closing the page.
+
+## View list and download logs
+After logging begins, you can view a list of available logs, and download individual log files.
+
+1. Open the Azure portal.
+
+2. Select your Azure Database for PostgreSQL server.
+
+3. Under the **Monitoring** section in the sidebar, select **Server logs**. The page shows a list of your log files.
+
+ :::image type="content" source="./media/how-to-configure-server-logs-in-portal/4-server-logs-list.png" alt-text="Screenshot of Server logs page, with list of logs highlighted":::
+
+ > [!TIP]
+ > The naming convention of the log is **postgresql-yyyy-mm-dd_hhmmss.log**. The date and time used in the file name is the time when the log was issued. The log files rotate every hour or 100 MB, whichever comes first.
+
+4. If needed, use the search box to quickly narrow down to a specific log, based on date and time. The search is on the name of the log.
+
+ :::image type="content" source="./media/how-to-configure-server-logs-in-portal/5-search.png" alt-text="Screenshot of Server logs page, with search box and results highlighted":::
+
+5. To download individual log files, select the down-arrow icon next to each log file in the table row.
+
+ :::image type="content" source="./media/how-to-configure-server-logs-in-portal/6-download.png" alt-text="Screenshot of Server logs page, with down-arrow icon highlighted":::
+
+## Next steps
+- See [Access server logs in CLI](how-to-configure-server-logs-using-cli.md) to learn how to download logs programmatically.
+- Learn more about [server logs](concepts-server-logs.md) in Azure Database for PostgreSQL.
+- For more information about the parameter definitions and PostgreSQL logging, see the PostgreSQL documentation on [error reporting and logging](https://www.postgresql.org/docs/current/static/runtime-config-logging.html).
+
postgresql How To Configure Server Logs Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-server-logs-using-cli.md
+
+ Title: Manage logs - Azure CLI - Azure Database for PostgreSQL - Single Server
+description: This article describes how to configure and access the server logs (.log files) in Azure Database for PostgreSQL - Single Server by using the Azure CLI.
+ms.devlang: azurecli
+Last updated: 5/6/2019
+# Configure and access server logs by using Azure CLI
+You can download the PostgreSQL server error logs by using the command-line interface (Azure CLI). However, access to transaction logs isn't supported.
+
+## Prerequisites
+To step through this how-to guide, you need:
+- [Azure Database for PostgreSQL server](quickstart-create-server-database-azure-cli.md)
+- The [Azure CLI](/cli/azure/install-azure-cli) command-line utility or Azure Cloud Shell in the browser
+
+## Configure logging
+You can configure the server to access query logs and error logs. Error logs can have auto-vacuum, connection, and checkpoint information.
+1. Turn on logging.
+2. To enable query logging, update **log\_statement** and **log\_min\_duration\_statement**.
+3. Update the retention period (see the sketch below).
+
+For more information, see [Customizing server configuration parameters](how-to-configure-server-parameters-using-cli.md).
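+
+As a sketch, the three steps above can be run with [az postgres server configuration set](/cli/azure/postgres/server/configuration); the server and resource group names match the examples below, and the parameter values are illustrative.
+
+```azurecli-interactive
+# Log every statement (permitted values: none, ddl, mod, all).
+az postgres server configuration set --resource-group myresourcegroup --server mydemoserver --name log_statement --value all
+
+# Also log any statement that runs for 1000 ms or longer.
+az postgres server configuration set --resource-group myresourcegroup --server mydemoserver --name log_min_duration_statement --value 1000
+
+# Keep log files for 7 days.
+az postgres server configuration set --resource-group myresourcegroup --server mydemoserver --name log_retention_days --value 7
+```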
+
+## List logs
+To list the available log files for your server, run the [az postgres server-logs list](/cli/azure/postgres/server-logs) command.
+
+The following example lists the log files for the server **mydemoserver.postgres.database.azure.com** under the resource group **myresourcegroup** and directs the list to a text file called **log\_files\_list.txt**.
+```azurecli-interactive
+az postgres server-logs list --resource-group myresourcegroup --server mydemoserver > log_files_list.txt
+```
+## Download logs locally from the server
+With the [az postgres server-logs download](/cli/azure/postgres/server-logs) command, you can download individual log files for your server.
+
+Use the following example to download the specific log file for the server **mydemoserver.postgres.database.azure.com** under the resource group **myresourcegroup** to your local environment.
+```azurecli-interactive
+az postgres server-logs download --name 20170414-mydemoserver-postgresql.log --resource-group myresourcegroup --server mydemoserver
+```
+## Next steps
+- To learn more about server logs, see [Server logs in Azure Database for PostgreSQL](concepts-server-logs.md).
+- For more information about server parameters, see [Customize server configuration parameters using Azure CLI](how-to-configure-server-parameters-using-cli.md).
postgresql How To Configure Server Parameters Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-server-parameters-using-cli.md
+
+ Title: Configure parameters - Azure Database for PostgreSQL - Single Server
+description: This article describes how to configure Postgres parameters in Azure Database for PostgreSQL - Single Server using the Azure CLI.
+ms.devlang: azurecli
+Last updated: 06/19/2019
+# Customize server configuration parameters for Azure Database for PostgreSQL - Single Server using Azure CLI
+You can list, show, and update configuration parameters for an Azure PostgreSQL server using the command-line interface (Azure CLI). A subset of engine configurations is exposed at the server level and can be modified.
+
+## Prerequisites
+To step through this how-to guide, you need:
+- An Azure Database for PostgreSQL server and database, created by following [Create an Azure Database for PostgreSQL](quickstart-create-server-database-azure-cli.md)
+- The [Azure CLI](/cli/azure/install-azure-cli) installed on your machine, or [Azure Cloud Shell](../../cloud-shell/overview.md) in the Azure portal in your browser
+
+## List server configuration parameters for Azure Database for PostgreSQL server
+To list all modifiable parameters in a server and their values, run the [az postgres server configuration list](/cli/azure/postgres/server/configuration) command.
+
+You can list the server configuration parameters for the server **mydemoserver.postgres.database.azure.com** under resource group **myresourcegroup**.
+```azurecli-interactive
+az postgres server configuration list --resource-group myresourcegroup --server mydemoserver
+```
+## Show server configuration parameter details
+To show details about a particular configuration parameter for a server, run the [az postgres server configuration show](/cli/azure/postgres/server/configuration) command.
+
+This example shows details of the **log\_min\_messages** server configuration parameter for server **mydemoserver.postgres.database.azure.com** under resource group **myresourcegroup**.
+```azurecli-interactive
+az postgres server configuration show --name log_min_messages --resource-group myresourcegroup --server mydemoserver
+```
+## Modify server configuration parameter value
+You can also modify the value of a certain server configuration parameter, which updates the underlying configuration value for the PostgreSQL server engine. To update the configuration, use the [az postgres server configuration set](/cli/azure/postgres/server/configuration) command.
+
+To update the **log\_min\_messages** server configuration parameter of server **mydemoserver.postgres.database.azure.com** under resource group **myresourcegroup**, run the following command.
+```azurecli-interactive
+az postgres server configuration set --name log_min_messages --resource-group myresourcegroup --server mydemoserver --value INFO
+```
+To reset the value of a configuration parameter, omit the optional `--value` parameter, and the service applies the default value. For the above example, it would look like:
+```azurecli-interactive
+az postgres server configuration set --name log_min_messages --resource-group myresourcegroup --server mydemoserver
+```
+This command resets the **log\_min\_messages** configuration to the default value **WARNING**. For more information on server configuration and permissible values, see PostgreSQL documentation on [Server Configuration](https://www.postgresql.org/docs/9.6/static/runtime-config.html).
+
+## Next steps
+- [Learn how to restart a server](how-to-restart-server-cli.md)
+- To configure and access server logs, see [Server Logs in Azure Database for PostgreSQL](concepts-server-logs.md)
postgresql How To Configure Server Parameters Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-server-parameters-using-portal.md
+
+ Title: Configure server parameters - Azure portal - Azure Database for PostgreSQL - Single Server
+description: This article describes how to configure the Postgres parameters in Azure Database for PostgreSQL through the Azure portal.
+Last updated: 02/28/2018
+# Configure server parameters in Azure Database for PostgreSQL - Single Server via the Azure portal
+You can list, show, and update configuration parameters for an Azure Database for PostgreSQL server through the Azure portal.
+
+## Prerequisites
+To step through this how-to guide, you need:
+- [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md)
+
+## Viewing and editing parameters
+1. Open the [Azure portal](https://portal.azure.com).
+
+2. Select your Azure Database for PostgreSQL server.
+
+3. Under the **SETTINGS** section, select **Server parameters**. The page shows a list of parameters, their values, and descriptions.
+
+4. Select the **drop-down** button to see the possible values for enumerated-type parameters like client_min_messages.
+
+5. Select or hover over the **i** (information) button to see the range of possible values for numeric parameters like cpu_index_tuple_cost.
+
+6. If needed, use the **search box** to narrow down to a specific parameter. The search is on the name and description of the parameters.
+
+7. Change the parameter values you would like to adjust. All changes you make in a session are highlighted in purple. Once you have changed the values, you can select **Save**. Or you can **Discard** your changes.
+
+8. If you have saved new values for the parameters, you can always revert everything back to the default values by selecting **Reset all to default**.
+
+## Next steps
+Learn about:
+- [Overview of server parameters in Azure Database for PostgreSQL](concepts-servers.md)
+- [Configuring parameters using the Azure CLI](how-to-configure-server-parameters-using-cli.md)
postgresql How To Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-server-parameters-using-powershell.md
+
+ Title: Configure server parameters - Azure PowerShell - Azure Database for PostgreSQL
+description: This article describes how to configure the service parameters in Azure Database for PostgreSQL using PowerShell.
+ms.devlang: azurepowershell
+Last updated: 06/08/2020
+# Customize Azure Database for PostgreSQL server parameters using PowerShell
+
+You can list, show, and update configuration parameters for an Azure Database for PostgreSQL server using
+PowerShell. A subset of engine configurations is exposed at the server level and can be modified.
+
+## Prerequisites
+
+To complete this how-to guide, you need:
+
+- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+ [Azure Cloud Shell](https://shell.azure.com/) in the browser
+- An [Azure Database for PostgreSQL server](quickstart-create-postgresql-server-database-using-azure-powershell.md)
+
+> [!IMPORTANT]
+> While the Az.PostgreSql PowerShell module is in preview, you must install it separately from the Az
+> PowerShell module using the following command: `Install-Module -Name Az.PostgreSql -AllowPrerelease`.
+> Once the Az.PostgreSql PowerShell module is generally available, it becomes part of future Az
+> PowerShell module releases and available natively from within Azure Cloud Shell.
+
+If you choose to use PowerShell locally, connect to your Azure account using the
+[Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
+
+## List server configuration parameters for Azure Database for PostgreSQL server
+
+To list all modifiable parameters in a server and their values, run the `Get-AzPostgreSqlConfiguration`
+cmdlet.
+
+The following example lists the server configuration parameters for the server **mydemoserver** in
+resource group **myresourcegroup**.
+
+```azurepowershell-interactive
+Get-AzPostgreSqlConfiguration -ResourceGroupName myresourcegroup -ServerName mydemoserver
+```
+
+For the definition of each of the listed parameters, see the PostgreSQL reference section on
+[Server Configuration](https://www.postgresql.org/docs/current/runtime-config.html).
+
+## Show server configuration parameter details
+
+To show details about a particular configuration parameter for a server, run the
+`Get-AzPostgreSqlConfiguration` cmdlet and specify the **Name** parameter.
+
+This example shows details of the **log\_min\_messages** server configuration parameter for server
+**mydemoserver** under resource group **myresourcegroup**.
+
+```azurepowershell-interactive
+Get-AzPostgreSqlConfiguration -Name log_min_messages -ResourceGroupName myresourcegroup -ServerName mydemoserver
+```
+
+## Modify a server configuration parameter value
+
+You can also modify the value of a certain server configuration parameter, which updates the
+underlying configuration value for the PostgreSQL server engine. To update the configuration, use the
+`Update-AzPostgreSqlConfiguration` cmdlet.
+
+To update the **log\_min\_messages** server configuration parameter of server
+**mydemoserver** under resource group **myresourcegroup**, run the following command.
+
+```azurepowershell-interactive
+Update-AzPostgreSqlConfiguration -Name log_min_messages -ResourceGroupName myresourcegroup -ServerName mydemoserver -Value INFO
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Auto grow storage in Azure Database for PostgreSQL server using PowerShell](how-to-auto-grow-storage-powershell.md).
postgresql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-sign-in-azure-ad-authentication.md
+
+ Title: Use Azure Active Directory - Azure Database for PostgreSQL - Single Server
+description: Learn about how to set up Azure Active Directory (Azure AD) for authentication with Azure Database for PostgreSQL - Single Server
+Last updated: 05/26/2021
+# Use Azure Active Directory for authentication with PostgreSQL
+
+This article walks you through the steps to configure Azure Active Directory (Azure AD) access with Azure Database for PostgreSQL, and how to connect using an Azure AD token.
+
+## Setting the Azure AD Admin user
+
+Only Azure AD administrator users can create/enable users for Azure AD-based authentication. We recommend not using the Azure AD administrator for regular database operations, as it has elevated user permissions (e.g. CREATEDB).
+
+To set the Azure AD administrator (a user or a group), follow these steps:
+
+1. In the Azure portal, select the instance of Azure Database for PostgreSQL that you want to enable for Azure AD.
+2. Under **Settings**, select **Active Directory Admin**:
+
+![set azure ad administrator][2]
+
+3. Select a valid Azure AD user in the customer tenant to be the Azure AD administrator.
+
+> [!IMPORTANT]
+> When setting the administrator, a new user is added to the Azure Database for PostgreSQL server with full administrator permissions.
+> The Azure AD Admin user in Azure Database for PostgreSQL will have the role `azure_ad_admin`.
+> Only one Azure AD admin can be created per PostgreSQL server and selection of another one will overwrite the existing Azure AD admin configured for the server.
+> You can specify an Azure AD group instead of an individual user to have multiple administrators.
+
+If you specify an Azure AD group, note that you'll sign in with the group name for administration purposes.
+
+## Connecting to Azure Database for PostgreSQL using Azure AD
+
+The following high-level diagram summarizes the workflow of using Azure AD authentication with Azure Database for PostgreSQL:
+
+![authentication flow][1]
+
+We've designed the Azure AD integration to work with common PostgreSQL tools like psql, which are not Azure AD aware and only support specifying username and password when connecting to PostgreSQL. We pass the Azure AD token as the password as shown in the picture above.
+
+We currently have tested the following clients:
+
+- psql command line (use the `PGPASSWORD` environment variable to pass the token; see step 3 for more information)
+- Azure Data Studio (using the PostgreSQL extension)
+- Other libpq-based clients (for example, common application frameworks and ORMs)
+- pgAdmin (uncheck **Connect now** at server creation; see step 4 for more information)
+
+The steps that a user or application needs to follow to authenticate with Azure AD are described below.
+
+### Prerequisites
+
+You can follow along in Azure Cloud Shell, an Azure VM, or on your local machine. Make sure you have the [Azure CLI installed](/cli/azure/install-azure-cli).
+
+## Authenticate with Azure AD as a single user
+
+### Step 1: Log in to the user's Azure subscription
+
+Start by authenticating with Azure AD using the Azure CLI tool. This step is not required in Azure Cloud Shell.
+
+```azurecli-interactive
+az login
+```
+
+The command launches a browser window to the Azure AD authentication page. It requires you to provide your Azure AD user ID and password.
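+
+If you're working on a machine without a browser (for example, over SSH), the device code flow is an alternative. This is a standard Azure CLI option, not specific to PostgreSQL:
+
+```azurecli-interactive
+az login --use-device-code
+```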
+
+### Step 2: Retrieve Azure AD access token
+
+Invoke the Azure CLI tool to acquire an access token for the Azure AD authenticated user from step 1 to access Azure Database for PostgreSQL.
+
+Example (for Public Cloud):
+
+```azurecli-interactive
+az account get-access-token --resource https://ossrdbms-aad.database.windows.net
+```
+
+The above resource value must be specified exactly as shown. For other clouds, the resource value can be looked up using:
+
+```azurecli-interactive
+az cloud show
+```
+
+For Azure CLI version 2.0.71 and later, the command can be specified in the following more convenient version for all clouds:
+
+```azurecli-interactive
+az account get-access-token --resource-type oss-rdbms
+```
+
+After authentication is successful, Azure AD will return an access token:
+
+```json
+{
+ "accessToken": "TOKEN",
+ "expiresOn": "...",
+ "subscription": "...",
+ "tenant": "...",
+ "tokenType": "Bearer"
+}
+```
+
+The token is a Base64-encoded string that encodes all the information about the authenticated user and is targeted to the Azure Database for PostgreSQL service.
+
+### Step 3: Use token as password for logging in with client psql
+
+When connecting you need to use the access token as the PostgreSQL user password.
+
+When using the `psql` command line client, the access token needs to be passed through the `PGPASSWORD` environment variable, since the access token exceeds the password length that `psql` can accept directly:
+
+Windows Example:
+
+```cmd
+set PGPASSWORD=<copy/pasted TOKEN value from step 2>
+```
+
+```PowerShell
+$env:PGPASSWORD='<copy/pasted TOKEN value from step 2>'
+```
+
+Linux/macOS Example:
+
+```shell
+export PGPASSWORD=<copy/pasted TOKEN value from step 2>
+```
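+
+When scripting the sign-in, steps 2 and 3 can be combined. A sketch for bash, assuming Azure CLI version 2.0.71 or later:
+
+```bash
+# Fetch a fresh token and export it as the psql password in one step.
+export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)
+```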
+
+Now you can initiate a connection with Azure Database for PostgreSQL like you normally would:
+
+```shell
+psql "host=mydb.postgres... user=user@tenant.onmicrosoft.com@mydb dbname=postgres sslmode=require"
+```
+### Step 4: Use token as a password for logging in with pgAdmin
+
+To connect using an Azure AD token with pgAdmin, follow these steps:
+1. Uncheck the **Connect now** option at server creation.
+2. Enter your server details in the **Connection** tab and save.
+3. From the browser menu, select **Connect** to the Azure Database for PostgreSQL server.
+4. Enter the Azure AD token as the password when prompted.
+
+Important considerations when connecting:
+
+* `user@tenant.onmicrosoft.com` is the name of the Azure AD user.
+* Make sure to use the exact spelling of the Azure AD user, as user and group names are case sensitive.
+* If the name contains spaces, use `\` before each space to escape it.
+* The access token is valid for between 5 and 60 minutes. We recommend you get the access token just before initiating the sign-in to Azure Database for PostgreSQL.
+
+You are now authenticated to your Azure Database for PostgreSQL server using Azure AD authentication.
+
+## Authenticate with Azure AD as a group member
+
+### Step 1: Create Azure AD groups in Azure Database for PostgreSQL
+
+To enable an Azure AD group for access to your database, use the same mechanism as for users, but instead specify the group name:
+
+Example:
+
+```sql
+CREATE ROLE "Prod DB Readonly" WITH LOGIN IN ROLE azure_ad_user;
+```
+When logging in, members of the group will use their personal access tokens, but sign in with the group name specified as the username.
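+
+For example, a group member might sign in with psql as follows. This is a sketch: the server name *mydb* and the group role created above are assumed, and `PGPASSWORD` holds the member's personal access token.
+
+```bash
+# Group name as the username, personal Azure AD token as the password.
+# The user value is single-quoted because the group name contains spaces.
+psql "host=mydb.postgres.database.azure.com dbname=postgres sslmode=require user='Prod DB Readonly@mydb'"
+```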
+
+### Step 2: Log in to the user's Azure subscription
+
+Authenticate with Azure AD using the Azure CLI tool. This step is not required in Azure Cloud Shell. The user needs to be a member of the Azure AD group.
+
+```azurecli-interactive
+az login
+```
+
+### Step 3: Retrieve Azure AD access token
+
+Invoke the Azure CLI tool to acquire an access token for the Azure AD authenticated user from step 2 to access Azure Database for PostgreSQL.
+
+Example (for Public Cloud):
+
+```azurecli-interactive
+az account get-access-token --resource https://ossrdbms-aad.database.windows.net
+```
+
+The above resource value must be specified exactly as shown. For other clouds, the resource value can be looked up using:
+
+```azurecli-interactive
+az cloud show
+```
+
+For Azure CLI version 2.0.71 and later, the command can be specified in the following more convenient version for all clouds:
+
+```azurecli-interactive
+az account get-access-token --resource-type oss-rdbms
+```
+
+After authentication is successful, Azure AD will return an access token:
+
+```json
+{
+ "accessToken": "TOKEN",
+ "expiresOn": "...",
+ "subscription": "...",
+ "tenant": "...",
+ "tokenType": "Bearer"
+}
+```
+
+### Step 4: Use token as password for logging in with psql or pgAdmin (see above steps for user connection)
+
+Important considerations when connecting as a group member:
+* `groupname@mydb` is the name of the Azure AD group you are trying to connect as.
+* Always append the server name after the Azure AD user or group name (for example, `@mydb`).
+* Make sure to use the exact spelling of the Azure AD group name; user and group names are case sensitive.
+* When connecting as a group, use only the group name (for example, `GroupName@mydb`), not the alias of a group member.
+* If the name contains spaces, use `\` before each space to escape it.
+* The access token is valid for between 5 and 60 minutes. We recommend you get the access token just before initiating the sign-in to Azure Database for PostgreSQL.
+
+You are now authenticated to your PostgreSQL server using Azure AD authentication.
+
+## Creating Azure AD users in Azure Database for PostgreSQL
+
+To add an Azure AD user to your Azure Database for PostgreSQL database, perform the following steps after connecting (see the earlier sections on how to connect):
+
+1. First ensure that the Azure AD user `<user>@yourtenant.onmicrosoft.com` is a valid user in Azure AD tenant.
+2. Sign in to your Azure Database for PostgreSQL instance as the Azure AD Admin user.
+3. Create role `<user>@yourtenant.onmicrosoft.com` in Azure Database for PostgreSQL.
+4. Make `<user>@yourtenant.onmicrosoft.com` a member of the azure_ad_user role. This role must only be granted to Azure AD users.
+
+**Example:**
+
+```sql
+CREATE ROLE "user1@yourtenant.onmicrosoft.com" WITH LOGIN IN ROLE azure_ad_user;
+```
+
+> [!NOTE]
+> Authenticating a user through Azure AD does not give the user any permissions to access objects within the Azure Database for PostgreSQL database. You must grant the user the required permissions manually.
+
+## Token Validation
+
+Azure AD authentication in Azure Database for PostgreSQL ensures that the user exists in the PostgreSQL server, and it checks the validity of the token by validating the contents of the token. The following token validation steps are performed:
+
+- Token is signed by Azure AD and has not been tampered with
+- Token was issued by Azure AD for the tenant associated with the server
+- Token has not expired
+- Token is for the Azure Database for PostgreSQL resource (and not another Azure resource)
+
+## Migrating existing PostgreSQL users to Azure AD-based authentication
+
+You can enable Azure AD authentication for existing users. There are two cases to consider:
+
+### Case 1: PostgreSQL username matches the Azure AD User Principal Name
+
+In the unlikely case that your existing users already match the Azure AD user names, you can grant the `azure_ad_user` role to them in order to enable them for Azure AD authentication:
+
+```sql
+GRANT azure_ad_user TO "existinguser@yourtenant.onmicrosoft.com";
+```
+
+They will now be able to sign in with Azure AD credentials instead of using their previously configured PostgreSQL user password.
+
+### Case 2: PostgreSQL username is different than the Azure AD User Principal Name
+
+If a PostgreSQL user either does not exist in Azure AD or has a different username, you can use Azure AD groups to authenticate as this PostgreSQL user. You can migrate existing Azure Database for PostgreSQL users to Azure AD by creating an Azure AD group with a name that matches the PostgreSQL user, and then granting role azure_ad_user to the existing PostgreSQL user:
+
+```sql
+GRANT azure_ad_user TO "DBReadUser";
+```
+
+This assumes you have created a group "DBReadUser" in your Azure AD. Users belonging to that group will now be able to sign in to the database as this user.
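+
+As a sketch, a member of that group could then sign in as follows; the server name *mydemoserver* is hypothetical, and the token is the member's personal access token.
+
+```bash
+# Group name as the username, personal Azure AD token as the password.
+export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)
+psql "host=mydemoserver.postgres.database.azure.com dbname=postgres sslmode=require user=DBReadUser@mydemoserver"
+```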
+
+## Next steps
+
+* Review the overall concepts for [Azure Active Directory authentication with Azure Database for PostgreSQL - Single Server](concepts-azure-ad-authentication.md)
+
+<!--Image references-->
+
+[1]: ./media/concepts-azure-ad-authentication/authentication-flow.png
+[2]: ./media/concepts-azure-ad-authentication/set-azure-ad-admin.png
postgresql How To Connect Query Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-connect-query-guide.md
+
+ Title: Connect and query - Single Server PostgreSQL
+description: Links to quickstarts showing how to connect to your Azure Database for PostgreSQL Single Server and run queries.
+Last updated: 09/21/2020
+# Connect and query overview for Azure Database for PostgreSQL - Single Server
+
+This document includes links to examples showing how to connect and query with Azure Database for PostgreSQL Single Server. This guide also includes TLS recommendations and information about extensions that you can use to connect to the server in the supported languages below.
+
+## Quickstarts
+
+| Quickstart | Description |
+|||
+|[pgAdmin](https://www.pgadmin.org/)|You can use pgAdmin to connect to the server. It simplifies the creation, maintenance, and use of database objects.|
+|[psql in Azure Cloud Shell](quickstart-create-server-database-azure-cli.md#connect-to-the-azure-database-for-postgresql-server-by-using-psql)|This article shows how to run [**psql**](https://www.postgresql.org/docs/current/static/app-psql.html) in [Azure Cloud Shell](../../cloud-shell/overview.md) to connect to your server and then run statements to query, insert, update, and delete data in the database. You can also run **psql** locally if it's installed in your development environment.|
+|[PostgreSQL with VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-cosmosdb)|The Azure Databases extension for VS Code (Preview) allows you to browse and query your PostgreSQL server both locally and in the cloud using scrapbooks with rich IntelliSense.|
+|[PHP](connect-php.md)|This quickstart demonstrates how to use PHP to create a program to connect to a database and work with database objects to query data.|
+|[Java](connect-java.md)|This quickstart demonstrates how to use Java to connect to a database and work with database objects to query data.|
+|[Node.js](connect-nodejs.md)|This quickstart demonstrates how to use Node.js to create a program to connect to a database and work with database objects to query data.|
+|[.NET (C#)](connect-csharp.md)|This quickstart demonstrates how to use .NET (C#) to create a program to connect to a database and work with database objects to query data.|
+|[Go](connect-go.md)|This quickstart demonstrates how to use Go to connect to a database. SQL statements to query and modify data are also demonstrated.|
+|[Python](connect-python.md)|This quickstart demonstrates how to use Python to connect to a database and work with database objects to query data.|
+|[Ruby](connect-ruby.md)|This quickstart demonstrates how to use Ruby to create a program to connect to a database and work with database objects to query data.|
+
+## TLS considerations for database connectivity
+
+Transport Layer Security (TLS) is used by all drivers that Microsoft supplies or supports for connecting to databases in Azure Database for PostgreSQL. No special configuration is necessary, but do enforce TLS 1.2 for newly created servers. If you are using TLS 1.0 or 1.1, we recommend that you update the TLS version for your servers. See [How to configure TLS](how-to-tls-configurations.md).
+
+## PostgreSQL extensions
+
+PostgreSQL provides the ability to extend the functionality of your database using extensions. Extensions bundle multiple related SQL objects together in a single package that can be loaded or removed from your database with a single command. After being loaded in the database, extensions function like built-in features.
+
+- [Postgres 11 extensions](./concepts-extensions.md#postgres-11-extensions)
+- [Postgres 10 extensions](./concepts-extensions.md#postgres-10-extensions)
+- [Postgres 9.6 extensions](./concepts-extensions.md#postgres-96-extensions)
+
+For more details, see [How to use PostgreSQL extensions on Single Server](concepts-extensions.md).
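+
+For example, loading and removing an extension is a single SQL command each. A sketch using psql; the server, database, and the `hstore` extension are illustrative, and the extension must be in the supported list linked above.
+
+```bash
+# Load the extension into the connected database, then remove it again.
+psql "host=mydemoserver.postgres.database.azure.com user=myadmin@mydemoserver dbname=mydb sslmode=require" \
+  -c "CREATE EXTENSION hstore;" \
+  -c "DROP EXTENSION hstore;"
+```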
+
+## Next steps
+
+- [Migrate data using dump and restore](how-to-migrate-using-dump-and-restore.md)
+- [Migrate data using import and export](how-to-migrate-using-export-and-import.md)
postgresql How To Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-connect-with-managed-identity.md
+
+ Title: Connect with Managed Identity - Azure Database for PostgreSQL - Single Server
+description: Learn about how to connect and authenticate using Managed Identity for authentication with Azure Database for PostgreSQL
+Last updated: 05/19/2020
+# Connect with Managed Identity to Azure Database for PostgreSQL
+
+You can use both system-assigned and user-assigned managed identities to authenticate to Azure Database for PostgreSQL. This article shows you how to use a system-assigned managed identity for an Azure Virtual Machine (VM) to access an Azure Database for PostgreSQL server. Managed Identities are automatically managed by Azure and enable you to authenticate to services that support Azure AD authentication, without needing to insert credentials into your code.
+
+You learn how to:
+- Grant your VM access to an Azure Database for PostgreSQL server
+- Create a user in the database that represents the VM's system-assigned identity
+- Get an access token using the VM identity and use it to query an Azure Database for PostgreSQL server
+- Implement the token retrieval in a C# example application
+
+## Prerequisites
+
+- If you're not familiar with the managed identities for Azure resources feature, see this [overview](../../../articles/active-directory/managed-identities-azure-resources/overview.md). If you don't have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue.
+- To do the required resource creation and role management, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../../articles/role-based-access-control/role-assignments-portal.md).
+- You need an Azure VM (for example, running Ubuntu Linux) that you'd like to use to access your database using managed identity
+- You need an Azure Database for PostgreSQL database server that has [Azure AD authentication](how-to-configure-sign-in-azure-ad-authentication.md) configured
+- To follow the C# example, first complete the guide how to [Connect with C#](connect-csharp.md)
+
+## Creating a system-assigned managed identity for your VM
+
+Use the [az vm identity assign](/cli/azure/vm/identity/) command to enable the system-assigned managed identity on an existing VM:
+
+```azurecli-interactive
+az vm identity assign -g myResourceGroup -n myVm
+```
+
+Retrieve the application ID for the system-assigned managed identity, which you'll need in the next few steps:
+
+```azurecli
+# Get the client ID (application ID) of the system-assigned managed identity
+az ad sp list --display-name myVm --query "[*].appId" --out tsv
+```
+
+## Creating a PostgreSQL user for your Managed Identity
+
+Now, connect as the Azure AD administrator user to your PostgreSQL database, and run the following SQL statements, replacing `CLIENT_ID` with the client ID you retrieved for your system-assigned managed identity:
+
+```sql
+SET aad_validate_oids_in_tenant = off;
+CREATE ROLE myuser WITH LOGIN PASSWORD 'CLIENT_ID' IN ROLE azure_ad_user;
+```
+
+The managed identity now has access when authenticating with the username `myuser` (replace with a name of your choice).
+
+## Retrieving the access token from Azure Instance Metadata service
+
+Your application can now retrieve an access token from the Azure Instance Metadata service and use it for authenticating with the database.
+
+This token retrieval is done by making an HTTP request to `http://169.254.169.254/metadata/identity/oauth2/token` and passing the following parameters:
+
+* `api-version` = `2018-02-01`
+* `resource` = `https://ossrdbms-aad.database.windows.net`
+* `client_id` = `CLIENT_ID` (that you retrieved earlier)
+
+You'll get back a JSON result that contains an `access_token` field. This long text value is the managed identity access token that you should use as the password when connecting to the database.
+
+For testing purposes, you can run the following commands in your shell. Note you need `curl`, `jq`, and the `psql` client installed.
+
+```bash
+# Retrieve the access token
+export PGPASSWORD=`curl -s 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=CLIENT_ID' -H Metadata:true | jq -r .access_token`
+
+# Connect to the database
+psql -h SERVER --user USER@SERVER DBNAME
+```
+
+You are now connected to the database you've configured earlier.
+
+## Connecting using Managed Identity in C#
+
+This section shows how to get an access token using the VM's system-assigned managed identity and use it to call Azure Database for PostgreSQL. Azure Database for PostgreSQL natively supports Azure AD authentication, so it can directly accept access tokens obtained using managed identities for Azure resources. When creating a connection to PostgreSQL, you pass the access token in the password field.
+
+Here's a .NET code example of opening a connection to PostgreSQL using an access token. This code must run on the VM to use the system-assigned managed identity to obtain an access token from Azure AD. Replace the values of HOST, USER, and DATABASE.
+
+```csharp
+using System;
+using System.Net;
+using System.IO;
+using System.Collections;
+using System.Collections.Generic;
+using System.Text.Json;
+using System.Text.Json.Serialization;
+using System.Threading.Tasks;
+using Npgsql;
+using Azure.Identity;
+
+namespace Driver
+{
+ class Script
+ {
+ // Obtain connection string information from the portal for use in the following variables
+ private static string Host = "HOST";
+ private static string User = "USER";
+ private static string Database = "DATABASE";
+
+ static async Task Main(string[] args)
+ {
+ //
+ // Get an access token for PostgreSQL.
+ //
+ Console.Out.WriteLine("Getting access token from Azure AD...");
+
+ // Azure AD resource ID for Azure Database for PostgreSQL is https://ossrdbms-aad.database.windows.net/
+ string accessToken = null;
+
+ try
+ {
+ // Call managed identities for Azure resources endpoint.
+ var sqlServerTokenProvider = new DefaultAzureCredential();
+ accessToken = (await sqlServerTokenProvider.GetTokenAsync(
+ new Azure.Core.TokenRequestContext(scopes: new string[] { "https://ossrdbms-aad.database.windows.net/.default" }) { })).Token;
+
+ }
+ catch (Exception e)
+ {
+ Console.Out.WriteLine("{0} \n\n{1}", e.Message, e.InnerException != null ? e.InnerException.Message : "Acquire token failed");
+ System.Environment.Exit(1);
+ }
+
+ //
+ // Open a connection to the PostgreSQL server using the access token.
+ //
+ string connString =
+ String.Format(
+ "Server={0}; User Id={1}; Database={2}; Port={3}; Password={4}; SSLMode=Prefer",
+ Host,
+ User,
+ Database,
+ 5432,
+ accessToken);
+
+ using (var conn = new NpgsqlConnection(connString))
+ {
+ Console.Out.WriteLine("Opening connection using access token...");
+ conn.Open();
+
+ using (var command = new NpgsqlCommand("SELECT version()", conn))
+ {
+
+ var reader = command.ExecuteReader();
+ while (reader.Read())
+ {
+ Console.WriteLine("\nConnected!\n\nPostgres version: {0}", reader.GetString(0));
+ }
+ }
+ }
+ }
+ }
+}
+```
+
+When run, this program gives output like this:
+
+```output
+Getting access token from Azure AD...
+Opening connection using access token...
+
+Connected!
+
+Postgres version: PostgreSQL 11.11, compiled by Visual C++ build 1800, 64-bit
+```
+
+## Next steps
+
+* Review the overall concepts for [Azure Active Directory authentication with Azure Database for PostgreSQL](concepts-azure-ad-authentication.md)
postgresql How To Connection String Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-connection-string-powershell.md
+
+ Title: Generate a connection string with PowerShell - Azure Database for PostgreSQL
+description: This article provides an Azure PowerShell example to generate a connection string for connecting to Azure Database for PostgreSQL.
+Last updated: 8/6/2020
+# How to generate an Azure Database for PostgreSQL connection string with PowerShell
+
+This article demonstrates how to generate a connection string for an Azure Database for PostgreSQL
+server. You can use a connection string to connect to an Azure Database for PostgreSQL from many
+different applications.
+
+## Requirements
+
+This article uses the resources created in the following guide as a starting point:
+
+* [Quickstart: Create an Azure Database for PostgreSQL server using PowerShell](quickstart-create-postgresql-server-database-using-azure-powershell.md)
+
+## Get the connection string
+
+The `Get-AzPostgreSqlConnectionString` cmdlet is used to generate a connection string for connecting
+applications to Azure Database for PostgreSQL. The following example returns the connection string
+for a PHP client from **mydemoserver**.
+
+```azurepowershell-interactive
+Get-AzPostgreSqlConnectionString -Client PHP -Name mydemoserver -ResourceGroupName myresourcegroup
+```
+
+```Output
+host=mydemoserver.postgres.database.azure.com port=5432 dbname={your_database} user=myadmin@mydemoserver password={your_password} sslmode=require
+```
+
+Valid values for the `Client` parameter include:
+
+* ADO&#46;NET
+* JDBC
+* Node.js
+* PHP
+* Python
+* Ruby
+* WebApp
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Customize Azure Database for PostgreSQL server parameters using PowerShell](how-to-configure-server-parameters-using-powershell.md)
postgresql How To Create Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-create-manage-server-portal.md
+
+ Title: Manage Azure Database for PostgreSQL - Azure portal
+description: Learn how to manage an Azure Database for PostgreSQL server from the Azure portal.
+Last updated: 11/20/2019
+# Manage an Azure Database for PostgreSQL server using the Azure portal
+
+This article shows you how to manage your Azure Database for PostgreSQL servers. Management tasks include compute and storage scaling, admin password reset, and viewing server details.
+
+## Sign in
+
+Sign in to the [Azure portal](https://portal.azure.com).
+
+## Create a server
+
+Visit the [quickstart](quickstart-create-server-database-portal.md) to learn how to create and get started with an Azure Database for PostgreSQL server.
+
+## Scale compute and storage
+
+After server creation you can scale between the General Purpose and Memory Optimized tiers as your needs change. You can also scale compute and memory by increasing or decreasing vCores. Storage can be scaled up (however, you cannot scale storage down).
+
+### Scale between General Purpose and Memory Optimized tiers
+
+You can scale from General Purpose to Memory Optimized and vice-versa. Changing to and from the Basic tier after server creation is not supported.
+
+1. Select your server in the Azure portal. Select **Pricing tier**, located in the **Settings** section.
+
+2. Select **General Purpose** or **Memory Optimized**, depending on what you are scaling to.
+
+ :::image type="content" source="./media/how-to-create-manage-server-portal/change-pricing-tier.png" alt-text="Screenshot of Azure portal to choose Basic, General Purpose, or Memory Optimized tier in Azure Database for PostgreSQL":::
+
+ > [!NOTE]
+ > Changing tiers causes a server restart.
+
+3. Select **OK** to save changes.
+
+### Scale vCores up or down
+
+1. Select your server in the Azure portal. Select **Pricing tier**, located in the **Settings** section.
+
+2. Change the **vCore** setting by moving the slider to your desired value.
+
+ :::image type="content" source="./media/how-to-create-manage-server-portal/scale-compute.png" alt-text="Screenshot of Azure portal to choose vCore option in Azure Database for PostgreSQL":::
+
+ > [!NOTE]
+ > Scaling vCores causes a server restart.
+
+3. Select **OK** to save changes.
+
+### Scale storage up
+
+1. Select your server in the Azure portal. Select **Pricing tier**, located in the **Settings** section.
+
+2. Change the **Storage** setting by moving the slider up to your desired value.
+
+ :::image type="content" source="./media/how-to-create-manage-server-portal/scale-storage.png" alt-text="Screenshot of Azure portal to choose Storage scale in Azure Database for PostgreSQL":::
+
+ > [!NOTE]
+ > Storage cannot be scaled down.
+
+3. Select **OK** to save changes.
+
+## Update admin password
+
+You can change the administrator role's password using the Azure portal.
+
+1. Select your server in the Azure portal. In the **Overview** window select **Reset password**.
+
+ :::image type="content" source="./media/how-to-create-manage-server-portal/overview-reset-password.png" alt-text="Screenshot of Azure portal to reset the password in Azure Database for PostgreSQL":::
+
+2. Enter a new password and confirm the password. The textbox will prompt you about password complexity requirements.
+
+ :::image type="content" source="./media/how-to-create-manage-server-portal/reset-password.png" alt-text="Screenshot of Azure portal to reset your password and save in Azure Database for PostgreSQL":::
+
+3. Select **OK** to save the new password.
+
+## Delete a server
+
+You can delete your server if you no longer need it.
+
+1. Select your server in the Azure portal. In the **Overview** window select **Delete**.
+
+ :::image type="content" source="./media/how-to-create-manage-server-portal/overview-delete.png" alt-text="Screenshot of Azure portal to Delete the server in Azure Database for PostgreSQL":::
+
+2. Type the name of the server into the input box to confirm that this is the server you want to delete.
+
+ :::image type="content" source="./media/how-to-create-manage-server-portal/confirm-delete.png" alt-text="Screenshot of Azure portal to confirm the server delete in Azure Database for PostgreSQL":::
+
+ > [!NOTE]
+ > Deleting a server is irreversible.
+
+3. Select **Delete**.
+
+## Next steps
+
+- Learn about [backups and server restore](how-to-restore-server-portal.md)
+- Learn about [tuning and monitoring options in Azure Database for PostgreSQL](concepts-monitoring.md)
postgresql How To Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-create-users.md
+
+ Title: Create users - Azure Database for PostgreSQL - Single Server
+description: This article describes how you can create new user accounts to interact with an Azure Database for PostgreSQL - Single Server.
+Last updated: 09/22/2019
+# Create users in Azure Database for PostgreSQL - Single Server
+
+This article describes how you can create users within an Azure Database for PostgreSQL server.
+
+If you would like to learn about how to create and manage Azure subscription users and their privileges, you can visit the [Azure role-based access control (Azure RBAC) article](../../role-based-access-control/built-in-roles.md) or review [how to customize roles](../../role-based-access-control/custom-roles.md).
+
+## The server admin account
+
+When you first created your Azure Database for PostgreSQL, you provided a server admin user name and password. For more information, you can follow the [Quickstart](quickstart-create-server-database-portal.md) to see the step-by-step approach. Since the server admin user name is a custom name, you can locate the chosen server admin user name from the Azure portal.
+
+The Azure Database for PostgreSQL server is created with the three default roles defined. You can see these roles by running the command: `SELECT rolname FROM pg_roles;`
+
+- azure_pg_admin
+- azure_superuser
+- your server admin user
+
+Your server admin user is a member of the azure_pg_admin role. However, the server admin account is not part of the azure_superuser role. Since this service is a managed PaaS service, only Microsoft is part of the super user role.
+
+The PostgreSQL engine uses privileges to control access to database objects, as discussed in the [PostgreSQL product documentation](https://www.postgresql.org/docs/current/static/sql-createrole.html). In Azure Database for PostgreSQL, the server admin user is granted these privileges:
+ LOGIN, NOSUPERUSER, INHERIT, CREATEDB, CREATEROLE, REPLICATION
+
+The server admin user account can be used to create additional users and grant those users the azure_pg_admin role. Also, the server admin account can be used to create less privileged users and roles that have access to individual databases and schemas.
+
+## How to create additional admin users in Azure Database for PostgreSQL
+
+1. Get the connection information and admin user name.
+ To connect to your database server, you need the full server name and admin sign-in credentials. You can easily find the server name and sign-in information from the server **Overview** page or the **Properties** page in the Azure portal.
+
+2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as pgAdmin or psql.
+ If you are unsure of how to connect, see [the quickstart](./quickstart-create-server-database-portal.md)
+
+3. Edit and run the following SQL code. Replace the placeholder value `<new_user>` with your new user name, and replace the placeholder password with your own strong password.
+
+ ```sql
+ CREATE ROLE <new_user> WITH LOGIN NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION PASSWORD '<StrongPassword!>';
+
+ GRANT azure_pg_admin TO <new_user>;
+ ```
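+
+   After the statements run, you can optionally confirm the new role and its membership from psql. This is a quick sanity check rather than part of the documented procedure; the server and admin names below are placeholder examples.
+
+   ```shell
+   # List all roles and their memberships; the new user should appear as a member of azure_pg_admin.
+   psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=myadmin@mydemoserver --dbname=postgres --command="\du"
+   ```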
+
+## How to create database users in Azure Database for PostgreSQL
+
+1. Get the connection information and admin user name.
+ To connect to your database server, you need the full server name and admin sign-in credentials. You can easily find the server name and sign-in information from the server **Overview** page or the **Properties** page in the Azure portal.
+
+2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as pgAdmin or psql.
+
+3. Edit and run the following SQL code. Replace the placeholder value `<db_user>` with your intended new user name, and placeholder value `<newdb>` with your own database name. Replace the placeholder password with your own strong password.
+
+   This SQL code creates a new database, then creates a new user in the PostgreSQL service and grants that user connect privileges to the new database.
+
+ ```sql
+ CREATE DATABASE <newdb>;
+
+ CREATE ROLE <db_user> WITH LOGIN NOSUPERUSER INHERIT CREATEDB NOCREATEROLE NOREPLICATION PASSWORD '<StrongPassword!>';
+
+ GRANT CONNECT ON DATABASE <newdb> TO <db_user>;
+ ```
+
+4. Using an admin account, you may need to grant additional privileges to secure the objects in the database. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/ddl-priv.html) for further details on database roles and privileges. For example:
+
+ ```sql
+ GRANT ALL PRIVILEGES ON DATABASE <newdb> TO <db_user>;
+ ```
+
+   If a user creates a table, the table belongs to that user (the owning role). If another user needs access to the table, you must grant privileges to the other user on the table level.
+
+ For example:
+
+ ```sql
+ GRANT SELECT ON ALL TABLES IN SCHEMA <schema_name> TO <db_user>;
+ ```
+
+5. Log in to your server, specifying the designated database, using the new user name and password. This example shows the psql command line. With this command, you are prompted for the password for the user name. Replace the server name, database name, and user name with your own values.
+
+ ```shell
+ psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=db_user@mydemoserver --dbname=newdb
+ ```
+
+## Next steps
+
+Open the firewall for the IP addresses of the new users' machines to enable them to connect:
+[Create and manage Azure Database for PostgreSQL firewall rules by using the Azure portal](how-to-manage-firewall-using-portal.md) or [Azure CLI](quickstart-create-server-database-azure-cli.md#configure-a-server-based-firewall-rule).
+
+For more information regarding user account management, see PostgreSQL product documentation for [Database Roles and Privileges](https://www.postgresql.org/docs/current/static/user-manag.html), [GRANT Syntax](https://www.postgresql.org/docs/current/static/sql-grant.html), and [Privileges](https://www.postgresql.org/docs/current/static/ddl-priv.html).
postgresql How To Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-data-encryption-cli.md
+
+ Title: Data encryption - Azure CLI - for Azure Database for PostgreSQL - Single server
+description: Learn how to set up and manage data encryption for your Azure Database for PostgreSQL Single server by using the Azure CLI.
++++++ Last updated : 03/30/2020 +++
+# Data encryption for Azure Database for PostgreSQL Single server by using the Azure CLI
+
+Learn how to use the Azure CLI to set up and manage data encryption for your Azure Database for PostgreSQL Single server.
+
+## Prerequisites for Azure CLI
+
+* You must have an Azure subscription and be an administrator on that subscription.
+* Create a key vault and a key to use for a customer-managed key. Also enable purge protection and soft delete on the key vault.
+
+ ```azurecli-interactive
+ az keyvault create -g <resource_group> -n <vault_name> --enable-soft-delete true --enable-purge-protection true
+ ```
+
+* In the created Azure Key Vault, create the key that will be used for the data encryption of the Azure Database for PostgreSQL Single server.
+
+ ```azurecli-interactive
+ az keyvault key create --name <key_name> -p software --vault-name <vault_name>
+ ```
+
+* In order to use an existing key vault, it must have the following properties to use as a customer-managed key:
+ * [Soft delete](../../key-vault/general/soft-delete-overview.md)
+
+ ```azurecli-interactive
+   az resource update --id $(az keyvault show --name <key_vault_name> -o tsv | awk '{print $1}') --set properties.enableSoftDelete=true
+ ```
+
+ * [Purge protected](../../key-vault/general/soft-delete-overview.md#purge-protection)
+
+ ```azurecli-interactive
+ az keyvault update --name <key_vault_name> --resource-group <resource_group_name> --enable-purge-protection true
+ ```
+
+* The key must have the following attributes to use as a customer-managed key:
+ * No expiration date
+ * Not disabled
+ * Able to perform **get**, **wrap**, and **unwrap** operations
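+
+To check these attributes on an existing key, you can query them with the Azure CLI. A quick sketch; the vault and key names are placeholders:
+
+```azurecli-interactive
+# Show whether the key is enabled, when it expires, and its permitted operations.
+az keyvault key show --vault-name <vault_name> --name <key_name> --query "{enabled:attributes.enabled, expires:attributes.expires, operations:key.keyOps}"
+```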
+
+## Set the right permissions for key operations
+
+1. There are two ways of getting the managed identity for your Azure Database for PostgreSQL Single server.
+
+   ### Create a new Azure Database for PostgreSQL server with a managed identity
+
+ ```azurecli-interactive
+ az postgres server create --name <server_name> -g <resource_group> --location <location> --storage-size <size> -u <user> -p <pwd> --backup-retention <7> --sku-name <sku name> --geo-redundant-backup <Enabled/Disabled> --assign-identity
+ ```
+
+   ### Update an existing Azure Database for PostgreSQL server to get a managed identity
+
+ ```azurecli-interactive
+ az postgres server update --resource-group <resource_group> --name <server_name> --assign-identity
+ ```
+
+2. Set the **Key permissions** (**Get**, **Wrap**, **Unwrap**) for the **Principal**, which is the name of the PostgreSQL Single server.
+
+ ```azurecli-interactive
+   az keyvault set-policy --name <key_vault_name> -g <resource_group> --key-permissions get unwrapKey wrapKey --object-id <principal id of the server>
+ ```
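+
+   If you need the principal ID for the `--object-id` parameter, one way to retrieve it is sketched below; the server and resource group names are placeholders.
+
+   ```azurecli-interactive
+   # Print the principal ID of the server's system-assigned managed identity.
+   az postgres server show --resource-group <resource_group> --name <server_name> --query identity.principalId --output tsv
+   ```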
+
+## Set data encryption for Azure Database for PostgreSQL Single server
+
+1. Enable Data encryption for the Azure Database for PostgreSQL Single server using the key created in the Azure Key Vault.
+
+ ```azurecli-interactive
+ az postgres server key create --name <server_name> -g <resource_group> --kid <key_url>
+ ```
+
+   Key url: `https://YourVaultName.vault.azure.net/keys/YourKeyName/01234567890123456789012345678901`
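+
+   If you don't have the key URL handy, you can retrieve it from the key vault. A sketch, with placeholder names:
+
+   ```azurecli-interactive
+   # The kid property is the full key URL, including the key version.
+   az keyvault key show --vault-name <vault_name> --name <key_name> --query key.kid --output tsv
+   ```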
+
+## Using Data encryption for restore or replica servers
+
+After an Azure Database for PostgreSQL Single server is encrypted with a customer-managed key stored in Key Vault, any newly created copy of the server is also encrypted. You can make this new copy either through a local or geo-restore operation, or through a replica (local/cross-region) operation. So for an encrypted PostgreSQL Single server, you can use the following steps to create an encrypted restored server.
+
+### Creating a restored/replica server
+
+* [Create a restore server](how-to-restore-server-cli.md)
+* [Create a read replica server](how-to-read-replicas-cli.md)
+
+### Once the server is restored, revalidate data encryption on the restored server
+
+* Assign identity for the replica server
+```azurecli-interactive
+az postgres server update --name <server name> -g <resource_group> --assign-identity
+```
+
+* Get the existing key that has to be used for the restored/replica server
+
+```azurecli-interactive
+az postgres server key list --name '<server_name>' -g '<resource_group_name>'
+```
+
+* Set the policy for the new identity for the restored/replica server
+
+```azurecli-interactive
+az keyvault set-policy --name <keyvault> -g <resource_group> --key-permissions get unwrapKey wrapKey --object-id <principal id of the server returned in step 1>
+```
+
+* Re-validate the restored/replica server with the encryption key
+
+```azurecli-interactive
+az postgres server key create --name <server name> -g <resource_group> --kid <key url>
+```
+
+## Additional capability for the key being used for the Azure Database for PostgreSQL Single server
+
+### Get the Key used
+
+```azurecli-interactive
+az postgres server key show --name <server name> -g <resource_group> --kid <key url>
+```
+
+Key url: `https://YourVaultName.vault.azure.net/keys/YourKeyName/01234567890123456789012345678901`
+
+### List the Key used
+
+```azurecli-interactive
+az postgres server key list --name <server name> -g <resource_group>
+```
+
+### Drop the key being used
+
+```azurecli-interactive
+az postgres server key delete -g <resource_group> --kid <key url>
+```
+
+## Using an Azure Resource Manager template to enable data encryption
+
+Apart from the Azure portal, you can also enable data encryption on your Azure Database for PostgreSQL single server using Azure Resource Manager templates for new and existing servers.
+
+### For a new server
+
+Use one of the pre-created Azure Resource Manager templates to provision the server with data encryption enabled:
+[Example with Data encryption](https://github.com/Azure/azure-postgresql/tree/master/arm-templates/ExampleWithDataEncryption)
+
+This Azure Resource Manager template creates an Azure Database for PostgreSQL Single server and uses the **KeyVault** and **Key** passed as parameters to enable data encryption on the server.
+
+### For an existing server
+Additionally, you can use Azure Resource Manager templates to enable data encryption on your existing Azure Database for PostgreSQL Single servers.
+
+* Pass the Resource ID of the Azure Key Vault key that you copied earlier under the `Uri` property in the properties object.
+
+* Use *2020-01-01-preview* as the API version.
+
+```json
+{
+ "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string"
+ },
+ "serverName": {
+ "type": "string"
+ },
+ "keyVaultName": {
+ "type": "string",
+ "metadata": {
+ "description": "Key vault name where the key to use is stored"
+ }
+ },
+ "keyVaultResourceGroupName": {
+ "type": "string",
+ "metadata": {
+ "description": "Key vault resource group name where it is stored"
+ }
+ },
+ "keyName": {
+ "type": "string",
+ "metadata": {
+ "description": "Key name in the key vault to use as encryption protector"
+ }
+ },
+ "keyVersion": {
+ "type": "string",
+ "metadata": {
+ "description": "Version of the key in the key vault to use as encryption protector"
+ }
+ }
+ },
+ "variables": {
+ "serverKeyName": "[concat(parameters('keyVaultName'), '_', parameters('keyName'), '_', parameters('keyVersion'))]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.DBforPostgreSQL/servers",
+ "apiVersion": "2017-12-01",
+ "kind": "",
+ "location": "[parameters('location')]",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "name": "[parameters('serverName')]",
+ "properties": {
+ }
+ },
+ {
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2019-05-01",
+ "name": "addAccessPolicy",
+ "resourceGroup": "[parameters('keyVaultResourceGroupName')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.DBforPostgreSQL/servers', parameters('serverName'))]"
+ ],
+ "properties": {
+ "mode": "Incremental",
+ "template": {
+ "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.KeyVault/vaults/accessPolicies",
+ "name": "[concat(parameters('keyVaultName'), '/add')]",
+ "apiVersion": "2018-02-14-preview",
+ "properties": {
+ "accessPolicies": [
+ {
+ "tenantId": "[subscription().tenantId]",
+ "objectId": "[reference(resourceId('Microsoft.DBforPostgreSQL/servers/', parameters('serverName')), '2017-12-01', 'Full').identity.principalId]",
+ "permissions": {
+ "keys": [
+ "get",
+ "wrapKey",
+ "unwrapKey"
+ ]
+ }
+ }
+ ]
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "name": "[concat(parameters('serverName'), '/', variables('serverKeyName'))]",
+ "type": "Microsoft.DBforPostgreSQL/servers/keys",
+ "apiVersion": "2020-01-01-preview",
+ "dependsOn": [
+ "addAccessPolicy",
+ "[resourceId('Microsoft.DBforPostgreSQL/servers', parameters('serverName'))]"
+ ],
+ "properties": {
+ "serverKeyType": "AzureKeyVault",
+ "uri": "[concat(reference(resourceId(parameters('keyVaultResourceGroupName'), 'Microsoft.KeyVault/vaults/', parameters('keyVaultName')), '2018-02-14-preview', 'Full').properties.vaultUri, 'keys/', parameters('keyName'), '/', parameters('keyVersion'))]"
+ }
+ }
+ ]
+}
+```
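+
+Once the template is saved, you can deploy it with a resource group deployment. This is a minimal sketch, assuming the template is saved as `template.json`; all parameter values are placeholders.
+
+```azurecli-interactive
+# Deploy the data encryption template to the server's resource group.
+az deployment group create \
+    --resource-group <resource_group> \
+    --template-file template.json \
+    --parameters location=<location> serverName=<server_name> keyVaultName=<vault_name> \
+        keyVaultResourceGroupName=<vault_resource_group> keyName=<key_name> keyVersion=<key_version>
+```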
+
+## Next steps
+
+ To learn more about data encryption, see [Azure Database for PostgreSQL Single server data encryption with customer-managed key](concepts-data-encryption-postgresql.md).
postgresql How To Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-data-encryption-portal.md
+
+ Title: Data encryption - Azure portal - for Azure Database for PostgreSQL - Single server
+description: Learn how to set up and manage data encryption for your Azure Database for PostgreSQL Single server by using the Azure portal.
++++++ Last updated : 01/13/2020
+
++
+# Data encryption for Azure Database for PostgreSQL Single server by using the Azure portal
+
+Learn how to use the Azure portal to set up and manage data encryption for your Azure Database for PostgreSQL Single server.
+
+## Prerequisites
+
+* You must have an Azure subscription and be an administrator on that subscription.
+* In Azure Key Vault, create a key vault and key to use for a customer-managed key.
+* The key vault must have the following properties to use as a customer-managed key:
+ * [Soft delete](../../key-vault/general/soft-delete-overview.md)
+
+ ```azurecli-interactive
+   az resource update --id $(az keyvault show --name <key_vault_name> -o tsv | awk '{print $1}') --set properties.enableSoftDelete=true
+ ```
+
+ * [Purge protected](../../key-vault/general/soft-delete-overview.md#purge-protection)
+
+ ```azurecli-interactive
+ az keyvault update --name <key_vault_name> --resource-group <resource_group_name> --enable-purge-protection true
+ ```
+
+* The key must have the following attributes to use as a customer-managed key:
+ * No expiration date
+ * Not disabled
+ * Able to perform get, wrap key, and unwrap key operations
+
+## Set the right permissions for key operations
+
+1. In Key Vault, select **Access policies** > **Add Access Policy**.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-access-policy-overview.png" alt-text="Screenshot of Key Vault, with Access policies and Add Access Policy highlighted":::
+
+2. Select **Key permissions**, and select **Get**, **Wrap**, **Unwrap**, and the **Principal**, which is the name of the PostgreSQL server. If your server principal can't be found in the list of existing principals, you need to register it. You're prompted to register your server principal when you attempt to set up data encryption for the first time, and it fails.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/access-policy-wrap-unwrap.png" alt-text="Access policy overview":::
+
+3. Select **Save**.
+
+## Set data encryption for Azure Database for PostgreSQL Single server
+
+1. In Azure Database for PostgreSQL, select **Data encryption** to set up the customer-managed key.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/data-encryption-overview.png" alt-text="Screenshot of Azure Database for PostgreSQL, with Data encryption highlighted":::
+
+2. You can either select a key vault and key pair, or enter a key identifier.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/setting-data-encryption.png" alt-text="Screenshot of Azure Database for PostgreSQL, with data encryption options highlighted":::
+
+3. Select **Save**.
+
+4. To ensure all files (including temp files) are fully encrypted, restart the server.
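+
+   If you prefer to restart from the command line instead of the portal, a sketch with placeholder names:
+
+   ```azurecli-interactive
+   # Restart the server so that all files, including temp files, are encrypted with the key.
+   az postgres server restart --resource-group <resource_group> --name <server_name>
+   ```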
+
+## Using Data encryption for restore or replica servers
+
+After an Azure Database for PostgreSQL Single server is encrypted with a customer-managed key stored in Key Vault, any newly created copy of the server is also encrypted. You can make this new copy either through a local or geo-restore operation, or through a replica (local/cross-region) operation. So for an encrypted PostgreSQL server, you can use the following steps to create an encrypted restored server.
+
+1. On your server, select **Overview** > **Restore**.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-restore.png" alt-text="Screenshot of Azure Database for PostgreSQL, with Overview and Restore highlighted":::
+
+ Or for a replication-enabled server, under the **Settings** heading, select **Replication**.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/postgresql-replica.png" alt-text="Screenshot of Azure Database for PostgreSQL, with Replication highlighted":::
+
+2. After the restore operation is complete, the new server created is encrypted with the primary server's key. However, the features and options on the server are disabled, and the server is inaccessible. This prevents any data manipulation, because the new server's identity hasn't yet been given permission to access the key vault.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-restore-data-encryption.png" alt-text="Screenshot of Azure Database for PostgreSQL, with Inaccessible status highlighted":::
+
+3. To make the server accessible, revalidate the key on the restored server. Select **Data Encryption** > **Revalidate key**.
+
+ > [!NOTE]
+ > The first attempt to revalidate will fail, because the new server's service principal needs to be given access to the key vault. To generate the service principal, select **Revalidate key**, which will show an error but generates the service principal. Thereafter, refer to [these steps](#set-the-right-permissions-for-key-operations) earlier in this article.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-revalidate-data-encryption.png" alt-text="Screenshot of Azure Database for PostgreSQL, with revalidation step highlighted":::
+
+ You will have to give the key vault access to the new server. For more information, see [Enable Azure RBAC permissions on Key Vault](../../key-vault/general/rbac-guide.md?tabs=azure-cli#enable-azure-rbac-permissions-on-key-vault).
+
+4. After registering the service principal, revalidate the key again, and the server resumes its normal functionality.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/restore-successful.png" alt-text="Screenshot of Azure Database for PostgreSQL, showing restored functionality":::
+
+## Next steps
+
+ To learn more about data encryption, see [Azure Database for PostgreSQL Single server data encryption with customer-managed key](concepts-data-encryption-postgresql.md).
postgresql How To Data Encryption Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-data-encryption-troubleshoot.md
+
+ Title: Troubleshoot data encryption - Azure Database for PostgreSQL - Single Server
+description: Learn how to troubleshoot the data encryption on your Azure Database for PostgreSQL - Single Server
++++++ Last updated : 02/13/2020++
+# Troubleshoot data encryption in Azure Database for PostgreSQL - Single Server
+
+This article helps you identify and resolve common issues that can occur in the single-server deployment of Azure Database for PostgreSQL when configured with data encryption using a customer-managed key.
+
+## Introduction
+
+When you configure data encryption to use a customer-managed key in Azure Key Vault, the server requires continuous access to the key. If the server loses access to the customer-managed key in Azure Key Vault, it will deny all connections, return the appropriate error message, and change its state to ***Inaccessible*** in the Azure portal.
+
+If you no longer need an inaccessible Azure Database for PostgreSQL server, you can delete it to stop incurring costs. No other actions on the server are permitted until access to the key vault has been restored and the server is available. It's also not possible to change the data encryption option from `Yes` (customer-managed) to `No` (service-managed) on an inaccessible server when it's encrypted with a customer-managed key. You'll have to revalidate the key manually before the server is accessible again. This action is necessary to protect the data from unauthorized access while permissions to the customer-managed key are revoked.
+
+## Common errors causing server to become inaccessible
+
+The following misconfigurations cause most issues with data encryption that uses Azure Key Vault keys:
+
+- The key vault is unavailable or doesn't exist:
+ - The key vault was accidentally deleted.
+ - An intermittent network error causes the key vault to be unavailable.
+
+- You don't have permissions to access the key vault, or the key doesn't exist:
+  - The key expired or was accidentally deleted or disabled.
+  - The managed identity of the Azure Database for PostgreSQL instance was accidentally deleted.
+  - The managed identity of the Azure Database for PostgreSQL instance has insufficient key permissions. For example, the permissions don't include Get, Wrap, and Unwrap.
+  - The managed identity permissions to the Azure Database for PostgreSQL instance were revoked or deleted.
+
+## Identify and resolve common errors
+
+### Errors on the key vault
+
+#### Disabled key vault
+
+- `AzureKeyVaultKeyDisabledMessage`
+- **Explanation**: The operation couldn't be completed on the server because the Azure Key Vault key is disabled.
+
+#### Missing key vault permissions
+
+- `AzureKeyVaultMissingPermissionsMessage`
+- **Explanation**: The server doesn't have the required Get, Wrap, and Unwrap permissions to Azure Key Vault. Grant any missing permissions to the server's service principal.
+
+### Mitigation
+
+- Confirm that the customer-managed key is present in the key vault.
+- Identify the key vault, then go to the key vault in the Azure portal.
+- Ensure that the key URI identifies a key that is present.
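+
+A sketch of how you might perform these checks from the Azure CLI; the server, resource group, vault, and key names are placeholders:
+
+```azurecli-interactive
+# Find which key the server is configured to use.
+az postgres server key list --name <server_name> --resource-group <resource_group>
+
+# Confirm the key exists and is enabled in the key vault.
+az keyvault key show --vault-name <vault_name> --name <key_name> --query attributes.enabled
+```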
+
+## Next steps
+
+[Use the Azure portal to set up data encryption with a customer-managed key on Azure Database for PostgreSQL](how-to-data-encryption-portal.md)
postgresql How To Data Encryption Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-data-encryption-validation.md
+
+ Title: How to ensure validation of the Azure Database for PostgreSQL - Data encryption
+description: Learn how to validate the encryption of the Azure Database for PostgreSQL - Data encryption using the customers managed key.
++++++ Last updated : 04/28/2020++
+# Validating data encryption for Azure Database for PostgreSQL
+
+This article helps you validate that data encryption using a customer-managed key for Azure Database for PostgreSQL is working as expected.
+
+## Check the encryption status
+
+### From portal
+
+1. If you want to verify that the customer's key is used for encryption, follow these steps:
+
+ * In the Azure portal, navigate to the **Azure Key Vault** -> **Keys**
+ * Select the key used for server encryption.
+   * Set the **Enabled** status of the key to **No**.
+
+   After some time (**~15 min**), the Azure Database for PostgreSQL server **Status** should be **Inaccessible**. Any I/O operation done against the server will fail, which validates that the server is indeed encrypted with the customer's key and that the key is currently not valid.
+
+   In order to make the server **Available** again, you can revalidate the key.
+
+   * Set the **Enabled** status of the key in the Key Vault back to **Yes**.
+   * On the server's **Data Encryption** pane, select **Revalidate key**.
+   * After the revalidation of the key is successful, the server **Status** changes to **Available**.
+
+2. In the Azure portal, if you can confirm that the encryption key is set, then the data is encrypted using the customer's key shown in the Azure portal.
+
+ :::image type="content" source="media/concepts-data-access-and-security-data-encryption/byok-validate.png" alt-text="Access policy overview":::
+
+### From CLI
+
+1. You can use the Azure CLI to validate the key resources being used for the Azure Database for PostgreSQL server.
+
+ ```azurecli-interactive
+ az postgres server key list --name '<server_name>' -g '<resource_group_name>'
+ ```
+
+   For a server without data encryption set, this command returns an empty set, `[]`.
+
+### Azure audit reports
+
+You can also review [Audit Reports](https://servicetrust.microsoft.com), which provide information about compliance with data protection standards and regulatory requirements.
+
+## Next steps
+
+To learn more about data encryption, see [Azure Database for PostgreSQL Single server data encryption with customer-managed key](concepts-data-encryption-postgresql.md).
postgresql How To Deny Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-deny-public-network-access.md
+
+ Title: Deny Public Network Access - Azure portal - Azure Database for PostgreSQL - Single server
+description: Learn how to configure Deny Public Network Access using Azure portal for your Azure Database for PostgreSQL Single server
++++++ Last updated : 03/10/2020++
+# Deny Public Network Access in Azure Database for PostgreSQL Single server using Azure portal
+
+This article describes how you can configure an Azure Database for PostgreSQL Single server to deny all public configurations and allow only connections through private endpoints to further enhance network security.
+
+## Prerequisites
+
+To complete this how-to guide, you need:
+
+* An [Azure Database for PostgreSQL Single server](quickstart-create-server-database-portal.md) with General Purpose or Memory Optimized pricing tier.
+
+## Set Deny Public Network Access
+
+Follow these steps to set PostgreSQL Single server Deny Public Network Access:
+
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for PostgreSQL Single server.
+
+1. On the PostgreSQL Single server page, under **Settings**, click **Connection security** to open the connection security configuration page.
+
+1. In **Deny Public Network Access**, select **Yes** to deny public network access for your PostgreSQL Single server.
+
+ :::image type="content" source="./media/how-to-deny-public-network-access/deny-public-network-access.PNG" alt-text="Azure Database for PostgreSQL Single server Deny network access":::
+
+1. Click **Save** to save the changes.
+
+1. A notification confirms that the connection security setting was successfully enabled.
+
+ :::image type="content" source="./media/how-to-deny-public-network-access/deny-public-network-access-success.png" alt-text="Azure Database for PostgreSQL Single server Deny network access success":::
+
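+The same setting can also be scripted. A minimal Azure CLI sketch, assuming placeholder server and resource group names:
+
+```azurecli-interactive
+# Deny all public network access; only private endpoint connections remain allowed.
+az postgres server update --resource-group <resource_group> --name <server_name> --public-network-access Disabled
+```
+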
+## Next steps
+
+Learn about [how to create alerts on metrics](how-to-alert-on-metric.md).
postgresql How To Deploy Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-deploy-github-action.md
+
+ Title: 'Quickstart: Connect to Azure PostgreSQL with GitHub Actions'
+description: Use Azure PostgreSQL from a GitHub Actions workflow
+++++++ Last updated : 10/12/2020++
+# Quickstart: Use GitHub Actions to connect to Azure PostgreSQL
+
+**APPLIES TO:** :::image type="icon" source="./media/applies-to/yes.png" border="false":::Azure Database for PostgreSQL - Single Server :::image type="icon" source="./media/applies-to/yes.png" border="false":::Azure Database for PostgreSQL - Flexible Server
+
+Get started with [GitHub Actions](https://docs.github.com/en/actions) by using a workflow to deploy database updates to [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/).
+
+## Prerequisites
+
+You will need:
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A GitHub repository with sample data (`data.sql`). If you don't have a GitHub account, [sign up for free](https://github.com/join).
+- An Azure Database for PostgreSQL server.
+ - [Quickstart: Create an Azure Database for PostgreSQL server in the Azure portal](quickstart-create-server-database-portal.md)
+
+## Workflow file overview
+
+A GitHub Actions workflow is defined by a YAML (.yml) file in the `/.github/workflows/` path in your repository. This definition contains the various steps and parameters that make up the workflow.
+
+The file has two sections:
+
+|Section |Tasks |
+|||
+|**Authentication** | 1. Define a service principal. <br /> 2. Create a GitHub secret. |
+|**Deploy** | 1. Deploy the database. |
+
+## Generate deployment credentials
+
+You can create a [service principal](../../active-directory/develop/app-objects-and-service-principals.md) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac&preserve-view=true) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
+
+Replace the placeholder `server-name` with the name of your PostgreSQL server hosted on Azure. Replace `subscription-id` and `resource-group` with the subscription ID and resource group connected to your PostgreSQL server.
+
+```azurecli-interactive
+ az ad sp create-for-rbac --name {server-name} --role contributor \
+ --scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group} \
+ --sdk-auth
+```
+
+The output is a JSON object with the role assignment credentials that provide access to your database, similar to the following. Copy this output JSON object for later.
+
+```output
+ {
+ "clientId": "<GUID>",
+ "clientSecret": "<GUID>",
+ "subscriptionId": "<GUID>",
+ "tenantId": "<GUID>",
+ (...)
+ }
+```
+
+> [!IMPORTANT]
+> It is always a good practice to grant minimum access. The scope in the previous example is limited to the specific server and not the entire resource group.
+
+## Copy the PostgreSQL connection string
+
+In the Azure portal, go to your Azure Database for PostgreSQL server and open **Settings** > **Connection strings**. Copy the **ADO.NET** connection string. Replace the placeholder values for `your_database` and `your_password`. The connection string will look similar to this.
+
+> [!IMPORTANT]
+> - For Single server, use ```user=adminusername@servername```. Note that the ```@servername``` is required.
+> - For Flexible server, use ```user=adminusername``` without the ```@servername```.
+
+```output
+psql host={servername.postgres.database.azure.com} port=5432 dbname={your_database} user={adminusername} password={your_database_password} sslmode=require
+```
+
+You will use the connection string as a GitHub secret.
+
+## Configure the GitHub secrets
+
+1. In [GitHub](https://github.com/), browse your repository.
+
+1. Select **Settings > Secrets > New secret**.
+
+1. Paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret the name `AZURE_CREDENTIALS`.
+
+ When you configure the workflow file later, you use the secret for the input `creds` of the Azure Login action. For example:
+
+ ```yaml
+ - uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+ ```
+
+1. Select **New secret** again.
+
+1. Paste the connection string value into the secret's value field. Give the secret the name `AZURE_POSTGRESQL_CONNECTION_STRING`.
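+
+   If you prefer to script this step, the GitHub CLI can set both secrets. A sketch, assuming `gh` is authenticated and the service principal JSON was saved to a local `creds.json` file:
+
+   ```shell
+   # Create the two repository secrets used by the workflow.
+   gh secret set AZURE_CREDENTIALS < creds.json
+   gh secret set AZURE_POSTGRESQL_CONNECTION_STRING --body "<your-connection-string>"
+   ```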
++
+## Add your workflow
+
+1. Go to **Actions** for your GitHub repository.
+
+2. Select **Set up your workflow yourself**.
+
+3. Delete everything after the `on:` section of your workflow file. For example, your remaining workflow may look like this.
+
+ ```yaml
+ name: CI
+
+ on:
+ push:
+ branches: [ master ]
+ pull_request:
+ branches: [ master ]
+ ```
+
+4. Rename your workflow `PostgreSQL for GitHub Actions` and add the checkout and login actions. These actions will check out your site code and authenticate with Azure using the `AZURE_CREDENTIALS` GitHub secret you created earlier.
+
+ ```yaml
+ name: PostgreSQL for GitHub Actions
+
+ on:
+ push:
+ branches: [ master ]
+ pull_request:
+ branches: [ master ]
+
+ jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v1
+ - uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+ ```
+
+5. Use the Azure PostgreSQL Deploy action to connect to your PostgreSQL instance. Replace `POSTGRESQL_SERVER_NAME` with the name of your server. You should have a PostgreSQL data file named `data.sql` at the root level of your repository.
+
+ ```yaml
+ - uses: azure/postgresql@v1
+ with:
+ connection-string: ${{ secrets.AZURE_POSTGRESQL_CONNECTION_STRING }}
+ server-name: POSTGRESQL_SERVER_NAME
+ sql-file: './data.sql'
+ ```
+
+6. Complete your workflow by adding an action to log out of Azure. Here is the completed workflow. The file will appear in the `.github/workflows` folder of your repository.
+
+ ```yaml
+ name: PostgreSQL for GitHub Actions
+
+ on:
+ push:
+ branches: [ master ]
+ pull_request:
+ branches: [ master ]
++
+ jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v1
+ - uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+
+ - uses: azure/postgresql@v1
+ with:
+ server-name: POSTGRESQL_SERVER_NAME
+ connection-string: ${{ secrets.AZURE_POSTGRESQL_CONNECTION_STRING }}
+ sql-file: './data.sql'
+
+ # Azure logout
+ - name: logout
+ run: |
+ az logout
+ ```
+
+## Review your deployment
+
+1. Go to **Actions** for your GitHub repository.
+
+1. Open the first result to see detailed logs of your workflow's run.
+
+ :::image type="content" source="media/how-to-deploy-github-action/gitbub-action-postgres-success.png" alt-text="Log of GitHub actions run":::
+
+## Clean up resources
+
+When your Azure PostgreSQL database and repository are no longer needed, clean up the resources you deployed by deleting the resource group and your GitHub repository.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about Azure and GitHub integration](/azure/developer/github/)
+<br/>
+> [!div class="nextstepaction"]
+> [Learn how to connect to the server](how-to-connect-query-guide.md)
postgresql How To Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-double-encryption.md
+
+ Title: Infrastructure double encryption - Azure portal - Azure Database for PostgreSQL
+description: Learn how to set up and manage Infrastructure double encryption for your Azure Database for PostgreSQL.
+++++ Last updated : 03/14/2021++
+# Infrastructure double encryption for Azure Database for PostgreSQL
+
+Learn how to set up and manage infrastructure double encryption for your Azure Database for PostgreSQL.
+
+## Prerequisites
+
+* You must have an Azure subscription and be an administrator on that subscription.
+
+## Create an Azure Database for PostgreSQL server with Infrastructure Double encryption - Portal
+
+Follow these steps to create an Azure Database for PostgreSQL server with Infrastructure double encryption from Azure portal:
+
+1. Select **Create a resource** (+) in the upper-left corner of the portal.
+
+2. Select **Databases** > **Azure Database for PostgreSQL**. You can also enter PostgreSQL in the search box to find the service. Select the **Single server** deployment option.
+
+ :::image type="content" source="./media/quickstart-create-database-portal/1-create-database.png" alt-text="The Azure Database for PostgreSQL in menu":::
+
+3. Provide the basic information for the server. Select **Additional settings**, and select the **Infrastructure double encryption** checkbox to set the parameter.
+
+ :::image type="content" source="./media/how-to-infrastructure-double-encryption/infrastructure-encryption-selected.png" alt-text="Azure Database for PostgreSQL selections":::
+
+4. Select **Review + create** to provision the server.
+
+ :::image type="content" source="./media/how-to-infrastructure-double-encryption/infrastructure-encryption-summary.png" alt-text="Azure Database for PostgreSQL summary":::
+
+5. Once the server is created, you can validate infrastructure double encryption by checking the status on the **Data encryption** server blade.
+
+   :::image type="content" source="./media/how-to-infrastructure-double-encryption/infrastructure-encryption-validation.png" alt-text="Azure Database for PostgreSQL validation":::
+
+## Create an Azure Database for PostgreSQL server with Infrastructure Double encryption - CLI
+
+Follow these steps to create an Azure Database for PostgreSQL server with Infrastructure double encryption from CLI:
+
+This example creates a resource group named `myresourcegroup` in the `westus` location.
+
+```azurecli-interactive
+az group create --name myresourcegroup --location westus
+```
+The following example creates a PostgreSQL 11 server in West US named `mydemoserver` in your resource group `myresourcegroup` with server admin login `myadmin`. This is a **Gen 4** **General Purpose** server with **2 vCores**. The command also enables infrastructure double encryption for the created server. Substitute the `<server_admin_password>` with your own value.
+
+```azurecli-interactive
+az postgres server create --resource-group myresourcegroup --name mydemoserver --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen4_2 --version 11 --infrastructure-encryption Enabled
+```
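+
+After the server is created, you can check whether infrastructure double encryption is enabled. A sketch, reusing the example server above; the property name is per the single server API:
+
+```azurecli-interactive
+# The infrastructureEncryption property reads Enabled or Disabled.
+az postgres server show --resource-group myresourcegroup --name mydemoserver --query infrastructureEncryption
+```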
+
+## Next steps
+
+To learn more about data encryption, see [Azure Database for PostgreSQL data Infrastructure double encryption](concepts-Infrastructure-double-encryption.md).
+
postgresql How To Manage Firewall Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-manage-firewall-using-portal.md
+
+ Title: Manage firewall rules - Azure portal - Azure Database for PostgreSQL - Single Server
+description: Create and manage firewall rules for Azure Database for PostgreSQL - Single Server using the Azure portal
+++++ Last updated : 5/6/2019++
+# Create and manage firewall rules for Azure Database for PostgreSQL - Single Server using the Azure portal
+Server-level firewall rules can be used to manage access to an Azure Database for PostgreSQL Server from a specified IP address or range of IP addresses.
+
+Virtual Network (VNet) rules can also be used to secure access to your server. Learn more about [creating and managing Virtual Network service endpoints and rules using the Azure portal](how-to-manage-vnet-using-portal.md).
+
+## Prerequisites
+To step through this how-to guide, you need:
+- A server: [Create an Azure Database for PostgreSQL](quickstart-create-server-database-portal.md)
+
+## Create a server-level firewall rule in the Azure portal
+1. On the PostgreSQL server page, under the **Settings** heading, click **Connection security** to open the Connection security page for the Azure Database for PostgreSQL.
+
+ :::image type="content" source="./media/how-to-manage-firewall-using-portal/1-connection-security.png" alt-text="Azure portal - click Connection Security":::
+
+2. Click **Add client IP** on the toolbar. This automatically creates a firewall rule with the public IP address of your computer, as perceived by the Azure system.
+
+ :::image type="content" source="./media/how-to-manage-firewall-using-portal/2-add-my-ip.png" alt-text="Azure portal - click Add My IP":::
+
+3. Verify your IP address before saving the configuration. In some situations, the IP address observed by Azure portal differs from the IP address used when accessing the internet and Azure servers. Therefore, you may need to change the Start IP and End IP to make the rule function as expected.
+ Use a search engine or other online tool to check your own IP address. For example, search for "what is my IP."
+
+ :::image type="content" source="./media/how-to-manage-firewall-using-portal/3-what-is-my-ip.png" alt-text="Bing search for What is my IP":::
+
+4. Add additional address ranges. In the firewall rules for the Azure Database for PostgreSQL, you can specify a single IP address, or a range of addresses. If you want to limit the rule to a single IP address, type the same address in the field for Start IP and End IP. Opening the firewall enables administrators, users, and applications to access any database on the PostgreSQL server to which they have valid credentials.
+
+ :::image type="content" source="./media/how-to-manage-firewall-using-portal/4-specify-addresses.png" alt-text="Azure portal - firewall rules":::
+
+5. Click **Save** on the toolbar to save this server-level firewall rule. Wait for the confirmation that the update to the firewall rules was successful.
+
+ :::image type="content" source="./media/how-to-manage-firewall-using-portal/5-save-firewall-rule.png" alt-text="Azure portal - click Save":::
+
+## Connecting from Azure
+To allow applications from Azure to connect to your Azure Database for PostgreSQL server, Azure connections must be enabled. For example, to host an Azure Web Apps application, or an application that runs in an Azure VM, or to connect from an Azure Data Factory data management gateway. The resources do not need to be in the same Virtual Network (VNet) or Resource Group for the firewall rule to enable those connections. When an application from Azure attempts to connect to your database server, the firewall verifies that Azure connections are allowed. There are a couple of methods to enable these types of connections. A firewall setting with starting and ending address equal to 0.0.0.0 indicates these connections are allowed. Alternatively, you can set the **Allow access to Azure services** option to **ON** in the portal from the **Connection security** pane and hit **Save**. If the connection attempt is not allowed, the request does not reach the Azure Database for PostgreSQL server.
+
+> [!IMPORTANT]
+> This option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
+>
+
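+For illustration, the equivalent rule can also be created from the Azure CLI. This is a sketch only; the rule name is an example and the server details are placeholders.
+
+```azurecli-interactive
+# A rule spanning 0.0.0.0 to 0.0.0.0 allows connections from Azure services.
+az postgres server firewall-rule create --resource-group <resource_group> --server-name <server_name> --name AllowAllAzureIPs --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
+```
+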
+## Manage existing server-level firewall rules through the Azure portal
+Repeat the steps to manage the firewall rules.
+* To add the current computer, click + **Add My IP**. Click **Save** to save the changes.
+* To add additional IP addresses, type in the Rule Name, Start IP Address, and End IP Address. Click **Save** to save the changes.
+* To modify an existing rule, click any of the fields in the rule and modify. Click **Save** to save the changes.
+* To delete an existing rule, click the ellipsis […] and click **Delete** to remove the rule. Click **Save** to save the changes.
+
+## Next steps
+- Similarly, you can script to [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](quickstart-create-server-database-azure-cli.md#configure-a-server-based-firewall-rule).
+- Further secure access to your server by [creating and managing Virtual Network service endpoints and rules using the Azure portal](how-to-manage-vnet-using-portal.md).
+- For help in connecting to an Azure Database for PostgreSQL server, see [Connection libraries for Azure Database for PostgreSQL](concepts-connection-libraries.md).
postgresql How To Manage Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-manage-server-cli.md
+
+ Title: Manage server - Azure CLI - Azure Database for PostgreSQL
+description: Learn how to manage an Azure Database for PostgreSQL server from the Azure CLI.
+++++ Last updated : 9/22/2020++
+# Manage an Azure Database for PostgreSQL Single server using the Azure CLI
+
+This article shows you how to manage your Single servers deployed in Azure. Management tasks include compute and storage scaling, admin password reset, and viewing server details.
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. This article requires that you're running the Azure CLI version 2.0 or later locally. To see the version installed, run the `az --version` command. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+You'll need to log in to your account using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to **Subscription ID** for your Azure account.
+
+```azurecli-interactive
+az login
+```
+
+Select the specific subscription under your account using the [az account set](/cli/azure/account) command. Make a note of the **id** value from the **az login** output to use as the value for the **subscription** argument in the command. If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. To list all your subscriptions, use [az account list](/cli/azure/account#az-account-list).
+
+```azurecli
+az account set --subscription <subscription id>
+```
+
+If you have not already created a server, refer to this [quickstart](quickstart-create-server-database-azure-cli.md) to create one.
++
+## Scale compute and storage
+
+You can scale up your pricing tier, compute, and storage easily by using the following command. You can see all the server operations you can perform in the [az postgres server overview](/cli/azure/postgres/server).
+
+```azurecli-interactive
+az postgres server update --resource-group myresourcegroup --name mydemoserver --sku-name GP_Gen5_4 --storage-size 6144
+```
+
+Here are the details for arguments above:
+
+**Setting** | **Sample value** | **Description**
+---|---|---
+name | mydemoserver | Enter a unique name for your Azure Database for PostgreSQL server. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain from 3 to 63 characters.
+resource-group | myresourcegroup | Provide the name of the Azure resource group.
+sku-name|GP_Gen5_4|Enter the name of the pricing tier and compute configuration. Follows the convention {pricing tier}_{compute generation}_{vCores} in shorthand. See the [pricing tiers](./concepts-pricing-tiers.md) for more information.
+storage-size | 6144 | The storage capacity of the server (unit is megabytes). Minimum 5120 and increases in 1024 increments.
+
+> [!Important]
+> - Storage can be scaled up (however, you cannot scale storage down)
+> - Scaling up from the Basic to the General Purpose or Memory Optimized pricing tier is not supported. You can manually scale up either by [using a bash script](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/upgrade-from-basic-to-general-purpose-or-memory-optimized-tiers/ba-p/830404) or by [using PostgreSQL Workbench](https://techcommunity.microsoft.com/t5/azure-database-support-blog/how-to-scale-up-azure-database-for-mysql-from-basic-tier-to/ba-p/369134)
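+
+After a scale operation completes, you can confirm the new configuration. A quick sketch, reusing the example names above:
+
+```azurecli-interactive
+# Show the server's current SKU and storage profile.
+az postgres server show --resource-group myresourcegroup --name mydemoserver --query "{sku:sku, storage:storageProfile}"
+```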
++
+## Manage PostgreSQL databases on a server
+You can use any of these commands to create, delete, list, and view the properties of a database on your server.
+
+| Cmdlet | Usage| Description |
+| | | |
+|[az postgres db create](/cli/azure/postgres/db#az-postgres-db-create)|```az postgres db create -g myresourcegroup -s mydemoserver -n mydatabasename``` |Creates a database|
+|[az postgres db delete](/cli/azure/postgres/db#az-postgres-db-delete)|```az postgres db delete -g myresourcegroup -s mydemoserver -n mydatabasename```|Deletes your database from your server. This command does not delete your server. |
+|[az postgres db list](/cli/azure/postgres/db#az-postgres-db-list)|```az postgres db list -g myresourcegroup -s mydemoserver```|Lists all the databases on the server|
+|[az postgres db show](/cli/azure/postgres/db#az-postgres-db-show)|```az postgres db show -g myresourcegroup -s mydemoserver -n mydatabasename```|Shows more details of the database|
+
+## Update admin password
+You can change the administrator role's password with this command
+```azurecli-interactive
+az postgres server update --resource-group myresourcegroup --name mydemoserver --admin-password <new-password>
+```
+
+> [!Important]
+> Make sure password is minimum 8 characters and maximum 128 characters.
+> Password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters.
+
+## Delete a server
+If you would just like to delete the PostgreSQL Single server, you can run the [az postgres server delete](/cli/azure/postgres/server#az-postgres-server-delete) command.
+
+```azurecli-interactive
+az postgres server delete --resource-group myresourcegroup --name mydemoserver
+```
+
+## Next steps
+- [Restart a server](how-to-restart-server-cli.md)
+- [Restore a server in a bad state](how-to-restore-server-cli.md)
+- [Monitor and tune the server](concepts-monitoring.md)
postgresql How To Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-manage-vnet-using-cli.md
+
+ Title: Use virtual network rules - Azure CLI - Azure Database for PostgreSQL - Single Server
+description: This article describes how to create and manage VNet service endpoints and rules for Azure Database for PostgreSQL using Azure CLI command line.
+++++
+ms.devlang: azurecli
+ Last updated : 01/26/2022 ++
+# Create and manage VNet service endpoints for Azure Database for PostgreSQL - Single Server using Azure CLI
+
+Virtual Network (VNet) service endpoints and rules extend the private address space of a Virtual Network to your Azure Database for PostgreSQL server. Using convenient Azure CLI commands, you can create, update, delete, list, and show VNet service endpoints and rules to manage your server. For an overview of Azure Database for PostgreSQL VNet service endpoints, including limitations, see [Azure Database for PostgreSQL Server VNet service endpoints](concepts-data-access-and-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for PostgreSQL.
+++
+> [!NOTE]
+> Support for VNet service endpoints is only for General Purpose and Memory Optimized servers. In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, please create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for PostgreSQL server.
+
+## Configure VNet service endpoints
+
+The [az network vnet](/cli/azure/network/vnet) commands are used to configure virtual networks. Service endpoints can be configured on virtual networks independently, by a user with write access to the virtual network.
+
+To secure Azure service resources to a VNet, the user must have permission to "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/" for the subnets being added. This permission is included in the built-in service administrator roles by default, and can be modified by creating custom roles.
+
+Learn more about [built-in roles](../../role-based-access-control/built-in-roles.md) and assigning specific permissions to [custom roles](../../role-based-access-control/custom-roles.md).
+
+VNets and Azure service resources can be in the same or different subscriptions. If the VNet and Azure service resources are in different subscriptions, the resources should be under the same Active Directory (AD) tenant. Ensure that both the subscriptions have the **Microsoft.Sql** resource provider registered. For more information, see [resource-manager-registration][resource-manager-portal].
+
+> [!IMPORTANT]
+> It is highly recommended to read this article about service endpoint configurations and considerations before running the sample script below, or configuring service endpoints. **Virtual Network service endpoint:** A [Virtual Network service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) is a subnet whose property values include one or more formal Azure service type names. VNet service endpoints use the service type name **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure SQL Database, Azure Database for PostgreSQL, and Azure Database for MySQL services. Note that applying the **Microsoft.Sql** service tag to a VNet service endpoint configures service endpoint traffic for all Azure Database services, including Azure SQL Database, Azure Database for PostgreSQL, and Azure Database for MySQL servers on the subnet.
+
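+Ahead of the full sample, the core flow looks roughly like the following sketch; all names are placeholders, and the virtual network and subnet are assumed to already exist.
+
+```azurecli-interactive
+# 1. Enable the Microsoft.Sql service endpoint on the subnet.
+az network vnet subnet update --resource-group <resource_group> --vnet-name <vnet_name> --name <subnet_name> --service-endpoints Microsoft.Sql
+
+# 2. Create a VNet rule on the PostgreSQL server for that subnet.
+az postgres server vnet-rule create --resource-group <resource_group> --server-name <server_name> --name <rule_name> --vnet-name <vnet_name> --subnet <subnet_name>
+```
+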
+## Sample script
++
+### Run the script
++
+## Clean up deployment
++
+ ```azurecli
+ echo "Cleaning up resources by removing the resource group..."
+ az group delete --name $resourceGroup -y
+ ```
+
+<!-- Link references, to text, Within this same GitHub repo. -->
+[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
postgresql How To Manage Vnet Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-manage-vnet-using-portal.md
+
+ Title: Use virtual network rules - Azure portal - Azure Database for PostgreSQL - Single Server
+description: Create and manage VNet service endpoints and rules Azure Database for PostgreSQL - Single Server using the Azure portal
+++++ Last updated : 5/6/2019+
+# Create and manage VNet service endpoints and VNet rules in Azure Database for PostgreSQL - Single Server by using the Azure portal
+Virtual Network (VNet) service endpoints and rules extend the private address space of a Virtual Network to your Azure Database for PostgreSQL server. For an overview of Azure Database for PostgreSQL VNet service endpoints, including limitations, see [Azure Database for PostgreSQL Server VNet service endpoints](concepts-data-access-and-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for PostgreSQL.
+
+> [!NOTE]
+> Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
+> In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, please create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for PostgreSQL server.
++
+## Create a VNet rule and enable service endpoints in the Azure portal
+
+1. On the PostgreSQL server page, under the Settings heading, click **Connection Security** to open the Connection Security pane for Azure Database for PostgreSQL.
+
+2. Ensure that the **Allow access to Azure services** control is set to **OFF**.
+
+> [!Important]
+> If you leave the control set to ON, your Azure PostgreSQL Database server accepts communication from any subnet. Leaving the control set to ON might be excessive access from a security point of view. The Microsoft Azure Virtual Network service endpoint feature, in coordination with the virtual network rule feature of Azure Database for PostgreSQL, together can reduce your security surface area.
+
+3. Next, click **+ Add existing virtual network**. If you do not have an existing VNet, you can click **+ Create new virtual network** to create one. See [Quickstart: Create a virtual network using the Azure portal](../../virtual-network/quick-create-portal.md)
+
+ :::image type="content" source="./media/how-to-manage-vnet-using-portal/1-connection-security.png" alt-text="Azure portal - click Connection security":::
+
+4. Enter a VNet rule name, select the subscription, virtual network, and subnet name, and then click **Enable**. This automatically enables VNet service endpoints on the subnet using the **Microsoft.Sql** service tag.
+
+ :::image type="content" source="./media/how-to-manage-vnet-using-portal/2-configure-vnet.png" alt-text="Azure portal - configure VNet":::
+
+ The account must have the necessary permissions to create a virtual network and service endpoint.
+
+ Service endpoints can be configured on virtual networks independently, by a user with write access to the virtual network.
+
+   To secure Azure service resources to a VNet, the user must have permission to "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/" for the subnets being added. This permission is included in the built-in service administrator roles by default, and can be modified by creating custom roles.
+
+ Learn more about [built-in roles](../../role-based-access-control/built-in-roles.md) and assigning specific permissions to [custom roles](../../role-based-access-control/custom-roles.md).
+
+ VNets and Azure service resources can be in the same or different subscriptions. If the VNet and Azure service resources are in different subscriptions, the resources should be under the same Active Directory (AD) tenant. Ensure that both subscriptions have the **Microsoft.Sql** resource provider registered. For more information, see [resource provider registration][resource-manager-portal].
+
+ > [!IMPORTANT]
+ > It is highly recommended to read about service endpoint configurations and considerations before you configure service endpoints. **Virtual Network service endpoint:** A [Virtual Network service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) is a subnet whose property values include one or more formal Azure service type names. VNet service endpoints use the service type name **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure SQL Database, Azure Database for PostgreSQL, and Azure Database for MySQL services. Applying the **Microsoft.Sql** service tag to a VNet service endpoint configures service endpoint traffic for all three of these database services on the subnet.
+ >
+
+5. Once enabled, click **OK** and you will see that VNet service endpoints are enabled along with a VNet rule.
+
+ :::image type="content" source="./media/how-to-manage-vnet-using-portal/3-vnet-service-endpoints-enabled-vnet-rule-created.png" alt-text="VNet service endpoints enabled and VNet rule created":::
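+
+If you later need to script the same configuration, the sketch below shows an Azure CLI equivalent of these portal steps. The resource names (`myresourcegroup`, `mydemoserver`, `myvnet`, `mysubnet`, `myvnetrule`) are placeholders, not values from this article:
+
+```bash
+# Enable the Microsoft.Sql service endpoint on the subnet.
+az network vnet subnet update \
+    --resource-group myresourcegroup \
+    --vnet-name myvnet \
+    --name mysubnet \
+    --service-endpoints Microsoft.Sql
+
+# Create the VNet rule on the PostgreSQL server for that subnet.
+az postgres server vnet-rule create \
+    --resource-group myresourcegroup \
+    --server-name mydemoserver \
+    --name myvnetrule \
+    --vnet-name myvnet \
+    --subnet mysubnet
+```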
+
+## Next steps
+- Similarly, you can script [Enable VNet service endpoints and create a VNet rule for Azure Database for PostgreSQL using Azure CLI](how-to-manage-vnet-using-cli.md).
+- For help in connecting to an Azure Database for PostgreSQL server, see [Connection libraries for Azure Database for PostgreSQL](./concepts-connection-libraries.md).
+
+<!-- Link references, to text, Within this same GitHub repo. -->
+[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
postgresql How To Migrate From Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-migrate-from-oracle.md
+
+ Title: "Oracle to Azure Database for PostgreSQL: Migration guide"
+description: This guide helps you to migrate your Oracle schema to Azure Database for PostgreSQL.
+Last updated: 03/18/2021
+# Migrate Oracle to Azure Database for PostgreSQL
+
+This guide helps you to migrate your Oracle schema to Azure Database for PostgreSQL.
+
+For detailed and comprehensive migration guidance, see the [Migration guide resources](https://github.com/microsoft/OrcasNinjaTeam/blob/master/Oracle%20to%20PostgreSQL%20Migration%20Guide/Oracle%20to%20Azure%20Database%20for%20PostgreSQL%20Migration%20Guide.pdf).
+
+## Prerequisites
+
+To migrate your Oracle schema to Azure Database for PostgreSQL, you need to:
+
+- Verify your source environment is supported.
+- Download the latest version of [ora2pg](https://ora2pg.darold.net/).
+- Have the latest version of the [DBD module](https://www.cpan.org/modules/by-module/DBD/).
+
+## Overview
+
+PostgreSQL is one of the world's most advanced open-source databases. This article describes how to use the free ora2pg tool to migrate an Oracle database to PostgreSQL. You can use ora2pg to migrate an Oracle or MySQL database to a PostgreSQL-compatible schema.
+
+The ora2pg tool connects to your Oracle database, scans it automatically, and extracts its structure or data. Then ora2pg generates SQL scripts that you can load into your PostgreSQL database. You can use ora2pg for tasks such as reverse-engineering an Oracle database, migrating a huge enterprise database, or simply replicating some Oracle data into a PostgreSQL database. The tool is easy to use and requires no Oracle database knowledge besides the ability to provide the parameters needed to connect to the Oracle database.
+
+> [!NOTE]
+> For more information about using the latest version of ora2pg, see the [ora2pg documentation](https://ora2pg.darold.net/documentation.html).
+
+### Typical ora2pg migration architecture
+
+![Screenshot of the ora2pg migration architecture.](media/how-to-migrate-from-oracle/ora2pg-migration-architecture.png)
+
+After you provision the VM and Azure Database for PostgreSQL, you need two configurations to enable connectivity between them: **Allow access to Azure services** and **Enforce SSL Connection**:
+
+- **Connection Security** blade > **Allow access to Azure services** > **ON**
+
+- **Connection Security** blade > **SSL Settings** > **Enforce SSL Connection** > **DISABLED**
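+
+If you'd rather script these two settings than click through the portal, here's a hedged Azure CLI sketch; the server and resource group names are placeholders:
+
+```bash
+# Allow access to Azure services (the special 0.0.0.0 firewall rule).
+az postgres server firewall-rule create \
+    --resource-group myresourcegroup \
+    --server-name mydemoserver \
+    --name AllowAllAzureIps \
+    --start-ip-address 0.0.0.0 \
+    --end-ip-address 0.0.0.0
+
+# Disable SSL enforcement so ora2pg can connect without TLS options.
+az postgres server update \
+    --resource-group myresourcegroup \
+    --name mydemoserver \
+    --ssl-enforcement Disabled
+```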
+
+### Recommendations
+
+- To improve the performance of the assessment or export operations in the Oracle server, collect statistics:
+
+ ```
+ BEGIN
+   -- Replace SCHEMA_NAME with the schema you plan to migrate.
+   DBMS_STATS.GATHER_SCHEMA_STATS('SCHEMA_NAME');
+   DBMS_STATS.GATHER_DATABASE_STATS;
+   DBMS_STATS.GATHER_DICTIONARY_STATS;
+ END;
+ /
+ ```
+
+- Export data by using the `COPY` command instead of `INSERT`.
+
+- Avoid exporting tables with their foreign keys (FKs), constraints, and indexes. These elements slow down the process of importing data into PostgreSQL.
+
+- Create materialized views by using the *no data clause*. Then refresh the views later.
+
+- If possible, use unique indexes in materialized views. These indexes can speed up the refresh when you use the syntax `REFRESH MATERIALIZED VIEW CONCURRENTLY`, as shown in the sketch after this list.
+
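+The following psql sketch illustrates the materialized view pattern. The connection string, view definition, and index name are illustrative only; note that the first refresh must run without `CONCURRENTLY`, because a never-populated view can't be refreshed concurrently:
+
+```bash
+psql "host=mydemoserver.postgres.database.azure.com dbname=mypgsqldb user=mylogin@mydemoserver" <<'SQL'
+-- Create the view definition without populating it (fast).
+CREATE MATERIALIZED VIEW mv_orders AS
+    SELECT order_id, customer_id, order_total FROM orders
+    WITH NO DATA;
+
+-- A unique index lets later refreshes run CONCURRENTLY.
+CREATE UNIQUE INDEX mv_orders_uq ON mv_orders (order_id);
+
+-- The first refresh populates the view; subsequent refreshes can use
+-- REFRESH MATERIALIZED VIEW CONCURRENTLY mv_orders;
+REFRESH MATERIALIZED VIEW mv_orders;
+SQL
+```
+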
+## Pre-migration
+
+After you verify that your source environment is supported and that you've addressed any prerequisites, you're ready to start the premigration stage. To begin:
+
+1. **Discover**: Inventory the databases that you need to migrate.
+2. **Assess**: Assess those databases for potential migration issues or blockers.
+3. **Convert**: Resolve any items you uncovered.
+
+For heterogeneous migrations such as Oracle to Azure Database for PostgreSQL, this stage also involves making the source database schemas compatible with the target environment.
+
+### Discover
+
+The goal of the discovery phase is to identify existing data sources and details about the features that are being used. This phase helps you better understand and plan for the migration. The process involves scanning the network to identify all your organization's Oracle instances together with the version and features in use.
+
+Microsoft pre-assessment scripts for Oracle run against the Oracle database and query its metadata. The scripts provide:
+
+- A database inventory, including counts of objects by schema, type, and status.
+- A rough estimate of the raw data in each schema, based on statistics.
+- The size of tables in each schema.
+- The number of code lines per package, function, procedure, and so on.
+
+Download the related scripts from [GitHub](https://github.com/microsoft/DataMigrationTeam/tree/master/Whitepapers).
+
+### Assess
+
+After you inventory the Oracle databases, you'll have an idea of the database size and potential challenges. The next step is to run the assessment.
+
+Estimating the cost of a migration from Oracle to PostgreSQL isn't easy. To assess the migration cost, ora2pg checks all database objects, functions, and stored procedures for objects and PL/SQL code that it can't automatically convert.
+
+The ora2pg tool has a content analysis mode that inspects the Oracle database to generate a text report. The report describes what the Oracle database contains and what can't be exported.
+
+To activate the *analysis and report* mode, use the export type `SHOW_REPORT`, as shown in the following command:
+
+```
+ora2pg -t SHOW_REPORT
+```
+
+The ora2pg tool can convert SQL and PL/SQL code from Oracle syntax to PostgreSQL. So after the database is analyzed, ora2pg can estimate the code difficulties and the time necessary to migrate a full database.
+
+To estimate the migration cost in human-days, ora2pg allows you to use a configuration directive called `ESTIMATE_COST`. You can also enable this directive at a command prompt:
+
+```
+ora2pg -t SHOW_REPORT --estimate_cost
+```
+
+The default migration unit represents around five minutes for a PostgreSQL expert. If this migration is your first, you can increase the default migration unit by using the configuration directive `COST_UNIT_VALUE` or the `--cost_unit_value` command-line option.
+
+The last line of the report shows the total estimated migration cost in human-days. The estimate is derived from the number of migration units estimated for each object.
+
+In the following code example, you see some assessment variations:
+* Tables assessment
+* Columns assessment
+* Schema assessment that uses a default cost unit of 5 minutes
+* Schema assessment that uses a cost unit of 10 minutes
+
+```
+ora2pg -t SHOW_TABLE -c c:\ora2pg\ora2pg_hr.conf > c:\ts303\hr_migration\reports\tables.txt
+ora2pg -t SHOW_COLUMN -c c:\ora2pg\ora2pg_hr.conf > c:\ts303\hr_migration\reports\columns.txt
+ora2pg -t SHOW_REPORT -c c:\ora2pg\ora2pg_hr.conf --dump_as_html --estimate_cost > c:\ts303\hr_migration\reports\report.html
+ora2pg -t SHOW_REPORT -c c:\ora2pg\ora2pg_hr.conf --cost_unit_value 10 --dump_as_html --estimate_cost > c:\ts303\hr_migration\reports\report2.html
+```
+
+Here's example output for a schema assessment at migration level B-5:
+
+* Migration levels:
+
+ * A - Migration that can be run automatically
+
+ * B - Migration with code rewrite and a human-days cost up to 5 days
+
+ * C - Migration with code rewrite and a human-days cost over 5 days
+
+* Technical levels:
+
+ * 1 = Trivial: No stored functions and no triggers
+
+ * 2 = Easy: No stored functions, but triggers; no manual rewriting
+
+ * 3 = Simple: Stored functions and/or triggers; no manual rewriting
+
+ * 4 = Manual: No stored functions, but triggers or views with code rewriting
+
+ * 5 = Difficult: Stored functions and/or triggers with code rewriting
+
+The assessment consists of:
+* A letter (A, B, or C) to specify whether the migration needs manual rewriting.
+
+* A number from 1 to 5 to indicate the technical difficulty.
+
+Another option, `--human_days_limit`, specifies the limit of human-days. Here, set the migration level to C to indicate that the migration needs a large amount of work, full project management, and migration support. The default is 10 human-days. You can use the configuration directive `HUMAN_DAYS_LIMIT` to change this default value permanently.
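+
+For example, here's a hedged variant of the earlier report command that raises the level-C threshold to 20 human-days (the paths reuse the earlier examples):
+
+```bash
+ora2pg -t SHOW_REPORT -c c:\ora2pg\ora2pg_hr.conf --estimate_cost --human_days_limit 20 --dump_as_html > c:\ts303\hr_migration\reports\report3.html
+```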
+
+This schema assessment was developed to help users decide which database to migrate first and which teams to mobilize.
+
+### Convert
+
+
+In this step of the migration, the Oracle code and DDL scripts are converted or translated to PostgreSQL. The ora2pg tool exports the Oracle objects in a PostgreSQL format automatically. Some of the generated objects can't be compiled in the PostgreSQL database without manual changes.
+
+To understand which elements need manual intervention, first compile the files generated by ora2pg against the PostgreSQL database. Check the log, and then make any necessary changes until the schema structure is compatible with PostgreSQL syntax.
++
+#### Create a migration template
+
+We recommend using the migration template that ora2pg provides. When you use the options `--project_base` and `--init_project`, ora2pg creates a project template with a work tree, a configuration file, and a script to export all objects from the Oracle database. For more information, see the [ora2pg documentation](https://ora2pg.darold.net/documentation.html).
+
+Use the following command:
+
+```
+ora2pg --project_base /app/migration/ --init_project test_project
+```
+
+Here's the example output:
+
+```
+ora2pg --project_base /app/migration/ --init_project test_project
+ Creating project test_project.
+ /app/migration/test_project/
+ schema/
+ dblinks/
+ directories/
+ functions/
+ grants/
+ mviews/
+ packages/
+ partitions/
+ procedures/
+ sequences/
+ synonyms/
+ tables/
+ tablespaces/
+ triggers/
+ types/
+ views/
+ sources/
+ functions/
+ mviews/
+ packages/
+ partitions/
+ procedures/
+ triggers/
+ types/
+ views/
+ data/
+ config/
+ reports/
+
+ Generating generic configuration file
+ Creating script export_schema.sh to automate all exports.
+ Creating script import_all.sh to automate all imports.
+```
+
+The `sources/` directory contains the Oracle code. The `schema/` directory contains the code ported to PostgreSQL. And the `reports/` directory contains the HTML reports and the migration cost assessment.
++
+After the project structure is created, a generic config file is created. Define the Oracle database connection and the relevant config parameters in the config file. For more information about the config file, see the [ora2pg documentation](https://ora2pg.darold.net/documentation.html).
++
+#### Export Oracle objects
+
+Next, export the Oracle objects as PostgreSQL objects by running the file *export_schema.sh*.
+
+```
+cd /app/migration/mig_project
+./export_schema.sh
+```
+
+Alternatively, run the following commands manually:
+
+```
+SET namespace="/app/migration/mig_project"
+
+ora2pg -p -t DBLINK -o dblink.sql -b %namespace%/schema/dblinks -c %namespace%/config/ora2pg.conf
+ora2pg -p -t DIRECTORY -o directory.sql -b %namespace%/schema/directories -c %namespace%/config/ora2pg.conf
+ora2pg -p -t FUNCTION -o functions2.sql -b %namespace%/schema/functions -c %namespace%/config/ora2pg.conf
+ora2pg -p -t GRANT -o grants.sql -b %namespace%/schema/grants -c %namespace%/config/ora2pg.conf
+ora2pg -p -t MVIEW -o mview.sql -b %namespace%/schema/mviews -c %namespace%/config/ora2pg.conf
+ora2pg -p -t PACKAGE -o packages.sql -b %namespace%/schema/packages -c %namespace%/config/ora2pg.conf
+ora2pg -p -t PARTITION -o partitions.sql -b %namespace%/schema/partitions -c %namespace%/config/ora2pg.conf
+ora2pg -p -t PROCEDURE -o procs.sql -b %namespace%/schema/procedures -c %namespace%/config/ora2pg.conf
+ora2pg -p -t SEQUENCE -o sequences.sql -b %namespace%/schema/sequences -c %namespace%/config/ora2pg.conf
+ora2pg -p -t SYNONYM -o synonym.sql -b %namespace%/schema/synonyms -c %namespace%/config/ora2pg.conf
+ora2pg -p -t TABLE -o table.sql -b %namespace%/schema/tables -c %namespace%/config/ora2pg.conf
+ora2pg -p -t TABLESPACE -o tablespaces.sql -b %namespace%/schema/tablespaces -c %namespace%/config/ora2pg.conf
+ora2pg -p -t TRIGGER -o triggers.sql -b %namespace%/schema/triggers -c %namespace%/config/ora2pg.conf
+ora2pg -p -t TYPE -o types.sql -b %namespace%/schema/types -c %namespace%/config/ora2pg.conf
+ora2pg -p -t VIEW -o views.sql -b %namespace%/schema/views -c %namespace%/config/ora2pg.conf
+```
+
+To extract the data, use the following command.
+
+```
+ora2pg -t COPY -o data.sql -b %namespace%/data -c %namespace%/config/ora2pg.conf
+```
+
+#### Compile files
+
+Finally, compile all files against the Azure Database for PostgreSQL server. You can choose to load the generated DDL files manually or use the second script *import_all.sh* to import those files interactively.
+
+```
+psql -f %namespace%\schema\sequences\sequence.sql -h server1-server.postgres.database.azure.com -p 5432 -U username@server1-server -d database -L %namespace%\schema\sequences\create_sequences.log
+
+psql -f %namespace%\schema\tables\table.sql -h server1-server.postgres.database.azure.com -p 5432 -U username@server1-server -d database -L %namespace%\schema\tables\create_table.log
+```
+
+Here's the data import command:
+
+```
+psql -f %namespace%\data\table1.sql -h server1-server.postgres.database.azure.com -p 5432 -U username@server1-server -d database -L %namespace%\data\table1.log
+
+psql -f %namespace%\data\table2.sql -h server1-server.postgres.database.azure.com -p 5432 -U username@server1-server -d database -L %namespace%\data\table2.log
+```
+
+While the files are being compiled, check the logs and correct any syntax that ora2pg couldn't convert on its own.
+
+For more information, see [Oracle to Azure Database for PostgreSQL migration workarounds](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20Azure%20Database%20for%20PostgreSQL%20Migration%20Workarounds.pdf).
+
+## Migrate
+
+After you have the necessary prerequisites and you've completed the premigration steps, you can start the schema and data migration.
+
+### Migrate schema and data
+
+When you've made the necessary fixes, a stable build of the database is ready to deploy. Run the `psql` import commands, pointing to the files that contain the modified code. This task compiles the database objects against the PostgreSQL database and imports the data.
+
+In this step, you can implement a level of parallelism on importing the data.
+
+### Sync data and cut over
+
+In online (minimal-downtime) migrations, the migration source continues to change. It drifts from the target in terms of data and schema after the one-time migration.
+
+During the *Data sync* phase, ensure that all changes in the source are captured and applied to the target in near real time. After you verify that all changes are applied, you can cut over from the source to the target environment.
+
+To do an online migration, contact AskAzureDBforPostgreSQL@service.microsoft.com for support.
+
+In a *delta/incremental* migration that uses ora2pg, for each table, use a query that filters (*cuts*) by date, time, or another parameter. Then finish the migration by using a second query that migrates the remaining data.
+
+In the source data table, migrate all the historical data first. Here's an example:
+
+```
+select * from table1 where filter_data < DATE '2019-01-01';
+```
+
+You can query the changes since the initial migration by running a command like this one:
+
+```
+select * from table1 where filter_data >= DATE '2019-01-01';
+```
+
+In this case, we recommend that you enhance validation by checking data parity on both sides, the source and the target.
+
+## Post-migration
+
+After the *Migration* stage, complete the post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
+
+### Remediate applications
+
+After the data is migrated to the target environment, all the applications that formerly consumed the source need to start consuming the target. The setup sometimes requires changes to the applications.
+
+### Test
+
+After the data is migrated to the target, run tests against the databases to verify that the applications work well with the target. Make sure the source and target are properly migrated by running the manual data validation scripts against the Oracle source and PostgreSQL target databases.
+
+Ideally, if the source and target databases have a networking path, ora2pg should be used for data validation. You can use the `TEST` action to ensure that all objects from the Oracle database have been created in PostgreSQL.
+
+Run this command:
+
+```
+ora2pg -t TEST -c config/ora2pg.conf > migration_diff.txt
+```
+
+### Optimize
+
+The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness. In this phase, you also address performance issues with the workload.
+
+## Migration assets
+
+For more information about this migration scenario, see the following resources. They support real-world migration project engagement.
+
+| Resource | Description |
+| --- | --- |
+| [Oracle to Azure PostgreSQL migration cookbook](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20Azure%20PostgreSQL%20Migration%20Cookbook.pdf) | This document helps architects, consultants, database administrators, and related roles quickly migrate workloads from Oracle to Azure Database for PostgreSQL by using ora2pg. |
+| [Oracle to Azure PostgreSQL migration workarounds](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20Azure%20Database%20for%20PostgreSQL%20Migration%20Workarounds.pdf) | This document helps architects, consultants, database administrators, and related roles quickly fix or work around issues while migrating workloads from Oracle to Azure Database for PostgreSQL. |
+| [Steps to install ora2pg on Windows or Linux](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Steps%20to%20Install%20ora2pg%20on%20Windows%20and%20Linux.pdf) | This document provides a quick installation guide for migrating schema and data from Oracle to Azure Database for PostgreSQL by using ora2pg on Windows or Linux. For more information, see the [ora2pg documentation](http://ora2pg.darold.net/documentation.html). |
+
+The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to the Microsoft Azure data platform.
+
+## More support
+
+For migration help beyond the scope of ora2pg tooling, contact the team at [AskAzureDBforPostgreSQL@service.microsoft.com](mailto:AskAzureDBforPostgreSQL@service.microsoft.com).
+
+## Next steps
+
+For a matrix of services and tools for database and data migration and for specialty tasks, see [Services and tools for data migration](../../dms/dms-tools-matrix.md).
+
+Documentation:
+- [Azure Database for PostgreSQL documentation](../index.yml)
+- [ora2pg documentation](https://ora2pg.darold.net/documentation.html)
+- [PostgreSQL website](https://www.postgresql.org/)
+- [Autonomous transaction support in PostgreSQL](http://blog.dalibo.com/2016/08/19/Autonoumous_transactions_support_in_PostgreSQL.html) 
postgresql How To Migrate Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-migrate-online.md
+
+ Title: Minimal-downtime migration to Azure Database for PostgreSQL - Single Server
+description: This article describes how to perform a minimal-downtime migration of a PostgreSQL database to Azure Database for PostgreSQL - Single Server by using the Azure Database Migration Service.
+Last updated: 5/6/2019
+# Minimal-downtime migration to Azure Database for PostgreSQL - Single Server
+
+You can perform PostgreSQL migrations to Azure Database for PostgreSQL with minimal downtime by using the newly introduced **continuous sync capability** for the [Azure Database Migration Service](https://aka.ms/get-dms) (DMS). This functionality limits the amount of downtime that is incurred by the application.
+
+## Overview
+Azure DMS performs an initial load of your on-premises database to Azure Database for PostgreSQL, and then continuously syncs any new transactions to Azure while the application remains running. After the data catches up on the target Azure side, you stop the application for a brief moment (minimal downtime), wait for the last batch of data (from the time you stop the application until the application is effectively unavailable to take any new traffic) to catch up on the target, and then update your connection string to point to Azure. When you're finished, your application will be live on Azure!
+
+## Next steps
+- View the video [App Modernization with Microsoft Azure](https://medius.studios.ms/Embed/Video/BRK2102?sid=BRK2102), which contains a demo showing how to migrate PostgreSQL apps to Azure Database for PostgreSQL.
+- See the tutorial [Migrate PostgreSQL to Azure Database for PostgreSQL online using DMS](../../dms/tutorial-postgresql-azure-postgresql-online.md).
postgresql How To Migrate Using Dump And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-migrate-using-dump-and-restore.md
+
+ Title: Dump and restore - Azure Database for PostgreSQL - Single Server
+description: You can extract a PostgreSQL database into a dump file. Then, you can restore from a file created by pg_dump in Azure Database for PostgreSQL Single Server.
+Last updated: 09/22/2020
+# Migrate your PostgreSQL database by using dump and restore
+
+You can use [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) to extract a PostgreSQL database into a dump file. Then use [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html) to restore the PostgreSQL database from an archive file created by `pg_dump`.
+
+## Prerequisites
+
+To step through this how-to guide, you need:
+- An [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md), including firewall rules to allow access.
+- [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html) command-line utilities installed.
+
+## Create a dump file that contains the data to be loaded
+
+To back up an existing PostgreSQL database on-premises or in a VM, run the following command:
+
+```bash
+pg_dump -Fc -v --host=<host> --username=<name> --dbname=<database name> -f <database>.dump
+```
+For example, if you have a local server and a database called **testdb** in it, run:
+
+```bash
+pg_dump -Fc -v --host=localhost --username=masterlogin --dbname=testdb -f testdb.dump
+```
+
+## Restore the data into the target database
+
+After you've created the target database, you can use the `pg_restore` command and the `--dbname` parameter to restore the data into the target database from the dump file.
+
+```bash
+pg_restore -v --no-owner --host=<server name> --port=<port> --username=<user-name> --dbname=<target database name> <database>.dump
+```
+
+Including the `--no-owner` parameter causes all objects created during the restore to be owned by the user specified with `--username`. For more information, see the [PostgreSQL documentation](https://www.postgresql.org/docs/9.6/static/app-pgrestore.html).
+
+> [!NOTE]
+> On Azure Database for PostgreSQL servers, TLS/SSL connections are on by default. If your PostgreSQL server requires TLS/SSL connections, but doesn't have them, set an environment variable `PGSSLMODE=require` so that the pg_restore tool connects with TLS. Without TLS, the error might read: "FATAL: SSL connection is required. Please specify SSL options and retry." In the Windows command line, run the command `SET PGSSLMODE=require` before running the `pg_restore` command. In Linux or Bash, run the command `export PGSSLMODE=require` before running the `pg_restore` command.
+>
+
+In this example, restore the data from the dump file **testdb.dump** into the database **mypgsqldb**, on target server **mydemoserver.postgres.database.azure.com**.
+
+Here's an example of how to use `pg_restore` for Single Server:
+
+```bash
+pg_restore -v --no-owner --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin@mydemoserver --dbname=mypgsqldb testdb.dump
+```
+
+Here's an example of how to use `pg_restore` for Flexible Server:
+
+```bash
+pg_restore -v --no-owner --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin --dbname=mypgsqldb testdb.dump
+```
+
+## Optimize the migration process
+
+One way to migrate your existing PostgreSQL database to Azure Database for PostgreSQL is to back up the database on the source and restore it in Azure. To minimize the time required to complete the migration, consider using the following parameters with the backup and restore commands.
+
+> [!NOTE]
+> For detailed syntax information, see [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html).
+>
+
+### For the backup
+
+Take the backup with the `-Fc` switch, so that you can perform the restore in parallel to speed it up. For example:
+
+```bash
+pg_dump -h my-source-server-name -U source-server-username -Fc -d source-databasename -f Z:\Data\Backups\my-database-backup.dump
+```
+
+### For the restore
+
+- Move the backup file to an Azure VM in the same region as the Azure Database for PostgreSQL server you are migrating to. Perform the `pg_restore` from that VM to reduce network latency. Create the VM with [accelerated networking](../../virtual-network/create-vm-accelerated-networking-powershell.md) enabled.
+
+- Open the dump file to verify that the create index statements appear after the data is inserted. If that isn't the case, move the create index statements after the data is inserted. This should already be done by default, but it's a good idea to confirm.
+
+- Restore with the switches `-Fc` and `-j` (with a number) to parallelize the restore. The number you specify is the number of cores on the target server. You can also set it to twice the number of cores of the target server to see the impact.
+
+ Here's an example of how to use `pg_restore` for Single Server:
+
+ ```bash
+ pg_restore -h my-target-server.postgres.database.azure.com -U azure-postgres-username@my-target-server -Fc -j 4 -d my-target-databasename Z:\Data\Backups\my-database-backup.dump
+ ```
+
+ Here's an example of how to use `pg_restore` for Flexible Server:
+
+ ```bash
+ pg_restore -h my-target-server.postgres.database.azure.com -U azure-postgres-username -Fc -j 4 -d my-target-databasename Z:\Data\Backups\my-database-backup.dump
+ ```
+
+- You can also edit the dump file by adding the command `set synchronous_commit = off;` at the beginning, and the command `set synchronous_commit = on;` at the end. Failing to turn it back on at the end, before the apps change the data, might result in subsequent data loss.
+
+- On the target Azure Database for PostgreSQL server, consider doing the following before the restore:
+
+ - Turn off query performance tracking. These statistics aren't needed during the migration. You can do this by setting `pg_stat_statements.track`, `pg_qs.query_capture_mode`, and `pgms_wait_sampling.query_capture_mode` to `NONE`. A scripted sketch follows this list.
+
+ - Use a high compute and high memory SKU, like 32 vCore Memory Optimized, to speed up the migration. You can easily scale back down to your preferred SKU after the restore is complete. The higher the SKU, the more parallelism you can achieve by increasing the corresponding `-j` parameter in the `pg_restore` command.
+
+ - More IOPS on the target server might improve the restore performance. You can provision more IOPS by increasing the server's storage size. This setting isn't reversible, but consider whether a higher IOPS would benefit your actual workload in the future.
+
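+Here's a hedged Azure CLI sketch for turning those trackers off before the restore; the server and resource group names are placeholders, and you should restore your original values afterward:
+
+```bash
+# Disable query stat tracking for the duration of the restore.
+for p in pg_stat_statements.track pg_qs.query_capture_mode pgms_wait_sampling.query_capture_mode; do
+    az postgres server configuration set \
+        --resource-group myresourcegroup \
+        --server-name mydemoserver \
+        --name "$p" --value NONE
+done
+```
+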
+Remember to test and validate these commands in a test environment before you use them in production.
+
+## Next steps
+
+- To migrate a PostgreSQL database by using export and import, see [Migrate your PostgreSQL database using export and import](how-to-migrate-using-export-and-import.md).
+- For more information about migrating databases to Azure Database for PostgreSQL, see the [Database Migration Guide](/data-migration/).
postgresql How To Migrate Using Export And Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-migrate-using-export-and-import.md
+
+ Title: Migrate a database - Azure Database for PostgreSQL - Single Server
+description: Describes how to extract a PostgreSQL database into a script file and import the data into the target database from that file.
+Last updated: 09/22/2020
+# Migrate your PostgreSQL database using export and import
+
+You can use [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) to extract a PostgreSQL database into a script file and [psql](https://www.postgresql.org/docs/current/static/app-psql.html) to import the data into the target database from that file.
+
+## Prerequisites
+To step through this how-to guide, you need:
+- An [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md) with firewall rules to allow access, and a database in it.
+- The [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) command-line utility installed.
+- The [psql](https://www.postgresql.org/docs/current/static/app-psql.html) command-line utility installed.
+
+Follow these steps to export and import your PostgreSQL database.
+
+## Create a script file using pg_dump that contains the data to be loaded
+To export your existing PostgreSQL database on-premises or in a VM to a SQL script file, run the following command in your existing environment:
+
+```bash
+pg_dump --host=<host> --username=<name> --dbname=<database name> --file=<database>.sql
+```
+For example, if you have a local server and a database called **testdb** in it:
+```bash
+pg_dump --host=localhost --username=masterlogin --dbname=testdb --file=testdb.sql
+```
+
+## Import the data on target Azure Database for PostgreSQL
+You can use the psql command line and the `--dbname` parameter (`-d`) to import the data into the Azure Database for PostgreSQL server and load it from the SQL file.
+
+```bash
+psql --file=<database>.sql --host=<server name> --port=5432 --username=<user> --dbname=<target database name>
+```
+This example uses the psql utility and the script file **testdb.sql** from the previous step to import data into the database **mypgsqldb** on the target server **mydemoserver.postgres.database.azure.com**.
+
+For **Single Server**, use this command:
+```bash
+psql --file=testdb.sql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin@mydemoserver --dbname=mypgsqldb
+```
+
+For **Flexible Server**, use this command:
+```bash
+psql --file=testdb.sql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin --dbname=mypgsqldb
+```
+
+## Next steps
+- To migrate a PostgreSQL database using dump and restore, see [Migrate your PostgreSQL database using dump and restore](how-to-migrate-using-dump-and-restore.md).
+- For more information about migrating databases to Azure Database for PostgreSQL, see the [Database Migration Guide](/data-migration/).
postgresql How To Move Regions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-move-regions-portal.md
+
+ Title: Move Azure regions - Azure portal - Azure Database for PostgreSQL - Single Server
+description: Move an Azure Database for PostgreSQL server from one Azure region to another using a read replica and the Azure portal.
+Last updated: 06/29/2020
+#Customer intent: As an Azure service administrator, I want to move my service resources to another Azure region.
+
+# Move an Azure Database for PostgreSQL - Single Server to another region by using the Azure portal
+
+There are various scenarios for moving an existing Azure Database for PostgreSQL server from one region to another. For example, you might want to move a production server to another region as part of your disaster recovery planning.
+
+You can use an Azure Database for PostgreSQL [cross-region read replica](concepts-read-replicas.md#cross-region-replication) to complete the move to another region. To do so, first create a read replica in the target region. Next, stop replication to the read replica server to make it a standalone server that accepts both read and write traffic.
+
+> [!NOTE]
+> This article focuses on moving your server to a different region. If you want to move your server to a different resource group or subscription, refer to the [move](../../azure-resource-manager/management/move-resource-group-and-subscription.md) article.
+
+## Prerequisites
+
+- The cross-region read replica feature is only available for Azure Database for PostgreSQL - Single Server in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers.
+
+- Make sure that your Azure Database for PostgreSQL source server is in the Azure region that you want to move from.
+
+## Prepare to move
+
+To prepare the source server for replication using the Azure portal, use the following steps:
+
+1. Sign into the [Azure portal](https://portal.azure.com/).
+1. Select the existing Azure Database for PostgreSQL server that you want to use as the source server. This action opens the **Overview** page.
+1. From the server's menu, select **Replication**. If Azure replication support is set to at least **Replica**, you can create read replicas.
+1. If Azure replication support is not set to at least **Replica**, set it. Select **Save**.
+1. Restart the server to apply the change by selecting **Yes**.
+1. Once the operation is complete, you'll receive two Azure portal notifications: one for the server parameter update, and another for the server restart that follows immediately.
+1. Refresh the Azure portal page to update the Replication toolbar. You can now create read replicas for this server.
+
+To create a cross-region read replica server in the target region using the Azure portal, use the following steps:
+
+1. Select the existing Azure Database for PostgreSQL server that you want to use as the source server.
+1. Select **Replication** from the menu, under **SETTINGS**.
+1. Select **Add Replica**.
+1. Enter a name for the replica server.
+1. Select the location for the replica server. The default location is the same as the primary server's. Verify that you've selected the target location where you want the replica to be deployed.
+1. Select **OK** to confirm creation of the replica. During replica creation, data is copied from the source server to the replica. Creation may take several minutes or more, in proportion to the size of the source server.
+
+>[!NOTE]
+> When you create a replica, it doesn't inherit the firewall rules and VNet service endpoints of the primary server. These rules must be set up independently for the replica.
+
+## Move
+
+> [!IMPORTANT]
+> The standalone server can't be made into a replica again.
+> Before you stop replication on a read replica, ensure the replica has all the data that you require.
+
+To stop replication to the replica from the Azure portal, use the following steps:
+
+1. Once the replica has been created, locate and select your Azure Database for PostgreSQL source server.
+1. Select **Replication** from the menu, under **SETTINGS**.
+1. Select the replica server.
+1. Select **Stop replication**.
+1. Confirm you want to stop replication by clicking **OK**.
+
+## Clean up source server
+
+You may want to delete the source Azure Database for PostgreSQL server. To do so, use the following steps:
+
+1. Once the replica has been created, locate and select your Azure Database for PostgreSQL source server.
+1. In the **Overview** window, select **Delete**.
+1. Type in the name of the source server to confirm that you want to delete it.
+1. Select **Delete**.
+
+## Next steps
+
+In this tutorial, you moved an Azure Database for PostgreSQL server from one region to another by using the Azure portal and then cleaned up the unneeded source resources.
+
+- Learn more about [read replicas](concepts-read-replicas.md)
+- Learn more about [managing read replicas in the Azure portal](how-to-read-replicas-portal.md)
+- Learn more about [business continuity](concepts-business-continuity.md) options
postgresql How To Optimize Autovacuum https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-optimize-autovacuum.md
+
+ Title: Optimize autovacuum - Azure Database for PostgreSQL - Single Server
+description: This article describes how you can optimize autovacuum on an Azure Database for PostgreSQL - Single Server
+Last updated: 07/09/2020
+# Optimize autovacuum on an Azure Database for PostgreSQL - Single Server
+
+This article describes how to effectively optimize autovacuum on an Azure Database for PostgreSQL server.
+
+## Overview of autovacuum
+
+PostgreSQL uses multiversion concurrency control (MVCC) to allow greater database concurrency. Every update results in an insert and a delete, and every delete results in rows being soft-marked for deletion. Soft-marking identifies dead tuples that will be purged later. To carry out these tasks, PostgreSQL runs a vacuum job.
+
+A vacuum job can be triggered manually or automatically. More dead tuples exist when the database experiences heavy update or delete operations. Fewer dead tuples exist when the database is idle. You need to vacuum more frequently when the database load is heavy, which makes running vacuum jobs *manually* inconvenient.
+
+Autovacuum can be configured and benefits from tuning. The default values that PostgreSQL ships with try to ensure the product works on all kinds of devices. These devices include Raspberry Pis. The ideal configuration values depend on the:
+
+- Total resources available, such as SKU and storage size.
+- Resource usage.
+- Individual object characteristics.
+
+## Autovacuum benefits
+
+If you don't vacuum from time to time, the dead tuples that accumulate can result in:
+
+- Data bloat, such as larger databases and tables.
+- Larger suboptimal indexes.
+- Increased I/O.
+
+## Monitor bloat with autovacuum queries
+
+The following sample query is designed to identify the number of dead and live tuples in a table named XYZ:
+
+```sql
+SELECT relname,
+ n_dead_tup,
+ n_live_tup,
+       (n_dead_tup::numeric / NULLIF(n_live_tup, 0)) AS DeadTuplesRatio,
+ last_vacuum,
+ last_autovacuum
+FROM pg_catalog.pg_stat_all_tables
+WHERE relname = 'XYZ'
+ORDER BY n_dead_tup DESC;
+```
+
+## Autovacuum configurations
+
+The configuration parameters that control autovacuum are based on answers to two key questions:
+
+- When should it start?
+- How much should it clean after it starts?
+
+Here are some autovacuum configuration parameters that you can update based on the previous questions, along with some guidance.
+
+Parameter|Description|Default value
+||
+autovacuum_vacuum_threshold|Specifies the minimum number of updated or deleted tuples needed to trigger a vacuum operation in any one table. The default is 50 tuples. Set this parameter only in the postgresql.conf file or on the server command line. To override the setting for individual tables, change the table storage parameters.|50
+autovacuum_vacuum_scale_factor|Specifies a fraction of the table size to add to autovacuum_vacuum_threshold when deciding whether to trigger a vacuum operation. The default is 0.2, which is 20 percent of table size. Set this parameter only in the postgresql.conf file or on the server command line. To override the setting for individual tables, change the table storage parameters.|0.2
+autovacuum_vacuum_cost_limit|Specifies the cost limit value used in automatic vacuum operations. If -1 is specified, which is the default, the regular vacuum_cost_limit value is used. If there's more than one worker, the value is distributed proportionally among the running autovacuum workers. The sum of the limits for each worker doesn't exceed the value of this variable. Set this parameter only in the postgresql.conf file or on the server command line. To override the setting for individual tables, change the table storage parameters.|-1
+autovacuum_vacuum_cost_delay|Specifies the cost delay value used in automatic vacuum operations. If -1 is specified, the regular vacuum_cost_delay value is used. The default value is 20 milliseconds. Set this parameter only in the postgresql.conf file or on the server command line. To override the setting for individual tables, change the table storage parameters.|20 ms
+autovacuum_naptime | Specifies the minimum delay between autovacuum runs on any given database. In each round, the daemon examines the database and issues VACUUM and ANALYZE commands as needed for tables in that database. The delay is measured in seconds. Set this parameter only in the postgresql.conf file or on the server command line.| 15 s
+autovacuum_max_workers | Specifies the maximum number of autovacuum processes, other than the autovacuum launcher, that can run at any one time. The default is three. Set this parameter only at server start.|3
+
+To override the settings for individual tables, change the table storage parameters.
+
+## Autovacuum cost
+
+Here are the "costs" of running a vacuum operation:
+
+- The data pages that the vacuum runs on are locked.
+- Compute and memory are used when a vacuum job is running.
+
+As a result, don't run vacuum jobs either too frequently or too infrequently. A vacuum job needs to adapt to the workload. Test all autovacuum parameter changes because of the tradeoffs of each one.
+
+## Autovacuum start trigger
+
+Autovacuum is triggered when the number of dead tuples exceeds `autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples`. Here, `reltuples` is the estimated number of live tuples in the table, taken from `pg_class`; it grows with the table rather than being a constant.
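+
+To see how close each table is to triggering autovacuum, you can compare its dead tuples against that formula. This sketch assumes the default values of 50 and 0.2 and placeholder connection details; substitute your configured values:
+
+```bash
+psql "host=mydemoserver.postgres.database.azure.com dbname=mypgsqldb user=mylogin@mydemoserver" -c "
+SELECT s.relname,
+       s.n_dead_tup,
+       50 + 0.2 * c.reltuples AS autovacuum_trigger_point
+FROM pg_stat_all_tables s
+JOIN pg_class c ON c.oid = s.relid
+ORDER BY s.n_dead_tup DESC
+LIMIT 10;"
+```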
+
+Cleanup from autovacuum must keep up with the database load. Otherwise, you might run out of storage and experience a general slowdown in queries. Amortized over time, the rate at which a vacuum operation cleans up dead tuples should equal the rate at which dead tuples are created.
+
+Databases with many updates and deletes have more dead tuples and need more space. Generally, databases with many updates and deletes benefit from low values of autovacuum_vacuum_scale_factor and autovacuum_vacuum_threshold. The low values prevent prolonged accumulation of dead tuples. You can use higher values for both parameters with smaller databases because the need for vacuuming is less urgent. Frequent vacuuming comes at the cost of compute and memory.
+
+The default scale factor of 20 percent works well on tables with a low percentage of dead tuples. It doesn't work well on tables with a high percentage of dead tuples. For example, on a 20-GB table, this scale factor translates to 4 GB of dead tuples. On a 1-TB table, it's 200 GB of dead tuples.
+
+With PostgreSQL, you can set these parameters at the table level or instance level. Today, you can set these parameters at the table level only in Azure Database for PostgreSQL.
+
+## Estimate the cost of autovacuum
+
+Running autovacuum is "costly," and there are parameters for controlling the runtime of vacuum operations. The following parameters help estimate the cost of running vacuum:
+
+- vacuum_cost_page_hit = 1
+- vacuum_cost_page_miss = 10
+- vacuum_cost_page_dirty = 20
+
+The vacuum process reads physical pages and checks for dead tuples. Every page in shared_buffers is considered to have a cost of 1 (vacuum_cost_page_hit). All other pages are considered to have a cost of 20 (vacuum_cost_page_dirty), if dead tuples exist, or 10 (vacuum_cost_page_miss), if no dead tuples exist. The vacuum operation stops when the process exceeds the autovacuum_vacuum_cost_limit.
+
+After the limit is reached, the process sleeps for the duration specified by the autovacuum_vacuum_cost_delay parameter before it starts again. If the limit isn't reached, autovacuum starts after the value specified by the autovacuum_naptime parameter.
+
+In summary, the autovacuum_vacuum_cost_delay and autovacuum_vacuum_cost_limit parameters control how much data cleanup is allowed per unit of time. Note that the default values are too low for most pricing tiers. The optimal values for these parameters are pricing tier-dependent and should be configured accordingly.
+
+The autovacuum_max_workers parameter determines the maximum number of autovacuum processes that can run simultaneously.
+
+With PostgreSQL, you can set these parameters at the table level or instance level. Today, you can set these parameters at the table level only in Azure Database for PostgreSQL.
+
+## Optimize autovacuum per table
+
+You can configure all the previous configuration parameters per table. Here's an example:
+
+```sql
+ALTER TABLE t SET (autovacuum_vacuum_threshold = 1000);
+ALTER TABLE t SET (autovacuum_vacuum_scale_factor = 0.1);
+ALTER TABLE t SET (autovacuum_vacuum_cost_limit = 1000);
+ALTER TABLE t SET (autovacuum_vacuum_cost_delay = 10);
+```
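+
+To verify that the overrides took effect, you can inspect `pg_class.reloptions`; a quick sketch with placeholder connection details:
+
+```bash
+psql "host=mydemoserver.postgres.database.azure.com dbname=mypgsqldb user=mylogin@mydemoserver" \
+    -c "SELECT relname, reloptions FROM pg_class WHERE relname = 't';"
+```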
+
+Autovacuum is a per-table synchronous process. The larger percentage of dead tuples that a table has, the higher the "cost" to autovacuum. You can split tables that have a high rate of updates and deletes into multiple tables. Splitting tables helps to parallelize autovacuum and reduce the "cost" to complete autovacuum on one table. You also can increase the number of parallel autovacuum workers to ensure that workers are liberally scheduled.
+
+## Next steps
+
+To learn more about how to use and tune autovacuum, see the following PostgreSQL documentation:
+
+- [Chapter 18, Server configuration](https://www.postgresql.org/docs/9.5/static/runtime-config-autovacuum.html)
+- [Chapter 24, Routine database maintenance tasks](https://www.postgresql.org/docs/9.6/static/routine-vacuuming.html)
postgresql How To Optimize Bulk Inserts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-optimize-bulk-inserts.md
+
+ Title: Optimize bulk inserts - Azure Database for PostgreSQL - Single Server
+description: This article describes how you can optimize bulk insert operations on an Azure Database for PostgreSQL - Single Server.
+Last updated: 5/6/2019
+# Optimize bulk inserts and use transient data on an Azure Database for PostgreSQL - Single Server
+This article describes how you can optimize bulk insert operations and use transient data on an Azure Database for PostgreSQL server.
+
+## Use unlogged tables
+If you have workload operations that involve transient data or that insert large datasets in bulk, consider using unlogged tables.
+
+Unlogged tables are a PostgreSQL feature that can be used effectively to optimize bulk inserts. PostgreSQL uses write-ahead logging (WAL), which provides atomicity and durability by default. Atomicity, consistency, isolation, and durability make up the ACID properties.
+
+Inserting into an unlogged table means that PostgreSQL does inserts without writing into the transaction log, which itself is an I/O operation. As a result, these tables are considerably faster than ordinary tables.
+
+Use the following options to create an unlogged table:
+- Create a new unlogged table by using the syntax `CREATE UNLOGGED TABLE <tableName>`.
+- Convert an existing logged table to an unlogged table by using the syntax `ALTER TABLE <tableName> SET UNLOGGED`.
+
+To reverse the process, use the syntax `ALTER TABLE <tableName> SET LOGGED`.
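+
+Here's a minimal sketch of the bulk-load pattern; the server, table, and CSV file names are illustrative:
+
+```bash
+psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=mypgsqldb user=mylogin@mydemoserver" <<'SQL'
+-- Stage the load in an unlogged table to skip transaction log writes.
+CREATE UNLOGGED TABLE staging_events (id bigint, payload text);
+\copy staging_events FROM 'events.csv' WITH (FORMAT csv)
+-- Convert to logged once the load finishes so the data is durable.
+ALTER TABLE staging_events SET LOGGED;
+SQL
+```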
+
+## Unlogged table tradeoff
+Unlogged tables aren't crash-safe. An unlogged table is automatically truncated after a crash or an unclean shutdown. The contents of an unlogged table also aren't replicated to standby servers. Any indexes created on an unlogged table are automatically unlogged as well. After the insert operation completes, convert the table to logged so that the insert is durable.
+
+Some customer workloads have experienced approximately a 15 percent to 20 percent performance improvement when unlogged tables were used.
+
+## Next steps
+Review your workload for uses of transient data and large bulk inserts. See the following PostgreSQL documentation:
+
+- [Create Table SQL commands](https://www.postgresql.org/docs/current/static/sql-createtable.html)
postgresql How To Optimize Query Stats Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-optimize-query-stats-collection.md
+
+ Title: Optimize query stats collection - Azure Database for PostgreSQL - Single Server
+description: This article describes how you can optimize query stats collection on an Azure Database for PostgreSQL - Single Server
+Last updated: 5/6/2019
+# Optimize query statistics collection on an Azure Database for PostgreSQL - Single Server
+This article describes how to optimize query statistics collection on an Azure Database for PostgreSQL server.
+
+## Use pg_stat_statements
+**pg_stat_statements** is a PostgreSQL extension that's enabled by default in Azure Database for PostgreSQL. The extension provides a means to track execution statistics for all SQL statements executed by a server. This module hooks into every query execution and comes with a non-trivial performance cost. Enabling **pg_stat_statements** forces query text writes to files on disk.
+
+If you have unique queries with long query text or you don't actively monitor **pg_stat_statements**, disable **pg_stat_statements** for best performance. To do so, change the setting to `pg_stat_statements.track = NONE`.
+
+Some customer workloads have seen up to a 50 percent performance improvement when **pg_stat_statements** is disabled. The tradeoff you make when you disable pg_stat_statements is the inability to troubleshoot performance issues.
+
+To set `pg_stat_statements.track = NONE`:
+
+- In the Azure portal, go to the [PostgreSQL resource management page and select the server parameters blade](how-to-configure-server-parameters-using-portal.md).
+
+ :::image type="content" source="./media/how-to-optimize-query-stats-collection/postgresql-stats-statements-portal.png" alt-text="PostgreSQL server parameter blade":::
+
+- Use the [Azure CLI](how-to-configure-server-parameters-using-cli.md): `az postgres server configuration set --name pg_stat_statements.track --resource-group myresourcegroup --server mydemoserver --value NONE`.
+
+## Use the Query Store
+The [Query Store](concepts-query-store.md) feature in Azure Database for PostgreSQL provides a more effective method to track query statistics. We recommend this feature as an alternative to using *pg_stat_statements*.
+
+## Next steps
+Consider setting `pg_stat_statements.track = NONE` in the [Azure portal](how-to-configure-server-parameters-using-portal.md) or by using the [Azure CLI](how-to-configure-server-parameters-using-cli.md).
+
+For more information, see:
+- [Query Store usage scenarios](concepts-query-store-scenarios.md)
+- [Query Store best practices](concepts-query-store-best-practices.md)
postgresql How To Optimize Query Time With Toast Table Storage Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-optimize-query-time-with-toast-table-storage-strategy.md
+
+ Title: Optimize query time by using the TOAST table storage strategy in Azure Database for PostgreSQL - Single Server
+description: This article describes how to optimize query time with the TOAST table storage strategy on an Azure Database for PostgreSQL - Single Server.
+Last updated: 5/6/2019
+# Optimize query time with the TOAST table storage strategy
+This article describes how to optimize query times by using The Oversized-Attribute Storage Technique (TOAST) table storage strategies.
+
+## TOAST table storage strategies
+Four different strategies can be used to store TOAST-able columns on disk. They represent various combinations of compression and out-of-line storage. The strategy can be set at the data type level and at the column level.
+- **Plain** prevents either compression or out-of-line storage. It disables the use of single-byte headers for varlena types. Plain is the only possible strategy for columns of data types that can't use TOAST.
+- **Extended** allows both compression and out-of-line storage. Extended is the default for most data types that can use TOAST. Compression is attempted first. Out-of-line storage is attempted if the row is still too large.
+- **External** allows out-of-line storage but not compression. Use of External makes substring operations on wide text and bytea columns faster. This speed comes with the penalty of increased storage space. These operations are optimized to fetch only the required parts of the out-of-line value when it's not compressed.
+- **Main** allows compression but not out-of-line storage. Out-of-line storage is still performed for such columns, but only as a last resort. It occurs when there's no other way to make the row small enough to fit on a page.
+
+## Use TOAST table storage strategies
+If your queries access data types that can use TOAST, consider using the Main strategy instead of the default Extended option to reduce query times. Main doesn't rule out out-of-line storage. If your queries don't access data types that can use TOAST, it might be beneficial to keep the Extended option. A larger portion of the rows of the main table fit in the shared buffer cache, which helps performance.
+
+If you have a workload that uses a schema with wide tables and high character counts, consider using PostgreSQL TOAST tables. An example customer table had more than 350 columns, several of which spanned 255 characters. After it was converted to the TOAST table Main strategy, the benchmark query time dropped from 4,203 seconds to 467 seconds, an 89 percent improvement.
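+
+To switch a column's strategy, use `ALTER TABLE ... ALTER COLUMN ... SET STORAGE`. In this hedged sketch the table and column names are illustrative; `attstorage` reports the strategy per column:
+
+```bash
+psql "host=mydemoserver.postgres.database.azure.com dbname=mypgsqldb user=mylogin@mydemoserver" <<'SQL'
+-- Change the strategy for a wide text column from Extended to Main.
+ALTER TABLE customer ALTER COLUMN notes SET STORAGE MAIN;
+
+-- Confirm per-column strategies (p=plain, e=external, m=main, x=extended).
+SELECT attname, attstorage
+FROM pg_attribute
+WHERE attrelid = 'customer'::regclass AND attnum > 0;
+SQL
+```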
+
+## Next steps
+Review your workload for the previous characteristics.
+
+Review the following PostgreSQL documentation:
+- [Chapter 68, Database physical storage](https://www.postgresql.org/docs/current/storage-toast.html)
postgresql How To Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-read-replicas-cli.md
+
+ Title: Manage read replicas - Azure CLI, REST API - Azure Database for PostgreSQL - Single Server
+description: Learn how to manage read replicas in Azure Database for PostgreSQL - Single Server from the Azure CLI and REST API
+Last updated: 12/17/2020
+# Create and manage read replicas from the Azure CLI, REST API
+
+In this article, you learn how to create and manage read replicas in Azure Database for PostgreSQL by using the Azure CLI and REST API. To learn more about read replicas, see the [overview](concepts-read-replicas.md).
+
+## Azure replication support
+[Read replicas](concepts-read-replicas.md) and [logical decoding](concepts-logical.md) both depend on the Postgres write-ahead log (WAL) for information. These two features need different levels of logging from Postgres. Logical decoding needs a higher level of logging than read replicas.
+
+To configure the right level of logging, use the Azure replication support parameter. Azure replication support has three setting options:
+
+* **Off** - Puts the least information in the WAL. This setting is not available on most Azure Database for PostgreSQL servers.
+* **Replica** - More verbose than **Off**. This is the minimum level of logging needed for [read replicas](concepts-read-replicas.md) to work. This setting is the default on most servers.
+* **Logical** - More verbose than **Replica**. This is the minimum level of logging for logical decoding to work. Read replicas also work at this setting.
++
+> [!NOTE]
+> When you deploy read replicas for persistently write-intensive primary workloads, the replication lag can continue to grow and might never catch up with the primary. This can also increase storage usage at the primary, because WAL files aren't deleted until they're received at the replica.
+
+## Azure CLI
+You can create and manage read replicas using the Azure CLI.
+
+### Prerequisites
+
+- [Install Azure CLI 2.0](/cli/azure/install-azure-cli)
+- An [Azure Database for PostgreSQL server](quickstart-create-server-up-azure-cli.md) to be the primary server.
++
+### Prepare the primary server
+
+1. Check the primary server's `azure.replication_support` value. It should be at least REPLICA for read replicas to work.
+
+ ```azurecli-interactive
+ az postgres server configuration show --resource-group myresourcegroup --server-name mydemoserver --name azure.replication_support
+ ```
+
+2. If `azure.replication_support` is not at least REPLICA, set it.
+
+ ```azurecli-interactive
+ az postgres server configuration set --resource-group myresourcegroup --server-name mydemoserver --name azure.replication_support --value REPLICA
+ ```
+
+3. Restart the server to apply the change.
+
+ ```azurecli-interactive
+ az postgres server restart --name mydemoserver --resource-group myresourcegroup
+ ```
+
+### Create a read replica
+
+The [az postgres server replica create](/cli/azure/postgres/server/replica#az-postgres-server-replica-create) command requires the following parameters:
+
+| Setting | Example value | Description |
+| | | |
+| resource-group | myresourcegroup | The resource group where the replica server will be created. |
+| name | mydemoserver-replica | The name of the new replica server that is created. |
+| source-server | mydemoserver | The name or resource ID of the existing primary server to replicate from. Use the resource ID if you want the replica's and the primary server's resource groups to be different. |
+
+In the CLI example below, the replica is created in the same region as the primary server.
+
+```azurecli-interactive
+az postgres server replica create --name mydemoserver-replica --source-server mydemoserver --resource-group myresourcegroup
+```
+
+To create a cross-region read replica, use the `--location` parameter. The CLI example below creates the replica in West US.
+
+```azurecli-interactive
+az postgres server replica create --name mydemoserver-replica --source-server mydemoserver --resource-group myresourcegroup --location westus
+```
+
+> [!NOTE]
+> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
+
+If you haven't set the `azure.replication_support` parameter to **REPLICA** on a General Purpose or Memory Optimized primary server and restarted the server, you receive an error. Complete those two steps before you create a replica.
+
+> [!IMPORTANT]
+> Review the [considerations section of the Read Replica overview](concepts-read-replicas.md#considerations).
+>
+> Before a primary server setting is updated to a new value, update the replica setting to an equal or greater value. This action helps the replica keep up with any changes made to the primary server.
+
+### List replicas
+You can view the list of replicas of a primary server by using the [az postgres server replica list](/cli/azure/postgres/server/replica#az-postgres-server-replica-list) command.
+
+```azurecli-interactive
+az postgres server replica list --server-name mydemoserver --resource-group myresourcegroup
+```
+
+### Stop replication to a replica server
+You can stop replication between a primary server and a read replica by using the [az postgres server replica stop](/cli/azure/postgres/server/replica#az-postgres-server-replica-stop) command.
+
+After you stop replication between a primary server and a read replica, it can't be undone. The read replica becomes a standalone server that supports both reads and writes. The standalone server can't be made into a replica again.
+
+```azurecli-interactive
+az postgres server replica stop --name mydemoserver-replica --resource-group myresourcegroup
+```
+
+### Delete a primary or replica server
+To delete a primary or replica server, you use the [az postgres server delete](/cli/azure/postgres/server#az-postgres-server-delete) command.
+
+When you delete a primary server, replication to all read replicas is stopped. The read replicas become standalone servers that now support both reads and writes.
+
+```azurecli-interactive
+az postgres server delete --name myserver --resource-group myresourcegroup
+```
+
+## REST API
+You can create and manage read replicas using the [Azure REST API](/rest/api/azure/).
+
+### Prepare the primary server
+
+1. Check the primary server's `azure.replication_support` value. It should be at least REPLICA for read replicas to work.
+
+ ```http
+ GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/servers/{masterServerName}/configurations/azure.replication_support?api-version=2017-12-01
+ ```
+
+2. If `azure.replication_support` is not at least REPLICA, set it.
+
+ ```http
+ PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/servers/{masterServerName}/configurations/azure.replication_support?api-version=2017-12-01
+ ```
+
+ ```JSON
+ {
+ "properties": {
+ "value": "replica"
+ }
+ }
+ ```
+
+3. [Restart the server](/rest/api/postgresql/singleserver/servers/restart) to apply the change.
+
+ ```http
+ POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/servers/{masterServerName}/restart?api-version=2017-12-01
+ ```
+
+### Create a read replica
+You can create a read replica by using the [create API](/rest/api/postgresql/singleserver/servers/create):
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/servers/{replicaName}?api-version=2017-12-01
+```
+
+```json
+{
+ "location": "southeastasia",
+ "properties": {
+ "createMode": "Replica",
+ "sourceServerId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/servers/{masterServerName}"
+ }
+}
+```
+
+> [!NOTE]
+> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
+
+If you haven't set the `azure.replication_support` parameter to **REPLICA** on a General Purpose or Memory Optimized primary server and restarted the server, you receive an error. Complete those two steps before you create a replica.
+
+A replica is created by using the same compute and storage settings as the primary server. After a replica is created, several settings can be changed independently of the primary server: compute generation, vCores, storage, and backup retention period. The pricing tier can also be changed independently, except to or from the Basic tier.
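+
+As a hedged sketch, scaling a replica's vCores can go through the same update API used later in this section to stop replication; the `sku` payload shape follows the single server REST reference, and the values here are placeholders:
+
+```http
+PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/servers/{replicaServerName}?api-version=2017-12-01
+```
+
+```json
+{
+    "sku": {
+        "name": "GP_Gen5_8",
+        "tier": "GeneralPurpose",
+        "capacity": 8,
+        "family": "Gen5"
+    }
+}
+```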
++
+> [!IMPORTANT]
+> Before a primary server setting is updated to a new value, update the replica setting to an equal or greater value. This action helps the replica keep up with any changes made to the primary server.
+
+### List replicas
+You can view the list of replicas of a primary server using the [replica list API](/rest/api/postgresql/singleserver/replicas/listbyserver):
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/servers/{masterServerName}/Replicas?api-version=2017-12-01
+```
+
+### Stop replication to a replica server
+You can stop replication between a primary server and a read replica by using the [update API](/rest/api/postgresql/singleserver/servers/update).
+
+After you stop replication between a primary server and a read replica, it can't be undone. The read replica becomes a standalone server that supports both reads and writes. The standalone server can't be made into a replica again.
+
+```http
+PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/servers/{replicaServerName}?api-version=2017-12-01
+```
+
+```json
+{
+ "properties": {
+ "replicationRole":"None"
+ }
+}
+```
+
+### Delete a primary or replica server
+To delete a primary or replica server, you use the [delete API](/rest/api/postgresql/singleserver/servers/delete):
+
+When you delete a primary server, replication to all read replicas is stopped. The read replicas become standalone servers that now support both reads and writes.
+
+```http
+DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/servers/{serverName}?api-version=2017-12-01
+```
+
+## Next steps
+* Learn more about [read replicas in Azure Database for PostgreSQL](concepts-read-replicas.md).
+* Learn how to [create and manage read replicas in the Azure portal](how-to-read-replicas-portal.md).
postgresql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-read-replicas-portal.md
+
+ Title: Manage read replicas - Azure portal - Azure Database for PostgreSQL - Single Server
+description: Learn how to manage read replicas in Azure Database for PostgreSQL - Single Server from the Azure portal.
+Last updated: 11/05/2020
+# Create and manage read replicas in Azure Database for PostgreSQL - Single Server from the Azure portal
+
+In this article, you learn how to create and manage read replicas in Azure Database for PostgreSQL from the Azure portal. To learn more about read replicas, see the [overview](concepts-read-replicas.md).
++
+## Prerequisites
+An [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md) to be the primary server.
+
+## Azure replication support
+
+[Read replicas](concepts-read-replicas.md) and [logical decoding](concepts-logical.md) both depend on the Postgres write-ahead log (WAL) for information. These two features need different levels of logging from Postgres. Logical decoding needs a higher level of logging than read replicas.
+
+To configure the right level of logging, use the Azure replication support parameter. Azure replication support has three setting options:
+
+* **Off** - Puts the least information in the WAL. This setting is not available on most Azure Database for PostgreSQL servers.
+* **Replica** - More verbose than **Off**. This is the minimum level of logging needed for [read replicas](concepts-read-replicas.md) to work. This setting is the default on most servers.
+* **Logical** - More verbose than **Replica**. This is the minimum level of logging for logical decoding to work. Read replicas also work at this setting.
++
+> [!NOTE]
+> When you deploy read replicas for persistently write-intensive primary workloads, the replication lag can continue to grow and might never catch up with the primary. This can also increase storage usage at the primary, because WAL files aren't deleted until they're received at the replica.
+
+## Prepare the primary server
+
+1. In the Azure portal, select an existing Azure Database for PostgreSQL server to use as the primary server.
+
+2. From the server's menu, select **Replication**. If Azure replication support is set to at least **Replica**, you can create read replicas.
+
+3. If Azure replication support is not set to at least **Replica**, set it. Select **Save**.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/set-replica-save.png" alt-text="Azure Database for PostgreSQL - Replication - Set replica and save":::
+
+4. Restart the server to apply the change by selecting **Yes**.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/confirm-restart.png" alt-text="Azure Database for PostgreSQL - Replication - Confirm restart":::
+
+5. Once the operation is complete, you'll receive two Azure portal notifications: one for updating the server parameter, and another for the server restart that follows immediately.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/success-notifications.png" alt-text="Success notifications":::
+
+6. Refresh the Azure portal page to update the Replication toolbar. You can now create read replicas for this server.
+
+
+## Create a read replica
+To create a read replica, follow these steps:
+
+1. Select an existing Azure Database for PostgreSQL server to use as the primary server.
+
+2. On the server sidebar, under **SETTINGS**, select **Replication**.
+
+3. Select **Add Replica**.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/add-replica.png" alt-text="Add a replica":::
+
+4. Enter a name for the read replica.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/name-replica.png" alt-text="Name the replica":::
+
+5. Select a location for the replica. The default location is the same as the primary server's.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/location-replica.png" alt-text="Select a location":::
+
+ > [!NOTE]
+ > To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
+
+6. Select **OK** to confirm the creation of the replica.
+
+After the read replica is created, it can be viewed from the **Replication** window.
+
+
+
+> [!IMPORTANT]
+> Review the [considerations section of the Read Replica overview](concepts-read-replicas.md#considerations).
+>
+> Before a primary server setting is updated to a new value, update the replica setting to an equal or greater value. This action helps the replica keep up with any changes made to the primary server.
+
+## Stop replication
+You can stop replication between a primary server and a read replica.
+
+> [!IMPORTANT]
+> After you stop replication between a primary server and a read replica, it can't be undone. The read replica becomes a standalone server that supports both reads and writes. The standalone server can't be made into a replica again.
+
+To stop replication between a primary server and a read replica from the Azure portal, follow these steps:
+
+1. In the Azure portal, select your primary Azure Database for PostgreSQL server.
+
+2. On the server menu, under **SETTINGS**, select **Replication**.
+
+3. Select the replica server for which to stop replication.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/select-replica.png" alt-text="Select the replica":::
+
+4. Select **Stop replication**.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/select-stop-replication.png" alt-text="Select stop replication":::
+
+5. Select **OK** to stop replication.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/confirm-stop-replication.png" alt-text="Confirm to stop replication":::
+
+
+## Delete a primary server
+To delete a primary server, you follow the same steps as for deleting a standalone Azure Database for PostgreSQL server.
+
+> [!IMPORTANT]
+> When you delete a primary server, replication to all read replicas is stopped. The read replicas become standalone servers that now support both reads and writes.
+
+To delete a server from the Azure portal, follow these steps:
+
+1. In the Azure portal, select your primary Azure Database for PostgreSQL server.
+
+2. Open the **Overview** page for the server. Select **Delete**.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/delete-server.png" alt-text="On the server Overview page, select to delete the primary server":::
+
+3. Enter the name of the primary server to delete. Select **Delete** to confirm deletion of the primary server.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/confirm-delete.png" alt-text="Confirm to delete the primary server":::
+
+
+## Delete a replica
+You can delete a read replica in much the same way as you delete a primary server.
+
+- In the Azure portal, open the **Overview** page for the read replica. Select **Delete**.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/delete-replica.png" alt-text="On the replica Overview page, select to delete the replica":::
+
+You can also delete the read replica from the **Replication** window by following these steps:
+
+1. In the Azure portal, select your primary Azure Database for PostgreSQL server.
+
+2. On the server menu, under **SETTINGS**, select **Replication**.
+
+3. Select the read replica to delete.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/select-replica.png" alt-text="Select the replica to delete":::
+
+4. Select **Delete replica**.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/select-delete-replica.png" alt-text="Select delete replica":::
+
+5. Enter the name of the replica to delete. Select **Delete** to confirm deletion of the replica.
+
+   :::image type="content" source="./media/how-to-read-replicas-portal/confirm-delete-replica.png" alt-text="Confirm to delete the replica":::
+
+
+## Monitor a replica
+Two metrics are available to monitor read replicas.
+
+### Max Lag Across Replicas metric
+The **Max Lag Across Replicas** metric shows the lag in bytes between the primary server and the most-lagging replica.
+
+1. In the Azure portal, select the primary Azure Database for PostgreSQL server.
+
+2. Select **Metrics**. In the **Metrics** window, select **Max Lag Across Replicas**.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/select-max-lag.png" alt-text="Monitor the max lag across replicas":::
+
+3. For your **Aggregation**, select **Max**.
++
+### Replica Lag metric
+The **Replica Lag** metric shows the time since the last replayed transaction on a replica. If there are no transactions occurring on your primary server, the metric reflects this time lag.
+
+1. In the Azure portal, select the Azure Database for PostgreSQL read replica.
+
+2. Select **Metrics**. In the **Metrics** window, select **Replica Lag**.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/select-replica-lag.png" alt-text="Monitor the replica lag":::
+
+3. For your **Aggregation**, select **Max**.
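+
+Outside the portal, the same data can be pulled programmatically. Here's a sketch using the Azure CLI, assuming a hypothetical replica resource; `pg_replica_log_delay_in_seconds` is the metric ID behind the **Replica Lag** portal name, and `pg_replica_log_delay_in_bytes` backs **Max Lag Across Replicas** on the primary:
+
+```azurecli-interactive
+az monitor metrics list --resource "/subscriptions/{subscriptionId}/resourceGroups/myresourcegroup/providers/Microsoft.DBforPostgreSQL/servers/mydemoserver-replica" --metric pg_replica_log_delay_in_seconds --aggregation Maximum
+```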
+
+## Next steps
+* Learn more about [read replicas in Azure Database for PostgreSQL](concepts-read-replicas.md).
+* Learn how to [create and manage read replicas in the Azure CLI and REST API](how-to-read-replicas-cli.md).
postgresql How To Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-read-replicas-powershell.md
+
+ Title: Manage read replicas - Azure PowerShell - Azure Database for PostgreSQL
+description: Learn how to set up and manage read replicas in Azure Database for PostgreSQL using PowerShell.
+Last updated: 06/08/2020
+# How to create and manage read replicas in Azure Database for PostgreSQL using PowerShell
+
+In this article, you learn how to create and manage read replicas in the Azure Database for PostgreSQL
+service using PowerShell. To learn more about read replicas, see the
+[overview](concepts-read-replicas.md).
+
+## Azure PowerShell
+
+You can create and manage read replicas using PowerShell.
+
+## Prerequisites
+
+To complete this how-to guide, you need:
+
+- The [Az PowerShell module](/powershell/azure/install-az-ps) installed
+ locally or [Azure Cloud Shell](https://shell.azure.com/) in the browser
+- An [Azure Database for PostgreSQL server](quickstart-create-postgresql-server-database-using-azure-powershell.md)
+
+> [!IMPORTANT]
+> While the Az.PostgreSql PowerShell module is in preview, you must install it separately from the Az
+> PowerShell module using the following command: `Install-Module -Name Az.PostgreSql -AllowPrerelease`.
+> Once the Az.PostgreSql PowerShell module is generally available, it becomes part of future Az
+> PowerShell module releases and available natively from within Azure Cloud Shell.
+
+If you choose to use PowerShell locally, connect to your Azure account using the
+[Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
++
+> [!IMPORTANT]
+> The read replica feature is only available for Azure Database for PostgreSQL servers in the General
+> Purpose or Memory Optimized pricing tiers. Ensure the primary server is in one of these pricing
+> tiers.
+
+### Create a read replica
+
+A read replica server can be created using the following command:
+
+```azurepowershell-interactive
+Get-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
+ New-AzPostgreSqlReplica -Name mydemoreplicaserver -ResourceGroupName myresourcegroup
+```
+
+The `New-AzPostgreSqlReplica` command requires the following parameters:
+
+| Setting | Example value | Description  |
+| | | |
+| ResourceGroupName |  myresourcegroup |  The resource group where the replica server is created.  |
+| Name | mydemoreplicaserver | The name of the new replica server that is created. |
+
+To create a cross-region read replica, use the **Location** parameter. The following example creates
+a replica in the **West US** region.
+
+```azurepowershell-interactive
+Get-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
+ New-AzPostgreSqlReplica -Name mydemoreplicaserver -ResourceGroupName myresourcegroup -Location westus
+```
+
+To learn more about which regions you can create a replica in, visit the
+[read replica concepts article](concepts-read-replicas.md).
+
+By default, read replicas are created with the same server configuration as the primary unless the
+**Sku** parameter is specified.
+
+> [!NOTE]
+> We recommend keeping the replica server's configuration at values equal to or greater than the
+> primary's to ensure the replica can keep up with the primary server.
+
+### List replicas for a primary server
+
+To view all replicas for a given primary server, run the following command:
+
+```azurepowershell-interactive
+Get-AzPostgreSQLReplica -ResourceGroupName myresourcegroup -ServerName mydemoserver
+```
+
+The `Get-AzPostgreSQLReplica` command requires the following parameters:
+
+| Setting | Example value | Description  |
+| | | |
+| ResourceGroupName |  myresourcegroup |  The resource group of the primary server whose replicas you want to list.  |
+| ServerName | mydemoserver | The name or ID of the primary server. |
+
+### Stop a replica server
+
+Stopping a read replica server promotes the read replica to an independent server. To stop replication, run the `Update-AzPostgreSqlServer` cmdlet with the **ReplicationRole** value set to `None`.
+
+```azurepowershell-interactive
+Update-AzPostgreSqlServer -Name mydemoreplicaserver -ResourceGroupName myresourcegroup -ReplicationRole None
+```
+
+### Delete a replica server
+
+Deleting a read replica server can be done by running the `Remove-AzPostgreSqlServer` cmdlet.
+
+```azurepowershell-interactive
+Remove-AzPostgreSqlServer -Name mydemoreplicaserver -ResourceGroupName myresourcegroup
+```
+
+### Delete a primary server
+
+> [!IMPORTANT]
+> Deleting a primary server stops replication to all replica servers and deletes the primary server
+> itself. Replica servers become standalone servers that now support both reads and writes.
+
+To delete a primary server, you can run the `Remove-AzPostgreSqlServer` cmdlet.
+
+```azurepowershell-interactive
+Remove-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Restart Azure Database for PostgreSQL server using PowerShell](how-to-restart-server-powershell.md)
postgresql How To Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restart-server-cli.md
+
+ Title: Restart server - Azure CLI - Azure Database for PostgreSQL - Single Server
+description: This article describes how you can restart an Azure Database for PostgreSQL - Single Server using the Azure CLI
+Last updated: 5/6/2019
+# Restart Azure Database for PostgreSQL - Single Server using the Azure CLI
+This topic describes how you can restart an Azure Database for PostgreSQL server. You may need to restart your server for maintenance reasons, which causes a short outage as the server performs the operation.
+
+The server restart will be blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores.
+
+> [!NOTE]
+> The time required to complete a restart depends on the PostgreSQL recovery process. To decrease the restart time, we recommend you minimize the amount of activity occurring on the server prior to the restart. You may also want to increase the checkpoint frequency and tune checkpoint-related parameter values, including `max_wal_size`. It's also recommended to run the `CHECKPOINT` command prior to restarting the server.
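+
+For example, from a client session connected to the server just before you issue the restart:
+
+```sql
+-- Force a checkpoint so that less WAL needs to be replayed during recovery after the restart.
+CHECKPOINT;
+```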
+
+## Prerequisites
+To complete this how-to guide:
+- Create an [Azure Database for PostgreSQL server](quickstart-create-server-up-azure-cli.md).
++
+- This article requires version 2.0 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed.
+
+## Restart the server
+
+Restart the server with the following command:
+
+```azurecli-interactive
+az postgres server restart --name mydemoserver --resource-group myresourcegroup
+```
+
+## Next steps
+
+Learn about [how to set parameters in Azure Database for PostgreSQL](how-to-configure-server-parameters-using-cli.md)
postgresql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restart-server-portal.md
+
+ Title: Restart server - Azure portal - Azure Database for PostgreSQL - Single Server
+description: This article describes how you can restart an Azure Database for PostgreSQL - Single Server using the Azure portal.
+Last updated: 12/20/2020
+# Restart Azure Database for PostgreSQL - Single Server using the Azure portal
+This topic describes how you can restart an Azure Database for PostgreSQL server. You may need to restart your server for maintenance reasons, which causes a short outage as the server performs the operation.
+
+The server restart will be blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores.
+
+> [!NOTE]
+> The time required to complete a restart depends on the PostgreSQL recovery process. To decrease the restart time, we recommend you minimize the amount of activity occurring on the server prior to the restart. You may also want to increase the checkpoint frequency and tune checkpoint-related parameter values, including `max_wal_size`. It's also recommended to run the `CHECKPOINT` command prior to restarting the server to enable a faster recovery; if you don't, recovery may take longer.
+
+## Prerequisites
+To complete this how-to guide, you need:
+- An [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md)
+
+## Perform server restart
+
+The following steps restart the PostgreSQL server:
+
+1. In the [Azure portal](https://portal.azure.com/), select your Azure Database for PostgreSQL server.
+
+2. In the toolbar of the server's **Overview** page, click **Restart**.
+
+ :::image type="content" source="./media/how-to-restart-server-portal/2-server.png" alt-text="Azure Database for PostgreSQL - Overview - Restart button":::
+
+3. Click **Yes** to confirm restarting the server.
+
+ :::image type="content" source="./media/how-to-restart-server-portal/3-restart-confirm.png" alt-text="Azure Database for PostgreSQL - Restart confirm":::
+
+4. Observe that the server status changes to "Restarting".
+
+ :::image type="content" source="./media/how-to-restart-server-portal/4-restart-status.png" alt-text="Azure Database for PostgreSQL - Restart status":::
+
+5. Confirm server restart is successful.
+
+ :::image type="content" source="./media/how-to-restart-server-portal/5-restart-success.png" alt-text="Azure Database for PostgreSQL - Restart success":::
+
+## Next steps
+
+Learn about [how to set parameters in Azure Database for PostgreSQL](how-to-configure-server-parameters-using-portal.md)
postgresql How To Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restart-server-powershell.md
+
+ Title: Restart server - Azure PowerShell - Azure Database for PostgreSQL
+description: This article describes how you can restart an Azure Database for PostgreSQL server using PowerShell.
+Last updated: 06/08/2020
+# Restart Azure Database for PostgreSQL server using PowerShell
+
+This topic describes how you can restart an Azure Database for PostgreSQL server. You may need to restart
+your server for maintenance reasons, which causes a short outage during the operation.
+
+The server restart is blocked if the service is busy. For example, the service may be processing a
+previously requested operation such as scaling vCores.
+
+> [!NOTE]
+> The time required to complete a restart depends on the PostgreSQL recovery process. To decrease the restart time, we recommend you minimize the amount of activity occurring on the server prior to the restart. You may also want to increase the checkpoint frequency and tune checkpoint-related parameter values, including `max_wal_size`. It's also recommended to run the `CHECKPOINT` command prior to restarting the server.
+
+## Prerequisites
+
+To complete this how-to guide, you need:
+
+- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+ [Azure Cloud Shell](https://shell.azure.com/) in the browser
+- An [Azure Database for PostgreSQL server](quickstart-create-postgresql-server-database-using-azure-powershell.md)
+
+> [!IMPORTANT]
+> While the Az.PostgreSql PowerShell module is in preview, you must install it separately from the Az
+> PowerShell module using the following command: `Install-Module -Name Az.PostgreSql -AllowPrerelease`.
+> Once the Az.PostgreSql PowerShell module is generally available, it becomes part of future Az
+> PowerShell module releases and available natively from within Azure Cloud Shell.
+
+If you choose to use PowerShell locally, connect to your Azure account using the
+[Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
++
+## Restart the server
+
+Restart the server with the following command:
+
+```azurepowershell-interactive
+Restart-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Create an Azure Database for PostgreSQL server using PowerShell](quickstart-create-postgresql-server-database-using-azure-powershell.md)
postgresql How To Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restore-dropped-server.md
+
+ Title: Restore a dropped Azure Database for PostgreSQL server
+description: This article describes how to restore a dropped server in Azure Database for PostgreSQL using the Azure portal.
+Last updated: 04/26/2021
+# Restore a dropped Azure Database for PostgreSQL server
+
+When a server is dropped, the database server backup is retained for up to five days in the service. The backup can be accessed and restored only from the Azure subscription where the server originally resided. Follow the recommended steps below to recover a dropped PostgreSQL server within five days of deletion. These steps work only if the server's backup is still available and hasn't been deleted from the system.
+
+## Prerequisites
+To restore a dropped Azure Database for PostgreSQL server, you need the following:
+- The Azure subscription name hosting the original server
+- The location where the server was created
+
+## Steps to restore
+
+1. Browse to the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_ActivityLog/ActivityLogBlade). Select the **Azure Monitor** service, then select **Activity Log**.
+
+2. In the Activity Log, select **Add filter** as shown, and set the following filters:
+
+ - **Subscription** = Your Subscription hosting the deleted server
+ - **Resource Type** = Azure Database for PostgreSQL servers (Microsoft.DBforPostgreSQL/servers)
+ - **Operation** = Delete PostgreSQL Server (Microsoft.DBforPostgreSQL/servers/delete)
+
+ ![Activity log filtered for delete PostgreSQL server operation](./media/how-to-restore-dropped-server/activity-log-azure.png)
+
+3. Select the **Delete PostgreSQL Server** event, then select the **JSON** tab. Copy the `resourceId` and `submissionTimestamp` attributes from the JSON output. The resourceId is in the following format: `/subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/TargetResourceGroup/providers/Microsoft.DBforPostgreSQL/servers/deletedserver`.
++
+ 1. Browse to the PostgreSQL [Create Server REST API Page](/rest/api/postgresql/singleserver/servers/create) and select the **Try It** tab highlighted in green. Sign in with your Azure account.
+
+   2. Provide the **resourceGroupName**, **serverName** (deleted server name), and **subscriptionId** properties, based on the `resourceId` attribute value captured in the preceding step 3. The api-version property is pre-populated and can be left as-is, as shown in the following image.
+
+ ![Create server using REST API](./media/how-to-restore-dropped-server/create-server-from-rest-api-azure.png)
+
+   3. Scroll down to the **Request Body** section and paste the following, replacing the "Dropped Server Location" (for example, CentralUS or EastUS), "submissionTimestamp", and "resourceId" values. For "restorePointInTime", specify the "submissionTimestamp" value minus **15 minutes** to ensure the command doesn't error out.
+
+ ```json
+ {
+ "location": "Dropped Server Location",
+ "properties":
+ {
+ "restorePointInTime": "submissionTimestamp - 15 minutes",
+ "createMode": "PointInTimeRestore",
+ "sourceServerId": "resourceId"
+ }
+ }
+ ```
+
+   For example, if the current time is 2020-11-02T23:59:59.0000000Z, we recommend a restore point in time at least 15 minutes earlier, such as 2020-11-02T23:44:59.0000000Z. See the following example, and ensure that you change the three parameters (location, restorePointInTime, and sourceServerId) to match your restore requirements.
+
+ ```json
+ {
+ "location": "EastUS",
+ "properties":
+ {
+ "restorePointInTime": "2020-11-02T23:44:59.0000000Z",
+ "createMode": "PointInTimeRestore",
+ "sourceServerId": "/subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/SourceResourceGroup/providers/Microsoft.DBforPostgreSQL/servers/sourceserver"
+ }
+ }
+ ```
+
+ > [!Important]
+ > There is a time limit of five days after the server was dropped. After five days, an error is expected since the backup file cannot be found.
+
+4. If you see Response Code 201 or 202, the restore request is successfully submitted.
+
+   The server creation can take time depending on the database size and compute resources provisioned on the original server. The restore status can be monitored from the Activity log by filtering for:
+ - **Subscription** = Your Subscription
+ - **Resource Type** = Azure Database for PostgreSQL servers (Microsoft.DBforPostgreSQL/servers)
+ - **Operation** = Update PostgreSQL Server Create
+
+## Next steps
+- If you are trying to restore a server within five days, and still receive an error after accurately following the steps discussed earlier, open a support incident for assistance. If you are trying to restore a dropped server after five days, an error is expected since the backup file cannot be found. Do not open a support ticket in this scenario. The support team cannot provide any assistance if the backup is deleted from the system.
+- To prevent accidental deletion of servers, we highly recommend using [Resource Locks](https://techcommunity.microsoft.com/t5/azure-database-for-PostgreSQL/preventing-the-disaster-of-accidental-deletion-for-your-PostgreSQL/ba-p/825222).
postgresql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restore-server-cli.md
+
+ Title: Backup and restore - Azure CLI - Azure Database for PostgreSQL - Single Server
+description: Learn how to set backup configurations and restore a server in Azure Database for PostgreSQL - Single Server by using the Azure CLI.
+ms.devlang: azurecli
+Last updated: 10/25/2019
+# How to back up and restore a server in Azure Database for PostgreSQL - Single Server using the Azure CLI
+
+Azure Database for PostgreSQL servers are backed up periodically to enable restore features. Using this feature, you may restore the server and all its databases to an earlier point in time, on a new server.
+
+## Prerequisites
+To complete this how-to guide:
+
+- You need an [Azure Database for PostgreSQL server and database](quickstart-create-server-database-azure-cli.md).
++
+   - This article requires version 2.0 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed.
+
+## Set backup configuration
+
+At server creation, you choose between configuring your server for either locally redundant backups or geographically redundant backups.
+
+> [!NOTE]
+> After a server is created, the kind of redundancy it has, geographically redundant vs locally redundant, can't be switched.
+>
+
+While creating a server via the `az postgres server create` command, the `--geo-redundant-backup` parameter decides your backup redundancy option. If `Enabled`, geo-redundant backups are taken. If `Disabled`, locally redundant backups are taken.
+
+The backup retention period is set by the parameter `--backup-retention-days`.
+
+For more information about setting these values during create, see the [Azure Database for PostgreSQL server CLI Quickstart](quickstart-create-server-database-azure-cli.md).
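+
+As a sketch, a create command that sets both options might look like the following; the server name, resource group, credentials, and SKU are placeholders:
+
+```azurecli-interactive
+az postgres server create --resource-group myresourcegroup --name mydemoserver --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2 --backup-retention 10 --geo-redundant-backup Enabled
+```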
+
+The backup retention period of a server can be changed as follows:
+
+```azurecli-interactive
+az postgres server update --name mydemoserver --resource-group myresourcegroup --backup-retention 10
+```
+
+The preceding example changes the backup retention period of mydemoserver to 10 days.
+
+The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. Point-in-time restore is described further in the next section.
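+
+To check how far back you can currently restore, you can query the server's `earliestRestoreDate` property, for example:
+
+```azurecli-interactive
+az postgres server show --resource-group myresourcegroup --name mydemoserver --query earliestRestoreDate
+```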
+
+## Server point-in-time restore
+You can restore the server to a previous point in time. The restored data is copied to a new server, and the existing server is left as is. For example, if a table is accidentally dropped at noon today, you can restore to the time just before noon. Then, you can retrieve the missing table and data from the restored copy of the server.
+
+To restore the server, use the Azure CLI [az postgres server restore](/cli/azure/postgres/server) command.
+
+### Run the restore command
+
+To restore the server, at the Azure CLI command prompt, enter the following command:
+
+```azurecli-interactive
+az postgres server restore --resource-group myresourcegroup --name mydemoserver-restored --restore-point-in-time 2018-03-13T13:59:00Z --source-server mydemoserver
+```
+
+The `az postgres server restore` command requires the following parameters:
+
+| Setting | Suggested value | Description  |
+| | | |
+| resource-group |  myresourcegroup |  The resource group where the source server exists.  |
+| name | mydemoserver-restored | The name of the new server that is created by the restore command. |
+| restore-point-in-time | 2018-03-13T13:59:00Z | Select a point in time to restore to. This date and time must be within the source server's backup retention period. Use the ISO8601 date and time format. For example, you can use your own local time zone, such as `2018-03-13T05:59:00-08:00`. You can also use the UTC Zulu format, for example, `2018-03-13T13:59:00Z`. |
+| source-server | mydemoserver | The name or ID of the source server to restore from. |
+
+When you restore a server to an earlier point in time, a new server is created. The original server and its databases from the specified point in time are copied to the new server.
+
+The location and pricing tier values for the restored server remain the same as the original server.
+
+After the restore process finishes, locate the new server and verify that the data is restored as expected. The new server has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's **Overview** page.
+
+The new server created during a restore does not have the firewall rules or VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server.
+
+## Geo restore
+If you configured your server for geographically redundant backups, a new server can be created from the backup of that existing server. This new server can be created in any region that Azure Database for PostgreSQL is available.
+
+To create a server using a geo redundant backup, use the Azure CLI `az postgres server georestore` command.
+
+> [!NOTE]
+> When a server is first created it may not be immediately available for geo restore. It may take a few hours for the necessary metadata to be populated.
+>
+
+To geo restore the server, at the Azure CLI command prompt, enter the following command:
+
+```azurecli-interactive
+az postgres server georestore --resource-group myresourcegroup --name mydemoserver-georestored --source-server mydemoserver --location eastus --sku-name GP_Gen4_8
+```
+This command creates a new server called *mydemoserver-georestored* in East US that will belong to *myresourcegroup*. It is a General Purpose, Gen 4 server with 8 vCores. The server is created from the geo-redundant backup of *mydemoserver*, which is also in the resource group *myresourcegroup*.
+
+If you want to create the new server in a different resource group from the existing server, then in the `--source-server` parameter you would qualify the server name as in the following example:
+
+```azurecli-interactive
+az postgres server georestore --resource-group newresourcegroup --name mydemoserver-georestored --source-server "/subscriptions/$<subscription ID>/resourceGroups/$<resource group ID>/providers/Microsoft.DBforPostgreSQL/servers/mydemoserver" --location eastus --sku-name GP_Gen4_8
+```
+
+The `az postgres server georestore` command requires the following parameters:
+
+| Setting | Suggested value | Description  |
+| | | |
+|resource-group| myresourcegroup | The name of the resource group the new server will belong to.|
+|name | mydemoserver-georestored | The name of the new server. |
+|source-server | mydemoserver | The name of the existing server whose geo redundant backups are used. |
+|location | eastus | The location of the new server. |
+|sku-name| GP_Gen4_8 | This parameter sets the pricing tier, compute generation, and number of vCores of the new server. GP_Gen4_8 maps to a General Purpose, Gen 4 server with 8 vCores.|
+
+A new server created by geo restore inherits the same storage size and pricing tier as the source server. These values can't be changed during creation. After the new server is created, its storage size can be scaled up.
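+
+For example, a hypothetical follow-up command that scales the restored server's storage to 100 GB (the value is in megabytes):
+
+```azurecli-interactive
+az postgres server update --resource-group newresourcegroup --name mydemoserver-georestored --storage-size 102400
+```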
+
+After the restore process finishes, locate the new server and verify that the data is restored as expected. The new server has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's **Overview** page.
+
+The new server created during a restore does not have the firewall rules or VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server.
+
+## Next steps
+- Learn more about the service's [backups](concepts-backup.md)
+- Learn about [replicas](concepts-read-replicas.md)
+- Learn more about [business continuity](concepts-business-continuity.md) options
postgresql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restore-server-portal.md
+
+ Title: Backup and restore - Azure portal - Azure Database for PostgreSQL - Single Server
+description: This article describes how to restore a server in Azure Database for PostgreSQL - Single Server using the Azure portal.
+Last updated: 6/30/2020
+# How to back up and restore a server in Azure Database for PostgreSQL - Single Server using the Azure portal
+
+## Backup happens automatically
+Azure Database for PostgreSQL servers are backed up periodically to enable restore features. Using this feature, you may restore the server and all its databases to an earlier point in time, on a new server.
+
+## Set backup configuration
+
+You make the choice between configuring your server for either locally redundant backups or geographically redundant backups at server creation, in the **Pricing Tier** window.
+
+> [!NOTE]
+> After a server is created, the kind of redundancy it has, geographically redundant vs locally redundant, can't be switched.
+>
+
+While creating a server through the Azure portal, the **Pricing Tier** window is where you select either **Locally Redundant** or **Geographically Redundant** backups for your server. This window is also where you select the **Backup Retention Period** - how long (in number of days) you want the server backups stored for.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/pricing-tier.png" alt-text="Pricing Tier - Choose Backup Redundancy":::
+
+For more information about setting these values during create, see the [Azure Database for PostgreSQL server quickstart](quickstart-create-server-database-portal.md).
+
+The backup retention period of a server can be changed through the following steps:
+1. Sign into the [Azure portal](https://portal.azure.com/).
+2. Select your Azure Database for PostgreSQL server. This action opens the **Overview** page.
+3. Select **Pricing Tier** from the menu, under **SETTINGS**. Using the slider you can change the **Backup Retention Period** to your preference between 7 and 35 days.
+In the screenshot below it has been increased to 34 days.
+
+4. Click **OK** to confirm the change.
+
+The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. Point-in-time restore is described further in the following section.
+
+## Point-in-time restore
+Azure Database for PostgreSQL allows you to restore the server back to a point in time, into a new copy of the server. You can use this new server to recover your data, or have your client applications point to this new server.
+
+For example, if a table was accidentally dropped at noon today, you could restore to the time just before noon and retrieve the missing table and data from that new copy of the server. Point-in-time restore is at the server level, not at the database level.
+
+The following steps restore the sample server to a point-in-time:
+1. In the Azure portal, select your Azure Database for PostgreSQL server.
+
+2. In the toolbar of the server's **Overview** page, select **Restore**.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/2-server.png" alt-text="Azure Database for PostgreSQL - Overview - Restore button":::
+
+3. Fill out the Restore form with the required information:
+
+ :::image type="content" source="./media/how-to-restore-server-portal/3-restore.png" alt-text="Azure Database for PostgreSQL - Restore information":::
+ - **Restore point**: Select the point-in-time you want to restore to.
+ - **Target server**: Provide a name for the new server.
+   - **Location**: You cannot select the region. By default, it is the same as the source server's.
+   - **Pricing tier**: You cannot change these parameters when doing a point-in-time restore. They are the same as the source server's.
+
+4. Click **OK** to restore the server to the selected point in time.
+
+5. Once the restore finishes, locate the new server that is created to verify the data was restored as expected.
+
+The new server created by point-in-time restore has the same server admin login name and password that was valid for the existing server at the point in time you chose. You can change the password from the new server's **Overview** page.
+
+The new server created during a restore does not have the firewall rules or VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server.
+
+If your source PostgreSQL server is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption-postgresql.md) for additional considerations.
+
+## Geo restore
+
+If you configured your server for geographically redundant backups, a new server can be created from the backup of that existing server. This new server can be created in any region that Azure Database for PostgreSQL is available.
+
+1. Select the **Create a resource** button (+) in the upper-left corner of the portal. Select **Databases** > **Azure Database for PostgreSQL**.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/1-navigate-to-postgres.png" alt-text="Navigate to Azure Database for PostgreSQL.":::
+
+2. Select the **Single server** deployment option.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/2-select-deployment-option.png" alt-text="Select Azure Database for PostgreSQL - Single server deployment option.":::
+
+3. Provide the subscription, resource group, and name of the new server.
+
+4. Select **Backup** as the **Data source**. This action loads a dropdown that provides a list of servers that have geo redundant backups enabled.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/4-geo-restore.png" alt-text="Select data source.":::
+
+ > [!NOTE]
+ > When a server is first created it may not be immediately available for geo restore. It may take a few hours for the necessary metadata to be populated.
+ >
+
+5. Select the **Backup** dropdown.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/5-geo-restore-backup.png" alt-text="Select backup dropdown.":::
+
+6. Select the source server to restore from.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/6-select-backup.png" alt-text="Select backup.":::
+
+7. The server will default to values for number of **vCores**, **Backup Retention Period**, **Backup Redundancy Option**, **Engine version**, and **Admin credentials**. Select **Continue**.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/7-accept-backup.png" alt-text="Continue with backup.":::
+
+8. Fill out the rest of the form with your preferences. You can select any **Location**.
+
+ After selecting the location, you can select **Configure server** to update the **Compute Generation** (if available in the region you have chosen), number of **vCores**, **Backup Retention Period**, and **Backup Redundancy Option**. Changing **Pricing Tier** (Basic, General Purpose, or Memory Optimized) or **Storage** size during restore is not supported.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/8-create.png" alt-text="Fill form.":::
+
+9. Select **Review + create** to review your selections.
+
+10. Select **Create** to provision the server. This operation may take a few minutes.
+
+The new server created by geo restore has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's **Overview** page.
+
+The new server created during a restore does not have the firewall rules or VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server.
+
+If your source PostgreSQL server is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption-postgresql.md) for additional considerations.
++
+## Next steps
+- Learn more about the service's [backups](concepts-backup.md).
+- Learn more about [business continuity](concepts-business-continuity.md) options.
postgresql How To Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restore-server-powershell.md
+
+ Title: Backup and restore - Azure PowerShell - Azure Database for PostgreSQL
+description: Learn how to back up and restore a server in Azure Database for PostgreSQL by using Azure PowerShell.
+ms.devlang: azurepowershell
+Last updated: 06/08/2020
+# How to back up and restore an Azure Database for PostgreSQL server using PowerShell
+
+Azure Database for PostgreSQL servers are backed up periodically to enable restore features. Using
+this feature, you may restore the server and all its databases to an earlier point in time, on a new
+server.
+
+## Prerequisites
+
+To complete this how-to guide, you need:
+
+- The [Az PowerShell module](/powershell/azure/install-az-ps) installed
+ locally or [Azure Cloud Shell](https://shell.azure.com/) in the browser
+- An [Azure Database for PostgreSQL server](quickstart-create-postgresql-server-database-using-azure-powershell.md)
+
+> [!IMPORTANT]
+> While the Az.PostgreSql PowerShell module is in preview, you must install it separately from the Az
+> PowerShell module using the following command: `Install-Module -Name Az.PostgreSql -AllowPrerelease`.
+> Once the Az.PostgreSql PowerShell module is generally available, it becomes part of future Az
+> PowerShell module releases and available natively from within Azure Cloud Shell.
+
+If you choose to use PowerShell locally, connect to your Azure account using the
+[Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
++
+## Set backup configuration
+
+At server creation, you make the choice between configuring your server for either locally redundant
+or geographically redundant backups.
+
+> [!NOTE]
+> After a server is created, the kind of redundancy it has, geographically redundant vs locally
+> redundant, can't be changed.
+
+While creating a server via the `New-AzPostgreSqlServer` command, the **GeoRedundantBackup**
+parameter decides your backup redundancy option. If **Enabled**, geo-redundant backups are taken. If
+**Disabled**, locally redundant backups are taken.
+
+The backup retention period is set by the **BackupRetentionDay** parameter.
+
+For more information about setting these values during server creation, see
+[Create an Azure Database for PostgreSQL server using PowerShell](quickstart-create-postgresql-server-database-using-azure-powershell.md).
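+
+As a sketch, a create command that sets both parameters might look like the following; the server name, resource group, credentials, and SKU are placeholders:
+
+```azurepowershell-interactive
+$Password = Read-Host -Prompt 'Enter the server admin password' -AsSecureString
+New-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -Location westus -AdministratorUsername myadmin -AdministratorLoginPassword $Password -Sku GP_Gen5_2 -GeoRedundantBackup Enabled -BackupRetentionDay 10
+```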
+
+The backup retention period of a server can be changed as follows:
+
+```azurepowershell-interactive
+Update-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -BackupRetentionDay 10
+```
+
+The preceding example changes the backup retention period of mydemoserver to 10 days.
+
+The backup retention period governs how far back a point-in-time restore can be retrieved, since
+it's based on available backups. Point-in-time restore is described further in the next section.
+
+## Server point-in-time restore
+
+You can restore the server to a previous point-in-time. The restored data is copied to a new server,
+and the existing server is left unchanged. For example, if a table is accidentally dropped, you can
+restore to the time just before the drop occurred. Then, you can retrieve the missing table and data from
+the restored copy of the server.
+
+To restore the server, use the `Restore-AzPostgreSqlServer` PowerShell cmdlet.
+
+### Run the restore command
+
+To restore the server, run the following example from PowerShell.
+
+```azurepowershell-interactive
+$restorePointInTime = (Get-Date).AddMinutes(-10)
+Get-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
+ Restore-AzPostgreSqlServer -Name mydemoserver-restored -ResourceGroupName myresourcegroup -RestorePointInTime $restorePointInTime -UsePointInTimeRestore
+```
+
+The **PointInTimeRestore** parameter set of the `Restore-AzPostgreSqlServer` cmdlet requires the
+following parameters:
+
+| Setting | Suggested value | Description |
+| --- | --- | --- |
+| ResourceGroupName | myresourcegroup | The resource group where the source server exists. |
+| Name | mydemoserver-restored | The name of the new server that is created by the restore command. |
+| RestorePointInTime | 2020-03-13T13:59:00Z | Select a point in time to restore. This date and time must be within the source server's backup retention period. Use the ISO 8601 date and time format. For example, you can use your own local time zone, such as **2020-03-13T05:59:00-08:00**, or the UTC Zulu format, such as **2020-03-13T13:59:00Z**. |
+| UsePointInTimeRestore | `<SwitchParameter>` | Use point-in-time mode to restore. |
+
+When you restore a server to an earlier point-in-time, a new server is created. The databases on the
+original server, as of the specified point-in-time, are copied to the new server.
+
+The location and pricing tier values for the restored server remain the same as the original server.
+
+After the restore process finishes, locate the new server and verify that the data is restored as
+expected. The new server has the same server admin login name and password that was valid for the
+existing server at the time the restore was started. The password can be changed from the new
+server's **Overview** page.
+
+The new server created during a restore does not have the VNet service endpoints that existed on the
+original server. These rules must be set up separately for the new server. Firewall rules from the
+original server are restored.
+
+## Geo restore
+
+If you configured your server for geographically redundant backups, a new server can be created from
+the backup of the existing server. This new server can be created in any region where Azure Database
+for PostgreSQL is available.
+
+To create a server using a geo redundant backup, use the `Restore-AzPostgreSqlServer` command with the
+**UseGeoRestore** parameter.
+
+> [!NOTE]
+> When a server is first created, it may not be immediately available for geo restore. It may take a
+> few hours for the necessary metadata to be populated.
+
+To geo restore the server, run the following example from PowerShell:
+
+```azurepowershell-interactive
+Get-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
+ Restore-AzPostgreSqlServer -Name mydemoserver-georestored -ResourceGroupName myresourcegroup -Location eastus -Sku GP_Gen5_8 -UseGeoRestore
+```
+
+This example creates a new server called **mydemoserver-georestored** in the East US region that
+belongs to **myresourcegroup**. It is a General Purpose, Gen 5 server with 8 vCores. The server is
+created from the geo-redundant backup of **mydemoserver**, also in the resource group
+**myresourcegroup**.
+
+To create the new server in a different resource group from the existing server, specify the new
+resource group name using the **ResourceGroupName** parameter as shown in the following example:
+
+```azurepowershell-interactive
+Get-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
+ Restore-AzPostgreSqlServer -Name mydemoserver-georestored -ResourceGroupName newresourcegroup -Location eastus -Sku GP_Gen5_8 -UseGeoRestore
+```
+
+The **GeoRestore** parameter set of the `Restore-AzPostgreSqlServer` cmdlet requires the following
+parameters:
+
+| Setting | Suggested value | Description |
+| --- | --- | --- |
+| ResourceGroupName | myresourcegroup | The name of the resource group the new server belongs to. |
+| Name | mydemoserver-georestored | The name of the new server. |
+| Location | eastus | The location of the new server. |
+| UseGeoRestore | `<SwitchParameter>` | Use geo mode to restore. |
+
+When creating a new server using geo restore, it inherits the same storage size and pricing tier as
+the source server unless the **Sku** parameter is specified.
+
+After the restore process finishes, locate the new server and verify that the data is restored as
+expected. The new server has the same server admin login name and password that was valid for the
+existing server at the time the restore was started. The password can be changed from the new
+server's **Overview** page.
+
+The new server created during a restore does not have the VNet service endpoints that existed on the
+original server. These rules must be set up separately for this new server. Firewall rules from the
+original server are restored.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to generate an Azure Database for PostgreSQL connection string with PowerShell](how-to-connection-string-powershell.md)
postgresql How To Tls Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-tls-configurations.md
+
+ Title: TLS configuration - Azure portal - Azure Database for PostgreSQL - Single server
+description: Learn how to set TLS configuration using Azure portal for your Azure Database for PostgreSQL Single server
++++++ Last updated : 06/02/2020++
+# Configuring TLS settings in Azure Database for PostgreSQL - Single server using Azure portal
+
+This article describes how you can configure an Azure Database for PostgreSQL server to enforce a minimum TLS version for connections and deny all connections that use a TLS version lower than the configured minimum, thereby enhancing network security.
+
+You can enforce the TLS version used for connecting to your Azure Database for PostgreSQL server. Customers now have a choice to set the minimum TLS version for their database server. For example, setting the minimum TLS version to TLS 1.0 means your server will allow connections from clients using TLS 1.0, 1.1, and 1.2. Instead, setting the minimum TLS version to 1.2 means you only allow connections from clients using TLS 1.2, and all connections using TLS 1.0 and TLS 1.1 will be rejected.
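+
+To confirm which TLS version a client actually negotiates, you can check the startup banner that psql prints when it connects over SSL/TLS. A quick check, using placeholder server and user names:
+
+```bash
+# On a successful TLS connection, psql prints the negotiated protocol, for
+# example: SSL connection (protocol: TLSv1.2, cipher: ..., bits: 256)
+psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin@mydemoserver sslmode=require"
+```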
+
+## Prerequisites
+
+To complete this how-to guide, you need:
+
+* An [Azure Database for PostgreSQL](quickstart-create-server-database-portal.md)
+
+## Set TLS configurations for Azure Database for PostgreSQL - Single server
+
+Follow these steps to set PostgreSQL minimum TLS version:
+
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for PostgreSQL.
+
+1. On the Azure Database for PostgreSQL - Single server page, under **Settings**, click **Connection security** to open the connection security configuration page.
+
+1. In **Minimum TLS version**, select **1.2** to deny connections with TLS version less than TLS 1.2 for your PostgreSQL Single server.
+
+ :::image type="content" source="./media/how-to-tls-configurations/setting-tls-value.png" alt-text="Azure Database for PostgreSQL Single - server TLS configuration":::
+
+1. Click **Save** to save the changes.
+
+1. A notification will confirm that the connection security setting was successfully saved.
+
+ :::image type="content" source="./media/how-to-tls-configurations/setting-tls-value-success.png" alt-text="Azure Database for PostgreSQL - Single server TLS configuration success":::
+
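+You can also set the minimum TLS version from the command line. A minimal sketch, assuming the Azure CLI `az postgres server update` command and its `--minimal-tls-version` parameter are available in your CLI version; the server and resource group names are placeholders:
+
+```bash
+# Require TLS 1.2 or higher for connections to mydemoserver.
+az postgres server update --resource-group myresourcegroup --name mydemoserver --minimal-tls-version TLS1_2
+```
+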
+## Next steps
+
+Learn about [how to create alerts on metrics](how-to-alert-on-metric.md)
postgresql How To Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-troubleshoot-common-connection-issues.md
+
+ Title: Troubleshoot connections - Azure Database for PostgreSQL - Single Server
+description: Learn how to troubleshoot connection issues to Azure Database for PostgreSQL - Single Server.
+++++ Last updated : 5/6/2019++
+# Troubleshoot connection issues to Azure Database for PostgreSQL - Single Server
+
+Connection problems may be caused by various things, including:
+
+* Firewall settings
+* Connection time-out
+* Incorrect login information
+* Maximum limit reached on some Azure Database for PostgreSQL resources
+* Issues with the infrastructure of the service
+* Maintenance being performed in the service
+* The compute allocation of the server is changed by scaling the number of vCores or moving to a different service tier
+
+Generally, connection issues to Azure Database for PostgreSQL can be classified as follows:
+
+* Transient errors (short-lived or intermittent)
+* Persistent or non-transient errors (errors that regularly recur)
+
+## Troubleshoot transient errors
+
+Transient errors occur when maintenance is performed, the system encounters an error with the hardware or software, or you change the vCores or service tier of your server. The Azure Database for PostgreSQL service has built-in high availability and is designed to mitigate these types of problems automatically. However, your application loses its connection to the server for a short period, typically less than 60 seconds. Some events can occasionally take longer to mitigate, such as when a large transaction causes a long-running recovery.
+
+### Steps to resolve transient connectivity issues
+
+1. Check the [Microsoft Azure Service Dashboard](https://azure.microsoft.com/status) for any known outages that occurred during the time in which the errors were reported by the application.
+2. Applications that connect to a cloud service such as Azure Database for PostgreSQL should expect transient errors and implement retry logic to handle these errors instead of surfacing them as application errors to users (a minimal retry sketch follows this list). Review [Handling of transient connectivity errors for Azure Database for PostgreSQL](concepts-connectivity.md) for best practices and design guidelines for handling transient errors.
+3. As a server approaches its resource limits, errors can seem like a transient connectivity issue. See [Limitations in Azure Database for PostgreSQL](concepts-limits.md).
+4. If connectivity problems continue, if the error persists for more than 60 seconds, or if the error occurs multiple times in a given day, file an Azure support request by selecting **Get Support** on the [Azure Support](https://azure.microsoft.com/support/options) site.
+
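+In client code, retry logic typically wraps the connection attempt with a bounded number of attempts and a delay. A minimal sketch in Bash, using placeholder server, database, and user names (psql reads the password from the `PGPASSWORD` environment variable or prompts for it); production applications should use the retry facilities of their own driver or framework:
+
+```bash
+# Attempt a connection up to 5 times with a 10-second backoff before
+# treating the error as persistent.
+for attempt in 1 2 3 4 5; do
+  if psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin@mydemoserver sslmode=require" -c "SELECT 1;"; then
+    echo "Connection succeeded on attempt ${attempt}."
+    break
+  fi
+  echo "Attempt ${attempt} failed; retrying in 10 seconds..."
+  sleep 10
+done
+```
+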
+## Troubleshoot persistent errors
+
+If the application persistently fails to connect to Azure Database for PostgreSQL, it usually indicates an issue with one of the following:
+
+* Server firewall configuration: Make sure that the Azure Database for PostgreSQL server firewall is configured to allow connections from your client, including proxy servers and gateways.
+* Client firewall configuration: The firewall on your client must allow connections to your database server. The IP address and port of the server that you can't connect to must be allowed, and in some firewalls, so must the application name, such as PostgreSQL.
+* User error: You might have mistyped connection parameters, such as the server name in the connection string or a missing *\@servername* suffix in the user name.
+* If you see the error _Server isn't configured to allow ipv6 connections_, note that the Basic tier doesn't support VNet service endpoints. You have to remove the Microsoft.Sql endpoint from the subnet that is attempting to connect to the Basic server.
+* If you see the connection error _sslmode value "***" invalid when SSL support is not compiled in_, it means your PostgreSQL client doesn't support SSL. Most probably, the client-side libpq hasn't been compiled with the "--with-openssl" flag. Try connecting with a PostgreSQL client that has SSL support.
+
+### Steps to resolve persistent connectivity issues
+
+1. Set up [firewall rules](how-to-manage-firewall-using-portal.md) to allow the client IP address (a command-line sketch follows this list). For temporary testing purposes only, set up a firewall rule using 0.0.0.0 as the starting IP address and using 255.255.255.255 as the ending IP address. This will open the server to all IP addresses. If this resolves your connectivity issue, remove this rule and create a firewall rule for an appropriately limited IP address or address range.
+2. On all firewalls between the client and the internet, make sure that port 5432 is open for outbound connections.
+3. Verify your connection string and other connection settings.
+4. Check the service health in the dashboard. If you think there's a regional outage, see [Overview of business continuity with Azure Database for PostgreSQL](concepts-business-continuity.md) for steps to recover to a new region.
+
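+The firewall and port checks above can also be run from the command line. A sketch assuming the Azure CLI single server command group and `nc` (netcat) are available; server names and IP addresses are placeholders:
+
+```bash
+# Allow a single client IP address (replace 203.0.113.5 with your client IP).
+az postgres server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver --name AllowMyClientIP --start-ip-address 203.0.113.5 --end-ip-address 203.0.113.5
+
+# Verify that port 5432 is reachable from the client.
+nc -vz mydemoserver.postgres.database.azure.com 5432
+```
+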
+## Next steps
+
+* [Handling of transient connectivity errors for Azure Database for PostgreSQL](concepts-connectivity.md)
postgresql How To Upgrade Using Dump And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-upgrade-using-dump-and-restore.md
+
+ Title: Upgrade using dump and restore - Azure Database for PostgreSQL
+description: Describes offline upgrade methods using dump and restore databases to migrate to a higher version Azure Database for PostgreSQL.
+++++ Last updated : 11/30/2021++
+# Upgrade your PostgreSQL database using dump and restore
+
+>[!NOTE]
+> The concepts explained in this documentation are applicable to both Azure Database for PostgreSQL - Single Server and Azure Database for PostgreSQL - Flexible Server.
+
+You can upgrade your PostgreSQL server deployed in Azure Database for PostgreSQL by migrating your databases to a higher major version server using the following methods.
+* **Offline** method using PostgreSQL [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html), which incurs downtime for migrating the data. This document addresses this method of upgrade/migration.
+* **Online** method using [Database Migration Service](../../dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md) (DMS). This method provides a reduced-downtime migration, keeps the target database in sync with the source, and lets you choose when to cut over. However, there are a few prerequisites and restrictions to address before using DMS. For details, see the [DMS documentation](../../dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md).
+
+ The following table provides some recommendations based on database sizes and scenarios.
+
+| **Database/Scenario** | **Dump/restore (Offline)** | **DMS (Online)** |
+| --- | :---: | :---: |
+| You have a small database and can afford downtime to upgrade | X | |
+| Small databases (< 10 GB) | X | X |
+| Small-medium DBs (10 GB – 100 GB) | X | X |
+| Large databases (> 100 GB) | | X |
+| Can afford downtime to upgrade (irrespective of the database size) | X | |
+| Can address DMS [pre-requisites](../../dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md#prerequisites), including a reboot? | | X |
+| Can avoid DDLs and unlogged tables during the upgrade process? | | X |
+
+This guide provides a few offline migration methodologies and examples to show how you can migrate from your source server to a target server that runs a higher version of PostgreSQL.
+
+> [!NOTE]
+> PostgreSQL dump and restore can be performed in many ways. You may choose to migrate using one of the methods provided in this guide or choose any alternate ways to suit your needs. For detailed dump and restore syntax with additional parameters, see the articles [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html).
++
+## Prerequisites for using dump and restore with Azure Database for PostgreSQL
+
+To step through this how-to-guide, you need:
+
+- A **source** PostgreSQL database server running a lower version of the engine that you want to upgrade.
+- A **target** PostgreSQL database server with the desired major version [Azure Database for PostgreSQL server - Single Server](quickstart-create-server-database-portal.md) or [Azure Database for PostgreSQL - Flexible Server](../flexible-server/quickstart-create-server-portal.md).
+- A PostgreSQL client system to run the dump and restore commands. It is recommended to use the higher database version. For example, if you are upgrading from PostgreSQL version 9.6 to 11, use the PostgreSQL version 11 client.
+ - It can be a Linux or Windows client that has PostgreSQL installed and that has the [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html) command-line utilities installed.
+ - Alternatively, you can use [Azure Cloud Shell](https://shell.azure.com), or open it by selecting the Azure Cloud Shell icon on the menu bar at the upper right in the [Azure portal](https://portal.azure.com). You will have to sign in to your account with `az login` before running the dump and restore commands (see the sketch after this list).
+- Your PostgreSQL client should preferably run in the same region as the source and target servers.
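+
+A quick way to confirm these prerequisites from a shell. A sketch, assuming the Azure CLI and the PostgreSQL client tools are on the PATH:
+
+```bash
+# Sign in (needed for Azure Cloud Shell / Azure CLI workflows).
+az login
+
+# Confirm the client tools match or exceed the target major version (11 here).
+pg_dump --version
+pg_restore --version
+```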
++
+## Additional details and considerations
+- You can find the connection string to the source and target databases by selecting **Connection Strings** in the portal.
+- You may be running more than one database in your server. You can find the list of databases by connecting to your source server and running `\l`.
+- Create corresponding databases in the target database server, or add the `-C` option to the `pg_dump` command, which creates the databases for you (see the sketch after this list).
+- You must not upgrade `azure_maintenance` or template databases. If you have made any changes to template databases, you may choose to migrate the changes or make those changes in the target database.
+- Refer to the tables above to determine whether the database is suitable for this mode of migration.
+- If you want to use Azure Cloud Shell, please note that the session times out after 20 minutes. If your database size is < 10 GB, you may be able to complete the upgrade without the session timing out. Otherwise, you may have to keep the session open by other means, such as pressing a key once every 10-15 minutes.
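+
+As an example of the `-C` option, a plain-format dump that includes the `CREATE DATABASE` statement can be streamed straight into the target server's maintenance database. A sketch with placeholder server and user names:
+
+```bash
+# -C embeds CREATE DATABASE (and a reconnect) in the plain-format dump,
+# so the target database doesn't need to be created beforehand.
+pg_dump -C --host=mySourceServer --port=5432 --username=myUser --dbname=mySourceDB | psql --host=myTargetServer --port=5432 --username=myUser --dbname=postgres
+```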
++
+## Example database used in this guide
+
+In this guide, the following source and target servers and database names are used to illustrate with examples.
+
+ | **Description** | **Value** |
+ | - | - |
+ | Source server (v9.5) | pg-95.postgres.database.azure.com |
+ | Source database | bench5gb |
+ | Source database size | 5 GB |
+ | Source user name | pg@pg-95 |
+ | Target server (v11) | pg-11.postgres.database.azure.com |
+ | Target database | bench5gb |
+ | Target user name | pg@pg-11 |
+
+>[!NOTE]
+> Flexible server supports PostgreSQL version 11 onwards. Also, the flexible server user name does not require the @dbservername suffix.
+
+## Upgrade your databases using offline migration methods
+You may choose to use one of the methods described in this section for your upgrades. You can use the following tips while performing the tasks.
+
+- If you are using the same password for the source and the target database, you can set the `PGPASSWORD=yourPassword` environment variable so you don't have to provide the password every time you run commands like psql, pg_dump, and pg_restore. Similarly, you can set up additional variables like `PGUSER` and `PGSSLMODE`; see [PostgreSQL environment variables](https://www.postgresql.org/docs/11/libpq-envars.html). (A combined sketch follows these tips.)
+
+- If your PostgreSQL server requires TLS/SSL connections (on by default in Azure Database for PostgreSQL servers), set an environment variable `PGSSLMODE=require` so that the pg_restore tool connects with TLS. Without TLS, the error may read `FATAL: SSL connection is required. Please specify SSL options and retry.`
+
+- In the Windows command line, run the command `SET PGSSLMODE=require` before running the pg_restore command. In Linux or Bash run the command `export PGSSLMODE=require` before running the pg_restore command.
+
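+Putting these tips together, a one-time setup per shell session might look like the following. A sketch for Linux/Bash, with placeholder values:
+
+```bash
+export PGPASSWORD=yourPassword   # picked up by psql, pg_dump, and pg_restore
+export PGUSER=myUser             # default user name for the client tools
+export PGSSLMODE=require         # connect over TLS, as Azure requires by default
+
+# Subsequent commands no longer need explicit password or SSL options.
+pg_dump --host=mySourceServer --port=5432 --dbname=mySourceDB > mydump.sql
+```
+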
+>[!Important]
+> The steps and methods provided in this document are to give some examples of pg_dump/pg_restore commands and do not represent all possible ways to perform upgrades. It is recommended to test and validate the commands in a test environment before you use them in production.
+
+### Migrate the Roles
+
+Roles (users) are global objects and need to be migrated separately to the new cluster before restoring the database. You can use the `pg_dumpall` binary with the -r (--roles-only) option to dump them.
+To dump all the roles from the source server:
+
+```azurecli-interactive
+pg_dumpall -r --host=mySourceServer --port=5432 --username=myUser --database=mySourceDB > roles.sql
+```
+
+Edit the `roles.sql` and remove references of `NOSUPERUSER` and `NOBYPASSRLS` before restoring the content using psql in the target server:
+
+```azurecli-interactive
+psql -f roles.sql --host=myTargetServer --port=5432 --username=myUser --dbname=postgres
+```
+
+The dump script should not be expected to run completely without errors. In particular, because the script issues CREATE ROLE for every role existing in the source cluster, it is certain to get a "role already exists" error for the bootstrap superuser, such as azure_pg_admin or azure_superuser. This error is harmless and can be ignored. Use of the `--clean` option is likely to produce additional harmless error messages about non-existent objects, although you can minimize those by adding `--if-exists`.
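+
+Instead of editing `roles.sql` by hand, the removal can be scripted. A sketch using `sed`; review the resulting file before restoring it:
+
+```bash
+# Strip NOSUPERUSER and NOBYPASSRLS attributes in place; keeps a
+# roles.sql.bak backup copy of the original file.
+sed -i.bak -e 's/NOSUPERUSER//g' -e 's/NOBYPASSRLS//g' roles.sql
+```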
++
+### Method 1: Using pg_dump and psql
+
+This method involves two steps. First is to dump a SQL file from the source server using `pg_dump`. The second step is to import the file to the target server using `psql`. Please see the [Migrate using export and import](how-to-migrate-using-export-and-import.md) documentation for details.
+
+### Method 2: Using pg_dump and pg_restore
+
+In this method of upgrade, you first create a dump from the source server using `pg_dump`. Then you restore that dump file to the target server using `pg_restore`. Please see the [Migrate using dump and restore](how-to-migrate-using-dump-and-restore.md) documentation for details.
+
+### Method 3: Streaming the dump data to the target database
+
+If you do not have a PostgreSQL client, or you want to use Azure Cloud Shell, you can use this method. The database dump is streamed directly to the target database server and is not stored on the client. Hence, this method can be used with a client that has limited storage, and it can even be run from Azure Cloud Shell.
+
+1. Make sure the database exists in the target server using the `\l` command. If the database does not exist, create it.
+ ```azurecli-interactive
+ psql "host=myTargetServer port=5432 dbname=postgres user=myUser password=###### sslmode=mySSLmode"
+ ```
+ ```SQL
+ postgres> \l
+ postgres> create database myTargetDB;
+ ```
+
+2. Run the dump and restore as a single command line using a pipe.
+ ```azurecli-interactive
+ pg_dump -Fc --host=mySourceServer --port=5432 --username=myUser --dbname=mySourceDB | pg_restore --no-owner --host=myTargetServer --port=5432 --username=myUser --dbname=myTargetDB
+ ```
+
+ For example,
+
+ ```azurecli-interactive
+ pg_dump -Fc --host=pg-95.postgres.database.azure.com --port=5432 --username=pg@pg-95 --dbname=bench5gb | pg_restore --no-owner --host=pg-11.postgres.database.azure.com --port=5432 --username=pg@pg-11 --dbname=bench5gb
+ ```
+3. Once the upgrade (migration) process completes, you can test your application with the target server.
+4. Repeat this process for all the databases within the server.
+
+ As an example, the following table illustrates the time it took to migrate using the streaming dump method. The sample data was populated using [pgbench](https://www.postgresql.org/docs/10/pgbench.html). As your database can have a different number of objects with varied sizes compared to the pgbench-generated tables and indexes, it is highly recommended to test a dump and restore of your database to understand the actual time it takes to upgrade your database.
+
+| **Database Size** | **Approx. time taken** |
+| --- | --- |
+| 1 GB | 1-2 minutes |
+| 5 GB | 8-10 minutes |
+| 10 GB | 15-20 minutes |
+| 50 GB | 1-1.5 hours |
+| 100 GB | 2.5-3 hours|
+
+### Method 4: Using parallel dump and restore
+
+You can consider this method if you have a few large tables in your database and you want to parallelize the dump and restore process for that database. You also need enough storage on your client system to accommodate the backup dumps. This parallel dump and restore process reduces the time it takes to complete the whole migration. For example, the 50-GB pgbench database that took 1-1.5 hours to migrate using Methods 1 and 2 completed in less than 30 minutes using this method.
+
+1. For each database in your source server, create a corresponding database at the target server.
+
+ ```azurecli-interactive
+ psql "host=myTargetServer port=5432 dbname=postgres user=myuser password=###### sslmode=mySSLmode"
+ ```
+
+ ```SQL
+ postgres> create database myDB;
+ ```
+
+ For example,
+ ```bash
+ psql "host=pg-11.postgres.database.azure.com port=5432 dbname=postgres user=pg@pg-11 password=###### sslmode=require"
+ psql (12.3 (Ubuntu 12.3-1.pgdg18.04+1), server 13.3)
+ SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
+ Type "help" for help.
+
+ postgres> create database bench5gb;
+ postgres> \q
+ ```
+
+2. Run the `pg_dump` command in directory format with the number of jobs set to 4 (the number of tables in the database). With a larger compute tier and more tables, you can increase it to a higher number. The `pg_dump` command creates a directory to store compressed files for each job.
+
+ ```azurecli-interactive
+ pg_dump -Fd -v --host=sourceServer --port=5432 --username=myUser --dbname=mySourceDB -j 4 -f myDumpDirectory
+ ```
+ For example,
+ ```bash
+ pg_dump -Fd -v --host=pg-95.postgres.database.azure.com --port=5432 --username=pg@pg-95 --dbname=bench5gb -j 4 -f dump.dir
+ ```
+
+3. Then restore the backup at the target server.
+ ```azurecli-interactive
+ $ pg_restore -v --no-owner --host=myTargetServer --port=5432 --username=myUser --dbname=myTargetDB -j 4 myDumpDir
+ ```
+ For example,
+ ```bash
+ $ pg_restore -v --no-owner --host=pg-11.postgres.database.azure.com --port=5432 --username=pg@pg-11 --dbname=bench5gb -j 4 dump.dir
+ ```
+
+> [!TIP]
+> The process mentioned in this document can also be used to upgrade your Azure Database for PostgreSQL - Flexible Server, which is in Preview. The main difference is that the connection string for the flexible server target is without the `@dbName`. For example, if the user name is `pg`, the single server's user name in the connection string will be `pg@pg-95`, while with flexible server, you can simply use `pg`.
+
+## Post upgrade/migrate
+After the major version upgrade is complete, we recommend running the `ANALYZE` command in each database to refresh the `pg_statistic` table. Otherwise, you may run into performance issues.
+
+```SQL
+postgres=> analyze;
+ANALYZE
+```
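+
+To run `ANALYZE` across every database on the target server without opening psql for each one, the `vacuumdb` client tool can help. A sketch using the placeholder names from this guide:
+
+```bash
+# --analyze-only runs ANALYZE without a VACUUM; --all covers every database.
+vacuumdb --analyze-only --all --host=pg-11.postgres.database.azure.com --port=5432 --username=pg@pg-11 --maintenance-db=postgres
+```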
+
+## Next steps
+
+- After you're satisfied with the target database function, you can drop your old database server.
+- For Azure Database for PostgreSQL - Single server only. If you want to use the same database endpoint as the source server, then after you have deleted your old source database server, you can create a read replica with the old database server name. Once the steady replication state is established, you can stop the replica, which promotes the replica server to an independent server. See [Replication](./concepts-read-replicas.md) for more details.
+
+>[!Important]
+> It is highly recommended to test the new PostgreSQL version before using it directly for production. This includes comparing server parameters between the older source version and the newer target version. Ensure that they are the same, and check on any new parameters that were added in the new version. Differences between versions can be found [here](https://www.postgresql.org/docs/release/).
postgresql Overview Postgres Choose Server Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/overview-postgres-choose-server-options.md
+
+ Title: Choose the right PostgreSQL server option in Azure
+description: Provides guidelines for choosing the right PostgreSQL server option for your deployments.
++++++ Last updated : 12/01/2021+
+# Choose the right PostgreSQL server option in Azure
+
+With Azure, your PostgreSQL Server workloads can run in a hosted virtual machine infrastructure as a service (IaaS) or as a hosted platform as a service (PaaS). PaaS has multiple deployment options, each with multiple service tiers. When you choose between IaaS and PaaS, you must decide if you want to manage your database, apply patches, and make backups, or if you want to delegate these operations to Azure.
+
+When making your decision, consider the following three options in PaaS, or alternatively running on Azure VMs (IaaS):
+- [Azure Database for PostgreSQL Single Server](./overview-single-server.md)
+- [Azure Database for PostgreSQL Flexible Server](../flexible-server/overview.md)
+- [Azure Database for PostgreSQL Hyperscale (Citus)](../hyperscale/index.yml)
+
+The **PostgreSQL on Azure VMs** option falls into the industry category of IaaS. With this service, you can run PostgreSQL Server inside a fully managed virtual machine on the Azure cloud platform. All recent versions and editions of PostgreSQL can be installed on an IaaS virtual machine. In the most significant difference from Azure Database for PostgreSQL, PostgreSQL on Azure VMs offers control over the database engine. However, this control comes at the cost of responsibility to manage the VMs and many database administration (DBA) tasks. These tasks include maintaining and patching database servers, database recovery, and high-availability design.
+
+The main differences between these options are listed in the following table:
+
+| **Attribute** | **Postgres on Azure VMs** | **PostgreSQL as PaaS** |
+| -- | -- | -- |
+| **Availability SLA** |- [Virtual Machine SLA](https://azure.microsoft.com/support/legal/sla/virtual-machines/v1_9/) | - [Single Server, Flexible Server, and Hyperscale (Citus) SLA](https://azure.microsoft.com/support/legal/sla/postgresql/v1_2/)|
+| **OS and PostgreSQL patching** | - Customer managed | - Single Server – Automatic <br> - Flexible Server – Automatic with optional customer managed window <br> - Hyperscale (Citus) – Automatic |
+| **High availability** | - Customers architect, implement, test, and maintain high availability. Capabilities might include clustering, replication etc. | - Single Server: built-in <br> - Flexible Server: built-in <br> - Hyperscale (Citus): built with standby |
+| **Zone Redundancy** | - Azure VMs can be set up to run in different availability zones. For an on-premises solution, customers must create, manage, and maintain their own secondary data center. | - Single Server: No <br> - Flexible Server: Yes <br> - Hyperscale (Citus): No |
+| **Hybrid Scenario** | - Customer managed |- Single Server: Read-replica <br> - Flexible Server: Not available during Preview <br> - Hyperscale (Citus): No |
+| **Backup and Restore** | - Customer Managed | - Single Server: built-in with user configuration for local and geo <br> - Flexible Server: built-in with user configuration on zone-redundant storage <br> - Hyperscale (Citus): built-in |
+| **Monitoring Database Operations** | - Customer Managed | - Single Server, Flexible Server, and Hyperscale (Citus): All offer customers the ability to set alerts on the database operation and act upon reaching thresholds. |
+| **Advanced Threat Protection** | - Customers must build this protection for themselves. |- Single Server: Yes <br> - Flexible Server: Not available during Preview <br> - Hyperscale (Citus): No |
+| **Disaster Recovery** | - Customer Managed | - Single Server: Geo redundant backup and geo read-replica <br> - Flexible Server: Not available during Preview <br> - Hyperscale (Citus): No |
+| **Intelligent Performance** | - Customer Managed | - Single Server: Yes <br> - Flexible Server: Not available during Preview <br> - Hyperscale (Citus): No |
+
+## Total cost of ownership (TCO)
+
+TCO is often the primary consideration that determines the best solution for hosting your databases. This is true whether you're a startup with little cash or a team in an established company that operates under tight budget constraints. This section describes billing and licensing basics in Azure as they apply to Azure Database for PostgreSQL and PostgreSQL on Azure VMs.
+
+## Billing
+
+Azure Database for PostgreSQL is currently available as a service in several tiers with different prices for resources. All resources are billed hourly at a fixed rate. For the latest information on the currently supported service tiers, compute sizes, and storage amounts, see the [pricing page](https://azure.microsoft.com/pricing/details/postgresql/server/). You can dynamically adjust service tiers and compute sizes to match your application's varied throughput needs. You're billed for outgoing Internet traffic at regular [data transfer rates](https://azure.microsoft.com/pricing/details/data-transfers/).
+
+With Azure Database for PostgreSQL, Microsoft automatically configures, patches, and upgrades the database software. These automated actions reduce your administration costs. Also, Azure Database for PostgreSQL has automated backup capabilities. These capabilities help you achieve significant cost savings, especially when you have a large number of databases. In contrast, with PostgreSQL on Azure VMs you can choose and run any PostgreSQL version. However, you need to pay for the provisioned VM, the storage cost associated with the data, backup, monitoring data, and log storage, and the costs for the specific PostgreSQL license type used (if any).
+
+Azure Database for PostgreSQL Single Server provides built-in high availability at the zonal-level (within an AZ) for any kind of node-level interruption while still maintaining the [SLA guarantee](https://azure.microsoft.com/support/legal/sla/postgresql/v1_2/) for the service. Flexible Server provides [uptime SLAs](https://azure.microsoft.com/support/legal/sla/postgresql/v1_2/) with and without zone-redundant configuration. However, for database high availability within VMs, you use the high availability options like [Streaming Replication](https://www.postgresql.org/docs/12/warm-standby.html#STREAMING-REPLICATION) that are available on a PostgreSQL database. Using a supported high availability option doesn't provide an additional SLA. But it does let you achieve greater than 99.99% database availability at additional cost and administrative overhead.
+
+For more information on pricing, see the following articles:
+- [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/)
+- [Virtual machine pricing](https://azure.microsoft.com/pricing/details/virtual-machines/)
+- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/)
+
+## Administration
+
+For many businesses, the decision to transition to a cloud service is as much about offloading complexity of administration as it is about cost.
+
+With IaaS, Microsoft:
+
+- Administers the underlying infrastructure.
+- Provides automated patching for underlying hardware and OS.
+
+With PaaS, Microsoft:
+
+- Administers the underlying infrastructure.
+- Provides automated patching for underlying hardware, OS and database engine.
+- Manages high availability of the database.
+- Automatically performs backups and replicates all data to provide disaster recovery.
+- Encrypts the data at rest and in motion by default.
+- Monitors your server and provides features for query performance insights and performance recommendations.
+
+With Azure Database for PostgreSQL, you can continue to administer your database. But you no longer need to manage the database engine, the operating system, or the hardware. Examples of items you can continue to administer include:
+
+- Databases
+- Sign-in
+- Index tuning
+- Query tuning
+- Auditing
+- Security
+
+Additionally, configuring high availability to another data center requires minimal to no configuration or administration.
+
+- With PostgreSQL on Azure VMs, you have full control over the operating system and the PostgreSQL server instance configuration. With a VM, you decide when to update or upgrade the operating system and database software and what patches to apply. You also decide when to install any additional software such as an antivirus application. Some automated features are provided to greatly simplify patching, backup, and high availability. You can control the size of the VM, the number of disks, and their storage configurations. For more information, see [Virtual machine and cloud service sizes for Azure](../../virtual-machines/sizes.md).
+
+## Time to move to Azure PostgreSQL Service (PaaS)
+
+- Azure Database for PostgreSQL is the right solution for cloud-designed applications when developer productivity and fast time to market for new solutions are critical. With DBA-like programmatic functionality, the service is suitable for cloud architects and developers because it lowers the need for managing the underlying operating system and database.
+
+- When you want to avoid the time and expense of acquiring new on-premises hardware, PostgreSQL on Azure VMs is the right solution for applications that require granular control and customization of the PostgreSQL engine not supported by the service, or that require access to the underlying OS.
+
+## Next steps
+
+- See Azure Database for [PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
+- Get started by creating your first server.
postgresql Overview Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/overview-single-server.md
+
+ Title: Azure Database for PostgreSQL Single Server
+description: Provides an overview of Azure Database for PostgreSQL Single Server.
++++++ Last updated : 11/30/2021+
+# Azure Database for PostgreSQL Single Server
+
+[Azure Database for PostgreSQL](./overview.md) powered by the PostgreSQL community edition is available in three deployment modes:
+
+- Single Server
+- Flexible Server
+- Hyperscale (Citus)
+
+In this article, we provide an overview and introduction to the core concepts of the single server deployment model. To learn about the flexible server and Hyperscale (Citus) deployment modes, see the [flexible server overview](../flexible-server/overview.md) and the Hyperscale (Citus) overview, respectively.
+
+## Overview
+
+Single Server is a fully managed database service with minimal requirements for customization of the database. The single server platform is designed to handle most of the database management functions such as patching, backups, high availability, and security with minimal user configuration and control. The architecture is optimized to provide 99.99% availability in a single availability zone. It supports the community versions of PostgreSQL 9.6, 10, and 11. The service is generally available today in a wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
+
+Single servers are best suited for cloud native applications designed to handle automated patching without the need for granular control on the patching schedule and custom PostgreSQL configuration settings.
+
+## High availability
+
+The single server deployment model is optimized for built-in high availability, and elasticity at reduced cost. The architecture separates compute and storage. The database engine runs on a proprietary compute container, while data files reside on Azure storage. The storage maintains three locally redundant synchronous copies of the database files ensuring data durability.
+
+During planned or unplanned failover events, if the server goes down, the service maintains high availability of the servers using the following automated procedure:
+
+1. A new compute container is provisioned.
+2. The storage with data files is mapped to the new container.
+3. PostgreSQL database engine is brought online on the new compute container.
+4. The gateway service ensures a transparent failover, so no application-side changes are required.
+
+ :::image type="content" source="./media/overview/overview-azure-postgres-single-server.png" alt-text="Azure Database for PostgreSQL Single Server":::
+
+The typical failover time ranges from 60-120 seconds. The cloud native design of the single server service allows it to support 99.99% availability, eliminating the cost of a passive hot standby.
+
+Azure's industry leading 99.99% availability service level agreement (SLA), powered by a global network of Microsoft-managed datacenters, helps keep your applications running 24/7.
+
+## Automated patching
+
+The service performs automated patching of the underlying hardware, OS, and database engine. The patching includes security and software updates. For the PostgreSQL engine, minor version upgrades are automatic and included as part of the patching cycle. No user action or configuration setting is required for patching. The patching frequency is service managed based on the criticality of the payload. In general, the service follows a monthly release schedule as part of the continuous integration and release. Users can subscribe to the planned maintenance notification to receive notification of the upcoming maintenance 72 hours before the event.
+
+## Automatic backups
+
+The single server service automatically creates server backups and stores them in user-configured locally redundant (LRS) or geo-redundant storage. Backups can be used to restore your server to any point-in-time within the backup retention period. The default backup retention period is seven days. The retention can be optionally configured up to 35 days. All backups are encrypted using AES 256-bit encryption. See [Backups](./concepts-backup.md) for details.
+
+## Adjust performance and scale within seconds
+
+The single server service is available in three SKU tiers: Basic, General Purpose, and Memory Optimized. The Basic tier is best suited for low-cost development and low-concurrency workloads. The General Purpose and Memory Optimized tiers are better suited for production workloads requiring high concurrency, scale, and predictable performance. You can build your first app on a small database for a few dollars a month, and then adjust the scale to meet the needs of your solution. The storage scaling is online and supports storage auto-growth. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you consume. See Pricing tiers for details.
+
+## Enterprise grade security, compliance, and governance
+
+The single server service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at rest. Data, including backups, and temporary files created while running queries are encrypted. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys can be system managed (default) or customer managed. The service encrypts data in motion with transport layer security (SSL/TLS) enforced by default. The service supports TLS versions 1.2, 1.1, and 1.0 with the ability to enforce a minimum TLS version.
+
+The service allows private access to the servers using Private Link and provides the Advanced Threat Protection feature. Advanced Threat Protection detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases.
+
+In addition to native authentication, the single server service supports Azure Active Directory authentication. Azure AD authentication is a mechanism of connecting to the PostgreSQL servers using identities defined and managed in Azure AD. With Azure AD authentication, you can manage database user identities and other Azure services in a central location, which simplifies and centralizes access control.
+
+Audit logging (in preview) is available to track all database-level activity.
+
+The single server service is compliant with all the industry-leading certifications like FedRAMP, HIPAA, and PCI DSS. Visit the Azure Trust Center for information about Azure's platform security.
+
+For more information about Azure Database for PostgreSQL security features, see the security overview.
+
+## Monitoring and alerting
+
+The single server service is equipped with built-in performance monitoring and alerting features. All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics. The service allows configuring slow query logs and comes with a differentiated [Query store](./concepts-query-store.md) feature. Query Store simplifies performance troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Using these tools, you can quickly optimize your workloads, and configure your server for best performance. See [Monitoring](./concepts-monitoring.md) for details.
+
+## Migration
+
+The service runs the community version of PostgreSQL. This allows full application compatibility and requires minimal refactoring cost to migrate an existing application developed on the PostgreSQL engine to the single server service. The migration to the single server can be performed using one of the following options:
+
+- **Dump and Restore** – For offline migrations, where users can afford some downtime, dump and restore using community tools like pg_dump and pg_restore can provide the fastest way to migrate. See [Migrate using dump and restore](./how-to-migrate-using-dump-and-restore.md) for details.
+- **Azure Database Migration Service** – For seamless and simplified migrations to single server with minimal downtime, you can use the Azure Database Migration Service. See [DMS via portal](../../dms/tutorial-postgresql-azure-postgresql-online-portal.md) and [DMS via CLI](../../dms/tutorial-postgresql-azure-postgresql-online.md).
+
+## Frequently Asked Questions
+
+ Will Flexible Server replace Single Server, or will Single Server be retired soon?
+
+We continue to support Single Server and encourage you to adopt Flexible Server, which has richer capabilities such as zone-resilient HA, predictable performance, maximum control, a custom maintenance window, cost optimization controls, and a simplified developer experience suitable for your enterprise workloads. If we decide to retire any service, feature, API, or SKU, you will receive advance notice, including a migration or transition path. Learn more about Microsoft Lifecycle policies [here](/lifecycle/faq/general-lifecycle).
++
+## Contacts
+
+For any questions or suggestions you might have about working with Azure Database for PostgreSQL, send an email to the Azure Database for PostgreSQL Team ([@Ask Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com)). This email address is not a technical support alias.
+
+In addition, consider the following points of contact as appropriate:
+
+- To contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+- To fix an issue with your account, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
+- To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/d365community/forum/c5e32b97-ee24-ec11-b6e6-000d3a4f0da0).
+
+## Next steps
+
+Now that you've read an introduction to Azure Database for PostgreSQL single server deployment mode, you're ready to:
+- Create your first server.
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/overview.md
+
+ Title: What is Azure Database for PostgreSQL
+description: Provides an overview of Azure Database for PostgreSQL relational database service in the context of flexible server.
++++++ Last updated : 01/24/2022
+adobe-target: true
++
+# What is Azure Database for PostgreSQL?
+
+Azure Database for PostgreSQL is a relational database service in the Microsoft cloud based on the [PostgreSQL open source relational database](https://www.postgresql.org/). Azure Database for PostgreSQL delivers:
+
+- Built-in high availability.
+- Data protection using automatic backups and point-in-time-restore for up to 35 days.
+- Automated maintenance for underlying hardware, operating system and database engine to keep the service secure and up to date.
+- Predictable performance, using inclusive pay-as-you-go pricing.
+- Elastic scaling within seconds.
+- Enterprise grade security and industry-leading compliance to protect sensitive data at-rest and in-motion.
+- Monitoring and automation to simplify management and monitoring for large-scale deployments.
+- Industry-leading support experience.
+
+ :::image type="content" source="./media/overview/overview-what-is-azure-postgres.png" alt-text="Azure Database for PostgreSQL":::
+
+These capabilities require almost no administration, and all are provided at no additional cost. They allow you to focus on rapid application development and accelerating your time to market rather than allocating precious time and resources to managing virtual machines and infrastructure. In addition, you can continue to develop your application with the open-source tools and platform of your choice to deliver with the speed and efficiency your business demands, all without having to learn new skills.
+
+## Deployment models
+
+Azure Database for PostgreSQL powered by the PostgreSQL community edition is available in three deployment modes:
+
+- Single Server
+- Flexible Server
+- Hyperscale (Citus)
+
+### Azure Database for PostgreSQL - Single Server
+
+Azure Database for PostgreSQL Single Server is a fully managed database service with minimal requirements for customization of the database. The single server platform is designed to handle most of the database management functions such as patching, backups, high availability, and security with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability in a single availability zone. It supports the community versions of PostgreSQL 9.5, 9.6, 10, and 11. The service is generally available today in a wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
+
+The Single Server deployment option offers three pricing tiers: Basic, General Purpose, and Memory Optimized. Each tier offers different resource capabilities to support your database workloads. You can build your first app on a small database for a few dollars a month, and then adjust the scale to meet the needs of your solution. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you need, and only when you need them. See [Pricing tiers](./concepts-pricing-tiers.md) for details.
+
+Single servers are best suited for cloud native applications designed to handle automated patching without the need for granular control on the patching schedule and custom PostgreSQL configuration settings.
+
+For a detailed overview of the single server deployment mode, see the [single server overview](./overview-single-server.md).
+
+### Azure Database for PostgreSQL - Flexible Server
+
+Azure Database for PostgreSQL Flexible Server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. In general, the service provides more flexibility and customizations based on the user requirements. The flexible server architecture allows users to opt for high availability within a single availability zone and across multiple availability zones. Flexible Server provides better cost optimization controls with the ability to stop/start the server and a burstable compute tier, ideal for workloads that don't need full compute capacity continuously. The service currently supports the community versions of PostgreSQL 11, 12, and 13, with plans to add newer versions soon. The service is generally available today in a wide variety of Azure regions.
+
+Flexible servers are best suited for:
+
+- Application developments requiring better control and customizations
+- Cost optimization controls with ability to stop/start server
+- Zone redundant high availability
+- Managed maintenance windows
+
+For a detailed overview of flexible server deployment mode, see [flexible server overview](../flexible-server/overview.md).
+
+### Azure Database for PostgreSQL – Hyperscale (Citus)
+
+The Hyperscale (Citus) option horizontally scales queries across multiple machines using sharding. Its query engine parallelizes incoming SQL queries across these servers for faster responses on large datasets. It serves applications that require greater scale and performance, generally workloads that are approaching, or already exceed, 100 GB of data.
+
+The Hyperscale (Citus) deployment option delivers:
+
+- Horizontal scaling across multiple machines using sharding
+- Query parallelization across these servers for faster responses on large datasets
+- Excellent support for multi-tenant applications, real-time operational analytics, and high-throughput transactional workloads
+
+Applications built for PostgreSQL can run distributed queries on Hyperscale (Citus) with standard [connection libraries](./concepts-connection-libraries.md) and minimal changes.
+
+## Next steps
+
+Learn more about the three deployment modes for Azure Database for PostgreSQL and choose the right options based on your needs.
+
+- [Single Server](./overview-single-server.md)
+- [Flexible Server](../flexible-server/overview.md)
+- [Hyperscale (Citus)](../hyperscale/overview.md)
postgresql Partners Migration Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/partners-migration-postgresql.md
+
+ Title: Azure Database for PostgreSQL migration partners
+description: Lists of third-party migration partners with solutions that support Azure Database for PostgreSQL.
+++++ Last updated : 08/07/2018++
+# Azure Database for PostgreSQL migration partners
+To broadly support your Azure Database for PostgreSQL solution, you can choose from a wide variety of industry-leading partners and tools. This article highlights Microsoft partners with migration solutions that support Azure Database for PostgreSQL.
+
+## Migration partners
+| Partner | Description | Links | Videos |
+| --- | --- | --- | --- |
+| ![SNP Technologies][1] |**SNP Technologies**<br>SNP Technologies is a cloud-only service provider, building secure and reliable solutions for businesses of the future. The company believes in generating real value for your business. From thought to execution, SNP Technologies shares a common purpose with clients, to turn their investment into an advantage.|[Website][snp_website]<br>[Twitter][snp_twitter]<br>[Contact][snp_contact] | |
+| ![Pragmatic Works][3] |**Pragmatic Works**<br>Pragmatic Works is a training and consulting company with deep expertise in data management and performance, Business Intelligence, Big Data, Power BI, and Azure. They focus on data optimization and improving the efficiency of SQL Server and cloud management.|[Website][pragmatic-works_website]<br>[Twitter][pragmatic-works_twitter]<br>[YouTube][pragmatic-works_youtube]<br>[Contact][pragmatic-works_contact] | |
+| ![Infosys][4] |**Infosys**<br>Infosys is a global leader in the latest digital services and consulting. With over three decades of experience managing the systems of global enterprises, Infosys expertly steers clients through their digital journey by enabling organizations with an AI-powered core. Doing so helps prioritize the execution of change. Infosys also provides businesses with agile digital at scale to deliver unprecedented levels of performance and customer delight.|[Website][infosys_website]<br>[Twitter][infosys_twitter]<br>[YouTube][infosys_youtube]<br>[Contact][infosys_contact] | |
+| ![credativ][5] |**credativ**<br>credativ is an independent consulting and services company. Since 1999, they have offered comprehensive services and technical support for the implementation and operation of Open Source software in business applications. Their comprehensive range of services includes strategic consulting, sound technical advice, qualified training, and personalized support up to 24 hours per day for all your IT needs.|[Marketplace][credativ_marketplace]<br>[Website][credativ_website]<br>[Twitter][credative_twitter]<br>[YouTube][credativ_youtube]<br>[Contact][credativ_contact] | |
+| ![Pactera][6] |**Pactera**<br>Pactera is a global company offering consulting, digital, technology, and operations services to the world's leading enterprises. From their roots in engineering to the latest in digital transformation, they give customers a competitive edge. Their proven methodologies and tools ensure your data is secure, authentic, and accurate.|[Website][pactera_website]<br>[Twitter][pactera_twitter]<br>[Contact][pactera_contact] | |
+
+## Next steps
+To learn more about some of Microsoft's other partners, see the [Microsoft Partner site](https://partner.microsoft.com/).
+
+<!--Image references-->
+[1]: ./media/partner-migration-postgresql/snp-logo.png
+[3]: ./media/partner-migration-postgresql/pw-logo-text-cmyk-1000.png
+[4]: ./media/partner-migration-postgresql/infosys-logo.png
+[5]: ./media/partner-migration-postgresql/credativ-round-logo-2.png
+[6]: ./media/partner-migration-postgresql/pactera-logo-small-2.png
+
+<!--Website links -->
+[snp_website]:https://www.snp.com//
+[pragmatic-works_website]:https://pragmaticworks.com//
+[infosys_website]:https://www.infosys.com/
+[credativ_website]:https://www.credativ.com/postgresql-competence-center/microsoft-azure
+[pactera_website]:https://en.pactera.com/
+
+<!--Get Started Links-->
+<!--Datasheet Links-->
+<!--Marketplace Links -->
+[credativ_marketplace]:https://azuremarketplace.microsoft.com/de-de/marketplace/apps?search=credativ&page=1
+
+<!--Press links-->
+
+<!--YouTube links-->
+[pragmatic-works_youtube]:https://www.youtube.com/user/PragmaticWorks
+[infosys_youtube]:https://www.youtube.com/user/Infosys
+[credativ_youtube]:https://www.youtube.com/channel/UCnSnr6_TcILUQQvAwlYFc8A
+
+<!--Twitter links-->
+[snp_twitter]:https://twitter.com/snptechnologies
+[pragmatic-works_twitter]:https://twitter.com/PragmaticWorks
+[infosys_twitter]:https://twitter.com/infosys
+[credative_twitter]:https://twitter.com/credativ
+[pactera_twitter]:https://twitter.com/Pactera?s=17
+
+<!--Contact links-->
+[snp_contact]:mailto:sachin@snp.com
+[pragmatic-works_contact]:mailto:marketing@pragmaticworks.com
+[infosys_contact]:https://www.infosys.com/contact/
+[credativ_contact]:mailto:info@credativ.com
+[pactera_contact]:mailto:shushi.gaur@pactera.com
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md
+
+ Title: Built-in policy definitions for Azure Database for PostgreSQL
+description: Lists Azure Policy built-in policy definitions for Azure Database for PostgreSQL. These built-in policy definitions provide common approaches to managing your Azure resources.
++++++ Last updated : 05/11/2022+
+# Azure Policy built-in definitions for Azure Database for PostgreSQL
+
+This page is an index of [Azure Policy](../../governance/policy/overview.md) built-in policy
+definitions for Azure Database for PostgreSQL. For additional Azure Policy built-ins for other
+services, see
+[Azure Policy built-in definitions](../../governance/policy/samples/built-in-policies.md).
+
+The name of each built-in policy definition links to the policy definition in the Azure portal. Use
+the link in the **Version** column to view the source on the
+[Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
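+
+You can also query the built-in definitions from the command line. The following Azure CLI sketch filters built-ins whose display name mentions PostgreSQL; the JMESPath projection shape is an assumption about how you want the output arranged:
+
+```azurecli
+# List built-in policy definitions related to Azure Database for PostgreSQL.
+az policy definition list \
+    --query "[?policyType=='BuiltIn' && contains(displayName, 'PostgreSQL')].{Name:displayName, Version:metadata.version}" \
+    --output table
+```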
+
+## Azure Database for PostgreSQL
++
+## Next steps
+
+- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+- Review the [Azure Policy definition structure](../../governance/policy/concepts/definition-structure.md).
+- Review [Understanding policy effects](../../governance/policy/concepts/effects.md).
postgresql Quickstart Create Postgresql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-arm-template.md
+
+ Title: 'Quickstart: Create an Azure DB for PostgreSQL - ARM template'
+description: In this quickstart, learn how to create an Azure Database for PostgreSQL single server by using an Azure Resource Manager template.
++++++ Last updated : 02/11/2021++
+# Quickstart: Use an ARM template to create an Azure Database for PostgreSQL - single server
+
+Azure Database for PostgreSQL is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. In this quickstart, you use an Azure Resource Manager template (ARM template) to create an Azure Database for PostgreSQL - single server in the Azure portal, PowerShell, or Azure CLI.
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.dbforpostgresql%2Fmanaged-postgresql-with-vnet%2Fazuredeploy.json)
+
+## Prerequisites
+
+# [Portal](#tab/azure-portal)
+
+An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+
+# [PowerShell](#tab/PowerShell)
+
+* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+* If you want to run the code locally, [Azure PowerShell](/powershell/azure/).
+
+# [CLI](#tab/CLI)
+
+* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+* If you want to run the code locally, [Azure CLI](/cli/azure/).
+++
+## Review the template
+
+You create an Azure Database for PostgreSQL server with a configured set of compute and storage resources. To learn more, see [Pricing tiers in Azure Database for PostgreSQL - Single Server](concepts-pricing-tiers.md). You create the server within an [Azure resource group](../../azure-resource-manager/management/overview.md).
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/managed-postgresql-with-vnet/).
++
+The template defines five Azure resources:
+
+* [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
+* [**Microsoft.Network/virtualNetworks/subnets**](/azure/templates/microsoft.network/virtualnetworks/subnets)
+* [**Microsoft.DBforPostgreSQL/servers**](/azure/templates/microsoft.dbforpostgresql/servers)
+* [**Microsoft.DBforPostgreSQL/servers/virtualNetworkRules**](/azure/templates/microsoft.dbforpostgresql/servers/virtualnetworkrules)
+* [**Microsoft.DBforPostgreSQL/servers/firewallRules**](/azure/templates/microsoft.dbforpostgresql/servers/firewallrules)
+
+More Azure Database for PostgreSQL template samples can be found in [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Dbforpostgresql&pageNumber=1&sort=Popular).
+
+## Deploy the template
+
+# [Portal](#tab/azure-portal)
+
+Select the following link to deploy the Azure Database for PostgreSQL server template in the Azure portal:
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure link":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.dbforpostgresql%2Fmanaged-postgresql-with-vnet%2Fazuredeploy.json)
+
+On the **Deploy Azure Database for PostgreSQL with VNet** page:
+
+1. For **Resource group**, select **Create new**, enter a name for the new resource group, and select **OK**.
+
+2. If you created a new resource group, select a **Location** for the resource group and the new server.
+
+3. Enter a **Server Name**, **Administrator Login**, and **Administrator Login Password**.
+
+ :::image type="content" source="./media/quickstart-create-postgresql-server-database-using-arm-template/deploy-azure-database-for-postgresql-with-vnet.png" alt-text="Deploy Azure Database for PostgreSQL with VNet window, Azure quickstart template, Azure portal":::
+
+4. Change the other default settings if you want:
+
+ * **Subscription**: the Azure subscription you want to use for the server.
+ * **Sku Capacity**: the vCore capacity, which can be *2* (the default), *4*, *8*, *16*, *32*, or *64*.
+ * **Sku Name**: the SKU tier prefix, SKU family, and SKU capacity, joined by underscores, such as *B_Gen5_1*, *GP_Gen5_2* (the default), or *MO_Gen5_32*.
+ * **Sku Size MB**: the storage size, in megabytes, of the Azure Database for PostgreSQL server (default *51200*).
+ * **Sku Tier**: the deployment tier, such as *Basic*, *GeneralPurpose* (the default), or *MemoryOptimized*.
+ * **Sku Family**: *Gen4* or *Gen5* (the default), which indicates hardware generation for server deployment.
+ * **PostgreSQL Version**: the version of PostgreSQL server to deploy, such as *9.5*, *9.6*, *10*, or *11* (the default).
+ * **Backup Retention Days**: the desired period for geo-redundant backup retention, in days (default *7*).
+ * **Geo Redundant Backup**: *Enabled* or *Disabled* (the default), depending on geo-disaster recovery (Geo-DR) requirements.
+ * **Virtual Network Name**: the name of the virtual network (default *azure_postgresql_vnet*).
+ * **Subnet Name**: the name of the subnet (default *azure_postgresql_subnet*).
+ * **Virtual Network Rule Name**: the name of the virtual network rule allowing the subnet (default *AllowSubnet*).
+ * **Vnet Address Prefix**: the address prefix for the virtual network (default *10.0.0.0/16*).
+ * **Subnet Prefix**: the address prefix for the subnet (default *10.0.0.0/16*).
+
+5. Read the terms and conditions, and then select **I agree to the terms and conditions stated above**.
+
+6. Select **Purchase**.
+
+# [PowerShell](#tab/PowerShell)
+
+Use the following interactive code to create a new Azure Database for PostgreSQL server using the template. The code prompts you for the new server name, the name and location of a new resource group, and an administrator account name and password.
+
+To run the code in Azure Cloud Shell, select **Try it** in the upper-right corner of any code block.
+
+```azurepowershell-interactive
+$serverName = Read-Host -Prompt "Enter a name for the new Azure Database for PostgreSQL server"
+$resourceGroupName = Read-Host -Prompt "Enter a name for the new resource group where the server will exist"
+$location = Read-Host -Prompt "Enter an Azure region (for example, centralus) for the resource group"
+$adminUser = Read-Host -Prompt "Enter the Azure Database for PostgreSQL server's administrator account name"
+$adminPassword = Read-Host -Prompt "Enter the administrator password" -AsSecureString
+
+New-AzResourceGroup -Name $resourceGroupName -Location $location # Use this command when you need to create a new resource group for your deployment
+New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName `
+ -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.dbforpostgresql/managed-postgresql-with-vnet/azuredeploy.json `
+ -serverName $serverName `
+ -administratorLogin $adminUser `
+ -administratorLoginPassword $adminPassword
+
+Read-Host -Prompt "Press [ENTER] to continue: "
+```
+
+# [CLI](#tab/CLI)
+
+Use the following interactive code to create a new Azure Database for PostgreSQL server using the template. The code prompts you for the new server name, the name and location of a new resource group, and an administrator account name and password.
+
+To run the code in Azure Cloud Shell, select **Try it** in the upper-right corner of any code block.
+
+```azurecli-interactive
+read -p "Enter a name for the new Azure Database for PostgreSQL server:" serverName &&
+read -p "Enter a name for the new resource group where the server will exist:" resourceGroupName &&
+read -p "Enter an Azure region (for example, centralus) for the resource group:" location &&
+read -p "Enter the Azure Database for PostgreSQL server's administrator account name:" adminUser &&
+read -p "Enter the administrator password:" adminPassword &&
+params='serverName='$serverName' administratorLogin='$adminUser' administratorLoginPassword='$adminPassword &&
+az group create --name $resourceGroupName --location $location &&
+az deployment group create --resource-group $resourceGroupName --parameters $params --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.dbforpostgresql/managed-postgresql-with-vnet/azuredeploy.json &&
+read -p "Press [ENTER] to continue: "
+```
+++
+## Review deployed resources
+
+# [Portal](#tab/azure-portal)
+
+Follow these steps to see an overview of your new Azure Database for PostgreSQL server:
+
+1. In the [Azure portal](https://portal.azure.com), search for and select **Azure Database for PostgreSQL servers**.
+
+2. In the database list, select your new server. The **Overview** page for your new Azure Database for PostgreSQL server appears.
+
+# [PowerShell](#tab/PowerShell)
+
+Run the following interactive code to view details about your Azure Database for PostgreSQL server. You'll have to enter the name of the new server.
+
+```azurepowershell-interactive
+$serverName = Read-Host -Prompt "Enter the name of your Azure Database for PostgreSQL server"
+Get-AzResource -ResourceType "Microsoft.DbForPostgreSQL/servers" -Name $serverName | ft
+Read-Host -Prompt "Press [ENTER] to continue: "
+```
+
+# [CLI](#tab/CLI)
+
+Run the following interactive code to view details about your Azure Database for PostgreSQL server. You'll have to enter the name and the resource group of the new server.
+
+```azurecli-interactive
+read -p "Enter your Azure Database for PostgreSQL server name: " serverName &&
+read -p "Enter the resource group where the Azure Database for PostgreSQL server exists: " resourcegroupName &&
+az resource show --resource-group $resourcegroupName --name $serverName --resource-type "Microsoft.DbForPostgreSQL/servers" &&
+read -p "Press [ENTER] to continue: "
+```
+++
+## Export an ARM template from the portal
+You can [export an ARM template](../../azure-resource-manager/templates/export-template-portal.md) from the Azure portal. There are two ways to export a template:
+
+- [Export from resource group or resource](../../azure-resource-manager/templates/export-template-portal.md#export-template-from-a-resource). This option generates a new template from existing resources. The exported template is a "snapshot" of the current state of the resource group. You can export an entire resource group or specific resources within that resource group.
+- [Export before deployment or from history](../../azure-resource-manager/templates/export-template-portal.md#download-template-before-deployment). This option retrieves an exact copy of a template used for deployment.
+
+When you export the template, the `"properties": { }` section of the PostgreSQL server resource doesn't include `administratorLogin` and `administratorLoginPassword`, for security reasons. You **must** add these parameters to your template before deploying it, or the deployment will fail.
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.DBforPostgreSQL/servers",
+ "apiVersion": "2017-12-01",
+ "name": "[parameters('servers_name')]",
+ "location": "southcentralus",
+ "sku": {
+ "name": "B_Gen5_1",
+ "tier": "Basic",
+ "family": "Gen5",
+ "capacity": 1
+ },
+ "properties": {
+ "administratorLogin": "[parameters('administratorLogin')]",
+ "administratorLoginPassword": "[parameters('administratorLoginPassword')]",
+```
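+
+For example, here's a minimal `parameters` section you might add to the exported template (a sketch; the parameter names must match the references shown above):
+
+```json
+"parameters": {
+    "administratorLogin": {
+        "type": "string"
+    },
+    "administratorLoginPassword": {
+        "type": "securestring"
+    }
+}
+```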
+++
+## Clean up resources
+
+When it's no longer needed, delete the resource group, which deletes the resources in the resource group.
+
+# [Portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com), search for and select **Resource groups**.
+
+2. In the resource group list, choose the name of your resource group.
+
+3. In the **Overview** page of your resource group, select **Delete resource group**.
+
+4. In the confirmation dialog box, type the name of your resource group, and then select **Delete**.
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
+Remove-AzResourceGroup -Name $resourceGroupName
+Read-Host -Prompt "Press [ENTER] to continue: "
+```
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+read -p "Enter the Resource Group name: " resourceGroupName &&
+az group delete --name $resourceGroupName &&
+read -p "Press [ENTER] to continue: "
+```
+++
+## Next steps
+
+For a step-by-step tutorial that guides you through the process of creating a template, see:
+
+> [!div class="nextstepaction"]
+> [Tutorial: Create and deploy your first ARM template](../../azure-resource-manager/templates/template-tutorial-create-first-template.md)
postgresql Quickstart Create Postgresql Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-azure-powershell.md
+
+ Title: 'Quickstart: Create server - Azure PowerShell - Azure Database for PostgreSQL - Single Server'
+description: Quickstart guide to create an Azure Database for PostgreSQL - Single Server using Azure PowerShell.
+++++
+ms.devlang: azurepowershell
+ Last updated : 06/08/2020++
+# Quickstart: Create an Azure Database for PostgreSQL - Single Server using PowerShell
+
+This quickstart describes how to use PowerShell to create an Azure Database for PostgreSQL server in an
+Azure resource group. You can use PowerShell to create and manage Azure resources interactively or
+in scripts.
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
+before you begin.
+
+If you choose to use PowerShell locally, this article requires that you install the Az PowerShell
+module and connect to your Azure account using the
+[Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount)
+cmdlet. For more information about installing the Az PowerShell module, see
+[Install Azure PowerShell](/powershell/azure/install-az-ps).
+
+> [!IMPORTANT]
+> While the Az.PostgreSql PowerShell module is in preview, you must install it separately from the Az
+> PowerShell module using the following command: `Install-Module -Name Az.PostgreSql -AllowPrerelease`.
+> Once the Az.PostgreSql PowerShell module is generally available, it becomes part of future Az
+> PowerShell module releases and available natively from within Azure Cloud Shell.
+
+If this is your first time using the Azure Database for PostgreSQL service, you must register the
+**Microsoft.DBforPostgreSQL** resource provider.
+
+```azurepowershell-interactive
+Register-AzResourceProvider -ProviderNamespace Microsoft.DBforPostgreSQL
+```
++
+If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources
+should be billed. Select a specific subscription ID using the
+[Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
+
+```azurepowershell-interactive
+Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
+```
+
+## Create a resource group
+
+Create an
+[Azure resource group](../../azure-resource-manager/management/overview.md)
+using the
+[New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup)
+cmdlet. A resource group is a logical container in which Azure resources are deployed and managed as
+a group.
+
+The following example creates a resource group named **myresourcegroup** in the **West US** region.
+
+```azurepowershell-interactive
+New-AzResourceGroup -Name myresourcegroup -Location westus
+```
+
+## Create an Azure Database for PostgreSQL server
+
+Create an Azure Database for PostgreSQL server with the `New-AzPostgreSqlServer` cmdlet. A server
+can manage multiple databases. Typically, a separate database is used for each project or for each
+user.
+
+The following table contains a list of commonly used parameters and sample values for the
+`New-AzPostgreSqlServer` cmdlet.
+
+| **Setting** | **Sample value** | **Description** |
+| -- | - | - |
+| Name | mydemoserver | Choose a globally unique name in Azure that identifies your Azure Database for PostgreSQL server. The server name can only contain letters, numbers, and the hyphen (-) character. Any uppercase characters that are specified are automatically converted to lowercase during the creation process. It must contain from 3 to 63 characters. |
+| ResourceGroupName | myresourcegroup | Provide the name of the Azure resource group. |
+| Sku | GP_Gen5_2 | The name of the SKU. Follows the convention **pricing-tier\_compute-generation\_vCores** in shorthand. For more information about the Sku parameter, see the information following this table. |
+| BackupRetentionDay | 7 | How long a backup should be retained. Unit is days. Range is 7-35. |
+| GeoRedundantBackup | Enabled | Whether geo-redundant backups should be enabled for this server or not. This value cannot be enabled for servers in the basic pricing tier and it cannot be changed after the server is created. Allowed values: Enabled, Disabled. |
+| Location | westus | The Azure region for the server. |
+| SslEnforcement | Enabled | Whether SSL should be enabled or not for this server. Allowed values: Enabled, Disabled. |
+| StorageInMb | 51200 | The storage capacity of the server (unit is megabytes). Valid StorageInMb is a minimum of 5120 MB and increases in 1024 MB increments. For more information about storage size limits, see [Azure Database for PostgreSQL pricing tiers](./concepts-pricing-tiers.md). |
+| Version | 9.6 | The PostgreSQL major version. |
+| AdministratorUserName | myadmin | The username for the administrator login. It cannot be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**. |
+| AdministratorLoginPassword | `<securestring>` | The password of the administrator user in the form of a secure string. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters. |
+
+The **Sku** parameter value follows the convention **pricing-tier\_compute-generation\_vCores** as
+shown in the following examples.
+
+- `-Sku B_Gen5_1` maps to Basic, Gen 5, and 1 vCore. This option is the smallest SKU available.
+- `-Sku GP_Gen5_32` maps to General Purpose, Gen 5, and 32 vCores.
+- `-Sku MO_Gen5_2` maps to Memory Optimized, Gen 5, and 2 vCores.
+
+For information about valid **Sku** values by region and for tiers, see
+[Azure Database for PostgreSQL pricing tiers](./concepts-pricing-tiers.md).
+
+The following example creates a PostgreSQL server in the **West US** region named **mydemoserver**
+in the **myresourcegroup** resource group with a server admin login of **myadmin**. It is a Gen 5
+server in the general-purpose pricing tier with 2 vCores and geo-redundant backups enabled. Document
+the password used in the first line of the example as this is the password for the PostgreSQL server
+admin account.
+
+> [!TIP]
+> A server name maps to a DNS name and must be globally unique in Azure.
+
+```azurepowershell-interactive
+$Password = Read-Host -Prompt 'Please enter your password' -AsSecureString
+New-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -Sku GP_Gen5_2 -GeoRedundantBackup Enabled -Location westus -AdministratorUsername myadmin -AdministratorLoginPassword $Password
+```
+
+Consider using the basic pricing tier if light compute and I/O are adequate for your workload.
+
+> [!IMPORTANT]
+> Servers created in the basic pricing tier cannot later be scaled to the general-purpose or
+> memory-optimized tiers and cannot be geo-replicated.
+
+## Configure a firewall rule
+
+Create an Azure Database for PostgreSQL server-level firewall rule using the
+`New-AzPostgreSqlFirewallRule` cmdlet. A server-level firewall rule allows an external application,
+such as the `psql` command-line tool or PostgreSQL Workbench to connect to your server through the
+Azure Database for PostgreSQL service firewall.
+
+The following example creates a firewall rule named **AllowMyIP** that allows connections from a
+specific IP address, 192.168.0.1. Substitute an IP address or range of IP addresses that correspond
+to the location that you are connecting from.
+
+```azurepowershell-interactive
+New-AzPostgreSqlFirewallRule -Name AllowMyIP -ResourceGroupName myresourcegroup -ServerName mydemoserver -StartIPAddress 192.168.0.1 -EndIPAddress 192.168.0.1
+```
+
+> [!NOTE]
+> Connections to Azure Database for PostgreSQL communicate over port 5432. If you try to connect from
+> within a corporate network, outbound traffic over port 5432 might not be allowed. In this
+> scenario, you can only connect to the server if your IT department opens port 5432.
+
+## Get the connection information
+
+To connect to your server, you need to provide host information and access credentials. Use the
+following example to determine the connection information. Make a note of the values for
+**FullyQualifiedDomainName** and **AdministratorLogin**.
+
+```azurepowershell-interactive
+Get-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
+ Select-Object -Property FullyQualifiedDomainName, AdministratorLogin
+```
+
+```Output
+FullyQualifiedDomainName                 AdministratorLogin
+------------------------                 ------------------
+mydemoserver.postgres.database.azure.com myadmin
+```
+
+## Connect to PostgreSQL database using psql
+
+If your client computer has PostgreSQL installed, you can use a local instance of
+[psql](https://www.postgresql.org/docs/current/static/app-psql.html) to connect to an Azure
+PostgreSQL server. You can also access a pre-installed version of the `psql` command-line tool in
+Azure Cloud Shell by selecting the **Try It** button on a code sample in this article. Other ways to
+access Azure Cloud Shell are to select the **>_** button on the upper-right toolbar in the Azure
+portal or by visiting [shell.azure.com](https://shell.azure.com/).
+
+1. Connect to your Azure PostgreSQL server using the `psql` command-line utility.
+
+ ```azurepowershell-interactive
+ psql --host=<servername> --port=<port> --username=<user@servername> --dbname=<dbname>
+ ```
+
+ For example, the following command connects to the default database called **postgres** on your
+ PostgreSQL server `mydemoserver.postgres.database.azure.com` using access credentials. Enter
+ the `<server_admin_password>` you chose when prompted for password.
+
+ ```azurepowershell-interactive
+ psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=myadmin@mydemoserver --dbname=postgres
+ ```
+
+ > [!TIP]
+ > If you prefer to use a URL path to connect to Postgres, URL encode the @ sign in the username
+   > with `%40`. For example, the connection string for psql would be,
+ > `psql postgresql://myadmin%40mydemoserver@mydemoserver.postgres.database.azure.com:5432/postgres`
+
+1. Once you are connected to the server, create a blank database at the prompt.
+
+ ```sql
+ CREATE DATABASE mypgsqldb;
+ ```
+
+1. At the prompt, execute the following command to switch connection to the newly created database **mypgsqldb**:
+
+ ```sql
+ \c mypgsqldb
+ ```
+
+## Connect to the PostgreSQL Server using pgAdmin
+
+pgAdmin is an open-source tool used with PostgreSQL. You can install pgAdmin from the
+[pgAdmin website](https://www.pgadmin.org/). The pgAdmin version you're using may be different from
+what is used in this Quickstart. Read the pgAdmin documentation if you need additional guidance.
+
+1. Open the pgAdmin application on your client computer.
+
+1. From the toolbar go to **Object**, hover over **Create**, and select **Server**.
+
+1. In the **Create - Server** dialog box, on the **General** tab, enter a unique friendly name for
+ the server, such as **mydemoserver**.
+
+ :::image type="content" source="./media/quickstart-create-postgresql-server-database-using-azure-powershell/9-pgadmin-create-server.png" alt-text="The General tab":::
+
+1. In the **Create - Server** dialog box, on the **Connection** tab, fill in the settings table.
+
+ :::image type="content" source="./media/quickstart-create-postgresql-server-database-using-azure-powershell/10-pgadmin-create-server.png" alt-text="The Connection tab":::
+
+ pgAdmin parameter |Value|Description
+    ---|---|---
+ Host name/address | Server name | The server name value that you used when you created the Azure Database for PostgreSQL server earlier. Our example server is **mydemoserver.postgres.database.azure.com.** Use the fully qualified domain name (**\*.postgres.database.azure.com**) as shown in the example. If you don't remember your server name, follow the steps in the previous section to get the connection information.
+ Port | 5432 | The port to use when you connect to the Azure Database for PostgreSQL server.
+ Maintenance database | *postgres* | The default system-generated database name.
+ Username | Server admin login name | The server admin login username that you supplied when you created the Azure Database for PostgreSQL server earlier. If you don't remember the username, follow the steps in the previous section to get the connection information. The format is *username\@servername*.
+ Password | Your admin password | The password you chose when you created the server earlier in this Quickstart.
+ Role | Leave blank | There's no need to provide a role name at this point. Leave the field blank.
+ SSL mode | *Require* | You can set the TLS/SSL mode in pgAdmin's SSL tab. By default, all Azure Database for PostgreSQL servers are created with TLS enforcing turned on. To turn off TLS enforcing, see [Configure Enforcement of TLS](./concepts-ssl-connection-security.md#configure-enforcement-of-tls).
+
+1. Select **Save**.
+
+1. In the **Browser** pane on the left, expand the **Servers** node. Select your server, for
+ example, **mydemoserver**. Click to connect to it.
+
+1. Expand the server node, and then expand **Databases** under it. The list should include your
+ existing *postgres* database and any other databases you've created. You can create multiple
+ databases per server with Azure Database for PostgreSQL.
+
+1. Right-click **Databases**, choose the **Create** menu, and then select **Database**.
+
+1. Type a database name of your choice in the **Database** field, such as **mypgsqldb2**.
+
+1. Select the **Owner** for the database from the list box. Choose your server admin login name,
+   such as the example, **myadmin**.
+
+ :::image type="content" source="./media/quickstart-create-postgresql-server-database-using-azure-powershell/11-pgadmin-database.png" alt-text="Create a database in pgAdmin":::
+
+1. Select **Save** to create a new blank database.
+
+1. In the **Browser** pane, you can see the database that you created in the list of databases under
+ your server name.
+
+## Clean up resources
+
+If the resources created in this quickstart aren't needed for another quickstart or tutorial, you
+can delete them by running the following example.
+
+> [!CAUTION]
+> The following example deletes the specified resource group and all resources contained within it.
+> If resources outside the scope of this quickstart exist in the specified resource group, they will
+> also be deleted.
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name myresourcegroup
+```
+
+To delete only the server created in this quickstart without deleting the resource group, use the
+`Remove-AzPostgreSqlServer` cmdlet.
+
+```azurepowershell-interactive
+Remove-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Design an Azure Database for PostgreSQL using PowerShell](tutorial-design-database-using-powershell.md)
postgresql Quickstart Create Postgresql Server Database Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-bicep.md
+
+ Title: 'Quickstart: Create an Azure DB for PostgreSQL - Bicep'
+description: In this quickstart, learn how to create an Azure Database for PostgreSQL single server using Bicep.
++++++ Last updated : 04/29/2022++
+# Quickstart: Use Bicep to create an Azure Database for PostgreSQL - single server
+
+Azure Database for PostgreSQL is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. In this quickstart, you use Bicep to create an Azure Database for PostgreSQL - single server in Azure CLI or PowerShell.
++
+## Prerequisites
+
+You'll need an Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+
+# [CLI](#tab/CLI)
+
+* If you want to run the code locally, [Azure CLI](/cli/azure/).
+
+# [PowerShell](#tab/PowerShell)
+
+* If you want to run the code locally, [Azure PowerShell](/powershell/azure/).
+++
+## Review the Bicep file
+
+You create an Azure Database for PostgreSQL server with a configured set of compute and storage resources. To learn more, see [Pricing tiers in Azure Database for PostgreSQL - Single Server](concepts-pricing-tiers.md). You create the server within an [Azure resource group](../../azure-resource-manager/management/overview.md).
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/managed-postgresql-with-vnet/).
++
+The Bicep file defines five Azure resources:
+
+* [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
+* [**Microsoft.Network/virtualNetworks/subnets**](/azure/templates/microsoft.network/virtualnetworks/subnets)
+* [**Microsoft.DBforPostgreSQL/servers**](/azure/templates/microsoft.dbforpostgresql/servers)
+* [**Microsoft.DBforPostgreSQL/servers/virtualNetworkRules**](/azure/templates/microsoft.dbforpostgresql/servers/virtualnetworkrules)
+* [**Microsoft.DBforPostgreSQL/servers/firewallRules**](/azure/templates/microsoft.dbforpostgresql/servers/firewallrules)
+
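+The Bicep source itself isn't reproduced here. As a rough sketch of its shape, the server resource might look like the following; the SKU, version, and storage values are assumptions, and the actual quickstart file also defines the virtual network, subnet, and rule resources listed above.
+
+```bicep
+param serverName string
+param administratorLogin string
+@secure()
+param administratorLoginPassword string
+param location string = resourceGroup().location
+
+// A single-server resource; the values shown are illustrative defaults.
+resource server 'Microsoft.DBforPostgreSQL/servers@2017-12-01' = {
+  name: serverName
+  location: location
+  sku: {
+    name: 'GP_Gen5_2'
+    tier: 'GeneralPurpose'
+    family: 'Gen5'
+    capacity: 2
+  }
+  properties: {
+    createMode: 'Default'
+    version: '11'
+    administratorLogin: administratorLogin
+    administratorLoginPassword: administratorLoginPassword
+    storageProfile: {
+      storageMB: 51200
+      backupRetentionDays: 7
+      geoRedundantBackup: 'Disabled'
+    }
+  }
+}
+```
+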
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters serverName=<server-name> administratorLogin=<admin-login>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -serverName "<server-name>" -administratorLogin "<admin-login>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<server-name\>** with the name of the server for Azure database for PostgreSQL. Replace **\<admin-login\>** with the database administrator name, which has a minimum length of one character. You'll also be prompted to enter **administratorLoginPassword**, which has a minimum length of eight characters.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When it's no longer needed, delete the resource group, which deletes the resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+For a step-by-step tutorial that guides you through the process of creating a Bicep file, see:
+
+> [!div class="nextstepaction"]
+> [Quickstart: Create Bicep files with Visual Studio Code](../../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md)
postgresql Quickstart Create Server Database Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-database-azure-cli.md
+
+ Title: 'Quickstart: Create server - Azure CLI - Azure Database for PostgreSQL - single server'
+description: In this quickstart guide, you'll create an Azure Database for PostgreSQL server by using the Azure CLI.
+++++
+ms.devlang: azurecli
+ Last updated : 01/26/2022 ++
+# Quickstart: Create an Azure Database for PostgreSQL server by using the Azure CLI
+
+This quickstart shows how to use [Azure CLI](/cli/azure/get-started-with-azure-cli) commands in [Azure Cloud Shell](https://shell.azure.com) to create a single Azure Database for PostgreSQL server in five minutes.
+
+> [!TIP]
+> Consider using the simpler [az postgres up](/cli/azure/postgres#az-postgres-up) Azure CLI command. Try out the [quickstart](./quickstart-create-server-up-azure-cli.md).
++++
+## Set parameter values
+
+The following values are used in subsequent commands to create the database and required resources. Server names need to be globally unique across all of Azure, so the Bash `$RANDOM` variable is used to create the server name.
+
+Change the location as appropriate for your environment. Replace `0.0.0.0` with the IP address range to match your specific environment. Use the public IP address of the computer you're using to restrict access to the server to only your IP address.
++
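+The original variable definitions aren't shown here; the following is a sketch with hypothetical names that the later commands in this quickstart reference:
+
+```azurecli
+# Hypothetical parameter values; adjust for your environment.
+let "randomIdentifier=$RANDOM*$RANDOM"
+location="eastus"
+resourceGroup="msdocs-postgresql-rg-$randomIdentifier"
+server="msdocs-postgresql-server-$randomIdentifier"
+login="azureuser"
+password="Secure-Passw0rd-$randomIdentifier"
+# Replace 0.0.0.0 with your client IP address range to restrict access.
+startIp=0.0.0.0
+endIp=0.0.0.0
+```
+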
+## Create a resource group
+
+Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *myResourceGroup* in the *eastus* location:
++
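+A sketch of the command, using the parameter values defined earlier:
+
+```azurecli
+az group create --name $resourceGroup --location $location
+```
+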
+## Create a server
+
+Create a server with the [az postgres server create](/cli/azure/postgres/server#az-postgres-server-create) command.
++
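+The command might look like the following, using the variables defined earlier (these are standard `az postgres server create` flags):
+
+```azurecli
+az postgres server create \
+    --resource-group $resourceGroup \
+    --name $server \
+    --location $location \
+    --admin-user $login \
+    --admin-password $password \
+    --sku-name GP_Gen5_2
+```
+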
+> [!NOTE]
+>
+>- The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain 3 to 63 characters. For more information, see [Azure Database for PostgreSQL Naming Rules](../../azure-resource-manager/management/resource-name-rules.md#microsoftdbforpostgresql).
+>- The user name for the admin user can't be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**.
+>- The password must contain 8 to 128 characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters.
+>- For information about SKUs, see [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
+
+>[!IMPORTANT]
+>
+>- The default PostgreSQL version on your server is 9.6. To see all the versions supported, see [Supported PostgreSQL major versions](./concepts-supported-versions.md).
+>- SSL is enabled by default on your server. For more information on SSL, see [Configure SSL connectivity](./concepts-ssl-connection-security.md).
+
+## Configure a server-based firewall rule
+
+Create a firewall rule with the [az postgres server firewall-rule create](/cli/azure/mysql/server/firewall-rule) command to give your local environment access to connect to the server.
++
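+For example, with the variables defined earlier:
+
+```azurecli
+az postgres server firewall-rule create \
+    --resource-group $resourceGroup \
+    --server-name $server \
+    --name AllowMyIP \
+    --start-ip-address $startIp \
+    --end-ip-address $endIp
+```
+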
+> [!TIP]
+> If you don't know your IP address, go to [WhatIsMyIPAddress.com](https://whatismyipaddress.com/) to get it.
+
+> [!NOTE]
+> To avoid connectivity issues, make sure your network's firewall allows port 5432. Azure Database for PostgreSQL servers use that port.
+
+## List server-based firewall rules
+
+To list the existing server firewall rules, run the [az postgres server firewall-rule list](/cli/azure/postgres/server/firewall-rule) command.
++
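+For example:
+
+```azurecli
+az postgres server firewall-rule list --resource-group $resourceGroup --server-name $server
+```
+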
+The output lists the firewall rules, if any, in JSON format by default. You can use the `--output table` switch for a more readable table format.
+
+## Get the connection information
+
+To connect to your server, provide host information and access credentials.
+
+```azurecli
+az postgres server show --resource-group $resourceGroup --name $server
+```
+
+Make a note of the **administratorLogin** and **fullyQualifiedDomainName** values.
+
+## Connect to the Azure Database for PostgreSQL server by using psql
+
+The [psql](https://www.postgresql.org/docs/current/static/app-psql.html) client is a popular choice for connecting to PostgreSQL servers. You can connect to your server by using `psql` with [Azure Cloud Shell](../../cloud-shell/overview.md). You can also use `psql` on your local environment if you have it available. An empty database, **postgres**, is automatically created with a new PostgreSQL server. You can use that database to connect with `psql`, as shown in the following code.
+
+```bash
+psql --host=<server_name>.postgres.database.azure.com --port=5432 --username=<admin_user>@<server_name> --dbname=postgres
+```
+
+> [!TIP]
+> If you prefer to use a URL path to connect to Postgres, URL encode the @ sign in the username with `%40`. For example, the connection string for psql would be:
+>
+> ```bash
+> psql postgresql://<admin_user>%40<server_name>@<server_name>.postgres.database.azure.com:5432/postgres
+> ```
+
+## Clean up resources
+
+Unless you have an ongoing need for these resources, remove the resource group and all resources associated with it by using the [az group delete](/cli/azure/vm/extension#az-vm-extension-set) command. Some of these resources may take a while to create, as well as to delete.
+
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Design your first Azure Database for PostgreSQL using the Azure CLI](tutorial-design-database-using-azure-cli.md)
postgresql Quickstart Create Server Database Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-database-portal.md
+
+ Title: 'Quickstart: Create server - Azure portal - Azure Database for PostgreSQL - single server'
+description: In this quickstart guide, you'll create and manage an Azure Database for PostgreSQL server by using the Azure portal.
++++++ Last updated : 10/18/2020++
+# Quickstart: Create an Azure Database for PostgreSQL server by using the Azure portal
+
+Azure Database for PostgreSQL is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. This quickstart shows you how to create a single Azure Database for PostgreSQL server and connect to it.
+
+## Prerequisites
+An Azure subscription is required. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+
+## Create an Azure Database for PostgreSQL server
+Go to the [Azure portal](https://portal.azure.com/) to create an Azure Database for PostgreSQL Single Server database. Search for and select *Azure Database for PostgreSQL servers*.
+
+>[!div class="mx-imgBorder"]
+> :::image type="content" source="./media/quickstart-create-database-portal/search-postgres.png" alt-text="Find Azure Database for PostgreSQL.":::
+
+1. Select **Add**.
+
+2. On the **Create an Azure Database for PostgreSQL** page, select **Single server**.
+
+ >[!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/quickstart-create-database-portal/select-single-server.png" alt-text="Select single server":::
+
+3. Fill out the **Basics** form with the following information.
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/quickstart-create-database-portal/create-basics.png" alt-text="Screenshot that shows the Basics tab for creating a single server.":::
+
+ |Setting|Suggested value|Description|
+    |:--|:--|:--|
+ |Subscription|your subscription name|select the desired Azure Subscription.|
+ |Resource group|*myresourcegroup*| A new or an existing resource group from your subscription.|
+    |Server name |*mydemoserver*|A unique name that identifies your Azure Database for PostgreSQL server. The domain name *postgres.database.azure.com* is appended to the server name that you provide. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain 3 to 63 characters.|
+ |Data source | None | Select **None** to create a new server from scratch. Select **Backup** only if you were restoring from a geo-backup of an existing server.|
+ |Admin username |*myadmin*| Enter your server admin username. It can't start with **pg_** and these values are not allowed: **azure_superuser**, **azure_pg_admin**, **admin**, **administrator**, **root**, **guest**, or **public**.|
+ |Password |your password| A new password for the server admin user. It must contain 8 to 128 characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (for example, !, $, #, %).|
+ |Location|your desired location| Select a location from the dropdown list.|
+ |Version|The latest major version| The latest PostgreSQL major version, unless you have specific requirements otherwise.|
+ |Compute + storage | *use the defaults*| The default pricing tier is **General Purpose** with **4 vCores** and **100 GB** storage. Backup retention is set to **7 days** with **Geographically Redundant** backup option.<br/>Learn about the [pricing](https://azure.microsoft.com/pricing/details/postgresql/server/) and update the defaults if needed.|
++
+ > [!NOTE]
+ > Consider using the Basic pricing tier if light compute and I/O are adequate for your workload. Note that servers created in the Basic pricing tier can't later be scaled to General Purpose or Memory Optimized.
+
+4. Select **Review + create** to review your selections. Select **Create** to provision the server. This operation might take a few minutes.
+ > [!NOTE]
+ > An empty database, **postgres**, is created. You'll also find an **azure_maintenance** database that's used to separate the managed service processes from user actions. You can't access the **azure_maintenance** database.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/quickstart-create-database-portal/deployment-success.png" alt-text="success deployment.":::
+
+[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
+
+## Configure a firewall rule
+By default, the server that you create is not publicly accessible. You need to grant access to your IP address. Go to your server resource in the Azure portal and select **Connection security** from the left-side menu for your server resource. If you're not sure how to find your resource, see [Open resources](../../azure-resource-manager/management/manage-resources-portal.md#open-resources).
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/quickstart-create-database-portal/add-current-ip-firewall.png" alt-text="Screenshot that shows firewall rules for connection security.":::
+
+Select **Add current client IP address**, and then select **Save**. You can add more IP addresses or provide an IP range to connect to your server from those IP addresses. For more information, see [Firewall rules in Azure Database for PostgreSQL](./concepts-firewall-rules.md).
+
+> [!NOTE]
+> To avoid connectivity issues, check if your network allows outbound traffic over port 5432. Azure Database for PostgreSQL uses that port.
+
+[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
+
+## Connect to the server with psql
+
+You can use [psql](http://postgresguide.com/utilities/psql.html) or [pgAdmin](https://www.pgadmin.org/docs/pgadmin4/latest/connecting.html), which are popular PostgreSQL clients. For this quickstart, we'll connect by using psql in [Azure Cloud Shell](../../cloud-shell/overview.md) within the Azure portal.
+
+1. Make a note of your server name, server admin login name, password, and subscription ID for your newly created server from the **Overview** section of your server.
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/quickstart-create-database-portal/overview-new.png" alt-text="get connection information.":::
++
+2. Open Azure Cloud Shell in the portal by selecting the icon on the upper-left side.
+
+ > [!NOTE]
+ > If you're opening Cloud Shell for the first time, you'll see a prompt to create a resource group and a storage account. This is a one-time step and will be automatically attached for all sessions.
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="media/quickstart-create-database-portal/use-in-cloud-shell.png" alt-text="Screenshot that shows server information and the icon for opening Azure Cloud Shell.":::
+
+3. Run the following command in the Azure Cloud Shell terminal. Replace values with your actual server name and admin user login name. Use the empty database **postgres** with admin user in this format: `<admin-username>@<servername>`.
+
+ ```azurecli-interactive
+ psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=myadmin@mydemoserver --dbname=postgres
+ ```
+
+ Here's how the experience looks in the Cloud Shell terminal:
+
+ ```bash
+ Requesting a Cloud Shell.Succeeded.
+ Connecting terminal...
+
+ Welcome to Azure Cloud Shell
+
+ Type "az" to use Azure CLI
+ Type "help" to learn about Cloud Shell
+
+ user@Azure:~$psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=myadmin@mydemoserver --dbname=postgres
+ Password for user myadmin@mydemoserver.postgres.database.azure.com:
+ psql (12.2 (Ubuntu 12.2-2.pgdg16.04+1), server 11.6)
+ SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
+ Type "help" for help.
+
+ postgres=>
+ ```
+4. In the same Azure Cloud Shell terminal, create a database called **guest**.
+
+ ```bash
+ postgres=> CREATE DATABASE guest;
+ ```
+
+5. Switch connections to the newly created **guest** database.
+
+ ```bash
+ \c guest
+ ```
+6. Type `\q`, and then select the Enter key to close psql.
+
+[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
+
+## Clean up resources
+You've successfully created an Azure Database for PostgreSQL server in a resource group. If you don't expect to need these resources in the future, you can delete them by deleting either the resource group or the PostgreSQL server.
+
+To delete the resource group:
+
+1. In the Azure portal, search for and select **Resource groups**.
+2. In the resource group list, choose the name of your resource group.
+3. On the **Overview** page of your resource group, select **Delete resource group**.
+4. In the confirmation dialog box, enter the name of your resource group, and then select **Delete**.
+
+To delete the server, select the **Delete** button on the **Overview** page of your server:
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="media/quickstart-create-database-portal/12-delete.png" alt-text="Screenshot that shows the button for deleting a server.":::
+
+[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Migrate your database using export and import](./how-to-migrate-using-export-and-import.md) <br/>
+
+> [!div class="nextstepaction"]
+> [Design a database](./tutorial-design-database-using-azure-portal.md#create-tables-in-the-database)
+
+[Cannot find what you are looking for? Let us know.](https://aka.ms/postgres-doc-feedback)
postgresql Quickstart Create Server Up Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-up-azure-cli.md
+
+ Title: 'Quickstart: Create server - az postgres up - Azure Database for PostgreSQL - Single Server'
+description: Quickstart guide to create Azure Database for PostgreSQL - Single Server using Azure CLI (command-line interface) up command.
+++++
+ms.devlang: azurecli
+ Last updated : 01/25/2022+
+# Quickstart: Use the az postgres up command to create an Azure Database for PostgreSQL - Single Server
+
+Azure Database for PostgreSQL is a managed service that enables you to run, manage, and scale highly available PostgreSQL databases in the cloud. The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart shows you how to use the [az postgres up](/cli/azure/postgres#az-postgres-up) command to create an Azure Database for PostgreSQL server using the Azure CLI. In addition to creating the server, the `az postgres up` command creates a sample database and a root user in the database, opens the firewall for Azure services, and creates default firewall rules for the client computer. These defaults help to expedite the development process.
++
+## Create an Azure Database for PostgreSQL server
+++
+Install the [db-up](/cli/azure/ext/db-up/mysql) extension. If an error is returned, ensure you have installed the latest version of the Azure CLI. See [Install Azure CLI](/cli/azure/install-azure-cli).
+
+```azurecli
+az extension add --name db-up
+```
+
+Create an Azure Database for PostgreSQL server using the following command:
+
+```azurecli
+az postgres up
+```
+
+The server is created with the following default values (unless you manually override them):
+
+**Setting** | **Default value** | **Description**
+---|---|---
+server-name | System generated | A unique name that identifies your Azure Database for PostgreSQL server.
+resource-group | System generated | A new Azure resource group.
+sku-name | GP_Gen5_2 | The name of the sku. Follows the convention {pricing tier}\_{compute generation}\_{vCores} in shorthand. The default is a General Purpose Gen5 server with 2 vCores. See our [pricing page](https://azure.microsoft.com/pricing/details/postgresql/) for more information about the tiers.
+backup-retention | 7 | How long a backup is retained. Unit is days.
+geo-redundant-backup | Disabled | Whether geo-redundant backups should be enabled for this server or not.
+location | westus2 | The Azure location for the server.
+ssl-enforcement | Disabled | Whether TLS/SSL should be enabled or not for this server.
+storage-size | 5120 | The storage capacity of the server (unit is megabytes).
+version | 10 | The PostgreSQL major version.
+admin-user | System generated | The username for the administrator.
+admin-password | System generated | The password of the administrator user.
+
+> [!NOTE]
+> For more information about the `az postgres up` command and its additional parameters, see the [Azure CLI documentation](/cli/azure/postgres#az-postgres-up).
+
+Once your server is created, it comes with the following settings:
+
+- A firewall rule called "devbox" is created. The Azure CLI attempts to detect the IP address of the machine the `az postgres up` command is run from and allows that IP address.
+- "Allow access to Azure services" is set to ON. This setting configures the server's firewall to accept connections from all Azure resources, including resources not in your subscription.
+- An empty database named "sampledb" is created.
+- A new user named "root" with privileges to "sampledb" is created.
+
+> [!NOTE]
+> Azure Database for PostgreSQL communicates over port 5432. When connecting from within a corporate network, outbound traffic over port 5432 may not be allowed by your network's firewall. Have your IT department open port 5432 to connect to your server.
+
+## Get the connection information
+
+After the `az postgres up` command completes, it returns a list of connection strings for popular programming languages. These connection strings are preconfigured with the specific attributes of your newly created Azure Database for PostgreSQL server.
+
+You can use the [az postgres show-connection-string](/cli/azure/postgres#az-postgres-show-connection-string) command to list these connection strings again.
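+
+For example, a sketch that retrieves the connection strings for the server and database created above (server and database names here are assumptions; verify parameter names with `az postgres show-connection-string --help`):
+
+```azurecli
+az postgres show-connection-string --server-name mydemoserver --database-name sampledb --admin-user myadmin --admin-password '<server_admin_password>'
+```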
+
+## Clean up resources
+
+Clean up all resources you created in the quickstart using the following command. This command deletes the Azure Database for PostgreSQL server and the resource group.
+
+```azurecli
+az postgres down --delete-group
+```
+
+If you just want to delete the newly created server, you can run the [az postgres down](/cli/azure/postgres#az-postgres-down) command.
+
+```azurecli
+az postgres down
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Migrate your database using Export and Import](./how-to-migrate-using-export-and-import.md)
postgresql Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/sample-scripts-azure-cli.md
+
+ Title: Azure CLI samples - Azure Database for PostgreSQL - Single Server | Microsoft Docs
+description: This article lists several Azure CLI code samples available for interacting with Azure Database for PostgreSQL - Single Server.
+++++
+ms.devlang: azurecli
+ Last updated : 09/17/2021+
+# Azure CLI samples for Azure Database for PostgreSQL - Single Server
+
+The following table includes links to sample Azure CLI scripts for Azure Database for PostgreSQL.
+
+| Sample link | Description |
+| --- | --- |
+|**Create a server**||
+| [Create a server and firewall rule](../scripts/sample-create-server-and-firewall-rule.md) | Azure CLI script that creates an Azure Database for PostgreSQL server and configures a server-level firewall rule. |
+| **Create server with vNet rules**||
+| [Create a server with vNet rules](../scripts/sample-create-server-with-vnet-rule.md) | Azure CLI script that creates an Azure Database for PostgreSQL server with a service endpoint on a virtual network and configures a vNet rule. |
+|**Scale a server**||
+| [Scale a server](../scripts/sample-scale-server-up-or-down.md) | Azure CLI script that scales an Azure Database for PostgreSQL server up or down to allow for changing performance needs. |
+|**Change server configurations**||
+| [Change server configurations](../scripts/sample-change-server-configuration.md) | Azure CLI script that changes configuration options of an Azure Database for PostgreSQL server. |
+|**Restore a server**||
+| [Restore a server](../scripts/sample-point-in-time-restore.md) | Azure CLI script that restores an Azure Database for PostgreSQL server to a previous point in time. |
+|**Download server logs**||
+| [Enable and download server logs](../scripts/sample-server-logs.md) | Azure CLI script that enables and downloads server logs of an Azure Database for PostgreSQL server. |
+
postgresql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/security-controls-policy.md
+
+ Title: Azure Policy Regulatory Compliance controls for Azure Database for PostgreSQL
+description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for PostgreSQL. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
++++++ Last updated : 03/10/2022++
+# Azure Policy Regulatory Compliance controls for Azure Database for PostgreSQL
+
+[Regulatory Compliance in Azure Policy](../../governance/policy/concepts/regulatory-compliance.md)
+provides Microsoft created and managed initiative definitions, known as _built-ins_, for the
+**compliance domains** and **security controls** related to different compliance standards. This
+page lists the **compliance domains** and **security controls** for Azure Database for PostgreSQL.
+You can assign the built-ins for a **security control** individually to help make your Azure
+resources compliant with the specific standard.
+++
+## Next steps
+
+- Learn more about [Azure Policy Regulatory Compliance](../../governance/policy/concepts/regulatory-compliance.md).
+- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
postgresql Tutorial Design Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/tutorial-design-database-using-azure-cli.md
+
+ Title: 'Tutorial: Design an Azure Database for PostgreSQL - Single Server - Azure CLI'
+description: This tutorial shows how to create, configure, and query your first Azure Database for PostgreSQL - Single Server using Azure CLI.
+++++
+ms.devlang: azurecli
+ Last updated : 01/26/2022 ++
+# Tutorial: Design an Azure Database for PostgreSQL - Single Server using Azure CLI
+
+In this tutorial, you use Azure CLI (command-line interface) and other utilities to learn how to:
+> [!div class="checklist"]
+>
+> * Create an Azure Database for PostgreSQL server
+> * Configure the server firewall
+> * Use the [**psql**](https://www.postgresql.org/docs/9.6/static/app-psql.html) utility to create a database
+> * Load sample data
+> * Query data
+> * Update data
+> * Restore data
++++
+## Set parameter values
+
+The following values are used in subsequent commands to create the database and required resources. Server names need to be globally unique across all of Azure, so the Bash `$RANDOM` variable is used to generate a unique server name.
+
+Change the location as appropriate for your environment. Replace `0.0.0.0` with the IP address range to match your specific environment. Use the public IP address of the computer you're using to restrict access to the server to only your IP address.
++
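+A minimal sketch of this step follows. The variable names `$resourceGroup` and `$server` match the commands used later in this tutorial; the remaining names are illustrative assumptions, so adjust them for your environment:
+
+```azurecli
+# Variable block; $RANDOM keeps the server name globally unique.
+let "randomIdentifier=$RANDOM*$RANDOM"
+location="eastus"
+resourceGroup="myResourceGroup"
+server="mydemoserver-$randomIdentifier"
+login="azureuser"
+password="P@ssw0rd-$randomIdentifier"
+startIp="0.0.0.0"
+endIp="0.0.0.0"
+```
+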
+## Create a resource group
+
+Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *myResourceGroup* in the *eastus* location:
++
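+A sketch of the command, using the variables defined in the parameter values above:
+
+```azurecli
+az group create --name $resourceGroup --location $location
+```
+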
+## Create a server
+
+Create a server with the [az postgres server create](/cli/azure/postgres/server#az-postgres-server-create) command.
++
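+A sketch of the command, using the variables defined earlier. The SKU shown here (a General Purpose Gen 5 server with 2 vCores) is an assumption; pick one that fits your workload, and see the notes that follow for naming and password rules:
+
+```azurecli
+az postgres server create --name $server --resource-group $resourceGroup --location $location --admin-user $login --admin-password $password --sku-name GP_Gen5_2
+```
+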
+> [!NOTE]
+>
+> * The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain 3 to 63 characters. For more information, see [Azure Database for PostgreSQL Naming Rules](../../azure-resource-manager/management/resource-name-rules.md#microsoftdbforpostgresql).
+> * The user name for the admin user can't be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**.
+> * The password must contain 8 to 128 characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters.
+> * For information about SKUs, see [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
+
+>[!IMPORTANT]
+>
+> * The default PostgreSQL version on your server is 9.6. To see all the versions supported, see [Supported PostgreSQL major versions](./concepts-supported-versions.md).
+> * SSL is enabled by default on your server. For more information on SSL, see [Configure SSL connectivity](./concepts-ssl-connection-security.md).
+
+## Configure a server-based firewall rule
+
+Create a firewall rule with the [az postgres server firewall-rule create](/cli/azure/postgres/server/firewall-rule) command to give your local environment access to connect to the server.
++
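+A sketch of the command, opening the IP range defined in the parameter values above:
+
+```azurecli
+az postgres server firewall-rule create --resource-group $resourceGroup --server-name $server --name AllowMyClientIP --start-ip-address $startIp --end-ip-address $endIp
+```
+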
+> [!TIP]
+> If you don't know your IP address, go to [WhatIsMyIPAddress.com](https://whatismyipaddress.com/) to get it.
+
+> [!NOTE]
+> To avoid connectivity issues, make sure your network's firewall allows port 5432. Azure Database for PostgreSQL servers use that port.
+
+## List server-based firewall rules
+
+To list the existing server firewall rules, run the [az postgres server firewall-rule list](/cli/azure/postgres/server/firewall-rule) command.
++
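+For example:
+
+```azurecli
+az postgres server firewall-rule list --resource-group $resourceGroup --server-name $server
+```
+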
+By default, the output lists the firewall rules, if any, in JSON format. You can add the `--output table` switch for a more readable table format.
+
+## Get the connection information
+
+To connect to your server, provide host information and access credentials.
+
+```azurecli
+az postgres server show --resource-group $resourceGroup --name $server
+```
+
+Make a note of the **administratorLogin** and **fullyQualifiedDomainName** values.
+
+## Connect to the Azure Database for PostgreSQL server by using psql
+
+The [psql](https://www.postgresql.org/docs/current/static/app-psql.html) client is a popular choice for connecting to PostgreSQL servers. You can connect to your server by using `psql` with [Azure Cloud Shell](../../cloud-shell/overview.md). You can also use `psql` on your local environment if you have it available. An empty database, **postgres**, is automatically created with a new PostgreSQL server. You can use that database to connect with `psql`, as shown in the following code.
+
+```bash
+psql --host=<server_name>.postgres.database.azure.com --port=5432 --username=<admin_user>@<server_name> --dbname=postgres
+```
+
+> [!TIP]
+> If you prefer to use a URL path to connect to Postgres, URL encode the @ sign in the username with `%40`. For example, the connection string for psql would be:
+>
+> ```bash
+> psql postgresql://<admin_user>%40<server_name>@<server_name>.postgres.database.azure.com:5432/postgres
+> ```
+
+## Create a blank database
+
+1. Once you are connected to the server, create a blank database at the prompt:
+
+ ```sql
+ CREATE DATABASE mypgsqldb;
+ ```
+
+1. At the prompt, execute the following command to switch connection to the newly created database **mypgsqldb**:
+
+ ```sql
+ \c mypgsqldb
+ ```
+
+## Create tables in the database
+
+Now that you know how to connect to the Azure Database for PostgreSQL, you can complete some basic tasks:
+
+First, create a table and load it with some data. For example, create a table that tracks inventory information:
+
+```sql
+CREATE TABLE inventory (
+ id serial PRIMARY KEY,
+ name VARCHAR(50),
+ quantity INTEGER
+);
+```
+
+You can see the newly created table in the list of tables now by typing:
+
+```sql
+\dt
+```
+
+## Load data into the table
+
+Now that you have a table, insert some data into it. At the prompt, run the following query to insert some rows of data:
+
+```sql
+INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150);
+INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
+```
+
+You have now added two rows of sample data to the table you created earlier.
+
+## Query and update the data in the tables
+
+Execute the following query to retrieve information from the inventory table:
+
+```sql
+SELECT * FROM inventory;
+```
+
+You can also update the data in the inventory table:
+
+```sql
+UPDATE inventory SET quantity = 200 WHERE name = 'banana';
+```
+
+You can see the updated values when you retrieve the data:
+
+```sql
+SELECT * FROM inventory;
+```
+
+## Restore a database to a previous point in time
+
+Imagine you accidentally deleted a table. This is something you can't easily recover from. Azure Database for PostgreSQL allows you to go back to any point in time for which your server has backups (determined by the backup retention period you configured) and restore that point in time to a new server. You can use this new server to recover your deleted data.
+
+The following command restores the sample server to a point before the table was added:
+
+```azurecli-interactive
+az postgres server restore --resource-group myresourcegroup --name mydemoserver-restored --restore-point-in-time 2017-04-13T13:59:00Z --source-server mydemoserver
+```
+
+The `az postgres server restore` command needs the following parameters:
+
+| Setting | Suggested value | Description  |
+| --- | --- | --- |
+| resource-group |  myresourcegroup |  The resource group in which the source server exists.  |
+| name | mydemoserver-restored | The name of the new server that is created by the restore command. |
+| restore-point-in-time | 2017-04-13T13:59:00Z | Select a point-in-time to restore to. This date and time must be within the source server's backup retention period. Use ISO8601 date and time format. For example, you may use your own local timezone, such as `2017-04-13T05:59:00-08:00`, or use UTC Zulu format `2017-04-13T13:59:00Z`. |
+| source-server | mydemoserver | The name or ID of the source server to restore from. |
+
+Restoring a server to a point in time creates a new server that's a copy of the original server as of the point in time you specify. The location and pricing tier values for the restored server are the same as the source server.
+
+The command is synchronous, and returns after the server is restored. Once the restore finishes, locate the new server that was created. Verify the data was restored as expected.
+
+## Clean up resources
+
+Unless you have an ongoing need for these resources, remove the resource group and all resources associated with it using the [az group delete](/cli/azure/group#az-group-delete) command. Some of these resources may take a while to create, as well as to delete.
+
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Next steps
+
+In this tutorial, you learned how to use Azure CLI (command-line interface) and other utilities to:
+> [!div class="checklist"]
+>
+> * Create an Azure Database for PostgreSQL server
+> * Configure the server firewall
+> * Use the **psql** utility to create a database
+> * Load sample data
+> * Query data
+> * Update data
+> * Restore data
+
+> [!div class="nextstepaction"]
+> [Design your first Azure Database for PostgreSQL using the Azure portal](tutorial-design-database-using-azure-portal.md)
postgresql Tutorial Design Database Using Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/tutorial-design-database-using-azure-portal.md
+
+ Title: 'Tutorial: Design an Azure Database for PostgreSQL - Single Server - Azure portal'
+description: This tutorial shows how to design your first Azure Database for PostgreSQL - Single Server using the Azure portal.
++++++ Last updated : 06/25/2019+
+# Tutorial: Design an Azure Database for PostgreSQL - Single Server using the Azure portal
+
+Azure Database for PostgreSQL is a managed service that enables you to run, manage, and scale highly available PostgreSQL databases in the cloud. Using the Azure portal, you can easily manage your server and design a database.
+
+In this tutorial, you use the Azure portal to learn how to:
+> [!div class="checklist"]
+> * Create an Azure Database for PostgreSQL server
+> * Configure the server firewall
+> * Use the [**psql**](https://www.postgresql.org/docs/9.6/static/app-psql.html) utility to create a database
+> * Load sample data
+> * Query data
+> * Update data
+> * Restore data
+
+## Prerequisites
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+
+## Create an Azure Database for PostgreSQL
+
+An Azure Database for PostgreSQL server is created with a defined set of [compute and storage resources](./concepts-pricing-tiers.md). The server is created within an [Azure resource group](../../azure-resource-manager/management/overview.md).
+
+Follow these steps to create an Azure Database for PostgreSQL server:
+1. Click **Create a resource** in the upper left-hand corner of the Azure portal.
+2. Select **Databases** from the **New** page, and select **Azure Database for PostgreSQL** from the **Databases** page.
+ :::image type="content" source="./media/tutorial-design-database-using-azure-portal/1-create-database.png" alt-text="Azure Database for PostgreSQL - Create the database":::
+
+3. Select the **Single server** deployment option.
+
+ :::image type="content" source="./media/tutorial-design-database-using-azure-portal/select-deployment-option.png" alt-text="Select Azure Database for PostgreSQL - Single server deployment option":::
+
+4. Fill out the **Basics** form with the following information:
+
+ :::image type="content" source="./media/tutorial-design-database-using-azure-portal/create-basics.png" alt-text="Create a server":::
+
+ Setting|Suggested Value|Description
+ ---|---|---
+ Subscription|Your subscription name|The Azure subscription that you want to use for your server. If you have multiple subscriptions, choose the subscription in which you're billed for the resource.
+ Resource group|*myresourcegroup*| A new resource group name or an existing one from your subscription.
+ Server name |*mydemoserver*|A unique name that identifies your Azure Database for PostgreSQL server. The domain name *postgres.database.azure.com* is appended to the server name you provide. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain 3 to 63 characters.
+ Data source | *None* | Select *None* to create a new server from scratch. (You would select *Backup* if you were creating a server from a geo-backup of an existing Azure Database for PostgreSQL server).
+ Admin username |*myadmin*| Your own login account to use when you connect to the server. The admin login name can't be **azure_superuser**, **azure_pg_admin**, **admin**, **administrator**, **root**, **guest**, or **public**. It can't start with **pg_**.
+ Password |Your password| A new password for the server admin account. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, etc.).
+ Location|The region closest to your users| The location that is closest to your users.
+ Version|The latest major version| The latest PostgreSQL major version, unless you have specific requirements otherwise.
+ Compute + storage | **General Purpose**, **Gen 5**, **2 vCores**, **5 GB**, **7 days**, **Geographically Redundant** | The compute, storage, and backup configurations for your new server. Select **Configure server**. Next, select the **General Purpose** tab. *Gen 5*, *4 vCores*, *100 GB*, and *7 days* are the default values for **Compute Generation**, **vCore**, **Storage**, and **Backup Retention Period**. You can leave those sliders as is or adjust them. To enable your server backups in geo-redundant storage select **Geographically Redundant** from the **Backup Redundancy Options**. To save this pricing tier selection, select **OK**. The next screenshot captures these selections.
+
+ > [!NOTE]
+ > Consider using the Basic pricing tier if light compute and I/O are adequate for your workload. Note that servers created in the Basic pricing tier cannot later be scaled to General Purpose or Memory Optimized. See the [pricing page](https://azure.microsoft.com/pricing/details/postgresql/) for more information.
+ >
+
+ :::image type="content" source="./media/quickstart-create-database-portal/2-pricing-tier.png" alt-text="The Pricing tier pane":::
+
+ > [!TIP]
+ > With **auto-growth** enabled your server increases storage when you are approaching the allocated limit, without impacting your workload.
+
+5. Select **Review + create** to review your selections. Select **Create** to provision the server. This operation may take a few minutes.
+
+6. On the toolbar, select the **Notifications** icon (a bell) to monitor the deployment process. Once the deployment is done, you can select **Pin to dashboard**, which creates a tile for this server on your Azure portal dashboard as a shortcut to the server's **Overview** page. Selecting **Go to resource** opens the server's **Overview** page.
+
+ :::image type="content" source="./media/quickstart-create-database-portal/3-notifications.png" alt-text="The Notifications pane":::
+
+ By default, a **postgres** database is created under your server. The [postgres](https://www.postgresql.org/docs/9.6/static/app-initdb.html) database is a default database that's meant for use by users, utilities, and third-party applications. (The other default database is **azure_maintenance**. Its function is to separate the managed service processes from user actions. You cannot access this database.)
++
+## Configure a server-level firewall rule
+
+The Azure Database for PostgreSQL service uses a firewall at the server level. By default, this firewall prevents all external applications and tools from connecting to the server and any databases on the server, unless a firewall rule is created to open the firewall for a specific IP address range.
+
+1. After the deployment completes, click **All Resources** from the left-hand menu and type in the name **mydemoserver** to search for your newly created server. Click the server name listed in the search result. The **Overview** page for your server opens and provides options for further configuration.
+
+ :::image type="content" source="./media/tutorial-design-database-using-azure-portal/4-locate.png" alt-text="Azure Database for PostgreSQL - Search for server":::
+
+2. In the server page, select **Connection security**.
+
+3. Click in the text box under **Rule Name**, and add a new firewall rule to specify the IP range for connectivity. Enter your IP range.
+
+ :::image type="content" source="./media/tutorial-design-database-using-azure-portal/5-firewall-2.png" alt-text="Azure Database for PostgreSQL - Create Firewall Rule":::
+
+4. Click **Save** and then click the **X** to close the **Connection security** page.
+
+ > [!NOTE]
+ > Azure Database for PostgreSQL communicates over port 5432. If you are trying to connect from within a corporate network, outbound traffic over port 5432 may not be allowed by your network's firewall. If so, you can't connect to your Azure Database for PostgreSQL server unless your IT department opens port 5432.
+ >
+
+## Get the connection information
+
+When you created the Azure Database for PostgreSQL server, the default **postgres** database was also created. To connect to your database server, you need to provide host information and access credentials.
+
+1. From the left-hand menu in the Azure portal, click **All resources** and search for the server you just created.
+
+ :::image type="content" source="./media/tutorial-design-database-using-azure-portal/4-locate.png" alt-text="Azure Database for PostgreSQL - Search for server":::
+
+2. Click the server name **mydemoserver**.
+
+3. Select the server's **Overview** page. Make a note of the **Server name** and **Server admin login name**.
+
+ :::image type="content" source="./media/tutorial-design-database-using-azure-portal/6-server-name.png" alt-text="Azure Database for PostgreSQL - Server Admin Login":::
++
+## Connect to PostgreSQL database using psql
+If your client computer has PostgreSQL installed, you can use a local instance of [psql](https://www.postgresql.org/docs/9.6/static/app-psql.html), or Azure Cloud Shell, to connect to an Azure PostgreSQL server. Let's now use the psql command-line utility to connect to the Azure Database for PostgreSQL server.
+
+1. Run the following psql command to connect to an Azure Database for PostgreSQL database:
+ ```
+ psql --host=<servername> --port=<port> --username=<user@servername> --dbname=<dbname>
+ ```
+
+ For example, the following command connects to the default database called **postgres** on your PostgreSQL server **mydemoserver.postgres.database.azure.com** using access credentials. Enter the `<server_admin_password>` you chose when prompted for the password.
+
+ ```
+ psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=myadmin@mydemoserver --dbname=postgres
+ ```
+
+ > [!TIP]
+ > If you prefer to use a URL path to connect to Postgres, URL encode the @ sign in the username with `%40`. For example, the connection string for psql would be:
+ > ```
+ > psql postgresql://myadmin%40mydemoserver@mydemoserver.postgres.database.azure.com:5432/postgres
+ > ```
+
+2. Once you are connected to the server, create a blank database at the prompt:
+ ```sql
+ CREATE DATABASE mypgsqldb;
+ ```
+
+3. At the prompt, execute the following command to switch connection to the newly created database **mypgsqldb**:
+ ```sql
+ \c mypgsqldb
+ ```
+
+## Create tables in the database
+Now that you know how to connect to the Azure Database for PostgreSQL, you can complete some basic tasks:
+
+First, create a table and load it with some data. Let's create a table that tracks inventory information using this SQL code:
+```sql
+CREATE TABLE inventory (
+ id serial PRIMARY KEY,
+ name VARCHAR(50),
+ quantity INTEGER
+);
+```
+
+You can see the newly created table in the list of tables now by typing:
+```sql
+\dt
+```
+
+## Load data into the tables
+Now that you have a table, insert some data into it. At the open command prompt window, run the following query to insert some rows of data.
+```sql
+INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150);
+INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
+```
+
+You now have two rows of sample data in the inventory table you created earlier.
+
+## Query and update the data in the tables
+Execute the following query to retrieve information from the inventory database table.
+```sql
+SELECT * FROM inventory;
+```
+
+You can also update the data in the table.
+```sql
+UPDATE inventory SET quantity = 200 WHERE name = 'banana';
+```
+
+You can see the updated values when you retrieve the data.
+```sql
+SELECT * FROM inventory;
+```
+
+## Restore data to a previous point in time
+Imagine you accidentally deleted this table. This is something you can't easily recover from. Azure Database for PostgreSQL allows you to go back to any point in time for which your server has backups (determined by the backup retention period you configured) and restore that point in time to a new server. You can use this new server to recover your deleted data. The following steps restore the **mydemoserver** server to a point before the inventory table was added.
+
+1. On the Azure Database for PostgreSQL **Overview** page for your server, click **Restore** on the toolbar. The **Restore** page opens.
+
+ :::image type="content" source="./media/tutorial-design-database-using-azure-portal/9-azure-portal-restore.png" alt-text="Screenshot that shows the Azure Database for PostgreSQL **Overview** page for your server and highlights the Restore button.":::
+
+2. Fill out the **Restore** form with the required information:
+
+ :::image type="content" source="./media/tutorial-design-database-using-azure-portal/10-azure-portal-restore.png" alt-text="Azure portal - Restore form options":::
+
+ - **Restore point**: Select a point in time that occurs before the server was changed.
+ - **Target server**: Provide a new server name you want to restore to.
+ - **Location**: You can't select the region. By default, it's the same as the source server.
+ - **Pricing tier**: You can't change this value when restoring a server. It's the same as the source server.
+3. Click **OK** to [restore the server to a point-in-time](./how-to-restore-server-portal.md) before the table was deleted. Restoring a server to a different point in time creates a new server that duplicates the original server as of the point in time you specify, provided that it's within the retention period for your [pricing tier](./concepts-pricing-tiers.md).
+
+## Clean up resources
+
+In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group. Select *Delete resource group* on the *Overview* page for your resource group. When prompted, confirm the name of the resource group, and then select *Delete*.
+
+## Next steps
+In this tutorial, you learned how to use the Azure portal and other utilities to:
+> [!div class="checklist"]
+> * Create an Azure Database for PostgreSQL server
+> * Configure the server firewall
+> * Use the **psql** utility to create a database
+> * Load sample data
+> * Query data
+> * Update data
+> * Restore data
+
+> [!div class="nextstepaction"]
+>[Design your first Azure Database for PostgreSQL using Azure CLI](tutorial-design-database-using-azure-cli.md)
postgresql Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/tutorial-design-database-using-powershell.md
+
+ Title: 'Tutorial: Design an Azure Database for PostgreSQL - Single Server - Azure PowerShell'
+description: This tutorial shows how to create, configure, and query your first Azure Database for PostgreSQL - Single Server using Azure PowerShell.
+++++
+ms.devlang: azurepowershell
+ Last updated : 06/08/2020++
+# Tutorial: Design an Azure Database for PostgreSQL - Single Server using PowerShell
+
+Azure Database for PostgreSQL is a relational database service in the Microsoft cloud based on PostgreSQL
+Community Edition database engine. In this tutorial, you use PowerShell and other utilities to learn
+how to:
+
+> [!div class="checklist"]
+> - Create an Azure Database for PostgreSQL
+> - Configure the server firewall
+> - Use the [**psql**](https://www.postgresql.org/docs/9.6/static/app-psql.html) utility to create a database
+> - Load sample data
+> - Query data
+> - Update data
+> - Restore data
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
+before you begin.
+
+If you choose to use PowerShell locally, this article requires that you install the Az PowerShell
+module and connect to your Azure account using the
+[Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount)
+cmdlet. For more information about installing the Az PowerShell module, see
+[Install Azure PowerShell](/powershell/azure/install-az-ps).
+
+> [!IMPORTANT]
+> While the Az.PostgreSql PowerShell module is in preview, you must install it separately from the Az
+> PowerShell module using the following command: `Install-Module -Name Az.PostgreSql -AllowPrerelease`.
+> Once the Az.PostgreSql PowerShell module is generally available, it becomes part of future Az
+> PowerShell module releases and available natively from within Azure Cloud Shell.
+
+If this is your first time using the Azure Database for PostgreSQL service, you must register the
+**Microsoft.DBforPostgreSQL** resource provider.
+
+```azurepowershell-interactive
+Register-AzResourceProvider -ProviderNamespace Microsoft.DBforPostgreSQL
+```
++
+If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources
+should be billed. Select a specific subscription ID using the
+[Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
+
+```azurepowershell-interactive
+Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
+```
+
+## Create a resource group
+
+Create an
+[Azure resource group](../../azure-resource-manager/management/overview.md)
+using the
+[New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup)
+cmdlet. A resource group is a logical container in which Azure resources are deployed and managed as
+a group.
+
+The following example creates a resource group named **myresourcegroup** in the **West US** region.
+
+```azurepowershell-interactive
+New-AzResourceGroup -Name myresourcegroup -Location westus
+```
+
+## Create an Azure Database for PostgreSQL server
+
+Create an Azure Database for PostgreSQL server with the `New-AzPostgreSqlServer` cmdlet. A server can manage
+multiple databases. Typically, a separate database is used for each project or for each user.
+
+The following example creates a PostgreSQL server in the **West US** region named **mydemoserver** in the
+**myresourcegroup** resource group with a server admin login of **myadmin**. It is a Gen 5 server in
+the general-purpose pricing tier with 2 vCores and geo-redundant backups enabled. Document the
+password used in the first line of the example as this is the password for the PostgreSQL server admin
+account.
+
+> [!TIP]
+> A server name maps to a DNS name and must be globally unique in Azure.
+
+```azurepowershell-interactive
+$Password = Read-Host -Prompt 'Please enter your password' -AsSecureString
+New-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -Sku GP_Gen5_2 -GeoRedundantBackup Enabled -Location westus -AdministratorUsername myadmin -AdministratorLoginPassword $Password
+```
+
+The **Sku** parameter value follows the convention **pricing-tier\_compute-generation\_vCores** as
+shown in the following examples.
+
+- `-Sku B_Gen5_1` maps to Basic, Gen 5, and 1 vCore. This option is the smallest SKU available.
+- `-Sku GP_Gen5_32` maps to General Purpose, Gen 5, and 32 vCores.
+- `-Sku MO_Gen5_2` maps to Memory Optimized, Gen 5, and 2 vCores.
+
+For information about valid **Sku** values by region and for tiers, see
+[Azure Database for PostgreSQL pricing tiers](./concepts-pricing-tiers.md).
+
+Consider using the basic pricing tier if light compute and I/O are adequate for your workload.
+
+> [!IMPORTANT]
+> Servers created in the basic pricing tier can't later be scaled to general purpose or memory
+> optimized, and can't be geo-replicated.
+
+## Configure a firewall rule
+
+Create an Azure Database for PostgreSQL server-level firewall rule using the `New-AzPostgreSqlFirewallRule`
+cmdlet. A server-level firewall rule allows an external application, such as the `psql`
+command-line tool or pgAdmin, to connect to your server through the Azure Database for PostgreSQL
+service firewall.
+
+The following example creates a firewall rule named **AllowMyIP** that allows connections from a
+specific IP address, 192.168.0.1. Substitute an IP address or range of IP addresses that corresponds
+to the location that you're connecting from.
+
+```azurepowershell-interactive
+New-AzPostgreSqlFirewallRule -Name AllowMyIP -ResourceGroupName myresourcegroup -ServerName mydemoserver -StartIPAddress 192.168.0.1 -EndIPAddress 192.168.0.1
+```
+
+> [!NOTE]
+> Connections to Azure Database for PostgreSQL communicate over port 5432. If you try to connect from
+> within a corporate network, outbound traffic over port 5432 might not be allowed. In this
+> scenario, you can only connect to the server if your IT department opens port 5432.
+
+## Get the connection information
+
+To connect to your server, you need to provide host information and access credentials. Use the
+following example to determine the connection information. Make a note of the values for
+**FullyQualifiedDomainName** and **AdministratorLogin**.
+
+```azurepowershell-interactive
+Get-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
+ Select-Object -Property FullyQualifiedDomainName, AdministratorLogin
+```
+
+```Output
+FullyQualifiedDomainName                 AdministratorLogin
+------------------------                 ------------------
+mydemoserver.postgres.database.azure.com myadmin
+```
+
+## Connect to PostgreSQL database using psql
+
+If your client computer has PostgreSQL installed, you can use a local instance of
+[psql](https://www.postgresql.org/docs/current/static/app-psql.html) to connect to an Azure
+PostgreSQL server. You can also access a pre-installed version of the `psql` command-line tool in
+Azure Cloud Shell by selecting the **Try It** button on a code sample in this article. Other ways to
+access Azure Cloud Shell are to select the **>_** button on the upper-right toolbar in the Azure
+portal or by visiting [shell.azure.com](https://shell.azure.com/).
+
+1. Connect to your Azure PostgreSQL server using the `psql` command-line utility.
+
+ ```azurepowershell-interactive
+ psql --host=<servername> --port=<port> --username=<user@servername> --dbname=<dbname>
+ ```
+
+ For example, the following command connects to the default database called **postgres** on your
+ PostgreSQL server `mydemoserver.postgres.database.azure.com` using access credentials. Enter
+ the `<server_admin_password>` you chose when prompted for the password.
+
+ ```azurepowershell-interactive
+ psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=myadmin@mydemoserver --dbname=postgres
+ ```
+
+ > [!TIP]
+ > If you prefer to use a URL path to connect to Postgres, URL encode the @ sign in the username
+ > with `%40`. For example, the connection string for psql would be:
+ > `psql postgresql://myadmin%40mydemoserver@mydemoserver.postgres.database.azure.com:5432/postgres`
+
+1. Once you are connected to the server, create a blank database at the prompt.
+
+ ```sql
+ CREATE DATABASE mypgsqldb;
+ ```
+
+1. At the prompt, execute the following command to switch connection to the newly created database **mypgsqldb**:
+
+ ```sql
+ \c mypgsqldb
+ ```
+
+## Create tables in the database
+
+Now that you know how to connect to the Azure Database for PostgreSQL database, complete some basic
+tasks.
+
+First, create a table and load it with some data. Let's create a table that stores inventory
+information.
+
+```sql
+CREATE TABLE inventory (
+ id serial PRIMARY KEY,
+ name VARCHAR(50),
+ quantity INTEGER
+);
+```
+
+## Load data into the tables
+
+Now that you have a table, insert some data into it. At the open command prompt window, run the
+following query to insert some rows of data.
+
+```sql
+INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150);
+INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
+```
+
+Now you have two rows of sample data in the table you created earlier.
+
+## Query and update the data in the tables
+
+Execute the following query to retrieve information from the database table.
+
+```sql
+SELECT * FROM inventory;
+```
+
+You can also update the data in the tables.
+
+```sql
+UPDATE inventory SET quantity = 200 WHERE name = 'banana';
+```
+
+The row gets updated accordingly when you retrieve data.
+
+```sql
+SELECT * FROM inventory;
+```
+
+## Restore a database to a previous point in time
+
+You can restore the server to a previous point-in-time. The restored data is copied to a new server,
+and the existing server is left unchanged. For example, if a table is accidentally dropped, you can
+restore to the time just before the drop occurred. Then, you can retrieve the missing table and data from
+the restored copy of the server.
+
+To restore the server, use the `Restore-AzPostgreSqlServer` PowerShell cmdlet.
+
+### Run the restore command
+
+To restore the server, run the following example from PowerShell.
+
+```azurepowershell-interactive
+$restorePointInTime = (Get-Date).AddMinutes(-10)
+Get-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
+ Restore-AzPostgreSqlServer -Name mydemoserver-restored -ResourceGroupName myresourcegroup -RestorePointInTime $restorePointInTime -UsePointInTimeRestore
+```
+
+When you restore a server to an earlier point-in-time, a new server is created. The original server
+and its databases from the specified point-in-time are copied to the new server.
+
+The location and pricing tier values for the restored server remain the same as the original server.
+
+After the restore process finishes, locate the new server and verify that the data is restored as
+expected. The new server has the same server admin login name and password that was valid for the
+existing server at the time the restore was started. The password can be changed from the new
+server's **Overview** page.
+
+The new server created during a restore does not have the VNet service endpoints that existed on the
+original server. These rules must be set up separately for the new server. Firewall rules from the
+original server are restored.
+
+## Clean up resources
+
+In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group. Select *Delete resource group* on the *Overview* page for your resource group. When prompted, confirm the name of the resource group, and then select *Delete*.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to back up and restore an Azure Database for PostgreSQL server using PowerShell](how-to-restore-server-powershell.md)
postgresql Tutorial Monitor And Tune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/tutorial-monitor-and-tune.md
+
+ Title: 'Tutorial: Monitor and tune - Azure Database for PostgreSQL - Single Server'
+description: This tutorial walks through monitoring and tuning in Azure Database for PostgreSQL - Single Server.
+++++ Last updated : 5/6/2019++
+# Tutorial: Monitor and tune Azure Database for PostgreSQL - Single Server
+
+Azure Database for PostgreSQL has features that help you understand and improve your server performance. In this tutorial, you learn how to:
+> [!div class="checklist"]
+> * Enable query and wait statistics collection
+> * Access and utilize the data collected
+> * View query performance and wait statistics over time
+> * Analyze a database to get performance recommendations
+> * Apply performance recommendations
+
+## Prerequisites
+You need an Azure Database for PostgreSQL server with PostgreSQL version 9.6 or 10. You can follow the steps in the [Create tutorial](tutorial-design-database-using-azure-portal.md) to create a server.
+
+> [!IMPORTANT]
+> **Query Store**, **Query Performance Insight**, and **Performance Recommendations** are in Public Preview.
+
+## Enabling data collection
+The [Query Store](concepts-query-store.md) captures a history of queries and wait statistics on your server and stores it in the **azure_sys** database on your server. It is an opt-in feature. To enable it:
+
+1. Open the Azure portal.
+
+2. Select your Azure Database for PostgreSQL server.
+
+3. Select **Server parameters** in the **Settings** section of the menu on the left.
+
+4. Set **pg_qs.query_capture_mode** to **TOP** to start collecting query performance data. Set **pgms_wait_sampling.query_capture_mode** to **ALL** to start collecting wait statistics. Select **Save**. (A scripted alternative is sketched after these steps.)
+
+ :::image type="content" source="./media/tutorial-performance-intelligence/query-store-parameters.png" alt-text="Query Store server parameters":::
+
+5. Allow up to 20 minutes for the first batch of data to persist in the **azure_sys** database.
++
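+If you prefer to script this step rather than use the portal, the same server parameters can be set with the Azure CLI. A sketch, assuming a server named *mydemoserver* in the resource group *myresourcegroup*:
+
+```azurecli
+# Start collecting query performance data.
+az postgres server configuration set --resource-group myresourcegroup --server-name mydemoserver --name pg_qs.query_capture_mode --value TOP
+
+# Start collecting wait statistics.
+az postgres server configuration set --resource-group myresourcegroup --server-name mydemoserver --name pgms_wait_sampling.query_capture_mode --value ALL
+```
+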
+## Performance insights
+The [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal surfaces visualizations of key information from Query Store.
+
+1. In the portal page of your Azure Database for PostgreSQL server, select **Query Performance Insight** under the **Support + troubleshooting** section of the menu on the left.
+
+2. The **Long running queries** tab shows the top five queries by average duration per execution, aggregated in 15-minute intervals.
+
+ :::image type="content" source="./media/tutorial-performance-intelligence/query-performance-insight-landing-page.png" alt-text="Query Performance Insight landing page":::
+
+ You can view more queries by selecting from the **Number of Queries** drop-down. The chart colors may change for a specific Query ID when you do this.
+
+3. You can click and drag in the chart to narrow down to a specific time window.
+
+4. Use the zoom in and out icons to view a smaller or larger period of time, respectively.
+
+5. View the table below the chart to learn more details about the long-running queries in that time window.
+
+6. Select the **Wait Statistics** tab to view the corresponding visualizations on waits in the server.
+
+ :::image type="content" source="./media/tutorial-performance-intelligence/query-performance-insight-wait-statistics.png" alt-text="Query Performance Insight wait statistics":::
++
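+The data behind these visualizations can also be queried directly from the **azure_sys** database. A sketch, assuming the view names documented for Query Store (**query_store.qs_view** for query performance and **query_store.pgms_wait_sampling_view** for wait statistics):
+
+```sql
+-- Run these against the azure_sys database on your server.
+-- Recent query performance data captured by Query Store:
+SELECT * FROM query_store.qs_view LIMIT 10;
+
+-- Recent wait statistics:
+SELECT * FROM query_store.pgms_wait_sampling_view LIMIT 10;
+```
+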
+## Performance recommendations
+The [Performance Recommendations](concepts-performance-recommendations.md) feature analyzes workloads across your server to identify indexes with the potential to improve performance.
+
+1. Open **Performance Recommendations** from the **Support + troubleshooting** section of the menu bar on the Azure portal page for your PostgreSQL server.
+
+ :::image type="content" source="./media/tutorial-performance-intelligence/performance-recommendations-landing-page-1.png" alt-text="Performance Recommendations landing page":::
+
+2. Select **Analyze** and choose a database. This begins the analysis.
+
+3. Depending on your workload, the analysis may take several minutes to complete. Once it's done, a notification appears in the portal.
+
+4. The **Performance Recommendations** window will show a list of recommendations if any were found.
+
+5. Each recommendation shows information about the relevant **Database**, **Table**, **Column**, and **Index Size**.
+
+ :::image type="content" source="./media/tutorial-performance-intelligence/performance-recommendations-result.png" alt-text="Performance Recommendations result":::
+
+6. To implement the recommendation, copy the query text and run it from your client of choice.
+
+### Permissions
+**Owner** or **Contributor** permissions are required to run analysis using the Performance Recommendations feature.
+
+## Clean up resources
+
+In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group. Select *Delete resource group* on the *Overview* page for your resource group. When prompted, confirm the name of the resource group, and then select *Delete*.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for PostgreSQL.
postgresql Tutorial Design Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/tutorial-design-database-using-azure-cli.md
- Title: 'Tutorial: Design an Azure Database for PostgreSQL - Single Server - Azure CLI'
-description: This tutorial shows how to create, configure, and query your first Azure Database for PostgreSQL - Single Server using Azure CLI.
------ Previously updated : 01/26/2022 --
-# Tutorial: Design an Azure Database for PostgreSQL - Single Server using Azure CLI
-
-In this tutorial, you use Azure CLI (command-line interface) and other utilities to learn how to:
-> [!div class="checklist"]
->
-> * Create an Azure Database for PostgreSQL server
-> * Configure the server firewall
-> * Use [**psql**](https://www.postgresql.org/docs/9.6/static/app-psql.html) utility to create a database
-> * Load sample data
-> * Query data
-> * Update data
-> * Restore data
----
-## Set parameter values
-
-The following values are used in subsequent commands to create the database and required resources. Server names need to be globally unique across all of Azure so the $RANDOM function is used to create the server name.
-
-Change the location as appropriate for your environment. Replace `0.0.0.0` with the IP address range to match your specific environment. Use the public IP address of the computer you're using to restrict access to the server to only your IP address.
--
-## Create a resource group
-
-Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *myResourceGroup* in the *eastus* location:
--
-## Create a server
-
-Create a server with the [az postgres server create](/cli/azure/postgres/server#az-postgres-server-create) command.
--
-> [!NOTE]
->
-> * The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain 3 to 63 characters. For more information, see [Azure Database for PostgreSQL Naming Rules](../azure-resource-manager/management/resource-name-rules.md#microsoftdbforpostgresql).
-> * The user name for the admin user can't be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**.
-> * The password must contain 8 to 128 characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters.
-> * For information about SKUs, see [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
-
->[!IMPORTANT]
->
-> * The default PostgreSQL version on your server is 9.6. To see all the versions supported, see [Supported PostgreSQL major versions](./concepts-supported-versions.md).
-> * SSL is enabled by default on your server. For more information on SSL, see [Configure SSL connectivity](./concepts-ssl-connection-security.md).
-
-## Configure a server-based firewall rule
-
-Create a firewall rule with the [az postgres server firewall-rule create](./concepts-firewall-rules.md) command to give your local environment access to connect to the server.
--
-> [!TIP]
-> If you don't know your IP address, go to [WhatIsMyIPAddress.com](https://whatismyipaddress.com/) to get it.
-
-> [!NOTE]
-> To avoid connectivity issues, make sure your network's firewall allows port 5432. Azure Database for PostgreSQL servers use that port.
-
-## List server-based firewall rules
-
-To list the existing server firewall rules, run the [az postgres server firewall-rule list](/cli/azure/postgres/server/firewall-rule) command.
--
-The output lists the firewall rules, if any, by default in JSON format. You may use the switch `--output table` for a more readable table format as the output.
-
-## Get the connection information
-
-To connect to your server, provide host information and access credentials.
-
-```azurecli
-az postgres server show --resource-group $resourceGroup --name $server
-```
-
-Make a note of the **administratorLogin** and **fullyQualifiedDomainName** values.
-
-## Connect to the Azure Database for PostgreSQL server by using psql
-
-The [psql](https://www.postgresql.org/docs/current/static/app-psql.html) client is a popular choice for connecting to PostgreSQL servers. You can connect to your server by using `psql` with [Azure Cloud Shell](../cloud-shell/overview.md). You can also use `psql` on your local environment if you have it available. An empty database, **postgres**, is automatically created with a new PostgreSQL server. You can use that database to connect with `psql`, as shown in the following code.
-
-```bash
-psql --host=<server_name>.postgres.database.azure.com --port=5432 --username=<admin_user>@<server_name> --dbname=postgres
-```
-
-> [!TIP]
-> If you prefer to use a URL path to connect to Postgres, URL encode the @ sign in the username with `%40`. For example, the connection string for psql would be:
->
-> ```bash
-> psql postgresql://<admin_user>%40<server_name>@<server_name>.postgres.database.azure.com:5432/postgres
-> ```
-
-## Create a blank database
-
-1. Once you are connected to the server, create a blank database at the prompt:
-
- ```sql
- CREATE DATABASE mypgsqldb;
- ```
-
-1. At the prompt, execute the following command to switch connection to the newly created database **mypgsqldb**:
-
- ```sql
- \c mypgsqldb
- ```
-
-## Create tables in the database
-
-Now that you know how to connect to the Azure Database for PostgreSQL, you can complete some basic tasks:
-
-First, create a table and load it with some data. For example, create a table that tracks inventory information:
-
-```sql
-CREATE TABLE inventory (
- id serial PRIMARY KEY,
- name VARCHAR(50),
- quantity INTEGER
-);
-```
-
-You can see the newly created table in the list of tables now by typing:
-
-```sql
-\dt
-```
-
-## Load data into the table
-
-Now that there is a table created, insert some data into it. At the open command prompt window, run the following query to insert some rows of data:
-
-```sql
-INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150);
-INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
-```
-
-You have now added two rows of sample data into the table you created earlier.
-
-## Query and update the data in the tables
-
-Execute the following query to retrieve information from the inventory table:
-
-```sql
-SELECT * FROM inventory;
-```
-
-You can also update the data in the inventory table:
-
-```sql
-UPDATE inventory SET quantity = 200 WHERE name = 'banana';
-```
-
-You can see the updated values when you retrieve the data:
-
-```sql
-SELECT * FROM inventory;
-```
-
-## Restore a database to a previous point in time
-
-Imagine you have accidentally deleted a table. This is something you cannot easily recover from. Azure Database for PostgreSQL allows you to go back to any point-in-time for which your server has backups (determined by the backup retention period you configured) and restore this point-in-time to a new server. You can use this new server to recover your deleted data.
-
-The following command restores the sample server to a point before the table was added:
-
-```azurecli-interactive
-az postgres server restore --resource-group myresourcegroup --name mydemoserver-restored --restore-point-in-time 2017-04-13T13:59:00Z --source-server mydemoserver
-```
-
-The `az postgres server restore` command needs the following parameters:
-
-| Setting | Suggested value | Description  |
-| | | |
-| resource-group |  myresourcegroup |  The resource group in which the source server exists.  |
-| name | mydemoserver-restored | The name of the new server that is created by the restore command. |
-| restore-point-in-time | 2017-04-13T13:59:00Z | Select a point-in-time to restore to. This date and time must be within the source server's backup retention period. Use ISO8601 date and time format. For example, you may use your own local timezone, such as `2017-04-13T05:59:00-08:00`, or use UTC Zulu format `2017-04-13T13:59:00Z`. |
-| source-server | mydemoserver | The name or ID of the source server to restore from. |
-
-Restoring a server to a point-in-time creates a new server, copied as the original server as of the point in time you specify. The location and pricing tier values for the restored server are the same as the source server.
-
-The command is synchronous, and will return after the server is restored. Once the restore finishes, locate the new server that was created. Verify the data was restored as expected.
-
-## Clean up resources
-
-Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/vm/extension#az-vm-extension-set) command - unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
-
-```azurecli
-az group delete --name $resourceGroup
-```
-
-## Next steps
-
-In this tutorial, you learned how to use Azure CLI (command-line interface) and other utilities to:
-> [!div class="checklist"]
->
-> * Create an Azure Database for PostgreSQL server
-> * Configure the server firewall
-> * Use the **psql** utility to create a database
-> * Load sample data
-> * Query data
-> * Update data
-> * Restore data
-
-> [!div class="nextstepaction"]
-> [Design your first Azure Database for PostgreSQL using the Azure portal](tutorial-design-database-using-azure-portal.md)
postgresql Tutorial Design Database Using Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/tutorial-design-database-using-azure-portal.md
- Title: 'Tutorial: Design an Azure Database for PostgreSQL - Single Server - Azure portal'
-description: This tutorial shows how to Design your first Azure Database for PostgreSQL - Single Server using the Azure portal.
------ Previously updated : 06/25/2019-
-# Tutorial: Design an Azure Database for PostgreSQL - Single Server using the Azure portal
-
-Azure Database for PostgreSQL is a managed service that enables you to run, manage, and scale highly available PostgreSQL databases in the cloud. Using the Azure portal, you can easily manage your server and design a database.
-
-In this tutorial, you use the Azure portal to learn how to:
-> [!div class="checklist"]
-> * Create an Azure Database for PostgreSQL server
-> * Configure the server firewall
-> * Use [**psql**](https://www.postgresql.org/docs/9.6/static/app-psql.html) utility to create a database
-> * Load sample data
-> * Query data
-> * Update data
-> * Restore data
-
-## Prerequisites
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
-
-## Create an Azure Database for PostgreSQL
-
-An Azure Database for PostgreSQL server is created with a defined set of [compute and storage resources](./concepts-pricing-tiers.md). The server is created within an [Azure resource group](../azure-resource-manager/management/overview.md).
-
-Follow these steps to create an Azure Database for PostgreSQL server:
-1. Click **Create a resource** in the upper left-hand corner of the Azure portal.
-2. Select **Databases** from the **New** page, and select **Azure Database for PostgreSQL** from the **Databases** page.
- :::image type="content" source="./media/tutorial-design-database-using-azure-portal/1-create-database.png" alt-text="Azure Database for PostgreSQL - Create the database":::
-
-3. Select the **Single server** deployment option.
-
- :::image type="content" source="./media/tutorial-design-database-using-azure-portal/select-deployment-option.png" alt-text="Select Azure Database for PostgreSQL - Single server deployment option":::
-
-4. Fill out the **Basics** form with the following information:
-
- :::image type="content" source="./media/tutorial-design-database-using-azure-portal/create-basics.png" alt-text="Create a server":::
-
- Setting|Suggested Value|Description
- ---|---|---
- Subscription|Your subscription name|The Azure subscription that you want to use for your server. If you have multiple subscriptions, choose the subscription in which you're billed for the resource.
- Resource group|*myresourcegroup*| A new resource group name or an existing one from your subscription.
- Server name |*mydemoserver*|A unique name that identifies your Azure Database for PostgreSQL server. The domain name *postgres.database.azure.com* is appended to the server name you provide. The server can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain between 3 and 63 characters.
- Data source | *None* | Select *None* to create a new server from scratch. (You would select *Backup* if you were creating a server from a geo-backup of an existing Azure Database for PostgreSQL server).
- Admin username |*myadmin*| Your own login account to use when you connect to the server. The admin login name can't be **azure_superuser**, **azure_pg_admin**, **admin**, **administrator**, **root**, **guest**, or **public**. It can't start with **pg_**.
- Password |Your password| A new password for the server admin account. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, etc.).
- Location|The region closest to your users| The location that is closest to your users.
- Version|The latest major version| The latest PostgreSQL major version, unless you have specific requirements otherwise.
- Compute + storage | **General Purpose**, **Gen 5**, **2 vCores**, **5 GB**, **7 days**, **Geographically Redundant** | The compute, storage, and backup configurations for your new server. Select **Configure server**. Next, select the **General Purpose** tab. *Gen 5*, *4 vCores*, *100 GB*, and *7 days* are the default values for **Compute Generation**, **vCore**, **Storage**, and **Backup Retention Period**. You can leave those sliders as is or adjust them. To enable your server backups in geo-redundant storage select **Geographically Redundant** from the **Backup Redundancy Options**. To save this pricing tier selection, select **OK**. The next screenshot captures these selections.
-
- > [!NOTE]
- > Consider using the Basic pricing tier if light compute and I/O are adequate for your workload. Note that servers created in the Basic pricing tier cannot later be scaled to General Purpose or Memory Optimized. See the [pricing page](https://azure.microsoft.com/pricing/details/postgresql/) for more information.
- >
-
- :::image type="content" source="./media/quickstart-create-database-portal/2-pricing-tier.png" alt-text="The Pricing tier pane":::
-
- > [!TIP]
- > With **auto-growth** enabled, your server increases storage when you are approaching the allocated limit, without impacting your workload.
-
-5. Select **Review + create** to review your selections. Select **Create** to provision the server. This operation may take a few minutes.
-
-6. On the toolbar, select the **Notifications** icon (a bell) to monitor the deployment process. Once the deployment is done, you can select **Pin to dashboard**, which creates a tile for this server on your Azure portal dashboard as a shortcut to the server's **Overview** page. Selecting **Go to resource** opens the server's **Overview** page.
-
- :::image type="content" source="./media/quickstart-create-database-portal/3-notifications.png" alt-text="The Notifications pane":::
-
- By default, a **postgres** database is created under your server. The [postgres](https://www.postgresql.org/docs/9.6/static/app-initdb.html) database is a default database that's meant for use by users, utilities, and third-party applications. (The other default database is **azure_maintenance**. Its function is to separate the managed service processes from user actions. You cannot access this database.)
--
-## Configure a server-level firewall rule
-
-The Azure Database for PostgreSQL service uses a firewall at the server level. By default, this firewall prevents all external applications and tools from connecting to the server and any databases on the server unless a firewall rule is created to open the firewall for a specific IP address range.
-
-1. After the deployment completes, click **All Resources** from the left-hand menu and type in the name **mydemoserver** to search for your newly created server. Click the server name listed in the search result. The **Overview** page for your server opens and provides options for further configuration.
-
- :::image type="content" source="./media/tutorial-design-database-using-azure-portal/4-locate.png" alt-text="Azure Database for PostgreSQL - Search for server":::
-
-2. In the server page, select **Connection security**.
-
-3. Click in the text box under **Rule Name**, and add a new firewall rule to specify the IP range for connectivity. Enter your IP range. Click **Save**.
-
- :::image type="content" source="./media/tutorial-design-database-using-azure-portal/5-firewall-2.png" alt-text="Azure Database for PostgreSQL - Create Firewall Rule":::
-
-4. Click **Save** and then click the **X** to close the **Connection security** page.
-
- > [!NOTE]
- > The Azure Database for PostgreSQL server communicates over port 5432. If you are trying to connect from within a corporate network, outbound traffic over port 5432 may not be allowed by your network's firewall. If so, you cannot connect to your Azure Database for PostgreSQL server unless your IT department opens port 5432.
- >
-
-## Get the connection information
-
-When you created the Azure Database for PostgreSQL server, the default **postgres** database was also created. To connect to your database server, you need to provide host information and access credentials.
-
-1. From the left-hand menu in the Azure portal, click **All resources** and search for the server you just created.
-
- :::image type="content" source="./media/tutorial-design-database-using-azure-portal/4-locate.png" alt-text="Azure Database for PostgreSQL - Search for server":::
-
-2. Click the server name **mydemoserver**.
-
-3. Select the server's **Overview** page. Make a note of the **Server name** and **Server admin login name**.
-
- :::image type="content" source="./media/tutorial-design-database-using-azure-portal/6-server-name.png" alt-text="Azure Database for PostgreSQL - Server Admin Login":::
--
-## Connect to PostgreSQL database using psql
-If your client computer has PostgreSQL installed, you can use a local instance of [psql](https://www.postgresql.org/docs/9.6/static/app-psql.html), or Azure Cloud Shell, to connect to an Azure PostgreSQL server. Let's now use the psql command-line utility to connect to the Azure Database for PostgreSQL server.
-
-1. Run the following psql command to connect to an Azure Database for PostgreSQL database:
- ```
- psql --host=<servername> --port=<port> --username=<user@servername> --dbname=<dbname>
- ```
-
- For example, the following command connects to the default database called **postgres** on your PostgreSQL server **mydemoserver.postgres.database.azure.com** using access credentials. Enter the `<server_admin_password>` you chose when prompted for the password.
-
- ```
- psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=myadmin@mydemoserver --dbname=postgres
- ```
-
- > [!TIP]
- > If you prefer to use a URL path to connect to Postgres, URL encode the @ sign in the username with `%40`. For example, the connection string for psql would be:
- > ```
- > psql postgresql://myadmin%40mydemoserver@mydemoserver.postgres.database.azure.com:5432/postgres
- > ```
-
-2. Once you are connected to the server, create a blank database at the prompt:
- ```sql
- CREATE DATABASE mypgsqldb;
- ```
-
-3. At the prompt, execute the following command to switch connection to the newly created database **mypgsqldb**:
- ```sql
- \c mypgsqldb
- ```
-
-## Create tables in the database
-Now that you know how to connect to the Azure Database for PostgreSQL, you can complete some basic tasks:
-
-First, create a table and load it with some data. Let's create a table that tracks inventory information using this SQL code:
-```sql
-CREATE TABLE inventory (
- id serial PRIMARY KEY,
- name VARCHAR(50),
- quantity INTEGER
-);
-```
-
-You can now see the newly created table in the list of tables by typing:
-```sql
-\dt
-```
-
-## Load data into the tables
-Now that you have a table, insert some data into it. At the open command prompt window, run the following query to insert some rows of data.
-```sql
-INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150);
-INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
-```
-
-You now have two rows of sample data in the inventory table you created earlier.
-
-## Query and update the data in the tables
-Execute the following query to retrieve information from the inventory database table.
-```sql
-SELECT * FROM inventory;
-```
-
-You can also update the data in the table.
-```sql
-UPDATE inventory SET quantity = 200 WHERE name = 'banana';
-```
-
-You can see the updated values when you retrieve the data.
-```sql
-SELECT * FROM inventory;
-```
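-
-With the rows inserted above, the update changed the banana quantity to 200, so the second query returns output similar to the following (row order may vary):
-
-```
- id |  name  | quantity
-----+--------+----------
-  2 | orange |      154
-  1 | banana |      200
-```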
-
-## Restore data to a previous point in time
-Imagine you have accidentally deleted this table. This situation is something you cannot easily recover from. Azure Database for PostgreSQL allows you to go back to any point-in-time for which your server has backups (determined by the backup retention period you configured) and restore this point-in-time to a new server. You can use this new server to recover your deleted data. The following steps restore the **mydemoserver** server to a point before the inventory table was added.
-
-1. On the Azure Database for PostgreSQL **Overview** page for your server, click **Restore** on the toolbar. The **Restore** page opens.
-
- :::image type="content" source="./media/tutorial-design-database-using-azure-portal/9-azure-portal-restore.png" alt-text="Screenshot that shows the Azure Database for PostgreSQL **Overview** page for your server and highlights the Restore button.":::
-
-2. Fill out the **Restore** form with the required information:
-
- :::image type="content" source="./media/tutorial-design-database-using-azure-portal/10-azure-portal-restore.png" alt-text="Azure portal - Restore form options":::
-
- - **Restore point**: Select a point-in-time that occurs before the server was changed
- - **Target server**: Provide a new server name you want to restore to
- - **Location**: You cannot select the region. By default, it is the same as the source server.
- - **Pricing tier**: You cannot change this value when restoring a server. It is the same as the source server.
-3. Click **OK** to [restore the server to a point-in-time](./howto-restore-server-portal.md) before the table was deleted. Restoring a server to a different point in time creates a new server that duplicates the original server as of the point in time you specify, provided that it is within the retention period for your [pricing tier](./concepts-pricing-tiers.md).
-
-## Clean up resources
-
-In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group. Press the *Delete resource group* button in the *Overview* page for your resource group. When prompted, confirm the name of the resource group and click the final *Delete* button.
-
-## Next steps
-In this tutorial, you learned how to use the Azure portal and other utilities to:
-> [!div class="checklist"]
-> * Create an Azure Database for PostgreSQL server
-> * Configure the server firewall
-> * Use the **psql** utility to create a database
-> * Load sample data
-> * Query data
-> * Update data
-> * Restore data
-
-> [!div class="nextstepaction"]
-> [Design your first Azure Database for PostgreSQL using Azure CLI](tutorial-design-database-using-azure-cli.md)
postgresql Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/tutorial-design-database-using-powershell.md
- Title: 'Tutorial: Design an Azure Database for PostgreSQL - Single Server - Azure PowerShell'
-description: This tutorial shows how to create, configure, and query your first Azure Database for PostgreSQL - Single Server using Azure PowerShell.
-Previously updated: 06/08/2020
-# Tutorial: Design an Azure Database for PostgreSQL - Single Server using PowerShell
-
-Azure Database for PostgreSQL is a relational database service in the Microsoft cloud based on the PostgreSQL
-Community Edition database engine. In this tutorial, you use PowerShell and other utilities to learn
-how to:
-
-> [!div class="checklist"]
-> - Create an Azure Database for PostgreSQL
-> - Configure the server firewall
-> - Use the [**psql**](https://www.postgresql.org/docs/9.6/static/app-psql.html) utility to create a database
-> - Load sample data
-> - Query data
-> - Update data
-> - Restore data
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
-
-If you choose to use PowerShell locally, this article requires that you install the Az PowerShell
-module and connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount)
-cmdlet. For more information about installing the Az PowerShell module, see
-[Install Azure PowerShell](/powershell/azure/install-az-ps).
-
-> [!IMPORTANT]
-> While the Az.PostgreSql PowerShell module is in preview, you must install it separately from the Az
-> PowerShell module using the following command: `Install-Module -Name Az.PostgreSql -AllowPrerelease`.
-> Once the Az.PostgreSql PowerShell module is generally available, it becomes part of future Az
-> PowerShell module releases and available natively from within Azure Cloud Shell.
-
-If this is your first time using the Azure Database for PostgreSQL service, you must register the
-**Microsoft.DBforPostgreSQL** resource provider.
-
-```azurepowershell-interactive
-Register-AzResourceProvider -ProviderNamespace Microsoft.DBforPostgreSQL
-```
--
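-If you want to confirm the provider registration completed, you can list its state; an optional check using a cmdlet from the Az module:
-
-```azurepowershell-interactive
-Get-AzResourceProvider -ProviderNamespace Microsoft.DBforPostgreSQL |
-    Select-Object -Property ProviderNamespace, RegistrationState
-```
-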
-If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources
-should be billed. Select a specific subscription ID using the
-[Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
-
-```azurepowershell-interactive
-Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
-```
-
-## Create a resource group
-
-Create an
-[Azure resource group](../azure-resource-manager/management/overview.md)
-using the
-[New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup)
-cmdlet. A resource group is a logical container in which Azure resources are deployed and managed as
-a group.
-
-The following example creates a resource group named **myresourcegroup** in the **West US** region.
-
-```azurepowershell-interactive
-New-AzResourceGroup -Name myresourcegroup -Location westus
-```
-
-## Create an Azure Database for PostgreSQL server
-
-Create an Azure Database for PostgreSQL server with the `New-AzPostgreSqlServer` cmdlet. A server can manage
-multiple databases. Typically, a separate database is used for each project or for each user.
-
-The following example creates a PostgreSQL server in the **West US** region named **mydemoserver** in the
-**myresourcegroup** resource group with a server admin login of **myadmin**. It is a Gen 5 server in
-the general-purpose pricing tier with 2 vCores and geo-redundant backups enabled. Document the
-password used in the first line of the example as this is the password for the PostgreSQL server admin
-account.
-
-> [!TIP]
-> A server name maps to a DNS name and must be globally unique in Azure.
-
-```azurepowershell-interactive
-$Password = Read-Host -Prompt 'Please enter your password' -AsSecureString
-New-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -Sku GP_Gen5_2 -GeoRedundantBackup Enabled -Location westus -AdministratorUsername myadmin -AdministratorLoginPassword $Password
-```
-
-The **Sku** parameter value follows the convention **pricing-tier\_compute-generation\_vCores** as
-shown in the following examples.
-
-- `-Sku B_Gen5_1` maps to Basic, Gen 5, and 1 vCore. This option is the smallest SKU available.
-- `-Sku GP_Gen5_32` maps to General Purpose, Gen 5, and 32 vCores.
-- `-Sku MO_Gen5_2` maps to Memory Optimized, Gen 5, and 2 vCores.
-
-For information about valid **Sku** values by region and for tiers, see
-[Azure Database for PostgreSQL pricing tiers](./concepts-pricing-tiers.md).
-
-Consider using the basic pricing tier if light compute and I/O are adequate for your workload.
-
-> [!IMPORTANT]
-> Servers created in the basic pricing tier cannot later be scaled to general-purpose or memory-
-> optimized and cannot be geo-replicated.
-
-## Configure a firewall rule
-
-Create an Azure Database for PostgreSQL server-level firewall rule using the `New-AzPostgreSqlFirewallRule`
-cmdlet. A server-level firewall rule allows an external application, such as the `psql`
-command-line tool or pgAdmin, to connect to your server through the Azure Database for PostgreSQL
-service firewall.
-
-The following example creates a firewall rule named **AllowMyIP** that allows connections from a
-specific IP address, 192.168.0.1. Substitute an IP address or range of IP addresses that correspond
-to the location that you are connecting from.
-
-```azurepowershell-interactive
-New-AzPostgreSqlFirewallRule -Name AllowMyIP -ResourceGroupName myresourcegroup -ServerName mydemoserver -StartIPAddress 192.168.0.1 -EndIPAddress 192.168.0.1
-```
-
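-To verify the rule was created, you can list the server's firewall rules; an optional check:
-
-```azurepowershell-interactive
-Get-AzPostgreSqlFirewallRule -ResourceGroupName myresourcegroup -ServerName mydemoserver
-```
-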
-> [!NOTE]
-> Connections to Azure Database for PostgreSQL communicate over port 5432. If you try to connect from
-> within a corporate network, outbound traffic over port 5432 might not be allowed. In this
-> scenario, you can only connect to the server if your IT department opens port 5432.
-
-## Get the connection information
-
-To connect to your server, you need to provide host information and access credentials. Use the
-following example to determine the connection information. Make a note of the values for
-**FullyQualifiedDomainName** and **AdministratorLogin**.
-
-```azurepowershell-interactive
-Get-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
- Select-Object -Property FullyQualifiedDomainName, AdministratorLogin
-```
-
-```Output
-FullyQualifiedDomainName                   AdministratorLogin
-------------------------                   ------------------
-mydemoserver.postgres.database.azure.com   myadmin
-```
-
-## Connect to PostgreSQL database using psql
-
-If your client computer has PostgreSQL installed, you can use a local instance of
-[psql](https://www.postgresql.org/docs/current/static/app-psql.html) to connect to an Azure
-PostgreSQL server. You can also access a pre-installed version of the `psql` command-line tool in
-Azure Cloud Shell by selecting the **Try It** button on a code sample in this article. Other ways to
-access Azure Cloud Shell are to select the **>_** button on the upper-right toolbar in the Azure
-portal or by visiting [shell.azure.com](https://shell.azure.com/).
-
-1. Connect to your Azure PostgreSQL server using the `psql` command-line utility.
-
- ```azurepowershell-interactive
- psql --host=<servername> --port=<port> --username=<user@servername> --dbname=<dbname>
- ```
-
- For example, the following command connects to the default database called **postgres** on your
- PostgreSQL server `mydemoserver.postgres.database.azure.com` using access credentials. Enter
- the `<server_admin_password>` you chose when prompted for the password.
-
- ```azurepowershell-interactive
- psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=myadmin@mydemoserver --dbname=postgres
- ```
-
- > [!TIP]
- > If you prefer to use a URL path to connect to Postgres, URL encode the @ sign in the username
- > with `%40`. For example, the connection string for psql would be:
- > `psql postgresql://myadmin%40mydemoserver@mydemoserver.postgres.database.azure.com:5432/postgres`
-
-1. Once you are connected to the server, create a blank database at the prompt.
-
- ```sql
- CREATE DATABASE mypgsqldb;
- ```
-
-1. At the prompt, execute the following command to switch connection to the newly created database **mypgsqldb**:
-
- ```sql
- \c mypgsqldb
- ```
-
-## Create tables in the database
-
-Now that you know how to connect to the Azure Database for PostgreSQL database, complete some basic
-tasks.
-
-First, create a table and load it with some data. Let's create a table that stores inventory
-information.
-
-```sql
-CREATE TABLE inventory (
- id serial PRIMARY KEY,
- name VARCHAR(50),
- quantity INTEGER
-);
-```
-
-## Load data into the tables
-
-Now that you have a table, insert some data into it. At the open command prompt window, run the
-following query to insert some rows of data.
-
-```sql
-INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150);
-INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
-```
-
-Now you have two rows of sample data in the table you created earlier.
-
-## Query and update the data in the tables
-
-Execute the following query to retrieve information from the database table.
-
-```sql
-SELECT * FROM inventory;
-```
-
-You can also update the data in the tables.
-
-```sql
-UPDATE inventory SET quantity = 200 WHERE name = 'banana';
-```
-
-The row gets updated accordingly when you retrieve data.
-
-```sql
-SELECT * FROM inventory;
-```
-
-## Restore a database to a previous point in time
-
-You can restore the server to a previous point-in-time. The restored data is copied to a new server,
-and the existing server is left unchanged. For example, if a table is accidentally dropped, you can
-restore to the time just before the drop occurred. Then, you can retrieve the missing table and data from
-the restored copy of the server.
-
-To restore the server, use the `Restore-AzPostgreSqlServer` PowerShell cmdlet.
-
-### Run the restore command
-
-To restore the server, run the following example from PowerShell.
-
-```azurepowershell-interactive
-$restorePointInTime = (Get-Date).AddMinutes(-10)
-Get-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup |
- Restore-AzPostgreSqlServer -Name mydemoserver-restored -ResourceGroupName myresourcegroup -RestorePointInTime $restorePointInTime -UsePointInTimeRestore
-```
-
-When you restore a server to an earlier point-in-time, a new server is created. The original server
-and its databases from the specified point-in-time are copied to the new server.
-
-The location and pricing tier values for the restored server remain the same as the original server.
-
-After the restore process finishes, locate the new server and verify that the data is restored as
-expected. The new server has the same server admin login name and password that was valid for the
-existing server at the time the restore was started. The password can be changed from the new
-server's **Overview** page.
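-
-If you prefer to rotate the password from PowerShell rather than the portal, the following sketch uses the same module (the restored server name is the one chosen earlier):
-
-```azurepowershell-interactive
-$NewPassword = Read-Host -Prompt 'Please enter a new password' -AsSecureString
-Update-AzPostgreSqlServer -Name mydemoserver-restored -ResourceGroupName myresourcegroup -AdministratorLoginPassword $NewPassword
-```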
-
-The new server created during a restore does not have the VNet service endpoints that existed on the
-original server. These rules must be set up separately for the new server. Firewall rules from the
-original server are restored.
-
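-A sketch of recreating a VNet rule on the restored server, assuming the `New-AzPostgreSqlVirtualNetworkRule` cmdlet and an existing subnet with the **Microsoft.Sql** service endpoint enabled (the subnet ID is a placeholder):
-
-```azurepowershell-interactive
-# Placeholder subnet ID; substitute your own subscription, VNet, and subnet names
-$subnetId = '/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.Network/virtualNetworks/myvnet/subnets/mysubnet'
-New-AzPostgreSqlVirtualNetworkRule -Name myvnetrule -ResourceGroupName myresourcegroup -ServerName mydemoserver-restored -SubnetId $subnetId
-```
-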
-## Clean up resources
-
-In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group. Press the *Delete resource group* button in the *Overview* page for your resource group. When prompted, confirm the name of the resource group and click the final *Delete* button.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [How to back up and restore an Azure Database for PostgreSQL server using PowerShell](howto-restore-server-powershell.md)
postgresql Tutorial Monitor And Tune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/tutorial-monitor-and-tune.md
- Title: 'Tutorial: Monitor and tune - Azure Database for PostgreSQL - Single Server'
-description: This tutorial walks through monitoring and tuning in Azure Database for PostgreSQL - Single Server.
-Previously updated: 5/6/2019
-# Tutorial: Monitor and tune Azure Database for PostgreSQL - Single Server
-
-Azure Database for PostgreSQL has features that help you understand and improve your server performance. In this tutorial, you will learn how to:
-> [!div class="checklist"]
-> * Enable query and wait statistics collection
-> * Access and utilize the data collected
-> * View query performance and wait statistics over time
-> * Analyze a database to get performance recommendations
-> * Apply performance recommendations
-
-## Prerequisites
-You need an Azure Database for PostgreSQL server with PostgreSQL version 9.6 or 10. You can follow the steps in the [Create tutorial](tutorial-design-database-using-azure-portal.md) to create a server.
-
-> [!IMPORTANT]
-> **Query Store**, **Query Performance Insight**, and **Performance Recommendation** are in Public Preview.
-
-## Enabling data collection
-The [Query Store](concepts-query-store.md) captures a history of queries and wait statistics on your server and stores it in the **azure_sys** database on your server. It is an opt-in feature. To enable it:
-
-1. Open the Azure portal.
-
-2. Select your Azure Database for PostgreSQL server.
-
-3. Select **Server parameters**, which is in the **Settings** section of the menu on the left.
-
-4. Set **pg_qs.query_capture_mode** to **TOP** to start collecting query performance data. Set **pgms_wait_sampling.query_capture_mode** to **ALL** to start collecting wait statistics. Save.
-
- :::image type="content" source="./media/tutorial-performance-intelligence/query-store-parameters.png" alt-text="Query Store server parameters":::
-
-5. Allow up to 20 minutes for the first batch of data to persist in the **azure_sys** database.
--
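-Once data is being collected, you can also inspect it directly with SQL. A minimal sketch, assuming you connect to the **azure_sys** database with psql and that the Query Store views live under the `query_store` schema:
-
-```sql
--- Run while connected to the azure_sys database
-SELECT * FROM query_store.qs_view LIMIT 5;
-```
-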
-## Performance insights
-The [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal will surface visualizations on key information from Query Store.
-
-1. In the portal page of your Azure Database for PostgreSQL server, select **Query Performance Insight** under the **Support + troubleshooting** section of the menu on the left.
-
-2. The **Long running queries** tab shows the top 5 queries by average duration per execution, aggregated in 15-minute intervals.
-
- :::image type="content" source="./media/tutorial-performance-intelligence/query-performance-insight-landing-page.png" alt-text="Query Performance Insight landing page":::
-
- You can view more queries by selecting from the **Number of Queries** drop-down. The chart colors may change for a specific Query ID when you do this.
-
-3. You can click and drag in the chart to narrow down to a specific time window.
-
-4. Use the zoom in and out icons to view a smaller or larger period of time respectively.
-
-5. View the table below the chart to learn more details about the long-running queries in that time window.
-
-6. Select the **Wait Statistics** tab to view the corresponding visualizations on waits in the server.
-
- :::image type="content" source="./media/tutorial-performance-intelligence/query-performance-insight-wait-statistics.png" alt-text="Query Performance Insight wait statistics":::
--
-## Performance recommendations
-The [Performance Recommendations](concepts-performance-recommendations.md) feature analyzes workloads across your server to identify indexes with the potential to improve performance.
-
-1. Open **Performance Recommendations** from the **Support + troubleshooting** section of the menu bar on the Azure portal page for your PostgreSQL server.
-
- :::image type="content" source="./media/tutorial-performance-intelligence/performance-recommendations-landing-page.png" alt-text="Performance Recommendations landing page":::
-
-2. Select **Analyze** and choose a database. This will begin the analysis.
-
-3. Depending on your workload, this may take several minutes to complete. Once the analysis is done, there will be a notification in the portal.
-
-4. The **Performance Recommendations** window will show a list of recommendations if any were found.
-
-5. A recommendation will show information about the relevant **Database**, **Table**, **Column**, and **Index Size**.
-
- :::image type="content" source="./media/tutorial-performance-intelligence/performance-recommendations-result.png" alt-text="Performance Recommendations result":::
-
-6. To implement the recommendation, copy the query text and run it from your client of choice.
-
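-The copied text is typically a `CREATE INDEX` statement. A hypothetical example of what a recommendation might look like for the inventory table used earlier (run the exact text from the portal instead):
-
-```sql
--- Hypothetical example; the portal supplies the real statement
-CREATE INDEX idx_inventory_name ON public.inventory (name);
-```
-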
-### Permissions
-**Owner** or **Contributor** permissions are required to run analysis using the Performance Recommendations feature.
-
-## Clean up resources
-
-In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group. Press the *Delete resource group* button in the *Overview* page for your resource group. When prompted, confirm the name of the resource group and click the final *Delete* button.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for PostgreSQL.
search Reference Stopwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/reference-stopwords.md
+
+ Title: Stopwords
+
+description: Reference documentation containing the stopwords list of the Microsoft language analyzers.
+Last updated: 05/16/2022
+# Stopwords reference (Microsoft analyzers)
+
+When text is indexed into Azure Cognitive Search, it's processed by analyzers so it can be efficiently stored in a search index. During this [lexical analysis](tutorial-create-custom-analyzer.md#how-analyzers-work) process, [language analyzers](index-add-language-analyzers.md) will remove stopwords specific to that language. Stopwords are non-essential words such as "the" or "an" that can be removed without compromising the lexical integrity of your content.
+
+Stopword removal applies to all supported [Lucene and Microsoft analyzers](index-add-language-analyzers.md#supported-language-analyzers) used in Azure Cognitive Search.
+
+This article lists the stopwords used by the Microsoft analyzer for each language.
+
+For the stopword list for Lucene analyzers, see the [Apache Lucene source code on GitHub](https://github.com/apache/lucene/tree/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis).
+
+> [!TIP]
+> To view the output of any given analyzer, call the [Analyze Text REST API](/rest/api/searchservice/test-analyzer). This API is often helpful for debugging unexpected search results.
+
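+For example, the following sketch calls Analyze Text with the `en.microsoft` analyzer; the service name, index name, and key are placeholders:
+
+```http
+POST https://[service name].search.windows.net/indexes/[index name]/analyze?api-version=2020-06-30
+Content-Type: application/json
+api-key: [admin key]
+
+{
+  "text": "the quick brown fox",
+  "analyzer": "en.microsoft"
+}
+```
+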
+## Arabic (ar.microsoft)
+
+`في` `فى` `من` `ان` `أن` `إن` `على` `الى` `إلى` `التي` `التى` `عن` `الذي` `الذى` `مع` `لا` `ما` `هذا` `هذه` `بعد` `لم` `كان` `إنه` `انه` `أنه ` `كل` `او` `أو` `و` `ذلك` `وفي` `وفى` `هو` `قبل` `كما` `منذ` `غير` `كانت` `وكان` `أي` `اي` `اى` `حتى` `وقد` `ولا` `فيها` `قد` `هي` `هى` `وهو` `الذين` `ومن` `حول` `لكن` `له` `دون` `أيضا` `ايضا` `حيث` `الا` `ألا` `إلا` `بعض` `امام` `أمام` `فيه` `اذا` `إذا` `بها` `وان` `وإن` `وأن` `انها` `أنها` `إنها` `ثم` `نحو` `عليه` `لها` `وهي` `وهى` `ولم` `بل` `منها` `وبه` `به` `وكانت` `ومنها` `وعليها` `عليها` `عندما` `هناك` `يمكن` `ليس ` `ولن` `ومثل` `لدى` `وعبر` `وحين` `واما` `وإما` `وأما` `وعند` `وآخر` `وأي` `وأى` `واى` `واذ` `وإذ` `وتلك` `وال` `ومما` `ومنه` `وبأن` `وتحت` `وبما` `والآن` `والان` `وام` `وإم` `وفقط` `لن` `مثل` `وعلى` `عبر` `حين` `علي ` `اما` `أما` `إما` `عند` `آخر` `اخر` `ولكن` `اذ` `تلك` `أي ` `أى` `اي` `اى` `ال` `مما` `وهذا` `منه` `بأن` `تحت` `تكون` `وما` `ولكن ` `بما` `الآن` `فقط` `ام` `ا` `ب` `ت` `ث` `ج` `ح` `خ` `د` `ذ` `ر` `ز` `س` `ش` `ص` `ض` `ك` `ط` `ظ` `ع` `غ` `ف` `ق` `ك` `ل` `م` `ن` `ه` `و` `ي` `أ` `إ` `آ` `ى` `ئ` `ء` `ؤ` `ة`
+
+## Bengali (bn.microsoft)
+
+
+## Bulgarian (bg.microsoft)
+
+`вж` `до` `е` `и` `на` `от` `то` `у ` `че`
+
+## Catalan (ca.microsoft)
+
+`la` `el` `l` `les` `de` `d` `del` `dels` `i` `un` `una` `uns` `unes` `a` `als` `al` `en` `és` `es` `s` `se` `hi`
+
+## Chinese Simplified (zh-Hans.microsoft)
+
+`?about` `$ 1 2 3 4 5 6 7 8 9 0 _` `a b c d e f g h i j k l m n o p q r s t u v w x y z` `after` `all` `also` `an` `and` `another` `any` `are` `as` `at` `be` `because` `been` `before` `being` `between` `both` `but` `by` `came` `can` `come` `could` `did` `do` `each` `for` `from` `get` `got` `had` `has` `have` `he` `her` `here` `him` `himself` `his` `how` `if` `in` `into` `is` `it` `like` `make` `many` `me` `might` `more` `most` `much` `must` `my` `never` `now` `of` `on` `only` `or` `other` `our` `out` `over` `said` `same` `see` `should` `since` `some` `still` `such` `take` `than` `that` `the` `their` `them` `then` `there` `these` `they` `this` `those` `through` `to` `too` `under` `up` `very` `was` `way` `we` `well` `were` `what` `where` `which` `while` `who` `with` `would` `you` `your` `的` `在` `了` `是` `有` `为` `这` `我` `也` `就` `他` `与` `等` `以` `着` `而` `从` `并` `还` `已` `但` `你` `之` `更` `又` `得` `她` `它` `很` `其` `该` `那` `各`
+
+## Chinese Traditional (zh-Hant.microsoft)
+
+`?about` `$ 1 2 3 4 5 6 7 8 9 0 _` `a b c d e f g h i j k l m n o p q r s t u v w x y z` `after` `all` `also` `an` `and` `another` `any` `are` `as` `at` `be` `because` `been` `before` `being` `between` `both` `but` `by` `came` `can` `come` `could` `did` `do` `each` `for` `from` `get` `got` `had` `has` `have` `he` `her` `here` `him` `himself` `his` `how` `if` `in` `into` `is` `it` `like` `make` `many` `me` `might` `more` `most` `much` `must` `my` `never` `now` `of` `on` `only` `or` `other` `our` `out` `over` `said` `same` `see` `should` `since` `some` `still` `such` `take` `than` `that` `the` `their` `them` `then` `there` `these` `they` `this` `those` `through` `to` `too` `under` `up` `very` `was` `way` `we` `well` `were` `what` `where` `which` `while` `who` `with` `would` `you` `your` `的` `一` `不` `在` `人` `有` `是` `為` `以` `於` `上` `他` `而` `後` `之` `來` `及` `了` `因` `下` `可` `到` `由` `這` `與` `也` `此` `但` `並` `個` `其` `已` `無` `小` `我` `們` `起` `最` `再` `今` `去` `好` `只` `又` `或` `很` `亦` `某` `把` `那` `你` `乃` `它`
+
+## Croatian (hr.microsoft)
+
+`i` `u` `je` `se` `na` `za` `da` `su` `o` `od` `a` `s`
+
+## Czech (cs.microsoft)
+
+`na` `se` `je` `že` `do` `to` `ve` `ale` `za` `si` `pro` `po` `by` `od` `už` `který` `bude` `jako` `tak` `jsou` `jsem` `jsme` `však` `podle` `až` `jen` `ze` `před` `také` `jeho` `má` `když` `byl` `co` `jak` `nebo` `při` `ještě` `aby` `než` `budou` `ani` `jaké` `další` `kteří` `není` `bylo` `mezi` `v` `a` `i` `ač` `ačkoli` `přece` `no` `ne` `ano` `která` `které` `kterou` `kterými` `budu` `budeme` `budete` `byli` `byly` `o` `ať` `Á` `á` `Ě` `ě` `É` `é` `Í` `í` `Ó` `ó` `Ú` `ú` `Ů` `ů` `Ď` `ď` `Ň` `ň` `Č` `č` `Ř` `Š` `š` `ř` `Ť` `ť` `Ž` `ž` `Ý` `ý` `a` `b` `c` `d` `e` `f` `g` `h` `i` `j` `k` `l` `m` `n` `o` `p` `q` `r` `s` `t` `u` `v` `w` `x` `y` `z` `A` `B` `C` `D` `E` `F` `G` `H` `I` `J` `K` `L` `M` `N` `O` `P` `Q` `R` `S` `T` `U` `V` `W` `X` `Y` `Z` `0` `1` `2` `3` `4` `5` `6` `7` `8` `9` `I` `II` `III` `IV` `V` `VI` `VII` `VIII` `IX` `X` `XI` `XII` `XIII` `XIV` `XVI` `XVII` `XX` `X` `I` `V` `L` `D` `M` `C` `LD` `CCC` `IV` `VII` `XXXII` `LXXI` `VIII` `DCII` `LXX` `DCII.` `III` `DCCLII` `CDIX` `XXVII` `LIV` `MMVI` `MCMLXXII` `LXX` `DCC` `LII` `DCCIV` `DCV` `LXXIV` `LXV` `IXX` `XVIII`
+
+## Danish (da.microsoft)
+
+`af` `alting` `at` `begge` `countries` `de` `dem` `den` `dén` `denne` `der` `deres` `det` `dét` `dette` `dig` `din` `dine` `disse` `dit` `du` `eder` `eders` `en` `én` `enhver` `ens` `er` `et` `ét` `ethvert` `for` `fra` `ham` `han` `hans` `har` `hende` `hendes` `hin` `hinanden` `hun` `hverandre` `hvis` `hvo` `I` `ikke` `ingen` `ingenting` `jeg` `jer` `jeres` `man` `med` `men` `mig` `min` `mine` `mit` `nogen` `nogle` `og` `om` `os` `på` `sig` `sin` `sine` `sit` `som` `somme` `til` `være` `vi` `vor` `Vor` `vore` `vores`
+
+## Dutch (nl.microsoft)
+
+`de` `van` `en` `het` `in` `een` `is` `zijn` `de` `met` `op` `te` `voor` `die` `door` `dat` `aan` `tot` `als` `of` `in` `hij` `werd` `het` `uit` `bij` `ook` `niet` `wordt` `worden` `was` `er` `naar` `om` `zich` `maar` `heeft` `dan` `over` `deze` `nog` `meer` `kan` `ze` `hebben` `hun` `onder` `een` `kunnen` `tussen` `tegen` `dit` `na` `hij` `andere` `al` `zij` `veel` `men` `geen` `werden` `wel` `waar` `zie` `vooral` `weer` `deel` `je` `wat` `nu` `ten` `alle` `op` `van` `had` `waren` `maar` `moet` `zo` `zeer` `hem` `bij` `ook`
+
+## English (en.microsoft)
+
+`is` `and` `in` `it` `of` `the` `to` `that` `this` `these` `those` `is` `was` `for` `on` `be` `with` `as` `by` `at` `have` `are` `this` `not` `but` `had` `from` `or` `I` `my` `me` `mine` `myself` `you` `your` `yours` `yourself` `he` `him` `his` `himself` `she` `her` `hers` `herself` `it` `its` `itself` `we` `our` `ours` `ourselves` `they` `them` `their` `theirs` `themselves` `A` `B` `C` `D` `E` `F` `G` `H` `J` `K` `L` `M` `N` `O` `P` `Q` `R` `S` `T` `U` `V` `W` `X` `Y` `Z` `a` `b` `c` `d` `e` `f` `g` `h` `i` `j` `k` `l` `m` `n` `o` `p` `q` `r` `s` `t` `u` `v` `w` `x` `y` `z`
+
+## Finnish (fi.microsoft)
+
+`ai` `ainoa` `ainoaa` `ainoaan` `ainoaksi` `ainoalla` `ainoalle` `ainoalta` `ainoan` `ainoassa` `ainoasta` `ali` `alitse` `alla` `alle` `alta` `edellä` `edelle` `edeltä` `ehkä` `ei` `enemmän` `eniten` `ennen` `entä` `entäs` `erääkseen` `erääksi` `eräällä` `eräälle` `eräältä` `erään` `eräässä` `eräästä` `eräs` `eräskin` `erästä` `että` `hän` `häneen` `häneksi` `hänellä` `hänelle` `häneltä` `hänen` `hänessä` `hänestä` `harva` `harvaa` `harvaan` `harvaksi` `harvalla` `harvalle` `harvalta` `harvan` `harvassa` `harvasta` `harvat` `harvoihin` `harvoiksi` `harvoilla` `harvoille` `harvoilta` `harvoissa` `harvoista` `harvoja` `harvojen` `he` `heidän` `heihin` `heiksi` `heillä` `heille` `heiltä` `heissä` `heistä` `hiukan` `huolimatta` `ilman` `itse` `itseäni` `itseeni` `itseensä` `itsekseni` `itselläni` `itselleni` `itseni` `itsessäni` `itsestäni` `ja` `jälkeen` `johon` `johonkin` `joiden` `joidenkin` `joihin` `joihinkin` `joiksikin` `joilla` `joillakin` `joille` `joillekin` `joilta` `joiltakin` `joissa` `joissakin` `joista` `joistakin` `joita` `joka` `jokainen` `jokaiseksi` `jokaisella` `jokaiselle` `jokaiselta` `jokaisen` `jokaisessa` `jokaisesta` `jokaista` `jokin` `joko` `joksikin` `jokunen` `jolaiseen` `jolla` `jollainen` `jollaiseen` `jollaiseksi` `jollaisella` `jollaiselle` `jollaiselta` `jollaisen` `jollaisessa` `jollaisesta` `jollaiset` `jollaisia` `jollaisiin` `jollaisiksi` `jollaisilla` `jollaisille` `jollaisilta` `jollaisissa` `jollaisista` `jollaista` `jollaisten` `jollakin` `jolle` `jollekin` `jolta` `jonka` `jonkin` `jonkinlainen` `jonkinlaiseen` `jonkinlaiseksi` `jonkinlaisella` `jonkinlaiselle` `jonkinlaiselta` `jonkinlaisen` `jonkinlaisessa` `jonkinlaisesta` `jonkinlaiset` `jonkinlaisia` `jonkinlaisiin` `jonkinlaisiksi` `jonkinlaisilla` `jonkinlaisille` `jonkinlaisilta` `jonkinlaisissa` `jonkinlaisista` `jonkinlaista` `jonkinlaisten` `jos` `jossa` `jossakin` `josta` `jostakin` `jota` `jotakin` `jotka` `jotkin` `jotta` `kaikki` `kanssa` `kauas` `kaukana` `kautta` `keiden` `keitä` `kenen` `kera` `keskellä` `keskelle` `ketä` `ketkä` `kohti` `koska` `kuhunkin` `kuin` `kuinka` `kuitenkin` `kuka` `kukin` `kullakin` `kullekin` `kuluessa` `kuluttua` `kummallakin` `kummallekin` `kummaltakin` `kummankin` `kummassakin` `kummastakin` `kummatkin` `kumpaakin` `kumpaankin` `kumpanakin` `kumpiakin` `kumpienkin` `kumpikin` `kun` `kunkin` `kussakin` `kutakin` `kyllä` `kylläkin` `lähellä` `lähelle` `läpi` `lisäksi` `lukuunottamatta` `luokse` `luona` `luota` `me` `meidän` `meihin` `meiksi` `meillä` `meille` `meiltä` `meissä` `meistä` `meitä` `mikä` `miksei` `miksi` `milloin` `minä` `minua` `minuksi` `minulla` `minulle` `minulta` `minun` `minussa` `minusta` `minuun` `mistä` `miten` `mitkä` `molemmat` `molemmiksi` `molemmilla` `molemmille` `molemmissa` `molemmista` `molempia` `molempien` `molempiin` `moneen` `moneksi` `monella` `monelle` `monelta` `monen` `monenlainen` `monenlaiseen` `monenlaiseksi` `monenlaisella` `monenlaiselle` `monenlaiselta` `monenlaisen` `monenlaisessa` `monenlaisesta` `monenlaiset` `monenlaisia` `monenlaisiin` `monenlaisiksi` `monenlaisilla` `monenlaisille` `monenlaisilta` `monenlaisissa` `monenlaisista` `monenlaista` `monenlaisten` `monessa` `monesta` `moni` `monia` `monien` `moniin` `moniksi` `monilla` `monille` `monilta` `monissa` `monista` `monta` `montaa` `muihin` `muiksi` `muilla` `muille` `muilta` `muissa` `muista` `muita` `mukana` `mukanaan` `mutta` `muu` `muuan` `muuhun` `muuksi` `muulla` `muulle` `muulta` `muun` `muussa` `muusta` `muut` `muuta` `muutaista` `muutama` 
+`muutamaa` `muutamaan` `muutamaksi` `muutamalla` `muutamalle` `muutamalta` `muutaman` `muutamassa` `muutamasta` `muutamat` `muutamia` `muutamien` `muutamiin` `muutamiksi` `muutamilla` `muutamille` `muutamilta` `muutamissa` `muutamista` `myös` `myöten` `näet` `näiden` `näihin` `näiksi` `näille` `nämä` `ne` `niiden` `niihin` `niiksi` `niille` `nimittäin` `no` `noiden` `noihin` `noiksi` `noille` `nuo` `ohhoh` `ohi` `ohitse` `paikkeilla` `päin` `paitsi` `paljon` `pitkin` `pois` `poispäin` `sama` `samaa` `samaan` `samaksi` `samalla` `samalle` `samalta` `saman` `samassa` `samasta` `samat` `samoihin` `samoiksi` `samoilla` `samoille` `samoilta` `samoissa` `samoista` `samoja` `samojen` `se` `sellainen` `sellaiseen` `sellaiseksi` `sellaisella` `sellaiselle` `sellaiselta` `sellaisen` `sellaisessa` `sellaisesta` `sellaiset` `sellaisia` `sellaisiin` `sellaisiksi` `sellaisilla` `sellaisille` `sellaisilta` `sellaisissa` `sellaisista` `sellaista` `sellaisten` `sen` `siihen` `siinä` `siis` `siitä` `siksi` `sillä` `sille` `siltä` `silti` `sinä` `sinua` `sinuksi` `sinulla` `sinulle` `sinulta` `sinun` `sinussa` `sinusta` `sinuun` `sitä` `taakse` `taas` `tähän` `tai` `takaa` `takana` `täksi` `tällainen` `tällaiseen` `tällaiseksi` `tällaisella` `tällaiselle` `tällaiselta` `tällaisen` `tällaisessa` `tällaisesta` `tällaiset` `tällaisia` `tällaisiin` `tällaisiksi` `tällaisilla` `tällaisille` `tällaisilta` `tällaisissa` `tällaisista` `tällaista` `tällaisten` `tälle` `tämä` `tämän` `te` `teidän` `toki` `tosin` `tuo` `tuohon` `tuoksi` `tuolle` `usea` `useaa` `useaan` `useaksi` `usealla` `usealle` `usealta` `usean` `useassa` `useasta` `useat` `useiden` `useiksi` `useilla` `useille` `useilta` `useisiin` `useissa` `useista` `useita` `vähän` `vähemmän` `vähiten` `vai` `vaikka` `vailla` `varten` `vasten` `vastoin` `vielä` `vieläkin` `voi` `yli` `yllä` `ylle` `yltä` `ympäri` `ympärillä` `ympärille`
+
+## French (fr.microsoft)
+
+`ces` `cet` `cette` `de` `des` `du` `es` `et` `la` `le` `les` `on` `un` `une`
+
+## German (de.microsoft)
+
+`aber` `alle` `aller` `alles` `als` `am` `an` `auch` `auf` `aus` `bei` `bis` `dann` `das` `daß` `dein` `dem` `den` `der` `deren` `des` `dessen` `die` `diese` `dieser` `dieses` `du` `durch` `ein` `eine` `einem` `einen` `einer` `eines` `einige` `einigem` `einigen` `einiger` `einiges` `er` `es` `etliche` `etlichem` `etlichen` `etlicher` `etliches` `euer` `eurer` `für` `gegen` `habe` `haben` `hat` `hatte` `ich` `ihr` `ihre` `im` `immer` `in` `ist` `jede` `jedem` `jeden` `jeder` `jedes` `jene` `jener` `jenes` `kann` `kein` `keine` `keinem` `keinen` `können` `man` `manche` `manchem` `manchen` `mancher` `manches` `mehr` `mein` `mit` `nach` `nicht` `noch` `nur` `oder` `schon` `sei` `sein` `seine` `seiner` `sich` `sie` `sind` `so` `soll` `über` `um` `und` `unser` `unter` `vom` `von` `vor` `war` `welche` `welcher` `welches` `wenn` `werden` `wessen` `wie` `wieder` `wir` `wird` `worden` `wurde` `zu` `zum` `zur` `zwei` `zwischen` `a` `ä` `b` `c` `d` `e` `f` `g` `h` `i` `j` `k` `l` `m` `n` `o` `ö` `p` `q` `r` `s` `t` `u` `ü` `v` `w` `x` `y` `z` `ß` `A` `Ä` `B` `C` `D` `E` `F` `G` `H` `I` `J` `K` `L` `M` `N` `O` `Ö` `P` `Q` `R` `S` `T` `U` `Ü` `V` `W` `X` `Y` `Z` `é`
+
+## Greek (el.microsoft)
+
+`ο` `η` `το` `στο` `οι` `τα` `του` `εμάς` `εσάς` `εσένα` `εμένα` `σου` `αι` `ημών` `μένα` `σένα` `ων` `όντας` `εμέ` `σας` `μας` `σεις` `τοις` `τω` `υμάς` `υμείς` `υμών` `εσέ` `μείς` `μού` `σού` `τού` `τής` `τόν` `τήν` `μου` `τό` `μάς` `σάς` `τούς` `τά` `δικό` `δικός` `δικές` `δικών` `δική` `δικής` `δικήν` `δικιά` `δικιάν` `δικά` `δικιάς` `δικιές` `δικοί` `δικού` `δικούς` `δικόν` `της` `των` `τον` `την` `το` `τους` `τις` `τα` `τη` `ένας` `μια` `ένα` `ενός` `μιας` `με` `σε` `αν` `εάν` `να` `δια` `εκ` `εξ` `επί` `προ` `υπέρ` `από` `προς` `και` `ούτε` `μήτε` `ουδέ` `μηδέ` `ή` `είτε` `μα` `αλλά` `παρά` `όμως` `ωστόσο` `ενώ` `μολονότι` `μόνο` `πώς` `που` `πριν` `οτί` `λοιπόν` `ώστε` `άρα` `επομένως` `όταν` `σαν` `καθώς` `αφού` `αφότου` `πριν` `μόλις` `άμα` `προτού` `ως` `ώσπου` `όσο` `ωσότου` `όποτε` `κάθε` `γιατί` `επειδή` `ίσως` `παρά` `θα` `ας` `τι` `αντί` `μετά` `κατά` `από` `προς` `αλλά` `για` `τες` `κι` `σ'` `απ'` `γι'` `συ` `μ'` `κατ'` `ουτ'` `στ'` `παρ'` `τόσο` `τι` `όσο` `ό,τι` `θα` `όπου` `δε` `εάν` `εγώ` `εσύ` `αυτός` `αυτή` `αυτό` `εμείς` `εσείς` `αυτοί` `αυτές` `αυτά` `αυτών` `αυτούς` `αυτές` `πιο` `εδώ` `εκεί` `έτσι` `στα` `στων` `πλέον` `ακόμα` `τώρα` `τότε` `όταν` `ούτως` `άλλως` `αλλιώς` `συνεπώς` `εξής` `τούδε` `εφεξής` `όθεν` `οσοδήποτε` `εντούτοις` `μολαταύτα` `έστω` `παρόλο` `πια` `καθόλου` `καν` `χωρίς` `οποτεδήποτε` `πράγματι` `όντως` `άραγε` `μολοντούτο` `απολύτως` `παρομοίως` `σάμπως` `άκρως` `υπό` `ειδεμή` `δηλάδή` `ήτοι` `μέσω` `περί` `περίπου` `α` `A` `β` `B` `γ` `Γ` `δ` `Δ` `ε` `Ε` `ζ` `Ζ` `η` `H` `θ` `Θ` `ι` `Ι` `κ` `Κ` `λ` `Λ` `μ` `Μ` `ν` `Ν` `ξ` `Ξ` `ο` `Ο` `π` `Π` `ρ` `Ρ` `σ` `ς` `Σ` `τ` `Τ` `υ` `Υ` `φ` `Φ` `χ` `Χ` `ψ` `Ψ` `ω` `Ω` `ϊ` `ΐ` `Ϊ` `ϋ` `ΰ` `Ϋ`
+
+## Gujarati (gu.microsoft)
+
+`માં` `તે` `એ` `આ` `છે` `અને` `નો` `ની` `નું`
+
+## Hebrew (he.microsoft)
+
+`של` `אל` `את`
+
+## Hindi (hi.microsoft)
+
+`है` `हैं` `में` `और` `का` `की` `के` `वह` `यह`
+
+## Hungarian (hu.microsoft)
+
+`a` `az` `és` `hogy` `nem` `is` `de` `szerint` `már` `csak` `meg` `még` `ez` `volt` `mint` `azt` `vagy` `pedig` `aztán` `ha` `akkor` `izé` `szintén` `ki` `után` `kell` `majd` `van` `aki` `azonban` `lesz` `mert` `illetve` `amely` `akkor` `lehet` `nagyon` `miatt` `ami` `sem` `a` `á` `b` `c` `cs` `d` `dz` `dzs` `e` `é` `f` `g` `gy` `h` `i` `í` `j` `k` `l` `ly` `m` `n` `ny` `o` `ó` `ö` `ő` `p` `q` `r` `s` `sz` `t` `ty` `u` `ú` `ü` `ű` `v` `w` `x` `y` `z` `zs` `A` `Á` `B` `C` `Cs` `D` `Dz` `Dzs` `E` `É` `F` `G` `Gy` `H` `I` `Í` `J` `K` `L` `Ly` `M` `N` `Ny` `O` `Ó` `Ö` `Ő` `P` `Q` `R` `S` `Sz` `T` `Ty` `U` `Ú` `Ü` `Ű` `V` `W` `X` `Y` `Z` `Zs`
+
+## Icelandic (is.microsoft)
+
+`að` `í` `á` `það` `eru` `er` `og`
+
+## Indonesian (Bahasa) (id.microsoft)
+
+`ah` `di` `dong` `ialah` `ini` `itu` `juga` `ke` `sih`
+
+## Italian (it.microsoft)
+
+`a` `agli` `ai` `al` `all'` `alla` `alle` `allo` `d'` `degli` `dei` `del` `dell'` `della` `delle` `dello` `di` `e` `è` `gli` `i` `il` `in` `l'` `la` `le` `lo` `negli` `nei` `nel` `nell'` `nella` `nelle` `nello` `un'` `un` `una` `uno`
+
+## Japanese (ja.microsoft)
+
+`a` `and` `in` `is` `it` `of` `the` `to` `の` `を` `に` `は` `が` `と` `で`
+
+## Kannada (kn.microsoft)
+
+`ಅ` `ಈ` `ಮತ್ತು` `ಮತ್ತೆ` `ಇರುತ್ತಾನೆ` `ಇರುತ್ತಾಳೆ` `ಇರುತ್ತದೆ` `ಇದು` `ಇದನ್ನು` `ಇದರ` `ಅದು` `ಅದನ್ನು` `ಅದರ`
+
+## Latvian (lv.microsoft)
+
+`no` `un` `uz` `ir`
+
+## Lithuanian (lt.microsoft)
+
+`ir` `arba` `yra` `jis` `ji`
+
+## Malayalam (ml.microsoft)
+
+`ഒരു` `ആണ്` `ആണ്‍` `ഇത്` `അത്` `ഈ` `ആ`
+
+## Malay (Latin) (ms.microsoft)
+
+`adalah` `atau` `dalam` `dan` `di` `ia` `ialah` `ini` `itu` `juga` `lah`
+
+## Marathi (mr.microsoft)
+
+`होते` `त्या` `होता` `होती` `होतो` `होतं` `हा` `हे` `ही` `ह्या` `हें` `हीं` `ती` `ते` `ती` `त्या` `तें` `तीं` `त्यां`
+
+## Norwegian (nb.microsoft)
+
+`av` `og` `en` `ei` `et` `til` `i` `er` `den` `det`
+
+## Polish (pl.microsoft)
+
+`a` `do` `i` `jest` `na` `o` `ta` `te` `to` `w` `we` `z` `za` `się` `że` `od` `przez` `po` `dla` `jak` `tym` `ale` `ma` `co` `czy` `oraz` `może` `tego` `tylko` `jednak` `jego` `już` `lub` `ich` `ze` `tak` `być` `być` `jestem` `jesteś` `jest` `jesteśmy` `jesteście` `są` `będę` `będziesz` `będzie` `będziemy` `będziecie` `będą` `byłem` `byłam` `byłeś` `byłaś` `był` `była` `było` `byliśmy` `byłyśmy` `byliście` `byłyście` `byli` `były` `byłbym` `byłabym` `byłbyś` `byłabyś` `byłby` `byłaby` `byłoby` `bylibyśmy` `byłybyśmy` `bylibyście` `byłybyście` `byliby` `byłyby` `bądź` `bądźmy` `bądźcie` `będąc` `byłże` `byłaże` `byłoże` `byliże` `byłyże` `które` `która` `też` `także` `który` `tej` `przed` `można` `jej` `przy` `ten` `pod` `jeszcze` `gdy` `jako` `by` `bardzo` `bo` `jeśli` `tych` `więc` `bez` `również` `nawet` `temu` `tm` `tymi` `ja` `mnie` `mię` `mi` `mną` `ty` `ciebie` `cię` `tobie` `ci` `tobą` `on` `go` `mu` `jemu` `niego` `nim` `niemu` `my` `nas` `nam` `nami` `wy` `was` `wam` `wami` `oni` `im` `nich` `nimi` `mój` `mojego` `mego` `mojemu` `memu` `moim` `mym` `moja` `mojej` `mej` `moją` `mą` `moi` `moje` `moich` `mych` `me` `moimi` `mymi` `twój` `twojego` `twego` `twojemu` `twemu` `twoim` `twym` `twoja` `twa` `twojej` `twej` `twoją` `twą` `twoi` `twoje` `twe` `twoich` `twych` `twoimi` `twymi` `nasz` `naszego` `naszemu` `naszym` `nasza` `naszej` `naszą` `nasi` `nasze` `naszych` `naszymi` `wasz` `waszego` `waszemu` `waszym` `wasza` `waszej` `waszą` `wasi` `wasze` `waszych` `waszymi` `one` `ą` `b` `c` `ć` `d` `e` `ę` `f` `g` `h` `j` `k` `l` `ł` `m` `n` `ń` `ó` `p` `q` `r` `s` `ś` `t` `u` `v` `x` `y` `ź` `ż` `A` `Ą` `B` `C` `Ć` `D` `E` `Ę` `F` `G` `H` `I` `J` `K` `L` `Ł` `M` `N` `Ń` `O` `Ó` `P` `Q` `R` `S` `Ś` `T` `U` `V` `W` `X` `Y` `Z` `Ź` `Ż`
+
+## Portuguese (Brazil) (pt-Br.microsoft)
+
+`à` `às` `é` `a` `ao` `aos` `as` `da` `das` `de` `do` `dos` `e` `em` `na` `nas` `no` `nos` `o` `os` `para` `um` `uma` `umas` `uns`
+
+## Portuguese (Portugal) (pt-Pt.microsoft)
+
+`à` `às` `é` `a` `ao` `aos` `as` `da` `das` `de` `do` `dos` `e` `em` `na` `nas` `no` `nos` `o` `os` `para` `um` `uma` `umas` `uns`
+
+## Punjabi (pa.microsoft)
+
+`ਹੈ` `ਹਨ` `ਦਾ` `ਦੇ` `ਦੀ` `ਦੀਆਂ` `ਵਿਚ` `ਅਤੇ` `ਇਹ` `ਉਹ`
+
+## Romanian (ro.microsoft)
+
+`a` `ai` `al` `ale` `alor` `de` `din` `este` `în` `într` `la` `o` `pe` `şi` `un` `unei` `unor` `unui`
+
+## Russian (ru.microsoft)
+
+`во` `бы` `не` `на` `что` `по` `для` `как` `от` `это` `из` `за` `только` `или` `их` `все` `его` `он` `но` `до` `же` `то` `так` `уже` `а` `б` `в` `г` `д` `е` `ё` `ж` `з` `и` `й` `к` `л` `м` `н` `о` `п` `р` `с` `т` `у` `ф` `х` `ц` `ч` `ш` `щ` `ъ` `ы` `ь` `э` `ю` `я`
+
+## Serbian (Cyrillic) (sr-cyrillic.microsoft)
+
+`и` `је` `у` `да` `се` `на` `су` `за` `од` `са` `а` `из` `о`
+
+## Serbian (Latin) (sr-latin.microsoft)
+
+`i` `je` `u` `se` `su` `a`
+
+## Slovak (sk.microsoft)
+
+`z` `od` `na` `k` `o` `na` `a` `k` `v` `vo` `na` `je` `ono`
+
+## Slovenian (sl.microsoft)
+
+`in` `k` `h` `v` `je` `ono` `onega` `onemu` `onem` `onim`
+
+## Spanish (es.microsoft)
+
+`a` `al` `de` `del` `e` `el` `en` `es` `la` `las` `lo` `los` `un` `una` `unas` `unos` `y`
+
+## Swedish (sv.microsoft)
+
+`av` `och` `en` `ett` `till` `i` `är` `den` `det`
+
+## Tamil (ta.microsoft)
+
+`அது` `இது` `ஒரு` `அல்லது` `இந்த` `அந்த`
+
+## Telugu (te.microsoft)
+
+`అతను` `ఆయన` `వారు` `వాండ్ళు` `వాళ్ళు` `ఆమె` `ఆనిడ` `అది` `అవి` `ఇతను` `ఈయవ` `వీరు` `వీండ్ళు` `ఈమె` `ఈనిడ`
+
+## Thai (th.microsoft)
+
+`นะ` `ครับ` `ค่ะ` `ละ` `ฮะ` `ฮิ` `จ๊ะ` `วะ` `มั๊ย` `สิ` `เออ` `ฮึ` `ซิ` `นะจ๊ะ` `นะค่ะ` `เถอะ` `เถอะนะ` `เถอะน่า` `หรอก` `อุ๊ย` `อ่า` `ซะ` `หนิ` `หนะ` `หน่า` `นิ` `แหละ` `จ๊อกๆ` `ติ๋งๆ` `เหรอะ` `บรึม` `อึ๋ย` `แหนะ` `เฮ้ย` `โว้ย` `โอย` `เจี๊ยก` `หร๊อก` `บ๊ะ` `แน่ะ` `โอ` `แฮ้` `เหม่` `เอ๊ะ` `แฮ` `แฮะ`
+
+## Turkish (tr.microsoft)
+
+`ama` `ancak` `bazı` `bir` `çok` `da` `daha` `de` `değil` `diye` `en` `gibi` `göre` `hem` `her` `için` `ile` `ise` `kadar` `ki` `sadece` `üzere` `ve` `veya` `ya` `a` `b` `c` `ç` `d` `e` `f` `g` `ğ` `h` `ı` `i` `j` `k` `l` `m` `n` `o` `ö` `p` `r` `s` `ş` `t` `u` `ü` `v` `y` `z` `A` `B` `C` `Ç` `D` `E` `F` `G` `Ğ` `H` `I` `İ` `J` `K` `L` `M` `N` `O` `Ö` `P` `R` `S` `Ş` `T` `U` `Ü` `V` `Y` `Z` `ben` `sen` `o` `biz` `siz` `onlar` `bu` `şu` `hangi ` `kendi` `bazı` `çok` `kim` `beni ` `bana` `bende ` `benden` `benim` `benimle` `bensiz` `seni ` `sana` `sende` `senden` `seninle` `sensiz` `onu ` `ona` `onda` `ondan` `onunla` `onsuz` `bizi ` `bize` `bizde` `bizden` `bizimle` `bizsiz` `sizi` `size` `sizde` `sizden` `sizinle` `sizsiz` `onları` `onlara` `onlarda` `onlardan` `onların` `onlarla` `onlarsız` `bazısı` `bazısını` `bazısına` `bazısında` `bazısından` `bazısının` `bazısıyla` `bazılarımız` `bazılarınız` `bazılarına` `bazılarını` `bazılarından` `bazılarıyla` `bazılarımızı` `bazılarınızı` `bazılarımıza` `bazılarınıza` `bazılarımızda` `bazılarınızda` `bazılarımızdan` `bazılarınızdan` `bazılarımızla` `bazılarınızla` `kimileri` `kimilerini` `kimilerine` `kimilerinde` `kimilerinden` `kimileriyle` `kimilerimiz` `kimilerimizi` `kimilerimize` `kimilerimizde` `kimilerimizden` `kimilerimizin` `kimilerimizle` `kimilerimizsiz` `kimileriniz` `kimilerinizi` `kimilerinize` `kimilerinizde` `kimilerinizden` `kimilerinizin` `kimilerinizle` `kimilerinizsiz` `kendim` `kendimi` `kendime` `kendimde` `kendimden` `kendimin` `kendimle` `kendimsiz` `kendin` `kendini` `kendine` `kendinde` `kendinden` `kendinin` `kendinle` `kendinsiz` `kendisi` `kendisini` `kendisine` `kendisinde` `kendisinden` `kendisiyle` `kendileri` `kendilerini` `kendilerine` `kendilerinde` `kendilerinden` `kendileriyle` `kaçımız` `kaçımızı` `kaçımıza` `kaçımızda` `kaçımızdan` `kaçımızla` `hangimiz` `hangimizi` `hangimize` `hangimizde` `hangimizden` `hangimizle` `hangimizsiz` `bunu` `buna` `bunda` `bundan` `bunun` `bununla` `bunsuz` `bunlar` `bunları` `bunlara` `bunlarda` `bunlardan` `bunların` `bunlarla` `bunlarsız` `şunu ` `şuna ` `şunda` `şundan` `şunun` `şununla` `şunsuz` `şunlar` `şunları` `şunlara ` `şunlarda` `şunlardan` `şunların` `şunlarla` `şunlarsız`
+
+## Ukrainian (uk.microsoft)
+
+`і` `й` `до` `в` `у` `є` `з` `із`
+
+## Urdu (ur.microsoft)
+
+`ہے` `ہیں` `میں` `اور` `کا` `کی` `کے` `یہ` `وہ` `اس` `ان`
+
+## See also
+
++ [Tutorial: Create a custom analyzer for phone numbers](tutorial-create-custom-analyzer.md)
++ [Add language analyzers to string fields](index-add-language-analyzers.md)
++ [Add custom analyzers to string fields](index-add-custom-analyzers.md)
++ [Full text search in Azure Cognitive Search](search-lucene-query-architecture.md)
++ [Analyzers for text processing in Azure Cognitive Search](search-analyzers.md)
search Search Analyzers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-analyzers.md
An *analyzer* is a component of the [full text search engine](search-lucene-query-architecture.md) that's responsible for processing strings during indexing and query execution. Text processing (also known as lexical analysis) is transformative, modifying a string through actions such as these:
-+ Remove non-essential words ([stopwords](https://github.com/Azure-Samples/azure-search-sample-dat)) and punctuation
++ Remove non-essential words ([stopwords](reference-stopwords.md)) and punctuation
+ Split up phrases and hyphenated words into component parts
+ Lower-case any upper-case words
+ Reduce words into primitive root forms for storage efficiency and so that matches can be found regardless of tense
For more background on lexical analysis, listen to the following video clip for
In Azure Cognitive Search, an analyzer is automatically invoked on all string fields marked as searchable.
-By default, Azure Cognitive Search uses the [Apache Lucene Standard analyzer (standard lucene)](https://lucene.apache.org/core/6_6_1/core/org/apache/lucene/analysis/standard/StandardAnalyzer.html), which breaks text into elements following the ["Unicode Text Segmentation"](https://unicode.org/reports/tr29/) rules. Additionally, the standard analyzer converts all characters to their lower case form. Both indexed documents and search terms go through the analysis during indexing and query processing.
+By default, Azure Cognitive Search uses the [Apache Lucene Standard analyzer (standard lucene)](https://lucene.apache.org/core/6_6_1/core/org/apache/lucene/analysis/standard/StandardAnalyzer.html), which breaks text into elements following the ["Unicode Text Segmentation"](https://unicode.org/reports/tr29/) rules. The standard analyzer converts all characters to their lower case form. Both indexed documents and search terms go through the analysis during indexing and query processing.
You can override the default on a field-by-field basis. Alternative analyzers can be a [language analyzer](index-add-language-analyzers.md) for linguistic processing, a [custom analyzer](index-add-custom-analyzers.md), or a built-in analyzer from the [list of available analyzers](index-add-custom-analyzers.md#built-in-analyzers).
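If you want to check how a particular analyzer will tokenize a string before assigning it, you can call the service's Analyze Text API. Below is a minimal sketch, assuming a hypothetical index named `hotels` and a recent stable API version (verify the current version for your service):

```http
POST https://[service name].search.windows.net/indexes/hotels/analyze?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
  "text": "The Quick-Brown Fox.",
  "analyzer": "standard"
}
```

With the standard analyzer, the response should list the tokens `the`, `quick`, `brown`, and `fox`, which makes it easy to see how punctuation, hyphens, and casing are handled before you commit to an analyzer assignment.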
To add a new field to an existing index, call [Update Index](/rest/api/searchser
To add a custom analyzer to an existing index, pass the "allowIndexDowntime" flag in [Update Index](/rest/api/searchservice/update-index) if you want to avoid this error:
-*"Index update not allowed because it would cause downtime. In order to add new analyzers, tokenizers, token filters, or character filters to an existing index, set the 'allowIndexDowntime' query parameter to 'true' in the index update request. Note that this operation will put your index offline for at least a few seconds, causing your indexing and query requests to fail. Performance and write availability of the index can be impaired for several minutes after the index is updated, or longer for very large indexes."*
+`"Index update not allowed because it would cause downtime. In order to add new analyzers, tokenizers, token filters, or character filters to an existing index, set the 'allowIndexDowntime' query parameter to 'true' in the index update request. Note that this operation will put your index offline for at least a few seconds, causing your indexing and query requests to fail. Performance and write availability of the index can be impaired for several minutes after the index is updated, or longer for very large indexes."`
## Recommendations for working with analyzers
This section offers advice on how to work with analyzers.
### One analyzer for read-write unless you have specific requirements
-Azure Cognitive Search lets you specify different analyzers for indexing and search via additional indexAnalyzer and searchAnalyzer field properties. If unspecified, the analyzer set with the analyzer property is used for both indexing and searching. If the analyzer is unspecified, the default Standard Lucene analyzer is used.
+Azure Cognitive Search lets you specify different analyzers for indexing and search through the "indexAnalyzer" and "searchAnalyzer" field properties. If unspecified, the analyzer set with the analyzer property is used for both indexing and searching. If the analyzer is unspecified, the default Standard Lucene analyzer is used.
A general rule is to use the same analyzer for both indexing and querying, unless specific requirements dictate otherwise. Be sure to test thoroughly. When text processing differs at search and indexing time, you run the risk of mismatch between query terms and indexed terms when the search and indexing analyzer configurations are not aligned.
The examples below show analyzer definitions for a few key scenarios.
### Custom analyzer example
-This example illustrates an analyzer definition with custom options. Custom options for char filters, tokenizers, and token filters are specified separately as named constructs, and then referenced in the analyzer definition. Predefined elements are used as-is and simply referenced by name.
+This example illustrates an analyzer definition with custom options. Custom options for char filters, tokenizers, and token filters are specified separately as named constructs, and then referenced in the analyzer definition. Predefined elements are used as-is and referenced by name.
Walking through this example:
The "analyzer" element overrides the Standard analyzer on a field-by-field basis
### Mixing analyzers for indexing and search operations
-The APIs include additional index attributes for specifying different analyzers for indexing and search. The searchAnalyzer and indexAnalyzer attributes must be specified as a pair, replacing the single analyzer attribute.
+The APIs include index attributes for specifying different analyzers for indexing and search. The searchAnalyzer and indexAnalyzer attributes must be specified as a pair, replacing the single analyzer attribute.
```json
search Search Lucene Query Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-lucene-query-architecture.md
Lexical analyzers process *term queries* and *phrase queries* after the query tr
The most common form of lexical analysis is *linguistic analysis*, which transforms query terms based on rules specific to a given language:

* Reducing a query term to the root form of a word
-* Removing non-essential words ([stopwords](https://github.com/Azure-Samples/azure-search-sample-dat), such as "the" or "and" in English)
+* Removing non-essential words ([stopwords](reference-stopwords.md), such as "the" or "and" in English)
* Breaking a composite word into component parts
* Lower casing an upper case word
search Tutorial Create Custom Analyzer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-create-custom-analyzer.md
Keyword tokenizers always output the text they're given as a single term.
### Token filters
-Token filters will filter out or modify the tokens generated by the tokenizer. One common use of a token filter is to lowercase all characters using a lowercase token filter. Another common use is filtering out stopwords such as `the`, `and`, or `is`.
+Token filters will filter out or modify the tokens generated by the tokenizer. One common use of a token filter is to lowercase all characters using a lowercase token filter. Another common use is filtering out [stopwords](reference-stopwords.md) such as `the`, `and`, or `is`.
While we don't need to use either of those filters for this scenario, we'll use an nGram token filter to allow for partial searches of phone numbers.
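As a rough sketch of that idea (the analyzer and filter names below are illustrative, not the tutorial's exact definitions), a custom analyzer that pairs a keyword tokenizer with an nGram token filter could be declared like this in the index definition:

```json
{
  "analyzers": [
    {
      "name": "phone_analyzer",
      "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
      "tokenizer": "keyword_v2",
      "tokenFilters": [ "phone_ngram" ]
    }
  ],
  "tokenFilters": [
    {
      "name": "phone_ngram",
      "@odata.type": "#Microsoft.Azure.Search.NGramTokenFilterV2",
      "minGram": 3,
      "maxGram": 20
    }
  ]
}
```

A field that uses `phone_analyzer` would then match a query such as `425` against the stored number `(425) 555-0100`, because the filter emits every 3- to 20-character substring of the single keyword token.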
sentinel Investigate Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/investigate-cases.md
To use the investigation graph:
![Explore more details](media/investigate-cases/exploration-cases.png)
- For example, on a computer you can request related alerts. If you select an exploration query, the resulting entitles are added back to the graph. In this example, selecting **Related alerts** returned the following alerts into the graph:
+ For example, you can request related alerts. If you select an exploration query, the resulting entities are added back to the graph. In this example, selecting **Related alerts** returned the following alerts into the graph:
:::image type="content" source="media/investigate-cases/related-alerts.png" alt-text="Screenshot: view related alerts" lightbox="media/investigate-cases/related-alerts.png":::
+ Note that the related alerts appear connected to the entity by dotted lines.
+
1. For each exploration query, you can select the option to open the raw event results and the query used in Log Analytics, by selecting **Events\>**.

1. In order to understand the incident, the graph gives you a parallel timeline.
To use the investigation graph:
:::image type="content" source="media/investigate-cases/use-timeline.png" alt-text="Screenshot: use timeline in map to investigate alerts." lightbox="media/investigate-cases/use-timeline.png":::
+## Focus your investigation
+
+Learn how you can broaden or narrow the scope of your investigation by either [adding alerts to your incidents or removing alerts from incidents](relate-alerts-to-incidents.md).
+
## Similar incidents (preview)

As a security operations analyst, when investigating an incident you'll want to pay attention to its larger context. For example, you'll want to see if other incidents like this have happened before or are happening now.
sentinel Relate Alerts To Incidents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/relate-alerts-to-incidents.md
+
+ Title: Relate alerts to incidents in Microsoft Sentinel | Microsoft Docs
+description: This article shows you how to relate alerts to your incidents in Microsoft Sentinel.
++ Last updated : 05/12/2022+++
+# Relate alerts to incidents in Microsoft Sentinel
++
+This article shows you how to relate alerts to your incidents in Microsoft Sentinel. This feature allows you to manually or automatically add alerts to, or remove them from, existing incidents as part of your investigation processes, refining the incident scope as the investigation unfolds.
+
+> [!IMPORTANT]
+> Incident expansion is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Expand the scope and power of your incidents
+
+One thing that this feature allows you to do is to include alerts from one data source in incidents generated by another data source. For example, you can add alerts from Microsoft Defender for Cloud, or from various third-party data sources, to incidents imported into Microsoft Sentinel from Microsoft 365 Defender.
+
+This feature is built into the latest version of the Microsoft Sentinel API, which means that it's available to the Logic Apps connector for Microsoft Sentinel. So you can use playbooks to automatically add an alert to an incident if certain conditions are met.
+
+You can also use this automation to create custom correlations, or to define custom criteria for grouping alerts into incidents when they're created.
+
+## Add alerts using the investigation graph
+
+The [investigation graph](investigate-cases.md) is a visual, intuitive tool that presents connections and patterns and enables your analysts to ask the right questions and follow leads. You can use it to add alerts to and remove them from your incidents, broadening or narrowing the scope of your investigation.
+
+1. From the Microsoft Sentinel navigation menu, select **Incidents**.
+
+ :::image type="content" source="media/investigate-cases/incident-severity.png" alt-text="Screenshot of incidents queue displayed in a grid." lightbox="media/investigate-cases/incident-severity.png":::
+
+1. Select an incident to investigate. In the incident details panel, select the **Actions** button and choose **Investigate** from the pop-up menu. This will open the investigation graph.
+
+ :::image type="content" source="media/relate-alerts-to-incidents/investigation-map.png" alt-text="Screenshot of incidents with alerts in investigation graph." lightbox="media/relate-alerts-to-incidents/investigation-map.png":::
+
+1. Hover over any entity to reveal the list of **exploration queries** to its side. Select **Related alerts**.
+
+ :::image type="content" source="media/relate-alerts-to-incidents/see-alert-options.png" alt-text="Screenshot of alert exploration queries in investigation graph.":::
+
+ The related alerts will appear connected to the entity by dotted lines.
+
+ :::image type="content" source="media/relate-alerts-to-incidents/related-alerts.png" alt-text="Screenshot of related alerts appearing in investigation graph.":::
+
+1. Hover over one of the related alerts until a menu pops out to its side. Select **Add alert to incident (Preview)**.
+
+ :::image type="content" source="media/relate-alerts-to-incidents/add-alert-to-incident.png" alt-text="Screenshot of adding an alert to an incident in the investigation graph.":::
+
+1. The alert is added to the incident, and is now, for all intents and purposes, part of the incident, along with all its entities and details. You'll see two visual representations of this:
+
+ - The line connecting it to the entity in the investigation graph has changed from dotted to solid, and connections to entities in the added alert have been added to the graph.
+
+ :::image type="content" source="media/relate-alerts-to-incidents/alert-joined-to-incident.png" alt-text="Screenshot showing an alert added to an incident." lightbox="media/relate-alerts-to-incidents/alert-joined-to-incident.png":::
+
+ - The alert now appears in this incident's timeline, together with the alerts that were already there.
+
+ :::image type="content" source="media/relate-alerts-to-incidents/two-alerts.png" alt-text="Screenshot showing an alert added to an incident's timeline.":::
+
+### Special situations
+
+When adding an alert to an incident, depending on the circumstances, you might be asked to confirm your request or to choose between different options. The following are some examples of these situations, the choices you will be asked to make, and their implications.
+
+- The alert you want to add already belongs to another incident.
+
+ In this case, you'll see a message that the alert is part of another incident or incidents, and you'll be asked if you want to proceed. Select **OK** to add the alert or **Cancel** to leave things as they were.
+
+ Adding the alert to this incident *will not remove it* from any other incidents. Alerts can be related to more than one incident. If you want, you can remove the alert manually from the other incident(s) by following the link(s) in the message prompt above.
+
+- The alert you want to add belongs to another incident, and it's the only alert in the other incident.
+
+ This is different from the case above, since if the alert is alone in the other incident, tracking it in this incident could make the other incident irrelevant. So in this case, you'll see this dialog:
+
+ :::image type="content" source="media/relate-alerts-to-incidents/keep-or-close-other-incident.png" alt-text="Screenshot asking whether to keep or close other incident.":::
+
+ - **Keep other incident** preserves the other incident as is, while also adding the alert to this one.
+
+ - **Close other incident** adds the alert to this incident and closes the other incident, adding the closing reason "Undetermined" and the comment "Alert was added to another incident" with the open incident's number.
+
+ - **Cancel** preserves the status quo. It makes no changes to either the open incident or any other referenced incident.
+
+ Which of these options you choose depends on your particular needs; we don't recommend one choice over the other.
+
+### Limitations
+
+- Microsoft Sentinel imports both alerts and incidents from Microsoft 365 Defender. For the most part, you can treat these alerts and incidents like regular Microsoft Sentinel alerts and incidents.
+
+ However, you can only add Defender alerts to Defender incidents (or remove them) in the Defender portal, not in the Sentinel portal. If you try doing this in Microsoft Sentinel, you will get an error message. You can pivot to the incident in the Microsoft 365 Defender portal using the link in the Microsoft Sentinel incident. Don't worry, though - any changes you make to the incident in the Microsoft 365 Defender portal are [synchronized](microsoft-365-defender-sentinel-integration.md#working-with-microsoft-365-defender-incidents-in-microsoft-sentinel-and-bi-directional-sync) with the parallel incident in Microsoft Sentinel, so you'll still see the added alerts in the incident in the Sentinel portal.
+
+ You *can* add Microsoft 365 Defender alerts to non-Defender incidents, and non-Defender alerts to Defender incidents, in the Microsoft Sentinel portal.
+
+- An incident can contain a maximum of 150 alerts. If you try to add an alert to an incident with 150 alerts in it, you will get an error message.
+
+## Add/remove alerts using playbooks
+
+Adding alerts to and removing them from incidents is also available through Logic Apps actions in the Microsoft Sentinel connector, and therefore in Microsoft Sentinel playbooks. You need to supply the **incident ARM ID** and the **system alert ID** as parameters, and you can find them both in the playbook schema for both the alert and incident triggers.
+
+Microsoft Sentinel supplies a sample playbook template in the templates gallery that shows you how to work with this capability:
++
+Here's how the **Add alert to incident (Preview)** action is used in this playbook, as an example of how you can use it elsewhere:
++
+## Add/remove alerts using the API
+
+You're not limited to the portal when using this feature. It's also accessible through the Microsoft Sentinel API, using the [Incident relations](/rest/api/securityinsights/preview/incident-relations) operation group, which allows you to get, create, update, and delete relationships between alerts and incidents.
+
+### Create a relationship
+
+You add an alert to an incident by creating a relationship between them. Use the following endpoint to add an alert to an existing incident. After this request is made, the alert joins the incident and will be visible in the list of alerts in the incident in the portal.
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/providers/Microsoft.SecurityInsights/incidents/{incidentId}/relations/{relationName}?api-version=2021-10-01-preview
+```
+
+The request body looks like this:
+
+```json
+{
+ "properties": {
+ "relatedResourceId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/providers/Microsoft.SecurityInsights/entities/{systemAlertId}"
+ }
+}
+```
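For instance, with hypothetical placeholder values filled in (the GUIDs and names here are illustrative only), the call might look like this:

```http
PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myWorkspace/providers/Microsoft.SecurityInsights/incidents/afc85974-6599-4b12-8b87-a0014b440ba3/relations/myAlertRelation?api-version=2021-10-01-preview

{
  "properties": {
    "relatedResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myWorkspace/providers/Microsoft.SecurityInsights/entities/6a867f42-9000-4499-b925-d8a05e7c9a42"
  }
}
```

Note that relation names must be unique within an incident; reusing a name for a different alert produces the 409 error listed in the error codes table below.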
+
+### Delete a relationship
+
+You remove an alert from an incident by deleting the relationship between them. Use the following endpoint to remove an alert from an existing incident. After this request is made, the alert will no longer be connected to or appear in the incident.
+
+```http
+DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/providers/Microsoft.SecurityInsights/incidents/{incidentId}/relations/{relationName}?api-version=2021-10-01-preview
+```
+
+### List alert relationships
+
+You can also list all the alerts that are related to a particular incident, with this endpoint and request:
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/providers/Microsoft.SecurityInsights/incidents/{incidentId}/relations?api-version=2021-10-01-preview
+```
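The response is a collection of relation objects. An abridged sketch of the shape, with illustrative values:

```json
{
  "value": [
    {
      "name": "myAlertRelation",
      "type": "Microsoft.SecurityInsights/incidents/relations",
      "properties": {
        "relatedResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myWorkspace/providers/Microsoft.SecurityInsights/entities/6a867f42-9000-4499-b925-d8a05e7c9a42",
        "relatedResourceType": "Microsoft.SecurityInsights/entities"
      }
    }
  ]
}
```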
+
+### Specific error codes
+
+The [general API documentation](/rest/api/securityinsights/preview/incident-relations) lists expected response codes for the [Create](/rest/api/securityinsights/preview/incident-relations/create-or-update#response), [Delete](/rest/api/securityinsights/preview/incident-relations/delete#response), and [List](/rest/api/securityinsights/preview/incident-relations/list#response) operations mentioned above. Error codes are only mentioned there as a general category. Here are the possible specific error codes and messages listed there under the category of "Other Status Codes":
+
+| Code | Message |
+| - | - |
+| **400 Bad Request** | Failed to create relation. Different relation type with name {relationName} already exists in incident {incidentIdentifier}. |
+| **400 Bad Request** | Failed to create relation. Alert {systemAlertId} already exists in incident {incidentIdentifier}. |
+| **400 Bad Request** | Failed to create relation. Related resource and incident should belong to the same workspace. |
+| **400 Bad Request** | Failed to create relation. Microsoft 365 Defender alerts cannot be added to Microsoft 365 Defender incidents. |
+| **400 Bad Request** | Failed to delete relation. Microsoft 365 Defender alerts cannot be removed from Microsoft 365 Defender incidents. |
+| **404 Not found** | Resource '{systemAlertId}' does not exist. |
+| **404 Not found** | Incident doesn't exist. |
+| **409 Conflict** | Failed to create relation. Relation with name {relationName} already exists in incident {incidentIdentifier} to different alert {systemAlertId}. |
+
+## Next steps
+In this article, you learned how to add alerts to incidents and remove them using the Microsoft Sentinel portal and API. For more information, see:
+
+- [Investigate incidents with Microsoft Sentinel](investigate-cases.md)
+- [Incident relations group in the Microsoft Sentinel REST API](/rest/api/securityinsights/preview/incident-relations)
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
> > You can also contribute! Join us in the [Microsoft Sentinel Threat Hunters GitHub community](https://github.com/Azure/Azure-Sentinel/wiki).
+## May 2022
+
+- [Relate alerts to incidents](#relate-alerts-to-incidents-preview)
+- [Similar incidents](#similar-incidents-preview)
+
+### Relate alerts to incidents (Preview)
+
+You can now add alerts to, or remove alerts from, existing incidents, either manually or automatically, as part of your investigation processes. This allows you to refine the incident scope as the investigation unfolds. For example, relate Microsoft Defender for Cloud alerts, or alerts from third-party products, to incidents synchronized from Microsoft 365 Defender. Use this feature from the investigation graph, the API, or through automation playbooks.
+
+Learn more about [relating alerts to incidents](relate-alerts-to-incidents.md).
+
+### Similar incidents (Preview)
+
+When triaging or investigating an incident, context from the other incidents in your SOC can be extremely useful. For example, other incidents involving the same entities can provide useful context that helps you reach the right decision faster. There's now a new tab in the incident page that lists other incidents that are similar to the incident you're investigating. Common use cases for similar incidents include:
+
+- Finding other incidents that might be part of a larger attack story.
+- Using a similar incident as a reference for incident handling. The way the previous incident was handled can act as a guide for handling the current one.
+- Finding relevant people in your SOC who have handled similar incidents, for guidance or consultation.
+
+Learn more about [similar incidents](investigate-cases.md#similar-incidents-preview).
+ ## March 2022 - [Automation rules now generally available](#automation-rules-now-generally-available)
storage Storage How To Use Files Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-linux.md
description: Learn how to mount an Azure file share over SMB on Linux. See the l
Previously updated : 05/05/2021 Last updated : 05/16/2022
storageAccountKey=$(az storage account keys list \
--account-name $storageAccountName \ --query "[0].value" --output tsv | tr -d '"')
-sudo mount -t cifs $smbPath $mntPath -o username=$storageAccountName,password=$storageAccountKey,serverino
+sudo mount -t cifs $smbPath $mntPath -o username=$storageAccountName,password=$storageAccountKey,serverino,nosharesock,actimeo=30
``` # [SMB 3.0](#tab/smb30)
storageAccountKey=$(az storage account keys list \
--account-name $storageAccountName \ --query "[0].value" --output tsv | tr -d '"')
-sudo mount -t cifs $smbPath $mntPath -o vers=3.0,username=$storageAccountName,password=$storageAccountKey,serverino
+sudo mount -t cifs $smbPath $mntPath -o vers=3.0,username=$storageAccountName,password=$storageAccountKey,serverino,nosharesock,actimeo=30
``` # [SMB 2.1](#tab/smb21)
storageAccountKey=$(az storage account keys list \
--account-name $storageAccountName \ --query "[0].value" --output tsv | tr -d '"')
-sudo mount -t cifs $smbPath $mntPath -o vers=2.1,username=$storageAccountName,password=$storageAccountKey,serverino
+sudo mount -t cifs $smbPath $mntPath -o vers=2.1,username=$storageAccountName,password=$storageAccountKey,serverino,nosharesock,actimeo=30
```
httpEndpoint=$(az storage account show \
smbPath=$(echo $httpEndpoint | cut -c7-$(expr length $httpEndpoint))$fileShareName if [ -z "$(grep $smbPath\ $mntPath /etc/fstab)" ]; then
- echo "$smbPath $mntPath cifs nofail,credentials=$smbCredentialFile,serverino" | sudo tee -a /etc/fstab >
+ echo "$smbPath $mntPath cifs nofail,credentials=$smbCredentialFile,serverino,nosharesock,actimeo=30" | sudo tee -a /etc/fstab >
else echo "/etc/fstab was not modified to avoid conflicting entries as this Azure file share was already present. You may want to double check /etc/fstab to ensure the configuration is as desired." fi
virtual-machine-scale-sets Orchestration Modes Api Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/orchestration-modes-api-comparison.md
This article compares the API differences between Uniform and [Flexible orchestr
| Uniform API | Flexible alternative | |-|-| | [Deallocate](/rest/api/compute/virtualmachinescalesetvms/deallocate) | [Invoke Single VM API - Deallocate](/rest/api/compute/virtualmachines/deallocate) |
-| [Delete](/rest/api/compute/virtualmachinescalesetvms/delete) | [Invoke Single VM API -Delete](/rest/api/compute/virtualmachines/delete) |
+| [Delete](/rest/api/compute/virtualmachinescalesetvms/delete) | Batch delete API, supported by scale sets in Flexible orchestration mode |
| [Get Instance View](/rest/api/compute/virtualmachinescalesetvms/getinstanceview) | [Invoke Single VM API - Instance View](/rest/api/compute/virtualmachines/instanceview) | | [Perform Maintenance](/rest/api/compute/virtualmachinescalesetvms/performmaintenance) | [Invoke Single VM API - Perform Maintenance](/rest/api/compute/virtualmachines/performmaintenance) | | [Power Off](/rest/api/compute/virtualmachinescalesetvms/poweroff) | [Invoke Single VM API - Power Off](/rest/api/compute/virtualmachines/poweroff) |
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
The following table compares the Flexible orchestration mode, Uniform orchestrat
| Spot instances and pricing  | Yes, you can have both Spot and Regular priority instances | Yes, instances must either be all Spot or all Regular | No, Regular priority instances only | | Mix operating systems | Yes, Linux and Windows can reside in the same Flexible scale set | No, instances are the same operating system | Yes, Linux and Windows can reside in the same availability set | | Disk Types | Managed disks only, all storage types | Managed and unmanaged disks, all storage types | Managed and unmanaged disks, Ultradisk not supported |
+| Disk Server Side Encryption with Customer Managed Keys | Yes | Yes | Yes |
| Write Accelerator  | No | Yes | Yes | | Proximity Placement Groups  | Yes, read [Proximity Placement Groups documentation](../virtual-machine-scale-sets/proximity-placement-groups.md) | Yes, read [Proximity Placement Groups documentation](../virtual-machine-scale-sets/proximity-placement-groups.md) | Yes | | Azure Dedicated Hosts  | No | Yes | Yes |
The following table compares the Flexible orchestration mode, Uniform orchestrat
| Add/remove existing VM to the group | No | No | No | | Service Fabric | No | Yes | No | | Azure Kubernetes Service (AKS) / AKE | No | Yes | No |
-| UserData | Partial, UserData can be specified for individual VMs | Yes | UserData can be specified for individual VMs |
+| UserData | Yes | Yes | UserData can be specified for individual VMs |
### Autoscaling and instance orchestration
The following table compares the Flexible orchestration mode, Uniform orchestrat
| Instance Protection | No, use [Azure resource lock](../azure-resource-manager/management/lock-resources.md) | Yes | No | | Scale In Policy | No | Yes | No | | VMSS Get Instance View | No | Yes | N/A |
-| VM Batch Operations (Start all, Stop all, delete subset, etc.) | No (can trigger operations on each instance using VM API) | Yes | No |
+| VM Batch Operations (Start all, Stop all, delete subset, etc.) | Partial, batch delete is supported. Other operations can be triggered on each instance using the VM API | Yes | No |
### High availability 
virtual-machines Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-hosts.md
Azure Dedicated Host is a service that provides physical servers - able to host
Reserving the entire host provides the following benefits: - Hardware isolation at the physical server level. No other VMs will be placed on your hosts. Dedicated hosts are deployed in the same data centers and share the same network and underlying storage infrastructure as other, non-isolated hosts.-- Control over maintenance events initiated by the Azure platform. While the majority of maintenance events have little to no impact on your virtual machines, there are some sensitive workloads where each second of pause can have an impact. With dedicated hosts, you can opt-in to a maintenance window to reduce the impact to your service.
+- Control over maintenance events initiated by the Azure platform. While the majority of maintenance events have little to no impact on your virtual machines, there are some sensitive workloads where each second of pause can have an impact. With dedicated hosts, you can opt in to a maintenance window to reduce the impact to your service.
- With the Azure hybrid benefit, you can bring your own licenses for Windows and SQL to Azure. Using the hybrid benefits provides you with additional benefits. For more information, see [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/).
The infrastructure supporting your virtual machines may occasionally be updated
**Maintenance Control** provides customers with an option to skip regular platform updates scheduled on their dedicated hosts, then apply it at the time of their choice within a 35-day rolling window. Within the maintenance window, you can apply maintenance directly at the host level, in any order. Once the maintenance window is over, Microsoft will move forward and apply the pending maintenance to the hosts in an order which may not follow the user defined fault domains.
-For more information, see [Managing platform updates with Maintenance Control](./maintenance-control.md).
+For more information, see [Managing platform updates with Maintenance Control](./maintenance-configurations.md).
## Capacity considerations
virtual-machines Flexible Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/flexible-virtual-machine-scale-sets.md
The following tables list the Flexible orchestration mode features and links to
| Spot instances and pricing  | Yes, you can have both Spot and Regular priority instances | | Mix operating systems | Yes, Linux and Windows can reside in the same Flexible scale set | | Disk Types | Managed disks only, all storage types |
+| Disk Server Side Encryption with Customer Managed Keys | Yes |
| Write Accelerator  | No | | Proximity Placement Groups  | Yes, read [Proximity Placement Groups documentation](../virtual-machine-scale-sets/proximity-placement-groups.md) | | Azure Dedicated Hosts  | No |
The following tables list the Flexible orchestration mode features and links to
| Add/remove existing VM to the group | No | | Service Fabric | No | | Azure Kubernetes Service (AKS) / AKE | No |
-| UserData | Partial, UserData can be specified for individual VMs |
+| UserData | Yes |
### Autoscaling and instance orchestration
The following tables list the Flexible orchestration mode features and links to
| Instance Protection | No, use [Azure resource lock](../azure-resource-manager/management/lock-resources.md) | | Scale In Policy | No | | VMSS Get Instance View | No |
-| VM Batch Operations (Start all, Stop all, delete subset, etc.) | No (can trigger operations on each instance using VM API) |
+| VM Batch Operations (Start all, Stop all, delete subset, etc.) | Partial, batch delete is supported. Other operations can be triggered on each instance using the VM API |
### High availability 
virtual-machines Generalize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generalize.md
Previously updated : 03/24/2022 Last updated : 05/13/2022
Generalizing removes machine specific information so the image can be used to cr
## Linux
+Distribution-specific instructions for preparing Linux images for Azure are available here:
+- [Generic steps](./linux/create-upload-generic.md)
+- [CentOS](./linux/create-upload-centos.md)
+- [Debian](./linux/debian-create-upload-vhd.md)
+- [Flatcar](./linux/flatcar-create-upload-vhd.md)
+- [FreeBSD](./linux/freebsd-intro-on-azure.md)
+- [Oracle Linux](./linux/oracle-create-upload-vhd.md)
+- [OpenBSD](./linux/create-upload-openbsd.md)
+- [Red Hat](./linux/redhat-create-upload-vhd.md)
+- [SUSE](./linux/suse-create-upload-vhd.md)
+- [Ubuntu](./linux/create-upload-ubuntu.md)
+
+The following instructions cover only setting the VM to generalized. We recommend that you follow the distro-specific instructions for production workloads.
+ First you'll deprovision the VM by using the Azure VM agent to delete machine-specific files and data. Use the `waagent` command with the `-deprovision+user` parameter on your source Linux VM. For more information, see the [Azure Linux Agent user guide](./extensions/agent-linux.md). This process can't be reversed. 1. Connect to your Linux VM with an SSH client.
az vm generalize \
--name myVM ```
-## Windows
+## Windows
Sysprep removes all your personal account and security information, and then prepares the machine to be used as an image. For information about Sysprep, see [Sysprep overview](/windows-hardware/manufacture/desktop/sysprep--system-preparation--overview).
virtual-machines Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-version.md
Previously updated : 04/26/2022 Last updated : 05/13/2022
Allowed characters for the image version are numbers and periods. Numbers must b
When working through this article, replace the resource names where needed.
+For [generalized](generalize.md) images, see the OS-specific guidance before capturing the image:
+
+ - **Linux**
+ - [Generic steps](./linux/create-upload-generic.md)
+ - [CentOS](./linux/create-upload-centos.md)
+ - [Debian](./linux/debian-create-upload-vhd.md)
+ - [Flatcar](./linux/flatcar-create-upload-vhd.md)
+ - [FreeBSD](./linux/freebsd-intro-on-azure.md)
+ - [Oracle Linux](./linux/oracle-create-upload-vhd.md)
+ - [OpenBSD](./linux/create-upload-openbsd.md)
+ - [Red Hat](./linux/redhat-create-upload-vhd.md)
+ - [SUSE](./linux/suse-create-upload-vhd.md)
+ - [Ubuntu](./linux/create-upload-ubuntu.md)
+
+ - **Windows**
+
+ If you plan to run Sysprep before uploading your virtual hard disk (VHD) to Azure for the first time, make sure you have [prepared your VM](./windows/prepare-for-upload-vhd-image.md).
+ ## Community gallery (preview) > [!IMPORTANT]
When working through this article, replace the resource names where needed.
If you will be sharing your images using a [community gallery (preview)](azure-compute-gallery.md#community), make sure that you create your gallery, image definitions, and image versions in the same region.
-When users search for community gallery images, only the latest version of an image are shown.
+When users search for community gallery images, only the latest version of an image is shown.
## Create an image
You can also capture an existing VM as an image, from the portal. For more infor
Image definitions create a logical grouping for images. They are used to manage information about the image versions that are created within them.
-Create an image definition in a gallery using [az sig image-definition create](/cli/azure/sig/image-definition#az-sig-image-definition-create). Make sure your image definition is the right type. If you have generalized the VM (using Sysprep for Windows, or waagent -deprovision for Linux) then you should create a generalized image definition using `--os-state generalized`. If you want to use the VM without removing existing user accounts, create a specialized image definition using `--os-state specialized`.
+Create an image definition in a gallery using [az sig image-definition create](/cli/azure/sig/image-definition#az-sig-image-definition-create). Make sure your image definition is the right type. If you have [generalized](generalize.md) the VM (using `waagent -deprovision` for Linux, or Sysprep for Windows) then you should create a generalized image definition using `--os-state generalized`. If you want to use the VM without removing existing user accounts, create a specialized image definition using `--os-state specialized`.
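For example, a generalized Linux image definition might be created like this (all resource names are placeholders):

```azurecli-interactive
az sig image-definition create \
   --resource-group myGalleryRG \
   --gallery-name myGallery \
   --gallery-image-definition myImageDefinition \
   --publisher myPublisher \
   --offer myOffer \
   --sku mySKU \
   --os-type Linux \
   --os-state generalized
```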
For more information about the parameters you can specify for an image definition, see [Image definitions](shared-image-galleries.md#image-definitions).
virtual-machines Create Upload Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-generic.md
Title: Create and upload a Linux VHD
-description: Learn to create and upload an Azure virtual hard disk (VHD) that contains a Linux operating system.
+ Title: Prepare Linux for imaging
+description: Learn to prepare a Linux system to be used for an image in Azure.
Previously updated : 11/17/2021 Last updated : 05/13/2022
-# Information for Non-endorsed Distributions
+# Information for community supported and non-endorsed distributions
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
The Azure platform SLA applies to virtual machines running the Linux OS only whe
* [Linux on Azure - Endorsed Distributions](endorsed-distros.md) * [Support for Linux images in Microsoft Azure](https://support.microsoft.com/kb/2941892)
-All distributions running on Azure have a number of prerequisites. This article can't be comprehensive, as every distribution is different. Even if you meet all the criteria below, you may need to significantly tweak your Linux system for it to run properly.
-
-We recommend that you start with one of the [Linux on Azure Endorsed Distributions](endorsed-distros.md). The following articles show you how to prepare the various endorsed Linux distributions that are supported on Azure:
--- [CentOS-based Distributions](create-upload-centos.md)-- [Debian Linux](debian-create-upload-vhd.md)-- [Flatcar Container Linux](flatcar-create-upload-vhd.md)-- [Oracle Linux](oracle-create-upload-vhd.md)-- [Red Hat Enterprise Linux](redhat-create-upload-vhd.md)-- [SLES & openSUSE](suse-create-upload-vhd.md)-- [Ubuntu](create-upload-ubuntu.md)
+All other (non-Azure Marketplace) distributions running on Azure have a number of prerequisites. This article can't be comprehensive, as every distribution is different. Even if you meet all the criteria below, you may need to significantly tweak your Linux system for it to run properly.
This article focuses on general guidance for running your Linux distribution on Azure.
This article focuses on general guidance for running your Linux distribution on
7. Don't configure a swap partition on the OS disk. The Linux agent can be configured to create a swap file on the temporary resource disk, as described in the following steps. 8. All VHDs on Azure must have a virtual size aligned to 1 MB (1024 &times; 1024 bytes). When converting from a raw disk to VHD you must ensure that the raw disk size is a multiple of 1 MB before conversion, as described in the following steps.
+9. Use the most up-to-date distribution version, packages, and software.
+10. Remove user and system accounts, public keys, sensitive data, and unnecessary software and applications.
-> [!NOTE]
-> Make sure **'udf'** (cloud-init >= 21.2) and **'vfat'** modules are enable. Blocklisting the udf module will cause a provisioning failure and backlisting vfat module will cause both provisioning and boot failures. **_Cloud-init < 21.2 are not affected and does not require this change._**
->
### Installing kernel modules without Hyper-V
The mechanism for rebuilding the initrd or initramfs image may vary depending on
### Resizing VHDs VHD images on Azure must have a virtual size aligned to 1 MB. Typically, VHDs created using Hyper-V are aligned correctly. If the VHD isn't aligned correctly, you may receive an error message similar to the following when you try to create an image from your VHD.-
-* The VHD http:\//\<mystorageaccount>.blob.core.windows.net/vhds/MyLinuxVM.vhd has an unsupported virtual size of 21475270656 bytes. The size must be a whole number (in MBs).
-
+ ```output
+ The VHD http:\//\<mystorageaccount>.blob.core.windows.net/vhds/MyLinuxVM.vhd has an unsupported virtual size of 21475270656 bytes. The size must be a whole number (in MBs).
+ ```
In this case, resize the VM using either the Hyper-V Manager console or the [Resize-VHD](/powershell/module/hyper-v/resize-vhd) PowerShell cmdlet. If you aren't running in a Windows environment, we recommend using `qemu-img` to convert (if needed) and resize the VHD. > [!NOTE]
The [Azure Linux Agent](../extensions/agent-linux.md) `waagent` provisions a Lin
* In some cases, the Azure Linux Agent may not be compatible with NetworkManager. Many of the RPM/deb packages provided by distributions configure NetworkManager as a conflict to the waagent package. In these cases, it will uninstall NetworkManager when you install the Linux agent package. * The Azure Linux Agent must be at or above the [minimum supported version](https://support.microsoft.com/en-us/help/4049215/extensions-and-virtual-machine-agent-minimum-version-support).
+> [!NOTE]
+> Make sure the **'udf'** (cloud-init >= 21.2) and **'vfat'** modules are enabled. Blocklisting the udf module will cause a provisioning failure, and blocklisting the vfat module will cause both provisioning and boot failures. **_Cloud-init versions earlier than 21.2 are not affected and do not require this change._**
+>
+
## General Linux System Requirements

1. Modify the kernel boot line in GRUB or GRUB2 to include the following parameters, so that all console messages are sent to the first serial port. These messages can assist Azure support with debugging any issues.
   ```
- console=ttyS0,115200n8 earlyprintk=ttyS0,115200 rootdelay=300
+ GRUB_CMDLINE_LINUX="rootdelay=300 console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"
   ```
   We also recommend *removing* the following parameters if they exist.
   ```
The [Azure Linux Agent](../extensions/agent-linux.md) `waagent` provisions a Lin
   ```
   Graphical and quiet boot isn't useful in a cloud environment, where we want all logs sent to the serial port. The `crashkernel` option may be left configured if needed, but note that this parameter reduces the amount of available memory in the VM by at least 128 MB, which may be problematic for smaller VM sizes.
+2. After you are done editing /etc/default/grub, run the following command to rebuild the grub configuration:
+ ```bash
+ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
+ ```
+1. Add the Hyper-V modules to both the initrd and initramfs rebuild instructions (Dracut).
+1. Rebuild the initrd or initramfs image.
+   **Initramfs**
+   ```bash
+   # Back up the current initramfs image, then rebuild it so it includes the Hyper-V modules
+   sudo cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak
+   sudo dracut -f -v /boot/initramfs-$(uname -r).img
+   # Rebuild the GRUB configuration, using whichever command matches your GRUB version
+   sudo grub-mkconfig -o /boot/grub/grub.cfg
+   sudo grub2-mkconfig -o /boot/grub2/grub.cfg
+   ```
+
+ **Initrd**
+ ```bash
+   # Back up the current initrd image, then rebuild it for the target kernel version
+   sudo mv /boot/initrd.img-[kernel version] /boot/initrd.img-[kernel version].bak
+   sudo update-initramfs -c -k [kernel version]
+   sudo update-grub
+ ```
+1. Ensure that the SSH server is installed, and configured to start at boot time. This configuration is usually the default.
1. Install the Azure Linux Agent.
-
- The Azure Linux Agent is required for provisioning a Linux image on Azure. Many distributions provide the agent as an RPM or .deb package (the package is typically called WALinuxAgent or walinuxagent). The agent can also be installed manually by following the steps in the [Linux Agent Guide](../extensions/agent-linux.md).
-
-1. Ensure that the SSH server is installed, and configured to start at boot time. This configuration is usually the default.
-
-1. Don't create swap space on the OS disk.
-
- The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached to the VM after provisioning on Azure. The local resource disk is a *temporary* disk, and might be emptied when the VM is deprovisioned. After installing the Azure Linux Agent (step 2 above), modify the following parameters in /etc/waagent.conf as needed.
- ```
- ResourceDisk.Format=y
- ResourceDisk.Filesystem=ext4
- ResourceDisk.MountPoint=/mnt/resource
- ResourceDisk.EnableSwap=y
- ResourceDisk.SwapSizeMB=2048 ## NOTE: Set this to your desired size.
+ The Azure Linux Agent is required for provisioning a Linux image on Azure. Many distributions provide the agent as an RPM or .deb package (the package is typically called WALinuxAgent or walinuxagent). The agent can also be installed manually by following the steps in the [Linux Agent Guide](../extensions/agent-linux.md).
+
+ Install the Azure Linux Agent, cloud-init, and other necessary utilities by running the following commands:
+
+ **Red Hat/CentOS**
+ ```bash
+ sudo yum install -y WALinuxAgent cloud-init cloud-utils-growpart gdisk hyperv-daemons
+ ```
+ **Ubuntu/Debian**
+ ```bash
+ sudo apt install walinuxagent cloud-init cloud-utils-growpart gdisk hyperv-daemons
+ ```
+ **SUSE**
+ ```bash
+ sudo zypper install python-azure-agent cloud-init cloud-utils-growpart gdisk hyperv-daemons
+ ```
+ Then enable the agent and cloud-init on all distributions using:
+ ```bash
+ sudo systemctl enable waagent.service
+ sudo systemctl enable cloud-init.service
+ ```
++
+1. Don't create swap space on the OS disk. The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached to the VM after provisioning on Azure. The local resource disk is a temporary disk, and might be emptied when the VM is deprovisioned. After installing the Azure Linux Agent, modify the following parameters in /etc/waagent.conf as needed.
```
-1. Run the following commands to deprovision the virtual machine.
-
+ ResourceDisk.Format=y
+ ResourceDisk.Filesystem=ext4
+ ResourceDisk.MountPoint=/mnt/resource
+ ResourceDisk.EnableSwap=y
+ ResourceDisk.SwapSizeMB=2048 ## NOTE: Set this to your desired size.
+ ```
+
+9. Configure cloud-init to handle the provisioning:
+ 1. Configure waagent for cloud-init:
+ ```bash
+ sed -i 's/Provisioning.Agent=auto/Provisioning.Agent=cloud-init/g' /etc/waagent.conf
+ sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf
+ sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf
+ ```
+ If you are migrating a specific virtual machine and do not wish to create a generalized image, set `Provisioning.Agent=disabled` in the `/etc/waagent.conf` config.
+ 1. Configure mounts:
+ ```bash
+ echo "Adding mounts and disk_setup to init stage"
+ sed -i '/ - mounts/d' /etc/cloud/cloud.cfg
+ sed -i '/ - disk_setup/d' /etc/cloud/cloud.cfg
+ sed -i '/cloud_init_modules/a\\ - mounts' /etc/cloud/cloud.cfg
+ sed -i '/cloud_init_modules/a\\ - disk_setup' /etc/cloud/cloud.cfg
+ ```
+ 2. Configure Azure datasource:
+ ```
+ echo "Allow only Azure datasource, disable fetching network setting via IMDS"
+ cat > /etc/cloud/cloud.cfg.d/91-azure_datasource.cfg <<EOF
+ datasource_list: [ Azure ]
+ datasource:
+ Azure:
+ apply_network_config: False
+ EOF
+ ```
+ 3. If configured, remove existing swapfile:
+ ```
+ if [[ -f /mnt/resource/swapfile ]]; then
+ echo "Removing swapfile" #RHEL uses a swapfile by defaul
+ swapoff /mnt/resource/swapfile
+ rm /mnt/resource/swapfile -f
+ fi
+ ```
+ 4. Configure cloud-init logging:
+ ```
+ echo "Add console log file"
+ cat >> /etc/cloud/cloud.cfg.d/05_logging.cfg <<EOF
+
+ # This tells cloud-init to redirect its stdout and stderr to
+ # 'tee -a /var/log/cloud-init-output.log' so the user can see output
+ # there without needing to look on the console.
+ output: {all: '| tee -a /var/log/cloud-init-output.log'}
+ EOF
+ ```
+
+10. Swap configuration. Do not create swap space on the operating system disk.
+ Previously, the Azure Linux Agent automatically configured swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However, this is now handled by cloud-init. To ensure that the Linux Agent doesn't format the resource disk or create the swap file, modify the following parameters in /etc/waagent.conf accordingly:
+ ```
+ ResourceDisk.Format=n
+ ResourceDisk.EnableSwap=n
```
- sudo waagent -force -deprovision
- export HISTSIZE=0
- logout
- ```
+ If you want to mount, format, and create swap, you can either:
+ 1. Pass this in as a cloud-init config every time you create a VM through `customdata`. This is the recommended method.
+ 2. Use a cloud-init directive baked into the image that will do this every time the VM is created.
+ ```
+ echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
+ cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF
+ #cloud-config
+ # Generated by Azure cloud image build
+ disk_setup:
+ ephemeral0:
+ table_type: mbr
+ layout: [66, [33, 82]]
+ overwrite: True
+ fs_setup:
+ - device: ephemeral0.1
+ filesystem: ext4
+ - device: ephemeral0.2
+ filesystem: swap
+ mounts:
+ - ["ephemeral0.1", "/mnt"]
+ - ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"]
+ EOF
+ ```
+
+1. Deprovision.
+ > [!CAUTION]
+ > If you are migrating a specific virtual machine and do not wish to create a generalized image, skip the deprovision step. Running the command `waagent -force -deprovision+user` will render the source machine unusable; this step is intended only for creating a generalized image.
+
+ Run the following commands to deprovision the virtual machine.
+
+ ```bash
+ sudo rm -f /var/log/waagent.log
+ sudo cloud-init clean
+ sudo waagent -force -deprovision+user
+ rm -f ~/.bash_history
+ export HISTSIZE=0
+ logout
+ ```
+
> [!NOTE] > On Virtualbox you may see the following error after running `waagent -force -deprovision` that says `[Errno 5] Input/output error`. This error message is not critical and can be ignored.
-* Shut down the virtual machine and upload the VHD to Azure.
+1. Shut down the virtual machine and upload the VHD to Azure.
+
+## Next steps
+[Create a Linux VM from a custom disk with the Azure CLI](upload-vhd.md).
virtual-machines Disks Enable Customer Managed Keys Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-enable-customer-managed-keys-cli.md
# Use the Azure CLI to enable server-side encryption with customer-managed keys for managed disks
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
Azure Disk Storage allows you to manage your own keys when using server-side encryption (SSE) for managed disks, if you choose. For conceptual information on SSE with customer managed keys, as well as other managed disk encryption types, see the [Customer-managed keys](../disk-encryption.md#customer-managed-keys) section of our disk encryption article.
virtual-machines Maintenance And Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-and-updates.md
Azure periodically updates its platform to improve the reliability, performance,
Updates rarely affect the hosted VMs. When updates do have an effect, Azure chooses the least impactful method for updates: - If the update doesn't require a reboot, the VM is paused while the host is updated, or the VM is live-migrated to an already updated host. -- If maintenance requires a reboot, you're notified of the planned maintenance. Azure also provides a time window in which you can start the maintenance yourself, at a time that works for you. The self-maintenance window is typically 35 days unless the maintenance is urgent. Azure is investing in technologies to reduce the number of cases in which planned platform maintenance requires the VMs to be rebooted. For instructions on managing planned maintenance, see Handling planned maintenance notifications using the Azure [CLI](maintenance-notifications-cli.md), [PowerShell](maintenance-notifications-powershell.md) or [portal](maintenance-notifications-portal.md).
+- If maintenance requires a reboot, you're notified of the planned maintenance. Azure also provides a time window in which you can start the maintenance yourself, at a time that works for you. The self-maintenance window is typically 35 days (for Host machines) unless the maintenance is urgent. Azure is investing in technologies to reduce the number of cases in which planned platform maintenance requires the VMs to be rebooted. For instructions on managing planned maintenance, see Handling planned maintenance notifications using the Azure [CLI](maintenance-notifications-cli.md), [PowerShell](maintenance-notifications-powershell.md) or [portal](maintenance-notifications-portal.md).
This page describes how Azure performs both types of maintenance. For more information about unplanned events (outages), see [Manage the availability of VMs for Windows](./availability.md) or the corresponding article for [Linux](./availability.md).
These maintenance operations that don't require a reboot are applied one fault d
These types of updates can affect some applications. When the VM is live-migrated to a different host, some sensitive workloads might show a slight performance degradation in the few minutes leading up to the VM pause. To prepare for VM maintenance and reduce impact during Azure maintenance, try [using Scheduled Events for Windows](./windows/scheduled-events.md) or [Linux](./linux/scheduled-events.md) for such applications.
-For greater control on all maintenance activities including zero-impact and rebootless updates, you can use Maintenance Control feature. You must be using either [Azure Dedicated Hosts](./dedicated-hosts.md) or an [isolated VM](../security/fundamentals/isolation-choices.md). Maintenance control gives you the option to skip all platform updates and apply the updates at your choice of time within a 35-day rolling window. For more information, see [Control updates with Maintenance Control and the Azure CLI](maintenance-control.md).
+For greater control over all maintenance activities, including zero-impact and rebootless updates, you can use the Maintenance Configurations feature. Creating a maintenance configuration gives you the option to skip all platform updates and apply the updates at a time of your choice. For more information, see [Managing platform updates with Maintenance Configurations](maintenance-configurations.md).
### Live migration
virtual-machines Maintenance Configurations Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations-cli.md
+
+ Title: Maintenance Configurations for Azure virtual machines using CLI
+description: Learn how to control when maintenance is applied to your Azure VMs using Maintenance configurations and CLI.
+++++ Last updated : 11/20/2020+
+#pmcontact: shants
++
+# Control updates with Maintenance Configurations and the Azure CLI
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+Maintenance Configurations lets you decide when to apply platform updates to various Azure resources. This topic covers the Azure CLI options for Dedicated Hosts and Isolated VMs. For more about benefits of using Maintenance Configurations, its limitations, and other management options, see [Managing platform updates with Maintenance Configurations](maintenance-configurations.md).
+
+> [!IMPORTANT]
+> There are different **scopes** that support certain machine types and schedules, so make sure you select the right scope for your virtual machine.
+
+## Create a maintenance configuration
+
+Use `az maintenance configuration create` to create a maintenance configuration. This example creates a maintenance configuration named *myConfig* scoped to the host.
+
+```azurecli-interactive
+az group create \
+ --location eastus \
+ --name myMaintenanceRG
+az maintenance configuration create \
+ -g myMaintenanceRG \
+ --resource-name myConfig \
+ --maintenance-scope host \
+ --location eastus
+```
+
+Copy the configuration ID from the output to use later.
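+
+If you'd rather not copy the ID by hand, you can capture it in a shell variable with `az maintenance configuration show` (a minimal sketch; `MAINT_CONFIG_ID` is just an illustrative variable name):
+
+```azurecli-interactive
+# Look up the configuration and keep its ID for the assignment step later.
+MAINT_CONFIG_ID=$(az maintenance configuration show \
+    -g myMaintenanceRG \
+    --resource-name myConfig \
+    --query id -o tsv)
+echo $MAINT_CONFIG_ID
+```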
+
+Using `--maintenance-scope host` ensures that the maintenance configuration is used for controlling updates to the host infrastructure.
+
+If you try to create a configuration with the same name, but in a different location, you will get an error. Configuration names must be unique to your resource group.
+
+You can query for available maintenance configurations using `az maintenance configuration list`.
+
+```azurecli-interactive
+az maintenance configuration list --query "[].{Name:name, ID:id}" -o table
+```
+
+### Create a maintenance configuration with scheduled window
+You can also declare a scheduled window when Azure will apply the updates to your resources. This example creates a maintenance configuration named *myConfig* with a scheduled window of 5 hours on the fourth Monday of every month. Once you create a scheduled window, you no longer have to apply the updates manually.
+
+```azurecli-interactive
+az maintenance configuration create \
+ -g myMaintenanceRG \
+ --resource-name myConfig \
+ --maintenance-scope host \
+ --location eastus \
+ --maintenance-window-duration "05:00" \
+ --maintenance-window-recur-every "Month Fourth Monday" \
+ --maintenance-window-start-date-time "2020-12-30 08:00" \
+ --maintenance-window-time-zone "Pacific Standard Time"
+```
+
+> [!IMPORTANT]
+> Maintenance **duration** must be *2 hours* or longer.
++
+Maintenance recurrence can be expressed as daily, weekly or monthly. Some examples are:
+- **daily**- maintenance-window-recur-every: "Day" **or** "3Days"
+- **weekly**- maintenance-window-recur-every: "3Weeks" **or** "Week Saturday,Sunday"
+- **monthly**- maintenance-window-recur-every: "Month day23,day24" **or** "Month Last Sunday" **or** "Month Fourth Monday"
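+
+For example, a configuration that recurs every weekend might look like the following sketch (*myWeeklyConfig* is an illustrative name):
+
+```azurecli-interactive
+az maintenance configuration create \
+    -g myMaintenanceRG \
+    --resource-name myWeeklyConfig \
+    --maintenance-scope host \
+    --location eastus \
+    --maintenance-window-duration "04:00" \
+    --maintenance-window-recur-every "Week Saturday,Sunday" \
+    --maintenance-window-start-date-time "2020-12-30 08:00" \
+    --maintenance-window-time-zone "Pacific Standard Time"
+```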
++
+## Assign the configuration
+
+Use `az maintenance assignment create` to assign the configuration to your machine.
+
+### Isolated VM
+
+Apply the configuration to a VM using the ID of the configuration. Specify `--resource-type virtualMachines` and supply the name of the VM for `--resource-name`, the resource group of the VM for `--resource-group`, and the location of the VM for `--location`.
+
+```azurecli-interactive
+az maintenance assignment create \
+ --resource-group myMaintenanceRG \
+ --location eastus \
+ --resource-name myVM \
+ --resource-type virtualMachines \
+ --provider-name Microsoft.Compute \
+ --configuration-assignment-name myConfig \
+ --maintenance-configuration-id "/subscriptions/1111abcd-1a11-1a2b-1a12-123456789abc/resourcegroups/myMaintenanceRG/providers/Microsoft.Maintenance/maintenanceConfigurations/myConfig"
+```
+
+### Dedicated host
+
+To apply a configuration to a dedicated host, you need to include `--resource-type hosts`, `--resource-parent-name` with the name of the host group, and `--resource-parent-type hostGroups`.
+
+The parameter `--resource-id` is the ID of the host. You can use [az vm host get-instance-view](/cli/azure/vm/host#az-vm-host-get-instance-view) to get the ID of your dedicated host.
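+
+For example, you can capture the host ID in a variable (a sketch reusing the illustrative names from this section):
+
+```azurecli-interactive
+# Look up the full resource ID of the dedicated host.
+HOST_ID=$(az vm host get-instance-view \
+    -g myDHResourceGroup \
+    --host-group myHostGroup \
+    --name myHost \
+    --query id -o tsv)
+echo $HOST_ID
+```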
+
+```azurecli-interactive
+az maintenance assignment create \
+ -g myDHResourceGroup \
+ --resource-name myHost \
+ --resource-type hosts \
+ --provider-name Microsoft.Compute \
+ --configuration-assignment-name myConfig \
+ --maintenance-configuration-id "/subscriptions/1111abcd-1a11-1a2b-1a12-123456789abc/resourcegroups/myDhResourceGroup/providers/Microsoft.Maintenance/maintenanceConfigurations/myConfig" \
+ -l eastus \
+ --resource-parent-name myHostGroup \
+ --resource-parent-type hostGroups
+```
+
+## Check configuration
+
+You can verify that the configuration was applied correctly, or check to see what configuration is currently applied using `az maintenance assignment list`.
+
+### Isolated VM
+
+```azurecli-interactive
+az maintenance assignment list \
+ --provider-name Microsoft.Compute \
+ --resource-group myMaintenanceRG \
+ --resource-name myVM \
+ --resource-type virtualMachines \
+ --query "[].{resource:resourceGroup, configName:name}" \
+ --output table
+```
+
+### Dedicated host
+
+```azurecli-interactive
+az maintenance assignment list \
+ --resource-group myDHResourceGroup \
+ --resource-name myHost \
+ --resource-type hosts \
+ --provider-name Microsoft.Compute \
+ --resource-parent-name myHostGroup \
+ --resource-parent-type hostGroups \
+ --query "[].{ResourceGroup:resourceGroup,configName:name}" \
+ -o table
+```
++
+## Check for pending updates
+
+Use `az maintenance update list` to see if there are pending updates. Set `--subscription` to the ID of the subscription that contains the VM.
+
+If there are no updates, the command will return an error message, which will contain the text: `Resource not found...StatusCode: 404`.
+
+If there are updates, only one will be returned, even if there are multiple updates pending. The data for this update will be returned in an object:
+
+```text
+[
+ {
+ "impactDurationInSec": 9,
+ "impactType": "Freeze",
+ "maintenanceScope": "Host",
+ "notBefore": "2020-03-03T07:23:04.905538+00:00",
+ "resourceId": "/subscriptions/9120c5ff-e78e-4bd0-b29f-75c19cadd078/resourcegroups/DemoRG/providers/Microsoft.Compute/hostGroups/demoHostGroup/hosts/myHost",
+ "status": "Pending"
+ }
+]
+ ```
+
+### Isolated VM
+
+Check for pending updates for an isolated VM. In this example, the output is formatted as a table for readability.
+
+```azurecli-interactive
+az maintenance update list \
+ -g myMaintenanceRg \
+ --resource-name myVM \
+ --resource-type virtualMachines \
+ --provider-name Microsoft.Compute \
+ -o table
+```
+
+### Dedicated host
+
+Check for pending updates for a dedicated host. In this example, the output is formatted as a table for readability. Replace the values for the resources with your own.
+
+```azurecli-interactive
+az maintenance update list \
+ --subscription 1111abcd-1a11-1a2b-1a12-123456789abc \
+ -g myHostResourceGroup \
+ --resource-name myHost \
+ --resource-type hosts \
+ --provider-name Microsoft.Compute \
+ --resource-parent-name myHostGroup \
+ --resource-parent-type hostGroups \
+ -o table
+```
+
+## Apply updates
+
+Use `az maintenance applyupdate create` to apply pending updates. On success, this command returns JSON containing the details of the update. Apply update calls can take up to 2 hours to complete.
+
+### Isolated VM
+
+Create a request to apply updates to an isolated VM.
+
+```azurecli-interactive
+az maintenance applyupdate create \
+ --subscription 1111abcd-1a11-1a2b-1a12-123456789abc \
+ --resource-group myMaintenanceRG \
+ --resource-name myVM \
+ --resource-type virtualMachines \
+ --provider-name Microsoft.Compute
+```
++
+### Dedicated host
+
+Apply updates to a dedicated host.
+
+```azurecli-interactive
+az maintenance applyupdate create \
+ --subscription 1111abcd-1a11-1a2b-1a12-123456789abc \
+ --resource-group myHostResourceGroup \
+ --resource-name myHost \
+ --resource-type hosts \
+ --provider-name Microsoft.Compute \
+ --resource-parent-name myHostGroup \
+ --resource-parent-type hostGroups
+```
+
+## Check the status of applying updates
+
+You can check on the progress of the updates using `az maintenance applyupdate get`.
+
+You can use `default` as the update name to see results for the last update, or replace `myUpdateName` with the name of the update that was returned when you ran `az maintenance applyupdate create`.
+
+```text
+Status : Completed
+ResourceId : /subscriptions/12ae7457-4a34-465c-94c1-17c058c2bd25/resourcegroups/TestShantS/providers/Microsoft.Comp
+ute/virtualMachines/DXT-test-04-iso
+LastUpdateTime : 1/1/2020 12:00:00 AM
+Id : /subscriptions/12ae7457-4a34-465c-94c1-17c058c2bd25/resourcegroups/TestShantS/providers/Microsoft.Comp
+ute/virtualMachines/DXT-test-04-iso/providers/Microsoft.Maintenance/applyUpdates/default
+Name : default
+Type : Microsoft.Maintenance/applyUpdates
+```
+LastUpdateTime is the time when the update completed, whether you initiated it or the platform applied it because the self-maintenance window wasn't used. If an update has never been applied through Maintenance Configurations, it shows the default value.
+
+### Isolated VM
+
+```azurecli-interactive
+az maintenance applyupdate get \
+ --resource-group myMaintenanceRG \
+ --resource-name myVM \
+ --resource-type virtualMachines \
+ --provider-name Microsoft.Compute \
+ --apply-update-name default
+```
+
+### Dedicated host
+
+```azurecli-interactive
+az maintenance applyupdate get \
+ --subscription 1111abcd-1a11-1a2b-1a12-123456789abc \
+ --resource-group myMaintenanceRG \
+ --resource-name myHost \
+ --resource-type hosts \
+ --provider-name Microsoft.Compute \
+ --resource-parent-name myHostGroup \
+ --resource-parent-type hostGroups \
+ --apply-update-name myUpdateName \
+ --query "{LastUpdate:lastUpdateTime, Name:name, ResourceGroup:resourceGroup, Status:status}" \
+ --output table
+```
++
+## Delete a maintenance configuration
+
+Use `az maintenance configuration delete` to delete a maintenance configuration. Deleting the configuration removes the maintenance control from the associated resources.
+
+```azurecli-interactive
+az maintenance configuration delete \
+ --subscription 1111abcd-1a11-1a2b-1a12-123456789abc \
+ -g myResourceGroup \
+ --resource-name myConfig
+```
+
+## Next steps
+To learn more, see [Maintenance and updates](maintenance-and-updates.md).
virtual-machines Maintenance Configurations Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations-portal.md
+
+ Title: Maintenance Configurations for Azure virtual machines using the Azure portal
+description: Learn how to control when maintenance is applied to your Azure VMs using Maintenance Configurations and the Azure portal.
+++++ Last updated : 03/24/2022+
+#pmcontact: shants
++
+# Control updates with Maintenance Configurations and the Azure portal
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+With Maintenance Configurations, you can now take more control over when to apply updates to various Azure resources. This topic covers the Azure portal options for creating Maintenance Configurations. For more about the benefits of using Maintenance Configurations, its limitations, and other management options, see [Managing platform updates with Maintenance Configurations](maintenance-configurations.md).
+
+## Create a Maintenance Configuration
+
+1. Sign in to the Azure portal.
+
+1. Search for **Maintenance Configurations**.
+
+ :::image type="content" source="media/virtual-machines-maintenance-control-portal/maintenance-configurations-search-bar.png" alt-text="Screenshot showing how to open Maintenance Configurations":::
+
+1. Click **Create**.
+
+ :::image type="content" source="media/virtual-machines-maintenance-control-portal/maintenance-configurations-add-2.png" alt-text="Screenshot showing how to add a maintenance configuration":::
+
+1. In the Basics tab, choose a subscription and resource group, provide a name for the configuration, choose a region, and select the scope that you want to apply updates for. Click **Add a schedule** to add or modify the schedule for your configuration.
+
+ > [!IMPORTANT]
+ > Different **scopes** support different machine types and schedules, so make sure you select the right scope for your virtual machine.
+
+ :::image type="content" source="media/virtual-machines-maintenance-control-portal/maintenance-configurations-basics-tab.png" alt-text="Screenshot showing Maintenance Configuration basics":::
+
+1. In the Schedule tab, declare a scheduled window in which Azure applies the updates to your resources. Set a start date, maintenance window, and recurrence if your resource requires it. Once you create a scheduled window, you no longer have to apply the updates manually. Click **Next**.
+
+ > [!IMPORTANT]
+ > Maintenance window **duration** must be *2 hours* or longer.
+
+ :::image type="content" source="media/virtual-machines-maintenance-control-portal/maintenance-configurations-schedule-tab.png" alt-text="Screenshot showing Maintenance Configuration schedule":::
+
+1. In the Machines tab, assign resources now or skip this step and assign resources later after maintenance configuration deployment. Click **Next**.
+
+1. Add tags and values. Click **Next**.
+
+ :::image type="content" source="media/virtual-machines-maintenance-control-portal/maintenance-configurations-tags-tab.png" alt-text="Screenshot showing how to add tags to a maintenance configuration":::
+
+1. Review the summary. Click **Create**.
+
+1. After the deployment is complete, click **Go to resource**.
++
+## Assign the configuration
+
+On the details page of the maintenance configuration, click **Machines** and then click **Add Machine**.
+
+![Screenshot showing how to assign a resource](media/virtual-machines-maintenance-control-portal/maintenance-configurations-add-assignment.png)
+
+Select the resources that you want the maintenance configuration assigned to and click **OK**. The VM needs to be running to assign the configuration. An error occurs if you try to assign a configuration to a VM that is stopped.
+
+
+![Screenshot showing how to select a resource](media/virtual-machines-maintenance-control-portal/maintenance-configurations-select-resource.png)
+
+## Check configuration
+
+You can verify that the configuration was applied correctly, or check which maintenance configuration is currently assigned to a machine, by going to **Maintenance Configurations** and checking the **Machines** tab. Any machine that you have assigned the configuration to appears in this tab.
+
+![Screenshot showing how to check a maintenance configuration](media/virtual-machines-maintenance-control-portal/maintenance-configurations-host-type.png)
+
+<!-- You can also check the configuration for a specific virtual machine on its properties page. Click **Maintenance** to see the configuration assigned to that virtual machine.
+
+![Screenshot showing how to check Maintenance for a host](media/virtual-machines-maintenance-control-portal/maintenance-configurations-check-config.png) -->
+
+## Check for pending updates
+
+You can check if there are any updates pending for a maintenance configuration. In **Maintenance Configurations**, on the details for the configuration, click **Machines** and check **Maintenance status**.
+
+![Screenshot showing how to check pending updates](media/virtual-machines-maintenance-control-portal/maintenance-configurations-pending.png)
+
+<!-- You can also check a specific host using **Virtual Machines** or properties of the dedicated host.
+
+![Screenshot that shows the highlighted maintenance state.](media/virtual-machines-maintenance-control-portal/maintenance-configurations-pending-vm.png) -->
+
+<!-- ## Apply updates
+
+You can apply pending updates on demand. On the VM or Azure Dedicated Host details, click **Maintenance** and click **Apply maintenance now**. Apply update calls can take upto 2 hours to complete.
+
+![Screenshot showing how to apply pending updates](media/virtual-machines-maintenance-control-portal/maintenance-configurations-apply-updates-now.png)
+
+## Check the status of applying updates
+
+You can check on the progress of the updates for a configuration in **Maintenance Configurations** or using **Virtual Machines**. On the VM details, click **Maintenance**. In the following example, the **Maintenance state** shows an update is **Pending**.
+
+![Screenshot showing how to check status of pending updates](media/virtual-machines-maintenance-control-portal/maintenance-configurations-status.png) -->
+
+## Delete a maintenance configuration
+
+To delete a configuration, open the configuration details and click **Delete**.
+
+![Screenshot that shows how to delete a configuration.](media/virtual-machines-maintenance-control-portal/maintenance-configurations-delete.png)
++
+## Next steps
+
+To learn more, see [Maintenance and updates](maintenance-and-updates.md).
virtual-machines Maintenance Configurations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations-powershell.md
+
+ Title: Maintenance Configurations for Azure virtual machines using PowerShell
+description: Learn how to control when maintenance is applied to your Azure VMs using Maintenance Configurations and PowerShell.
+++++ Last updated : 11/19/2020++
+#pmcontact: shants
++
+# Control updates with Maintenance Configurations and Azure PowerShell
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+Creating a Maintenance Configuration lets you decide when to apply platform updates to various Azure resources. This topic covers the Azure PowerShell options for Dedicated Hosts and Isolated VMs. For more about the benefits of using Maintenance Configurations, its limitations, and other management options, see [Managing platform updates with Maintenance Configurations](maintenance-configurations.md).
+
+If you are looking for information about Maintenance Configurations for scale sets, see [Maintenance Control for virtual machine scale sets](virtual-machine-scale-sets-maintenance-control.md).
+
+> [!IMPORTANT]
+> Different **scopes** support different machine types and schedules, so make sure you select the right scope for your virtual machine.
+
+## Enable the PowerShell module
+
+Make sure `PowerShellGet` is up to date.
+
+```azurepowershell-interactive
+Install-Module -Name PowerShellGet -Repository PSGallery -Force
+```
+
+Install the `Az.Maintenance` PowerShell module.
+
+```azurepowershell-interactive
+Install-Module -Name Az.Maintenance
+```
+
+If you are installing locally, make sure you open your PowerShell prompt as an administrator.
+
+You may also be asked to confirm that you want to install from an *untrusted repository*. Type `Y` or select **Yes to All** to install the module.
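+
+You can confirm that the module is available before continuing (an optional sanity check):
+
+```azurepowershell-interactive
+# List the installed Az.Maintenance module and its version.
+Get-Module -Name Az.Maintenance -ListAvailable
+```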
++
+## Create a maintenance configuration
+
+Create a resource group as a container for your configuration. In this example, a resource group named *myMaintenanceRG* is created in *eastus*. If you already have a resource group that you want to use, you can skip this part and replace the resource group name with your own in the rest of the examples.
+
+```azurepowershell-interactive
+New-AzResourceGroup `
+ -Location eastus `
+ -Name myMaintenanceRG
+```
+
+Use [New-AzMaintenanceConfiguration](/powershell/module/az.maintenance/new-azmaintenanceconfiguration) to create a maintenance configuration. This example creates a maintenance configuration named *myConfig* scoped to the host.
+
+```azurepowershell-interactive
+$config = New-AzMaintenanceConfiguration `
+ -ResourceGroup myMaintenanceRG `
+ -Name myConfig `
+ -MaintenanceScope host `
+ -Location eastus
+```
+
+Using `-MaintenanceScope host` ensures that the maintenance configuration is used for controlling updates to the host.
+
+If you try to create a configuration with the same name, but in a different location, you will get an error. Configuration names must be unique to your resource group.
+
+You can query for available maintenance configurations using [Get-AzMaintenanceConfiguration](/powershell/module/az.maintenance/get-azmaintenanceconfiguration).
+
+```azurepowershell-interactive
+Get-AzMaintenanceConfiguration | Format-Table -Property Name,Id
+```
+
+### Create a maintenance configuration with scheduled window
+
+You can also declare a scheduled window when Azure will apply the updates to your resources. This example creates a maintenance configuration named *myConfig* with a scheduled window of 5 hours on the fourth Monday of every month. Once you create a scheduled window, you no longer have to apply the updates manually.
+
+```azurepowershell-interactive
+$config = New-AzMaintenanceConfiguration `
+ -ResourceGroup $RGName `
+ -Name $MaintenanceConfig `
+ -MaintenanceScope Host `
+ -Location $location `
+ -StartDateTime "2020-10-01 00:00" `
+ -TimeZone "Pacific Standard Time" `
+ -Duration "05:00" `
+ -RecurEvery "Month Fourth Monday"
+```
+> [!IMPORTANT]
+> Maintenance **duration** must be *2 hours* or longer.
+
+Maintenance **recurrence** can be expressed as daily, weekly or monthly. Some examples are:
+ - **daily**- RecurEvery "Day" **or** "3Days"
+ - **weekly**- RecurEvery "3Weeks" **or** "Week Saturday,Sunday"
+ - **monthly**- RecurEvery "Month day23,day24" **or** "Month Last Sunday" **or** "Month Fourth Monday"
+
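+For example, a configuration that recurs every weekend might look like the following sketch (it reuses the illustrative variables from the example above):
+
+```azurepowershell-interactive
+$config = New-AzMaintenanceConfiguration `
+   -ResourceGroup $RGName `
+   -Name $MaintenanceConfig `
+   -MaintenanceScope Host `
+   -Location $location `
+   -StartDateTime "2020-10-01 00:00" `
+   -TimeZone "Pacific Standard Time" `
+   -Duration "04:00" `
+   -RecurEvery "Week Saturday,Sunday"
+```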
+
+## Assign the configuration
+
+Use [New-AzConfigurationAssignment](/powershell/module/az.maintenance/new-azconfigurationassignment) to assign the configuration to your isolated VM or Azure Dedicated Host.
+
+### Isolated VM
+
+Apply the configuration to a VM using the ID of the configuration. Specify `-ResourceType VirtualMachines` and supply the name of the VM for `-ResourceName` and the resource group of the VM for `-ResourceGroupName`.
+
+```azurepowershell-interactive
+New-AzConfigurationAssignment `
+ -ResourceGroupName myResourceGroup `
+ -Location eastus `
+ -ResourceName myVM `
+ -ResourceType VirtualMachines `
+ -ProviderName Microsoft.Compute `
+ -ConfigurationAssignmentName $config.Name `
+ -MaintenanceConfigurationId $config.Id
+```
+
+### Dedicated host
+
+To apply a configuration to a dedicated host, you also need to include `-ResourceType hosts`, `-ResourceParentName` with the name of the host group, and `-ResourceParentType hostGroups`.
++
+```azurepowershell-interactive
+New-AzConfigurationAssignment `
+ -ResourceGroupName myResourceGroup `
+ -Location eastus `
+ -ResourceName myHost `
+ -ResourceType hosts `
+ -ResourceParentName myHostGroup `
+ -ResourceParentType hostGroups `
+ -ProviderName Microsoft.Compute `
+ -ConfigurationAssignmentName $config.Name `
+ -MaintenanceConfigurationId $config.Id
+```
+
+## Check for pending updates
+
+Use [Get-AzMaintenanceUpdate](/powershell/module/az.maintenance/get-azmaintenanceupdate) to see if there are pending updates. Use `-subscription` to specify the Azure subscription of the VM if it is different from the one that you are logged into.
+
+If there are no updates to show, this command will return nothing. Otherwise, it will return a `PSApplyUpdate` object:
+
+```json
+{
+  "maintenanceScope": "Host",
+  "impactType": "Freeze",
+  "status": "Pending",
+  "impactDurationInSec": 9,
+  "notBefore": "2020-02-21T16:47:44.8728029Z",
+  "properties": {
+    "resourceId": "/subscriptions/39c6cced-4d6c-4dd5-af86-57499cd3f846/resourcegroups/Ignite2019/providers/Microsoft.Compute/virtualMachines/MCDemo3"
+  }
+}
+```
+
+### Isolated VM
+
+Check for pending updates for an isolated VM. In this example, the output is formatted as a table for readability.
+
+```azurepowershell-interactive
+Get-AzMaintenanceUpdate `
+ -ResourceGroupName myResourceGroup `
+ -ResourceName myVM `
+ -ResourceType VirtualMachines `
+ -ProviderName Microsoft.Compute | Format-Table
+```
++
+### Dedicated host
+
+Check for pending updates for a dedicated host. In this example, the output is formatted as a table for readability. Replace the values for the resources with your own.
+
+```azurepowershell-interactive
+Get-AzMaintenanceUpdate `
+ -ResourceGroupName myResourceGroup `
+ -ResourceName myHost `
+ -ResourceType hosts `
+ -ResourceParentName myHostGroup `
+ -ResourceParentType hostGroups `
+ -ProviderName Microsoft.Compute | Format-Table
+```
++
+## Apply updates
+
+Use [New-AzApplyUpdate](/powershell/module/az.maintenance/new-azapplyupdate) to apply pending updates. Apply update calls can take up to 2 hours to complete.
+
+### Isolated VM
+
+Create a request to apply updates to an isolated VM.
+
+```azurepowershell-interactive
+New-AzApplyUpdate `
+ -ResourceGroupName myResourceGroup `
+ -ResourceName myVM `
+ -ResourceType VirtualMachines `
+ -ProviderName Microsoft.Compute
+```
+
+On success, this command will return a `PSApplyUpdate` object. You can use the Name attribute in the `Get-AzApplyUpdate` command to check the update status. See [Check update status](#check-update-status).
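+
+For example (a sketch that reuses the names from the request above):
+
+```azurepowershell-interactive
+# Capture the returned PSApplyUpdate object and query the update by its name.
+$update = New-AzApplyUpdate `
+   -ResourceGroupName myResourceGroup `
+   -ResourceName myVM `
+   -ResourceType VirtualMachines `
+   -ProviderName Microsoft.Compute
+
+Get-AzApplyUpdate `
+   -ResourceGroupName myResourceGroup `
+   -ResourceName myVM `
+   -ResourceType VirtualMachines `
+   -ProviderName Microsoft.Compute `
+   -ApplyUpdateName $update.Name
+```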
+
+### Dedicated host
+
+Apply updates to a dedicated host.
+
+```azurepowershell-interactive
+New-AzApplyUpdate `
+ -ResourceGroupName myResourceGroup `
+ -ResourceName myHost `
+ -ResourceType hosts `
+ -ResourceParentName myHostGroup `
+ -ResourceParentType hostGroups `
+ -ProviderName Microsoft.Compute
+```
+
+## Check update status
+Use [Get-AzApplyUpdate](/powershell/module/az.maintenance/get-azapplyupdate) to check on the status of an update. The commands shown below show the status of the latest update by using `default` for the `-ApplyUpdateName` parameter. You can substitute the name of the update (returned by the [New-AzApplyUpdate](/powershell/module/az.maintenance/new-azapplyupdate) command) to get the status of a specific update.
+
+```text
+Status : Completed
+ResourceId : /subscriptions/12ae7457-4a34-465c-94c1-17c058c2bd25/resourcegroups/TestShantS/providers/Microsoft.Comp
+ute/virtualMachines/DXT-test-04-iso
+LastUpdateTime : 1/1/2020 12:00:00 AM
+Id : /subscriptions/12ae7457-4a34-465c-94c1-17c058c2bd25/resourcegroups/TestShantS/providers/Microsoft.Comp
+ute/virtualMachines/DXT-test-04-iso/providers/Microsoft.Maintenance/applyUpdates/default
+Name : default
+Type : Microsoft.Maintenance/applyUpdates
+```
+LastUpdateTime is the time when the update completed, whether you initiated it or the platform applied it because the self-maintenance window wasn't used. If an update has never been applied through maintenance configurations, it shows the default value.
+
+### Isolated VM
+
+Check for updates to a specific virtual machine.
+
+```azurepowershell-interactive
+Get-AzApplyUpdate `
+ -ResourceGroupName myResourceGroup `
+ -ResourceName myVM `
+ -ResourceType VirtualMachines `
+ -ProviderName Microsoft.Compute `
+ -ApplyUpdateName default
+```
+
+### Dedicated host
+
+Check for updates to a dedicated host.
+
+```azurepowershell-interactive
+Get-AzApplyUpdate `
+ -ResourceGroupName myResourceGroup `
+ -ResourceName myHost `
+ -ResourceType hosts `
+ -ResourceParentName myHostGroup `
+ -ResourceParentType hostGroups `
+ -ProviderName Microsoft.Compute `
+ -ApplyUpdateName myUpdateName
+```
+
+## Remove a maintenance configuration
+
+Use [Remove-AzMaintenanceConfiguration](/powershell/module/az.maintenance/remove-azmaintenanceconfiguration) to delete a maintenance configuration.
+
+```azurepowershell-interactive
+Remove-AzMaintenanceConfiguration `
+ -ResourceGroupName myResourceGroup `
+ -Name $config.Name
+```
+
+## Next steps
+To learn more, see [Maintenance and updates](maintenance-and-updates.md).
virtual-machines Maintenance Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations.md
+
+ Title: Overview of Maintenance Configurations for Azure virtual machines
+description: Learn how to control when maintenance is applied to your Azure VMs using Maintenance Control.
+++++ Last updated : 10/06/2021+
+#pmcontact: shants
++
+# Managing platform updates with Maintenance Configurations
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+Manage platform updates that don't require a reboot by using Maintenance Configurations. Azure frequently updates its infrastructure to improve reliability, performance, and security, or to launch new features. Most updates are transparent to users. Some sensitive workloads, like gaming, media streaming, and financial transactions, can't tolerate even a few seconds of a VM freezing or disconnecting for maintenance. Creating a Maintenance Configuration gives you the option to wait on platform updates and apply them within a 35-day rolling window.
++
+With Maintenance Configurations, you can:
+- Batch updates into one update package.
+- Wait up to 35 days to apply updates for **Host** machines.
+- Automate platform updates by configuring your maintenance schedule.
+- Use the same configuration across subscriptions and resource groups.
+
+## Limitations
+
+- Maintenance window duration can vary month over month, and it can sometimes take up to 2 hours to apply the pending updates after the user initiates them.
+- After 35 days, an update is automatically applied to your **Host** machines.
+- Rack-level maintenance can't be controlled through maintenance configurations.
+- Users must have **Resource Contributor** access.
+- Users need to know the nuances of the scopes required for their machine.
+
+## Management options
+
+You can create and manage maintenance configurations using any of the following options:
+
+- [Azure CLI](maintenance-configurations-cli.md)
+- [Azure PowerShell](maintenance-configurations-powershell.md)
+- [Azure portal](maintenance-configurations-portal.md)
+
+For an Azure Functions sample, see [Scheduling Maintenance Updates with Maintenance Configurations and Azure Functions](https://github.com/Azure/azure-docs-powershell-samples/tree/master/maintenance-auto-scheduler).
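+
+If you just want a feel for the end-to-end flow, the CLI steps look roughly like this (a sketch that reuses the illustrative names from the Azure CLI article):
+
+```azurecli-interactive
+# Create a configuration scoped to the host, then assign it to a VM.
+az maintenance configuration create \
+    -g myMaintenanceRG \
+    --resource-name myConfig \
+    --maintenance-scope host \
+    --location eastus
+az maintenance assignment create \
+    --resource-group myMaintenanceRG \
+    --location eastus \
+    --resource-name myVM \
+    --resource-type virtualMachines \
+    --provider-name Microsoft.Compute \
+    --configuration-assignment-name myConfig \
+    --maintenance-configuration-id "/subscriptions/1111abcd-1a11-1a2b-1a12-123456789abc/resourcegroups/myMaintenanceRG/providers/Microsoft.Maintenance/maintenanceConfigurations/myConfig"
+```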
+
+## Next steps
+
+To learn more, see [Maintenance and updates](maintenance-and-updates.md).
virtual-machines Maintenance Control Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-control-cli.md
- Title: Maintenance control for Azure virtual machines using CLI
-description: Learn how to control when maintenance is applied to your Azure VMs using Maintenance control and CLI.
----- Previously updated : 11/20/2020-
-#pmcontact: shants
--
-# Control updates with Maintenance Control and the Azure CLI
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-
-Maintenance control lets you decide when to apply platform updates to host infrastructure of your isolated VMs and Azure dedicated hosts. This topic covers the Azure CLI options for Maintenance control. For more about benefits of using Maintenance control, its limitations, and other management options, see [Managing platform updates with Maintenance Control](maintenance-control.md).
-
-## Create a maintenance configuration
-
-Use `az maintenance configuration create` to create a maintenance configuration. This example creates a maintenance configuration named *myConfig* scoped to the host.
-
-```azurecli-interactive
-az group create \
- --location eastus \
- --name myMaintenanceRG
-az maintenance configuration create \
- -g myMaintenanceRG \
- --resource-name myConfig \
- --maintenance-scope host \
- --location eastus
-```
-
-Copy the configuration ID from the output to use later.
-
-Using `--maintenance-scope host` ensures that the maintenance configuration is used for controlling updates to the host infrastructure.
-
-If you try to create a configuration with the same name, but in a different location, you will get an error. Configuration names must be unique to your resource group.
-
-You can query for available maintenance configurations using `az maintenance configuration list`.
-
-```azurecli-interactive
-az maintenance configuration list --query "[].{Name:name, ID:id}" -o table
-```
-
-### Create a maintenance configuration with scheduled window
-You can also declare a scheduled window when Azure will apply the updates on your resources. This example creates a maintenance configuration named myConfig with a scheduled window of 5 hours on the fourth Monday of every month. Once you create a scheduled window you no longer have to apply the updates manually.
-
-```azurecli-interactive
-az maintenance configuration create \
- -g myMaintenanceRG \
- --resource-name myConfig \
- --maintenance-scope host \
- --location eastus \
- --maintenance-window-duration "05:00" \
- --maintenance-window-recur-every "Month Fourth Monday" \
- --maintenance-window-start-date-time "2020-12-30 08:00" \
- --maintenance-window-time-zone "Pacific Standard Time"
-```
-
-> [!IMPORTANT]
-> Maintenance **duration** must be *2 hours* or longer. Maintenance **recurrence** must be set to occur at least once in 35 days.
-
-Maintenance recurrence can be expressed as daily, weekly or monthly. Some examples are:
-- **daily**- maintenance-window-recur-every: "Day" **or** "3Days"
-- **weekly**- maintenance-window-recur-every: "3Weeks" **or** "Week Saturday,Sunday"
-- **monthly**- maintenance-window-recur-every: "Month day23,day24" **or** "Month Last Sunday" **or** "Month Fourth Monday"
-
-
-## Assign the configuration
-
-Use `az maintenance assignment create` to assign the configuration to your isolated VM or Azure Dedicated Host.
-
-### Isolated VM
-
-Apply the configuration to a VM using the ID of the configuration. Specify `--resource-type virtualMachines` and supply the name of the VM for `--resource-name`, the resource group of the VM for `--resource-group`, and the location of the VM for `--location`.
-
-```azurecli-interactive
-az maintenance assignment create \
- --resource-group myMaintenanceRG \
- --location eastus \
- --resource-name myVM \
- --resource-type virtualMachines \
- --provider-name Microsoft.Compute \
- --configuration-assignment-name myConfig \
- --maintenance-configuration-id "/subscriptions/1111abcd-1a11-1a2b-1a12-123456789abc/resourcegroups/myMaintenanceRG/providers/Microsoft.Maintenance/maintenanceConfigurations/myConfig"
-```
-
-### Dedicated host
-
-To apply a configuration to a dedicated host, you need to include `--resource-type hosts`, `--resource-parent-name` with the name of the host group, and `--resource-parent-type hostGroups`.
-
-The parameter `--resource-id` is the ID of the host. You can use [az vm host get-instance-view](/cli/azure/vm/host#az-vm-host-get-instance-view) to get the ID of your dedicated host.
-
-```azurecli-interactive
-az maintenance assignment create \
- -g myDHResourceGroup \
- --resource-name myHost \
- --resource-type hosts \
- --provider-name Microsoft.Compute \
- --configuration-assignment-name myConfig \
- --maintenance-configuration-id "/subscriptions/1111abcd-1a11-1a2b-1a12-123456789abc/resourcegroups/myDhResourceGroup/providers/Microsoft.Maintenance/maintenanceConfigurations/myConfig" \
- -l eastus \
- --resource-parent-name myHostGroup \
- --resource-parent-type hostGroups
-```
-
-## Check configuration
-
-You can verify that the configuration was applied correctly, or check to see what configuration is currently applied using `az maintenance assignment list`.
-
-### Isolated VM
-
-```azurecli-interactive
-az maintenance assignment list \
- --provider-name Microsoft.Compute \
- --resource-group myMaintenanceRG \
- --resource-name myVM \
- --resource-type virtualMachines \
- --query "[].{resource:resourceGroup, configName:name}" \
- --output table
-```
-
-### Dedicated host
-
-```azurecli-interactive
-az maintenance assignment list \
- --resource-group myDHResourceGroup \
- --resource-name myHost \
- --resource-type hosts \
- --provider-name Microsoft.Compute \
- --resource-parent-name myHostGroup \
- --resource-parent-type hostGroups \
- --query "[].{ResourceGroup:resourceGroup,configName:name}" \
- -o table
-```
--
-## Check for pending updates
-
-Use `az maintenance update list` to see if there are pending updates. Set `--subscription` to the ID of the subscription that contains the VM.
-
-If there are no updates, the command will return an error message, which will contain the text: `Resource not found...StatusCode: 404`.
-
-If there are updates, only one will be returned, even if there are multiple updates pending. The data for this update will be returned in an object:
-
-```text
-[
- {
- "impactDurationInSec": 9,
- "impactType": "Freeze",
- "maintenanceScope": "Host",
- "notBefore": "2020-03-03T07:23:04.905538+00:00",
- "resourceId": "/subscriptions/9120c5ff-e78e-4bd0-b29f-75c19cadd078/resourcegroups/DemoRG/providers/Microsoft.Compute/hostGroups/demoHostGroup/hosts/myHost",
- "status": "Pending"
- }
-]
- ```
-
-### Isolated VM
-
-Check for pending updates for an isolated VM. In this example, the output is formatted as a table for readability.
-
-```azurecli-interactive
-az maintenance update list \
- -g myMaintenanceRg \
- --resource-name myVM \
- --resource-type virtualMachines \
- --provider-name Microsoft.Compute \
- -o table
-```
-
-### Dedicated host
-
-Check for pending updates for a dedicated host. In this example, the output is formatted as a table for readability. Replace the values for the resources with your own.
-
-```azurecli-interactive
-az maintenance update list \
- --subscription 1111abcd-1a11-1a2b-1a12-123456789abc \
- -g myHostResourceGroup \
- --resource-name myHost \
- --resource-type hosts \
- --provider-name Microsoft.Compute \
- --resource-parent-name myHostGroup \
- --resource-parent-type hostGroups \
- -o table
-```
-
-## Apply updates
-
-Use `az maintenance applyupdate create` to apply pending updates. On success, this command returns JSON containing the details of the update. Apply update calls can take up to 2 hours to complete.
-
-### Isolated VM
-
-Create a request to apply updates to an isolated VM.
-
-```azurecli-interactive
-az maintenance applyupdate create \
- --subscription 1111abcd-1a11-1a2b-1a12-123456789abc \
- --resource-group myMaintenanceRG \
- --resource-name myVM \
- --resource-type virtualMachines \
- --provider-name Microsoft.Compute
-```
--
-### Dedicated host
-
-Apply updates to a dedicated host.
-
-```azurecli-interactive
-az maintenance applyupdate create \
- --subscription 1111abcd-1a11-1a2b-1a12-123456789abc \
- --resource-group myHostResourceGroup \
- --resource-name myHost \
- --resource-type hosts \
- --provider-name Microsoft.Compute \
- --resource-parent-name myHostGroup \
- --resource-parent-type hostGroups
-```
-
-## Check the status of applying updates
-
-You can check on the progress of the updates using `az maintenance applyupdate get`.
-
-You can use `default` as the update name to see results for the last update, or replace `myUpdateName` with the name of the update that was returned when you ran `az maintenance applyupdate create`.
-
-```text
-Status : Completed
-ResourceId : /subscriptions/12ae7457-4a34-465c-94c1-17c058c2bd25/resourcegroups/TestShantS/providers/Microsoft.Comp
-ute/virtualMachines/DXT-test-04-iso
-LastUpdateTime : 1/1/2020 12:00:00 AM
-Id : /subscriptions/12ae7457-4a34-465c-94c1-17c058c2bd25/resourcegroups/TestShantS/providers/Microsoft.Comp
-ute/virtualMachines/DXT-test-04-iso/providers/Microsoft.Maintenance/applyUpdates/default
-Name : default
-Type : Microsoft.Maintenance/applyUpdates
-```
-LastUpdateTime is the time when the update completed, whether you initiated it or the platform applied it because the self-maintenance window wasn't used. If an update has never been applied through maintenance control, it shows the default value.
-
-### Isolated VM
-
-```azurecli-interactive
-az maintenance applyupdate get \
- --resource-group myMaintenanceRG \
- --resource-name myVM \
- --resource-type virtualMachines \
- --provider-name Microsoft.Compute \
- --apply-update-name default
-```
-
-### Dedicated host
-
-```azurecli-interactive
-az maintenance applyupdate get \
- --subscription 1111abcd-1a11-1a2b-1a12-123456789abc \
- --resource-group myMaintenanceRG \
- --resource-name myHost \
- --resource-type hosts \
- --provider-name Microsoft.Compute \
- --resource-parent-name myHostGroup \
- --resource-parent-type hostGroups \
- --apply-update-name myUpdateName \
- --query "{LastUpdate:lastUpdateTime, Name:name, ResourceGroup:resourceGroup, Status:status}" \
- --output table
-```
--
-## Delete a maintenance configuration
-
-Use `az maintenance configuration delete` to delete a maintenance configuration. Deleting the configuration removes the maintenance control from the associated resources.
-
-```azurecli-interactive
-az maintenance configuration delete \
- --subscription 1111abcd-1a11-1a2b-1a12-123456789abc \
- -g myResourceGroup \
- --resource-name myConfig
-```
-
-## Next steps
-To learn more, see [Maintenance and updates](maintenance-and-updates.md).
virtual-machines Maintenance Control Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-control-portal.md
- Title: Maintenance control for Azure virtual machines using the Azure portal
-description: Learn how to control when maintenance is applied to your Azure VMs using Maintenance control and the Azure portal.
----- Previously updated : 04/22/2020-
-#pmcontact: shants
--
-# Control updates with Maintenance Control and the Azure portal
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-
-Maintenance control lets you decide when to apply updates to your isolated VMs and Azure Dedicated Hosts. This topic covers the Azure portal options for Maintenance control. For more about benefits of using Maintenance control, its limitations, and other management options, see [Managing platform updates with Maintenance Control](maintenance-control.md).
-
-## Create a maintenance configuration
-
-1. Sign in to the Azure portal.
-
-1. Search for **Maintenance Configurations**.
-
- :::image type="content" source="media/virtual-machines-maintenance-control-portal/maintenance-configurations-search-bar.png" alt-text="Screenshot showing how to open Maintenance Configurations":::
-
-1. Click **Add**.
-
- :::image type="content" source="media/virtual-machines-maintenance-control-portal/maintenance-configurations-add-2.png" alt-text="Screenshot showing how to add a maintenance configuration":::
-
-1. In the Basics tab, choose a subscription and resource group, provide a name for the configuration, choose a region, and select *Host* for the scope. Click **Next**.
-
- :::image type="content" source="media/virtual-machines-maintenance-control-portal/maintenance-configurations-basics-tab.png" alt-text="Screenshot showing Maintenance Configuration basics":::
-
-1. In the Schedule tab, declare a scheduled window when Azure will apply the updates on your resources. Set a start date, maintenance window, and recurrence. Once you create a scheduled window you no longer have to apply the updates manually. Click **Next**.
-
- > [!IMPORTANT]
- > Maintenance window **duration** must be *2 hours* or longer. Maintenance **recurrence** must be set to repeat at least once in 35 days.
-
- :::image type="content" source="media/virtual-machines-maintenance-control-portal/maintenance-configurations-schedule-tab.png" alt-text="Screenshot showing Maintenance Configuration schedule":::
-
-1. In the Assignment tab, assign resources now or skip this step and assign resources later after maintenance configuration deployment. Click **Next**.
-
-1. Add tags and values. Click **Next**.
-
- :::image type="content" source="media/virtual-machines-maintenance-control-portal/maintenance-configurations-tags-tab.png" alt-text="Screenshot showing how to add tags to a maintenance configuration":::
-
-1. Review the summary. Click **Create**.
-
-1. After the deployment is complete, click **Go to resource**.
--
-## Assign the configuration
-
-On the details page of the maintenance configuration, click Assignments and then click **Assign resource**.
-
-![Screenshot showing how to assign a resource](media/virtual-machines-maintenance-control-portal/maintenance-configurations-add-assignment.png)
-
-Select the resources that you want the maintenance configuration assigned to and click **Ok**. The **Type** column shows whether the resource is an isolated VM or Azure Dedicated Host. The VM needs to be running to assign the configuration. An error occurs if you try to assign a configuration to a VM that is stopped.
-
-
-![Screenshot showing how to select a resource](media/virtual-machines-maintenance-control-portal/maintenance-configurations-select-resource.png)
-
-## Check configuration
-
-You can verify that the configuration was applied correctly or check to see any maintenance configuration that is currently assigned using **Maintenance Configurations**. The **Type** column shows whether the configuration is assigned to an isolated VM or Azure Dedicated Host.
-
-![Screenshot showing how to check a maintenance configuration](media/virtual-machines-maintenance-control-portal/maintenance-configurations-host-type.png)
-
-You can also check the configuration for a specific virtual machine on its properties page. Click **Maintenance** to see the configuration assigned to that virtual machine.
-
-![Screenshot showing how to check Maintenance for a host](media/virtual-machines-maintenance-control-portal/maintenance-configurations-check-config.png)
-
-## Check for pending updates
-
-There are also two ways to check if updates are pending for a maintenance configuration. In **Maintenance Configurations**, on the details for the configuration, click **Assignments** and check **Maintenance status**.
-
-![Screenshot showing how to check pending updates](media/virtual-machines-maintenance-control-portal/maintenance-configurations-pending.png)
-
-You can also check a specific host using **Virtual Machines** or properties of the dedicated host.
-
-![Screenshot that shows the highlighted maintenance state.](media/virtual-machines-maintenance-control-portal/maintenance-configurations-pending-vm.png)
-
-## Apply updates
-
-You can apply pending updates on demand. On the VM or Azure Dedicated Host details, click **Maintenance** and click **Apply maintenance now**. Apply update calls can take up to 2 hours to complete.
-
-![Screenshot showing how to apply pending updates](media/virtual-machines-maintenance-control-portal/maintenance-configurations-apply-updates-now.png)
-
-## Check the status of applying updates
-
-You can check on the progress of the updates for a configuration in **Maintenance Configurations** or using **Virtual Machines**. On the VM details, click **Maintenance**. In the following example, the **Maintenance state** shows an update is **Pending**.
-
-![Screenshot showing how to check status of pending updates](media/virtual-machines-maintenance-control-portal/maintenance-configurations-status.png)
-
-## Delete a maintenance configuration
-
-To delete a configuration, open the configuration details and click **Delete**.
-
-![Screenshot that shows how to delete a configuration.](media/virtual-machines-maintenance-control-portal/maintenance-configurations-delete.png)
--
-## Next steps
-
-To learn more, see [Maintenance and updates](maintenance-and-updates.md).
virtual-machines Maintenance Control Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-control-powershell.md
- Title: Maintenance control for Azure virtual machines using PowerShell
-description: Learn how to control when maintenance is applied to your Azure VMs using Maintenance control and PowerShell.
----- Previously updated : 11/19/2020--
-#pmcontact: shants
--
-# Control updates with Maintenance Control and Azure PowerShell
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-
-Maintenance control lets you decide when to apply platform updates to the host infrastructure of your isolated VMs and Azure dedicated hosts. This topic covers the Azure PowerShell options for Maintenance control. For more about benefits of using Maintenance control, its limitations, and other management options, see [Managing platform updates with Maintenance Control](maintenance-control.md).
-
-If you are looking for information about Maintenance Control for scale sets, see [Maintenance Control for virtual machine scale sets](virtual-machine-scale-sets-maintenance-control.md).
-
-## Enable the PowerShell module
-
-Make sure `PowerShellGet` is up to date.
-
-```azurepowershell-interactive
-Install-Module -Name PowerShellGet -Repository PSGallery -Force
-```
-
-Install the `Az.Maintenance` PowerShell module.
-
-```azurepowershell-interactive
-Install-Module -Name Az.Maintenance
-```
-
-If you are installing locally, make sure you open your PowerShell prompt as an administrator.
-
-You may also be asked to confirm that you want to install from an *untrusted repository*. Type `Y` or select **Yes to All** to install the module.
--
-## Create a maintenance configuration
-
-Create a resource group as a container for your configuration. In this example, a resource group named *myMaintenanceRG* is created in *eastus*. If you already have a resource group that you want to use, you can skip this part and replace the resource group name with your own in the rest of the examples.
-
-```azurepowershell-interactive
-New-AzResourceGroup `
- -Location eastus `
- -Name myMaintenanceRG
-```
-
-Use [New-AzMaintenanceConfiguration](/powershell/module/az.maintenance/new-azmaintenanceconfiguration) to create a maintenance configuration. This example creates a maintenance configuration named *myConfig* scoped to the host.
-
-```azurepowershell-interactive
-$config = New-AzMaintenanceConfiguration `
- -ResourceGroup myMaintenanceRG `
- -Name myConfig `
- -MaintenanceScope host `
- -Location eastus
-```
-
-Using `-MaintenanceScope host` ensures that the maintenance configuration is used for controlling updates to the host.
-
-If you try to create a configuration with the same name, but in a different location, you will get an error. Configuration names must be unique to your resource group.
-
-You can query for available maintenance configurations using [Get-AzMaintenanceConfiguration](/powershell/module/az.maintenance/get-azmaintenanceconfiguration).
-
-```azurepowershell-interactive
-Get-AzMaintenanceConfiguration | Format-Table -Property Name,Id
-```
-
-### Create a maintenance configuration with scheduled window
-
-You can also declare a scheduled window when Azure will apply the updates on your resources. This example creates a maintenance configuration named myConfig with a scheduled window of 5 hours on the fourth Monday of every month. Once you create a scheduled window you no longer have to apply the updates manually.
-
-```azurepowershell-interactive
-$config = New-AzMaintenanceConfiguration `
- -ResourceGroup $RGName `
- -Name $MaintenanceConfig `
- -MaintenanceScope Host `
- -Location $location `
- -StartDateTime "2020-10-01 00:00" `
- -TimeZone "Pacific Standard Time" `
- -Duration "05:00" `
- -RecurEvery "Month Fourth Monday"
-```
-> [!IMPORTANT]
-> Maintenance **duration** must be *2 hours* or longer. Maintenance **recurrence** must be set to occur at least once in 35 days.
-
-Maintenance **recurrence** can be expressed as daily, weekly or monthly. Some examples are:
-
-
-## Assign the configuration
-
-Use [New-AzConfigurationAssignment](/powershell/module/az.maintenance/new-azconfigurationassignment) to assign the configuration to your isolated VM or Azure Dedicated Host.
-
-### Isolated VM
-
-Apply the configuration to a VM using the ID of the configuration. Specify `-ResourceType VirtualMachines` and supply the name of the VM for `-ResourceName`, and the resource group of the VM for `-ResourceGroupName`.
-
-```azurepowershell-interactive
-New-AzConfigurationAssignment `
- -ResourceGroupName myResourceGroup `
- -Location eastus `
- -ResourceName myVM `
- -ResourceType VirtualMachines `
- -ProviderName Microsoft.Compute `
- -ConfigurationAssignmentName $config.Name `
- -MaintenanceConfigurationId $config.Id
-```
-
-### Dedicated host
-
-To apply a configuration to a dedicated host, you also need to include `-ResourceType hosts`, `-ResourceParentName` with the name of the host group, and `-ResourceParentType hostGroups`.
--
-```azurepowershell-interactive
-New-AzConfigurationAssignment `
- -ResourceGroupName myResourceGroup `
- -Location eastus `
- -ResourceName myHost `
- -ResourceType hosts `
- -ResourceParentName myHostGroup `
- -ResourceParentType hostGroups `
- -ProviderName Microsoft.Compute `
- -ConfigurationAssignmentName $config.Name `
- -MaintenanceConfigurationId $config.Id
-```
-
-## Check for pending updates
-
-Use [Get-AzMaintenanceUpdate](/powershell/module/az.maintenance/get-azmaintenanceupdate) to see if there are pending updates. Use `-subscription` to specify the Azure subscription of the VM if it is different from the one that you are logged into.
-
-If there are no updates to show, this command will return nothing. Otherwise, it will return a PSApplyUpdate object:
-
-```json
-{
-  "maintenanceScope": "Host",
-  "impactType": "Freeze",
-  "status": "Pending",
-  "impactDurationInSec": 9,
-  "notBefore": "2020-02-21T16:47:44.8728029Z",
-  "properties": {
-    "resourceId": "/subscriptions/39c6cced-4d6c-4dd5-af86-57499cd3f846/resourcegroups/Ignite2019/providers/Microsoft.Compute/virtualMachines/MCDemo3"
-  }
-}
-```
-
-### Isolated VM
-
-Check for pending updates for an isolated VM. In this example, the output is formatted as a table for readability.
-
-```azurepowershell-interactive
-Get-AzMaintenanceUpdate `
- -ResourceGroupName myResourceGroup `
- -ResourceName myVM `
- -ResourceType VirtualMachines `
- -ProviderName Microsoft.Compute | Format-Table
-```
--
-### Dedicated host
-
-Check for pending updates for a dedicated host. In this example, the output is formatted as a table for readability. Replace the values for the resources with your own.
-
-```azurepowershell-interactive
-Get-AzMaintenanceUpdate `
- -ResourceGroupName myResourceGroup `
- -ResourceName myHost `
- -ResourceType hosts `
- -ResourceParentName myHostGroup `
- -ResourceParentType hostGroups `
- -ProviderName Microsoft.Compute | Format-Table
-```
--
-## Apply updates
-
-Use [New-AzApplyUpdate](/powershell/module/az.maintenance/new-azapplyupdate) to apply pending updates. Apply update calls can take up to 2 hours to complete.
-
-### Isolated VM
-
-Create a request to apply updates to an isolated VM.
-
-```azurepowershell-interactive
-New-AzApplyUpdate `
- -ResourceGroupName myResourceGroup `
- -ResourceName myVM `
- -ResourceType VirtualMachines `
- -ProviderName Microsoft.Compute
-```
-
-On success, this command will return a `PSApplyUpdate` object. You can use the Name attribute in the `Get-AzApplyUpdate` command to check the update status. See [Check update status](#check-update-status).
-
-### Dedicated host
-
-Apply updates to a dedicated host.
-
-```azurepowershell-interactive
-New-AzApplyUpdate `
- -ResourceGroupName myResourceGroup `
- -ResourceName myHost `
- -ResourceType hosts `
- -ResourceParentName myHostGroup `
- -ResourceParentType hostGroups `
- -ProviderName Microsoft.Compute
-```
-
-## Check update status
-Use [Get-AzApplyUpdate](/powershell/module/az.maintenance/get-azapplyupdate) to check on the status of an update. The commands shown below show the status of the latest update by using `default` for the `-ApplyUpdateName` parameter. You can substitute the name of the update (returned by the [New-AzApplyUpdate](/powershell/module/az.maintenance/new-azapplyupdate) command) to get the status of a specific update.
-
-```text
-Status : Completed
-ResourceId : /subscriptions/12ae7457-4a34-465c-94c1-17c058c2bd25/resourcegroups/TestShantS/providers/Microsoft.Comp
-ute/virtualMachines/DXT-test-04-iso
-LastUpdateTime : 1/1/2020 12:00:00 AM
-Id : /subscriptions/12ae7457-4a34-465c-94c1-17c058c2bd25/resourcegroups/TestShantS/providers/Microsoft.Comp
-ute/virtualMachines/DXT-test-04-iso/providers/Microsoft.Maintenance/applyUpdates/default
-Name : default
-Type : Microsoft.Maintenance/applyUpdates
-```
-LastUpdateTime is the time when the update completed, whether you initiated it or the platform applied it because the self-maintenance window wasn't used. If an update has never been applied through maintenance control, it shows the default value.
-
-### Isolated VM
-
-Check for updates to a specific virtual machine.
-
-```azurepowershell-interactive
-Get-AzApplyUpdate `
- -ResourceGroupName myResourceGroup `
- -ResourceName myVM `
- -ResourceType VirtualMachines `
- -ProviderName Microsoft.Compute `
- -ApplyUpdateName default
-```
-
-### Dedicated host
-
-Check for updates to a dedicated host.
-
-```azurepowershell-interactive
-Get-AzApplyUpdate `
- -ResourceGroupName myResourceGroup `
- -ResourceName myHost `
- -ResourceType hosts `
- -ResourceParentName myHostGroup `
- -ResourceParentType hostGroups `
- -ProviderName Microsoft.Compute `
- -ApplyUpdateName myUpdateName
-```
-
-## Remove a maintenance configuration
-
-Use [Remove-AzMaintenanceConfiguration](/powershell/module/az.maintenance/remove-azmaintenanceconfiguration) to delete a maintenance configuration.
-
-```azurepowershell-interactive
-Remove-AzMaintenanceConfiguration `
- -ResourceGroupName myResourceGroup `
- -Name $config.Name
-```
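
If the configuration is still assigned to a resource, you may want to remove the assignment first. A hedged sketch using `Remove-AzConfigurationAssignment` from the Az.Maintenance module (resource names are illustrative):

```azurepowershell-interactive
# Remove the configuration assignment from the VM before deleting the configuration.
Remove-AzConfigurationAssignment `
   -ResourceGroupName myResourceGroup `
   -ResourceName myVM `
   -ResourceType VirtualMachines `
   -ProviderName Microsoft.Compute `
   -ConfigurationAssignmentName $config.Name
```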
-
-## Next steps
-To learn more, see [Maintenance and updates](maintenance-and-updates.md).
virtual-machines Maintenance Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-control.md
- Title: Overview of Maintenance control for Azure virtual machines using the Azure portal
-description: Learn how to control when maintenance is applied to your Azure VMs using Maintenance Control.
- Previously updated: 10/06/2021
-# Managing platform updates with Maintenance Control
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-
-Manage platform updates that don't require a reboot by using maintenance control. Azure frequently updates its infrastructure to improve reliability, performance, and security, or to launch new features. Most updates are transparent to users, but some sensitive workloads, like gaming, media streaming, and financial transactions, can't tolerate even a few seconds of a VM freezing or disconnecting for maintenance. Maintenance control gives you the option to wait on platform updates and apply them within a 35-day rolling window.
-
-Maintenance control lets you decide when to apply updates to your isolated VMs and Azure dedicated hosts.
-
-With maintenance control, you can:
-- Batch updates into one update package.
-- Wait up to 35 days to apply updates.
-- Automate platform updates by configuring a maintenance schedule.
-- Maintenance configurations work across subscriptions and resource groups.
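
As a minimal sketch of the schedule option, a maintenance configuration with a recurring window can be created with `New-AzMaintenanceConfiguration` from the Az.Maintenance module (all names and schedule values below are illustrative):

```azurepowershell-interactive
# Create a maintenance configuration with a monthly recurring maintenance window.
$config = New-AzMaintenanceConfiguration `
   -ResourceGroupName myResourceGroup `
   -Name myConfig `
   -MaintenanceScope Host `
   -Location eastus2 `
   -StartDateTime "2022-06-01 00:00" `
   -TimeZone "Pacific Standard Time" `
   -Duration "05:00" `
   -RecurEvery "Month Fourth Monday"
```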
-## Limitations
-- VMs must be on a [dedicated host](./dedicated-hosts.md) or be created by using an [isolated VM size](isolation.md).
-- The maintenance window duration can vary month over month, and it can sometimes take up to 2 hours to apply pending updates after you initiate them.
-- After 35 days, an update is automatically applied.
-- Rack-level maintenance can't be controlled through maintenance control.
-- Users must have **Resource Contributor** access.
-## Management options
-
-You can create and manage maintenance configurations using any of the following options:
-- [Azure CLI](maintenance-control-cli.md)
-- [Azure PowerShell](maintenance-control-powershell.md)
-- [Azure portal](maintenance-control-portal.md)
-For an Azure Functions sample, see [Scheduling Maintenance Updates with Maintenance Control and Azure Functions](https://github.com/Azure/azure-docs-powershell-samples/tree/master/maintenance-auto-scheduler).
-
-## Next steps
-
-To learn more, see [Maintenance and updates](maintenance-and-updates.md).
virtual-machines Move Region Maintenance Configuration Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/move-region-maintenance-configuration-resources.md
Last updated 03/04/2020
Follow this article to move resources associated with a Maintenance Control configuration to a different Azure region. You might want to move a configuration for a number of reasons. For example, to take advantage of a new region, to deploy features or services available in a specific region, to meet internal policy and governance requirements, or in response to capacity planning.
-[Maintenance control](maintenance-control.md), with customized maintenance configurations, allows you to control how platform updates are applied to VMs, and to Azure Dedicated Hosts. There are a couple of scenarios for moving maintenance control across regions:
+[Maintenance control](maintenance-configurations.md), with customized maintenance configurations, allows you to control how platform updates are applied to VMs, and to Azure Dedicated Hosts. There are a couple of scenarios for moving maintenance control across regions:
- To move the resources associated with a maintenance configuration, but not the configuration itself, follow this article.
- To move your maintenance control configuration, but not the resources associated with the configuration, follow [these instructions](move-region-maintenance-configuration.md).
Before you begin moving the resources associated with a Maintenance Control conf
## Move

1. [Follow these instructions](../site-recovery/azure-to-azure-tutorial-migrate.md?toc=/azure/virtual-machines/windows/toc.json&bc=/azure/virtual-machines/windows/breadcrumb/toc.json) to move the Azure VMs to the new region.
-2. After the resources are moved, reapply maintenance configurations to the resources in the new region as appropriate, depending on whether you moved the maintenance configurations. You can apply a maintenance configuration to a resource using [PowerShell](../virtual-machines/maintenance-control-powershell.md) or [CLI](../virtual-machines/maintenance-control-cli.md).
+2. After the resources are moved, reapply maintenance configurations to the resources in the new region as appropriate, depending on whether you moved the maintenance configurations. You can apply a maintenance configuration to a resource using [PowerShell](../virtual-machines/maintenance-configurations-powershell.md) or [CLI](../virtual-machines/maintenance-configurations-cli.md).
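
For example, with the Az.Maintenance PowerShell module, an assignment might look like the following sketch (resource names are illustrative, and `$config` is assumed to hold the maintenance configuration object):

```azurepowershell-interactive
# Assign an existing maintenance configuration to a VM in the new region.
New-AzConfigurationAssignment `
   -ResourceGroupName myResourceGroup `
   -Location eastus2 `
   -ResourceName myVM `
   -ResourceType VirtualMachines `
   -ProviderName Microsoft.Compute `
   -ConfigurationAssignmentName $config.Name `
   -MaintenanceConfigurationId $config.Id
```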
## Verify the move
virtual-machines Move Region Maintenance Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/move-region-maintenance-configuration.md
Last updated 03/04/2020
Follow this article to move a Maintenance Control configuration to a different Azure region. You might want to move a configuration for a number of reasons. For example, to take advantage of a new region, to deploy features or services available in a specific region, to meet internal policy and governance requirements, or in response to capacity planning.
-[Maintenance control](maintenance-control.md), with customized maintenance configurations, allows you to control how platform updates are applied to VMs, and to Azure Dedicated Hosts. There are a couple of scenarios for moving maintenance control across regions:
+[Maintenance control](maintenance-configurations.md), with customized maintenance configurations, allows you to control how platform updates are applied to VMs, and to Azure Dedicated Hosts. There are a couple of scenarios for moving maintenance control across regions:
- To move your maintenance control configuration, but not the resources associated with the configuration, follow the instructions in this article.
- To move the resources associated with a maintenance configuration, but not the configuration itself, follow [these instructions](move-region-maintenance-configuration-resources.md).
Before you begin moving a maintenance control configuration:
3. Save your list for reference; as you move the configurations, it helps you verify that everything has been moved (a listing sketch appears after this list).
4. As a reference, map each configuration/resource group to the new resource group in the new region.
-5. Create new maintenance configurations in the new region using [PowerShell](../virtual-machines/maintenance-control-powershell.md#create-a-maintenance-configuration), or [CLI](../virtual-machines/maintenance-control-cli.md#create-a-maintenance-configuration).
-6. Associate the configurations with the resources in the new region, using [PowerShell](../virtual-machines/maintenance-control-powershell.md#assign-the-configuration), or [CLI](../virtual-machines/maintenance-control-cli.md#assign-the-configuration).
+5. Create new maintenance configurations in the new region using [PowerShell](../virtual-machines/maintenance-configurations-powershell.md#create-a-maintenance-configuration), or [CLI](../virtual-machines/maintenance-configurations-cli.md#create-a-maintenance-configuration).
+6. Associate the configurations with the resources in the new region, using [PowerShell](../virtual-machines/maintenance-configurations-powershell.md#assign-the-configuration), or [CLI](../virtual-machines/maintenance-configurations-cli.md#assign-the-configuration).
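
As a hedged sketch for step 3, the Az.Maintenance module can list the existing configurations in the current subscription:

```azurepowershell-interactive
# List all maintenance configurations to build the reference list.
Get-AzMaintenanceConfiguration | Format-Table Name, Location, MaintenanceScope
```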
## Verify the move
After moving the configurations, compare configurations and resources in the new
## Clean up source resources
-After the move, consider deleting the moved maintenance configurations in the source region by using [PowerShell](../virtual-machines/maintenance-control-powershell.md#remove-a-maintenance-configuration) or [CLI](../virtual-machines/maintenance-control-cli.md#delete-a-maintenance-configuration).
+After the move, consider deleting the moved maintenance configurations in the source region by using [PowerShell](../virtual-machines/maintenance-configurations-powershell.md#remove-a-maintenance-configuration) or [CLI](../virtual-machines/maintenance-configurations-cli.md#delete-a-maintenance-configuration).
## Next steps
virtual-machines Shared Image Galleries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/shared-image-galleries.md
The properties of an image version are:
## Generalized and specialized images
-There are two operating system states supported by Azure Compute Gallery. Typically images require that the VM used to create the image has been generalized before taking the image. Generalizing is a process that removes machine and user specific information from the VM. For Windows, the Sysprep tool is used. For Linux, you can use [waagent](https://github.com/Azure/WALinuxAgent) `-deprovision` or `-deprovision+user` parameters.
+There are two operating system states supported by Azure Compute Gallery. Typically, images require that the VM used to create the image has been [generalized](generalize.md) before the image is taken. Generalizing is a process that removes machine- and user-specific information from the VM. For Linux, you can use the [waagent](https://github.com/Azure/WALinuxAgent) `-deprovision` or `-deprovision+user` parameters. For Windows, the Sysprep tool is used.
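
As a hedged sketch of the Azure side of that flow (resource names are illustrative; run Sysprep or waagent inside the VM first):

```azurepowershell-interactive
# Stop the VM and mark it as generalized before capturing an image from it.
Stop-AzVM -ResourceGroupName myResourceGroup -Name myVM -Force
Set-AzVM -ResourceGroupName myResourceGroup -Name myVM -Generalized
```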
Specialized VMs have not been through a process to remove machine-specific information and accounts. Also, VMs created from specialized images do not have an `osProfile` associated with them. This means that specialized images have some limitations in addition to some benefits.
virtual-machines Updates Maintenance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/updates-maintenance-overview.md
You can use [Update Management in Azure Automation](../automation/update-managem
## Maintenance control
-Manage platform updates that don't require a reboot by using [maintenance control](maintenance-control.md). Azure frequently updates its infrastructure to improve reliability, performance, and security, or to launch new features. Most updates are transparent to users, but some sensitive workloads, like gaming, media streaming, and financial transactions, can't tolerate even a few seconds of a VM freezing or disconnecting for maintenance. Maintenance control gives you the option to wait on platform updates and apply them within a 35-day rolling window.
+Manage platform updates that don't require a reboot by using [maintenance control](maintenance-configurations.md). Azure frequently updates its infrastructure to improve reliability, performance, and security, or to launch new features. Most updates are transparent to users, but some sensitive workloads, like gaming, media streaming, and financial transactions, can't tolerate even a few seconds of a VM freezing or disconnecting for maintenance. Maintenance control gives you the option to wait on platform updates and apply them within a 35-day rolling window.
Maintenance control lets you decide when to apply updates to your isolated VMs and Azure dedicated hosts.
-With [maintenance control](maintenance-control.md), you can:
+With [maintenance control](maintenance-configurations.md), you can:
- Batch updates into one update package.
-- Wait up to 35 days to apply updates.
+- Wait up to 35 days to apply updates for Host machines.
- Automate platform updates by configuring a maintenance schedule or by using [Azure Functions](https://github.com/Azure/azure-docs-powershell-samples/tree/master/maintenance-auto-scheduler).
- Maintenance configurations work across subscriptions and resource groups.
virtual-machines Disks Enable Customer Managed Keys Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disks-enable-customer-managed-keys-powershell.md
# Azure PowerShell - Enable customer-managed keys with server-side encryption - managed disks
-**Applies to:** :heavy_check_mark: Windows VMs
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
Azure Disk Storage allows you to manage your own keys when using server-side encryption (SSE) for managed disks, if you choose. For conceptual information on SSE with customer-managed keys, and other managed disk encryption types, see the [Customer-managed keys](../disk-encryption.md#customer-managed-keys) section of our disk encryption article.
$diskEncryptionSet = Get-AzDiskEncryptionSet -ResourceGroupName $rgName -Name $d
New-AzDiskUpdateConfig -EncryptionType "EncryptionAtRestWithCustomerKey" -DiskEncryptionSetId $diskEncryptionSet.Id | Update-AzDisk -ResourceGroupName $rgName -DiskName $diskName ```
-### Encrypt an existing virtual machine scale set with SSE and customer-managed keys
+### Encrypt an existing virtual machine scale set (uniform orchestration mode) with SSE and customer-managed keys
+
+This script works for scale sets in uniform orchestration mode only. For scale sets in flexible orchestration mode, follow the steps in the Encrypt existing managed disks section for each VM instead.
Copy the script, replace all the example values with your own parameters, and then run it: