Updates from: 05/18/2022 01:14:39
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Azure Ad Pim Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/azure-ad-pim-approval-workflow.md
As a delegated approver, you'll receive an email notification when an Azure AD r
In the **Requests for role activations** section, you'll see a list of requests pending your approval.
-## View pending requests using Graph API
+## View pending requests using Microsoft Graph API
### HTTP request
````HTTP
-GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentScheduleRequests/filterByCurrentUser(on='approver')?$filter=status eq 'PendingApproval'
+GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests/filterByCurrentUser(on='approver')?$filter=status eq 'PendingApproval'
````
### HTTP response
````HTTP
{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#Collection(unifiedRoleAssignmentScheduleRequest)",
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#Collection(unifiedRoleAssignmentScheduleRequest)",
"value": [ { "@odata.type": "#microsoft.graph.unifiedRoleAssignmentScheduleRequest",
GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentSche
![Approve notification showing request was approved](./media/pim-resource-roles-approval-workflow/resources-approve-pane.png)
-## Approve pending requests using Graph API
+## Approve pending requests using Microsoft Graph API
### Get IDs for the steps that require approval
active-directory Pim Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-apis.md
You can perform Privileged Identity Management (PIM) tasks using the Microsoft G
For requests and other details about PIM APIs, check out:
-- [PIM for Azure AD roles API reference](/graph/api/resources/unifiedroleeligibilityschedulerequest?view=graph-rest-beta&preserve-view=true)
+- [PIM for Azure AD roles API reference](/graph/api/resources/privilegedidentitymanagementv3-overview)
- [PIM for Azure resource roles API reference](/rest/api/authorization/roleeligibilityschedulerequests)
## PIM API history
Under the /beta/privilegedRoles endpoint, Microsoft had a classic version of the
### Iteration 2 – Supports Azure AD roles and Azure resource roles
-Under the /beta/privilegedAccess endpoint, Microsoft supported both /aadRoles and /azureResources. This endpoint is still available in your tenant but Microsoft recommends against starting any new development with this API. This beta API will never be released to general availability and will be eventually deprecated.
+Under the `/beta/privilegedAccess` endpoint, Microsoft supported both `/aadRoles` and `/azureResources`. This endpoint is still available in your tenant but Microsoft recommends against starting any new development with this API. This beta API will never be released to general availability and will be eventually deprecated.
### Current iteration – Azure AD roles in Microsoft Graph and Azure resource roles in Azure Resource Manager
-Now in beta, Microsoft has the final iteration of the PIM API before we release the API to general availability. Based on customer feedback, the Azure AD PIM API is now under the unifiedRoleManagement set of API and the Azure Resource PIM API is now under the Azure Resource Manager role assignment API. These locations also provide a few additional benefits including:
+Currently in general availability, this is the final iteration of the PIM API. Based on customer feedback, the PIM API for managing Azure AD roles is now under the **unifiedRoleManagement** set of APIs and the Azure Resource PIM API is now under the Azure Resource Manager role assignment API. These locations also provide a few additional benefits including:
- Alignment of the PIM API with the regular role assignment API for both Azure AD roles and Azure Resource roles.
- Reducing the need to call additional PIM API to onboard a resource, get a resource, or get role definition.
In the current iteration, there is no API support for PIM alerts and privileged
### Azure AD roles
- To call the PIM Graph API for Azure AD roles, you will need at least one of the following permissions:
+To understand the permissions that you need to call the PIM Microsoft Graph API for Azure AD roles, see [Role management permissions](/graph/permissions-reference#role-management-permissions).
-- RoleManagement.ReadWrite.Directory
-- RoleManagement.Read.Directory
- The easiest way to specify the required permissions is to use the Azure AD consent framework.
+The easiest way to specify the required permissions is to use the Azure AD consent framework.
### Azure resource roles
- The PIM API for Azure resource roles is developed on top of the Azure Resource Manager framework. You will need to give consent to Azure Resource Management but won't need any Graph API permission. You will also need to make sure the user or the service principal calling the API has at least the Owner or User Access Administrator role on the resource you are trying to administer.
+ The PIM API for Azure resource roles is developed on top of the Azure Resource Manager framework. You will need to give consent to Azure Resource Management but won't need any Microsoft Graph API permission. You will also need to make sure the user or the service principal calling the API has at least the Owner or User Access Administrator role on the resource you are trying to administer.
## Calling PIM API with an app-only token
In the current iteration, there is no API support for PIM alerts and privileged
PIM API consists of two categories that are consistent for both the API for Azure AD roles and Azure resource roles: assignment and activation API requests, and policy settings.
-### Assignment and activation API
+### Assignment and activation APIs
-To make eligible assignments, time-bound eligible/active assignments, and to activate assignments, PIM provides the following entities:
+To make eligible assignments, time-bound eligible or active assignments, and to activate eligible assignments, PIM provides the following resources:
-- RoleAssignmentSchedule
-- RoleEligibilitySchedule
-- RoleAssignmentScheduleInstance
-- RoleEligibilityScheduleInstance
-- RoleAssignmentScheduleRequest
-- RoleEligibilityScheduleRequest
+- [unifiedRoleAssignmentScheduleRequest](/graph/api/resources/unifiedroleassignmentschedulerequest)
+- [unifiedRoleEligibilityScheduleRequest](/graph/api/resources/unifiedroleeligibilityschedulerequest)
-These entities work alongside pre-existing roleDefinition and roleAssignment entities for both Azure AD roles and Azure roles to allow you to create end to end scenarios.
+These entities work alongside pre-existing **roleDefinition** and **roleAssignment** resources for both Azure AD roles and Azure roles to allow you to create end-to-end scenarios.
- If you are trying to create or retrieve a persistent (active) role assignment that does not have a schedule (start or end time), you should avoid these PIM entities and focus on the read/write operations under the roleAssignment entity
-- To create an eligible assignment with or without an expiration time you can use the write operation on roleEligibilityScheduleRequest
+- To create an eligible assignment with or without an expiration time you can use the write operation on the [unifiedRoleEligibilityScheduleRequest](/graph/api/resources/unifiedroleeligibilityschedulerequest) resource
+
+- To create a persistent (active) assignment with a schedule (start or end time), you can use the write operation on the [unifiedRoleAssignmentScheduleRequest](/graph/api/resources/unifiedroleassignmentschedulerequest) resource
+
+- To activate an eligible assignment, you should also use the [write operation on roleAssignmentScheduleRequest](/graph/api/rbacapplication-post-roleassignmentschedulerequests) with the **action** property set to `selfActivate`.
-- To create a persistent (active) assignment with a schedule (start or end time), you can use the write operation on roleAssignmentScheduleRequest
+These request objects create the following read-only objects:
-- To activate an eligible assignment, you should also use the write operation on roleAssignmentScheduleRequest with a modified action parameter called selfActivate
+- [unifiedRoleAssignmentSchedule](/graph/api/resources/unifiedroleassignmentschedule)
+- [unifiedRoleEligibilitySchedule](/graph/api/resources/unifiedroleeligibilityschedule)
+- [unifiedRoleAssignmentScheduleInstance](/graph/api/resources/unifiedroleassignmentscheduleinstance)
+- [unifiedRoleEligibilityScheduleInstance](/graph/api/resources/unifiedroleeligibilityscheduleinstance)
-Each of the request objects would either create a roleAssignmentSchedule or a roleEligibilitySchedule object. These objects are read-only and show a schedule of all the current and future assignments.
+The **unifiedRoleAssignmentSchedule** and **unifiedRoleEligibilitySchedule** objects show a schedule of all the current and future assignments.
-When an eligible assignment is activated, the roleEligibilityScheduleInstance continues to exist. The roleAssignmentScheduleRequest for the activation would create a separate roleAssignmentSchedule and roleAssignmentScheduleInstance for that activated duration.
+When an eligible assignment is activated, the **unifiedRoleEligibilityScheduleInstance** continues to exist. The **unifiedRoleAssignmentScheduleRequest** for the activation would create a separate **unifiedRoleAssignmentSchedule** object and a **unifiedRoleAssignmentScheduleInstance** for that activated duration.
The instance objects are the actual assignments that currently exist, whether it's an eligible assignment or an active assignment. Use the GET operation on the instance entity to retrieve a list of eligible or active assignments for a role or user.
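As a sketch of that pattern, the following v1.0 requests list the signed-in caller's own eligible and active assignment instances (assuming the caller is the principal):

````HTTP
GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleEligibilityScheduleInstances/filterByCurrentUser(on='principal')
GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleInstances/filterByCurrentUser(on='principal')
````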
-### Policy setting API
+For more information about assignment and activation APIs, see [PIM API for managing role assignments and eligibilities](/graph/api/resources/privilegedidentitymanagementv3-overview#pim-api-for-managing-role-assignment).
+
+### Policy settings APIs
+
+To manage the settings of Azure AD roles, we provide the following entities:
-To manage the setting, we provide the following entities:
+- [unifiedRoleManagementPolicy](/graph/api/resources/unifiedrolemanagementpolicy)
+- [unifiedRoleManagementPolicyAssignment](/graph/api/resources/unifiedrolemanagementpolicyassignment)
-- roleManagementPolicy
-- roleManagementPolicyAssignment
+The [unifiedRoleManagementPolicy](/graph/api/resources/unifiedrolemanagementpolicy) resource, through its **rules** relationship, defines the rules or settings of the Azure AD role, such as whether MFA or approval is required, whether to send email notifications and to whom, or whether permanent assignments are allowed. The [unifiedRoleManagementPolicyAssignment](/graph/api/resources/unifiedrolemanagementpolicyassignment) object attaches the policy to a specific role.
-The *role management policy* defines the setting of the rule. For example, whether MFA/approval is required, whether and who to send the email notifications to, or whether permanent assignments are allowed or not. The *policy assignment* attaches the policy to a specific role.
+Use the APIs supported by these resources to retrieve role management policy assignments for all Azure AD roles or filter the list by a **roleDefinitionId**, and then update the rules or settings in the policy associated with the Azure AD role.
-Use this API is to get a list of all the roleManagementPolicyAssignments, filter it by the roleDefinitionID you want to modify, and then update the policy associated with the policyAssignment.
+For more information about the policy settings APIs, see [role settings and PIM](/graph/api/resources/privilegedidentitymanagementv3-overview#role-settings-and-pim).
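As a sketch of that flow (the role definition GUID and policy ID are placeholders, and `Expiration_EndUser_Assignment` is one of the documented rule IDs), first filter the policy assignments by role, then update a rule on the returned policy:

````HTTP
GET https://graph.microsoft.com/v1.0/policies/roleManagementPolicyAssignments?$filter=scopeId eq '/' and scopeType eq 'DirectoryRole' and roleDefinitionId eq '<definition-ID-GUID>'

PATCH https://graph.microsoft.com/v1.0/policies/roleManagementPolicies/<policy-ID>/rules/Expiration_EndUser_Assignment
{
    "@odata.type": "#microsoft.graph.unifiedRoleManagementPolicyExpirationRule",
    "isExpirationRequired": true,
    "maximumDuration": "PT8H"
}
````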
## Relationship between PIM entities and role assignment entities
-The only link between the PIM entity and the role assignment entity for persistent (active) assignment for either Azure AD roles or Azure roles is the roleAssignmentScheduleInstance. There is a one-to-one mapping between the two entities. That mapping means roleAssignment and roleAssignmentScheduleInstance would both include:
+The only link between the PIM entity and the role assignment entity for persistent (active) assignment for either Azure AD roles or Azure roles is the **unifiedRoleAssignmentScheduleInstance**. There is a one-to-one mapping between the two entities. That mapping means **roleAssignment** and **unifiedRoleAssignmentScheduleInstance** would both include:
- Persistent (active) assignments made outside of PIM
- Persistent (active) assignments with a schedule made inside PIM
The only link between the PIM entity and the role assignment entity for persiste
## Next steps
-- [Azure AD Privileged Identity Management API reference](/graph/api/resources/privilegedidentitymanagement-root?view=graph-rest-beta&preserve-view=true)
+- [Azure AD Privileged Identity Management API reference](/graph/api/resources/privilegedidentitymanagementv3-overview)
active-directory Pim How To Activate Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-activate-role.md
When you need to assume an Azure AD role, you can request activation by opening
![Activation request is pending approval notification](./media/pim-resource-roles-activate-your-roles/resources-my-roles-activate-notification.png)
-## Activate a role using Graph API
+## Activate a role using Microsoft Graph API
+
+For more information about Microsoft Graph APIs for PIM, see [Overview of role management through the privileged identity management (PIM) API](/graph/api/resources/privilegedidentitymanagementv3-overview).
### Get all eligible roles that you can activate
-When a user gets their role eligibility via group membership, this Graph request doesn't return their eligibility.
+When a user gets their role eligibility via group membership, this Microsoft Graph request doesn't return their eligibility.
#### HTTP request
````HTTP
-GET https://graph.microsoft.com/beta/roleManagement/directory/roleEligibilityScheduleRequests/filterByCurrentUser(on='principal')
+GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleEligibilityScheduleRequests/filterByCurrentUser(on='principal')
````
#### HTTP response
GET https://graph.microsoft.com/beta/roleManagement/directory/roleEligibilitySch
To save space we're showing only the response for one role, but all eligible role assignments that you can activate will be listed.
````HTTP
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#Collection(unifiedRoleEligibilityScheduleRequest)",
- "value": [
- {
- "@odata.type": "#microsoft.graph.unifiedRoleEligibilityScheduleRequest",
- "id": "<request-ID-GUID>",
- "status": "Provisioned",
- "createdDateTime": "2021-07-15T19:39:53.33Z",
- "completedDateTime": "2021-07-15T19:39:53.383Z",
- "approvalId": null,
- "customData": null,
- "action": "AdminAssign",
- "principalId": "<principal-ID-GUID>",
- "roleDefinitionId": "<definition-ID-GUID>",
- "directoryScopeId": "/",
- "appScopeId": null,
- "isValidationOnly": false,
- "targetScheduleId": "<schedule-ID-GUID>",
- "justification": "test",
- "createdBy": {
- "application": null,
- "device": null,
- "user": {
- "displayName": null,
- "id": "<user-ID-GUID>"
- }
- },
- "scheduleInfo": {
- "startDateTime": "2021-07-15T19:39:53.3846704Z",
- "recurrence": null,
- "expiration": {
- "type": "noExpiration",
- "endDateTime": null,
- "duration": null
- }
- },
- "ticketInfo": {
- "ticketNumber": null,
- "ticketSystem": null
- }
- },
-}
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#Collection(unifiedRoleEligibilityScheduleRequest)",
+ "value": [
+ {
+ "@odata.type": "#microsoft.graph.unifiedRoleEligibilityScheduleRequest",
+ "id": "50d34326-f243-4540-8bb5-2af6692aafd0",
+ "status": "Provisioned",
+ "createdDateTime": "2022-04-12T18:26:08.843Z",
+ "completedDateTime": "2022-04-12T18:26:08.89Z",
+ "approvalId": null,
+ "customData": null,
+ "action": "adminAssign",
+ "principalId": "3fbd929d-8c56-4462-851e-0eb9a7b3a2a5",
+ "roleDefinitionId": "8424c6f0-a189-499e-bbd0-26c1753c96d4",
+ "directoryScopeId": "/",
+ "appScopeId": null,
+ "isValidationOnly": false,
+ "targetScheduleId": "50d34326-f243-4540-8bb5-2af6692aafd0",
+ "justification": "Assign Attribute Assignment Admin eligibility to myself",
+ "createdBy": {
+ "application": null,
+ "device": null,
+ "user": {
+ "displayName": null,
+ "id": "3fbd929d-8c56-4462-851e-0eb9a7b3a2a5"
+ }
+ },
+ "scheduleInfo": {
+ "startDateTime": "2022-04-12T18:26:08.8911834Z",
+ "recurrence": null,
+ "expiration": {
+ "type": "afterDateTime",
+ "endDateTime": "2024-04-10T00:00:00Z",
+ "duration": null
+ }
+ },
+ "ticketInfo": {
+ "ticketNumber": null,
+ "ticketSystem": null
+ }
+ }
+ ]
+}
````
-### Activate a role assignment with justification
+### Self-activate a role eligibility with justification
#### HTTP request
````HTTP
-POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentScheduleRequests
-
-{
- "action": "SelfActivate",
- "justification": "adssadasasd",
- "roleDefinitionId": "<definition-ID-GUID>",
- "directoryScopeId": "/",
- "principalId": "<principal-ID-GUID>"
-}
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests
+
+{
+ "action": "selfActivate",
+ "principalId": "071cc716-8147-4397-a5ba-b2105951cc0b",
+ "roleDefinitionId": "8424c6f0-a189-499e-bbd0-26c1753c96d4",
+ "directoryScopeId": "/",
+ "justification": "I need access to the Attribute Administrator role to manage attributes to be assigned to restricted AUs",
+ "scheduleInfo": {
+ "startDateTime": "2022-04-14T00:00:00.000Z",
+ "expiration": {
+ "type": "AfterDuration",
+ "duration": "PT5H"
+ }
+ },
+ "ticketInfo": {
+ "ticketNumber": "CONTOSO:Normal-67890",
+ "ticketSystem": "MS Project"
+ }
+}
````
#### HTTP response
````HTTP
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleAssignmentScheduleRequests/$entity",
- "id": "f1ccef03-8750-40e0-b488-5aa2f02e2e55",
- "status": "PendingApprovalProvisioning",
- "createdDateTime": "2021-07-15T19:51:07.1870599Z",
- "completedDateTime": "2021-07-15T19:51:17.3903028Z",
- "approvalId": "<approval-ID-GUID>",
- "customData": null,
- "action": "SelfActivate",
- "principalId": "<principal-ID-GUID>",
- "roleDefinitionId": "<definition-ID-GUID>",
- "directoryScopeId": "/",
- "appScopeId": null,
- "isValidationOnly": false,
- "targetScheduleId": "<schedule-ID-GUID>",
- "justification": "test",
- "createdBy": {
- "application": null,
- "device": null,
- "user": {
- "displayName": null,
- "id": "<user-ID-GUID>"
- }
- },
- "scheduleInfo": {
- "startDateTime": null,
- "recurrence": null,
- "expiration": {
- "type": "afterDuration",
- "endDateTime": null,
- "duration": "PT5H30M"
- }
- },
- "ticketInfo": {
- "ticketNumber": null,
- "ticketSystem": null
- }
-}
+HTTP/1.1 201 Created
+Content-Type: application/json
+
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#roleManagement/directory/roleAssignmentScheduleRequests/$entity",
+ "id": "911bab8a-6912-4de2-9dc0-2648ede7dd6d",
+ "status": "Granted",
+ "createdDateTime": "2022-04-13T08:52:32.6485851Z",
+ "completedDateTime": "2022-04-14T00:00:00Z",
+ "approvalId": null,
+ "customData": null,
+ "action": "selfActivate",
+ "principalId": "071cc716-8147-4397-a5ba-b2105951cc0b",
+ "roleDefinitionId": "8424c6f0-a189-499e-bbd0-26c1753c96d4",
+ "directoryScopeId": "/",
+ "appScopeId": null,
+ "isValidationOnly": false,
+ "targetScheduleId": "911bab8a-6912-4de2-9dc0-2648ede7dd6d",
+ "justification": "I need access to the Attribute Administrator role to manage attributes to be assigned to restricted AUs",
+ "createdBy": {
+ "application": null,
+ "device": null,
+ "user": {
+ "displayName": null,
+ "id": "071cc716-8147-4397-a5ba-b2105951cc0b"
+ }
+ },
+ "scheduleInfo": {
+ "startDateTime": "2022-04-14T00:00:00Z",
+ "recurrence": null,
+ "expiration": {
+ "type": "afterDuration",
+ "endDateTime": null,
+ "duration": "PT5H"
+ }
+ },
+ "ticketInfo": {
+ "ticketNumber": "CONTOSO:Normal-67890",
+ "ticketSystem": "MS Project"
+ }
+}
````
## View the status of activation requests
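A requester can also check their own requests over Microsoft Graph, for example with this sketch (the `filterByCurrentUser` call is the documented pattern; inspect the **status** property of each returned request):

````HTTP
GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests/filterByCurrentUser(on='principal')
````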
active-directory Pim How To Add Role To User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-add-role-to-user.md
For certain roles, the scope of the granted permissions can be restricted to a s
For more information about creating administrative units, see [Add and remove administrative units](../roles/admin-units-manage.md).
-## Assign a role using Graph API
+## Assign a role using Microsoft Graph API
+
+For more information about Microsoft Graph APIs for PIM, see [Overview of role management through the privileged identity management (PIM) API](/graph/api/resources/privilegedidentitymanagementv3-overview).
For permissions required to use the PIM API, see [Understand the Privileged Identity Management APIs](pim-apis.md).
### Eligible with no end date
-The following is a sample HTTP request to create an eligible assignment with no end date. For details on the API commands including samples such as C# and JavaScript, see [Create unifiedRoleEligibilityScheduleRequest](/graph/api/unifiedroleeligibilityschedulerequest-post-unifiedroleeligibilityschedulerequests?view=graph-rest-beta&tabs=http&preserve-view=true).
+The following is a sample HTTP request to create an eligible assignment with no end date. For details on the API commands including request samples in languages such as C# and JavaScript, see [Create roleEligibilityScheduleRequests](/graph/api/rbacapplication-post-roleeligibilityschedulerequests).
#### HTTP request
````HTTP
-POST https://graph.microsoft.com/beta/rolemanagement/directory/roleEligibilityScheduleRequests
-
- "action": "AdminAssign",
- "justification": "abcde",
- "directoryScopeId": "/",
- "principalId": "<principal-ID-GUID>",
- "roleDefinitionId": "<definition-ID-GUID>",
- "scheduleInfo": {
- "startDateTime": "2021-07-15T19:15:08.941Z",
- "expiration": {
- "type": "NoExpiration" }
- }
-{
-}
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleEligibilityScheduleRequests
+Content-Type: application/json
+
+{
+ "action": "adminAssign",
+ "justification": "Permanently assign the Global Reader to the auditor",
+ "roleDefinitionId": "f2ef992c-3afb-46b9-b7cf-a126ee74c451",
+ "directoryScopeId": "/",
+ "principalId": "071cc716-8147-4397-a5ba-b2105951cc0b",
+ "scheduleInfo": {
+ "startDateTime": "2022-04-10T00:00:00Z",
+ "expiration": {
+ "type": "noExpiration"
+ }
+ }
+}
````
#### HTTP response
POST https://graph.microsoft.com/beta/rolemanagement/directory/roleEligibilitySc
The following is an example of the response. The response object shown here might be shortened for readability.
````HTTP
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleEligibilityScheduleRequests/$entity",
- "id": "<schedule-ID-GUID>",
- "status": "Provisioned",
- "createdDateTime": "2021-07-15T19:47:41.0939004Z",
- "completedDateTime": "2021-07-15T19:47:42.4376681Z",
- "approvalId": null,
- "customData": null,
- "action": "AdminAssign",
- "principalId": "<principal-ID-GUID>",
- "roleDefinitionId": "<definition-ID-GUID>",
- "directoryScopeId": "/",
- "appScopeId": null,
- "isValidationOnly": false,
- "targetScheduleId": "<schedule-ID-GUID>",
- "justification": "test",
- "createdBy": {
- "application": null,
- "device": null,
- "user": {
- "displayName": null,
- "id": "<user-ID-GUID>"
- }
- },
- "scheduleInfo": {
- "startDateTime": "2021-07-15T19:47:42.4376681Z",
- "recurrence": null,
- "expiration": {
- "type": "noExpiration",
- "endDateTime": null,
- "duration": null
- }
- },
- "ticketInfo": {
- "ticketNumber": null,
- "ticketSystem": null
- }
-}
+HTTP/1.1 201 Created
+Content-Type: application/json
+
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#roleManagement/directory/roleEligibilityScheduleRequests/$entity",
+ "id": "42159c11-45a9-4631-97e4-b64abdd42c25",
+ "status": "Provisioned",
+ "createdDateTime": "2022-05-13T13:40:33.2364309Z",
+ "completedDateTime": "2022-05-13T13:40:34.6270851Z",
+ "approvalId": null,
+ "customData": null,
+ "action": "adminAssign",
+ "principalId": "071cc716-8147-4397-a5ba-b2105951cc0b",
+ "roleDefinitionId": "f2ef992c-3afb-46b9-b7cf-a126ee74c451",
+ "directoryScopeId": "/",
+ "appScopeId": null,
+ "isValidationOnly": false,
+ "targetScheduleId": "42159c11-45a9-4631-97e4-b64abdd42c25",
+ "justification": "Permanently assign the Global Reader to the auditor",
+ "createdBy": {
+ "application": null,
+ "device": null,
+ "user": {
+ "displayName": null,
+ "id": "3fbd929d-8c56-4462-851e-0eb9a7b3a2a5"
+ }
+ },
+ "scheduleInfo": {
+ "startDateTime": "2022-05-13T13:40:34.6270851Z",
+ "recurrence": null,
+ "expiration": {
+ "type": "noExpiration",
+ "endDateTime": null,
+ "duration": null
+ }
+ },
+ "ticketInfo": {
+ "ticketNumber": null,
+ "ticketSystem": null
+ }
+}
````
### Active and time-bound
-The following is a sample HTTP request to create an active assignment that's time-bound. For details on the API commands including samples such as C# and JavaScript, see [Create unifiedRoleEligibilityScheduleRequest](/graph/api/unifiedroleeligibilityschedulerequest-post-unifiedroleeligibilityschedulerequests?view=graph-rest-beta&tabs=http&preserve-view=true).
+The following is a sample HTTP request to create an active assignment that's time-bound. For details on the API commands including request samples in languages such as C# and JavaScript, see [Create roleAssignmentScheduleRequests](/graph/api/rbacapplication-post-roleassignmentschedulerequests).
#### HTTP request
````HTTP
-POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentScheduleRequests
-
-{
- "action": "AdminAssign",
- "justification": "abcde",
- "directoryScopeId": "/",
- "principalId": "<principal-ID-GUID>",
- "roleDefinitionId": "<definition-ID-GUID>",
- "scheduleInfo": {
- "startDateTime": "2021-07-15T19:15:08.941Z",
- "expiration": {
- "type": "AfterDuration",
- "duration": "PT3H"
- }
- }
-}
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests
+
+{
+ "action": "adminAssign",
+ "justification": "Assign the Exchange Recipient Administrator to the mail admin",
+ "roleDefinitionId": "31392ffb-586c-42d1-9346-e59415a2cc4e",
+ "directoryScopeId": "/",
+ "principalId": "071cc716-8147-4397-a5ba-b2105951cc0b",
+ "scheduleInfo": {
+ "startDateTime": "2022-04-10T00:00:00Z",
+ "expiration": {
+ "type": "afterDuration",
+ "duration": "PT3H"
+ }
+ }
+}
````
#### HTTP response
POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentSch
The following is an example of the response. The response object shown here might be shortened for readability.
````HTTP
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleAssignmentScheduleRequests/$entity",
- "id": "<schedule-ID-GUID>",
- "status": "Provisioned",
- "createdDateTime": "2021-07-15T19:15:09.7093491Z",
- "completedDateTime": "2021-07-15T19:15:11.4437343Z",
- "approvalId": null,
- "customData": null,
- "action": "AdminAssign",
- "principalId": "<principal-ID-GUID>",
- "roleDefinitionId": "<definition-ID-GUID>",
- "directoryScopeId": "/",
- "appScopeId": null,
- "isValidationOnly": false,
- "targetScheduleId": "<schedule-ID-GUID>",
- "justification": "test",
- "createdBy": {
- "application": null,
- "device": null,
- "user": {
- "displayName": null,
- "id": "<user-ID-GUID>"
- }
- },
- "scheduleInfo": {
- "startDateTime": "2021-07-15T19:15:11.4437343Z",
- "recurrence": null,
- "expiration": {
- "type": "afterDuration",
- "endDateTime": null,
- "duration": "PT3H"
- }
- },
- "ticketInfo": {
- "ticketNumber": null,
- "ticketSystem": null
- }
-}
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#roleManagement/directory/roleAssignmentScheduleRequests/$entity",
+ "id": "ac643e37-e75c-4b42-960a-b0fc3fbdf4b3",
+ "status": "Provisioned",
+ "createdDateTime": "2022-05-13T14:01:48.0145711Z",
+ "completedDateTime": "2022-05-13T14:01:49.8589701Z",
+ "approvalId": null,
+ "customData": null,
+ "action": "adminAssign",
+ "principalId": "071cc716-8147-4397-a5ba-b2105951cc0b",
+ "roleDefinitionId": "31392ffb-586c-42d1-9346-e59415a2cc4e",
+ "directoryScopeId": "/",
+ "appScopeId": null,
+ "isValidationOnly": false,
+ "targetScheduleId": "ac643e37-e75c-4b42-960a-b0fc3fbdf4b3",
+ "justification": "Assign the Exchange Recipient Administrator to the mail admin",
+ "createdBy": {
+ "application": null,
+ "device": null,
+ "user": {
+ "displayName": null,
+ "id": "3fbd929d-8c56-4462-851e-0eb9a7b3a2a5"
+ }
+ },
+ "scheduleInfo": {
+ "startDateTime": "2022-05-13T14:01:49.8589701Z",
+ "recurrence": null,
+ "expiration": {
+ "type": "afterDuration",
+ "endDateTime": null,
+ "duration": "PT3H"
+ }
+ },
+ "ticketInfo": {
+ "ticketNumber": null,
+ "ticketSystem": null
+ }
+}
````
## Update or remove an existing role assignment
Follow these steps to update or remove an existing role assignment. **Azure AD P
1. Select **Update** or **Remove** to update or remove the role assignment.
-## Remove eligible assignment via API
+## Remove eligible assignment via Microsoft Graph API
+
+The following is a sample HTTP request to revoke an eligible assignment to a role from a principal. For details on the API commands including request samples in languages such as C# and JavaScript, see [Create roleEligibilityScheduleRequests](/graph/api/rbacapplication-post-roleeligibilityschedulerequests).
### Request
````HTTP
-POST https://graph.microsoft.com/beta/roleManagement/directory/roleEligibilityScheduleRequests
-
-
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleEligibilityScheduleRequests
{ "action": "AdminRemove",
POST https://graph.microsoft.com/beta/roleManagement/directory/roleEligibilitySc
````HTTP
{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleEligibilityScheduleRequests/$entity",
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#roleManagement/directory/roleEligibilityScheduleRequests/$entity",
"id": "fc7bb2ca-b505-4ca7-ad2a-576d152633de", "status": "Revoked", "createdDateTime": "2021-07-15T20:23:23.85453Z",
active-directory Pim How To Renew Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-renew-extend.md
To extend a role assignment, browse to the role or assignment view in Privileged
![Azure AD Roles - Assignments page listing eligible roles with links to extend](./media/pim-how-to-renew-extend/extend-admin-extend.png)
-## Extend role assignments using Graph API
+## Extend role assignments using Microsoft Graph API
-Extend an active assignment using Graph API.
+In the following request, an administrator extends an active assignment using Microsoft Graph API.
#### HTTP request
````HTTP
-POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentScheduleRequests
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests
-{
- "action": "AdminExtend",
- "justification": "abcde",
- "roleDefinitionId": "<definition-ID-GUID>",
- "directoryScopeId": "/",
- `"principalId": "<principal-ID-GUID>",
- "scheduleInfo": {
- "startDateTime": "2021-07-15T19:15:08.941Z",
- "expiration": {
- "type": "AfterDuration",
- "duration": "PT3H"
- }
- }
+{
+ "action": "adminExtend",
+ "justification": "TEST",
+ "roleDefinitionId": "31392ffb-586c-42d1-9346-e59415a2cc4e",
+ "directoryScopeId": "/",
+ "principalId": "071cc716-8147-4397-a5ba-b2105951cc0b",
+ "scheduleInfo": {
+ "startDateTime": "2022-04-10T00:00:00Z",
+ "expiration": {
+ "type": "afterDuration",
+ "duration": "PT3H"
+ }
+ }
}
````
#### HTTP response
````HTTP
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleAssignmentScheduleRequests/$entity",
- "id": "<assignment-ID-GUID>",
- "status": "Provisioned",
- "createdDateTime": "2021-07-15T20:26:44.865248Z",
- "completedDateTime": "2021-07-15T20:26:47.9434068Z",
- "approvalId": null,
- "customData": null,
- "action": "AdminExtend",
- "principalId": "<principal-ID-GUID>",
- "roleDefinitionId": "<definition-ID-GUID>",
- "directoryScopeId": "/",
- "appScopeId": null,
- "isValidationOnly": false,
- "targetScheduleId": "<schedule-ID-GUID>",
- "justification": "test",
- "createdBy": {
- "application": null,
- "device": null,
- "user": {
- "displayName": null,
- "id": "<user-ID-GUID>"
- }
- },
- "scheduleInfo": {
- "startDateTime": "2021-07-15T20:26:47.9434068Z",
- "recurrence": null,
- "expiration": {
- "type": "afterDuration",
- "endDateTime": null,
- "duration": "PT3H"
- }
- },
- "ticketInfo": {
- "ticketNumber": null,
- "ticketSystem": null
- }
-}
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#roleManagement/directory/roleAssignmentScheduleRequests/$entity",
+ "id": "c3a3aa36-22e2-4240-8e4c-ea2a3af7c30f",
+ "status": "Provisioned",
+ "createdDateTime": "2022-05-13T16:18:36.3647674Z",
+ "completedDateTime": "2022-05-13T16:18:40.0835993Z",
+ "approvalId": null,
+ "customData": null,
+ "action": "adminExtend",
+ "principalId": "071cc716-8147-4397-a5ba-b2105951cc0b",
+ "roleDefinitionId": "31392ffb-586c-42d1-9346-e59415a2cc4e",
+ "directoryScopeId": "/",
+ "appScopeId": null,
+ "isValidationOnly": false,
+ "targetScheduleId": "c3a3aa36-22e2-4240-8e4c-ea2a3af7c30f",
+ "justification": "TEST",
+ "createdBy": {
+ "application": null,
+ "device": null,
+ "user": {
+ "displayName": null,
+ "id": "3fbd929d-8c56-4462-851e-0eb9a7b3a2a5"
+ }
+ },
+ "scheduleInfo": {
+ "startDateTime": "2022-05-13T16:18:40.0835993Z",
+ "recurrence": null,
+ "expiration": {
+ "type": "afterDuration",
+ "endDateTime": null,
+ "duration": "PT3H"
+ }
+ },
+ "ticketInfo": {
+ "ticketNumber": null,
+ "ticketSystem": null
+ }
+}
````
## Renew role assignments
advisor Advisor Reference Operational Excellence Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md
Learn more about [Kubernetes service - EnableClusterAutoscaler (Enable the Clust
Some of the subnets for this cluster's node pools are full and cannot take any more worker nodes. Using the Azure CNI plugin requires to reserve IP addresses for each node and all the pods for the node at node provisioning time. If there is not enough IP address space in the subnet, no worker nodes can be deployed. Additionally, the AKS cluster cannot be upgraded if the node subnet is full.
-Learn more about [Kubernetes service - NodeSubnetIsFull (The AKS node pool subnet is full)](../aks/use-multiple-node-pools.md#add-a-node-pool-with-a-unique-subnet-preview).
+Learn more about [Kubernetes service - NodeSubnetIsFull (The AKS node pool subnet is full)](../aks/use-multiple-node-pools.md#add-a-node-pool-with-a-unique-subnet).
### Disable the Application Routing Addon
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
Title: Configure Azure CNI networking in Azure Kubernetes Service (AKS)
description: Learn how to configure Azure CNI (advanced) networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnet. Previously updated : 06/03/2019 Last updated : 05/16/2022
The following screenshot from the Azure portal shows an example of configuring t
![Advanced networking configuration in the Azure portal][portal-01-networking-advanced]
-## Dynamic allocation of IPs and enhanced subnet support (preview)
-
+## Dynamic allocation of IPs and enhanced subnet support
A drawback with the traditional CNI is the exhaustion of pod IP addresses as the AKS cluster grows, resulting in the need to rebuild the entire cluster in a bigger subnet. The new dynamic IP allocation capability in Azure CNI solves this problem by allotting pod IPs from a subnet separate from the subnet hosting the AKS cluster. It offers the following benefits:
The [prerequisites][prerequisites] already listed for Azure CNI still apply, but
* Only Linux node clusters and node pools are supported.
* AKS Engine and DIY clusters are not supported.
-### Install the `aks-preview` Azure CLI
-
-You will need the *aks-preview* Azure CLI extension. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
-
-### Register the `PodSubnetPreview` preview feature
-
-To use the feature, you must also enable the `PodSubnetPreview` feature flag on your subscription.
-
-Register the `PodSubnetPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "PodSubnetPreview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/PodSubnetPreview')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+* Azure CLI version `2.37.0` or later.
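As an illustrative sketch (resource names and subnet IDs are placeholders), dynamic allocation is enabled by passing a dedicated pod subnet when creating the cluster:

```azurecli-interactive
# Sketch: create a cluster whose pods get IPs from a dedicated subnet (values are placeholders)
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --vnet-subnet-id <node-subnet-resource-ID> \
    --pod-subnet-id <pod-subnet-resource-ID>
```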
### Planning IP addressing
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
Title: Use multiple node pools in Azure Kubernetes Service (AKS)
description: Learn how to create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS) Previously updated : 02/11/2021 Last updated : 05/16/2022
The following example output shows that *mynodepool* has been successfully creat
> [!TIP]
> If no *VmSize* is specified when you add a node pool, the default size is *Standard_D2s_v3* for Windows node pools and *Standard_DS2_v2* for Linux node pools. If no *OrchestratorVersion* is specified, it defaults to the same version as the control plane.
-### Add a node pool with a unique subnet (preview)
+### Add a node pool with a unique subnet
A workload may require splitting a cluster's nodes into separate pools for logical isolation. This isolation can be supported with separate subnets dedicated to each node pool in the cluster. This can address requirements such as having non-contiguous virtual network address space to split across node pools.
+> [!NOTE]
+> Make sure to use Azure CLI version `2.35.0` or later.
+
#### Limitations
* All subnets assigned to nodepools must belong to the same virtual network.
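For illustration, adding a pool bound to its own subnet might look like this sketch (names and the subnet ID are placeholders):

```azurecli-interactive
# Sketch: add a node pool that draws node IPs from its own dedicated subnet (values are placeholders)
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --node-count 3 \
    --vnet-subnet-id <subnet-resource-ID>
```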
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resources providers that are marked with **- registered** are registered by
| Microsoft.Logic | [Logic Apps](../../logic-apps/index.yml) |
| Microsoft.MachineLearning | [Machine Learning Studio](../../machine-learning/classic/index.yml) |
| Microsoft.MachineLearningServices | [Azure Machine Learning](../../machine-learning/index.yml) |
-| Microsoft.Maintenance | [Azure Maintenance](../../virtual-machines/maintenance-control-cli.md) |
+| Microsoft.Maintenance | [Azure Maintenance](../../virtual-machines/maintenance-configurations.md) |
| Microsoft.ManagedIdentity | [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/index.yml) |
| Microsoft.ManagedNetwork | Virtual networks managed by PaaS services |
| Microsoft.ManagedServices | [Azure Lighthouse](../../lighthouse/index.yml) |
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md
Title: Lock resources to prevent changes
-description: Prevent users from updating or deleting Azure resources by applying a lock for all users and roles.
+ Title: Protect your Azure resources with a lock
+description: You can safeguard Azure resources from updates or deletions by locking down all users and roles.
Previously updated : 04/13/2022 Last updated : 05/13/2022
-# Lock resources to prevent unexpected changes
+# Lock your resources to protect your infrastructure
-As an administrator, you can lock a subscription, resource group, or resource to prevent other users in your organization from accidentally deleting or modifying critical resources. The lock overrides any permissions the user might have.
+As an administrator, you can lock an Azure subscription, resource group, or resource to protect them from accidental user deletions and modifications. The lock overrides any user permissions.
-You can set the lock level to **CanNotDelete** or **ReadOnly**. In the portal, the locks are called **Delete** and **Read-only** respectively.
+You can set locks that prevent either deletions or modifications. In the portal, these locks are called **Delete** and **Read-only**. In the command line, these locks are called **CanNotDelete** and **ReadOnly**. In the left navigation panel, the subscription lock feature's name is **Resource locks**, while the resource group lock feature's name is **Locks**.
-- **CanNotDelete** means authorized users can still read and modify a resource, but they can't delete the resource.
-- **ReadOnly** means authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the **Reader** role.
+- **CanNotDelete** means authorized users can read and modify a resource, but they can't delete it.
+- **ReadOnly** means authorized users can read a resource, but they can't delete or update it. Applying this lock is similar to restricting all authorized users to the permissions that the **Reader** role provides.
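For example, a minimal Azure CLI sketch that applies a delete lock to a resource group (the names are illustrative):

```azurecli
# Sketch: block deletion of a resource group (names are illustrative)
az lock create --name LockGroup --lock-type CanNotDelete --resource-group exampleresourcegroup
```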
Unlike role-based access control, you use management locks to apply a restriction across all users and roles. To learn about setting permissions for users and roles, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md).
## Lock inheritance
-When you apply a lock at a parent scope, all resources within that scope inherit the same lock. Even resources you add later inherit the lock from the parent. The most restrictive lock in the inheritance takes precedence.
+When you apply a lock at a parent scope, all resources within that scope inherit the same lock. Even resources you add later inherit the same parent lock. The most restrictive lock in the inheritance takes precedence.
-If you have a **Delete** lock on a resource and attempt to delete its resource group, the whole delete operation is blocked. Even if the resource group or other resources in the resource group aren't locked, the deletion doesn't happen. You never have a partial deletion.
+If you have a **Delete** lock on a resource and attempt to delete its resource group, the feature blocks the whole delete operation. Even if the resource group or other resources in the resource group are unlocked, the deletion doesn't happen. You never have a partial deletion.
-When you [cancel an Azure subscription](../../cost-management-billing/manage/cancel-azure-subscription.md#what-happens-after-subscription-cancellation), the resources are initially deactivated but not deleted. A resource lock doesn't block canceling the subscription. After a waiting period, the resources are permanently deleted. The resource lock doesn't prevent the permanent deletion of the resources.
+When you [cancel an Azure subscription](../../cost-management-billing/manage/cancel-azure-subscription.md#what-happens-after-subscription-cancellation):
+* A resource lock doesn't block the subscription cancellation.
+* Azure preserves your resources by deactivating them instead of immediately deleting them.
+* Azure only deletes your resources permanently after a waiting period.
## Understand scope of locks
> [!NOTE]
-> It's important to understand that locks don't apply to all types of operations. Azure operations can be divided into two categories - control plane and data plane. **Locks only apply to control plane operations**.
+> Locks only apply to control plane Azure operations and not data plane operations.
-Control plane operations are operations sent to `https://management.azure.com`. Data plane operations are operations sent to your instance of a service, such as `https://myaccount.blob.core.windows.net/`. For more information, see [Azure control plane and data plane](control-plane-and-data-plane.md). To discover which operations use the control plane URL, see the [Azure REST API](/rest/api/azure/).
+Azure control plane operations go to `https://management.azure.com`. Azure data plane operations go to your service instance, such as `https://myaccount.blob.core.windows.net/`. See [Azure control plane and data plane](control-plane-and-data-plane.md). To discover which operations use the control plane URL, see the [Azure REST API](/rest/api/azure/).
-This distinction means locks prevent changes to a resource, but they don't restrict how resources perform their own functions. For example, a ReadOnly lock on a SQL Database logical server prevents you from deleting or modifying the server. It doesn't prevent you from creating, updating, or deleting data in the databases on that server. Data transactions are permitted because those operations aren't sent to `https://management.azure.com`.
+The distinction means locks protect a resource from changes, but they don't restrict how a resource performs its functions. A ReadOnly lock, for example, on an SQL Database logical server, protects it from deletions or modifications. It allows you to create, update, or delete data in the server database. Data plane operations allow data transactions. These requests don't go to `https://management.azure.com`.
-More examples of the differences between control and data plane operations are described in the next section.
+## Considerations before applying your locks
-## Considerations before applying locks
+Applying locks can lead to unexpected results. Some operations that don't seem to modify a resource actually require actions that locks block. Locks block any operation that requires a POST request to the Azure Resource Manager API. Some common examples of blocked operations are:
-Applying locks can lead to unexpected results because some operations that don't seem to modify the resource actually require actions that are blocked by the lock. Locks will prevent any operations that require a POST request to the Azure Resource Manager API. Some common examples of the operations that are blocked by locks are:
+- A read-only lock on a **storage account** prevents users from listing the account keys. The Azure Storage [List Keys](/rest/api/storagerp/storageaccounts/listkeys) operation is handled through a POST request to protect access to the account keys, which provide complete access to data in the storage account. When a read-only lock is configured for a storage account, users who don't have the account keys must use Azure AD credentials to access blob or queue data. A read-only lock also prevents the assignment of Azure RBAC roles that are scoped to the storage account or to a data container (blob container or queue).
-- A read-only lock on a **storage account** prevents users from listing the account keys. The Azure Storage [List Keys](/rest/api/storagerp/storageaccounts/listkeys) operation is handled through a POST request to protect access to the account keys, which provide complete access to data in the storage account. When a read-only lock is configured for a storage account, users who don't have the account keys must use Azure AD credentials to access blob or queue data. A read-only lock also prevents the assignment of Azure RBAC roles that are scoped to the storage account or to a data container (blob container or queue).
+- A read-only lock on a **storage account** protects Azure Role-Based Access Control (RBAC) assignments scoped to the storage account or a data container (blob container or queue).
-- A cannot-delete lock on a **storage account** doesn't prevent data within that account from being deleted or modified. This type of lock only protects the storage account itself from being deleted. If a request uses [data plane operations](control-plane-and-data-plane.md#data-plane), the lock on the storage account doesn't protect blob, queue, table, or file data within that storage account. However, if the request uses [control plane operations](control-plane-and-data-plane.md#control-plane), the lock protects those resources.
+- A cannot-delete lock on a **storage account** doesn't protect account data from deletion or modification. It only protects the storage account from deletion. If a request uses [data plane operations](control-plane-and-data-plane.md#data-plane), the lock on the storage account doesn't protect blob, queue, table, or file data within that storage account. If the request uses [control plane operations](control-plane-and-data-plane.md#control-plane), however, the lock protects those resources.
- For example, if a request uses [File Shares - Delete](/rest/api/storagerp/file-shares/delete), which is a control plane operation, the deletion is denied. If the request uses [Delete Share](/rest/api/storageservices/delete-share), which is a data plane operation, the deletion succeeds. We recommend that you use the control plane operations.
+ If a request uses [File Shares - Delete](/rest/api/storagerp/file-shares/delete), for example, which is a control plane operation, the deletion fails. If the request uses [Delete Share](/rest/api/storageservices/delete-share), which is a data plane operation, the deletion succeeds. We recommend that you use a control plane operation.
-- A read-only lock on a **storage account** doesn't prevent data within that account from being deleted or modified. This type of lock only protects the storage account itself from being deleted or modified, and doesn't protect blob, queue, table, or file data within that storage account.
+- A read-only lock on a **storage account** doesn't prevent its data from deletion or modification. It also doesn't protect its blob, queue, table, or file data.
- A read-only lock on an **App Service** resource prevents Visual Studio Server Explorer from displaying files for the resource because that interaction requires write access.
-- A read-only lock on a **resource group** that contains an **App Service plan** prevents you from [scaling up or out the plan](../../app-service/manage-scale-up.md).
+- A read-only lock on a **resource group** that contains an **App Service plan** prevents you from [scaling the plan up or out](../../app-service/manage-scale-up.md).
-- A read-only lock on a **resource group** that contains a **virtual machine** prevents all users from starting or restarting the virtual machine. These operations require a POST request.
+- A read-only lock on a **resource group** that contains a **virtual machine** prevents all users from starting or restarting a virtual machine. These operations require a POST method request.
-- A read-only lock on a **resource group** that contains an **automation account** prevents all runbooks from starting. These operations require a POST request.
+- A read-only lock on a **resource group** that contains an **automation account** prevents all runbooks from starting. These operations require a POST method request.
-- A cannot-delete lock on a **resource group** prevents Azure Resource Manager from [automatically deleting deployments](../templates/deployment-history-deletions.md) in the history. If you reach 800 deployments in the history, your deployments will fail.
+- A cannot-delete lock on a **resource group** prevents Azure Resource Manager from [automatically deleting deployments](../templates/deployment-history-deletions.md) in the history. If you reach 800 deployments in the history, your deployments fail.
- A cannot-delete lock on the **resource group** created by **Azure Backup Service** causes backups to fail. The service supports a maximum of 18 restore points. When locked, the backup service can't clean up restore points. For more information, see [Frequently asked questions-Back up Azure VMs](../../backup/backup-azure-vm-backup-faq.yml).
Applying locks can lead to unexpected results because some operations that don't
- A read-only lock on a **subscription** prevents **Azure Advisor** from working correctly. Advisor is unable to store the results of its queries.
-- A read-only lock on an **Application Gateway** prevents you from getting the backend health of the application gateway. That [operation uses POST](/rest/api/application-gateway/application-gateways/backend-health), which is blocked by the read-only lock.
+- A read-only lock on an **Application Gateway** prevents you from getting the backend health of the application gateway. That [operation uses a POST method](/rest/api/application-gateway/application-gateways/backend-health), which a read-only lock blocks.
-- A read-only lock on a **AKS cluster** prevents all users from accessing any cluster resources from the **Kubernetes Resources** section of AKS cluster on the left of the Azure portal. These operations require a POST request for authentication.
+- A read-only lock on an **AKS cluster** limits how you can access cluster resources through the portal. A read-only lock prevents you from using the AKS cluster's **Kubernetes Resources** section in the Azure portal to choose a cluster resource. These operations require a POST method request for authentication.
## Who can create or delete locks
-To create or delete management locks, you must have access to `Microsoft.Authorization/*` or `Microsoft.Authorization/locks/*` actions. Of the built-in roles, only **Owner** and **User Access Administrator** are granted those actions.
+To create or delete management locks, you need access to `Microsoft.Authorization/*` or `Microsoft.Authorization/locks/*` actions. Only the **Owner** and the **User Access Administrator** built-in roles can create and delete management locks. You can create a custom role with the required permissions.
-## Managed Applications and locks
+## Managed applications and locks
-Some Azure services, such as Azure Databricks, use [managed applications](../managed-applications/overview.md) to implement the service. In that case, the service creates two resource groups. One resource group contains an overview of the service and isn't locked. The other resource group contains the infrastructure for the service and is locked.
+Some Azure services, such as Azure Databricks, use [managed applications](../managed-applications/overview.md) to implement the service. In that case, the service creates two resource groups. One is an unlocked resource group that contains a service overview. The other is a locked resource group that contains the service infrastructure.
-If you try to delete the infrastructure resource group, you get an error stating that the resource group is locked. If you try to delete the lock for the infrastructure resource group, you get an error stating that the lock can't be deleted because it's owned by a system application.
+If you try to delete the infrastructure resource group, you get an error stating that the resource group is locked. If you try to delete the lock for the infrastructure resource group, you get an error stating that the lock can't be deleted because a system application owns it.
Instead, delete the service, which also deletes the infrastructure resource group.
For managed applications, select the service you deployed.
![Select service](./media/lock-resources/select-service.png)
-Notice the service includes a link for a **Managed Resource Group**. That resource group holds the infrastructure and is locked. It can't be directly deleted.
+Notice the service includes a link for a **Managed Resource Group**. That resource group holds the infrastructure and is locked. You can only delete it indirectly.
![Show managed group](./media/lock-resources/show-managed-group.png)
-To delete everything for the service, including the locked infrastructure resource group, select **Delete** for the service.
+To delete everything for the service, including the locked infrastructure resource group, choose **Delete** for the service.
![Delete service](./media/lock-resources/delete-service.png)
### Template
-When using an Azure Resource Manager template (ARM template) or Bicep file to deploy a lock, you need to be aware of the scope of the lock and the scope of the deployment. To apply a lock at the deployment scope, such as locking a resource group or subscription, don't set the scope property. When locking a resource within the deployment scope, set the scope property.
+When using an Azure Resource Manager template (ARM template) or Bicep file to deploy a lock, it's good to understand how the deployment scope and the lock scope work together. To apply a lock at the deployment scope, such as locking a resource group or a subscription, leave the scope property unset. When locking a resource within the deployment scope, set the scope property on the lock.
-The following template applies a lock to the resource group it's deployed to. Notice there isn't a scope property on the lock resource because the scope of the lock matches the scope of deployment. This template is deployed at the resource group level.
+The following template applies a lock to the resource group it's deployed to. Notice there isn't a scope property on the lock resource because the lock scope matches the deployment scope. Deploy this template at the resource group level.
# [JSON](#tab/json)
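For example, a minimal JSON version of such a template might look like the following sketch; the lock name and notes are illustrative:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Authorization/locks",
      "apiVersion": "2016-09-01",
      "name": "rgLock",
      "properties": {
        "level": "CanNotDelete",
        "notes": "Resource group should not be deleted."
      }
    }
  ]
}
```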
resource createRgLock 'Microsoft.Authorization/locks@2016-09-01' = {
When applying a lock to a **resource** within the resource group, add the scope property. Set scope to the name of the resource to lock.
-The following example shows a template that creates an app service plan, a website, and a lock on the website. The scope of the lock is set to the website.
+The following example shows a template that creates an app service plan, a website, and a lock on the website. The lock's scope is set to the website.
# [JSON](#tab/json)
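For example, the lock portion of such a template might look like the following sketch, assuming a `siteName` parameter; the lock name and notes are illustrative:

```json
{
  "type": "Microsoft.Authorization/locks",
  "apiVersion": "2016-09-01",
  "name": "siteLock",
  "scope": "[format('Microsoft.Web/sites/{0}', parameters('siteName'))]",
  "dependsOn": [
    "[resourceId('Microsoft.Web/sites', parameters('siteName'))]"
  ],
  "properties": {
    "level": "CanNotDelete",
    "notes": "Site should not be deleted."
  }
}
```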
az lock delete --ids $lockid
### REST API
-You can lock deployed resources with the [REST API for management locks](/rest/api/resources/managementlocks). The REST API enables you to create and delete locks, and retrieve information about existing locks.
+You can lock deployed resources with the [REST API for management locks](/rest/api/resources/managementlocks). The REST API lets you create and delete locks and retrieve information about existing locks.
To create a lock, run:
PUT https://management.azure.com/{scope}/providers/Microsoft.Authorization/locks/{lock-name}?api-version={api-version} ```
-The scope could be a subscription, resource group, or resource. The lock-name is whatever you want to call the lock. For api-version, use **2016-09-01**.
+The scope could be a subscription, resource group, or resource. The lock name is whatever you want to call it. For the api-version, use **2016-09-01**.
-In the request, include a JSON object that specifies the properties for the lock.
+In the request, include a JSON object that specifies the lock properties.
```json
{
  "properties": {
    "level": "CanNotDelete",
    "notes": "Optional text that describes the lock."
  }
}
```
communication-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/reference.md
For each area, we have external pages to track and review our SDKs. You can cons
| Chat | [npm](https://www.npmjs.com/package/@azure/communication-chat) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Chat) | [PyPi](https://pypi.org/project/azure-communication-chat/) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | - |
| Common | [npm](https://www.npmjs.com/package/@azure/communication-common) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Common/) | N/A | [Maven](https://search.maven.org/search?q=a:azure-communication-common) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-common) | - |
| Identity | [npm](https://www.npmjs.com/package/@azure/communication-identity) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Identity) | [PyPi](https://pypi.org/project/azure-communication-identity/) | [Maven](https://search.maven.org/search?q=a:azure-communication-identity) | - | - | - |
-| Network Traversal | [npm](https://www.npmjs.com/package/@azure/communication-network-traversal) | [NuGet](https://www.nuget.org/packages/Azure.Communication.NetworkTraversal) | [PyPi]https://pypi.org/project/azure-communication-networktraversal/) | [Maven](https://search.maven.org/search?q=a:azure-communication-networktraversal) | - | - | - |
+| Network Traversal | [npm](https://www.npmjs.com/package/@azure/communication-network-traversal) | [NuGet](https://www.nuget.org/packages/Azure.Communication.NetworkTraversal) | [PyPi](https://pypi.org/project/azure-communication-networktraversal/) | [Maven](https://search.maven.org/search?q=a:azure-communication-networktraversal) | - | - | - |
| Phone numbers | [npm](https://www.npmjs.com/package/@azure/communication-phone-numbers) | [NuGet](https://www.nuget.org/packages/Azure.Communication.phonenumbers) | [PyPi](https://pypi.org/project/azure-communication-phonenumbers/) | [Maven](https://search.maven.org/search?q=a:azure-communication-phonenumbers) | - | - | - |
| Signaling | [npm](https://www.npmjs.com/package/@azure/communication-signaling) | - | | - | - | - | - |
| SMS | [npm](https://www.npmjs.com/package/@azure/communication-sms) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Sms) | [PyPi](https://pypi.org/project/azure-communication-sms/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-sms) | - | - | - |
communication-services Learn Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/learn-modules.md
+
+ Title: Microsoft Learn modules for Azure Communication Services
+description: Learn about the available Learn modules for Azure Communication Services.
+++++ Last updated : 06/30/2021+++
+# Learn modules
+
+If you're looking for guided experiences that teach you how to use Azure Communication Services, we have several Learn modules at your disposal. These modules provide a more structured learning experience with step-by-step guidance on particular topics. Check them out, and let us know what you think.
+
+- [Introduction to Communication Services](/learn/modules/intro-azure-communication-services/)
+- [Send an SMS message from a C# console application with Azure Communication Services](/learn/modules/communication-service-send-sms-console-app/)
+- [Create a voice calling web app with Azure Communication Services](/learn/modules/communication-services-voice-calling-web-app)
connectors Apis List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/apis-list.md
Title: Connectors overview for Azure Logic Apps
-description: Learn about connectors and how they help you quickly and easily build automated integration workflows using Azure Logic Apps.
+ Title: Overview about connectors in Azure Logic Apps
+description: Learn about connectors to create automated integration workflows in Azure Logic Apps.
ms.suite: integration Previously updated : 09/13/2021 Last updated : 05/10/2022 # About connectors in Azure Logic Apps
-When you build workflows using Azure Logic Apps, you can use *connectors* to help you quickly and easily access data, events, and resources in other apps, services, systems, protocols, and platforms - often without writing any code. A connector provides prebuilt operations that you can use as steps in your workflows. Azure Logic Apps provides hundreds of connectors that you can use. If no connector is available for the resource that you want to access, you can use the generic HTTP operation to communicate with the service, or you can [create a custom connector](#custom-apis-and-connectors).
+When you build workflows using Azure Logic Apps, you can use *connectors* to help you quickly and easily access data, events, and resources in other apps, services, systems, protocols, and platforms - often without writing any code. A connector provides prebuilt operations that you can use as steps in your workflows. Azure Logic Apps provides hundreds of connectors that you can use. If no connector is available for the resource that you want to access, you can use the generic HTTP operation to communicate with the service, or you can [create a custom connector](#custom-connectors-and-apis).
-This overview offers an introduction to connectors, how they generally work, and the more popular and commonly used connectors in Azure Logic Apps. For more information, review the following documentation:
+This overview provides a high-level introduction to connectors and how they generally work. For information about the more popular and commonly used connectors in Azure Logic Apps, review the following documentation:
-* [Connectors overview for Azure Logic Apps, Microsoft Power Automate, and Microsoft Power Apps](/connectors)
* [Connectors reference for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
+* [Built-in connectors for Azure Logic Apps](built-in.md)
+* [Managed connectors in Azure Logic Apps](managed.md)
* [Pricing and billing models in Azure Logic Apps](../logic-apps/logic-apps-pricing.md)
* [Azure Logic Apps pricing details](https://azure.microsoft.com/pricing/details/logic-apps/)

## What are connectors?
-Technically, a connector is a proxy or a wrapper around an API that the underlying service uses to communicate with Azure Logic Apps. This connector provides operations that you use in your workflows to perform tasks. An operation is available either as a *trigger* or *action* with properties you can configure. Some triggers and actions also require that you first [create and configure a connection](#connection-configuration) to the underlying service or system, for example, so that you can authenticate access to a user account.
+Technically, a connector is a proxy or a wrapper around an API that the underlying service uses to communicate with Azure Logic Apps. This connector provides operations that you use in your workflows to perform tasks. An operation is available either as a *trigger* or *action* with properties you can configure. Some triggers and actions also require that you first [create and configure a connection](#connection-configuration) to the underlying service or system, for example, so that you can authenticate access to a user account. For more overview information, review [Connectors overview for Azure Logic Apps, Microsoft Power Automate, and Microsoft Power Apps](/connectors).
### Triggers

A *trigger* specifies the event that starts the workflow and is always the first step in any workflow. Each trigger also follows a specific firing pattern that controls how the trigger monitors and responds to events. Usually, a trigger follows the *polling* pattern or *push* pattern, but sometimes, a trigger is available in both versions.

- *Polling triggers* regularly check a specific service or system on a specified schedule for new data or a specific event. If new data is available, or the specific event happens, these triggers create and run a new instance of your workflow. This new instance can then use the data that's passed as input.
+
- *Push triggers* listen for new data or for an event to happen, without polling. When new data is available, or when the event happens, these triggers create and run a new instance of your workflow. This new instance can then use the data that's passed as input.

For example, you might want to build a workflow that does something when a file is uploaded to your FTP server. As the first step in your workflow, you can use the FTP trigger named **When a file is added or modified**, which follows the polling pattern. You can then specify a schedule to regularly check for upload events.
A trigger also passes along any inputs and other required data into your workflo
### Actions
-An *action* is an operation that follows the trigger and performs some kind of task in your workflow. You can use multiple actions in your workflow. For example, you might start the workflow with a SQL trigger that detects new customer data in a SQL database. Following the trigger, your workflow can have a SQL action that gets the customer data. Following the SQL action, your workflow can have another action, not necessarily SQL, that processes the data.
+An *action* is an operation that follows the trigger and performs some kind of task in your workflow. You can use multiple actions in your workflow. For example, you might start the workflow with a SQL trigger that detects new customer data in an SQL database. Following the trigger, your workflow can have a SQL action that gets the customer data. Following the SQL action, your workflow can have another action, not necessarily SQL, that processes the data.
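In the underlying workflow definition, a managed connector action such as the SQL **Get rows** action appears as an `ApiConnection` operation. The following trimmed sketch assumes a connection named `sql` and an illustrative table name:

```json
"Get_rows": {
  "type": "ApiConnection",
  "inputs": {
    "host": {
      "connection": {
        "name": "@parameters('$connections')['sql']['connectionId']"
      }
    },
    "method": "get",
    "path": "/datasets/default/tables/@{encodeURIComponent('Customers')}/items"
  },
  "runAfter": {}
}
```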
## Connector categories
-In Azure Logic Apps, most triggers and actions are available in either a *built-in* version or *managed connector* version. A few triggers and actions are available in both versions. The versions available depend on whether you create a multi-tenant logic app or a single-tenant logic app, which is currently available only in [single-tenant Azure Logic Apps](../logic-apps/single-tenant-overview-compare.md).
+In Azure Logic Apps, most triggers and actions are available in either a *built-in* version or *managed connector* version. A few triggers and actions are available in both versions. The versions available depend on whether you create a *Consumption* logic app that runs in multi-tenant Azure Logic Apps, or a *Standard* logic app that runs in single-tenant Azure Logic Apps.
-[Built-in triggers and actions](built-in.md) run natively on the Logic Apps runtime, don't require creating connections, and perform these kinds of tasks:
+* [Built-in connectors](built-in.md) run natively on the Azure Logic Apps runtime.
-- [Run code in your workflows](built-in.md#run-code-from-workflows).
-- [Organize and control your data](built-in.md#control-workflow).
-- [Manage or manipulate data](built-in.md#manage-or-manipulate-data).
+* [Managed connectors](managed.md) are deployed, hosted, and managed by Microsoft. These connectors provide triggers and actions for cloud services, on-premises systems, or both.
-[Managed connectors](managed.md) are deployed, hosted, and managed by Microsoft. These connectors provide triggers and actions for cloud services, on-premises systems, or both. Managed connectors are available in these categories:
+ In a *Standard* logic app, all managed connectors are organized as **Azure** connectors. However, in a *Consumption* logic app, managed connectors are organized as **Standard** or **Enterprise**, based on pricing level.
-- [On-premises connectors](managed.md#on-premises-connectors) that help you access data and resources in on-premises systems.
-- [Enterprise connectors](managed.md#enterprise-connectors) that provide access to enterprise systems.
-- [Integration account connectors](managed.md#integration-account-connectors) that support business-to-business (B2B) communication scenarios.
-- [Integration service environment (ISE) connectors](managed.md#ise-connectors) that are a small group of [managed connectors available only for ISEs](#ise-and-connectors).
+For more information about logic app types, review [Resource types and host environment differences](../logic-apps/logic-apps-overview.md#resource-environment-differences).
<a name="connection-configuration"></a>

## Connection configuration
-To create or manage logic app resources and connections, you need certain permissions, which are provided through roles using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). You can assign built-in or customized roles to members who have access to your Azure subscription. Azure Logic Apps has these specific roles:
+In Consumption logic apps, before you can create or manage logic apps and their connections, you need specific permissions. For more information about these permissions, review [Secure operations - Secure access and data in Azure Logic Apps](../logic-apps/logic-apps-securing-a-logic-app.md#secure-operations).
+
+Before you can use a managed connector's triggers or actions in your workflow, many connectors require that you first create a *connection* to the target service or system. To create a connection from within the logic app workflow designer, you have to authenticate your identity with account credentials and sometimes other connection information. For example, before your workflow can access and work with your Office 365 Outlook email account, you must authorize a connection to that account. For some built-in connectors and managed connectors, you can [set up and use a managed identity for authentication](../logic-apps/create-managed-service-identity.md#triggers-actions-managed-identity), rather than provide your credentials.
-* [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor): Lets you manage logic apps, but you can't change access to them.
+Although you create connections within a workflow, these connections are actually separate Azure resources with their own resource definitions, as the sketch after the following steps shows. To review these connection resource definitions, follow these steps based on whether you have a Consumption or Standard logic app:
-* [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator): Lets you read, enable, and disable logic apps, but you can't edit or update them.
+* Consumption: To view these connections in the Azure portal, review [View connections for Consumption logic apps in the Azure portal](../logic-apps/manage-logic-apps-with-azure-portal.md#view-connections).
-* [Contributor](../role-based-access-control/built-in-roles.md#contributor): Grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries.
+ To view and manage these connections in Visual Studio, review [Manage Consumption logic apps with Visual Studio](../logic-apps/manage-logic-apps-with-visual-studio.md), and download your logic app from Azure into Visual Studio. For more information about connection resource definitions for Consumption logic apps, review [Connection resource definitions](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md#connection-resource-definitions).
- For example, suppose you have to work with a logic app that you didn't create and authenticate connections used by that logic app's workflow. Your Azure subscription requires Contributor permissions for the resource group that contains that logic app resource. If you create a logic app resource, you automatically have Contributor access.
+* Standard: To view these connections in the Azure portal, review [View connections for Standard logic apps in the Azure portal](../logic-apps/create-single-tenant-workflows-azure-portal.md#view-connections).
-Before you can use a connector's triggers or actions in your workflow, most connectors require that you first create a *connection* to the target service or system. To create a connection from within a logic app workflow, you have to authenticate your identity with account credentials and sometimes other connection information. For example, before your workflow can access and work with your Office 365 Outlook email account, you must authorize a connection to that account. For a small number of built-in operations and managed connectors, you can [set up and use a managed identity for authentication](../logic-apps/create-managed-service-identity.md#triggers-actions-managed-identity), rather than provide your credentials.
+ To view and manage these connections in Visual Studio Code, review [View your logic app in Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md#manage-deployed-apps-vs-code). The **connections.json** file contains the required configuration for the connections created by connectors.
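For example, a trimmed connection resource definition for a Consumption logic app might look like the following sketch, assuming an Office 365 Outlook connection; the display name and location parameter are illustrative:

```json
{
  "type": "Microsoft.Web/connections",
  "apiVersion": "2016-06-01",
  "name": "office365",
  "location": "[parameters('location')]",
  "properties": {
    "displayName": "my-office365-connection",
    "api": {
      "id": "[concat(subscription().id, '/providers/Microsoft.Web/locations/', parameters('location'), '/managedApis/office365')]"
    }
  }
}
```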
<a name="connection-security-encryption"></a>
Before you can use a connector's triggers or actions in your workflow, most conn
Connection configuration details, such as server address, username, password, and other credentials and secrets, are [encrypted and stored in the secured Azure environment](../security/fundamentals/encryption-overview.md). This information can be used only in logic app resources and by clients who have permissions for the connection resource, which is enforced using linked access checks. Connections that use Azure Active Directory Open Authentication (Azure AD OAuth), such as Office 365, Salesforce, and GitHub, require that you sign in, but Azure Logic Apps stores only access and refresh tokens as secrets, not sign-in credentials.
-Established connections can access the target service or system for as long as that service or system allows. For services that use Azure AD OAuth connections, such as Office 365 and Dynamics, the Logic Apps service refreshes access tokens indefinitely. Other services might have limits on how long Logic Apps can use a token without refreshing. Some actions, such as changing your password, invalidate all access tokens.
-
-Although you create connections from within a workflow, connections are separate Azure resources with their own resource definitions. To review these connection resource definitions, [download your logic app from Azure into Visual Studio](../logic-apps/manage-logic-apps-with-visual-studio.md). This method is the easiest way to create a valid, parameterized logic app template that's mostly ready for deployment.
+Established connections can access the target service or system for as long as that service or system allows. For services that use Azure AD OAuth connections, such as Office 365 and Dynamics, Azure Logic Apps refreshes access tokens indefinitely. Other services might have limits on how long Azure Logic Apps can use a token without refreshing. Some actions, such as changing your password, invalidate all access tokens.
> [!TIP]
-> If your organization doesn't permit you to access specific resources through Logic Apps connectors, you can [block the capability to create such connections](../logic-apps/block-connections-connectors.md) using [Azure Policy](../governance/policy/overview.md).
+> If your organization doesn't permit you to access specific resources through connectors in Azure Logic Apps, you can [block the capability to create such connections](../logic-apps/block-connections-connectors.md) using [Azure Policy](../governance/policy/overview.md).
For more information about securing logic apps and connections, review [Secure access and data in Azure Logic Apps](../logic-apps/logic-apps-securing-a-logic-app.md).
### Firewall access for connections
-If you use a firewall that limits traffic, and your logic app workflows need to communicate through that firewall, you have to set up your firewall to allow access for both the [inbound](../logic-apps/logic-apps-limits-and-config.md#inbound) and [outbound](../logic-apps/logic-apps-limits-and-config.md#outbound) IP addresses used by the Logic Apps service or runtime in the Azure region where your logic app workflows exist. If your workflows also use managed connectors, such as the Office 365 Outlook connector or SQL connector, or use custom connectors, your firewall also needs to allow access for *all* the [managed connector outbound IP addresses](/connectors/common/outbound-ip-addresses#azure-logic-apps) in your logic app's Azure region. For more information, review [Firewall configuration](../logic-apps/logic-apps-limits-and-config.md#firewall-configuration-ip-addresses-and-service-tags).
+If you use a firewall that limits traffic, and your logic app workflows need to communicate through that firewall, you have to set up your firewall to allow access for both the [inbound](../logic-apps/logic-apps-limits-and-config.md#inbound) and [outbound](../logic-apps/logic-apps-limits-and-config.md#outbound) IP addresses used by the Azure Logic Apps platform or runtime in the Azure region where your logic app workflows exist. If your workflows also use managed connectors, such as the Office 365 Outlook connector or SQL connector, or use custom connectors, your firewall also needs to allow access for *all* the [managed connector outbound IP addresses](/connectors/common/outbound-ip-addresses#azure-logic-apps) in your logic app's Azure region. For more information, review [Firewall configuration](../logic-apps/logic-apps-limits-and-config.md#firewall-configuration-ip-addresses-and-service-tags).
## Recurrence behavior
-Recurring built-in triggers, such as the [Recurrence trigger](connectors-native-recurrence.md), run natively on the Logic Apps runtime and differ from recurring connection-based triggers, such as the Office 365 Outlook connector trigger where you need to create a connection first.
+Recurring built-in triggers, such as the [Recurrence trigger](connectors-native-recurrence.md), run natively on the Azure Logic Apps runtime and differ from recurring connection-based triggers, such as the Office 365 Outlook connector trigger where you need to create a connection first.
For both kinds of triggers, if a recurrence doesn't specify a specific start date and time, the first recurrence runs immediately when you save or deploy the logic app, despite your trigger's recurrence setup. To avoid this behavior, provide a start date and time for when you want the first recurrence to run.
+Some managed connectors have both recurrence-based and webhook-based triggers, so if you use a recurrence-based trigger, review the [Recurrence behavior overview](apis-list.md#recurrence-behavior).
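For example, a minimal Recurrence trigger that sets an explicit start time and time zone might look like the following sketch; the schedule values are illustrative:

```json
"triggers": {
  "Recurrence": {
    "type": "Recurrence",
    "recurrence": {
      "frequency": "Week",
      "interval": 1,
      "startTime": "2022-05-23T09:00:00",
      "timeZone": "Pacific Standard Time"
    }
  }
}
```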
+
### Recurrence for built-in triggers

Recurring built-in triggers follow the schedule that you set, including any specified time zone. However, if a recurrence doesn't specify other advanced scheduling options, such as specific times to run future recurrences, those recurrences are based on the last trigger execution. As a result, the start times for those recurrences might drift due to factors such as latency during storage calls.
To make sure that your workflow runs at your specified start time and doesn't mi
* Consider using a [**Sliding Window** trigger](connectors-native-sliding-window.md) instead of a **Recurrence** trigger to avoid missed recurrences.
-## Custom APIs and connectors
+## Custom connectors and APIs
+
+In Consumption logic apps that run in multi-tenant Azure Logic Apps, you can call Swagger-based or SOAP-based APIs that aren't available as out-of-the-box connectors. You can also run custom code by creating custom API Apps. For more information, review the following documentation:
+
+* [Swagger-based or SOAP-based custom connectors for Consumption logic apps](../logic-apps/custom-connector-overview.md#custom-connector-consumption)
+
+* Create a [Swagger-based](/connectors/custom-connectors/define-openapi-definition) or [SOAP-based](/connectors/custom-connectors/create-register-logic-apps-soap-connector) custom connector, which makes these APIs available to any Consumption logic app in your Azure subscription. To make your custom connector public for anyone to use in Azure, [submit your connector for Microsoft certification](/connectors/custom-connectors/submit-certification).
+
+* [Create custom API Apps](../logic-apps/logic-apps-create-api-app.md)
+
+In Standard logic apps that run in single-tenant Azure Logic Apps, you can create natively running service provider-based custom built-in connectors that are available to any Standard logic app. For more information, review the following documentation:
+
+* [Service provider-based custom built-in connectors for Standard logic apps](../logic-apps/custom-connector-overview.md#custom-connector-standard)
-To call APIs that run custom code or aren't available as connectors, you can extend the Logic Apps platform by [creating custom API Apps](../logic-apps/logic-apps-create-api-app.md). You can also [create custom connectors](../logic-apps/custom-connector-overview.md) for any REST or SOAP-based APIs, which make those APIs available to any logic app in your Azure subscription. To make custom API Apps or connectors public for anyone to use in Azure, you can [submit connectors for Microsoft certification](/connectors/custom-connectors/submit-certification).
+* [Create service provider-based custom built-in connectors for Standard logic apps](../logic-apps/create-custom-built-in-connector-standard.md)
## ISE and connectors
For workflows that need direct access to resources in an Azure virtual network,
Custom connectors created within an ISE don't work with the on-premises data gateway. However, these connectors can directly access on-premises data sources that are connected to an Azure virtual network hosting the ISE. So, logic apps in an ISE most likely don't need the data gateway when communicating with those resources. If you have custom connectors that you created outside an ISE that require the on-premises data gateway, logic apps in an ISE can use those connectors.
-In the Logic Apps Designer, when you browse the built-in triggers and actions or managed connectors that you want to use for logic apps in an ISE, the **CORE** label appears on built-in triggers and actions, while the **ISE** label appears on managed connectors that are designed to work with an ISE.
+In the workflow designer, when you browse the built-in connectors or managed connectors that you want to use for logic apps in an ISE, the **CORE** label appears on built-in connectors, while the **ISE** label appears on managed connectors that are designed to work with an ISE.
:::row::: :::column:::
In the Logic Apps Designer, when you browse the built-in triggers and actions or
**CORE** \ \
- Built-in triggers and actions with this label run in the same ISE as your logic apps.
+ Built-in connectors with this label run in the same ISE as your logic apps.
:::column-end::: :::column::: ![Example ISE connector](./media/apis-list/example-ise-connector.png)
In the Logic Apps Designer, when you browse the built-in triggers and actions or
Managed connectors with this label run in the same ISE as your logic apps. \ \
- If you have an on-premises system that's connected to an Azure virtual network, an ISE lets your workflows directly access that system without using the [on-premises data gateway](../logic-apps/logic-apps-gateway-connection.md). Instead, you can either use that system's **ISE** connector if available, an HTTP action, or a [custom connector](#custom-apis-and-connectors).
+ If you have an on-premises system that's connected to an Azure virtual network, an ISE lets your workflows directly access that system without using the [on-premises data gateway](../logic-apps/logic-apps-gateway-connection.md). Instead, you can either use that system's **ISE** connector if available, an HTTP action, or a [custom connector](#custom-connectors-and-apis).
\ \ For on-premises systems that don't have **ISE** connectors, use the on-premises data gateway. To find available ISE connectors, review [ISE connectors](#ise-and-connectors).
connectors Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/built-in.md
Title: Built-in triggers and actions
-description: Use built-in triggers and actions to create automated workflows that integrate apps, data, services, and systems, to control workflows, and to manage data using Azure Logic Apps.
+ Title: Overview about built-in connectors in Azure Logic Apps
+description: Learn about built-in connectors that run natively to create automated integration workflows in Azure Logic Apps.
ms.suite: integration Previously updated : 04/15/2021 Last updated : 05/10/2022
-# Built-in triggers and actions in Azure Logic Apps
+# Built-in connectors in Azure Logic Apps
-[Built-in triggers and actions](apis-list.md) provide ways for you to [control your workflow's schedule and structure](#control-workflow), [run your own code](#run-code-from-workflows), [manage or manipulate data](#manage-or-manipulate-data), and complete other tasks in your workflows. Different from [managed connectors](managed.md), many built-in operations aren't tied to a specific service, system, or protocol. For example, you can start almost any workflow on a schedule by using the Recurrence trigger. Or, you can have your workflow wait until called by using the Request trigger. All built-in operations run natively in Azure Logic Apps, and most don't require that you create a connection before you use them.
+Built-in connectors provide ways for you to control your workflow's schedule and structure, run your own code, manage or manipulate data, and complete other tasks in your workflows. Different from managed connectors, some built-in connectors aren't tied to a specific service, system, or protocol. For example, you can start almost any workflow on a schedule by using the Recurrence trigger. Or, you can have your workflow wait until called by using the Request trigger. All built-in connectors run natively on the Azure Logic Apps runtime. Some don't require that you create a connection before you use them.
-For a smaller number of services, systems and protocols, Azure Logic Apps provides built-in operations, such as Azure API Management, Azure App Services, Azure Functions, and for calling other Azure Logic Apps logic app workflows. The number and range available vary based on whether you create a Consumption plan-based logic app resource that runs in multi-tenant Azure Logic Apps, or a Standard plan-based logic app resource that runs in single-tenant Azure Logic Apps. For more information, review [Single-tenant versus multi-tenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md). In most cases, the built-in version provides better performance, capabilities, pricing, and so on.
+For a smaller number of services, systems, and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app that runs in multi-tenant Azure Logic Apps, or a Standard logic app that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app type and not the other.
-For example, if you create a single-tenant logic app, both built-in operations and [managed connector operations](managed.md) are available for a few services, specifically Azure Blob, Azure Event Hubs, Azure Cosmos DB, Azure Service Bus, DB2, MQ, and SQL Server. In few cases, some built-in operations are available only for one logic app resource type. For example, Batch operations are currently available only for Consumption logic app workflows. In most cases, the built-in version provides better performance, capabilities, pricing, and so on.
+For example, a Standard logic app provides both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, and SQL Server, while a Consumption logic app doesn't have the built-in versions. A Consumption logic app provides built-in connectors for Azure API Management, Azure App Services, and Batch, while a Standard logic app doesn't have these built-in connectors. For more information, review the following documentation: [Managed connectors in Azure Logic Apps](managed.md) and [Single-tenant versus multi-tenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md).
-The following list describes only some of the tasks that you can accomplish with [built-in triggers and actions](#general-built-in-triggers-and-actions):
+This article provides a general overview about built-in connectors in Consumption logic apps versus Standard logic apps.
-- Run workflows using custom and advanced schedules. For more information about scheduling, review the [recurrence behavior section in the connector overview for Azure Logic Apps](apis-list.md#recurrence-behavior).
+<a name="built-in-operations-lists"></a>
-- Organize and control your workflow's structure, for example, using loops and conditions.
+## Built-in connectors in Consumption versus Standard
-- Work with variables, dates, data operations, content transformations, and batch operations.
+| Consumption | Standard |
+|-|-|
+| Azure API Management<br>Azure App Services <br>Azure Functions <br>Azure Logic Apps <br>Batch <br>Control <br>Data Operations <br>Date Time <br>Flat File <br>HTTP <br>Inline Code <br>Integration Account <br>Liquid <br>Request <br>Schedule <br>Variables <br>XML | Azure Blob <br>Azure Cosmos DB <br>Azure Functions <br>Azure Table Storage <br>Control <br>Data Operations <br>Date Time <br>DB2 <br>Event Hubs <br>Flat File <br>FTP <br>HTTP <br>IBM Host File <br>Inline Code <br>Liquid operations <br>MQ <br>Request <br>Schedule <br>Service Bus <br>SFTP <br>SQL Server <br>Variables <br>Workflow operations <br>XML operations |
+|||
-- Communicate with other endpoints using HTTP triggers and actions.
+<a name="custom-built-in"></a>
-- Receive and respond to requests.
+## Custom built-in connectors
-- Call your own functions (Azure Functions), web apps (Azure App Services), APIs (Azure API Management), other Azure Logic Apps workflows that can receive requests, and so on.
+For Standard logic apps, if a built-in connector isn't available for your scenario, you can create your own built-in connector. You can use the same [*service provider interface implementation*](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation) that's used by service provider-based built-in connectors, such as SQL Server, Service Bus, Blob Storage, and Event Hubs. This interface implementation is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and provides the capability for you to create custom built-in connectors that anyone can use in Standard logic apps.
-## General built-in triggers and actions
+For more information, review the following documentation:
-Azure Logic Apps provides the following built-in triggers and actions:
+* [Custom connectors for Standard logic apps](../logic-apps/custom-connector-overview.md#custom-connector-standard)
+* [Create custom built-in connectors for Standard logic apps](../logic-apps/create-custom-built-in-connector-standard.md)
+
+<a name="general-built-in"></a>
+
+## General built-in connectors
+
+You can use the following built-in connectors to perform general tasks, for example:
+
+* Run workflows using custom and advanced schedules. For more information about scheduling, review the [Recurrence behavior in the connector overview for Azure Logic Apps](apis-list.md#recurrence-behavior).
+
+* Organize and control your workflow's structure, for example, using loops and conditions.
+
+* Work with variables, dates, data operations, content transformations, and batch operations.
+
+* Communicate with other endpoints using HTTP triggers and actions, as the sketch after this list shows.
+
+* Receive and respond to requests.
+
+* Call your own functions (Azure Functions) or other Azure Logic Apps workflows that can receive requests, and so on.
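For example, a minimal HTTP action in the underlying workflow definition might look like the following sketch; the method and URI are illustrative:

```json
"actions": {
  "HTTP": {
    "type": "Http",
    "inputs": {
      "method": "GET",
      "uri": "https://example.com/api/status"
    },
    "runAfter": {}
  }
}
```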
:::row::: :::column:::
Azure Logic Apps provides the following built-in triggers and actions:
[**Recurrence**][schedule-recurrence-doc]: Trigger a workflow based on the specified recurrence. \ \
- [**Sliding Window**][schedule-sliding-window-doc]: Trigger a workflow that needs to handle data in continuous chunks.
+ [**Sliding Window**][schedule-sliding-window-doc]<br>(*Consumption logic app only*): <br>Trigger a workflow that needs to handle data in continuous chunks.
\ \ [**Delay**][schedule-delay-doc]: Pause your workflow for the specified duration.
Azure Logic Apps provides the following built-in triggers and actions:
:::column-end::: :::row-end:::
-## Service-based built-in trigger and actions
+<a name="service-built-in"></a>
+
+## Service-based built-in connectors
-Azure Logic Apps provides the following built-in actions for the following
+Some services provide both a built-in connector and a managed connector, and capabilities might differ across these versions.
:::row::: :::column::: [![Azure API Management icon][azure-api-management-icon]][azure-api-management-doc] \ \
- [**Azure API Management**][azure-api-management-doc]
+ [**Azure API Management**][azure-api-management-doc]<br>(*Consumption logic app only*)
\ \ Call your own triggers and actions in APIs that you define, manage, and publish using [Azure API Management](../api-management/api-management-key-concepts.md). <p><p>**Note**: Not supported when using [Consumption tier for API Management](../api-management/api-management-features.md).
Azure Logic Apps provides the following built-in actions for the following servi
[![Azure App Services icon][azure-app-services-icon]][azure-app-services-doc] \ \
- [**Azure App Services**][azure-app-services-doc]
+ [**Azure App Services**][azure-app-services-doc]<br>(*Consumption logic app only*)
\ \ Call apps that you create and host on [Azure App Service](../app-service/overview.md), for example, API Apps and Web Apps.
Azure Logic Apps provides the following built-in actions for the following servi
\ Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents. :::column-end::: :::column::: [![Azure Functions icon][azure-functions-icon]][azure-functions-doc] \
Azure Logic Apps provides the following built-in actions for the following servi
\ Call [Azure-hosted functions](../azure-functions/functions-overview.md) to run your own *code snippets* (C# or Node.js) within your workflow. :::column-end::: :::column::: [![Azure Logic Apps icon][azure-logic-apps-icon]][nested-logic-app-doc] \ \
- [**Azure Logic Apps**][nested-logic-app-doc]
+ [**Azure Logic Apps**][nested-logic-app-doc]<br>(*Consumption logic app*) <br><br>-or-<br><br>[**Workflow operations**][nested-logic-app-doc]<br>(*Standard logic app*)
\ \ Call other workflows that start with the Request trigger named **When a HTTP request is received**.
Azure Logic Apps provides the following built-in actions for the following servi
Manage asynchronous messages, queues, sessions, topics, and topic subscriptions. :::column-end::: :::column:::
- ![Azure Table Storage icon][azure-table-storage-icon]
+ [![Azure Table Storage icon][azure-table-storage-icon]][azure-table-storage-doc]
\ \
- **Azure Table Storage**<br>(*Standard logic app only*)
+ [**Azure Table Storage**][azure-table-storage-doc]<br>(*Standard logic app only*)
\ \
- Connect to your Azure Table Storage account so you can create and manage tables.
+ Connect to your Azure Storage account so that you can create, update, query, and manage tables.
+ :::column-end:::
+ :::column:::
+ [![Azure Event Hubs icon][azure-event-hubs-icon]][azure-event-hubs-doc]
+ \
+ \
+ [**Event Hubs**][azure-event-hubs-doc]<br>(*Standard logic app only*)
+ \
+ \
+ Consume and publish events through an event hub. For example, get output from your logic app with Event Hubs, and then send that output to a real-time analytics provider.
:::column-end::: :::column::: [![IBM DB2 icon][ibm-db2-icon]][ibm-db2-doc] \
Azure Logic Apps provides the following built-in actions for the following servi
\ Connect to IBM DB2 in the cloud or on-premises. Update a row, get a table, and more. :::column-end::: :::column:::
- [![Azure Event Hubs icon][azure-event-hubs-icon]][azure-event-hubs-doc]
+ ![IBM Host File icon][ibm-host-file-icon]
\ \
- [**Event Hubs**][azure-event-hubs-doc]<br>(*Standard logic app only*)
+ **IBM Host File**<br>(*Standard logic app only*)
\ \
- Consume and publish events through an event hub. For example, get output from your logic app with Event Hubs, and then send that output to a real-time analytics provider.
+ Connect to IBM Host File and generate or parse contents.
:::column-end::: :::column::: [![IBM MQ icon][ibm-mq-icon]][ibm-mq-doc]
Azure Logic Apps provides the following built-in actions for the following servi
[**SQL Server**][sql-server-doc]<br>(*Standard logic app only*) \ \
- Connect to your SQL Server on premises or an Azure SQL Database in the cloud so that you can manage records, run stored procedures, or perform queries. <p>**Note**: Single-tenant Azure Logic Apps provides both SQL built-in and managed connector operations, while multi-tenant Azure Logic Apps provides only managed connector operations. <p>For more information, review [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](../logic-apps/single-tenant-overview-compare.md).
+ Connect to your SQL Server on premises or an Azure SQL Database in the cloud so that you can manage records, run stored procedures, or perform queries.
+ :::column-end:::
+ :::column:::
+ :::column-end:::
+ :::column:::
:::column-end::: :::row-end:::
Azure Logic Apps provides the following built-in actions for working with data o
:::column-end::: :::row-end:::
-## Integration account built-in actions
+<a name="integration-account-built-in"></a>
+
+## Integration account built-in connectors
+
+Integration account operations specifically support business-to-business (B2B) communication scenarios in Azure Logic Apps. After you create an integration account and define your B2B artifacts, such as trading partners, agreements, maps, and schemas, you can use integration account built-in actions to encode and decode messages, transform content, and more.
+
+* Consumption logic apps
+
+ Before you use any integration account operations in a Consumption logic app, you have to [link your logic app to your integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md).
+
+* Standard logic apps
+
+ Integration account operations don't require that you link your logic app to your integration account. Instead, you create a connection to your integration account when you add the operation to your Standard logic app workflow. The built-in Liquid operations and XML operations don't even need an integration account. However, you have to upload Liquid maps, XML maps, or XML schemas through the respective operations in the Azure portal, or add these files to your Visual Studio Code project's **Artifacts** folder using the respective **Maps** and **Schemas** folders.
-Azure Logic Apps provides the following built-in actions, which either require an integration account when using multi-tenant, Consumption plan-based Azure Logic Apps or don't require an integration account when using single-tenant, Standard plan-based Azure Logic Apps:
+For more information, review the following documentation:
-> [!NOTE]
-> Before you can use integration account action in multi-tenant, Consumption plan-based Azure Logic Apps, you must
-> [link your logic app resource to an integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md).
-> However, in single-tenant, Standard plan-based Azure Logic Apps, some integration account operations don't require linking your
-> logic app resource to an integration account, for example, Liquid operations and XML operations. To use these actions, you need
-> to have Liquid maps, XML maps, or XML schemas that you can upload through the respective actions in the Azure portal or add to
-> your Visual Studio Code project's **Artifacts** folder using the respective **Maps** and **Schemas** folders.
+* [Business-to-business (B2B) enterprise integration workflows](../logic-apps/logic-apps-enterprise-integration-overview.md)
+* [Create and manage integration accounts for B2B workflows](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md)
:::row::: :::column:::
Azure Logic Apps provides the following built-in actions, which either require a
[![Integration account icon][integration-account-icon]][integration-account-doc] \ \
- [**Integration Account Artifact Lookup**<br>(*Multi-tenant only*)][integration-account-doc]
+ [**Integration Account Artifact Lookup**][integration-account-doc]<br>(*Consumption logic app only*)
\ \ Get custom metadata for artifacts, such as trading partners, agreements, schemas, and so on, in your integration account.
Azure Logic Apps provides the following built-in actions, which either require a
[http-swagger-icon]: ./media/apis-list/http-swagger.png [http-webhook-icon]: ./media/apis-list/http-webhook.png [ibm-db2-icon]: ./media/apis-list/ibm-db2.png
+[ibm-host-file-icon]: ./media/apis-list/ibm-host-file.png
[ibm-mq-icon]: ./media/apis-list/ibm-mq.png [inline-code-icon]: ./media/apis-list/inline-code.png [schedule-icon]: ./media/apis-list/recurrence.png
Azure Logic Apps provides the following built-in actions, which either require a
[azure-event-hubs-doc]: ./connectors-create-api-azure-event-hubs.md "Connect to Azure Event Hubs so that you can receive and send events between logic apps and Event Hubs" [azure-functions-doc]: ../logic-apps/logic-apps-azure-functions.md "Integrate logic apps with Azure Functions" [azure-service-bus-doc]: ./connectors-create-api-servicebus.md "Manage messages from Service Bus queues, topics, and topic subscriptions"
+[azure-table-storage-doc]: /connectors/azuretables/ "Connect to your Azure Storage account so that you can create, update, and query tables and more"
[batch-doc]: ../logic-apps/logic-apps-batch-process-send-receive-messages.md "Process messages in groups, or as batches" [condition-doc]: ../logic-apps/logic-apps-control-flow-conditional-statement.md "Evaluate a condition and run different actions based on whether the condition is true or false" [data-operations-doc]: ../logic-apps/logic-apps-perform-data-operations.md "Perform data operations such as filtering arrays or creating CSV and HTML tables"
Azure Logic Apps provides the following built-in actions, which either require a
[schedule-sliding-window-doc]: ./connectors-native-sliding-window.md "Run logic apps that need to handle data in contiguous chunks" [scope-doc]: ../logic-apps/logic-apps-control-flow-run-steps-group-scopes.md "Organize actions into groups, which get their own status after the actions in group finish running" [sftp-ssh-doc]: ./connectors-sftp-ssh.md "Connect to your SFTP account by using SSH. Upload, get, delete files, and more"
-[sql-server-doc]: ./connectors-create-api-sqlazure.md "Connect to Azure SQL Database or SQL Server. Create, update, get, and delete entries in a SQL database table"
+[sql-server-doc]: ./connectors-create-api-sqlazure.md "Connect to Azure SQL Database or SQL Server. Create, update, get, and delete entries in an SQL database table"
[switch-doc]: ../logic-apps/logic-apps-control-flow-switch-statement.md "Organize actions into cases, which are assigned unique values. Run only the case whose value matches the result from an expression, object, or token. If no matches exist, run the default case" [terminate-doc]: ../logic-apps/logic-apps-workflow-actions-triggers.md#terminate-action "Stop or cancel an actively running workflow for your logic app" [until-doc]: ../logic-apps/logic-apps-control-flow-loops.md#until-loop "Repeat actions until the specified condition is true or some state has changed"
connectors Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/managed.md
Title: Managed connector operations
-description: Use Microsoft-managed triggers and actions to create automated workflows that integrate other apps, data, services, and systems using Azure Logic Apps.
+ Title: Overview about managed connectors in Azure Logic Apps
+description: Learn about Microsoft-managed connectors to create automated integration workflows in Azure Logic Apps.
ms.suite: integration Previously updated : 05/16/2021 Last updated : 05/10/2022 # Managed connectors in Azure Logic Apps
-[Managed connectors](apis-list.md) provide ways for you to access other services and systems where [built-in triggers and actions](built-in.md) aren't available. You can use these triggers and actions to create workflows that integrate data, apps, cloud-based services, and on-premises systems. Compared to built-in triggers and actions, these connectors are usually tied to a specific service or system such as Azure Blob Storage, Office 365, SQL, Salesforce, or SFTP servers. Managed by Microsoft and hosted in Azure, managed connectors usually require that you first create a connection from your workflow and authenticate your identity. Both recurrence-based and webhook-based triggers are available, so if you use a recurrence-based trigger, review the [Recurrence behavior overview](apis-list.md#recurrence-behavior).
+Managed connectors provide ways for you to access other services and systems where built-in connectors aren't available. You can use these triggers and actions to create workflows that integrate data, apps, cloud-based services, and on-premises systems. Different from built-in connectors, managed connectors are usually tied to a specific service or system, such as Office 365, SharePoint, Azure Key Vault, Salesforce, and Azure Automation. Managed by Microsoft and hosted in Azure, managed connectors usually require that you first create a connection from your workflow and authenticate your identity.
-For a small number of services, systems and protocols, Azure Logic Apps provides built-in operations along with their [managed connector versions](managed.md). The number and range available vary based on whether you create a Consumption plan-based logic app resource that runs in multi-tenant Azure Logic Apps, or a Standard plan-based logic app resource that runs in single-tenant Azure Logic Apps. For more information, review [Single-tenant versus multi-tenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md). In most cases, the built-in version provides better performance, capabilities, pricing, and so on.
+For a smaller number of services, systems, and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app that runs in multi-tenant Azure Logic Apps, or a Standard logic app that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app type, and not the other.
-For example, if you create a single-tenant logic app, built-in operations are available for Azure Service Bus, Azure Event Hubs, SQL Server, and MQ. In a few cases, both a built-in version and a managed connector version are available. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. If you create a multi-tenant logic app, built-in operations are available for Azure Functions, Azure App Services, and Azure API Management.
+For example, a Standard logic app provides both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, and SQL Server, while a Consumption logic app doesn't have the built-in versions. A Consumption logic app provides built-in connectors for Azure API Management, Azure App Services, and Batch, while a Standard logic app doesn't have these built-in connectors. For more information, review the following documentation: [Built-in connectors in Azure Logic Apps](built-in.md) and [Single-tenant versus multi-tenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md).
-Some managed connectors in Azure Logic Apps belong to multiple sub-categories. For example, the SAP connector is both an [enterprise connector](#enterprise-connectors) and an [on-premises connector](#on-premises-connectors).
+This article provides a general overview about managed connectors and how they're organized in Consumption logic apps versus Standard logic apps with examples. For technical reference information about each managed connector in Azure Logic Apps, review [Connectors reference for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors).
+
+## Managed connector categories
+
+In a *Standard* logic app, all managed connectors are organized into the **Azure** group. In a *Consumption* logic app, managed connectors are organized into the **Standard** group or **Enterprise** group. However, pricing for managed connectors works the same in both Standard and Consumption logic apps. For more information, review [Trigger and action operations in the Consumption model](../logic-apps/logic-apps-pricing.md#consumption-operations) and [Trigger and action operations in the Standard model](../logic-apps/logic-apps-pricing.md#standard-operations).
* [Standard connectors](#standard-connectors) provide access to services such as Azure Blob Storage, Office 365, SharePoint, Salesforce, Power BI, OneDrive, and many more.
-* [Enterprise connectors](#enterprise-connectors) provide access to enterprise systems, such as SAP, IBM MQ, and IBM 3270.
+
+* [Enterprise connectors](#enterprise-connectors) provide access to enterprise systems, such as SAP, IBM MQ, and IBM 3270 for an additional cost.
+
+Some managed connectors also belong to the following informal groups:
+ * [On-premises connectors](#on-premises-connectors) provide access to on-premises systems such as SQL Server, SharePoint Server, SAP, Oracle DB, file shares, and others.
+
+ * [Integration account connectors](#integration-account-connectors) help you transform and validate XML, encode and decode flat files, and process business-to-business (B2B) messages using AS2, EDIFACT, and X12 protocols.
-* [Integration service environment connectors](#ise-connectors) and are designed to run specifically in an ISE and offer benefits over their non-ISE versions.
+
+* [Integration service environment connectors](#ise-connectors) are designed to run specifically in an ISE and provide benefits over their non-ISE versions.
+
+<a name="standard-connectors"></a>
## Standard connectors
-Azure Logic Apps provides these popular Standard connectors for building automated workflows using these services and systems. Some Standard connectors also support [on-premises systems](#on-premises-connectors) or [integration accounts](#integration-account-connectors).
+For a *Consumption* logic app, this section lists *some* of the popular connectors in the **Standard** group. In a *Standard* logic app, all managed connectors are in the **Azure** group, but pricing works the same as in Consumption logic apps. For more information, review [Trigger and action operations in the Standard model](../logic-apps/logic-apps-pricing.md#standard-operations).
:::row::: :::column:::
- [![Azure Service Bus icon][azure-service-bus-icon]][azure-service-bus-doc]
+ [![Azure Blob Storage icon][azure-blob-storage-icon]][azure-blob-storage-doc]
\ \
- [**Azure Service Bus**][azure-service-bus-doc]
+ [**Azure Blob Storage**][azure-blob-storage-doc]
\ \
- Manage asynchronous messages, sessions, and topic subscriptions with the most commonly used connector in Logic Apps.
+ Connect to your Azure Storage account so that you can create and manage blob content.
:::column-end::: :::column:::
- [![SQL Server icon][sql-server-icon]][sql-server-doc]
+ [![Azure Event Hubs icon][azure-event-hubs-icon]][azure-event-hubs-doc]
\ \
- [**SQL Server**][sql-server-doc]
+ [**Azure Event Hubs**][azure-event-hubs-doc]
\ \
- Connect to your SQL Server on premises or an Azure SQL Database in the cloud so that you can manage records, run stored procedures, or perform queries.
+ Consume and publish events through an event hub. For example, get output from your logic app with Event Hubs, and then send that output to a real-time analytics provider.
:::column-end::: :::column:::
- [![Azure Blog Storage icon][azure-blob-storage-icon]][azure-blob-storage-doc]
+ [![Azure Queues icon][azure-queues-icon]][azure-queues-doc]
\ \
- [**Azure Blob Storage**][azure-blob-storage-doc]
+ [**Azure Queues**][azure-queues-doc]
\ \
- Connect to your Azure Storage account so that you can create and manage blob content.
+ Connect to your Azure Storage account so that you can create and manage queues and messages.
:::column-end::: :::column:::
- [![Office 365 Outlook icon][office-365-outlook-icon]][office-365-outlook-doc]
+ [![Azure Service Bus icon][azure-service-bus-icon]][azure-service-bus-doc]
\ \
- [**Office 365 Outlook**][office-365-outlook-doc]
+ [**Azure Service Bus**][azure-service-bus-doc]
\ \
- Connect to your work or school email account so that you can create and manage emails, tasks, calendar events and meetings, contacts, requests, and more.
+ Manage asynchronous messages, sessions, and topic subscriptions with the most commonly used connector in Logic Apps.
:::column-end::: :::row-end::: :::row::: :::column:::
- [![STFP-SSH icon][sftp-ssh-icon]][sftp-ssh-doc]
+ [![Azure Table Storage icon][azure-table-storage-icon]][azure-table-storage-doc]
\ \
- [**STFP-SSH**][sftp-ssh-doc]
+ [**Azure Table Storage**][azure-table-storage-doc]
\ \
- Connect to SFTP servers that you can access from the internet by using SSH so that you can work with your files and folders.
+ Connect to your Azure Storage account so that you can create, update, query, and manage tables.
:::column-end::: :::column:::
- [![SharePoint Online icon][sharepoint-online-icon]][sharepoint-online-doc]
+ [![File System icon][file-system-icon]][file-system-doc]
\ \
- [**SharePoint Online**][sharepoint-online-doc]
+ [**File System**][file-system-doc]
\ \
- Connect to SharePoint Online so that you can manage files, attachments, folders, and more.
+ Connect to your on-premises file share so that you can create and manage files.
:::column-end::: :::column:::
- [![Azure Queues icon][azure-queues-icon]][azure-queues-doc]
+ [![FTP icon][ftp-icon]][ftp-doc]
\ \
- [**Azure Queues**][azure-queues-doc]
+ [**FTP**][ftp-doc]
\ \
- Connect to your Azure Storage account so that you can create and manage queues and messages.
+ Connect to FTP servers you can access from the internet so that you can work with your files and folders.
:::column-end::: :::column:::
- [![FTP icon][ftp-icon]][ftp-doc]
+ [![Office 365 Outlook icon][office-365-outlook-icon]][office-365-outlook-doc]
\ \
- [**FTP**][ftp-doc]
+ [**Office 365 Outlook**][office-365-outlook-doc]
\ \
- Connect to FTP servers you can access from the internet so that you can work with your files and folders.
+ Connect to your work or school email account so that you can create and manage emails, tasks, calendar events and meetings, contacts, requests, and more.
:::column-end::: :::row-end::: :::row::: :::column:::
- [![File System icon][file-system-icon]][file-system-doc]
+ [![Salesforce icon][salesforce-icon]][salesforce-doc]
\ \
- [**File System**][file-system-doc]
+ [**Salesforce**][salesforce-doc]
\ \
- Connect to your on-premises file share so that you can create and manage files.
+ Connect to your Salesforce account so that you can create and manage items such as records, jobs, objects, and more.
:::column-end::: :::column:::
- [![Azure Event Hubs icon][azure-event-hubs-icon]][azure-event-hubs-doc]
+ [![SharePoint Online icon][sharepoint-online-icon]][sharepoint-online-doc]
\ \
- [**Azure Event Hubs**][azure-event-hubs-doc]
+ [**SharePoint Online**][sharepoint-online-doc]
\ \
- Consume and publish events through an Event Hub. For example, get output from your logic app with Event Hubs, and then send that output to a real-time analytics provider.
+ Connect to SharePoint Online so that you can manage files, attachments, folders, and more.
:::column-end::: :::column:::
- [![Azure Event Grid icon][azure-event-grid-icon]][azure-event-grid-doc]
+ [![SFTP-SSH icon][sftp-ssh-icon]][sftp-ssh-doc]
\ \
- [**Azure Event Grid**][azure-event-grid-doc]
+ [**SFTP-SSH**][sftp-ssh-doc]
\ \
- Monitor events published by an Event Grid, for example, when Azure resources or third-party resources change.
+ Connect to SFTP servers that you can access from the internet by using SSH so that you can work with your files and folders.
:::column-end::: :::column:::
- [![Salesforce icon][salesforce-icon]][salesforce-doc]
+ [![SQL Server icon][sql-server-icon]][sql-server-doc]
\ \
- [**Salesforce**][salesforce-doc]
+ [**SQL Server**][sql-server-doc]
\ \
- Connect to your Salesforce account so that you can create and manage items such as records, jobs, objects, and more.
+ Connect to your SQL Server on premises or an Azure SQL Database in the cloud so that you can manage records, run stored procedures, or perform queries.
+ :::column-end:::
+
+<a name="enterprise-connectors"></a>
+
+## Enterprise connectors
+
+For a *Consumption* logic app, this section lists connectors in the **Enterprise** group, which can access enterprise systems for an additional cost. In a *Standard* logic app, all managed connectors are in the **Azure** group, but pricing is the same as for Consumption logic apps. For more information, review [Trigger and action operations in the Standard model](../logic-apps/logic-apps-pricing.md#standard-operations).
+
+ :::column:::
+ [![IBM 3270 icon][ibm-3270-icon]][ibm-3270-doc]
+ \
+ \
+ [**IBM 3270**][ibm-3270-doc]
+ :::column-end:::
+ :::column:::
+ [![IBM MQ icon][ibm-mq-icon]][ibm-mq-doc]
+ \
+ \
+ [**IBM MQ**][ibm-mq-doc]
+ :::column-end:::
+ :::column:::
+ [![SAP icon][sap-icon]][sap-connector-doc]
+ \
+ \
+ [**SAP**][sap-connector-doc]
+ :::column-end:::
+ :::column:::
:::column-end::: :::row-end:::
+<a name="on-premises-connectors"></a>
+
## On-premises connectors

Before you can create a connection to an on-premises system, you must first [download, install, and set up an on-premises data gateway][gateway-doc]. This gateway provides a secure communication channel without having to set up the necessary network infrastructure.
-The following connectors are some commonly used [Standard connectors](#standard-connectors) that Azure Logic Apps provides for accessing data and resources in on-premises systems. For the on-premises connectors list, see [Supported data sources](../logic-apps/logic-apps-gateway-connection.md#supported-connections).
+For a *Consumption* logic app, this section lists example [Standard connectors](#standard-connectors) that can access on-premises systems. For the expanded on-premises connectors list, review [Supported data sources](../logic-apps/logic-apps-gateway-connection.md#supported-connections).
:::row:::
+ :::column:::
+ [![Apache Impala][apache-impala-icon]][apache-impala-doc]
+ \
+ \
+ [**Apache Impala**][apache-impala-doc]
+ :::column-end:::
:::column::: [![Biztalk Server icon][biztalk-server-icon]][biztalk-server-doc] \
The following connectors are some commonly used [Standard connectors](#standard-
\ [**IBM Informix**][ibm-informix-doc] :::column-end::: :::column::: [![MySQL icon][mysql-icon]][mysql-doc] \ \ [**MySQL**][mysql-doc] :::column-end::: :::column::: [![Oracle DB icon][oracle-db-icon]][oracle-db-doc] \
The following connectors are some commonly used [Standard connectors](#standard-
\ [**PostgreSQL**][postgre-sql-doc] :::column-end:::
+ :::column:::
+ [![SAP icon][sap-icon]][sap-connector-doc]
+ \
+ \
+ [**SAP**][sap-connector-doc]
+ :::column-end:::
:::column::: [![SharePoint Server icon][sharepoint-server-icon]][sharepoint-server-doc] \ \ [**SharePoint Server**][sharepoint-server-doc] :::column-end::: :::column::: [![SQL Server icon][sql-server-icon]][sql-server-doc] \
The following connectors are some commonly used [Standard connectors](#standard-
\ [**Teradata**][teradata-doc] :::column-end:::
- :::column:::
- :::column-end:::
- :::column:::
- :::column-end:::
:::row-end:::

<a name="integration-account-connectors"></a>

## Integration account connectors
-Integration account connectors specifically support [business-to-business (B2B) communication scenarios](../logic-apps/logic-apps-enterprise-integration-overview.md) in Azure Logic Apps. After you [create an integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md) and define your B2B artifacts, such as trading partners, agreements, maps, and schemas, you can use integration account connectors to encode and decode messages, transform content, and more.
+Integration account operations specifically support business-to-business (B2B) communication scenarios in Azure Logic Apps. After you create an integration account and define your B2B artifacts, such as trading partners, agreements, maps, and schemas, you can use integration account connectors to encode and decode messages, transform content, and more.
-For example, if you use Microsoft BizTalk Server, you can create a connection from your workflow using the [BizTalk Server on-premises connector](#on-premises-connectors). You can then extend or perform BizTalk-like operations in your workflow by using these integration account connectors.
+For example, if you use Microsoft BizTalk Server, you can create a connection from your workflow using the [on-premises BizTalk Server connector](/connectors/biztalk/). You can then extend or perform BizTalk-like operations in your workflow by using these integration account connectors.
-> [!NOTE]
-> Before you can use integration account connectors in multi-tenant, Consumption plan-based Azure Logic Apps, you must
-> [link your logic app resource to an integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md).
+* Consumption logic apps
+
+ Before you use any integration account operations in a Consumption logic app, you have to [link your logic app to your integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md).
+
+* Standard logic apps
+
+ Integration account operations don't require that you link your logic app to your integration account. Instead, you create a connection to your integration account when you add the operation to your Standard logic app workflow.
+
+For more information, review the following documentation:
+
+* [Business-to-business (B2B) enterprise integration workflows](../logic-apps/logic-apps-enterprise-integration-overview.md)
+* [Create and manage integration accounts for B2B workflows](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md)
:::row::: :::column:::
For example, if you use Microsoft BizTalk Server, you can create a connection fr
:::column-end::: :::row-end:::
-## Enterprise connectors
-
-The following connectors provide access to enterprise systems for an additional cost:
-
- :::column:::
- [![IBM 3270 icon][ibm-3270-icon]][ibm-3270-doc]
- \
- \
- [**IBM 3270**][ibm-3270-doc]
- :::column-end:::
- :::column:::
- [![IBM MQ icon][ibm-mq-icon]][ibm-mq-doc]
- \
- \
- [**IBM MQ**][ibm-mq-doc]
- :::column-end:::
- :::column:::
- [![SAP icon][sap-icon]][sap-connector-doc]
- \
- \
- [**SAP**][sap-connector-doc]
- :::column-end:::
- :::column:::
- :::column-end:::
-
## ISE connectors

In an integration service environment (ISE), these managed connectors also have [ISE versions](apis-list.md#ise-and-connectors), which have different capabilities than their multi-tenant versions:
For more information, see these topics:
> [Create custom APIs you can call from Logic Apps](../logic-apps/logic-apps-create-api-app.md) <!--Managed connector icons-->
+[apache-impala-icon]: ./media/apis-list/apache-impala.png
[appfigures-icon]: ./media/apis-list/appfigures.png [asana-icon]: ./media/apis-list/asana.png [azure-automation-icon]: ./media/apis-list/azure-automation.png
For more information, see these topics:
[youtube-icon]: ./media/apis-list/youtube.png <!--Managed connector doc links-->
+[apache-impala-doc]: /connectors/azureimpala/ "Connect to your Impala database to read data from tables"
[azure-automation-doc]: /connectors/azureautomation/ "Create and manage automation jobs for your cloud and on-premises infrastructure" [azure-blob-storage-doc]: ./connectors-create-api-azureblobstorage.md "Manage files in your blob container with Azure blob storage connector" [azure-cosmos-db-doc]: ./connectors-create-api-cosmos-db.md "Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents"
For more information, see these topics:
[slack-doc]: ./connectors-create-api-slack.md "Connect to Slack and post messages to Slack channels" [smtp-doc]: ./connectors-create-api-smtp.md "Connect to an SMTP server and send email with attachments" [sparkpost-doc]: ./connectors-create-api-sparkpost.md "Connects to SparkPost for communication"
-[sql-server-doc]: ./connectors-create-api-sqlazure.md "Connect to Azure SQL Database or SQL Server. Create, update, get, and delete entries in a SQL database table"
+[sql-server-doc]: ./connectors-create-api-sqlazure.md "Connect to Azure SQL Database or SQL Server. Create, update, get, and delete entries in an SQL database table"
[teradata-doc]: /connectors/teradata/ "Connect to your Teradata database to read data from tables" [twilio-doc]: ./connectors-create-api-twilio.md "Connect to Twilio. Send and get messages, get available numbers, manage incoming phone numbers, and more" [youtube-doc]: ./connectors-create-api-youtube.md "Connect to YouTube. Manage your videos and channels"
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
Based on your needs, you can "plug in" certain Dapr component types like state s
# [YAML](#tab/yaml)
-When defining a Dapr component via YAML, you will pass your component manifest into the Azure CLI. For example, deploy a `pubsub.yaml` component using the following command:
+When defining a Dapr component via YAML, you will pass your component manifest into the Azure CLI. When configuring multiple components, you will need to create a separate YAML file and run the Azure CLI command for each component.
+
+For example, deploy a `pubsub.yaml` component using the following command:
```azurecli
-az containerapp env dapr-component set --name ENVIRONMENT_NAME --resource-group RESOURCE_GROUP_NAME --dapr-component-name pubsub--yaml "./pubsub.yaml"
+az containerapp env dapr-component set --name ENVIRONMENT_NAME --resource-group RESOURCE_GROUP_NAME --dapr-component-name pubsub --yaml "./pubsub.yaml"
``` The `pubsub.yaml` spec will be scoped to the dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`.
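As an illustration only, here's a minimal sketch of what `pubsub.yaml` might contain, assuming an Azure Service Bus-backed pub/sub component; the component type, metadata entries, and secret values are placeholder assumptions, not a schema reference:

```yaml
# Hypothetical pubsub.yaml for an Azure Service Bus pub/sub component.
# The metadata and secret names below are illustrative placeholders.
componentType: pubsub.azure.servicebus
version: v1
metadata:
  - name: connectionString
    secretRef: sb-connection-string
secrets:
  - name: sb-connection-string
    value: "<service-bus-connection-string>"
# Scope the component to the two Dapr-enabled container apps.
scopes:
  - publisher-app
  - subscriber-app
```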
The `pubsub.yaml` spec will be scoped to the dapr-enabled container apps with ap
# [Bicep](#tab/bicep)
-This resource defines a Dapr component called `dapr-pubsub` via Bicep. The Dapr component is defined as a child resource of your Container Apps environment. The `dapr-pubsub` component is scoped to the Dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`:
+This resource defines a Dapr component called `dapr-pubsub` via Bicep. The Dapr component is defined as a child resource of your Container Apps environment. To define multiple components, you can add a `daprComponent` resource for each Dapr component.
+
+The `dapr-pubsub` component is scoped to the Dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`:
```bicep resource daprComponent 'daprComponents@2022-01-01-preview' = {
resource daprComponent 'daprComponents@2022-01-01-preview' = {
# [ARM](#tab/arm)
-This resource defines a Dapr component called `dapr-pubsub` via ARM. The Dapr component is defined as a child resource of your Container Apps environment. The `dapr-pubsub` component will be scoped to the Dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`:
+A Dapr component is defined as a child resource of your Container Apps environment. To define multiple components, you can add a `daprComponent` resource for each Dapr component.
+
+This resource defines a Dapr component called `dapr-pubsub` via ARM. The `dapr-pubsub` component will be scoped to the Dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`:
```json {
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
If you've changed the `STORAGE_ACCOUNT_CONTAINER` variable from its original val
Navigate to the directory in which you stored the *statestore.yaml* file and run the following command to configure the Dapr component in the Container Apps environment.
-If you need to add multiple components, run the `az containerapp env dapr-component set` command multiple times to add each component.
+If you need to add multiple components, create a separate YAML file for each component and run the `az containerapp env dapr-component set` command multiple times to add each component. For more information about configuring Dapr components, see [Configure Dapr components](dapr-overview.md#configure-dapr-components).
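+
+For example, the following hypothetical commands register a state store component and a pub/sub component as two separate calls; the environment, resource group, and file names are placeholders:
+
+```azurecli
+az containerapp env dapr-component set --name MY_ENVIRONMENT --resource-group MY_RESOURCE_GROUP --dapr-component-name statestore --yaml "./statestore.yaml"
+az containerapp env dapr-component set --name MY_ENVIRONMENT --resource-group MY_RESOURCE_GROUP --dapr-component-name pubsub --yaml "./pubsub.yaml"
+```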
+ # [Bash](#tab/bash)
databox-online Azure Stack Edge Gpu Deploy Add Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-add-shares.md
Previously updated : 02/22/2021 Last updated : 05/09/2022 # Customer intent: As an IT admin, I need to understand how to add and connect to shares on Azure Stack Edge Pro so I can use it to transfer data to Azure.
To create a share, do the following procedure:
The type of service you select depends on which format you want the data to use in Azure. In this example, because we want to store the data as block blobs in Azure, we select **Block Blob**. If you select **Page Blob**, make sure that your data is 512 bytes aligned. For example, a VHDX is always 512 bytes aligned. > [!IMPORTANT]
- > Make sure that the Azure Storage account that you use does not have immutability policies set on it if you are using it with a Azure Stack Edge Pro or Data Box Gateway device. For more information, see [Set and manage immutability policies for blob storage](../storage/blobs/immutable-policy-configure-version-scope.md).
+ > Make sure that the Azure Storage account that you use does not have immutability policies or archiving policies set on it if you are using it with an Azure Stack Edge Pro or Data Box Gateway device. If the blob policies are immutable or if the blobs are aggressively archived, you'll experience upload errors when a blob is changed in the share. For more information, see [Set and manage immutability policies for blob storage](../storage/blobs/immutable-policy-configure-version-scope.md).
e. Create a new blob container or use an existing one from the dropdown list. If creating a blob container, provide a container name. If a container doesn't already exist, it's created in the storage account with the newly created share name.
In this tutorial, you learned about the following Azure Stack Edge Pro topics:
To learn how to transform your data by using Azure Stack Edge Pro, advance to the next tutorial: > [!div class="nextstepaction"]
-> [Transform data with Azure Stack Edge Pro](./azure-stack-edge-j-series-deploy-configure-compute.md)
+> [Transform data with Azure Stack Edge Pro](./azure-stack-edge-j-series-deploy-configure-compute.md)
defender-for-cloud Just In Time Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-overview.md
Title: Understanding just-in-time virtual machine access in Microsoft Defender for Cloud description: This document explains how just-in-time VM access in Microsoft Defender for Cloud helps you control access to your Azure virtual machines-- Previously updated : 11/09/2021 Last updated : 05/15/2022 # Understanding just-in-time (JIT) VM access
This page explains the principles behind Microsoft Defender for Cloud's just-in-
To learn how to apply JIT to your VMs using the Azure portal (either Defender for Cloud or Azure Virtual Machines) or programmatically, see [How to secure your management ports with JIT](just-in-time-access-usage.md).

## The risk of open management ports on a virtual machine

Threat actors actively hunt accessible machines with open management ports, like RDP or SSH. All of your virtual machines are potential targets for an attack. When a VM is successfully compromised, it's used as the entry point to attack further resources within your environment.

## Why JIT VM access is the solution

As with all cybersecurity prevention techniques, your goal should be to reduce the attack surface. In this case, that means having fewer open ports, especially management ports.
Your legitimate users also use these ports, so it's not practical to keep them c
To solve this dilemma, Microsoft Defender for Cloud offers JIT. With JIT, you can lock down the inbound traffic to your VMs, reducing exposure to attacks while providing easy access to connect to VMs when needed.
+## How JIT operates with network resources in Azure and AWS
-
-## How JIT operates with network security groups and Azure Firewall
-
-When you enable just-in-time VM access, you can select the ports on the VM to which inbound traffic will be blocked. Defender for Cloud ensures "deny all inbound traffic" rules exist for your selected ports in the [network security group](../virtual-network/network-security-groups-overview.md#security-rules) (NSG) and [Azure Firewall rules](../firewall/rule-processing.md). These rules restrict access to your Azure VMs' management ports and defend them from attack.
+In Azure, you can block inbound traffic on specific ports by enabling just-in-time VM access. Defender for Cloud ensures "deny all inbound traffic" rules exist for your selected ports in the [network security group](../virtual-network/network-security-groups-overview.md#security-rules) (NSG) and [Azure Firewall rules](../firewall/rule-processing.md). These rules restrict access to your Azure VMs' management ports and defend them from attack.
If other rules already exist for the selected ports, then those existing rules take priority over the new "deny all inbound traffic" rules. If there are no existing rules on the selected ports, then the new rules take top priority in the NSG and Azure Firewall.
-When a user requests access to a VM, Defender for Cloud checks that the user has [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md) permissions for that VM. If the request is approved, Defender for Cloud configures the NSGs and Azure Firewall to allow inbound traffic to the selected ports from the relevant IP address (or range), for the amount of time that was specified. After the time has expired, Defender for Cloud restores the NSGs to their previous states. Connections that are already established are not interrupted.
+In AWS, enabling JIT access revokes the relevant rules in the attached EC2 security groups for the selected ports, which blocks inbound traffic on those specific ports.
+
+When a user requests access to a VM, Defender for Cloud checks that the user has [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md) permissions for that VM. If the request is approved, Defender for Cloud configures the NSGs and Azure Firewall to allow inbound traffic to the selected ports from the relevant IP address (or range), for the amount of time that was specified. In AWS, Defender for Cloud creates a new EC2 security group that allows inbound traffic to the specified ports. After the time has expired, Defender for Cloud restores the NSGs to their previous states. Connections that are already established are not interrupted.
> [!NOTE]
> JIT does not support VMs protected by Azure Firewalls controlled by [Azure Firewall Manager](../firewall-manager/overview.md). The Azure Firewall must be configured with Rules (Classic) and cannot use Firewall policies.

## How Defender for Cloud identifies which VMs should have JIT applied

The diagram below shows the logic that Defender for Cloud applies when deciding how to categorize your supported VMs:
+### [**Azure**](#tab/defender-for-container-arch-aks)
[![Just-in-time (JIT) virtual machine (VM) logic flow.](media/just-in-time-explained/jit-logic-flow.png)](media/just-in-time-explained/jit-logic-flow.png#lightbox)
+### [**AWS**](#tab/defender-for-container-arch-eks)
+++

When Defender for Cloud finds a machine that can benefit from JIT, it adds that machine to the recommendation's **Unhealthy resources** tab.

![Just-in-time (JIT) virtual machine (VM) access recommendation.](./media/just-in-time-explained/unhealthy-resources.png)

## FAQ - Just-in-time virtual machine access

### What permissions are needed to configure and use JIT?
JIT Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introd
If you want to create custom roles that can work with JIT, you'll need the details from the table below.
+If you are setting up JIT on your Amazon Web Services (AWS) VM, you will need to [connect your AWS account](quickstart-onboard-aws.md) to Microsoft Defender for Cloud.
+ > [!TIP] > To create a least-privileged role for users that need to request JIT access to a VM, and perform no other JIT operations, use the [Set-JitLeastPrivilegedRole script](https://github.com/Azure/Azure-Security-Center/tree/main/Powershell%20scripts/JIT%20Scripts/JIT%20Custom%20Role) from the Defender for Cloud GitHub community pages.
If you want to create custom roles that can work with JIT, you'll need the detai
|Request JIT access to a VM | *Assign these actions to the user:* <ul><li> `Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action` </li><li> `Microsoft.Security/locations/jitNetworkAccessPolicies/*/read` </li><li> `Microsoft.Compute/virtualMachines/read` </li><li> `Microsoft.Network/networkInterfaces/*/read` </li> <li> `Microsoft.Network/publicIPAddresses/read` </li></ul> | |Read JIT policies| *Assign these actions to the user:* <ul><li>`Microsoft.Security/locations/jitNetworkAccessPolicies/read`</li><li>`Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action`</li><li>`Microsoft.Security/policies/read`</li><li>`Microsoft.Security/pricings/read`</li><li>`Microsoft.Compute/virtualMachines/read`</li><li>`Microsoft.Network/*/read`</li>| -
+> [!Note]
+> Only the `Microsoft.Security` permissions are relevant for AWS.
## Next steps
defender-for-cloud Just In Time Access Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-usage.md
Title: Just-in-time virtual machine access in Microsoft Defender for Cloud | Microsoft Docs description: Learn how just-in-time VM access (JIT) in Microsoft Defender for Cloud helps you control access to your Azure virtual machines. Previously updated : 01/06/2022 Last updated : 05/17/2022 # Secure your management ports with just-in-time access
For a full explanation of the privilege requirements, see [What permissions are
This page teaches you how to include JIT in your security program. You'll learn how to: -- **Enable JIT on your VMs** - You can enable JIT with your own custom options for one or more VMs using Defender for Cloud, PowerShell, or the REST API. Alternatively, you can enable JIT with default, hard-coded parameters, from Azure virtual machines. When enabled, JIT locks down inbound traffic to your Azure VMs by creating a rule in your network security group.
+- **Enable JIT on your VMs** - You can enable JIT with your own custom options for one or more VMs using Defender for Cloud, PowerShell, or the REST API. Alternatively, you can enable JIT with default, hard-coded parameters, from Azure virtual machines. When enabled, JIT locks down inbound traffic to your Azure and AWS VMs by creating a rule in your network security group.
- **Request access to a VM that has JIT enabled** - The goal of JIT is to ensure that even though your inbound traffic is locked down, Defender for Cloud still provides easy access to connect to VMs when needed. You can request access to a JIT-enabled VM from Defender for Cloud, Azure virtual machines, PowerShell, or the REST API. - **Audit the activity** - To ensure your VMs are secured appropriately, review the accesses to your JIT-enabled VMs as part of your regular security checks. ## Availability
-|Aspect|Details|
-|-|:-|
-| Release state: | General availability (GA) |
-| Supported VMs: | :::image type="icon" source="./medi). |
+| Aspect | Details |
+|--|:-|
+| Release state: | General availability (GA) |
+| Supported VMs: | :::image type="icon" source="./medi). <br> :::image type="icon" source="./media/icons/yes-icon.png"::: AWS EC2 instances (Preview) |
| Required roles and permissions: | **Reader** and **SecurityReader** roles can both view the JIT status and parameters.<br>To create custom roles that can work with JIT, see [What permissions are needed to configure and use JIT?](just-in-time-access-overview.md#what-permissions-are-needed-to-configure-and-use-jit).<br>To create a least-privileged role for users that need to request JIT access to a VM, and perform no other JIT operations, use the [Set-JitLeastPrivilegedRole script](https://github.com/Azure/Azure-Security-Center/tree/main/Powershell%20scripts/JIT%20Scripts/JIT%20Custom%20Role) from the Defender for Cloud GitHub community pages. |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts |
-
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts (Preview) |
<sup><a name="footnote1"></a>1</sup> For any VM protected by Azure Firewall, JIT will only fully protect the machine if it's in the same VNET as the firewall. VMs using VNET peering will not be fully protected.
From Defender for Cloud, you can enable and configure the JIT VM access.
1. Select **Save**.

### Edit the JIT configuration on a JIT-enabled VM using Defender for Cloud <a name="jit-modify"></a>

You can modify a VM's just-in-time configuration by adding and configuring a new port to protect for that VM, or by changing any other setting related to an already protected port.
To edit the existing JIT rules for a VM:
1. When you've finished editing the ports, select **Save**.

### [**Azure virtual machines**](#tab/jit-config-avm)

### Enable JIT on your VMs from Azure virtual machines
When a VM has a JIT enabled, you have to request access to connect to it. You ca
> [!NOTE]
> If a user who is requesting access is behind a proxy, the option **My IP** may not work. You may need to define the full IP address range of the organization.

### [**Azure virtual machines**](#tab/jit-request-avm)

### Request access to a JIT-enabled VM from the Azure virtual machine's connect page
To request access from Azure virtual machines:
> [!NOTE]
> After a request is approved for a VM protected by Azure Firewall, Defender for Cloud provides the user with the proper connection details (the port mapping from the DNAT table) to use to connect to the VM.

### [**PowerShell**](#tab/jit-request-powershell)

### Request access to a JIT-enabled VM using PowerShell
Run the following in PowerShell:
Learn more in the [PowerShell cmdlet documentation](/powershell/scripting/developer/cmdlet/cmdlet-overview).

### [**REST API**](#tab/jit-request-api)

### Request access to JIT-enabled VMs using the REST API
You can gain insights into VM activities using log search. To view the logs:
1. To download the log information, select **Download as CSV**.

## Next steps

In this article, you learned _how_ to configure and use just-in-time VM access. To learn _why_ JIT should be used, read the concept article explaining the threats it defends against:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 05/16/2022 Last updated : 05/17/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in May include: - [Multi-cloud settings of Servers plan are now available in connector level](#multi-cloud-settings-of-servers-plan-are-now-available-in-connector-level)
+- [JIT is now available for AWS (Preview)](#jit-is-now-available-for-aws-preview)
### Multi-cloud settings of Servers plan are now available in connector level
Updates in the UI include a reflection of the selected pricing tier and the requ
:::image type="content" source="media/release-notes/auto-provision.png" alt-text="Screenshot of the auto-provision page with the multi-cloud connector enabled.":::
+### JIT is now available for AWS (Preview)
+
+We would like to announce that Just-in-Time VM access (JIT) is now available (in preview) to protect your AWS EC2 instances.
+
+Learn how [JIT protects](just-in-time-access-overview.md#how-jit-operates-with-network-resources-in-azure-and-aws) your AWS EC2 instances.
+ ## April 2022 Updates in April include:
expressroute Expressroute Howto Set Global Reach https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-set-global-reach.md
You can run the Get operation to verify the status.
After the previous operation is complete, you no longer have connectivity between your on-premises network through your ExpressRoute circuits.
+## Update connectivity configuration
+
+To update the Global Reach connectivity configuration, run the following commands against one of the ExpressRoute circuits.
+
+```azurepowershell-interactive
+$ckt_1 = Get-AzExpressRouteCircuit -Name "Your_circuit_1_name" -ResourceGroupName "Your_resource_group"
+$ckt_2 = Get-AzExpressRouteCircuit -Name "Your_circuit_2_name" -ResourceGroupName "Your_resource_group"
+$addressSpace = 'aa:bb::0/125'
+$addressPrefixType = 'IPv6'
+Set-AzExpressRouteCircuitConnectionConfig -Name "Your_connection_name" -ExpressRouteCircuit $ckt_1 -PeerExpressRouteCircuitPeering $ckt_2.Peerings[0].Id -AddressPrefix $addressSpace -AddressPrefixType $addressPrefixType
+Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt_1
+```
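+
+To confirm the updated configuration, you can run the Get operation again. The following sketch reuses the `$ckt_1` variable and connection name from the previous commands:
+
+```azurepowershell-interactive
+Get-AzExpressRouteCircuitConnectionConfig -Name "Your_connection_name" -ExpressRouteCircuit $ckt_1
+```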
+
## Next steps

1. [Learn more about ExpressRoute Global Reach](expressroute-global-reach.md)
2. [Verify ExpressRoute connectivity](expressroute-troubleshooting-expressroute-overview.md)
expressroute Expressroute Monitoring Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md
description: Learn about Azure ExpressRoute monitoring, metrics, and alerts usin
Previously updated : 09/14/2021 Last updated : 05/10/2022 # ExpressRoute monitoring, metrics, and alerts
This article helps you understand ExpressRoute monitoring, metrics, and alerts u
## ExpressRoute metrics
-To view **Metrics**, navigate to the *Azure Monitor* page and select *Metrics*. To view **ExpressRoute** metrics, filter by Resource Type *ExpressRoute circuits*. To view **Global Reach** metrics, filter by Resource Type *ExpressRoute circuits* and select an ExpressRoute circuit resource that has Global Reach enabled. To view **ExpressRoute Direct** metrics, filter Resource Type by *ExpressRoute Ports*.
+To view **Metrics**, go to the *Azure Monitor* page and select *Metrics*. To view **ExpressRoute** metrics, filter by Resource Type *ExpressRoute circuits*. To view **Global Reach** metrics, filter by Resource Type *ExpressRoute circuits* and select an ExpressRoute circuit resource that has Global Reach enabled. To view **ExpressRoute Direct** metrics, filter Resource Type by *ExpressRoute Ports*.
Once a metric is selected, the default aggregation will be applied. Optionally, you can apply splitting, which will show the metric with different dimensions.
Metrics explorer supports SUM, MAX, MIN, AVG and COUNT as [aggregation types](..
| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? | | | | | | | | |
-| [Arp Availability](#arp) | Availability | Percent | Average | ARP Availability from MSEE towards all peers. | PeeringType, Peer | Yes |
-| [Bgp Availability](#bgp) | Availability | Percent | Average | BGP Availability from MSEE towards all peers. | PeeringType, Peer | Yes |
-| [BitsInPerSecond](#circuitbandwidth) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | PeeringType | No |
-| [BitsOutPerSecond](#circuitbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | PeeringType | No |
+| [Arp Availability](#arp) | Availability | Percent | Average | ARP Availability from MSEE towards all peers. | Peering Type, Peer | Yes |
+| [Bgp Availability](#bgp) | Availability | Percent | Average | BGP Availability from MSEE towards all peers. | Peering Type, Peer | Yes |
+| [BitsInPerSecond](#circuitbandwidth) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | Peering Type | No |
+| [BitsOutPerSecond](#circuitbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | Peering Type | No |
| DroppedInBitsPerSecond | Traffic | BitsPerSecond | Average | Ingress bits of data dropped per second | Peering Type | Yes | | DroppedOutBitsPerSecond | Traffic | BitPerSecond | Average | Egress bits of data dropped per second | Peering Type | Yes | | GlobalReachBitsInPerSecond | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | PeeredCircuitSKey | No |
When you deploy an ExpressRoute gateway, Azure manages the compute and functions
* Frequency of routes changed * Number of VMs in the virtual network
-It's highly recommended you set alerts for each of these metrics so that you are aware of when your gateway could be seeing performance issues.
+It's highly recommended you set alerts for each of these metrics so that you're aware of when your gateway could be seeing performance issues.
### <a name = "gwbits"></a>Bits received per second - Split by instance
This metric captures the number of inbound packets traversing the ExpressRoute g
### <a name = "advertisedroutes"></a>Count of Routes Advertised to Peer - Split by instance
-Aggregation type: *Count*
+Aggregation type: *Max*
-This metric is the count for the number of routes the ExpressRoute gateway is advertising to the circuit. The address spaces may include virtual networks that are connected using VNet peering and uses remote ExpressRoute gateway. You should expect the number of routes to remain consistent unless there are frequent changes to the virtual network address spaces. Set an alert for when the number of advertised routes drop below the threshold for the number of virtual network address spaces you're aware of.
+This metric shows the number of routes the ExpressRoute gateway is advertising to the circuit. The address spaces may include virtual networks that are connected using VNet peering and use the remote ExpressRoute gateway. You should expect the number of routes to remain consistent unless there are frequent changes to the virtual network address spaces. Set an alert for when the number of advertised routes drops below the threshold for the number of virtual network address spaces you're aware of.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/count-of-routes-advertised-to-peer.png" alt-text="Screenshot of count of routes advertised to peer.":::
This metric shows the bits per second for ingress and egress to Azure through th
## Alerts for ExpressRoute gateway connections
-1. To configure alerts, navigate to **Azure Monitor**, then select **Alerts**.
+1. To set up alerts, go to **Azure Monitor**, then select **Alerts**.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/eralertshowto.jpg" alt-text="alerts"::: 2. Select **+Select Target** and select the ExpressRoute gateway connection resource.
This metric shows the bits per second for ingress and egress to Azure through th
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/basedpeering.jpg" alt-text="each peering":::
-## Configure alerts for activity logs on circuits
+## Set up alerts for activity logs on circuits
In the **Alert Criteria**, you can select **Activity Log** for the Signal Type and select the Signal.
In the **Alert Criteria**, you can select **Activity Log** for the Signal Type a
## More metrics in Log Analytics
-You can also view ExpressRoute metrics by navigating to your ExpressRoute circuit resource and selecting the *Logs* tab. For any metrics you query, the output will contain the columns below.
+You can also view ExpressRoute metrics by going to your ExpressRoute circuit resource and selecting the *Logs* tab. For any metrics you query, the output will contain the columns below.
| **Column** | **Type** | **Description** | | | | |
You can also view ExpressRoute metrics by navigating to your ExpressRoute circui
## Next steps
-Configure your ExpressRoute connection.
+Set up your ExpressRoute connection.
* [Create and modify a circuit](expressroute-howto-circuit-arm.md) * [Create and modify peering configuration](expressroute-howto-routing-arm.md)
logic-apps Create Custom Built In Connector Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-custom-built-in-connector-standard.md
+
+ Title: Create built-in connectors for Standard logic apps
+description: Create your own custom built-in connectors for Standard workflows in single-tenant Azure Logic Apps.
+
+ms.suite: integration
++ Last updated : 05/17/2022
+# As a developer, I want to learn how to create my own custom built-in connector operations to use and run in my Standard logic app workflows.
++
+# Create custom built-in connectors for Standard logic apps in single-tenant Azure Logic Apps
+
+If you need connectors that aren't available in Standard logic app workflows, you can create your own built-in connectors using the same extensibility model that's used by the [*service provider-based built-in connectors*](custom-connector-overview.md#service-provider-interface-implementation) available for Standard workflows in single-tenant Azure Logic Apps. This extensibility model is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md).
+
+This article shows how to create an example custom built-in Cosmos DB connector, which has a single Azure Functions-based trigger and no actions. The trigger fires when a new document is added to the monitored collection or container in Cosmos DB and then runs a workflow that uses the input payload as the Cosmos document.
+
+| Operation | Operation details | Description |
+|--|-|-|
+| Trigger | When a document is received | This trigger operation runs when an insert operation happens in the specified Cosmos DB database and collection. |
+| Action | None | This connector doesn't define any action operations. |
+||||
+
+This sample connector uses the same functionality as the [Azure Cosmos DB trigger for Azure Functions](../azure-functions/functions-bindings-cosmosdb-v2-trigger.md), which is based on [Azure Functions triggers and bindings](../azure-functions/functions-triggers-bindings.md). For the complete sample, review [Sample custom built-in Cosmos DB connector - Azure Logic Apps Connector Extensions](https://github.com/Azure/logicapps-connector-extensions/tree/CosmosDB/src/CosmosDB).
+
+For more information, review the following documentation:
+
+* [Custom connectors for Standard logic apps](custom-connector-overview.md#custom-connector-standard)
+* [Service provider-based built-in connectors](custom-connector-overview.md#service-provider-interface-implementation)
+* [Single-tenant Azure Logic Apps](single-tenant-overview-compare.md)
+
+## Prerequisites
+
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* Basic knowledge about single-tenant Azure Logic Apps, Standard logic app workflows, connectors, and how to use Visual Studio Code for creating single tenant-based workflows. For more information, review the following documentation:
+
+ * [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)
+
+ * [Create an integration workflow with single-tenant Azure Logic Apps (Standard) - Azure portal](create-single-tenant-workflows-azure-portal.md)
+
+* [Visual Studio Code with the Azure Logic Apps (Standard) extension and other prerequisites installed](create-single-tenant-workflows-azure-portal.md#prerequisites). Your installation should already include the [NuGet package for Microsoft.Azure.Workflows.WebJobs.Extension](https://www.nuget.org/packages/Microsoft.Azure.Workflows.WebJobs.Extension/).
+
+* An Azure Cosmos account, database, and container or collection. For more information, review [Quickstart: Create an Azure Cosmos account, database, container and items from the Azure portal](../cosmos-db/sql/create-cosmosdb-resources-portal.md).
+
+## High-level steps
+
+The following outline describes the high-level steps to build the example connector:
+
+1. Create a class library project.
+
+1. In your project, add the **Microsoft.Azure.Workflows.WebJobs.Extension** NuGet package as a NuGet reference.
+
+1. Provide the operations for your built-in connector by using the NuGet package to implement the methods for the interfaces named [**IServiceOperationsProvider**](custom-connector-overview.md#iserviceoperationsprovider) and [**IServiceOperationsTriggerProvider**](custom-connector-overview.md#iserviceoperationstriggerprovider).
+
+1. Register your custom built-in connector with the Azure Functions runtime extension.
+
+1. Install the connector for use.
+
+## Create your class library project
+
+1. In Visual Studio Code, create a .NET Core 3.1 class library project.
+
+1. In your project, add the NuGet package named **Microsoft.Azure.Workflows.WebJobs.Extension** as a NuGet reference.
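+
+   For example, you can add the package reference from a terminal. This is a sketch that assumes the .NET CLI; omitting the version lets NuGet resolve the latest available version:
+
+   ```dotnetcli
+   dotnet add package Microsoft.Azure.Workflows.WebJobs.Extension
+   ```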
+
+## Implement the service provider interface
+
+To provide the operations for the sample built-in connector, in the **Microsoft.Azure.Workflows.WebJobs.Extension** NuGet package, implement the methods for the following interfaces. The following diagram shows the interfaces with the method implementations that the Azure Logic Apps designer and runtime expect for a custom built-in connector that has an Azure Functions-based trigger:
+
+![Conceptual class diagram showing method implementation for sample Cosmos DB custom built-in connector.](./media/create-custom-built-in-connector-standard/service-provider-cosmos-db-example.png)
+
+### IServiceOperationsProvider
+
+This interface includes the following methods that provide the operation manifest and perform your service provider's specific tasks or actual business logic in your custom built-in connector. For more information, review [IServiceOperationsProvider](custom-connector-overview.md#iserviceoperationsprovider).
+
+* [**GetService()**](#getservice)
+
+ The designer in Azure Logic Apps requires the [**GetService()**](#getservice) method to retrieve the high-level metadata for your custom service, including the service description, connection input parameters required on the designer, capabilities, brand color, icon URL, and so on.
+
+* [**GetOperations()**](#getoperations)
+
+ The designer in Azure Logic Apps requires the [**GetOperations()**](#getoperations) method to retrieve the operations implemented by your custom service. The operations list is based on Swagger schema. The designer also uses the operation metadata to understand the input parameters for specific operations and generate the outputs as property tokens, based on the schema of the output for an operation.
+
+* [**GetBindingConnectionInformation()**](#getbindingconnectioninformation)
+
+ If your trigger is an Azure Functions-based trigger type, the runtime in Azure Logic Apps requires the [**GetBindingConnectionInformation()**](#getbindingconnectioninformation) method to provide the required connection parameters information to the Azure Functions trigger binding.
+
+* [**InvokeOperation()**](#invokeoperation)
+
+ If your connector has actions, the runtime in Azure Logic Apps requires the [**InvokeOperation()**](#invokeoperation) method to call each action in your connector that runs during workflow execution. If your connector doesn't have actions, you don't have to implement the **InvokeOperation()** method.
+
+ In this example, the Cosmos DB custom built-in connector doesn't have actions. However, the method is included in this example for completeness.
+
+For more information about these methods and their implementation, review the method descriptions later in this article.
+
+### IServiceOperationsTriggerProvider
+
+You can add or expose an [Azure Functions trigger or action](../azure-functions/functions-bindings-example.md) as a service provider trigger in your custom built-in connector. To use the Azure Functions-based trigger type and the same Azure Functions binding as the Azure managed connector trigger, implement the following methods to provide the connection information and trigger bindings as required by Azure Functions. For more information, review [IServiceOperationsTriggerProvider](custom-connector-overview.md#iserviceoperationstriggerprovider).
+
+* The [**GetFunctionTriggerType()**](#getfunctiontriggertype) method is required to return the string that's the same as the **type** parameter in the Azure Functions trigger binding.
+
+* The [**GetFunctionTriggerDefinition()**](#getfunctiontriggerdefinition) method has a default implementation, so you don't need to explicitly implement this method. However, if you want to update the trigger's default behavior, such as provide extra parameters that the designer doesn't expose, you can implement this method and override the default behavior.
+
+### Methods to implement
+
+The following sections describe the methods that the example connector implements. For the complete sample, review [Sample CosmosDbServiceOperationProvider.cs](https://github.com/Azure/logicapps-connector-extensions/blob/CosmosDB/src/CosmosDB/Providers/CosmosDbServiceOperationProvider.cs).
+
+#### GetService()
+
+The designer requires the following method to get the high-level description for your service:
+
+```csharp
+public ServiceOperationApi GetService()
+{
+ return this.CosmosDBApis.ServiceOperationServiceApi();
+}
+```
+
+#### GetOperations()
+
+The designer requires the following method to get the operations implemented by your service. This operations list is based on Swagger schema.
+
+```csharp
+public IEnumerable<ServiceOperation> GetOperations(bool expandManifest)
+{
+ return expandManifest ? serviceOperationsList : GetApiOperations();
+}
+```
+
+#### GetBindingConnectionInformation()
+
+To use the Azure Functions-based trigger type, the following method provides the required connection parameters information to the Azure Functions trigger binding.
+
+```csharp
+public string GetBindingConnectionInformation(string operationId, InsensitiveDictionary<JToken> connectionParameters)
+{
+ return ServiceOperationsProviderUtilities
+ .GetRequiredParameterValue(
+ serviceId: ServiceId,
+ operationId: operationId,
+ parameterName: "connectionString",
+ parameters: connectionParameters)?
+ .ToValue<string>();
+}
+```
+
+#### InvokeOperation()
+
+The example Cosmos DB custom built-in connector doesn't have actions, but the following method is included for completeness:
+
+```csharp
+public Task<ServiceOperationResponse> InvokeOperation(string operationId, InsensitiveDictionary<JToken> connectionParameters, ServiceOperationRequest serviceOperationRequest)
+{
+ throw new NotImplementedException();
+}
+```
+
+#### GetFunctionTriggerType()
+
+To use an Azure Functions-based trigger as a trigger in your connector, you have to return the string that's the same as the **type** parameter in the Azure Functions trigger binding.
+
+The following example returns the string for the out-of-the-box built-in Azure Cosmos DB trigger, `"type": "cosmosDBTrigger"`:
+
+```csharp
+public string GetFunctionTriggerType()
+{
+ return "CosmosDBTrigger";
+}
+```
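+
+For reference, this string matches the **type** value in an Azure Functions Cosmos DB trigger binding. The following *function.json*-style sketch is illustrative only; the connection setting, database, and collection names are placeholder assumptions:
+
+```json
+{
+  "type": "cosmosDBTrigger",
+  "name": "documents",
+  "direction": "in",
+  "connectionStringSetting": "CosmosDbConnectionString",
+  "databaseName": "my-database",
+  "collectionName": "my-collection",
+  "createLeaseCollectionIfNotExists": true
+}
+```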
+
+#### GetFunctionTriggerDefinition()
+
+This method has a default implementation, so you don't need to explicitly implement this method. However, if you want to update the trigger's default behavior, such as provide extra parameters that the designer doesn't expose, you can implement this method and override the default behavior.
+
+<a name="register-connector"></a>
+
+## Register your connector
+
+To load your custom built-in connector extension during the Azure Functions runtime start process, you have to add the Azure Functions extension registration as a startup job and register your connector as a service provider in the service provider list. Based on the type of data that your built-in trigger needs as inputs, optionally add a converter. This example converts the **Document** data type for Cosmos DB documents to a **JObject** array.
+
+The following sections show how to register your custom built-in connector as an Azure Functions extension.
+
+### Create the startup job
+
+1. Create a startup class using the assembly attribute named **[assembly:WebJobsStartup]**.
+
+1. Implement the **IWebJobsStartup** interface. In the **Configure()** method, register the extension and inject the service provider.
+
+ For example, the following code snippet shows the startup class implementation for the sample custom built-in Cosmos DB connector:
+
+ ```csharp
+ using Microsoft.Azure.WebJobs;
+ using Microsoft.Azure.WebJobs.Hosting;
+ using Microsoft.Extensions.DependencyInjection.Extensions;
+
+   [assembly: Microsoft.Azure.WebJobs.Hosting.WebJobsStartup(typeof(ServiceProviders.CosmosDb.Extensions.CosmosDbServiceProviderStartup))]
+
+ namespace ServiceProviders.CosmosDb.Extensions
+ {
+ public class CosmosDbServiceProviderStartup : IWebJobsStartup
+ {
+ // Initialize the workflow service.
+ public void Configure(IWebJobsBuilder builder)
+ {
+ // Register the extension.
+            builder.AddExtension<CosmosDbServiceProvider>();
+
+ // Use dependency injection (DI) for the trigger service operation provider.
+            builder.Services.TryAddSingleton<CosmosDbServiceOperationsProvider>();
+ }
+ }
+ }
+ ```
+
+ For more information, review [Register services - Use dependency injection in .NET Azure Functions](../azure-functions/functions-dotnet-dependency-injection.md#register-services).
+
+### Register the service provider
+
+Now, register the service provider implementation as an Azure Functions extension with the Azure Logic Apps engine. This example uses the built-in [Azure Cosmos DB trigger for Azure Functions](../azure-functions/functions-bindings-cosmosdb-v2-trigger.md?tabs=in-process%2Cfunctionsv2&pivots=programming-language-csharp) as a new trigger. This example also registers the new Cosmos DB service provider for an existing list of service providers, which is already part of the Azure Logic Apps extension. For more information, review [Register Azure Functions binding extensions](../azure-functions/functions-bindings-register.md).
+
+```csharp
+using Microsoft.Azure.Documents;
+using Microsoft.Azure.WebJobs.Description;
+using Microsoft.Azure.WebJobs.Host.Config;
+using Microsoft.Azure.Workflows.ServiceProviders.Abstractions;
+using Microsoft.WindowsAzure.ResourceStack.Common.Extensions;
+using Microsoft.WindowsAzure.ResourceStack.Common.Json;
+using Microsoft.WindowsAzure.ResourceStack.Common.Storage.Cosmos;
+using Newtonsoft.Json.Linq;
+using System;
+using System.Collections.Generic;
+
+namespace ServiceProviders.CosmosDb.Extensions
+{
+ [Extension("CosmosDbServiceProvider", configurationSection: "CosmosDbServiceProvider")]
+ public class CosmosDbServiceProvider : IExtensionConfigProvider
+ {
+ // Initialize a new instance for the CosmosDbServiceProvider class.
+        public CosmosDbServiceProvider(ServiceOperationsProvider serviceOperationsProvider, CosmosDbServiceOperationsProvider operationsProvider)
+ {
+            serviceOperationsProvider.RegisterService(serviceName: CosmosDbServiceOperationsProvider.ServiceName, serviceOperationsProviderId: CosmosDbServiceOperationsProvider.ServiceId, serviceOperationsProviderInstance: operationsProvider);
+ }
+
+ // Convert the Cosmos Document array to a generic JObject array.
+ public static JObject[] ConvertDocumentToJObject(IReadOnlyList<Document> data)
+ {
+ List<JObject> jobjects = new List<JObject>();
+
+ foreach(var doc in data)
+ {
+ jobjects.Add((JObject)doc.ToJToken());
+ }
+
+ return jobjects.ToArray();
+ }
+
+ // In the Initialize method, you can add any custom implementation.
+ public void Initialize(ExtensionConfigContext context)
+ {
+ // Convert the Cosmos Document list to a JObject array.
+ context.AddConverter<IReadOnlyList<Document>, JObject[]>(ConvertDocumentToJObject);
+ }
+ }
+}
+```
+
+### Add a converter
+
+Azure Logic Apps has a generic way to handle any Azure Functions built-in trigger by using the **JObject** array. However, if you want to convert the read-only list of Azure Cosmos DB documents into a **JObject** array, you can add a converter. When the converter is ready, register the converter as part of **ExtensionConfigContext** as shown earlier in this example:
+
+```csharp
+// Convert the Cosmos document list to a JObject array.
+context.AddConverter<IReadOnlyList<Document>, JObject[]>(ConvertDocumentToJObject);
+```
+
+### Class library diagram for implemented classes
+
+When you're done, review the following class diagram that shows the implementation for all the classes in the **Microsoft.Azure.Workflows.ServiceProvider.Extensions.CosmosDB.dll** extension bundle:
+
+* **CosmosDbServiceOperationsProvider**
+* **CosmosDbServiceProvider**
+* **CosmosDbServiceProviderStartup**
+
+![Conceptual code map diagram that shows complete class implementation.](./media/create-custom-built-in-connector-standard/methods-implementation-code-map-diagram.png)
+
+## Install your connector
+
+To add the NuGet reference for the extension bundle named **Microsoft.Azure.Workflows.ServiceProvider.Extensions.CosmosDB.dll** from the previous section, update the **extensions.json** file. For more information, go to the **Azure/logicapps-connector-extensions** repo, and review the PowerShell script named [**add-extension.ps1**](https://github.com/Azure/logicapps-connector-extensions/blob/main/src/Common/tools/add-extension.ps1).
+
+1. Update the extension bundle to include the custom built-in connector.
+
+1. In Visual Studio Code, with the **Azure Logic Apps (Standard) for Visual Studio Code** extension installed, create a logic app project, and install the extension package by running the following command from a PowerShell prompt:
+
+ **PowerShell**
+
+ ```powershell
+ dotnet add package "Microsoft.Azure.Workflows.ServiceProvider.Extensions.CosmosDB" --version 1.0.0 --source $extensionPath
+ ```
+
+ Alternatively, from your logic app project's directory using a PowerShell prompt, run the PowerShell script named [**add-extension.ps1**](https://github.com/Azure/logicapps-connector-extensions/blob/main/src/Common/tools/add-extension.ps1):
+
+ ```powershell
+ .\add-extension.ps1 {Cosmos-DB-output-bin-NuGet-folder-path} CosmosDB
+ ```
+
+ **Bash**
+
+ To use Bash instead, from your logic app project's directory, run the PowerShell script with the following command:
+
+ ```bash
+ powershell -file add-extension.ps1 {Cosmos-DB-output-bin-NuGet-folder-path} CosmosDB
+ ```
+
+ If the extension for your custom built-in connector was successfully installed, you get output that looks similar to the following example:
+
+ ```output
+   C:\Users\{your-user-name}\Desktop\demoproj\cdbproj>powershell -file C:\myrepo\github\logicapps-connector-extensions\src\Common\tools\add-extension.ps1 C:\myrepo\github\logicapps-connector-extensions\src\CosmosDB\bin\Debug\CosmosDB
+
+ Nuget extension path is C:\myrepo\github\logicapps-connector-extensions\src\CosmosDB\bin\Debug\
+ Extension dll path is C:\myrepo\github\logicapps-connector-extensions\src\CosmosDB\bin\Debug\netcoreapp3.1\Microsoft.Azure.Workflows.ServiceProvider.Extensions.CosmosDB.dll
+ Extension bundle module path is C:\Users\{your-user-name}\.azure-functions-core-tools\Functions\ExtensionBundles\Microsoft.Azure.Functions.ExtensionBundle.Workflows1.1.9
+ EXTENSION PATH is C:\Users\{your-user-name}\.azure-functions-core-tools\Functions\ExtensionBundles\Microsoft.Azure.Functions.ExtensionBundle.Workflows\1.1.9\bin\extensions.json and dll Path is C:\myrepo\github\logicapps-connector-extensions\src\CosmosDB\bin\Debug\netcoreapp3.1\Microsoft.Azure.Workflows.ServiceProvider.Extensions.CosmosDB.dll
+ SUCCESS: The process "func.exe" with PID 26692 has been terminated.
+ Determining projects to restore...
+   Writing C:\Users\{your-user-name}\AppData\Local\Temp\tmpD343.tmp
+ info : Adding PackageReference for package 'Microsoft.Azure.Workflows.ServiceProvider.Extensions.CosmosDB' into project 'C:\Users\{your-user-name}\Desktop\demoproj\cdbproj.csproj'.
+ info : Restoring packages for C:\Users\{your-user-name}\Desktop\demoproj\cdbproj.csproj...
+ info : Package 'Microsoft.Azure.Workflows.ServiceProvider.Extensions.CosmosDB' is compatible with all the specified frameworks in project 'C:\Users\{your-user-name}\Desktop\demoproj\cdbproj.csproj'.
+ info : PackageReference for package 'Microsoft.Azure.Workflows.ServiceProvider.Extensions.CosmosDB' version '1.0.0' updated in file 'C:\Users\{your-user-name}\Desktop\demoproj\cdbproj.csproj'.
+ info : Committing restore...
+ info : Generating MSBuild file C:\Users\{your-user-name}\Desktop\demoproj\cdbproj\obj\cdbproj.csproj.nuget.g.props.
+ info : Generating MSBuild file C:\Users\{your-user-name}\Desktop\demoproj\cdbproj\obj\cdbproj.csproj.nuget.g.targets.
+ info : Writing assets file to disk. Path: C:\Users\{your-user-name}\Desktop\demoproj\cdbproj\obj\project.assets.json.
+ log : Restored C:\Users\{your-user-name}\Desktop\demoproj\cdbproj\cdbproj.csproj (in 1.5 sec).
+ Extension CosmosDB is successfully added.
+
+ C:\Users\{your-user-name}\Desktop\demoproj\cdbproj\>
+ ```
+
+1. If any **func.exe** process is running, make sure to close or exit that process before you continue to the next step.
+
+## Test your connector
+
+1. In Visual Studio Code, open your Standard logic app and blank workflow in the designer.
+
+1. On the designer surface, select **Choose an operation** to open the connector operations picker.
+
+1. Under the operations search box, select **Built-in**. In the search box, enter **cosmos db**.
+
+ The operations picker shows your custom built-in connector and trigger, for example:
+
+ ![Screenshot showing Visual Studio Code and the designer for a Standard logic app workflow with the new custom built-in Cosmos DB connector.](./media/create-custom-built-in-connector-standard/visual-studio-code-built-in-connector-picker.png)
+
+1. From the **Triggers** list, select your custom built-in trigger to start your workflow.
+
+1. On the connection pane, provide the following property values to create a connection, for example:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*Cosmos-DB-connection-name*> | The name for the Cosmos DB connection to create |
+ | **Connection String** | Yes | <*Cosmos-DB-connection-string*> | The connection string for the Azure Cosmos DB database collection or lease collection where you want to add each new received document. |
+ |||||
+
+ ![Screenshot showing the connection pane when using the connector for the first time.](./media/create-custom-built-in-connector-standard/visual-studio-code-built-in-connector-create-connection.png)
+
+1. When you're done, select **Create**.
+
+1. On the trigger properties pane, provide the following property values for your trigger, for example:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Database name** | Yes | <*Cosmos-DB-database-name*> | The name for the Cosmos DB database to use |
+ | **Collection name** | Yes | <*Cosmos-DB-collection-name*> | The name for the Cosmos DB collection where you want to add each new received document. |
+ |||||
+
+ ![Screenshot showing the trigger properties pane.](./media/create-custom-built-in-connector-standard/visual-studio-code-built-in-connector-trigger-properties.png)
+
+ For this example, in code view, the workflow definition, which is in the **workflow.json** file, has a `triggers` JSON object that appears similar to the following sample:
+
+ ```json
+ {
+ "definition": {
+ "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
+ "actions": {},
+ "contentVersion": "1.0.0.0",
+ "outputs": {},
+ "triggers": {
+ "When_a_document_is_received": {
+ "inputs":{
+ "parameters": {
+ "collectionName": "States",
+ "databaseName": "SampleCosmosDB"
+ },
+ "serviceProviderConfiguration": {
+ "connectionName": "cosmosDb",
+ "operationId": "whenADocumentIsReceived",
+ "serviceProviderId": "/serviceProviders/CosmosDb"
+ },
+ "splitOn": "@triggerOutputs()?['body']",
+ "type": "ServiceProvider"
+ }
+ }
+ }
+ },
+ "kind": "Stateful"
+ }
+ ```
+
+ The connection definition, which is in the **connections.json** file, has a `serviceProviderConnections` JSON object that appears similar to the following sample:
+
+ ```json
+ {
+ "serviceProviderConnections": {
+ "cosmosDb": {
+ "parameterValues": {
+ "connectionString": "@appsetting('cosmosDb_connectionString')"
+ },
+ "serviceProvider": {
+ "id": "/serviceProviders/CosmosDb"
+ },
+ "displayName": "myCosmosDbConnection"
+ }
+ },
+ "managedApiConnections": {}
+ }
+ ```
+
+1. In Visual Studio Code, on the **Run** menu, select **Start Debugging** (F5).
+
+1. To trigger your workflow, in the Azure portal, open your Azure Cosmos DB account. On the account menu, select **Data Explorer**. Browse to the database and collection that you specified in the trigger. Add an item to the collection.
+
+ ![Screenshot showing the Azure portal, Cosmos DB account, and Data Explorer open to the specified database and collection.](./media/create-custom-built-in-connector-standard/cosmos-db-account-test-add-item.png)
+
+## Next steps
+
+* [Source for sample custom built-in Cosmos DB connector - Azure Logic Apps Connector Extensions](https://github.com/Azure/logicapps-connector-extensions/tree/CosmosDB/src/CosmosDB)
+
+* [Built-in Service Bus trigger: batching and session handling](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/azure-logic-apps-running-anywhere-built-in-service-bus-trigger/ba-p/2079995)
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
To debug a stateless workflow more easily, you can enable the run history for th
1. To disable the run history when you're done, either set the `Workflows.{yourWorkflowName}.OperationOptions` property to `None`, or delete the property and its value.
+<a name="view-connections"></a>
+
+## View connections
+
+When you create connections within a workflow using [managed connectors](../connectors/managed.md) or [service provider based, built-in connectors](../connectors/built-in.md), these connections are actually separate Azure resources with their own resource definitions.
+
+1. From your logic app's menu, under **Workflows**, select **Connections**.
+
+1. Based on the connection type that you want to view, select one of the following options:
+
+ | Option | Description |
+ |--|-|
+ | **API Connections** | Connections created by managed connectors |
+   | **Service Provider Connections** | Connections created by built-in connectors based on the service provider interface implementation. To view more information about a specific connection, select that connection. To view the selected connection's underlying resource definition, select **JSON View**. |
+ | **JSON View** | The underlying resource definitions for all connections in the logic app |
+ |||
+<a name="delete-from-designer"></a>
+
+## Delete items from the designer
logic-apps Custom Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/custom-connector-overview.md
Title: Custom connector topic links
-description: Links to topics about how to create, use, share, and certify custom connectors in Azure Logic Apps.
+ Title: Custom connectors
+description: Learn about creating custom connectors in Azure Logic Apps.
ms.suite: integration
Previously updated : 1/30/2018 Last updated : 05/17/2022
+# As a developer, I want to learn about the capability to create custom connectors with operations that I can use in my Azure Logic Apps workflows.
-# Custom connectors in Logic Apps
+# Custom connectors in Azure Logic Apps
-Without writing any code, you can build workflows and apps with
-[Azure Logic Apps](https://azure.microsoft.com/services/logic-apps),
-[Power Automate](https://flow.microsoft.com),
-and [Power Apps](https://powerapps.microsoft.com).
-To help you integrate apps, data, and business processes,
-these services offer [~200 connectors](/connectors/) -
-for Microsoft services and products, as well as other services,
-like GitHub, Salesforce, Twitter, and more.
+Without writing any code, you can quickly create automated integration workflows when you use the prebuilt connector operations in Azure Logic Apps. A connector helps your workflows connect and access data, events, and actions across other apps, services, systems, protocols, and platforms. Each connector offers operations as triggers, actions, or both that you can add to your workflows. By using these operations, you expand the capabilities for your cloud apps and on-premises apps to work with new and existing data.
-Sometimes though, you might want to call APIs, services, and systems that aren't available as prebuilt connectors.
-To support more tailored scenarios, you can build *custom connectors* with their own triggers and actions.
-The Connectors documentation site has complete basic and advanced tutorials about custom connectors.
-You can start with the [custom connector overview](/connectors/custom-connectors/),
-but you can also go directly to these topics for details about a specific area:
+Connectors in Azure Logic Apps are either *built in* or *managed*. A *built-in* connector runs natively on the Azure Logic Apps runtime, which means it's hosted in the same process as the runtime and provides higher throughput, low latency, and local connectivity. A *managed connector* is a proxy or a wrapper around an API, such as Office 365 or Salesforce, that helps the underlying service talk to Azure Logic Apps. Managed connectors are powered by the connector infrastructure in Azure and are deployed, hosted, run, and managed by Microsoft. You can choose from [hundreds of managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) to use with your workflows in Azure Logic Apps.
-* [Create a Logic Apps connector](/connectors/custom-connectors/create-logic-apps-connector)
+When you use a connector operation for the first time in a workflow, some connectors don't require that you create a connection first, but many other connectors require this step. Each connection that you create is actually a separate Azure resource that provides access to the target app, service, system, protocol, or platform.
-* [Create a custom connector from an OpenAPI definition](/connectors/custom-connectors/define-openapi-definition)
+Sometimes though, you might want to call REST APIs that aren't available as prebuilt connectors. To support more tailored scenarios, you can create your own [*custom connectors*](/connectors/custom-connectors/) to offer triggers and actions that aren't available as prebuilt operations.
-* [Create a custom connector from a Postman collection](/connectors/custom-connectors/define-postman-collection)
+This article provides an overview of custom connectors for [Consumption logic app workflows and Standard logic app workflows](logic-apps-overview.md). Each logic app type is powered by a different Azure Logic Apps runtime, respectively hosted in multi-tenant Azure and single-tenant Azure. For more information about connectors in Azure Logic Apps, review the following documentation:
-* [Use a custom connector from a logic app](/connectors/custom-connectors/use-custom-connector-logic-apps)
+* [About connectors in Azure Logic Apps](../connectors/apis-list.md)
+* [Built-in connectors in Azure Logic Apps](../connectors/built-in.md)
+* [Managed connectors in Azure Logic Apps](../connectors/managed.md)
+* [Connector overview](/connectors/connectors)
+* [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)
-* [Share custom connectors in your organization](/connectors/custom-connectors/share)
+<a name="custom-connector-consumption"></a>
-* [Submit your connectors for Microsoft certification](/connectors/custom-connectors/submit-certification)
+## Consumption logic apps
-* [Custom connector FAQ](/connectors/custom-connectors/faq)
+In [multi-tenant Azure Logic Apps](logic-apps-overview.md), you can create [custom connectors from Swagger-based or SOAP-based APIs](/connectors/custom-connectors/) up to [specific limits](../logic-apps/logic-apps-limits-and-config.md#custom-connector-limits) for use in Consumption logic app workflows. The [Connectors documentation](/connectors/connectors) provides more overview information about how to create custom connectors for Consumption logic apps, including complete basic and advanced tutorials. The following list also provides direct links to information about custom connectors for Consumption logic apps:
+
+ * [Create an Azure Logic Apps connector](/connectors/custom-connectors/create-logic-apps-connector)
+ * [Create a custom connector from an OpenAPI definition](/connectors/custom-connectors/define-openapi-definition)
+ * [Create a custom connector from a Postman collection](/connectors/custom-connectors/define-postman-collection)
+ * [Use a custom connector from a logic app](/connectors/custom-connectors/use-custom-connector-logic-apps)
+ * [Share custom connectors in your organization](/connectors/custom-connectors/share)
+ * [Submit your connectors for Microsoft certification](/connectors/custom-connectors/submit-certification)
+ * [Custom connector FAQ](/connectors/custom-connectors/faq)
+
+<a name="custom-connector-standard"></a>
+
+## Standard logic apps
+
+In [single-tenant Azure Logic Apps](logic-apps-overview.md), the redesigned Azure Logic Apps runtime powers Standard logic app workflows. This runtime differs from the multi-tenant Azure Logic Apps runtime that powers Consumption logic app workflows. The single-tenant runtime uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md), which provides a key capability for you to create your own [built-in connectors](../connectors/built-in.md) for anyone to use in Standard workflows. In most cases, the built-in version provides better performance, capabilities, pricing, and so on.
+
+When single-tenant Azure Logic Apps was officially released, new built-in connectors included Azure Blob Storage, Azure Event Hubs, Azure Service Bus, and SQL Server. Over time, this list of built-in connectors continues to grow. However, if you need connectors that aren't available in Standard logic app workflows, you can [create your own built-in connectors](create-custom-built-in-connector-standard.md) using the same extensibility model that's used by built-in connectors in Standard workflows.
+
+<a name="service-provider-interface-implementation"></a>
+
+### Built-in connectors as service providers
+
+In single-tenant Azure Logic Apps, a built-in connector that has the following attributes is called a *service provider*:
+
+* Is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md).
+
+* Provides access from a Standard logic app workflow to a service, such as Azure Blob Storage, Azure Service Bus, Azure Event Hubs, SFTP, and SQL Server.
+
+ Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Azure Active Directory (Azure AD), or a managed identity.
+
+* Runs in the same process as the redesigned Azure Logic Apps runtime.
+
+A built-in connector that's *not a service provider* has the following attributes:
+
+* Isn't based on the Azure Functions extensibility model.
+
+* Is directly implemented as a job within the Azure Logic Apps runtime, such as Schedule, HTTP, Request, and XML operations.
+
+No capability is currently available to create a non-service provider built-in connector or a new job type that runs directly in the Azure Logic Apps runtime. However, you can create your own built-in connectors using the service provider infrastructure.
+
+The following section provides more information about how the extensibility model works for custom built-in connectors.
+
+<a name="built-in-connector-extensibility-model"></a>
+
+### Built-in connector extensibility model
+
+Based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md), the built-in connector extensibility model in single-tenant Azure Logic Apps has a service provider infrastructure that you can use to [create, package, register, and install your own built-in connectors](create-custom-built-in-connector-standard.md) as Azure Functions extensions that anyone can use in their Standard workflows. This model includes custom built-in trigger capabilities that support exposing an [Azure Functions trigger or action](../azure-functions/functions-bindings-example.md) as a service provider trigger in your custom built-in connector.
+
+The following diagram shows the method implementations that the Azure Logic Apps designer and runtime expects for a custom built-in connector with an [Azure Functions-based trigger](../azure-functions/functions-bindings-example.md):
+
+![Conceptual diagram showing Azure Functions-based service provider infrastructure.](./media/custom-connector-overview/service-provider-azure-functions-based.png)
+
+The following sections provide more information about the interfaces that your connector needs to implement.
+
+#### IServiceOperationsProvider
+
+This interface includes the methods that provide the operations manifest for your custom built-in connector.
+
+* Operations manifest
+
+ The operations manifest includes metadata about the implemented operations in your custom built-in connector. The Azure Logic Apps designer primarily uses this metadata to drive the authoring and monitoring experiences for your connector's operations. For example, the designer uses operation metadata to understand the input parameters required by a specific operation and to facilitate generating the outputs' property tokens, based on the schema for the operation's outputs.
+
+ The designer requires and uses the [**GetService()**](#getservice) and [**GetOperations()**](#getoperations) methods to query the operations that your connector provides and shows on the designer surface. The **GetService()** method also specifies the connection's input parameters that are required by the designer.
+
+* Operation invocations
+
+ Operation invocations are the method implementations used during workflow execution by the Azure Logic Apps runtime to call the specified operations in the workflow definition.
+
+ * If your trigger is an Azure Functions-based trigger type, the [**GetBindingConnectionInformation()**](#getbindingconnectioninformation) method is used by the runtime in Azure Logic Apps to provide the required connection parameters information to the Azure Functions trigger binding.
+
+ * If your connector has actions, the [**InvokeOperation()**](#invokeoperation) method is used by the runtime to call each action in your connector that runs during workflow execution. Otherwise, you don't have to implement this method.
+
+For more information about these methods and their implementation, review the [Methods to implement](#method-implementation) section later in this article.
+
+#### IServiceOperationsTriggerProvider
+
+Custom built-in trigger capabilities support adding or exposing an [Azure Functions trigger or action](../azure-functions/functions-bindings-example.md) as a service provider trigger in your custom built-in connector. To use the Azure Functions-based trigger type and the same Azure Functions binding as the Azure managed connector trigger, implement the following methods to provide the connection information and trigger bindings as required by Azure Functions.
+
+* The [**GetFunctionTriggerType()**](#getfunctiontriggertype) method is required to return the string that's the same as the **type** parameter in the Azure Functions trigger binding.
+
+* The [**GetFunctionTriggerDefinition()**](#getfunctiontriggerdefinition) method has a default implementation, so you don't need to explicitly implement this method. However, if you want to update the trigger's default behavior, such as provide extra parameters that the designer doesn't expose, you can implement this method and override the default behavior.
+
+<a name="method-implementation"></a>
+
+### Methods to implement
+
+The following sections provide more information about the methods that your connector needs to implement. For the complete sample, review [Sample CosmosDbServiceOperationProvider.cs](https://github.com/Azure/logicapps-connector-extensions/blob/CosmosDB/src/CosmosDB/Providers/CosmosDbServiceOperationProvider.cs) and [Create custom built-in connectors for Standard logic apps in single-tenant Azure Logic Apps](create-custom-built-in-connector-standard.md).
+
+#### GetService()
+
+The designer requires this method to get the high-level metadata for your service, including the service description, connection input parameters, capabilities, brand color, icon URL, and so on.
+
+```csharp
+public ServiceOperationApi GetService()
+{
+ return this.{custom-service-name-apis}.ServiceOperationServiceApi();
+}
+```
+
+For more information, review [Sample CosmosDbServiceOperationProvider.cs](https://github.com/Azure/logicapps-connector-extensions/blob/CosmosDB/src/CosmosDB/Providers/CosmosDbServiceOperationProvider.cs).
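+
+As an illustration only, the following minimal sketch shows one way such a helper might assemble that metadata. The **ServiceOperationApi** and **ServiceApiProperties** property names shown here are assumptions modeled on the sample provider, not a definitive contract, and the connection input parameters and capabilities are only noted in comments:
+
+```csharp
+// A minimal sketch, assuming property shapes like those in the sample provider.
+// All names and values are illustrative placeholders.
+public ServiceOperationApi ServiceOperationServiceApi()
+{
+    return new ServiceOperationApi
+    {
+        Name = "{custom-service-name}",
+        Id = "/serviceProviders/{custom-service-name}",
+        Properties = new ServiceApiProperties
+        {
+            DisplayName = "{service-display-name}",
+            Description = "{service-description}",
+            BrandColor = 0x1F85FF,
+            IconUri = new Uri("{icon-URI}"),
+            // Also define the connection input parameters, such as
+            // "connectionString", and the service capabilities here.
+        },
+    };
+}
+```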
+
+#### GetOperations()
+
+The designer requires this method to get the operations implemented by your service. The operations list is based on the Swagger schema. The designer also uses the operation metadata to understand the input parameters for specific operations and generate the outputs as property tokens, based on the schema of the output for an operation.
+
+```csharp
+public IEnumerable<ServiceOperation> GetOperations(bool expandManifest)
+{
+ return expandManifest ? serviceOperationsList : GetApiOperations();
+}
+```
+
+For more information, review [Sample CosmosDbServiceOperationProvider.cs](https://github.com/Azure/logicapps-connector-extensions/blob/CosmosDB/src/CosmosDB/Providers/CosmosDbServiceOperationProvider.cs).
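+
+As an illustration, the following minimal sketch shows backing collections that this method can draw from, with names modeled on the sample provider: **serviceOperationsList** holds the fully expanded operation manifests, while a summarized, Swagger-based list backs **GetApiOperations()**:
+
+```csharp
+// A minimal sketch, assuming names modeled on the sample provider. The expanded
+// manifests drive authoring and runtime behavior, while the summarized
+// operations answer the designer's initial listing request.
+private readonly List<ServiceOperation> serviceOperationsList = new List<ServiceOperation>();
+private readonly InsensitiveDictionary<ServiceOperation> apiOperationsList = new InsensitiveDictionary<ServiceOperation>();
+
+private IEnumerable<ServiceOperation> GetApiOperations()
+{
+    return this.apiOperationsList.Values;
+}
+```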
+
+#### GetBindingConnectionInformation()
+
+If you want to use the Azure Functions-based trigger type, this method provides the required connection parameters information to the Azure Functions trigger binding.
+
+```csharp
+public string GetBindingConnectionInformation(string operationId, InsensitiveDictionary<JToken> connectionParameters)
+{
+ return ServiceOperationsProviderUtilities
+ .GetRequiredParameterValue(
+ serviceId: ServiceId,
+            operationId: operationId,
+ parameterName: "connectionString",
+ parameters: connectionParameters)?
+ .ToValue<string>();
+}
+```
+
+For more information, review [Sample CosmosDbServiceOperationProvider.cs](https://github.com/Azure/logicapps-connector-extensions/blob/CosmosDB/src/CosmosDB/Providers/CosmosDbServiceOperationProvider.cs).
+
+#### InvokeOperation()
+
+If your custom built-in connector only has a trigger, you don't have to implement this method. However, if your connector has actions to implement, you have to implement the **InvokeOperation()** method, which is called for each action in your connector that runs during workflow execution. You can use any client, such as FTPClient, HTTPClient, and so on, as required by your connector's actions. This example uses HTTPClient.
+
+```csharp
+public async Task<ServiceOperationResponse> InvokeOperation(string operationId, InsensitiveDictionary<JToken> connectionParameters, ServiceOperationRequest serviceOperationRequest)
+{
+    JObject response;
+    using (var client = new HttpClient())
+    {
+        // Build the outbound request. The URI is a placeholder; construct the real
+        // request from the operation's connection parameters and inputs.
+        var httpRequestMessage = new HttpRequestMessage(HttpMethod.Post, "{operation-endpoint-URI}");
+        // Send the request, and read the response content as JSON.
+        var httpResponse = await client.SendAsync(httpRequestMessage).ConfigureAwait(false);
+        response = JObject.Parse(await httpResponse.Content.ReadAsStringAsync().ConfigureAwait(false));
+    }
+    return new ServiceOperationResponse(body: response);
+}
+```
+
+For more information, review [Sample CosmosDbServiceOperationProvider.cs](https://github.com/Azure/logicapps-connector-extensions/blob/CosmosDB/src/CosmosDB/Providers/CosmosDbServiceOperationProvider.cs).
+
+#### GetFunctionTriggerType()
+
+To use an Azure Functions-based trigger as a trigger in your connector, you have to return the string that's the same as the **type** parameter in the Azure Functions trigger binding.
+
+The following example returns the string for the out-of-the-box built-in Azure Cosmos DB trigger, `"type": "cosmosDBTrigger"`:
+
+```csharp
+public string GetFunctionTriggerType()
+{
+ return "CosmosDBTrigger";
+}
+```
+
+For more information, review [Sample CosmosDbServiceOperationProvider.cs](https://github.com/Azure/logicapps-connector-extensions/blob/CosmosDB/src/CosmosDB/Providers/CosmosDbServiceOperationProvider.cs).
+
+#### GetFunctionTriggerDefinition()
+
+This method has a default implementation, so you don't need to explicitly implement this method. However, if you want to update the trigger's default behavior, such as provide extra parameters that the designer doesn't expose, you can implement this method and override the default behavior.
+
+## Next steps
+
+When you're ready to start the implementation steps, continue to the following article:
+
+* [Create custom built-in connectors for Standard logic apps in single-tenant Azure Logic Apps](create-custom-built-in-connector-standard.md)
logic-apps Logic Apps Enterprise Integration Flatfile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-flatfile.md
Last updated 11/02/2021
# Encode and decode flat files in Azure Logic Apps
-Before you send XML content to a business partner in a business-to-business (B2B) scenario, you might want to encode that content first. By building a logic app workflow, you can encode and decode flat files by using the [built-in](../connectors/built-in.md#integration-account-built-in-actions) **Flat File** actions.
+Before you send XML content to a business partner in a business-to-business (B2B) scenario, you might want to encode that content first. By building a logic app workflow, you can encode and decode flat files by using the [built-in](../connectors/built-in.md#integration-account-built-in) **Flat File** actions.
Although no **Flat File** triggers are available, you can use a different trigger or action to get or feed the XML content from various sources into your workflow for encoding or decoding. For example, you can use the Request trigger, another app, or other [connectors supported by Azure Logic Apps](../connectors/apis-list.md). You can use **Flat File** actions with workflows in the [**Logic App (Consumption)** and **Logic App (Standard)** resource types](single-tenant-overview-compare.md).
This article shows how to add the Flat File encoding and decoding actions to an
* A [link to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md#link-account).
- * If you're using use the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), you don't store schemas in your integration account. Instead, you can [directly add schemas to your logic app resource](logic-apps-enterprise-integration-schemas.md) using either the Azure portal or Visual Studio Code. You can then use these schemas across multiple workflows within the *same logic app resource*.
+ * If you're using the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), you don't store schemas in your integration account. Instead, you can [directly add schemas to your logic app resource](logic-apps-enterprise-integration-schemas.md) using either the Azure portal or Visual Studio Code. You can then use these schemas across multiple workflows within the *same logic app resource*.
You still need an integration account to store other artifacts, such as partners, agreements, and certificates, along with using the [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), and [EDIFACT](logic-apps-enterprise-integration-edifact.md) operations. However, you don't need to link your logic app resource to your integration account, so the linking capability doesn't exist. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
The following table lists the values for custom connectors:
| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
|------|--------------|---------------|---------------------------------|-------|
| Custom connectors | 1,000 per Azure subscription | Unlimited | 1,000 per Azure subscription ||
-| Custom connectors - Number of APIs | SOAP-based: 50 | Not applicable | SOAP-based: 50 ||
+| APIs per service | SOAP-based: 50 | Not applicable | SOAP-based: 50 ||
+| Parameters per API | SOAP-based: 50 | Not applicable | SOAP-based: 50 ||
| Requests per minute for a custom connector | 500 requests per minute per connection | Based on your implementation | 2,000 requests per minute per *custom connector* ||
| Connection timeout | 2 min | Idle connection: <br>4 min <p><p>Active connection: <br>10 min | 2 min ||
||||||
logic-apps Logic Apps Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-pricing.md
The following table summarizes how the Consumption model handles metering and bi
|-|-|-|
| [*Built-in*](../connectors/built-in.md) | These operations run directly and natively with the Azure Logic Apps runtime. In the designer, you can find these operations under the **Built-in** label. <p>For example, the HTTP trigger and Request trigger are built-in triggers. The HTTP action and Response action are built-in actions. Other built-in operations include workflow control actions such as loops and conditions, data operations, batch operations, and others. | The Consumption model includes an *initial number of free built-in operations*, per Azure subscription, that a workflow can run. Above this number, built-in operation executions follow the [*Actions* pricing](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Note**: Some managed connector operations are *also* available as built-in operations, which are included in the initial free operations. Above the initially free operations, billing follows the [*Actions* pricing](https://azure.microsoft.com/pricing/details/logic-apps/), not the [*Standard* or *Enterprise* connector pricing](https://azure.microsoft.com/pricing/details/logic-apps/). |
| [*Managed connector*](../connectors/managed.md) | These operations run separately in Azure. In the designer, you can find these operations under the **Standard** or **Enterprise** label. | These operation executions follow the [*Standard* or *Enterprise* connector pricing](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Note**: Preview Enterprise connector operation executions follow the [Consumption *Standard* connector pricing](https://azure.microsoft.com/pricing/details/logic-apps/). |
-| [*Custom connector*](../connectors/apis-list.md#custom-apis-and-connectors) | These operations run separately in Azure. In the designer, you can find these operations under the **Custom** label. For limits number of connectors, throughput, and timeout, review [Custom connector limits in Azure Logic Apps](logic-apps-limits-and-config.md#custom-connector-limits). | These operation executions follow the [*Standard* connector pricing](https://azure.microsoft.com/pricing/details/logic-apps/). |
+| [*Custom connector*](../connectors/apis-list.md#custom-connectors-and-apis) | These operations run separately in Azure. In the designer, you can find these operations under the **Custom** label. For limits on the number of connectors, throughput, and timeout, review [Custom connector limits in Azure Logic Apps](logic-apps-limits-and-config.md#custom-connector-limits). | These operation executions follow the [*Standard* connector pricing](https://azure.microsoft.com/pricing/details/logic-apps/). |
||||

For more information about how the Consumption model works with operations that run inside other operations such as loops, process multiple items such as arrays, and retry policies, review [Other operation behavior](#other-operation-behavior).
The following table summarizes how the Standard model handles metering and billi
| Component | Metering and billing |
|--|-|
| Virtual CPU (vCPU) and memory | The Standard model *requires* that your logic app uses the **Workflow Standard** hosting plan and a pricing tier, which determines the resource levels and pricing rates that apply to compute and memory capacity. For more information, review [Pricing tiers in the Standard model](#standard-pricing-tiers). |
-| Trigger and action operations | The Standard model includes an *unlimited number* of free built-in operations that your workflow can run. <p>If your workflow uses any managed connector operations, metering for those operations applies to *each call*, while billing follows the [same *Standard* or *Enterprise* connector pricing as the Consumption plan](https://azure.microsoft.com/pricing/details/logic-apps). For more information, review [Trigger and action operations in the Standard model](#standard-operations). |
+| Trigger and action operations | The Standard model includes an *unlimited number* of free built-in operations that your workflow can run. <p>If your workflow uses any managed connector operations, metering applies to *each call*, while billing follows the [same *Standard* or *Enterprise* connector pricing as the Consumption plan](https://azure.microsoft.com/pricing/details/logic-apps). For more information, review [Trigger and action operations in the Standard model](#standard-operations). |
| Storage operations | Metering applies to any storage operations run by Azure Logic Apps. For example, storage operations run when the service saves inputs and outputs from your workflow's run history. Billing follows your chosen [pricing tier](#standard-pricing-tiers). For more information, review [Storage operations](#storage-operations). |
| Integration accounts | If you create an integration account for your logic app to use, metering is based on the integration account type that you create. Billing follows the [*Integration Account* pricing](https://azure.microsoft.com/pricing/details/logic-apps/). For more information, review [Integration accounts](#integration-accounts). |
|||
The following table summarizes how the Standard model handles metering and billi
|-|-|-|
| [*Built-in*](../connectors/built-in.md) | These operations run directly and natively with the Azure Logic Apps runtime. In the designer, you can find these operations under the **Built-in** label. <p>For example, the HTTP trigger and Request trigger are built-in triggers. The HTTP action and Response action are built-in actions. Other built-in operations include workflow control actions such as loops and conditions, data operations, batch operations, and others. | The Standard model includes unlimited free built-in operations. <p><p>**Note**: Some managed connector operations are *also* available as built-in operations. While built-in operations are free, the Standard model still meters and bills managed connector operations using the [same *Standard* or *Enterprise* connector pricing as the Consumption model](https://azure.microsoft.com/pricing/details/logic-apps/). |
| [*Managed connector*](../connectors/managed.md) | These operations run separately in Azure. In the designer, you can find these operations under the combined **Azure** label. | The Standard model meters and bills managed connector operations based on the [same *Standard* and *Enterprise* connector pricing as the Consumption model](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Note**: Preview Enterprise connector operations follow the [Consumption *Standard* connector pricing](https://azure.microsoft.com/pricing/details/logic-apps/). |
-| [*Custom connector*](../connectors/apis-list.md#custom-apis-and-connectors) | Currently, you can create and use only [custom built-in connector operations](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272) in single-tenant based logic app workflows. | The Standard model includes unlimited free built-in operations. For limits on throughput and timeout, review [Custom connector limits in Azure Logic Apps](logic-apps-limits-and-config.md#custom-connector-limits). |
+| [*Custom connector*](../connectors/apis-list.md#custom-connectors-and-apis) | Currently, you can create and use only [custom built-in connector operations](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272) in single-tenant based logic app workflows. | The Standard model includes unlimited free built-in operations. For limits on throughput and timeout, review [Custom connector limits in Azure Logic Apps](logic-apps-limits-and-config.md#custom-connector-limits). |
||||

For more information about how the Standard model works with operations that run inside other operations such as loops, process multiple items such as arrays, and retry policies, review [Other operation behavior](#other-operation-behavior).
The following table summarizes how the ISE model handles the following operation
|-|-|-|
| [*Built-in*](../connectors/built-in.md) | These operations run directly and natively with the Azure Logic Apps runtime and in the same ISE as your logic app workflow. In the designer, you can find these operations under the **Built-in** label, but each operation also displays the **CORE** label. <p>For example, the HTTP trigger and Request trigger are built-in triggers. The HTTP action and Response action are built-in actions. Other built-in operations include workflow control actions such as loops and conditions, data operations, batch operations, and others. | The ISE model includes these operations *for free*, but are subject to the [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). |
| [*Managed connector*](../connectors/managed.md) | Whether *Standard* or *Enterprise*, managed connector operations run in either your ISE or multi-tenant Azure, based on whether the connector or operation displays the **ISE** label. <p><p>- **ISE** label: These operations run in the same ISE as your logic app and work without requiring the [on-premises data gateway](#data-gateway). <p><p>- No **ISE** label: These operations run in multi-tenant Azure. | The ISE model includes both **ISE** and no **ISE** labeled operations *for free*, but are subject to the [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). |
-| [*Custom connector*](../connectors/apis-list.md#custom-apis-and-connectors) | In the designer, you can find these operations under the **Custom** label. | The ISE model includes these operations *for free*, but are subject to [custom connector limits in Azure Logic Apps](logic-apps-limits-and-config.md#custom-connector-limits). |
+| [*Custom connector*](../connectors/apis-list.md#custom-connectors-and-apis) | In the designer, you can find these operations under the **Custom** label. | The ISE model includes these operations *for free*, but are subject to [custom connector limits in Azure Logic Apps](logic-apps-limits-and-config.md#custom-connector-limits). |
||||

For more information about how the ISE model works with operations that run inside other operations such as loops, process multiple items such as arrays, and retry policies, review [Other operation behavior](#other-operation-behavior).
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
For more information about security in Azure, review these topics:
## Access to logic app operations
-On Consumption logic apps only, you can set up permissions so that only specific users or groups can run specific tasks, such as managing, editing, and viewing logic apps. To control their permissions, use [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). You can assign built-in or customized roles to members who have access to your Azure subscription. Azure Logic Apps has the following specific roles:
+For Consumption logic apps only, before you can create or manage logic apps and their connections, you need specific permissions, which are provided through roles using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). You can also set up permissions so that only specific users or groups can run specific tasks, such as managing, editing, and viewing logic apps. To control their permissions, you can assign built-in or customized roles to members who have access to your Azure subscription. Azure Logic Apps has the following specific roles:
* [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor): Lets you manage logic apps, but you can't change access to them.
logic-apps Manage Logic Apps With Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/manage-logic-apps-with-azure-portal.md
ms.suite: integration
Previously updated : 01/28/2022 Last updated : 04/01/2022

# Manage logic apps in the Azure portal
-You can manage logic apps using the [Azure portal](https://portal.azure.com) or [Visual Studio](manage-logic-apps-with-visual-studio.md). This article shows how to edit, disable, enable, or delete logic apps in the Azure portal. If you're new to Azure Logic Apps, see [What is Azure Logic Apps](logic-apps-overview.md)?
+This article shows how to edit, disable, enable, or delete Consumption logic apps with the Azure portal. You can also [manage Consumption logic apps in Visual Studio](manage-logic-apps-with-visual-studio.md).
+
+To manage Standard logic apps, review [Create a Standard workflow with single-tenant Azure Logic Apps in the Azure portal](create-single-tenant-workflows-azure-portal.md). If you're new to Azure Logic Apps, review [What is Azure Logic Apps](logic-apps-overview.md)?
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An existing logic app. To learn how to create a logic app in the Azure portal, see [Quickstart: Create your first workflow by using Azure Logic Apps - Azure portal](quickstart-create-first-logic-app-workflow.md).
+* An existing logic app. To learn how to create a logic app in the Azure portal, review [Quickstart: Create your first workflow by using Azure Logic Apps - Azure portal](quickstart-create-first-logic-app-workflow.md).
<a name="find-logic-app"></a>
You can manage logic apps using the [Azure portal](https://portal.azure.com) or
1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
-1. In the portal search box, enter `logic apps`, and select **Logic apps**.
+1. In the portal search box, enter **logic apps**, and select **Logic apps**.
1. From the logic apps list, find your logic app by either browsing or filtering the list.
You can manage logic apps using the [Azure portal](https://portal.azure.com) or
* **Access endpoint IP addresses**
* **Connector outgoing IP addresses**
+<a name="view-connections"></a>
+
+## View connections
+
+When you create connections within a workflow using [managed connectors](../connectors/managed.md), these connections are actually separate Azure resources with their own resource definitions. To view and manage these connections, follow these steps:
+
+1. In the Azure portal, [find and open your logic app](#find-logic-app).
+
+1. From your logic app's menu, under **Development tools**, select **API Connections**.
+
+1. On the **API Connections** pane, select a specific connection instance, which shows more information about that connection. To view the underlying connection resource definition, select **JSON View**.
+<a name="disable-enable-logic-apps"></a>
+
+## Disable or enable logic apps
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/single-tenant-overview-compare.md
This table specifies the child workflow's behavior based on whether the parent a
The single-tenant model and **Logic App (Standard)** resource type include many current and new capabilities, for example:
-* Create logic apps and their workflows from [400+ managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) apps and services plus connectors for on-premises systems.
+* Create logic apps and their workflows from [hundreds of managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) apps and services plus connectors for on-premises systems.
* More managed connectors are now available as built-in operations and run similarly to other built-in operations, such as Azure Functions. Built-in operations run natively on the single-tenant Azure Logic Apps runtime. For example, new built-in operations include Azure Service Bus, Azure Event Hubs, SQL Server, MQ, DB2, and IBM Host File.
The single-tenant model and **Logic App (Standard)** resource type include many
> For the built-in SQL Server version, only the **Execute Query** action can directly connect to Azure
> virtual networks without using the [on-premises data gateway](logic-apps-gateway-connection.md).
- * You can create your own built-in connectors for any service that you need by using the [single-tenant Azure Logic Apps extensibility framework](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). Similarly to built-in operations such as Azure Service Bus and SQL Server but unlike [custom managed connectors](../connectors/apis-list.md#custom-apis-and-connectors), which aren't currently supported, custom built-in connectors provide higher throughput, low latency, and local connectivity because they run in the same process as the single-tenant runtime.
+ * You can create your own built-in connectors for any service that you need by using the [single-tenant Azure Logic Apps extensibility framework](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). Similarly to built-in operations such as Azure Service Bus and SQL Server but unlike [custom managed connectors](../connectors/apis-list.md#custom-connectors-and-apis), which aren't currently supported, custom built-in connectors provide higher throughput, low latency, and local connectivity because they run in the same process as the single-tenant runtime.
The authoring capability is currently available only in Visual Studio Code, but isn't enabled by default. To create these connectors, [switch your project from extension bundle-based (Node.js) to NuGet package-based (.NET)](create-single-tenant-workflows-visual-studio-code.md#enable-built-in-connector-authoring). For more information, see [Azure Logic Apps Running Anywhere - Built-in connector extensibility](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272).
For the **Logic App (Standard)** resource, these capabilities have changed, or t
* The Gmail connector currently isn't supported.
- * [Custom managed connectors](../connectors/apis-list.md#custom-apis-and-connectors) currently aren't currently supported. However, you can create *custom built-in operations* when you use Visual Studio Code. For more information, review [Create single-tenant based workflows using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#enable-built-in-connector-authoring).
+ * [Custom managed connectors](../connectors/apis-list.md#custom-connectors-and-apis) aren't currently supported. However, you can create *custom built-in operations* when you use Visual Studio Code. For more information, review [Create single-tenant based workflows using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#enable-built-in-connector-authoring).
* **Authentication**: The following authentication types are currently unavailable for the **Logic App (Standard)** resource type:
mysql App Development Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/app-development-best-practices.md
- Title: App development best practices - Azure Database for MySQL
- Description: Learn about best practices for building an app by using Azure Database for MySQL.
- Previously updated: 08/11/2020
-# Best practices for building an application with Azure Database for MySQL
--
-Here are some best practices to help you build a cloud-ready application by using Azure Database for MySQL. These best practices can reduce development time for your app.
-
-## Configuration of application and database resources
-
-### Keep the application and database in the same region
-
-Make sure all your dependencies are in the same region when deploying your application in Azure. Spreading instances across regions or availability zones creates network latency, which might affect the overall performance of your application.
-
-### Keep your MySQL server secure
-
-Configure your MySQL server to be [secure](./concepts-security.md) and not accessible publicly. Use one of these options to secure your server:
-
-- [Firewall rules](./concepts-firewall-rules.md)
-- [Virtual networks](./concepts-data-access-and-security-vnet.md)
-- [Azure Private Link](./concepts-data-access-security-private-link.md)
-
-For security, you must always connect to your MySQL server over SSL and configure your MySQL server and your application to use TLS 1.2. See [How to configure SSL/TLS](./concepts-ssl-connection-security.md).
-
-### Use advanced networking with AKS
-
-When accelerated networking is enabled on a VM, there is lower latency, reduced jitter, and decreased CPU utilization on the VM. To learn more, see [Best practices for Azure Kubernetes Service and Azure Database for MySQL](concepts-aks.md).
-
-### Tune your server parameters
-
-For read-heavy workloads, tuning the server parameters `tmp_table_size` and `max_heap_table_size` can help optimize performance. To calculate the values required for these variables, look at the total per-connection memory values and the base memory. The sum of the per-connection memory parameters, excluding `tmp_table_size`, combined with the base memory, accounts for the total memory of the server.
-
-To calculate the largest possible size of `tmp_table_size` and `max_heap_table_size`, use the following formula:
-
-`(total memory - (base memory + (sum of per-connection memory * # of connections))) / # of connections`
-
-> [!NOTE]
-> Total memory indicates the total amount of memory that the server has across the provisioned vCores. For example, in a General Purpose two-vCore Azure Database for MySQL server, the total memory will be 5 GB * 2. You can find more details about memory for each tier in the [pricing tier](./concepts-pricing-tiers.md) documentation.
->
-> Base memory indicates the memory variables, like `query_cache_size` and `innodb_buffer_pool_size`, that MySQL will initialize and allocate at server start. Per-connection memory, like `sort_buffer_size` and `join_buffer_size`, is memory that's allocated only when a query needs it.
-
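-As a rough sketch (the 64-MB values below are illustrative, not a recommendation), you can inspect the current values and then raise both parameters for your own session while testing:
-
-```sql
--- Inspect the current values (in bytes).
-SHOW VARIABLES LIKE 'tmp_table_size';
-SHOW VARIABLES LIKE 'max_heap_table_size';
-
--- Raise both for the current session only. MySQL uses the lower of the
--- two values as the effective in-memory temporary table limit.
-SET SESSION tmp_table_size = 64 * 1024 * 1024;
-SET SESSION max_heap_table_size = 64 * 1024 * 1024;
-```
-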
-### Create non-admin users
-
-[Create non-admin users](./howto-create-users.md) for each database. Typically, the user names are identified as the database names.
-
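-A minimal sketch of what that looks like (the user name, database name, and grants below are placeholders, not prescribed values):
-
-```sql
--- Create a non-admin user scoped to a single database.
-CREATE USER 'db1_user'@'%' IDENTIFIED BY '<secure-password>';
-GRANT SELECT, INSERT, UPDATE, DELETE ON db1.* TO 'db1_user'@'%';
-FLUSH PRIVILEGES;
-```
-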
-### Reset your password
-
-You can [reset your password](./howto-create-manage-server-portal.md#update-admin-password) for your MySQL server by using the Azure portal.
-
-Resetting your server password for a production database can bring down your application. It's a good practice to reset the password for any production workloads at off-peak hours to minimize the impact on your application's users.
-
-## Performance and resiliency
-
-Here are a few tools and practices that you can use to help debug performance issues with your application.
-
-### Enable slow query logs to identify performance issues
-
-You can enable [slow query logs](./concepts-server-logs.md) and [audit logs](./concepts-audit-logs.md) on your server. Analyzing slow query logs can help identify performance bottlenecks for troubleshooting.
-
-Audit logs are also available through Azure Diagnostics logs in Azure Monitor logs, Azure Event Hubs, and storage accounts. See [How to troubleshoot query performance issues](./howto-troubleshoot-query-performance.md).
-
-### Use connection pooling
-
-Managing database connections can have a significant impact on the performance of the application as a whole. To optimize performance, you must reduce the number of times that connections are established and the time for establishing connections in key code paths. Use [connection pooling](./concepts-connectivity.md#access-databases-by-using-connection-pooling-recommended) to connect to Azure Database for MySQL to improve resiliency and performance.
-
-You can use the [ProxySQL](https://proxysql.com/) connection pooler to efficiently manage connections. Using a connection pooler can decrease idle connections and reuse existing connections, which helps you avoid problems such as hitting connection limits. See [How to set up ProxySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/connecting-efficiently-to-azure-database-for-mysql-with-proxysql/ba-p/1279842) to learn more.
-
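-A quick, hedged way to gauge whether pooling is working: compare the total number of connection attempts with the server's uptime. A `Connections` counter that grows quickly relative to `Uptime` suggests connections aren't being reused:
-
-```sql
--- Total connection attempts since startup, currently open sessions,
--- and uptime in seconds.
-SHOW GLOBAL STATUS LIKE 'Connections';
-SHOW GLOBAL STATUS LIKE 'Threads_connected';
-SHOW GLOBAL STATUS LIKE 'Uptime';
-```
-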
-### Retry logic to handle transient errors
-
-Your application might experience [transient errors](./concepts-connectivity.md#handling-transient-errors) where connections to the database are dropped or lost intermittently. In such situations, the server is up and running after one to two retries in 5 to 10 seconds.
-
-A good practice is to wait for 5 seconds before your first retry. Then follow each retry by increasing the wait gradually, up to 60 seconds. Limit the maximum number of retries at which point your application considers the operation failed, so you can then further investigate. See [How to troubleshoot connection errors](./howto-troubleshoot-common-connection-issues.md) to learn more.
-
-### Enable read replication to mitigate failovers
-
-You can use [Data-in Replication](./howto-data-in-replication.md) for failover scenarios. When you're using read replicas, no automated failover between source and replica servers occurs.
-
-You'll notice a lag between the source and the replica because the replication is asynchronous. The lag can be influenced by many factors, like the size of the workload running on the source server and the latency between datacenters. In most cases, replica lag ranges from a few seconds to a couple of minutes.
-
-## Database deployment
-
-### Configure an Azure database for MySQL task in your CI/CD deployment pipeline
-
-Occasionally, you need to deploy changes to your database. In such cases, you can use continuous integration (CI) and continuous delivery (CD) through [Azure Pipelines](https://azure.microsoft.com/services/devops/pipelines/) and use a task for [your MySQL server](/azure/devops/pipelines/tasks/deploy/azure-mysql-deployment) to update the database by running a custom script against it.
-
-### Use an effective process for manual database deployment
-
-During manual database deployment, follow these steps to minimize downtime or reduce the risk of failed deployment:
-
-1. Create a copy of a production database on a new database by using [mysqldump](https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html) or [MySQL Workbench](https://dev.mysql.com/doc/workbench/en/wb-admin-export-import-management.html).
-2. Update the new database with your new schema changes or updates needed for your database.
-3. Put the production database in a read-only state. You should not have write operations on the production database until deployment is completed.
-4. Test your application with the newly updated database from step 1.
-5. Deploy your application changes and make sure the application is now using the new database that has the latest updates.
-6. Keep the old production database so that you can roll back the changes. You can then decide either to delete the old production database or to export it to Azure Storage if needed.
-
-> [!NOTE]
-> If the application is like an e-commerce app and you can't put it in read-only state, deploy the changes directly on the production database after making a backup. These changes should occur during off-peak hours with low traffic to the app to minimize the impact, because some users might experience failed requests.
->
-> Make sure your application code also handles any failed requests.
-
-### Use MySQL native metrics to see if your workload is exceeding in-memory temporary table sizes
-
-With a read-heavy workload, queries running against your MySQL server might exceed the in-memory temporary table sizes. A read-heavy workload can cause your server to switch to writing temporary tables to disk, which affects the performance of your application. To determine if your server is writing to disk as a result of exceeding temporary table size, look at the following metrics:
-
-```sql
-show global status like 'created_tmp_disk_tables';
-show global status like 'created_tmp_tables';
-```
-
-The `created_tmp_disk_tables` metric indicates how many tables were created on disk. The `created_tmp_tables` metric tells you how many temporary tables were formed in memory, given your workload. To determine if running a specific query will use temporary tables, run the [EXPLAIN](https://dev.mysql.com/doc/refman/8.0/en/explain.html) statement on the query. The detail in the `extra` column indicates `Using temporary` if the query will run using temporary tables.
-
-To calculate the percentage of your workload with queries spilling to disks, use your metric values in the following formula:
-
-`(created_tmp_disk_tables / (created_tmp_disk_tables + created_tmp_tables)) * 100`
-
-Ideally, this percentage should be less than 25%. If you see that the percentage is 25% or greater, we suggest modifying two server parameters, `tmp_table_size` and `max_heap_table_size`.
-
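-As a sketch, the same percentage can be computed server-side on MySQL 5.7 and later, where these counters are exposed in `performance_schema.global_status`:
-
-```sql
-SELECT ROUND(
-         100 * SUM(CASE WHEN variable_name = 'Created_tmp_disk_tables'
-                        THEN variable_value ELSE 0 END)
-             / SUM(variable_value), 2) AS pct_tmp_tables_on_disk
-FROM performance_schema.global_status
-WHERE variable_name IN ('Created_tmp_disk_tables', 'Created_tmp_tables');
-```
-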
-## Database schema and queries
-
-Here are a few tips to keep in mind when you build your database schema and your queries.
-
-### Use the right datatype for your table columns
-
-Using the right datatype based on the type of data you want to store can optimize storage and reduce errors that can occur because of incorrect datatypes.
-
-### Use indexes
-
-To avoid slow queries, use indexes. Indexes help MySQL find rows with specific column values quickly. See [How to use indexes in MySQL](https://dev.mysql.com/doc/refman/8.0/en/mysql-indexes.html).
-
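-For example, a frequently filtered column can be indexed as follows (the table and column names are hypothetical):
-
-```sql
-CREATE INDEX idx_orders_customer_id ON orders (customer_id);
-
--- Verify which indexes exist on the table.
-SHOW INDEX FROM orders;
-```
-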
-### Use EXPLAIN for your SELECT queries
-
-Use the `EXPLAIN` statement to get insights on what MySQL is doing to run your query. It can help you detect bottlenecks or issues with your query. See [How to use EXPLAIN to profile query performance](./howto-troubleshoot-query-performance.md).
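-
-A minimal sketch against the hypothetical table from the previous section:
-
-```sql
-EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
--- Check the `key` column for the index chosen and `rows` for the estimated
--- rows examined; `Using temporary` in the `Extra` column flags a query that
--- needs a temporary table.
-```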
mysql Concept Monitoring Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concept-monitoring-best-practices.md
- Title: Monitoring best practices - Azure Database for MySQL
- Description: This article describes the best practices to monitor your Azure Database for MySQL.
- Previously updated: 11/23/2020
-# Best practices for monitoring Azure Database for MySQL - Single server
--
-Learn about the best practices that can be used to monitor your database operations and ensure that the performance is not compromised as data size grows. As we add new capabilities to the platform, we will continue to refine the best practices detailed in this section.
-
-## Layout of the current monitoring toolkit
-
-Azure Database for MySQL provides tools and methods that you can use to easily monitor usage, add or remove resources (such as CPU, memory, or I/O), troubleshoot potential problems, and help improve the performance of a database. You can [monitor performance metrics](concepts-monitoring.md#metrics) on a regular basis to see the average, maximum, and minimum values for a variety of time ranges.
-
-You can [set up alerts](howto-alert-on-metric.md#create-an-alert-rule-on-a-metric-from-the-azure-portal) for a metric threshold, so that you're informed when the server reaches those limits and can take appropriate action.
-
-Monitor the database server to make sure that the resources assigned to the database can handle the application workload. If the database is hitting resource limits, consider:
-
-* Identifying and optimizing the top resource-consuming queries.
-* Adding more resources by upgrading the service tier.
-
-### CPU utilization
-
-Monitor CPU usage and whether the database is exhausting CPU resources. If CPU usage is 90% or more, scale up your compute by increasing the number of vCores, or scale to the next pricing tier. Make sure that the throughput or concurrency is as expected as you scale the CPU up or down.
-
-### Memory
-
-The amount of memory available for the database server is proportional to the [number of vCores](concepts-pricing-tiers.md). Make sure the memory is enough for the workload. Load test your application to verify that the memory is sufficient for read and write operations. If the database's memory consumption frequently grows beyond a defined threshold, upgrade your instance by increasing the vCores or moving to a higher performance tier. Use [Query Store](concepts-query-store.md) and [Query Performance Recommendations](concepts-performance-recommendations.md) to identify the longest-running and most frequently executed queries, and explore opportunities to optimize them.
-
-### Storage
-
-The [amount of storage](howto-create-manage-server-portal.md#scale-compute-and-storage) provisioned for the MySQL server determines the IOPS for your server. The storage used by the service includes the database files, transaction logs, the server logs, and backup snapshots. Ensure that the consumed disk space doesn't consistently exceed 85 percent of the total provisioned disk space. If it does, delete or archive data from the database server to free up space.
-
-### Network traffic
-
-**Network Receive Throughput, Network Transmit Throughput**: The rate of network traffic to and from the MySQL instance, in megabytes per second. Evaluate the throughput requirements for your server, and monitor the traffic continually if throughput is lower than expected.
-
-### Database connections
-
-**Database Connections**: The number of client sessions connected to the Azure Database for MySQL instance should be aligned with the [connection limits for the selected SKU](concepts-server-parameters.md#max_connections) size.
-
-## Next steps
-
-* [Best practice for performance of Azure Database for MySQL](concept-performance-best-practices.md)
-* [Best practice for server operations using Azure Database for MySQL](concept-operation-excellence-best-practices.md)
mysql Concept Operation Excellence Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concept-operation-excellence-best-practices.md
- Title: MySQL server operational best practices - Azure Database for MySQL
- Description: This article describes the best practices to operate your MySQL database on Azure.
- Previously updated: 11/23/2020
-# Best practices for server operations on Azure Database for MySQL -Single server
--
-Learn about the best practices for working with Azure Database for MySQL. As we add new capabilities to the platform, we will continue to focus on refining the best practices detailed in this section.
-
-## Azure Database for MySQL Operational Guidelines
-
-The following are operational guidelines that should be followed when working with your Azure Database for MySQL to improve the performance of your database:
-
-* **Co-location**: To reduce network latency, place the client and the database server in the same Azure region.
-
-* **Monitor your memory, CPU, and storage usage**: You can [set up alerts](howto-alert-on-metric.md) to notify you when usage patterns change or when you approach the capacity of your deployment, so that you can maintain system performance and availability.
-
-* **Scale up your DB instance**: [Scale up](howto-create-manage-server-portal.md) when you're approaching storage capacity limits. Keep some buffer in storage and memory to accommodate unforeseen increases in demand from your applications. You can also enable the [storage autogrow](howto-auto-grow-storage-portal.md) feature to ensure that the service automatically scales the storage as it nears the storage limits.
-
-* **Configure backups**: Enable [local or geo-redundant backups](howto-restore-server-portal.md#set-backup-configuration) based on the requirements of the business. You can also modify the retention period to control how long backups are available for business continuity.
-
-* **Increase I/O capacity**: If your database workload requires more I/O than you have provisioned, recovery or other transactional operations for your database will be slow. To increase the I/O capacity of a server instance, do any or all of the following:
-
- * Azure Database for MySQL provides IOPS scaling at a rate of three IOPS per GB of storage provisioned. [Increase the provisioned storage](howto-create-manage-server-portal.md#scale-storage-up) to scale the IOPS for better performance.
-
- * If you are already using Provisioned IOPS storage, provision [additional throughput capacity](howto-create-manage-server-portal.md#scale-storage-up).
-
-* **Scale compute**: A database workload can also be limited by CPU or memory, and this can have a serious impact on transaction processing. Note that compute (pricing tier) can be scaled up or down between the [General Purpose and Memory Optimized](concepts-pricing-tiers.md) tiers only.
-
-* **Test for failover**: Manually test failover for your server instance to understand how long the process takes for your use case and to ensure that the application that accesses your server instance can automatically connect to the new server instance after failover.
-
-* **Use a primary key**: Make sure your tables have a primary or unique key as you operate on Azure Database for MySQL. This helps considerably with operations such as taking backups and creating replicas, and it improves performance.
-
-* **Configure TTL value**: If your client application is caching the Domain Name Service (DNS) data of your server instances, set a time-to-live (TTL) value of less than 30 seconds. Because the underlying IP address of a server instance can change after a failover, caching the DNS data for an extended time can lead to connection failures if your application tries to connect to an IP address that no longer is in service.
-
-* Use connection pooling to avoid hitting the [maximum connection limits](concepts-server-parameters.md#max_connections), and use retry logic to avoid intermittent connection issues.
-
-* If you're using a replica, use [ProxySQL to balance the load](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/scaling-an-azure-database-for-mysql-workload-running-on/ba-p/1105847) between the primary server and the readable secondary replica server.
-
-* When provisioning the resource, make sure you [enable autogrow](howto-auto-grow-storage-portal.md) for your Azure Database for MySQL. This doesn't add any additional cost, and it protects the database from storage bottlenecks that you might run into.
--
-### Using InnoDB with Azure Database for MySQL
-
-* If you use the system tablespace (`ibdata1`), be aware that this data file can't shrink and isn't purged by dropping data from a table or by moving a table to file-per-table tablespaces.
-
-* For a database greater than 1 TB in size, you should create tables in **innodb_file_per_table** tablespaces. For a single table that is larger than 1 TB in size, you should [partition](https://dev.mysql.com/doc/refman/5.7/en/partitioning.html) the table.
-
-* For a server that has a large number of tablespaces, engine startup is very slow because of the sequential tablespace scan during MySQL startup or failover.
-
-* Set `innodb_file_per_table = ON` before you create a table if the total number of tables is less than 500.
-
-* If you have more than 500 tables in a database, review the size of each individual table. For a large table, you should still consider using a file-per-table tablespace to prevent the system tablespace file from hitting the maximum storage limit.
-
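-A sketch for that review, listing the largest InnoDB tables from `information_schema` (the reported sizes are estimates):
-
-```sql
-SELECT table_schema, table_name,
-       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS approx_size_gb
-FROM information_schema.tables
-WHERE engine = 'InnoDB'
-ORDER BY (data_length + index_length) DESC
-LIMIT 20;
-```
-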
-> [!NOTE]
-> For tables smaller than 5 GB, consider using the system tablespace:
-> ```sql
-> CREATE TABLE tbl_name ... TABLESPACE = innodb_system;
-> ```
-
-* [Partition](https://dev.mysql.com/doc/refman/5.7/en/partitioning.html) your table at creation time if you have a very large table that might grow beyond 1 TB (see the sketch after this list).
-
-* Use multiple MySQL servers and spread the tables across those servers. Avoid putting too many tables on a single server if you have around 10,000 tables or more.
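-
-As a sketch of range partitioning at creation time (the schema and partition boundaries are hypothetical; note that the partitioning column must be part of every unique key, including the primary key):
-
-```sql
-CREATE TABLE events (
-    id         BIGINT NOT NULL,
-    created_at DATE   NOT NULL,
-    payload    JSON,
-    PRIMARY KEY (id, created_at)  -- partitioning column must be in the key
-)
-PARTITION BY RANGE (YEAR(created_at)) (
-    PARTITION p2020 VALUES LESS THAN (2021),
-    PARTITION p2021 VALUES LESS THAN (2022),
-    PARTITION pmax  VALUES LESS THAN MAXVALUE
-);
-```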
-
-## Next steps
-- [Best practice for performance of Azure Database for MySQL](concept-performance-best-practices.md)
-- [Best practice for monitoring your Azure Database for MySQL](concept-monitoring-best-practices.md)
mysql Concept Performance Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concept-performance-best-practices.md
- Title: Performance best practices - Azure Database for MySQL
- Description: This article describes some recommendations to monitor and tune performance for your Azure Database for MySQL.
- Previously updated: 1/28/2021
-# Best practices for optimal performance of your Azure Database for MySQL - Single server
--
-Learn how to get best performance while working with your Azure Database for MySQL - Single server. As we add new capabilities to the platform, we will continue refining our recommendations in this section.
-
-## Physical Proximity
-
- Make sure you deploy the application and the database in the same region. A quick check before starting any performance benchmarking run is to determine the network latency between the client and the database by using a simple `SELECT 1` query.
-
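-For example, from the application host, time the trivial round trip below in the mysql client; single-digit milliseconds is what you'd expect when the app and database share a region:
-
-```sql
--- The client's reported elapsed time approximates the network round trip.
-SELECT 1;
-```
-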
-## Accelerated Networking
-
-Use accelerated networking for the application server if you are using Azure virtual machine, Azure Kubernetes, or App Services. Accelerated Networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the datapath, reducing latency, jitter, and CPU utilization, for use with the most demanding network workloads on supported VM types.
-
-## Connection Efficiency
-
-Establishing a new connection is always an expensive and time-consuming task. When an application requests a database connection, it should prefer reusing an existing idle database connection rather than creating a new one. Here are some options for good connection practices:
-
-- **ProxySQL**: Use [ProxySQL](https://proxysql.com/), which provides built-in connection pooling and can [load balance your workload](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/load-balance-read-replicas-using-proxysql-in-azure-database-for/ba-p/880042) across multiple read replicas on demand, without any changes in application code.
-
-- **Heimdall Data Proxy**: Alternatively, you can use Heimdall Data Proxy, a vendor-neutral, proprietary proxy solution. It supports query caching and read/write splitting with replication lag detection. See [Accelerate MySQL Performance with the Heimdall proxy](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/accelerate-mysql-performance-with-the-heimdall-proxy/ba-p/1063349).
-
-- **Persistent or long-lived connections**: If your application has short transactions or queries with execution times typically under 5-10 ms, replace short connections with persistent connections. This requires only minor changes to the code, but it has a major effect on performance in many typical application scenarios. Make sure to set a timeout or close the connection when the transaction is complete.
-
-- **Replica**: If you're using a replica, use [ProxySQL](https://proxysql.com/) to balance the load between the primary server and the readable secondary replica server. Learn [how to set up ProxySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/scaling-an-azure-database-for-mysql-workload-running-on/ba-p/1105847).
-
-## Data Import configurations
-
-- You can temporarily scale your instance to a higher SKU size before starting a data import operation, and then scale it down when the import is successful.
-- You can import your data with minimal downtime by using [Azure Database Migration Service (DMS)](https://datamigration.microsoft.com/) for online or offline migrations.
-
-## Azure Database for MySQL Memory Recommendations
-
-An Azure Database for MySQL performance best practice is to allocate enough RAM so that your working set resides almost completely in memory.
-
-- Check whether the memory percentage being used is reaching the [limits](./concepts-pricing-tiers.md) by using the [metrics for the MySQL server](./concepts-monitoring.md).
-- Set up alerts on such numbers to ensure that, as the server reaches its limits, you can take prompt action to fix it. Based on the limits defined, check whether scaling up the database SKU (either to a higher compute size or to a better pricing tier) results in a dramatic increase in performance.
-- Scale up until your performance numbers no longer drop dramatically after a scaling operation. For information on monitoring a DB instance's metrics, see [MySQL DB Metrics](./concepts-monitoring.md#metrics).
-
-## Use InnoDB Buffer Pool Warmup
-
-After you restart an Azure Database for MySQL server, the data pages residing in storage are loaded as the tables are queried, which leads to increased latency and slower performance for the first execution of queries. This might not be acceptable for latency-sensitive workloads.
-
-Utilizing InnoDB buffer pool warmup shortens the warmup period by reloading disk pages that were in the buffer pool before the restart rather than waiting for DML or SELECT operations to access corresponding rows.
-
-You can reduce the warmup period after restarting your Azure Database for MySQL server, which represents a performance advantage, by configuring the [InnoDB buffer pool server parameters](https://dev.mysql.com/doc/refman/8.0/en/innodb-preload-buffer-pool.html). InnoDB saves a percentage of the most recently used pages for each buffer pool at server shutdown and restores these pages at server startup.
-
-It is also important to note that improved performance comes at the expense of longer start-up time for the server. When this parameter is enabled, server startup and restart time is expected to increase depending on the IOPS provisioned on the server.
-
-We recommend testing and monitoring the restart time to ensure that the start-up/restart performance is acceptable, because the server is unavailable during that time. We don't recommend using this parameter with fewer than 1,000 provisioned IOPS (in other words, when the storage provisioned is less than 335 GB).
-
-To save the state of the buffer pool at server shutdown, set server parameter `innodb_buffer_pool_dump_at_shutdown` to `ON`. Similarly, set server parameter `innodb_buffer_pool_load_at_startup` to `ON` to restore the buffer pool state at server startup. You can control the impact on start-up/restart time by lowering and fine-tuning the value of server parameter `innodb_buffer_pool_dump_pct`. By default, this parameter is set to `25`.
-
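-On Azure Database for MySQL, you change these values as server parameters through the portal or the CLI; as a sketch, you can verify the currently effective values from any client session:
-
-```sql
-SHOW VARIABLES WHERE variable_name IN
-    ('innodb_buffer_pool_dump_at_shutdown',
-     'innodb_buffer_pool_load_at_startup',
-     'innodb_buffer_pool_dump_pct');
-```
-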
-> [!Note]
-> InnoDB buffer pool warmup parameters are only supported in general purpose storage servers with up to 16-TB storage. Learn more about [Azure Database for MySQL storage options here](./concepts-pricing-tiers.md#storage).
-
-## Next steps
-
-- [Best practice for server operations using Azure Database for MySQL](concept-operation-excellence-best-practices.md)
-- [Best practice for monitoring your Azure Database for MySQL](concept-monitoring-best-practices.md)
-- [Get started with Azure Database for MySQL](quickstart-create-mysql-server-database-using-azure-portal.md)
-
mysql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concept-reserved-pricing.md
- Title: Prepay for compute with reserved capacity - Azure Database for MySQL
- Description: Prepay for Azure Database for MySQL compute resources with reserved capacity.
- Previously updated: 10/06/2021
-# Prepay for Azure Database for MySQL compute resources with reserved instances
--
-Azure Database for MySQL now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Database for MySQL reserved instances, you make an upfront commitment on a MySQL server for a one- or three-year period to get a significant discount on the compute costs. To purchase Azure Database for MySQL reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term.
-
-## How does the instance reservation work?
-You don't need to assign the reservation to specific Azure Database for MySQL servers. Already running Azure Database for MySQL servers, and newly deployed ones, automatically get the benefit of reserved pricing. By purchasing a reservation, you're prepaying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure Database for MySQL compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. A reservation doesn't cover software, networking, or storage charges associated with the MySQL database server. At the end of the reservation term, the billing benefit expires, and your Azure Database for MySQL servers are billed at the pay-as-you-go price. Reservations don't auto-renew. For pricing information, see the [Azure Database for MySQL reserved capacity offering](https://azure.microsoft.com/pricing/details/mysql/).
-
-You can buy Azure Database for MySQL reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](../cost-management-billing/reservations/prepare-buy-reservation.md). To buy the reserved capacity:
-
-* You must be in the owner role for at least one Enterprise or individual subscription with pay-as-you-go rates.
-* For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin on the subscription.
-* For the Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Azure Database for MySQL reserved capacity.
-
-For details on how enterprise customers and pay-as-you-go customers are charged for reservation purchases, see [Understand Azure reservation usage for your Enterprise enrollment](../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md) and [Understand Azure reservation usage for your Pay-As-You-Go subscription](../cost-management-billing/reservations/understand-reserved-instance-usage.md).
-
-## Reservation exchanges and refunds
-
-You can exchange a reservation for another reservation of the same type. You can also exchange a reservation from Azure Database for MySQL - Single Server with Flexible Server. It's also possible to refund a reservation if you no longer need it. You can use the Azure portal to exchange or refund a reservation. For more information, see [Self-service exchanges and refunds for Azure Reservations](../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md).
-
-## Reservation discount
-
-You may save up to 67% on compute costs with reserved instances. To find the discount for your case, visit the [Reservation blade on the Azure portal](https://aka.ms/reservations) and check the savings per pricing tier and per region. Reserved instances help you manage your workloads, budget, and forecast better with an upfront payment for a one-year or three-year term. You can also exchange or cancel reservations as business needs change.
--
-## Determine the right database size before purchase
-
-The size of the reservation should be based on the total amount of compute used by the existing or soon-to-be-deployed servers within a specific region, using the same performance tier and hardware generation.
-
-For example, suppose that you're running one general purpose, Gen5 32-vCore MySQL database and two memory-optimized, Gen5 16-vCore MySQL databases. Further, suppose that you plan to deploy an additional general purpose, Gen5 32-vCore database server and one memory-optimized, Gen5 16-vCore database server within the next month. Also suppose that you know you'll need these resources for at least one year. In this case, you should purchase a 64-vCore (2 x 32), one-year reservation for single database general purpose Gen5, and a 48-vCore (2 x 16 + 16), one-year reservation for single database memory optimized Gen5.
--
-## Buy Azure Database for MySQL reserved capacity
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Select **All services** > **Reservations**.
-3. Select **Add** and then in the Purchase reservations pane, select **Azure Database for MySQL** to purchase a new reservation for your MySQL databases.
-4. Fill in the required fields. Existing or new databases that match the attributes you select qualify to get the reserved capacity discount. The actual number of your Azure Database for MySQL servers that get the discount depends on the scope and quantity selected.
----
-The following table describes required fields.
-
-| Field | Description |
-| : | :- |
-| Subscription | The subscription used to pay for the Azure Database for MySQL reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for MySQL reserved capacity reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.
-| Scope | The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select: </br></br> **Shared**, the vCore reservation discount is applied to Azure Database for MySQL servers running in any subscriptions within your billing context. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For Pay-As-You-Go customers, the shared scope is all Pay-As-You-Go subscriptions created by the account administrator.</br></br> **Single subscription**, the vCore reservation discount is applied to Azure Database for MySQL servers in this subscription. </br></br> **Single resource group**, the reservation discount is applied to Azure Database for MySQL servers in the selected subscription and the selected resource group within that subscription.
-| Region | The Azure region that's covered by the Azure Database for MySQL reserved capacity reservation.
-| Deployment Type | The Azure Database for MySQL resource type that you want to buy the reservation for.
-| Performance Tier | The service tier for the Azure Database for MySQL servers.
-| Term | One year or three years
-| Quantity | The amount of compute resources being purchased within the Azure Database for MySQL reserved capacity reservation. The quantity is a number of vCores in the selected Azure region and Performance tier that are being reserved and will get the billing discount. For example, if you are running or planning to run an Azure Database for MySQL servers with the total compute capacity of Gen5 16 vCores in the East US region, then you would specify quantity as 16 to maximize the benefit for all servers.
-
-## Reserved instances API support
-
-Use Azure APIs to programmatically get information for your organization about Azure service or software reservations. For example, use the APIs to:
-
-- Find reservations to buy
-- Buy a reservation
-- View purchased reservations
-- View and manage reservation access
-- Split or merge reservations
-- Change the scope of reservations
-
-For more information, see [APIs for Azure reservation automation](../cost-management-billing/reservations/reservation-apis.md).
-
-## vCore size flexibility
-
-vCore size flexibility helps you scale up or down within a performance tier and region, without losing the reserved capacity benefit.
-
-## How to view reserved instance purchase details
-
-You can view your reserved instance purchase details via the [Reservations menu on the left side of the Azure portal](https://aka.ms/reservations). For more information, see [How a reservation discount is applied to Azure Database for MySQL](../cost-management-billing/reservations/understand-reservation-charges-mysql.md).
-
-## Reserved instance expiration
-
-You'll receive email notifications, the first one 30 days prior to reservation expiry and another one at expiration. After the reservation expires, deployed servers continue to run and are billed at the pay-as-you-go rate. For more information, see [Reserved Instances for Azure Database for MySQL](../cost-management-billing/reservations/understand-reservation-charges-mysql.md).
-
-## Need help? Contact us
-
-If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-
-## Next steps
-
-The vCore reservation discount is applied automatically to the number of Azure Database for MySQL servers that match the Azure Database for MySQL reserved capacity reservation scope and attributes. You can update the scope of the Azure Database for MySQL reserved capacity reservation through the Azure portal, PowerShell, the CLI, or the API.
-
-To learn how to manage the Azure Database for MySQL reserved capacity, see manage Azure Database for MySQL reserved capacity.
-
-To learn more about Azure Reservations, see the following articles:
-
-* [What are Azure Reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md)?
-* [Manage Azure Reservations](../cost-management-billing/reservations/manage-reserved-vm-instance.md)
-* [Understand Azure Reservations discount](../cost-management-billing/reservations/understand-reservation-charges.md)
-* [Understand reservation usage for your Pay-As-You-Go subscription](../cost-management-billing/reservations/understand-reservation-charges-mysql.md)
-* [Understand reservation usage for your Enterprise enrollment](../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
-* [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
mysql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-aks.md
- Title: Connect to Azure Kubernetes Service - Azure Database for MySQL
- Description: Learn about connecting Azure Kubernetes Service with Azure Database for MySQL.
- Previously updated: 07/14/2020
-# Best practices for Azure Kubernetes Service and Azure Database for MySQL
--
-Azure Kubernetes Service (AKS) provides a managed Kubernetes cluster you can use in Azure. Below are some options to consider when using AKS and Azure Database for MySQL together to create an application.
-
-## Create Database before creating the AKS cluster
-
-Azure Database for MySQL has two deployment options:
-
-- Single Server
-- Flexible Server
-
-Single Server supports a single availability zone, and Flexible Server supports multiple availability zones. AKS, on the other hand, also supports enabling single or multiple availability zones. Create the database server first to see which availability zone the server is in, and then create the AKS cluster in the same availability zone. This can improve performance for the application by reducing networking latency.
-
-## Use Accelerated networking
-
-Use accelerated networking-enabled underlying VMs in your AKS cluster. When accelerated networking is enabled on a VM, there is lower latency, reduced jitter, and decreased CPU utilization on the VM. Learn more about how accelerated networking works, the supported OS versions, and supported VM instances for [Linux](../virtual-network/create-vm-accelerated-networking-cli.md).
-
-From November 2018, AKS supports accelerated networking on those supported VM instances. Accelerated networking is enabled by default on new AKS clusters that use those VMs.
-
-You can confirm whether your AKS cluster has accelerated networking:
-
-1. Go to the Azure portal and select your AKS cluster.
-2. Select the Properties tab.
-3. Copy the name of the **Infrastructure Resource Group**.
-4. Use the portal search bar to locate and open the infrastructure resource group.
-5. Select a VM in that resource group.
-6. Go to the VM's **Networking** tab.
-7. Confirm whether **Accelerated networking** is 'Enabled.'
-
-Or through the Azure CLI using the following two commands:
-
-```azurecli
-az aks show --resource-group myResourceGroup --name myAKSCluster --query "nodeResourceGroup"
-```
-
-The output is the generated resource group that AKS creates, which contains the network interfaces. Take the "nodeResourceGroup" name and use it in the next command. **EnableAcceleratedNetworking** will be either true or false:
-
-```azurecli
-az network nic list --resource-group nodeResourceGroup -o table
-```
-
-## Use Azure premium fileshare
-
- Use an [Azure premium file share](../storage/files/storage-how-to-create-file-share.md?tabs=azure-portal) for persistent storage that can be used by one or many pods and can be dynamically or statically provisioned. Azure premium file shares give you the best performance for your application if you expect a large number of I/O operations on the file storage. To learn more, see [How to enable Azure Files](../aks/azure-files-dynamic-pv.md).
-
-## Next steps
-
-Create an AKS cluster [using the Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md).
mysql Concepts Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-audit-logs.md
- Title: Audit logs - Azure Database for MySQL
- Description: Describes the audit logs available in Azure Database for MySQL, and the available parameters for enabling logging levels.
- Previously updated: 6/24/2020
-# Audit Logs in Azure Database for MySQL
--
-In Azure Database for MySQL, the audit log is available to users. The audit log can be used to track database-level activity and is commonly used for compliance.
-
-## Configure audit logging
-
->[!IMPORTANT]
-> It is recommended to log only the event types and users required for your auditing purposes, to ensure that your server's performance isn't heavily impacted and that the minimum amount of data is collected.
-
-By default, the audit log is disabled. To enable it, set `audit_log_enabled` to ON.
-
-Other parameters you can adjust include:
-
-- `audit_log_events`: Controls the events to be logged. See the table below for specific audit events.
-- `audit_log_include_users`: MySQL users to be included for logging. The default value for this parameter is empty, which includes all users for logging. This takes priority over `audit_log_exclude_users`. The maximum length of the parameter is 512 characters.
-- `audit_log_exclude_users`: MySQL users to be excluded from logging. The maximum length of the parameter is 512 characters.
-> [!NOTE]
-> `audit_log_include_users` takes priority over `audit_log_exclude_users`. For example, if `audit_log_include_users` = `demouser` and `audit_log_exclude_users` = `demouser`, the user is included in the audit logs because `audit_log_include_users` takes priority.
-
-| **Event** | **Description** |
-|||
-| `CONNECTION` | - Connection initiation (successful or unsuccessful) <br> - User reauthentication with different user/password during session <br> - Connection termination |
-| `DML_SELECT`| SELECT queries |
-| `DML_NONSELECT` | INSERT/DELETE/UPDATE queries |
-| `DML` | DML = DML_SELECT + DML_NONSELECT |
-| `DDL` | Queries like "DROP DATABASE" |
-| `DCL` | Queries like "GRANT PERMISSION" |
-| `ADMIN` | Queries like "SHOW STATUS" |
-| `GENERAL` | All in DML_SELECT, DML_NONSELECT, DML, DDL, DCL, and ADMIN |
-| `TABLE_ACCESS` | - Available for MySQL 5.7 and MySQL 8.0 <br> - Table read statements, such as SELECT or INSERT INTO ... SELECT <br> - Table delete statements, such as DELETE or TRUNCATE TABLE <br> - Table insert statements, such as INSERT or REPLACE <br> - Table update statements, such as UPDATE |
-
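-On Azure Database for MySQL, these audit settings are server parameters that you change through the portal or the CLI; as a sketch, and assuming the server exposes them as variables to your session, you can verify the current values from a client:
-
-```sql
-SHOW VARIABLES LIKE 'audit_log%';
-```
-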
-## Access audit logs
-
-Audit logs are integrated with Azure Monitor Diagnostic Logs. Once you've enabled audit logs on your MySQL server, you can emit them to Azure Monitor logs, Event Hubs, or Azure Storage. To learn more about how to enable diagnostic logs in the Azure portal, see the [audit log portal article](howto-configure-audit-logs-portal.md#set-up-diagnostic-logs).
-
->[!Note]
->Premium Storage accounts are not supported if you're sending the logs to Azure Storage via diagnostic settings.
-
-## Diagnostic Logs Schemas
-
-The following sections describe what's output by MySQL audit logs based on the event type. Depending on the output method, the fields included and the order in which they appear may vary.
-
-### Connection
-
-| **Property** | **Description** |
-|||
-| `TenantId` | Your tenant ID |
-| `SourceSystem` | `Azure` |
-| `TimeGenerated [UTC]` | Time stamp when the log was recorded in UTC |
-| `Type` | Type of the log. Always `AzureDiagnostics` |
-| `SubscriptionId` | GUID for the subscription that the server belongs to |
-| `ResourceGroup` | Name of the resource group the server belongs to |
-| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` |
-| `ResourceType` | `Servers` |
-| `ResourceId` | Resource URI |
-| `Resource` | Name of the server |
-| `Category` | `MySqlAuditLogs` |
-| `OperationName` | `LogEvent` |
-| `LogicalServerName_s` | Name of the server |
-| `event_class_s` | `connection_log` |
-| `event_subclass_s` | `CONNECT`, `DISCONNECT`, `CHANGE USER` (only available for MySQL 5.7) |
-| `connection_id_d` | Unique connection ID generated by MySQL |
-| `host_s` | Blank |
-| `ip_s` | IP address of client connecting to MySQL |
-| `user_s` | Name of user executing the query |
-| `db_s` | Name of database connected to |
-| `\_ResourceId` | Resource URI |
-
-### General
-
-The schema below applies to the GENERAL, DML_SELECT, DML_NONSELECT, DML, DDL, DCL, and ADMIN event types.
-
-> [!NOTE]
-> For `sql_text`, the log is truncated if it exceeds 2048 characters.
-
-| **Property** | **Description** |
-|||
-| `TenantId` | Your tenant ID |
-| `SourceSystem` | `Azure` |
-| `TimeGenerated [UTC]` | Time stamp when the log was recorded in UTC |
-| `Type` | Type of the log. Always `AzureDiagnostics` |
-| `SubscriptionId` | GUID for the subscription that the server belongs to |
-| `ResourceGroup` | Name of the resource group the server belongs to |
-| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` |
-| `ResourceType` | `Servers` |
-| `ResourceId` | Resource URI |
-| `Resource` | Name of the server |
-| `Category` | `MySqlAuditLogs` |
-| `OperationName` | `LogEvent` |
-| `LogicalServerName_s` | Name of the server |
-| `event_class_s` | `general_log` |
-| `event_subclass_s` | `LOG`, `ERROR`, `RESULT` (only available for MySQL 5.6) |
-| `event_time` | Query start time in UTC timestamp |
-| `error_code_d` | Error code if query failed. `0` means no error |
-| `thread_id_d` | ID of thread that executed the query |
-| `host_s` | Blank |
-| `ip_s` | IP address of client connecting to MySQL |
-| `user_s` | Name of user executing the query |
-| `sql_text_s` | Full query text |
-| `\_ResourceId` | Resource URI |
-
-### Table access
-
-> [!NOTE]
-> Table access logs are only output for MySQL 5.7.<br>For `sql_text`, the log is truncated if it exceeds 2048 characters.
-
-| **Property** | **Description** |
-|||
-| `TenantId` | Your tenant ID |
-| `SourceSystem` | `Azure` |
-| `TimeGenerated [UTC]` | Time stamp when the log was recorded in UTC |
-| `Type` | Type of the log. Always `AzureDiagnostics` |
-| `SubscriptionId` | GUID for the subscription that the server belongs to |
-| `ResourceGroup` | Name of the resource group the server belongs to |
-| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` |
-| `ResourceType` | `Servers` |
-| `ResourceId` | Resource URI |
-| `Resource` | Name of the server |
-| `Category` | `MySqlAuditLogs` |
-| `OperationName` | `LogEvent` |
-| `LogicalServerName_s` | Name of the server |
-| `event_class_s` | `table_access_log` |
-| `event_subclass_s` | `READ`, `INSERT`, `UPDATE`, or `DELETE` |
-| `connection_id_d` | Unique connection ID generated by MySQL |
-| `db_s` | Name of database accessed |
-| `table_s` | Name of table accessed |
-| `sql_text_s` | Full query text |
-| `\_ResourceId` | Resource URI |
-
-## Analyze logs in Azure Monitor Logs
-
-Once your audit logs are piped to Azure Monitor Logs through Diagnostic Logs, you can perform further analysis of your audited events. Below are some sample queries to help you get started. Make sure to update the queries with your server name.
-
-- List GENERAL events on a particular server
-
- ```kusto
- AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
- | where Category == 'MySqlAuditLogs' and event_class_s == "general_log"
- | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
- | order by TimeGenerated asc nulls last
- ```
-- List CONNECTION events on a particular server
-
- ```kusto
- AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
- | where Category == 'MySqlAuditLogs' and event_class_s == "connection_log"
- | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
- | order by TimeGenerated asc nulls last
- ```
-- Summarize audited events on a particular server
-
- ```kusto
- AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
- | where Category == 'MySqlAuditLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
- | summarize count() by event_class_s, event_subclass_s, user_s, ip_s
- ```
-- Graph the audit event type distribution on a particular server
-
- ```kusto
- AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
- | where Category == 'MySqlAuditLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
- | summarize count() by LogicalServerName_s, bin(TimeGenerated, 5m)
- | render timechart
- ```
-- List audited events across all MySQL servers with Diagnostic Logs enabled for audit logs
-
- ```kusto
- AzureDiagnostics
- | where Category == 'MySqlAuditLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
- | order by TimeGenerated asc nulls last
- ```
-
-## Next steps
-
-- [How to configure audit logs in the Azure portal](howto-configure-audit-logs-portal.md)
mysql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-azure-ad-authentication.md
- Title: Active Directory authentication - Azure Database for MySQL
- Description: Learn about the concepts of Azure Active Directory for authentication with Azure Database for MySQL.
- Previously updated: 07/23/2020
-# Use Azure Active Directory for authenticating with MySQL
--
-Microsoft Azure Active Directory (Azure AD) authentication is a mechanism of connecting to Azure Database for MySQL using identities defined in Azure AD.
-With Azure AD authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management.
-
-Benefits of using Azure AD include:
-
-- Authentication of users across Azure services in a uniform way
-- Management of password policies and password rotation in a single place
-- Multiple forms of authentication supported by Azure Active Directory, which can eliminate the need to store passwords
-- Customers can manage database permissions using external (Azure AD) groups.
-- Azure AD authentication uses MySQL database users to authenticate identities at the database level
-- Support of token-based authentication for applications connecting to Azure Database for MySQL
-
-To configure and use Azure Active Directory authentication, use the following process:
-
-1. Create and populate Azure Active Directory with user identities as needed.
-2. Optionally associate or change the Active Directory currently associated with your Azure subscription.
-3. Create an Azure AD administrator for the Azure Database for MySQL server.
-4. Create database users in your database mapped to Azure AD identities.
-5. Connect to your database by retrieving a token for an Azure AD identity and logging in.
-
-> [!NOTE]
-> To learn how to create and populate Azure AD, and then configure Azure AD with Azure Database for MySQL, see [Configure and sign in with Azure AD for Azure Database for MySQL](howto-configure-sign-in-azure-ad-authentication.md).
-
-## Architecture
-
-The following high-level diagram summarizes how authentication works using Azure AD authentication with Azure Database for MySQL. The arrows indicate communication pathways.
-
-![authentication flow][1]
-
-## Administrator structure
-
-When using Azure AD authentication, there are two Administrator accounts for the MySQL server; the original MySQL administrator and the Azure AD administrator. Only the administrator based on an Azure AD account can create the first Azure AD contained database user in a user database. The Azure AD administrator login can be an Azure AD user or an Azure AD group. When the administrator is a group account, it can be used by any group member, enabling multiple Azure AD administrators for the MySQL server. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the MySQL server. Only one Azure AD administrator (a user or group) can be configured at any time.
-
-![admin structure][2]
-
-## Permissions
-
-To create new users that can authenticate with Azure AD, you must be the designated Azure AD administrator. This user is assigned by configuring the Azure AD Administrator account for a specific Azure Database for MySQL server.
-
-To create a new Azure AD database user, you must connect as the Azure AD administrator. This is demonstrated in [Configure and Login with Azure AD for Azure Database for MySQL](howto-configure-sign-in-azure-ad-authentication.md).
-
-Any Azure AD authentication is only possible if the Azure AD admin was created for Azure Database for MySQL. If the Azure Active Directory admin was removed from the server, existing Azure Active Directory users created previously can no longer connect to the database using their Azure Active Directory credentials.
-
-## Connecting using Azure AD identities
-
-Azure Active Directory authentication supports the following methods of connecting to a database using Azure AD identities:
-
-- Azure Active Directory Password
-- Azure Active Directory Integrated
-- Azure Active Directory Universal with MFA
-- Using Active Directory Application certificates or client secrets
-- [Managed Identity](howto-connect-with-managed-identity.md)
-
-Once you have authenticated against the Active Directory, you then retrieve a token. This token is your password for logging in.
-
-Please note that management operations, such as adding new users, are only supported for Azure AD user roles at this point.
-
-> [!NOTE]
-> For more details on how to connect with an Active Directory token, see [Configure and sign in with Azure AD for Azure Database for MySQL](howto-configure-sign-in-azure-ad-authentication.md).
-
-## Additional considerations
-
-- Azure Active Directory authentication is only available for MySQL 5.7 and newer.
-- Only one Azure AD administrator can be configured for an Azure Database for MySQL server at any time.
-- Only an Azure AD administrator for MySQL can initially connect to the Azure Database for MySQL server by using an Azure Active Directory account. The Active Directory administrator can configure subsequent Azure AD database users.
-- If a user is deleted from Azure AD, that user will no longer be able to authenticate with Azure AD, and therefore it will no longer be possible to acquire an access token for that user. In this case, although the matching user will still be in the database, it won't be possible to connect to the server as that user.
-> [!NOTE]
-> Login with the deleted Azure AD user can still be done till the token expires (up to 60 minutes from token issuing). If you also remove the user from Azure Database for MySQL this access will be revoked immediately.
-- If the Azure AD admin is removed from the server, the server will no longer be associated with an Azure AD tenant, and therefore all Azure AD logins will be disabled for the server. Adding a new Azure AD admin from the same tenant will re-enable Azure AD logins.
-- Azure Database for MySQL matches access tokens to the Azure Database for MySQL user using the user's unique Azure AD user ID, as opposed to using the username. This means that if an Azure AD user is deleted in Azure AD and a new user is created with the same name, Azure Database for MySQL considers it a different user. Therefore, if a user is deleted from Azure AD and a new user with the same name is added, the new user won't be able to connect as the existing user.
-
-## Next steps
-
-- To learn how to create and populate Azure AD, and then configure Azure AD with Azure Database for MySQL, see [Configure and sign in with Azure AD for Azure Database for MySQL](howto-configure-sign-in-azure-ad-authentication.md).
-- For an overview of logins and database users for Azure Database for MySQL, see [Create users in Azure Database for MySQL](howto-create-users.md).
-
-<!--Image references-->
-
-[1]: ./media/concepts-azure-ad-authentication/authentication-flow.png
-[2]: ./media/concepts-azure-ad-authentication/admin-structure.png
mysql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-azure-advisor-recommendations.md
- Title: Azure Advisor for MySQL
-description: Learn about Azure Advisor recommendations for MySQL.
- Previously updated : 04/08/2021
-# Azure Advisor for MySQL
--
-Learn about how Azure Advisor is applied to Azure Database for MySQL and get answers to common questions.
-## What is Azure Advisor for MySQL?
-The Azure Advisor system uses telemetry to issue performance and reliability recommendations for your MySQL database.
-Advisor recommendations are split among our MySQL database offerings:
-* Azure Database for MySQL - Single Server
-* Azure Database for MySQL - Flexible Server
-
-Some recommendations are common to multiple product offerings, while other recommendations are based on product-specific optimizations.
-## Where can I view my recommendations?
-Recommendations are available from the **Overview** navigation sidebar in the Azure portal. A preview will appear as a banner notification, and details can be viewed in the **Notifications** section located just below the resource usage graphs.
--
-## Recommendation types
-Azure Database for MySQL prioritizes the following types of recommendations:
-* **Performance**: To improve the speed of your MySQL server. This includes CPU usage, memory pressure, connection pooling, disk utilization, and product-specific server parameters. For more information, see [Advisor Performance recommendations](../advisor/advisor-performance-recommendations.md).
-* **Reliability**: To ensure and improve the continuity of your business-critical databases. This includes storage limit and connection limit recommendations. For more information, see [Advisor Reliability recommendations](../advisor/advisor-high-availability-recommendations.md).
-* **Cost**: To optimize and reduce your overall Azure spending. This includes server right-sizing recommendations. For more information, see [Advisor Cost recommendations](../advisor/advisor-cost-recommendations.md).
-
-## Understanding your recommendations
-* **Daily schedule**: For Azure MySQL databases, we check server telemetry and issue recommendations on a daily schedule. If you make a change to your server configuration, existing recommendations will remain visible until we re-examine telemetry on the following day.
-* **Performance history**: Some of our recommendations are based on performance history. These recommendations will only appear after a server has been operating with the same configuration for 7 days. This allows us to detect patterns of heavy usage (e.g. high CPU activity or high connection volume) over a sustained time period. If you provision a new server or change to a new vCore configuration, these recommendations will be paused temporarily. This prevents legacy telemetry from triggering recommendations on a newly reconfigured server. However, this also means that performance history-based recommendations may not be identified immediately.
-
-## Next steps
-For more information, see [Azure Advisor Overview](../advisor/advisor-overview.md).
mysql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-backup.md
- Title: Backup and restore - Azure Database for MySQL
-description: Learn about automatic backups and restoring your Azure Database for MySQL server.
- Previously updated : 3/27/2020
-# Backup and restore in Azure Database for MySQL
--
-Azure Database for MySQL automatically creates server backups and stores them in user-configured locally redundant or geo-redundant storage. Backups can be used to restore your server to a point in time. Backup and restore are an essential part of any business continuity strategy because they protect your data from accidental corruption or deletion.
-
-## Backups
-
-Azure Database for MySQL takes backups of the data files and the transaction log. These backups allow you to restore a server to any point-in-time within your configured backup retention period. The default backup retention period is seven days. You can [optionally configure it](howto-restore-server-portal.md#set-backup-configuration) up to 35 days. All backups are encrypted using AES 256-bit encryption.
-
-These backup files are not user-exposed and cannot be exported. These backups can only be used for restore operations in Azure Database for MySQL. You can use [mysqldump](concepts-migrate-dump-restore.md) to copy a database.
-
-The backup type and frequency depend on the backend storage for the server.
-
-### Backup type and frequency
-
-#### Basic storage servers
-
-Basic storage is the backend storage supporting [Basic tier servers](concepts-pricing-tiers.md). Backups on Basic storage servers are snapshot-based: a full database snapshot is performed daily. There are no differential backups for Basic storage servers; all snapshot backups are full database backups.
-
-Transaction log backups occur every five minutes.
-
-#### General purpose storage v1 servers (supports up to 4-TB storage)
-
-General purpose storage is the backend storage supporting [General Purpose](concepts-pricing-tiers.md) and [Memory Optimized tier](concepts-pricing-tiers.md) servers. For servers with general purpose storage up to 4 TB, full backups occur once every week. Differential backups occur twice a day. Transaction log backups occur every five minutes. Backups on general purpose storage up to 4 TB are not snapshot-based and consume I/O bandwidth at the time of backup. For large databases (> 1 TB) on 4-TB storage, we recommend that you consider:
-
-- Provisioning more IOPS to account for backup I/O, OR
-- Alternatively, migrating to general purpose storage that supports up to 16-TB storage if the underlying storage infrastructure is available in your preferred [Azure regions](./concepts-pricing-tiers.md#storage). There is no additional cost for general purpose storage that supports up to 16-TB storage. For assistance with migration to 16-TB storage, please open a support ticket from the Azure portal.
-
-#### General purpose storage v2 servers (supports up to 16-TB storage)
-
-In a subset of [Azure regions](./concepts-pricing-tiers.md#storage), all newly provisioned servers can support general purpose storage up to 16 TB. In other words, 16-TB storage is the default general purpose storage in all the [regions](concepts-pricing-tiers.md#storage) where it is supported. Backups on these 16-TB storage servers are snapshot-based. The first snapshot backup is scheduled immediately after a server is created, and snapshot backups are taken once daily. Transaction log backups occur every five minutes.
-
-For more information about Basic and General purpose storage, see the [storage documentation](./concepts-pricing-tiers.md#storage).
-
-### Backup retention
-
-Backups are retained based on the backup retention period setting on the server. You can select a retention period of 7 to 35 days. The default retention period is 7 days. You can set the retention period during server creation or later by updating the backup configuration using [Azure portal](./howto-restore-server-portal.md#set-backup-configuration) or [Azure CLI](./howto-restore-server-cli.md#set-backup-configuration).
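-
-As a hedged illustration (hypothetical resource group and server names), the retention period can be updated with the Azure CLI as follows:
-
-```azurecli-interactive
-# Increase the backup retention period of an existing server to 14 days
-az mysql server update --resource-group myresourcegroup --name mydemoserver --backup-retention 14
-```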
-
-The backup retention period governs how far back in time a point-in-time restore can go, since it's based on the backups available. The backup retention period can also be treated as a recovery window from a restore perspective. All backups required to perform a point-in-time restore within the backup retention period are retained in backup storage. For example, if the backup retention period is set to seven days, the recovery window is the last seven days, and all the backups required to restore the server within the last seven days are retained. With a backup retention window of seven days:
-
-- General purpose storage v1 servers (supporting up to 4-TB storage) will retain up to two full database backups, all the differential backups, and the transaction log backups performed since the earliest full database backup.
-- General purpose storage v2 servers (supporting up to 16-TB storage) will retain the full database snapshots and transaction log backups from the last eight days.
-
-#### Long-term retention
-
-Long-term retention of backups beyond 35 days isn't natively supported by the service yet. You have the option to use mysqldump to take backups and store them for long-term retention. Our support team has published a [step-by-step article](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/automate-backups-of-your-azure-database-for-mysql-server-to/ba-p/1791157) showing how you can achieve this, as sketched below.
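-
-A hedged example of such a manual backup (hypothetical server, admin, and database names; the Single Server user name takes the user@servername form):
-
-```console
-# Export a database to a local file for long-term retention
-mysqldump -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p mydb > mydb-backup.sql
-```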
-
-### Backup redundancy options
-
-Azure Database for MySQL provides the flexibility to choose between locally redundant or geo-redundant backup storage in the General Purpose and Memory Optimized tiers. When backups are stored in geo-redundant backup storage, they are not only stored within the region in which your server is hosted, but are also replicated to a [paired data center](../availability-zones/cross-region-replication-azure.md). This geo-redundancy provides better protection and the ability to restore your server in a different region in the event of a disaster. The Basic tier only offers locally redundant backup storage.
-
-> [!NOTE]
->For the following regions - Central India, France Central, UAE North, and South Africa North - General purpose storage v2 is in public preview. If you create a source server on General purpose storage v2 (supporting up to 16-TB storage) in the above-mentioned regions, enabling geo-redundant backup is not supported.
-
-#### Moving from locally redundant to geo-redundant backup storage
-
-Configuring locally redundant or geo-redundant storage for backup is only allowed during server creation. Once the server is provisioned, you cannot change the backup storage redundancy option. To move your backup storage from locally redundant to geo-redundant storage, creating a new server and migrating the data using [dump and restore](concepts-migrate-dump-restore.md) is the only supported option.
-
-### Backup storage cost
-
-Azure Database for MySQL provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any additional backup storage used is charged in GB per month. For example, if you have provisioned a server with 250 GB of storage, you have 250 GB of storage available for server backups at no additional charge. Storage consumed for backups beyond 250 GB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/mysql/).
-
-You can use the [Backup Storage used](concepts-monitoring.md) metric in Azure Monitor available via the Azure portal to monitor the backup storage consumed by a server. The Backup Storage used metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained earlier. Heavy transactional activity on the server can cause backup storage usage to increase irrespective of the total database size. For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.
-
-The primary means of controlling the backup storage cost is by setting the appropriate backup retention period and choosing the right backup redundancy options to meet your desired recovery goals. You can select a retention period from a range of 7 to 35 days. General Purpose and Memory Optimized servers can choose to have geo-redundant storage for backups.
-
-## Restore
-
-In Azure Database for MySQL, performing a restore creates a new server from the original server's backups and restores all databases contained in the server.
-
-There are two types of restore available:
-
-- **Point-in-time restore** is available with either backup redundancy option and creates a new server in the same region as your original server, utilizing the combination of full and transaction log backups.
-- **Geo-restore** is available only if you configured your server for geo-redundant storage, and it allows you to restore your server to a different region utilizing the most recent backup taken.
-
-The estimated time for the recovery of the server depends on several factors:
-* The size of the databases
-* The number of transaction logs involved
-* The amount of activity that needs to be replayed to recover to the restore point
-* The network bandwidth if the restore is to a different region
-* The number of concurrent restore requests being processed in the target region
-* The presence of a primary key in the tables in the database. For faster recovery, consider adding a primary key to all the tables in your database. To check whether your tables have a primary key, you can use the following query:
-```sql
-SELECT tab.table_schema AS database_name,
-       tab.table_name
-FROM information_schema.tables tab
-LEFT JOIN information_schema.table_constraints tco
-    ON tab.table_schema = tco.table_schema
-    AND tab.table_name = tco.table_name
-    AND tco.constraint_type = 'PRIMARY KEY'
-WHERE tco.constraint_type IS NULL
-    AND tab.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
-    AND tab.table_type = 'BASE TABLE'
-ORDER BY tab.table_schema, tab.table_name;
-```
-For a large or very active database, the restore might take several hours. If there is a prolonged outage in a region, it's possible that a high number of geo-restore requests will be initiated for disaster recovery. When there are many requests, the recovery time for individual databases can increase. Most database restores finish in less than 12 hours.
-
-> [!IMPORTANT]
-> Deleted servers can be restored only within **five days** of deletion, after which the backups are deleted. The database backup can be accessed and restored only from the Azure subscription hosting the server. To restore a dropped server, see the [documented steps](howto-restore-dropped-server.md). To protect server resources from accidental deletion or unexpected changes after deployment, administrators can leverage [management locks](../azure-resource-manager/management/lock-resources.md).
-
-### Point-in-time restore
-
-Independent of your backup redundancy option, you can perform a restore to any point in time within your backup retention period. A new server is created in the same Azure region as the original server. It is created with the original server's configuration for the pricing tier, compute generation, number of vCores, storage size, backup retention period, and backup redundancy option.
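-
-A minimal Azure CLI sketch (hypothetical names; the timestamp must fall within your retention period):
-
-```azurecli-interactive
-# Restore mydemoserver to a point in time, as a new server named mydemoserver-restored
-az mysql server restore --resource-group myresourcegroup \
-    --name mydemoserver-restored \
-    --restore-point-in-time "2021-04-13T13:59:00Z" \
-    --source-server mydemoserver
-```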
-
-> [!NOTE]
-> There are two server parameters that are reset to default values (and are not copied over from the primary server) after the restore operation:
->
-> - time_zone - This value is set to the DEFAULT value **SYSTEM**
-> - event_scheduler - The event_scheduler is set to **OFF** on the restored server
->
-> You will need to set these server parameters again by reconfiguring the [server parameters](howto-server-parameters.md), as sketched below.
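-
-A hedged Azure CLI sketch of re-applying one of these parameters (hypothetical resource names):
-
-```azurecli-interactive
-# Re-enable the event scheduler on the restored server
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver-restored --name event_scheduler --value ON
-```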
-
-Point-in-time restore is useful in multiple scenarios. For example, when a user accidentally deletes data, drops an important table or database, or if an application accidentally overwrites good data with bad data due to an application defect.
-
-You may need to wait for the next transaction log backup to be taken before you can restore to a point in time within the last five minutes.
-
-### Geo-restore
-
-You can restore a server to another Azure region where the service is available if you have configured your server for geo-redundant backups.
-- General purpose storage v1 servers (supporting up to 4-TB storage) can be restored to the geo-paired region, or to any Azure region that supports the Azure Database for MySQL Single Server service.
-- General purpose storage v2 servers (supporting up to 16-TB storage) can only be restored to Azure regions that support the General purpose storage v2 infrastructure.
-Review [Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md#storage) for the list of supported regions.
-
-Geo-restore is the default recovery option when your server is unavailable because of an incident in the region where the server is hosted. If a large-scale incident in a region results in unavailability of your database application, you can restore a server from the geo-redundant backups to a server in any other region. Geo-restore utilizes the most recent backup of the server. There is a delay between when a backup is taken and when it is replicated to a different region. This delay can be up to an hour, so, if a disaster occurs, there can be up to one hour of data loss.
-
-> [!IMPORTANT]
->If a geo-restore is performed for a newly created server, the initial backup synchronization may take more than 24 hours depending on the data size, because the initial full snapshot backup takes much longer to copy. Subsequent snapshot backups are incremental copies, so restores are faster 24 hours after server creation. If you're evaluating geo-restores to define your RTO, we recommend you wait and evaluate geo-restore **only after 24 hours** of server creation for better estimates.
-
-During geo-restore, the server configurations that can be changed include compute generation, vCore, backup retention period, and backup redundancy options. Changing pricing tier (Basic, General Purpose, or Memory Optimized) or storage size during geo-restore is not supported.
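-
-A hedged Azure CLI sketch of a geo-restore (hypothetical names; the target region must support the service):
-
-```azurecli-interactive
-# Geo-restore mydemoserver into another region as a new server
-az mysql server georestore --resource-group myresourcegroup \
-    --name mydemoserver-georestored \
-    --source-server mydemoserver \
-    --location eastus2 \
-    --sku-name GP_Gen5_8
-```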
-
-The estimated time of recovery depends on several factors including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time. The recovery time is usually less than 12 hours.
-
-### Perform post-restore tasks
-
-After a restore from either recovery mechanism, you should perform the following tasks to get your users and applications back up and running:
-
-- If the new server is meant to replace the original server, redirect clients and client applications to the new server.
-- Ensure appropriate VNet rules are in place for users to connect. These rules are not copied over from the original server.
-- Ensure appropriate logins and database-level permissions are in place.
-- Configure alerts, as appropriate.
-
-## Next steps
-
-- To learn more about business continuity, see the [business continuity overview](concepts-business-continuity.md).
-- To restore to a point in time using the Azure portal, see [restore server to a point-in-time using the Azure portal](howto-restore-server-portal.md).
-- To restore to a point in time using Azure CLI, see [restore server to a point-in-time using CLI](howto-restore-server-cli.md).
mysql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-business-continuity.md
- Title: Business continuity - Azure Database for MySQL
-description: Learn about business continuity (point-in-time restore, data center outage, geo-restore) when using Azure Database for MySQL service.
- Previously updated : 7/7/2020
-# Overview of business continuity with Azure Database for MySQL - Single Server
--
-This article describes the capabilities that Azure Database for MySQL provides for business continuity and disaster recovery. Learn about options for recovering from disruptive events that could cause data loss or cause your database and application to become unavailable. Learn what to do when a user or application error affects data integrity, an Azure region has an outage, or your application requires maintenance.
-
-## Features that you can use to provide business continuity
-
-As you develop your business continuity plan, you need to understand the maximum acceptable time before the application fully recovers after the disruptive event - this is your Recovery Time Objective (RTO). You also need to understand the maximum amount of recent data updates (time interval) the application can tolerate losing when recovering after the disruptive event - this is your Recovery Point Objective (RPO).
-
-Azure Database for MySQL Single Server provides business continuity and disaster recovery features that include geo-redundant backups with the ability to initiate geo-restore, and deploying read replicas in a different region. Each has different characteristics for recovery time and potential data loss. With the [geo-restore](concepts-backup.md) feature, a new server is created using the backup data that is replicated from another region. The overall time it takes to restore and recover depends on the size of the database and the amount of logs to recover. The overall time to establish the server varies from a few minutes to a few hours. With [read replicas](concepts-read-replicas.md), transaction logs from the primary are asynchronously streamed to the replica. In the event of a primary database outage due to a zone-level or a region-level fault, failing over to the replica provides a shorter RTO and reduced data loss.
-
-> [!NOTE]
-> The lag between the primary and the replica depends on the latency between the sites, the amount of data to be transmitted and most importantly on the write workload of the primary server. Heavy write workloads can generate significant lag.
->
-> Because of the asynchronous nature of replication used for read replicas, they **should not** be considered a high availability (HA) solution, since higher lag can mean higher RTO and RPO. Only for workloads where the lag remains small through the peak and off-peak times of the workload can read replicas act as an HA alternative. Otherwise, read replicas are intended for true read scale-out for read-heavy workloads and for disaster recovery (DR) scenarios.
-
-The following table compares RTO and RPO in a **typical workload** scenario:
-
-| **Capability** | **Basic** | **General Purpose** | **Memory optimized** |
-| :-- | :-- | :-- | :-- |
-| Point in Time Restore from backup | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min| Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min |
-| Geo-restore from geo-replicated backups | Not supported | RTO - Varies <br/>RPO < 1 h | RTO - Varies <br/>RPO < 1 h |
-| Read replicas | RTO - Minutes* <br/>RPO < 5 min* | RTO - Minutes* <br/>RPO < 5 min*| RTO - Minutes* <br/>RPO < 5 min*|
-
- \* RTO and RPO **can be much higher** in some cases depending on various factors including latency between sites, the amount of data to be transmitted, and importantly primary database write workload.
-
-## Recover a server after a user or application error
-
-You can use the service's backups to recover a server from various disruptive events. A user may accidentally delete some data, inadvertently drop an important table, or even drop an entire database. An application might accidentally overwrite good data with bad data due to an application defect, and so on.
-
-You can perform a point-in-time-restore to create a copy of your server to a known good point in time. This point in time must be within the backup retention period you have configured for your server. After the data is restored to the new server, you can either replace the original server with the newly restored server or copy the needed data from the restored server into the original server.
-
-> [!IMPORTANT]
-> Deleted servers can be restored only within **five days** of deletion, after which the backups are deleted. The database backup can be accessed and restored only from the Azure subscription hosting the server. To restore a dropped server, see the [documented steps](howto-restore-dropped-server.md). To protect server resources from accidental deletion or unexpected changes after deployment, administrators can leverage [management locks](../azure-resource-manager/management/lock-resources.md).
-
-## Recover from an Azure regional data center outage
-
-Although rare, an Azure data center can have an outage. When an outage occurs, it causes a business disruption that might only last a few minutes, but could last for hours.
-
-One option is to wait for your server to come back online when the data center outage is over. This works for applications that can afford to have the server offline for some period of time, for example a development environment. When a data center has an outage, you don't know how long the outage might last, so this option only works if you don't need your server for a while.
-
-## Geo-restore
-
-The geo-restore feature restores the server using geo-redundant backups. The backups are hosted in your server's [paired region](../availability-zones/cross-region-replication-azure.md). These backups are accessible even when the region your server is hosted in is offline. You can restore from these backups to any other region and bring your server back online. Learn more about geo-restore from the [backup and restore concepts article](concepts-backup.md).
-
-> [!IMPORTANT]
-> Geo-restore is only possible if you provisioned the server with geo-redundant backup storage. If you wish to switch from locally redundant to geo-redundant backups for an existing server, you must take a dump using mysqldump of your existing server and restore it to a newly created server configured with geo-redundant backups.
-
-## Cross-region read replicas
-
-You can use cross-region read replicas to enhance your business continuity and disaster recovery planning. Read replicas are updated asynchronously using MySQL's binary log replication technology. Learn more about read replicas, available regions, and how to fail over from the [read replicas concepts article](concepts-read-replicas.md).
-
-## FAQ
-
-### Where does Azure Database for MySQL store customer data?
-By default, Azure Database for MySQL doesn't move or store customer data out of the region it is deployed in. However, customers can optionally choose to enable [geo-redundant backups](concepts-backup.md#backup-redundancy-options) or create a [cross-region read replica](concepts-read-replicas.md#cross-region-replication) to store data in another region.
-
-## Next steps
-
-- Learn more about the [automated backups in Azure Database for MySQL](concepts-backup.md).
-- Learn how to restore using [the Azure portal](howto-restore-server-portal.md) or [the Azure CLI](howto-restore-server-cli.md).
-- Learn about [read replicas in Azure Database for MySQL](concepts-read-replicas.md).
mysql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-certificate-rotation.md
- Title: Certificate rotation for Azure Database for MySQL
-description: Learn about the upcoming changes of root certificate changes that will affect Azure Database for MySQL
- Previously updated : 04/08/2021
-# Understanding the changes in the Root CA change for Azure Database for MySQL Single Server
--
-Azure Database for MySQL Single Server successfully completed the root certificate change on **February 15, 2021 (02/15/2021)** as part of standard maintenance and security best practices. This article gives you more details about the changes, the resources affected, and the steps needed to ensure that your application maintains connectivity to your database server.
-
-> [!NOTE]
-> This article applies to [Azure Database for MySQL - Single Server](single-server-overview.md) ONLY. For [Azure Database for MySQL - Flexible Server](flexible-server/overview.md), the certificate needed to communicate over SSL is [DigiCert Global Root CA](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem)
->
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
->
-
-#### Why is a root certificate update required?
-
-Azure Database for MySQL users can only use the predefined certificate to connect to their MySQL server, which is located [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem). However, the [Certificate Authority (CA) Browser forum](https://cabforum.org/) recently published reports that multiple certificates issued by CA vendors are non-compliant.
-
-Per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs, and signed by CA certificates from those compliant CAs. Since Azure Database for MySQL used one of these non-compliant certificates, we needed to rotate the certificate to the compliant version to minimize the potential threat to your MySQL servers.
-
-The new certificate is rolled out and in effect as of February 15, 2021 (02/15/2021).
-
-#### What change was performed on February 15, 2021 (02/15/2021)?
-
-On February 15, 2021, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) was replaced with a **compliant version** of the same [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) to ensure existing customers don't need to change anything and there's no impact to their connections to the server. During this change, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) was **not replaced** with [DigiCertGlobalRootG2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and that change is deferred to allow more time for customers to make the change.
-
-#### Do I need to make any changes on my client to maintain connectivity?
-
-No change is required on client side. If you followed our previous recommendation below, you can continue to connect as long as **BaltimoreCyberTrustRoot certificate is not removed** from the combined CA certificate. **To maintain connectivity, we recommend that you retain the BaltimoreCyberTrustRoot in your combined CA certificate until further notice.**
-
-###### Previous recommendation
-
-To avoid interruption of your application's availability as a result of certificates being unexpectedly revoked, or to update a certificate that has been revoked, use the following steps. The idea is to create a new *.pem* file that combines the current certificate and the new one, so that during SSL certificate validation either of the allowed values can be used. Refer to the following steps:
-
-1. Download BaltimoreCyberTrustRoot & DigiCertGlobalRootG2 Root CA from the following links:
-
- * [https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem)
- * [https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem)
-
-2. Generate a combined CA certificate store with both **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** certificates included.
-
- * For Java (MySQL Connector/J) users, execute:
-
- ```console
- keytool -importcert -alias MySQLServerCACert -file D:\BaltimoreCyberTrustRoot.crt.pem -keystore truststore -storepass password -noprompt
- ```
-
- ```console
- keytool -importcert -alias MySQLServerCACert2 -file D:\DigiCertGlobalRootG2.crt.pem -keystore truststore -storepass password -noprompt
- ```
-
- Then replace the original keystore file with the newly generated one:
-
- * System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file");
- * System.setProperty("javax.net.ssl.trustStorePassword","password");
-
- * For .NET (MySQL Connector/NET, MySQLConnector) users, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in Windows Certificate Store, Trusted Root Certification Authorities. If any certificates don't exist, import the missing certificate.
-
- :::image type="content" source="media/overview/netconnecter-cert.png" alt-text="Azure Database for MySQL .NET cert diagram":::
-
- * For .NET users on Linux using SSL_CERT_DIR, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in the directory indicated by SSL_CERT_DIR. If any certificates don't exist, create the missing certificate file.
-
- * For other (MySQL Client/MySQL Workbench/C/C++/Go/Python/Ruby/PHP/NodeJS/Perl/Swift) users, you can merge two CA certificate files into the following format:
-
- ```
- -----BEGIN CERTIFICATE-----
- (Root CA1: BaltimoreCyberTrustRoot.crt.pem)
- -----END CERTIFICATE-----
- -----BEGIN CERTIFICATE-----
- (Root CA2: DigiCertGlobalRootG2.crt.pem)
- -----END CERTIFICATE-----
- ```
-
-3. Replace the original root CA pem file with the combined root CA file and restart your application/client.
-
- In the future, after the new certificate is deployed on the server side, you can change your CA pem file to DigiCertGlobalRootG2.crt.pem.
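-
- A hedged shell sketch of producing the combined file (assuming both *.pem* files were downloaded to the current directory):
-
- ```console
- cat BaltimoreCyberTrustRoot.crt.pem DigiCertGlobalRootG2.crt.pem > CombinedCA.crt.pem
- ```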
-
-> [!NOTE]
-> Please don't drop or alter the **Baltimore certificate** until the cert change is made. We'll send a communication after the change is done, and then it will be safe to drop the **Baltimore certificate**.
-
-#### Why wasn't the BaltimoreCyberTrustRoot certificate replaced with DigiCertGlobalRootG2 during the change on February 15, 2021?
-
-We evaluated customer readiness for this change and realized that many customers were looking for extra lead time to manage it. To provide that lead time, we decided to defer the certificate change to DigiCertGlobalRootG2 for at least a year.
-
-Our recommendation to users is to use the aforementioned steps to create a combined certificate and connect to your server, but not to remove the BaltimoreCyberTrustRoot certificate until we send a communication to remove it.
-
-#### What if we removed the BaltimoreCyberTrustRoot certificate?
-
-You'll start to encounter connectivity errors while connecting to your Azure Database for MySQL server. You'll need to [configure SSL](howto-configure-ssl.md) with the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate again to maintain connectivity.
-
-## Frequently asked questions
-
-#### If I'm not using SSL/TLS, do I still need to update the root CA?
-
- No actions are required if you aren't using SSL/TLS.
-
-#### If I'm using SSL/TLS, do I need to restart my database server to update the root CA?
-
-No, you don't need to restart the database server to start using the new certificate. This root certificate change is a client-side change, and the incoming client connections need to use the new certificate to ensure that they can connect to the database server.
-
-#### How do I know if I'm using SSL/TLS with root certificate verification?
-
-You can identify whether your connections verify the root certificate by reviewing your connection string.
-
-* If your connection string includes `sslmode=verify-ca` or `sslmode=verify-identity`, you need to update the certificate.
-* If your connection string includes `sslmode=disable`, `sslmode=allow`, `sslmode=prefer`, or `sslmode=require`, you don't need to update certificates.
-* If your connection string doesn't specify sslmode, you don't need to update certificates.
-
-If you're using a client that abstracts the connection string away, review the client's documentation to understand whether it verifies certificates.
-
-#### What is the impact of using App Service with Azure Database for MySQL?
-
-For Azure App Service apps connecting to Azure Database for MySQL, there are two possible scenarios depending on how you're using SSL with your application.
-
-* This new certificate has been added to App Service at platform level. If you're using the SSL certificates included on App Service platform in your application, then no action is needed. This is the most common scenario.
-* If you're explicitly including the path to an SSL cert file in your code, you need to download the new cert and produce a combined certificate as mentioned above, then use that certificate file. A good example of this scenario is when you use custom containers in App Service, as shared in the [App Service documentation](../app-service/tutorial-multi-container-app.md#configure-database-variables-in-wordpress). This scenario is uncommon, but we have seen some users rely on it.
-
-#### What is the impact of using Azure Kubernetes Services (AKS) with Azure Database for MySQL?
-
-If you're trying to connect to Azure Database for MySQL using Azure Kubernetes Service (AKS), it's similar to access from a dedicated customer host environment. Refer to the steps [here](../aks/ingress-own-tls.md).
-
-#### What is the impact of using Azure Data Factory to connect to Azure Database for MySQL?
-
-For a connector using Azure Integration Runtime, the connector uses certificates in the Windows Certificate Store in the Azure-hosted environment. These certificates are already compatible with the newly applied certificates, so no action is needed.
-
-For a connector using Self-hosted Integration Runtime where you explicitly include the path to SSL cert file in your connection string, you'll need to download the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and update the connection string to use it.
-
-#### Do I need to plan a database server maintenance downtime for this change?
-
-No. Since the change is only on the client side to connect to the database server, there's no maintenance downtime needed for the database server for this change.
-
-#### If I create a new server after February 15, 2021 (02/15/2021), will I be impacted?
-
-For servers created after February 15, 2021 (02/15/2021), you will continue to use the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) for your applications to connect using SSL.
-
-#### How often does Microsoft update their certificates or what is the expiry policy?
-
-The certificates used by Azure Database for MySQL are provided by trusted Certificate Authorities (CAs), so support for these certificates is tied to their support by the CA. The [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate is scheduled to expire in 2025, so Microsoft will need to perform a certificate change before it expires. Also, if there are unforeseen bugs in these predefined certificates, Microsoft will need to rotate the certificate as early as possible, similar to the change performed on February 15, 2021, to ensure the service is secure and compliant at all times.
-
-#### If I'm using read replicas, do I need to perform this update only on source server or the read replicas?
-
-Since this update is a client-side change, if clients are used to read data from the replica server, you'll need to apply the changes to those clients as well.
-
-#### If I'm using Data-in replication, do I need to perform any action?
-
-If you're using [Data-in replication](concepts-data-in-replication.md) to connect to Azure Database for MySQL, there are two things to consider:
-
-* If the data replication is from a virtual machine (on-premises or an Azure virtual machine) to Azure Database for MySQL, you need to check whether SSL is being used to create the replica. Run **SHOW SLAVE STATUS** and check the following settings.
-
- ```
- Master_SSL_Allowed : Yes
- Master_SSL_CA_File : ~\azure_mysqlservice.pem
- Master_SSL_CA_Path :
- Master_SSL_Cert : ~\azure_mysqlclient_cert.pem
- Master_SSL_Cipher :
- Master_SSL_Key : ~\azure_mysqlclient_key.pem
- ```
-
- If you see that the certificate is provided for the CA_file, SSL_Cert, and SSL_Key, you'll need to update the file by adding the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and create a combined cert file.
-
-* If the data replication is between two Azure Database for MySQL servers, you'll need to reset the replica by executing **CALL mysql.az_replication_change_master** and providing the new dual root certificate as the last parameter [master_ssl_ca](howto-data-in-replication.md#link-source-and-replica-servers-to-start-data-in-replication), as sketched below.
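-
-A hedged sketch of that call from the mysql client (placeholder values throughout; see the linked article for the authoritative parameter list):
-
-```console
-mysql> CALL mysql.az_replication_change_master('<source_host>', '<source_user>', '<source_password>', 3306, '<log_file>', <log_position>, '<contents of combined root certificate>');
-```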
-
-#### Is there a server-side query to determine whether SSL is being used?
-
-To verify whether you're using an SSL connection to the server, see [SSL verification](howto-configure-ssl.md#step-4-verify-the-ssl-connection).
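-
-As a hedged example, the documented check can be run from the mysql client; a non-empty value indicates an encrypted connection:
-
-```console
-mysql> SHOW STATUS LIKE 'Ssl_cipher';
-```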
-
-#### Is there an action needed if I already have the DigiCertGlobalRootG2 in my certificate file?
-
-No. There's no action needed if your certificate file already has the **DigiCertGlobalRootG2**.
-
-#### What if I have further questions?
-
-For questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforMySQL@service.microsoft.com). If you have a support plan and you need technical help, [contact us](mailto:AzureDatabaseforMySQL@service.microsoft.com).
mysql Concepts Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-compatibility.md
- Title: Driver and tools compatibility - Azure Database for MySQL
-description: This article describes the MySQL drivers and management tools that are compatible with Azure Database for MySQL.
- Previously updated : 11/4/2021
-# MySQL drivers and management tools compatible with Azure Database for MySQL
--
-This article describes the drivers and management tools that are compatible with Azure Database for MySQL Single Server.
-
-> [!NOTE]
-> This article is only applicable to Azure Database for MySQL Single Server to ensure drivers are compatible with [connectivity architecture](concepts-connectivity-architecture.md) of Single Server service. [Azure Database for MySQL Flexible Server](./flexible-server/overview.md) is compatible with all the drivers and tools supported and compatible with MySQL community edition.
-
-## MySQL Drivers
-Azure Database for MySQL uses the world's most popular community edition of MySQL database. As such, it's compatible with a wide variety of programming languages and drivers. The goal is to support the three most recent versions of MySQL drivers, and efforts with authors from the open-source community to constantly improve the functionality and usability of MySQL drivers continue. A list of drivers that have been tested and found to be compatible with Azure Database for MySQL 5.6 and 5.7 is provided in the following table:
-
-| **Programming Language** | **Driver** | **Links** | **Compatible Versions** | **Incompatible Versions** | **Notes** |
-| :-- | : | :-- | :- | : | :-- |
-| PHP | mysqli, pdo_mysql, mysqlnd | https://secure.php.net/downloads.php | 5.5, 5.6, 7.x | 5.3 | For PHP 7.0 connection with SSL MySQLi, add MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT in the connection string. <br> ```mysqli_real_connect($conn, $host, $username, $password, $db_name, 3306, NULL, MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT);```<br> PDO set: ```PDO::MYSQL_ATTR_SSL_VERIFY_SERVER_CERT``` option to false.|
-| .NET | Async MySQL Connector for .NET | https://github.com/mysql-net/MySqlConnector <br> [Installation package from NuGet](https://www.nuget.org/packages/MySqlConnector/) | 0.27 and after | 0.26.5 and before | |
-| .NET | MySQL Connector/NET | https://github.com/mysql/mysql-connector-net | 6.6.3, 7.0, 8.0 | | An encoding bug may cause connections to fail on some non-UTF8 Windows systems. |
-| Node.js | mysqljs | https://github.com/mysqljs/mysql/ <br> Installation package from NPM:<br> Run `npm install mysql` from NPM | 2.15 | 2.14.1 and before | |
-| Node.js | node-mysql2 | https://github.com/sidorares/node-mysql2 | 1.3.4+ | | |
-| Go | Go MySQL Driver | https://github.com/go-sql-driver/mysql/releases | 1.3, 1.4 | 1.2 and before | Use `allowNativePasswords=true` in the connection string for version 1.3. Version 1.4 contains a fix and `allowNativePasswords=true` is no longer required. |
-| Python | MySQL Connector/Python | https://pypi.python.org/pypi/mysql-connector-python | 1.2.3, 2.0, 2.1, 2.2, use 8.0.16+ with MySQL 8.0 | 1.2.2 and before | |
-| Python | PyMySQL | https://pypi.org/project/PyMySQL/ | 0.7.11, 0.8.0, 0.8.1, 0.9.3+ | 0.9.0 - 0.9.2 (regression in web2py) | |
-| Java | MariaDB Connector/J | https://downloads.mariadb.org/connector-java/ | 2.1, 2.0, 1.6 | 1.5.5 and before | |
-| Java | MySQL Connector/J | https://github.com/mysql/mysql-connector-j | 5.1.21+, use 8.0.17+ with MySQL 8.0 | 5.1.20 and below | |
-| C | MySQL Connector/C (libmysqlclient) | https://dev.mysql.com/doc/c-api/5.7/en/c-api-implementations.html | 6.0.2+ | | |
-| C | MySQL Connector/ODBC (myodbc) | https://github.com/mysql/mysql-connector-odbc | 3.51.29+ | | |
-| C++ | MySQL Connector/C++ | https://github.com/mysql/mysql-connector-cpp | 1.1.9+ | 1.1.3 and below | |
-| C++ | MySQL++| https://github.com/tangentsoft/mysqlpp | 3.2.3+ | | |
-| Ruby | mysql2 | https://github.com/brianmario/mysql2 | 0.4.10+ | | |
-| R | RMySQL | https://github.com/rstats-db/RMySQL | 0.10.16+ | | |
-| Swift | mysql-swift | https://github.com/novi/mysql-swift | 0.7.2+ | | |
-| Swift | vapor/mysql | https://github.com/vapor/mysql-kit | 2.0.1+ | | |
-
-## Management Tools
-The compatibility advantage extends to database management tools as well. Your existing tools should continue to work with Azure Database for MySQL, as long as the database manipulation operates within the confines of user permissions. Four common database management tools that have been tested and found to be compatible with Azure Database for MySQL 5.6 and 5.7 are listed in the following table:
-
-| | **MySQL Workbench 6.x and up** | **Navicat 12** | **PHPMyAdmin 4.x and up** | **dbForge Studio for MySQL 9.0** |
-| :- | :-- | :- | :- | :- |
-| **Create, Update, Read, Write, Delete** | X | X | X | X |
-| **SSL Connection** | X | X | X | X |
-| **SQL Query Auto Completion** | X | X | | X |
-| **Import and Export Data** | X | X | X | X |
-| **Export to Multiple Formats** | X | X | X | X |
-| **Backup and Restore** | | X | | X |
-| **Display Server Parameters** | X | X | X | X |
-| **Display Client Connections** | X | X | X | X |
-
-## Next steps
-
-- [Troubleshoot connection issues to Azure Database for MySQL](howto-troubleshoot-common-connection-issues.md)
mysql Concepts Connection Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-connection-libraries.md
- Title: Connection libraries - Azure Database for MySQL
-description: This article lists each library or driver that client programs can use when connecting to Azure Database for MySQL.
- Previously updated : 8/3/2020
-# Connection libraries for Azure Database for MySQL
--
-This article lists each library or driver that client programs can use when connecting to Azure Database for MySQL.
-
-## Client interfaces
-MySQL offers standard database driver connectivity for using MySQL with applications and tools that are compatible with industry standards ODBC and JDBC. Any system that works with ODBC or JDBC can use MySQL.
-
-| **Language** | **Platform** | **Additional Resource** | **Download** |
-| :-- | :| :--| :|
-| PHP | Windows, Linux | [MySQL native driver for PHP - mysqlnd](https://dev.mysql.com/downloads/connector/php-mysqlnd/) | [Download](https://secure.php.net/downloads.php) |
-| ODBC | Windows, Linux, macOS X, and Unix platforms | [MySQL Connector/ODBC Developer Guide](https://dev.mysql.com/doc/connector-odbc/en/) | [Download](https://dev.mysql.com/downloads/connector/odbc/) |
-| ADO.NET | Windows | [MySQL Connector/Net Developer Guide](https://dev.mysql.com/doc/connector-net/en/) | [Download](https://dev.mysql.com/downloads/connector/net/) |
-| JDBC | Platform independent | [MySQL Connector/J 5.1 Developer Guide](https://dev.mysql.com/doc/connector-j/5.1/en/) | [Download](https://dev.mysql.com/downloads/connector/j/) |
-| Node.js | Windows, Linux, macOS X | [sidorares/node-mysql2](https://github.com/sidorares/node-mysql2/tree/master/documentation) | [Download](https://github.com/sidorares/node-mysql2) |
-| Python | Windows, Linux, macOS X | [MySQL Connector/Python Developer Guide](https://dev.mysql.com/doc/connector-python/en/) | [Download](https://dev.mysql.com/downloads/connector/python/) |
-| C++ | Windows, Linux, macOS X | [MySQL Connector/C++ Developer Guide](https://dev.mysql.com/doc/connector-cpp/en/) | [Download](https://dev.mysql.com/downloads/connector/cpp/) |
-| C | Windows, Linux, macOS X | [MySQL Connector/C Developer Guide](https://dev.mysql.com/doc/c-api/8.0/en/) | [Download](https://dev.mysql.com/downloads/connector/c/) |
-| Perl | Windows, Linux, macOS X, and Unix platforms | [DBD::MySQL](https://metacpan.org/pod/DBD::mysql) | [Download](https://metacpan.org/pod/DBD::mysql) |
--
-## Next steps
-Read these quickstarts on how to connect to and query Azure Database for MySQL by using your language of choice:
-
-- [PHP](./connect-php.md)
-- [Java](./connect-java.md)
-- [.NET (C#)](./connect-csharp.md)
-- [Python](./connect-python.md)
-- [Node.JS](./connect-nodejs.md)
-- [Ruby](./connect-ruby.md)
-- [C++](connect-cpp.md)
-- [Go](./connect-go.md)
mysql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-connectivity-architecture.md
- Title: Connectivity architecture - Azure Database for MySQL
-description: Describes the connectivity architecture for your Azure Database for MySQL server.
- Previously updated : 10/15/2021
-# Connectivity architecture in Azure Database for MySQL
--
-This article explains the Azure Database for MySQL connectivity architecture and how the traffic is directed to your Azure Database for MySQL instance from clients both within and outside Azure.
-
-## Connectivity architecture
-Connection to your Azure Database for MySQL is established through a gateway that is responsible for routing incoming connections to the physical location of your server in our clusters. The following diagram illustrates the traffic flow.
--
-As a client connects to the database, the server name in the connection string resolves to the gateway IP address. The gateway listens on this IP address on port 3306. Inside the database cluster, traffic is forwarded to the appropriate Azure Database for MySQL server. Therefore, to connect to your server, for example from corporate networks, it's necessary to open up the **client-side firewall to allow outbound traffic to be able to reach our gateways**. Below you can find a complete list of the IP addresses used by our gateways per region.
-
-## Azure Database for MySQL gateway IP addresses
-
-The gateway service is hosted on a group of stateless compute nodes sitting behind an IP address, which your client reaches first when trying to connect to an Azure Database for MySQL server.
-
-As part of ongoing service maintenance, we'll periodically refresh the compute hardware hosting the gateways to ensure we provide the most secure and performant experience. When the gateway hardware is refreshed, a new ring of compute nodes is built out first. This new ring serves the traffic for all newly created Azure Database for MySQL servers, and it will have a different IP address from older gateway rings in the same region to differentiate the traffic. Once the new ring is fully functional, the older gateway hardware serving existing servers is planned for decommissioning. Before gateway hardware is decommissioned, customers running their servers and connecting to older gateway rings are notified via email and in the Azure portal three months in advance. The decommissioning of gateways can impact the connectivity to your servers if:
-
-* You hard-code the gateway IP addresses in the connection string of your application. This is **not recommended**. You should use the fully qualified domain name (FQDN) of your server, in the format `<servername>.mysql.database.azure.com`, in the connection string for your application.
-* You don't update the newer gateway IP addresses in the client-side firewall to allow outbound traffic to be able to reach our new gateway rings.
-
-The following table lists the gateway IP addresses of the Azure Database for MySQL gateway for all data regions. The most up-to-date information on the gateway IP addresses for each region is maintained in the table below, in which the columns represent the following:
-
-* **Gateway IP addresses:** This column lists the current IP addresses of the gateways hosted on the latest generation of hardware. If you're provisioning a new server, we recommend that you open the client-side firewall to allow outbound traffic for the IP addresses listed in this column.
-* **Gateway IP addresses (decommissioning):** This column lists the IP addresses of the gateways hosted on an older generation of hardware that is currently being decommissioned. If you're provisioning a new server, you can ignore these IP addresses. If you have an existing server, continue to retain the outbound rule for the firewall for these IP addresses, as we haven't decommissioned them yet. If you drop the firewall rules for these IP addresses, you may get connectivity errors. Instead, you're expected to proactively add the new IP addresses listed in the Gateway IP addresses column to the outbound firewall rule as soon as you receive the notification of decommissioning. This ensures that when your server is migrated to the latest gateway hardware, there are no interruptions in connectivity to your server.
-* **Gateway IP addresses (decommissioned):** This column lists the IP addresses of the gateway rings, which are decommissioned and are no longer in operations. You can safely remove these IP addresses from your outbound firewall rule.
-
-| **Region name** | **Gateway IP addresses** | **Gateway IP addresses (decommissioning)** | **Gateway IP addresses (decommissioned)** |
-|||--|--|
-| Australia Central | 20.36.105.0 | | |
-| Australia Central2 | 20.36.113.0 | | |
-| Australia East | 13.75.149.87, 40.79.161.1 | | |
-| Australia South East | 13.73.109.251, 13.77.49.32, 13.77.48.10 | | |
-| Brazil South | 191.233.201.8, 191.233.200.16 | | 104.41.11.5 |
-| Canada Central | 13.71.168.32|| 40.85.224.249, 52.228.35.221 |
-| Canada East | 40.86.226.166, 52.242.30.154 | | |
-| Central US | 23.99.160.139, 52.182.136.37, 52.182.136.38 | 13.67.215.62 | |
-| China East | 139.219.130.35 | | |
-| China East 2 | 40.73.82.1, 52.130.120.89 | | |
-| China East 3 | 52.131.155.192 | | |
-| China North | 139.219.15.17 | | |
-| China North 2 | 40.73.50.0 | | |
-| China North 3 | 52.131.27.192 | | |
-| East Asia | 13.75.33.20, 52.175.33.150, 13.75.33.21 | | |
-| East US | 40.71.8.203, 40.71.83.113 | 40.121.158.30 | 191.238.6.43 |
-| East US 2 | 40.70.144.38, 52.167.105.38 | 52.177.185.181 | |
-| France Central | 40.79.137.0, 40.79.129.1 | | |
-| France South | 40.79.177.0 | | |
-| Germany Central | 51.4.144.100 | | |
-| Germany North | 51.116.56.0 | | |
-| Germany North East | 51.5.144.179 | | |
-| Germany West Central | 51.116.152.0 | | |
-| India Central | 104.211.96.159 | | |
-| India South | 104.211.224.146 | | |
-| India West | 104.211.160.80 | | |
-| Japan East | 40.79.192.23, 40.79.184.8 | 13.78.61.196 | |
-| Japan West | 191.238.68.11, 40.74.96.6, 40.74.96.7 | 104.214.148.156 | |
-| Korea Central | 52.231.17.13 | 52.231.32.42 | |
-| Korea South | 52.231.145.3, 52.231.151.97 | 52.231.200.86 | |
-| North Central US | 52.162.104.35, 52.162.104.36 | 23.96.178.199 | |
-| North Europe | 52.138.224.6, 52.138.224.7 | 40.113.93.91 | 191.235.193.75 |
-| South Africa North | 102.133.152.0 | | |
-| South Africa West | 102.133.24.0 | | |
-| South Central US | 104.214.16.39, 20.45.120.0 | 13.66.62.124 | 23.98.162.75 |
-| South East Asia | 40.78.233.2, 23.98.80.12 | 104.43.15.0 | |
-| Switzerland North | 51.107.56.0 | | |
-| Switzerland West | 51.107.152.0 | | |
-| UAE Central | 20.37.72.64 | | |
-| UAE North | 65.52.248.0 | | |
-| UK South | 51.140.144.32, 51.105.64.0 | 51.140.184.11 | |
-| UK West | 51.141.8.11 | | |
-| West Central US | 13.78.145.25, 52.161.100.158 | | |
-| West Europe | 13.69.105.208, 104.40.169.187 | 40.68.37.158 | 191.237.232.75 |
-| West US | 13.86.216.212, 13.86.217.212 | 104.42.238.205 | 23.99.34.75 |
-| West US2 | 13.66.136.195, 13.66.136.192, 13.66.226.202 | | |
-| West US3 | 20.150.184.2 | | |
-## Connection redirection
-
-Azure Database for MySQL supports an additional connection policy, **redirection**, that helps to reduce network latency between client applications and MySQL servers. With redirection, after the initial TCP session is established to the Azure Database for MySQL server, the server returns the backend address of the node hosting the MySQL server to the client. Thereafter, all subsequent packets flow directly to that node, bypassing the gateway. Because packets flow directly to the server, latency is reduced and throughput is improved.
-
-This feature is supported in Azure Database for MySQL servers with engine versions 5.6, 5.7, and 8.0.
-
-Support for redirection is available in the PHP [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension, developed by Microsoft, and is available on [PECL](https://pecl.php.net/package/mysqlnd_azure). See the [configuring redirection](./howto-redirection.md) article for more information on how to use redirection in your applications.
--
-> [!IMPORTANT]
-> Support for redirection in the PHP [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension is currently in preview.
-
-## Frequently asked questions
-
-### What do you need to know about this planned maintenance?
-This is a DNS change only, which makes it transparent to clients. While the IP address for the FQDN is changed in the DNS server, the local DNS cache is refreshed within 5 minutes, automatically, by the operating system. After the local DNS refresh, all new connections will connect to the new IP address, and all existing connections will remain connected to the old IP address with no interruption until the old IP addresses are fully decommissioned. The old IP addresses will be decommissioned roughly three to four weeks later; therefore, the change should have no effect on your client applications.
-
-### What are we decommissioning?
-Only gateway nodes will be decommissioned. When users connect to their servers, the first stop of the connection is the gateway node, before the connection is forwarded to the server. We're decommissioning old gateway rings (not the tenant rings where the server is running); refer to the [connectivity architecture](#connectivity-architecture) section for more clarification.
-
-### How can you validate if your connections are going to old gateway nodes or new gateway nodes?
-Ping your server's FQDN, for example ``ping xxx.mysql.database.azure.com``. If the returned IP address is one of the IPs listed under Gateway IP addresses (decommissioning) in the table above, it means your connection is going through the old gateway. Conversely, if the returned IP address is one of the IPs listed under Gateway IP addresses, it means your connection is going through the new gateway.
-
-You may also test by using [PSPing](/sysinternals/downloads/psping) or TCPPing against the database server on port 3306 from your client application, and verify that the returned IP address isn't one of the decommissioning IP addresses.
-
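-If you'd rather script this check, here's a minimal Java sketch that resolves the server FQDN (the server name is a placeholder) so you can compare the returned address against the table above:
-
-```java
-import java.net.InetAddress;
-
-public class GatewayCheck {
-    public static void main(String[] args) throws Exception {
-        // Resolve the server FQDN and print the IP address your client currently sees.
-        InetAddress addr = InetAddress.getByName("yourserver.mysql.database.azure.com");
-        System.out.println(addr.getHostAddress());
-    }
-}
-```
-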
-### How do I know when the maintenance is over and will I get another notification when old IP addresses are decommissioned?
-You'll receive an email informing you when we start the maintenance work. The maintenance can take up to one month, depending on the number of servers we need to migrate in all regions. Prepare your client to connect to the database server using the FQDN or using the new IP addresses from the table above.
-
-### What do I do if my client applications are still connecting to old gateway server?
-This indicates that your applications connect to the server using a static IP address instead of the FQDN. Review connection strings and connection pooling settings, AKS settings, and even the source code.
-
-### Is there any impact for my application connections?
-This maintenance is just a DNS change, so it's transparent to the client. Once the DNS cache is refreshed on the client (done automatically by the operating system), all new connections will connect to the new IP address, and all existing connections will keep working until the old IP address is fully decommissioned several weeks later. Retry logic isn't required for this case, but it's good practice to have retry logic configured in your application. Either use the FQDN to connect to the database server, or allowlist the new Gateway IP addresses in your application connection string.
-This maintenance operation won't drop existing connections. It only makes new connection requests go to the new gateway ring.
-
-### Can I request for a specific time window for the maintenance?
-Because the migration should be transparent, with no impact to customers' connectivity, we expect no issues for most users. Review your application proactively, and ensure that you either use the FQDN to connect to the database server or allowlist the new Gateway IP addresses in your application connection string.
-
-### I'm using private link, will my connections get affected?
-No. This is a gateway hardware decommissioning and has no relation to Private Link or private IP addresses; it only affects the public IP addresses listed under the decommissioning columns above.
---
-## Next steps
-* [Create and manage Azure Database for MySQL firewall rules using the Azure portal](./howto-manage-firewall-using-portal.md)
-* [Create and manage Azure Database for MySQL firewall rules using Azure CLI](./howto-manage-firewall-using-cli.md)
-* [Configure redirection with Azure Database for MySQL](./howto-redirection.md)
mysql Concepts Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-connectivity.md
- Title: Transient connectivity errors - Azure Database for MySQL
-description: Learn how to handle transient connectivity errors and connect efficiently to Azure Database for MySQL.
-keywords: mysql connection,connection string,connectivity issues,transient error,connection error,connect efficiently
----- Previously updated : 3/18/2020--
-# Handle transient errors and connect efficiently to Azure Database for MySQL
--
-This article describes how to handle transient errors and connect efficiently to Azure Database for MySQL.
-
-## Transient errors
-
-A transient error, also known as a transient fault, is an error that will resolve itself. Most typically, these errors manifest as a dropped connection to the database server, or as an inability to open new connections to the server. Transient errors can occur, for example, when a hardware or network failure happens, or when a new version of a PaaS service is being rolled out. Most of these events are automatically mitigated by the system in less than 60 seconds. A best practice for designing and developing applications in the cloud is to expect transient errors: assume they can happen in any component at any time, and have the appropriate logic in place to handle these situations.
-
-## Handling transient errors
-
-Transient errors should be handled using retry logic. Situations that must be considered:
-
-* An error occurs when you try to open a connection
-* An idle connection is dropped on the server side. When you try to issue a command, it can't be executed
-* An active connection that currently is executing a command is dropped.
-
-The first and second cases are fairly straightforward to handle: try to open the connection again. When you succeed, the transient error has been mitigated by the system, and you can use your Azure Database for MySQL again. We recommend waiting before retrying the connection, and backing off if the initial retries fail, so that the system can use all available resources to overcome the error situation. A good pattern to follow is (see the sketch after this list):
-
-* Wait for 5 seconds before your first retry.
-* For each subsequent retry, increase the wait exponentially, up to 60 seconds.
-* Set a max number of retries at which point your application considers the operation failed.
-
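-Here's a minimal sketch of that pattern in Java with plain JDBC; the URL and credentials are placeholders, and the limits (5-second initial wait, 60-second cap, 5 attempts) follow the guidance above:
-
-```java
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.SQLException;
-
-public class ConnectWithRetry {
-    // Open a connection with exponential backoff between attempts.
-    public static Connection open(String url, String user, String password)
-            throws SQLException, InterruptedException {
-        long waitMillis = 5000;     // wait 5 seconds before the first retry
-        final int maxAttempts = 5;  // after this, consider the operation failed
-        for (int attempt = 1; ; attempt++) {
-            try {
-                return DriverManager.getConnection(url, user, password);
-            } catch (SQLException e) {
-                if (attempt >= maxAttempts) throw e;
-                Thread.sleep(waitMillis);
-                waitMillis = Math.min(waitMillis * 2, 60000);  // back off exponentially, capped at 60 seconds
-            }
-        }
-    }
-}
-```
-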
-When a connection with an active transaction fails, it is more difficult to handle the recovery correctly. There are two cases: if the transaction was read-only in nature, it is safe to reopen the connection and retry the transaction. If, however, the transaction was also writing to the database, you must determine whether the transaction was rolled back, or whether it succeeded before the transient error happened; in the latter case, you might just not have received the commit acknowledgment from the database server.
-
-One way of doing this is to generate a unique ID on the client that is used for all the retries. You pass this unique ID as part of the transaction to the server and store it in a column with a unique constraint. This way you can safely retry the transaction: it will succeed if the previous transaction was rolled back and the client-generated unique ID does not yet exist in the system; it will fail with a duplicate key violation if the unique ID was previously stored because the previous transaction completed successfully.
-
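-A sketch of that idea in Java follows. The `orders` table and its `client_request_id` column (carrying a unique constraint) are hypothetical names for illustration; the point is that the same client-generated ID is reused on every retry:
-
-```java
-import java.sql.Connection;
-import java.sql.PreparedStatement;
-import java.sql.SQLIntegrityConstraintViolationException;
-import java.util.UUID;
-
-public class IdempotentWrite {
-    // Safely retryable write: the client-generated ID is created once and reused across retries.
-    public static void placeOrder(Connection con, String item) throws Exception {
-        con.setAutoCommit(false);
-        String requestId = UUID.randomUUID().toString();  // generate once, reuse on every retry
-        try (PreparedStatement ps = con.prepareStatement(
-                "INSERT INTO orders (client_request_id, item) VALUES (?, ?)")) {
-            ps.setString(1, requestId);
-            ps.setString(2, item);
-            ps.executeUpdate();
-            con.commit();
-        } catch (SQLIntegrityConstraintViolationException dup) {
-            // Duplicate key: a previous attempt already committed, so the retry can stop here.
-        }
-    }
-}
-```
-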
-When your program communicates with Azure Database for MySQL through third-party middleware, ask the vendor whether the middleware contains retry logic for transient errors.
-
-Make sure to test your retry logic. For example, try to execute your code while scaling the compute resources of your Azure Database for MySQL server up or down. Your application should handle the brief downtime encountered during this operation without any problems.
-
-## Connect efficiently to Azure Database for MySQL
-
-Database connections are a limited resource, so making effective use of connection pooling to access Azure Database for MySQL optimizes performance. The following sections explain how to use connection pooling or persistent connections to access Azure Database for MySQL more effectively.
-
-## Access databases by using connection pooling (recommended)
-
-Managing database connections can have a significant impact on the performance of the application as a whole. To optimize the performance of your application, the goal should be to reduce the number of times connections are established and the time spent establishing connections in key code paths. We strongly recommend using database connection pooling or persistent connections to connect to Azure Database for MySQL. Database connection pooling handles the creation, management, and allocation of database connections. When a program requests a database connection, the pool prioritizes the allocation of existing idle connections rather than creating a new one. After the program has finished using the database connection, the connection is recovered for further use rather than simply being closed.
-
-For better illustration, this article provides [a piece of sample code](./sample-scripts-java-connection-pooling.md) that uses Java as an example. For more information, see [Apache Commons DBCP](https://commons.apache.org/proper/commons-dbcp/).
-
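-As a minimal sketch of the idea with Apache Commons DBCP (linked above), the pool below hands out idle connections instead of opening new ones each time; the URL and credentials are placeholders:
-
-```java
-import java.sql.Connection;
-import org.apache.commons.dbcp2.BasicDataSource;
-
-public class PoolingExample {
-    public static void main(String[] args) throws Exception {
-        BasicDataSource ds = new BasicDataSource();
-        ds.setUrl("jdbc:mysql://yourserver.mysql.database.azure.com:3306/yourdb");
-        ds.setUsername("youruser@yourserver");
-        ds.setPassword("yourpassword");
-        ds.setMaxTotal(10);  // cap concurrent connections to stay under server limits
-
-        // getConnection() reuses an idle pooled connection when one is available.
-        try (Connection con = ds.getConnection()) {
-            // execute your query here
-        }  // close() returns the connection to the pool instead of closing it
-    }
-}
-```
-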
-> [!NOTE]
-> The server configures a timeout mechanism to close a connection that has been in an idle state for some time to free up resources. Be sure to set up the verification system to ensure the effectiveness of persistent connections when you are using them. For more information, see [Configure verification systems on the client side to ensure the effectiveness of persistent connections](concepts-connectivity.md#configure-verification-mechanisms-in-clients-to-confirm-the-effectiveness-of-persistent-connections).
-
-## Access databases by using persistent connections (recommended)
-
-The concept of persistent connections is similar to that of connection pooling. Replacing short connections with persistent connections requires only minor changes to the code, but it has a major effect in terms of improving performance in many typical application scenarios.
-
-## Access databases by using a wait and retry mechanism with short connections
-
-If you have resource limitations, we strongly recommend that you use database connection pooling or persistent connections to access databases. If your application uses short connections and experiences connection failures when it approaches the upper limit on the number of concurrent connections, you can try a wait and retry mechanism. Set an appropriate wait time, with a shorter wait time after the first attempt, and then retry the connection multiple times as needed.
-
-## Configure verification mechanisms in clients to confirm the effectiveness of persistent connections
-
-The server configures a timeout mechanism to close a connection that's been in an idle state for some time to free up resources. When the client accesses the database again, it's equivalent to creating a new connection request between the client and the server. To ensure the effectiveness of connections during the process of using them, configure a verification mechanism on the client. As shown in the following example, you can use Tomcat JDBC connection pooling to configure this verification mechanism.
-
-By setting the TestOnBorrow parameter, when there's a new request, the connection pool automatically verifies the effectiveness of any available idle connections. If such a connection is still valid, it's returned directly; otherwise, the connection pool withdraws the connection, creates a new valid connection, and returns that instead. This process ensures that the database is accessed efficiently.
-
-For information on the specific settings, see the [JDBC connection pool official introduction document](https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#Common_Attributes). You mainly need to set the following three parameters: TestOnBorrow (set to true), ValidationQuery (set to SELECT 1), and ValidationQueryTimeout (set to 1). The specific sample code is shown below:
-
-```java
-import java.sql.Connection;
-
-import org.apache.tomcat.jdbc.pool.DataSource;
-import org.apache.tomcat.jdbc.pool.PoolProperties;
-
-public class SimpleTestOnBorrowExample {
- public static void main(String[] args) throws Exception {
- PoolProperties p = new PoolProperties();
- p.setUrl("jdbc:mysql://localhost:3306/mysql");
- p.setDriverClassName("com.mysql.jdbc.Driver");
- p.setUsername("root");
- p.setPassword("password");
- // The indication of whether objects will be validated by the idle object evictor (if any).
- // If an object fails to validate, it will be dropped from the pool.
- // NOTE - for a true value to have any effect, the validationQuery or validatorClassName parameter must be set to a non-null string.
- p.setTestOnBorrow(true);
-
- // The SQL query that will be used to validate connections from this pool before returning them to the caller.
- // If specified, this query does not have to return any data, it just can't throw a SQLException.
- p.setValidationQuery("SELECT 1");
-
- // The timeout in seconds before a connection validation query fails.
- // This works by calling java.sql.Statement.setQueryTimeout(seconds) on the statement that executes the validationQuery.
- // The pool itself doesn't timeout the query, it is still up to the JDBC driver to enforce query timeouts.
- // A value less than or equal to zero will disable this feature.
- p.setValidationQueryTimeout(1);
- // set other useful pool properties.
- DataSource datasource = new DataSource();
- datasource.setPoolProperties(p);
-
- Connection con = null;
- try {
- con = datasource.getConnection();
- // execute your query here
- } finally {
- if (con!=null) try {con.close();}catch (Exception ignore) {}
- }
- }
- }
-```
-
-## Next steps
-
-* [Troubleshoot connection issues to Azure Database for MySQL](howto-troubleshoot-common-connection-issues.md)
mysql Concepts Data Access And Security Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-data-access-and-security-vnet.md
- Title: VNet service endpoints - Azure Database for MySQL
-description: 'Describes how VNet service endpoints work for your Azure Database for MySQL server.'
----- Previously updated : 7/17/2020-
-# Use Virtual Network service endpoints and rules for Azure Database for MySQL
--
-*Virtual network rules* are one firewall security feature that controls whether your Azure Database for MySQL server accepts communications that are sent from particular subnets in virtual networks. This article explains why the virtual network rule feature is sometimes your best option for securely allowing communication to your Azure Database for MySQL server.
-
-To create a virtual network rule, there must first be a [virtual network][vm-virtual-network-overview] (VNet) and a [virtual network service endpoint][vm-virtual-network-service-endpoints-overview-649d] for the rule to reference. The following picture illustrates how a Virtual Network service endpoint works with Azure Database for MySQL:
--
-> [!NOTE]
-> This feature is available in all regions of Azure where Azure Database for MySQL is deployed for General Purpose and Memory Optimized servers.
-> In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, please create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for MySQL server.
-
-You can also consider using [Private Link](concepts-data-access-security-private-link.md) for connections. Private Link provides a private IP address in your VNet for the Azure Database for MySQL server.
-
-<a name="anch-terminology-and-description-82f"></a>
-
-## Terminology and description
-
-**Virtual network:** You can have virtual networks associated with your Azure subscription.
-
-**Subnet:** A virtual network contains **subnets**. Any Azure virtual machines (VMs) that you have are assigned to subnets. One subnet can contain multiple VMs or other compute nodes. Compute nodes that are outside of your virtual network cannot access your virtual network unless you configure your security to allow access.
-
-**Virtual Network service endpoint:** A [Virtual Network service endpoint][vm-virtual-network-service-endpoints-overview-649d] is a subnet whose property values include one or more formal Azure service type names. In this article, we're interested in the type name of **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure Database for MySQL and PostgreSQL services. It's important to note that applying the **Microsoft.Sql** service tag to a VNet service endpoint configures service endpoint traffic for all Azure SQL Database, Azure Database for MySQL, and Azure Database for PostgreSQL servers on the subnet.
-
-**Virtual network rule:** A virtual network rule for your Azure Database for MySQL server is a subnet that is listed in the access control list (ACL) of your Azure Database for MySQL server. To be in the ACL for your Azure Database for MySQL server, the subnet must contain the **Microsoft.Sql** type name.
-
-A virtual network rule tells your Azure Database for MySQL server to accept communications from every node that is on the subnet.
-------
-<a name="anch-benefits-of-a-vnet-rule-68b"></a>
-
-## Benefits of a virtual network rule
-
-Until you take action, the VMs on your subnets cannot communicate with your Azure Database for MySQL server. One action that establishes the communication is the creation of a virtual network rule. The rationale for choosing the VNet rule approach requires a compare-and-contrast discussion involving the competing security options offered by the firewall.
-
-### A. Allow access to Azure services
-
-The **Connection security** pane has an **ON/OFF** button labeled **Allow access to Azure services**. The **ON** setting allows communications from all Azure IP addresses and all Azure subnets. These Azure IPs or subnets might not be owned by you. This **ON** setting is probably more open than you want your Azure Database for MySQL server to be. The virtual network rule feature offers much finer granular control.
-
-### B. IP rules
-
-The Azure Database for MySQL firewall allows you to specify IP address ranges from which communications are accepted into the Azure Database for MySQL server. This approach is fine for stable IP addresses that are outside the Azure private network. But many nodes inside the Azure private network are configured with *dynamic* IP addresses. Dynamic IP addresses might change, such as when your VM is restarted. It would be folly to specify a dynamic IP address in a firewall rule in a production environment.
-
-You can salvage the IP option by obtaining a *static* IP address for your VM. For details, see [Configure private IP addresses for a virtual machine by using the Azure portal][vm-configure-private-ip-addresses-for-a-virtual-machine-using-the-azure-portal-321w].
-
-However, the static IP approach can become difficult to manage, and it is costly when done at scale. Virtual network rules are easier to establish and to manage.
-
-<a name="anch-details-about-vnet-rules-38q"></a>
-
-## Details about virtual network rules
-
-This section describes several details about virtual network rules.
-
-### Only one geographic region
-
-Each Virtual Network service endpoint applies to only one Azure region. The endpoint does not enable other regions to accept communication from the subnet.
-
-Any virtual network rule is limited to the region that its underlying endpoint applies to.
-
-### Server-level, not database-level
-
-Each virtual network rule applies to your whole Azure Database for MySQL server, not just to one particular database on the server. In other words, a virtual network rule applies at the server level, not at the database level.
-
-### Security administration roles
-
-There is a separation of security roles in the administration of Virtual Network service endpoints. Action is required from each of the following roles:
-
-- **Network Admin:** &nbsp; Turn on the endpoint.
-- **Database Admin:** &nbsp; Update the access control list (ACL) to add the given subnet to the Azure Database for MySQL server.
-
-*Azure RBAC alternative:*
-
-The roles of Network Admin and Database Admin have more capabilities than are needed to manage virtual network rules. Only a subset of their capabilities is needed.
-
-You have the option of using [Azure role-based access control (Azure RBAC)][rbac-what-is-813s] in Azure to create a single custom role that has only the necessary subset of capabilities. The custom role could be used instead of involving either the Network Admin or the Database Admin. The surface area of your security exposure is lower if you add a user to a custom role, versus adding the user to the other two major administrator roles.
-
-> [!NOTE]
-> In some cases the Azure Database for MySQL and the VNet-subnet are in different subscriptions. In these cases you must ensure the following configurations:
-> - Both subscriptions must be in the same Azure Active Directory tenant.
-> - The user has the required permissions to initiate operations, such as enabling service endpoints and adding a VNet-subnet to the given Server.
-> - Make sure that both subscriptions have the **Microsoft.Sql** and **Microsoft.DBforMySQL** resource providers registered. For more information, refer to [resource-manager-registration][resource-manager-portal]
-
-## Limitations
-
-For Azure Database for MySQL, the virtual network rules feature has the following limitations:
-
-- A Web App can be mapped to a private IP in a VNet/subnet. Even if service endpoints are turned ON from the given VNet/subnet, connections from the Web App to the server will have an Azure public IP source, not a VNet/subnet source. To enable connectivity from a Web App to a server that has VNet firewall rules, you must Allow Azure services to access server on the server.
-
-- In the firewall for your Azure Database for MySQL, each virtual network rule references a subnet. All these referenced subnets must be hosted in the same geographic region that hosts the Azure Database for MySQL.
-
-- Each Azure Database for MySQL server can have up to 128 ACL entries for any given virtual network.
-
-- Virtual network rules apply only to Azure Resource Manager virtual networks, not to [classic deployment model][arm-deployment-model-568f] networks.
-
-- Turning ON virtual network service endpoints to Azure Database for MySQL using the **Microsoft.Sql** service tag also enables the endpoints for all Azure SQL Database, Azure Database for MySQL, and Azure Database for PostgreSQL servers on the subnet.
-
-- Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
-
-- If **Microsoft.Sql** is enabled in a subnet, it indicates that you only want to use VNet rules to connect. [Non-VNet firewall rules](concepts-firewall-rules.md) of resources in that subnet will not work.
-
-- On the firewall, IP address ranges do apply to the following networking items, but virtual network rules do not:
- - [Site-to-Site (S2S) virtual private network (VPN)][vpn-gateway-indexmd-608y]
- - On-premises via [ExpressRoute][expressroute-indexmd-744v]
-
-## ExpressRoute
-
-If your network is connected to the Azure network through use of [ExpressRoute][expressroute-indexmd-744v], each circuit is configured with two public IP addresses at the Microsoft Edge. The two IP addresses are used to connect to Microsoft Services, such as to Azure Storage, by using Azure Public Peering.
-
-To allow communication from your circuit to Azure Database for MySQL, you must create IP network rules for the public IP addresses of your circuits. In order to find the public IP addresses of your ExpressRoute circuit, open a support ticket with ExpressRoute by using the Azure portal.
-
-## Adding a VNET Firewall rule to your server without turning on VNET Service Endpoints
-
-Merely setting a VNet firewall rule does not help secure the server to the VNet. You must also turn VNet service endpoints **On** for the security to take effect. When you turn service endpoints **On**, your VNet-subnet experiences downtime until it completes the transition from **Off** to **On**. This is especially true in the context of large VNets. You can use the **IgnoreMissingServiceEndpoint** flag to reduce or eliminate the downtime during transition.
-
-You can set the **IgnoreMissingServiceEndpoint** flag by using the Azure CLI or portal.
-
-## Related articles
-- [Azure virtual networks][vm-virtual-network-overview]
-- [Azure virtual network service endpoints][vm-virtual-network-service-endpoints-overview-649d]
-
-## Next steps
-For articles on creating VNet rules, see:
-- [Create and manage Azure Database for MySQL VNet rules using the Azure portal](howto-manage-vnet-using-portal.md)
-- [Create and manage Azure Database for MySQL VNet rules using Azure CLI](howto-manage-vnet-using-cli.md)
-
-<!-- Link references, to text, Within this same GitHub repo. -->
-[arm-deployment-model-568f]: ../azure-resource-manager/management/deployment-models.md
-
-[vm-virtual-network-overview]: ../virtual-network/virtual-networks-overview.md
-
-[vm-virtual-network-service-endpoints-overview-649d]: ../virtual-network/virtual-network-service-endpoints-overview.md
-
-[vm-configure-private-ip-addresses-for-a-virtual-machine-using-the-azure-portal-321w]: ../virtual-network/ip-services/virtual-networks-static-private-ip-arm-pportal.md
-
-[rbac-what-is-813s]: ../role-based-access-control/overview.md
-
-[vpn-gateway-indexmd-608y]: ../vpn-gateway/index.yml
-
-[expressroute-indexmd-744v]: ../expressroute/index.yml
-
-[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
mysql Concepts Data Access Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-data-access-security-private-link.md
- Title: Private Link - Azure Database for MySQL
-description: Learn how Private link works for Azure Database for MySQL.
----- Previously updated : 03/10/2020--
-# Private Link for Azure Database for MySQL
--
-Private Link allows you to connect to various PaaS services in Azure via a private endpoint. Azure Private Link essentially brings Azure services inside your private Virtual Network (VNet). The PaaS resources can be accessed using the private IP address just like any other resource in the VNet.
-
-For a list of PaaS services that support Private Link functionality, review the Private Link [documentation](../private-link/index.yml). A private endpoint is a private IP address within a specific [VNet](../virtual-network/virtual-networks-overview.md) and subnet.
-
-> [!NOTE]
-> The private link feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers.
-
-## Data exfiltration prevention
-
-Data exfiltration in Azure Database for MySQL is when an authorized user, such as a database admin, extracts data from one system and moves it to another location or system outside the organization. For example, the user moves the data to a storage account owned by a third party.
-
-Consider a scenario with a user running MySQL Workbench inside an Azure Virtual Machine (VM) that is connecting to an Azure Database for MySQL server provisioned in West US. The example below shows how to limit access with public endpoints on Azure Database for MySQL using network access controls.
-
-* Disable all Azure service traffic to Azure Database for MySQL via the public endpoint by setting *Allow Azure Services* to OFF. Ensure no IP addresses or ranges are allowed to access the server either via [firewall rules](./concepts-firewall-rules.md) or [virtual network service endpoints](./concepts-data-access-and-security-vnet.md).
-
-* Only allow traffic to the Azure Database for MySQL using the Private IP address of the VM. For more information, see the articles on [Service Endpoint](concepts-data-access-and-security-vnet.md) and [VNet firewall rules](howto-manage-vnet-using-portal.md).
-
-* On the Azure VM, narrow down the scope of outgoing connections by using Network Security Groups (NSGs) and service tags as follows
-
- * Specify an NSG rule to allow traffic for *Service Tag = SQL.WestUs* - only allowing connection to Azure Database for MySQL in West US
- * Specify an NSG rule (with a higher priority) to deny traffic for *Service Tag = SQL*, denying connections to Azure Database for MySQL in all regions
-
-At the end of this setup, the Azure VM can connect only to Azure Database for MySQL in the West US region. However, the connectivity isn't restricted to a single Azure Database for MySQL. The VM can still connect to any Azure Database for MySQL in the West US region, including the databases that aren't part of the subscription. While we've reduced the scope of data exfiltration in the above scenario to a specific region, we haven't eliminated it altogether.
-
-With Private Link, you can now set up network access controls like NSGs to restrict access to the private endpoint. Individual Azure PaaS resources are then mapped to specific private endpoints. A malicious insider can only access the mapped PaaS resource (for example an Azure Database for MySQL) and no other resource.
-
-## On-premises connectivity over private peering
-
-When you connect to the public endpoint from on-premises machines, your IP address needs to be added to the IP-based firewall using a server-level firewall rule. While this model works well for allowing access to individual machines for dev or test workloads, it's difficult to manage in a production environment.
-
-With Private Link, you can enable cross-premises access to the private endpoint using [Express Route](https://azure.microsoft.com/services/expressroute/) (ER), private peering or [VPN tunnel](../vpn-gateway/index.yml). They can subsequently disable all access via public endpoint and not use the IP-based firewall.
-
-> [!NOTE]
-> In some cases the Azure Database for MySQL and the VNet-subnet are in different subscriptions. In these cases you must ensure the following configurations:
-> - Make sure that both subscriptions have the **Microsoft.DBforMySQL** resource provider registered. For more information refer [resource-manager-registration][resource-manager-portal]
-
-## Configure Private Link for Azure Database for MySQL
-
-### Creation Process
-
-Private endpoints are required to enable Private Link. This can be done using the following how-to guides.
-
-* [Azure portal](./howto-configure-privatelink-portal.md)
-* [CLI](./howto-configure-privatelink-cli.md)
-
-### Approval Process
-Once the network admin creates the private endpoint (PE), the MySQL admin can manage the private endpoint connection (PEC) to Azure Database for MySQL. This separation of duties between the network admin and the DBA is helpful for managing Azure Database for MySQL connectivity.
-
-* Navigate to the Azure Database for MySQL server resource in the Azure portal.
- * Select **Private endpoint connections** in the left pane.
- * A list of all private endpoint connections (PECs) is shown.
- * The corresponding private endpoints (PEs) that were created are listed.
--
-* Select an individual PEC from the list.
--
-* The MySQL server admin can choose to approve or reject a PEC and optionally add a short text response.
--
-* After approval or rejection, the list will reflect the appropriate state along with the response text
--
-## Use cases of Private Link for Azure Database for MySQL
-
-Clients can connect to the private endpoint from the same VNet, [peered VNet](../virtual-network/virtual-network-peering-overview.md) in same region or across regions, or via [VNet-to-VNet connection](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) across regions. Additionally, clients can connect from on-premises using ExpressRoute, private peering, or VPN tunneling. Below is a simplified diagram showing the common use cases.
--
-### Connecting from an Azure VM in Peered Virtual Network (VNet)
-Configure [VNet peering](../virtual-network/tutorial-connect-virtual-networks-powershell.md) to establish connectivity to the Azure Database for MySQL from an Azure VM in a peered VNet.
-
-### Connecting from an Azure VM in VNet-to-VNet environment
-Configure a [VNet-to-VNet VPN gateway connection](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) to establish connectivity to an Azure Database for MySQL server from an Azure VM in a different region or subscription.
-
-### Connecting from an on-premises environment over VPN
-To establish connectivity from an on-premises environment to the Azure Database for MySQL, choose and implement one of the options:
-
-* [Point-to-Site connection](../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md)
-* [Site-to-Site VPN connection](../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md)
-* [ExpressRoute circuit](../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md)
-
-## Private Link combined with firewall rules
-
-The following situations and outcomes are possible when you use Private Link in combination with firewall rules:
-
-* If you don't configure any firewall rules, then by default, no traffic will be able to access the Azure Database for MySQL.
-
-* If you configure public traffic or a service endpoint and you create private endpoints, then different types of incoming traffic are authorized by the corresponding type of firewall rule.
-
-* If you don't configure any public traffic or service endpoint and you create private endpoints, then the Azure Database for MySQL is accessible only through the private endpoints. If you don't configure public traffic or a service endpoint, after all approved private endpoints are rejected or deleted, no traffic will be able to access the Azure Database for MySQL.
-
-## Deny public access for Azure Database for MySQL
-
-If you want to rely only on private endpoints for accessing your Azure Database for MySQL, you can disable all public endpoints (that is, [firewall rules](concepts-firewall-rules.md) and [VNet service endpoints](concepts-data-access-and-security-vnet.md)) by setting the **Deny Public Network Access** configuration on the database server.
-
-When this setting is set to *YES*, only connections via private endpoints are allowed to your Azure Database for MySQL. When this setting is set to *NO*, clients can connect to your Azure Database for MySQL based on your firewall or VNet service endpoint settings. Additionally, once public network access is denied, customers cannot add or update existing firewall rules and VNet service endpoint rules.
-
-> [!Note]
-> This feature is available in all Azure regions where Azure Database for MySQL - Single server supports General Purpose and Memory Optimized pricing tiers.
->
-> This setting does not have any impact on the SSL and TLS configurations for your Azure Database for MySQL.
-
-To learn how to set the **Deny Public Network Access** for your Azure Database for MySQL from Azure portal, refer to [How to configure Deny Public Network Access](howto-deny-public-network-access.md).
-
-## Next steps
-
-To learn more about Azure Database for MySQL security features, see the following articles:
-
-* To configure a firewall for Azure Database for MySQL, see [Firewall support](./concepts-firewall-rules.md).
-
-* To learn how to configure a virtual network service endpoint for your Azure Database for MySQL, see [Configure access from virtual networks](./concepts-data-access-and-security-vnet.md).
-
-* For an overview of Azure Database for MySQL connectivity, see [Azure Database for MySQL Connectivity Architecture](./concepts-connectivity-architecture.md)
-
-<!-- Link references, to text, Within this same GitHub repo. -->
-[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
mysql Concepts Data Encryption Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-data-encryption-mysql.md
- Title: Data encryption with customer-managed key - Azure Database for MySQL
-description: Azure Database for MySQL data encryption with a customer-managed key enables you to Bring Your Own Key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data.
----- Previously updated : 01/13/2020--
-# Azure Database for MySQL data encryption with a customer-managed key
--
-Data encryption with customer-managed keys for Azure Database for MySQL enables you to bring your own key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you're responsible for, and in full control of, a key's lifecycle, key usage permissions, and auditing of operations on keys.
-
-Data encryption with customer-managed keys for Azure Database for MySQL is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault](../key-vault/general/security-features.md) instance. The key encryption key (KEK) and data encryption key (DEK) are described in more detail later in this article.
-
-Key Vault is a cloud-based, external key management system. It's highly available and provides scalable, secure storage for RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs). It doesn't allow direct access to a stored key, but does provide services of encryption and decryption to authorized entities. Key Vault can generate the key, import it, or [have it transferred from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md).
-
-> [!NOTE]
-> This feature is supported only on "General Purpose storage v2 (supports up to 16 TB)" storage, available in the General Purpose and Memory Optimized pricing tiers. Refer to [Storage concepts](concepts-pricing-tiers.md#storage) for more details. For other limitations, refer to the [limitations](concepts-data-encryption-mysql.md#limitations) section.
-
-## Benefits
-
-Data encryption with customer-managed keys for Azure Database for MySQL provides the following benefits:
-
-* Data access is fully controlled by you, through your ability to remove the key and make the database inaccessible
-* Full control over the key-lifecycle, including rotation of the key to align with corporate policies
-* Central management and organization of keys in Azure Key Vault
-* Ability to implement separation of duties between security officers, DBAs, and system administrators
--
-## Terminology and description
-
-**Data encryption key (DEK)**: A symmetric AES256 key used to encrypt a partition or block of data. Encrypting each block of data with a different key makes crypto analysis attacks more difficult. Access to DEKs is needed by the resource provider or application instance that is encrypting and decrypting a specific block. When you replace a DEK with a new key, only the data in its associated block must be re-encrypted with the new key.
-
-**Key encryption key (KEK)**: An encryption key used to encrypt the DEKs. A KEK that never leaves Key Vault allows the DEKs themselves to be encrypted and controlled. The entity that has access to the KEK might be different than the entity that requires the DEK. Since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which DEKs can be effectively deleted by deletion of the KEK.
-
-The DEKs, encrypted with the KEKs, are stored separately. Only an entity with access to the KEK can decrypt these DEKs. For more information, see [Security in encryption at rest](../security/fundamentals/encryption-atrest.md).
-
-## How data encryption with a customer-managed key works
--
-For a MySQL server to use customer-managed keys stored in Key Vault for encryption of the DEK, a Key Vault administrator gives the following access rights to the server:
-
-* **get**: For retrieving the public part and properties of the key in the key vault.
-* **wrapKey**: To be able to encrypt the DEK. The encrypted DEK is stored in the Azure Database for MySQL.
-* **unwrapKey**: To be able to decrypt the DEK. Azure Database for MySQL needs the decrypted DEK to encrypt/decrypt the data.
-
-The key vault administrator can also [enable logging of Key Vault audit events](../azure-monitor/insights/key-vault-insights-overview.md), so they can be audited later.
-
-When the server is configured to use the customer-managed key stored in the key vault, the server sends the DEK to the key vault for encryption. Key Vault returns the encrypted DEK, which is stored in the user database. Similarly, when needed, the server sends the protected DEK to the key vault for decryption. Auditors can use Azure Monitor to review Key Vault audit event logs, if logging is enabled.
-
-## Requirements for configuring data encryption for Azure Database for MySQL
-
-The following are requirements for configuring Key Vault:
-
-* Key Vault and Azure Database for MySQL must belong to the same Azure Active Directory (Azure AD) tenant. Cross-tenant Key Vault and server interactions aren't supported. Moving Key Vault resource afterwards requires you to reconfigure the data encryption.
-* Enable the [soft-delete](../key-vault/general/soft-delete-overview.md) feature on the key vault with retention period set to **90 days**, to protect from data loss if an accidental key (or Key Vault) deletion happens. Soft-deleted resources are retained for 90 days by default, unless the retention period is explicitly set to <=90 days. The recover and purge actions have their own permissions associated in a Key Vault access policy. The soft-delete feature is off by default, but you can enable it through PowerShell or the Azure CLI (note that you can't enable it through the Azure portal).
-* Enable the [Purge Protection](../key-vault/general/soft-delete-overview.md#purge-protection) feature on the key vault with retention period set to **90 days**. Purge protection can only be enabled once soft-delete is enabled. It can be turned on via Azure CLI or PowerShell. When purge protection is on, a vault or an object in the deleted state cannot be purged until the retention period has passed. Soft-deleted vaults and objects can still be recovered, ensuring that the retention policy will be followed.
-* Grant the Azure Database for MySQL access to the key vault with the get, wrapKey, and unwrapKey permissions by using its unique managed identity. In the Azure portal, the unique 'Service' identity is automatically created when data encryption is enabled on the MySQL. See [Configure data encryption for MySQL](howto-data-encryption-portal.md) for detailed, step-by-step instructions when you're using the Azure portal.
-
-The following are requirements for configuring the customer-managed key:
-
-* The customer-managed key to be used for encrypting the DEK can be only asymmetric, RSA 2048.
-* The key activation date (if set) must be a date and time in the past. The expiration date must not be set.
-* The key must be in the *Enabled* state.
-* The key must have [soft delete](../key-vault/general/soft-delete-overview.md) enabled, with the retention period set to **90 days**. This implicitly sets the required key attribute recoveryLevel: "Recoverable". If the retention is set to fewer than 90 days, the recoveryLevel is "CustomizedRecoverable", which doesn't meet the requirement, so make sure the retention period is set to **90 days**.
-* The key must have [purge protection enabled](../key-vault/general/soft-delete-overview.md#purge-protection).
-* If you're [importing an existing key](/rest/api/keyvault/keys/import-key/import-key) into the key vault, make sure to provide it in the supported file formats (`.pfx`, `.byok`, `.backup`).
-
-## Recommendations
-
-When you're using data encryption by using a customer-managed key, here are recommendations for configuring Key Vault:
-
-* Set a resource lock on Key Vault to control who can delete this critical resource and prevent accidental or unauthorized deletion.
-* Enable auditing and reporting on all encryption keys. Key Vault provides logs that are easy to inject into other security information and event management tools. Azure Monitor Log Analytics is one example of a service that's already integrated.
-* Ensure that Key Vault and Azure Database for MySQL reside in the same region, to ensure faster access for DEK wrap and unwrap operations.
-* Lock down the Azure KeyVault to only **private endpoint and selected networks** and allow only *trusted Microsoft* services to secure the resources.
-
- :::image type="content" source="media/concepts-data-access-and-security-data-encryption/keyvault-trusted-service.png" alt-text="trusted-service-with-AKV":::
-
-Here are recommendations for configuring a customer-managed key:
-
-* Keep a copy of the customer-managed key in a secure place, or escrow it to the escrow service.
-
-* If Key Vault generates the key, create a key backup before using the key for the first time. You can only restore the backup to Key Vault. For more information about the backup command, see [Backup-AzKeyVaultKey](/powershell/module/az.keyVault/backup-azkeyVaultkey).
-
-## Inaccessible customer-managed key condition
-
-When you configure data encryption with a customer-managed key in Key Vault, continuous access to this key is required for the server to stay online. If the server loses access to the customer-managed key in Key Vault, the server begins denying all connections within 10 minutes. The server issues a corresponding error message, and changes the server state to *Inaccessible*. Some of the reasons why the server can reach this state are:
-
-* If you create a Point In Time Restore server for your Azure Database for MySQL, which has data encryption enabled, the newly created server will be in an *Inaccessible* state. You can fix this through the [Azure portal](howto-data-encryption-portal.md#using-data-encryption-for-restore-or-replica-servers) or [CLI](howto-data-encryption-cli.md#using-data-encryption-for-restore-or-replica-servers).
-* If you create a read replica for your Azure Database for MySQL, which has data encryption enabled, the replica server will be in an *Inaccessible* state. You can fix this through the [Azure portal](howto-data-encryption-portal.md#using-data-encryption-for-restore-or-replica-servers) or [CLI](howto-data-encryption-cli.md#using-data-encryption-for-restore-or-replica-servers).
-* If you delete the key vault, the Azure Database for MySQL will be unable to access the key and will move to an *Inaccessible* state. Recover the [Key Vault](../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the server *Available*.
-* If you delete the key from the key vault, the Azure Database for MySQL will be unable to access the key and will move to an *Inaccessible* state. Recover the [key](../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the server *Available*.
-* If the key stored in Azure Key Vault expires, the key will become invalid and the Azure Database for MySQL will transition into an *Inaccessible* state. Extend the key expiry date using the [CLI](/cli/azure/keyvault/key#az-keyvault-key-set-attributes) and then revalidate the data encryption to make the server *Available*.
-
-### Accidental key access revocation from Key Vault
-
-It might happen that someone with sufficient access rights to Key Vault accidentally disables server access to the key by:
-
-* Revoking the key vault's `get`, `wrapKey`, and `unwrapKey` permissions from the server.
-* Deleting the key.
-* Deleting the key vault.
-* Changing the key vault's firewall rules.
-* Deleting the managed identity of the server in Azure AD.
-
-## Monitor the customer-managed key in Key Vault
-
-To monitor the database state, and to enable alerting for the loss of transparent data encryption protector access, configure the following Azure features:
-
-* [Azure Resource Health](../service-health/resource-health-overview.md): An inaccessible database that has lost access to the customer key shows as "Inaccessible" after the first connection to the database has been denied.
-* [Activity log](../service-health/alerts-activity-log-service-notifications-portal.md): When access to the customer key in the customer-managed Key Vault fails, entries are added to the activity log. You can reinstate access as soon as possible, if you create alerts for these events.
-
-* [Action groups](../azure-monitor/alerts/action-groups.md): Define these groups to send you notifications and alerts based on your preferences.
-
-## Restore and replicate with a customer's managed key in Key Vault
-
-After Azure Database for MySQL is encrypted with a customer's managed key stored in Key Vault, any newly created copy of the server is also encrypted. You can make this new copy either through a local or geo-restore operation, or through read replicas. However, the copy can be changed to reflect a new customer's managed key for encryption. When the customer-managed key is changed, old backups of the server start using the latest key.
-
-To avoid issues while setting up customer-managed data encryption during restore or read replica creation, it's important to follow these steps on the source and restored/replica servers:
-
-* Initiate the restore or read replica creation process from the source Azure Database for MySQL.
-* Keep the newly created server (restored/replica) in an inaccessible state, because its unique identity hasn't yet been given permissions to Key Vault.
-* On the restored/replica server, revalidate the customer-managed key in the data encryption settings to ensure that the newly created server is given wrap and unwrap permissions to the key stored in Key Vault.
-
-## Limitations
-
-For Azure Database for MySQL, the support for encryption of data at rest using a customer-managed key (CMK) has a few limitations:
-
-* Support for this functionality is limited to **General Purpose** and **Memory Optimized** pricing tiers.
-* This feature is only supported in regions and servers, which support general purpose storage v2 (up to 16 TB). For the list of Azure regions supporting storage up to 16 TB, refer to the storage section in documentation [here](concepts-pricing-tiers.md#storage)
-
- > [!NOTE]
- > - For all new MySQL servers created in the [Azure regions](concepts-pricing-tiers.md#storage) supporting general purpose storage v2, encryption with customer-managed keys is **available**. A Point In Time Restored (PITR) server or read replica will not qualify, though in theory they are 'new'.
- > - To validate whether your provisioned server is on general purpose storage v2, you can go to the pricing tier blade in the portal and see the max storage size supported by your provisioned server. If you can move the slider up to 4TB, your server is on general purpose storage v1 and will not support encryption with customer-managed keys. However, the data is encrypted using service-managed keys at all times. Please reach out to AskAzureDBforMySQL@service.microsoft.com if you have any questions.
-
-* Encryption is only supported with RSA 2048 cryptographic key.
-
-## Next steps
-
-* Learn how to set up data encryption with a customer-managed key for your Azure database for MySQL by using the [Azure portal](howto-data-encryption-portal.md) and [Azure CLI](howto-data-encryption-cli.md).
-* Learn about the storage type support for [Azure Database for MySQL - Single Server](concepts-pricing-tiers.md#storage)
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-data-in-replication.md
- Title: Data-in Replication - Azure Database for MySQL
-description: Learn about using Data-in Replication to synchronize from an external server into the Azure Database for MySQL service.
----- Previously updated : 04/08/2021--
-# Replicate data into Azure Database for MySQL
--
-Data-in Replication allows you to synchronize data from an external MySQL server into the Azure Database for MySQL service. The external server can be on-premises, in virtual machines, or a database service hosted by other cloud providers. Data-in Replication is based on the binary log (binlog) file position-based or GTID-based replication native to MySQL. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
-
-## When to use Data-in Replication
-
-The main scenarios to consider about using Data-in Replication are:
-- **Hybrid Data Synchronization:** With Data-in Replication, you can keep data synchronized between your on-premises servers and Azure Database for MySQL. This synchronization is useful for creating hybrid applications. This method is appealing when you have an existing local database server but want to move the data to a region closer to end users.
-- **Multi-Cloud Synchronization:** For complex cloud solutions, use Data-in Replication to synchronize data between Azure Database for MySQL and different cloud providers, including virtual machines and database services hosted in those clouds.
-
-For migration scenarios, use the [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) (DMS).
-
-## Limitations and considerations
-
-### Data not replicated
-
-The [*mysql system database*](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) on the source server isn't replicated. In addition, changes to accounts and permissions on the source server aren't replicated. If you create an account on the source server and this account needs to access the replica server, manually create the same account on the replica server. To understand what tables are contained in the system database, see the [MySQL manual](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html).
-
-### Filtering
-
-To skip replicating tables from your source server (hosted on-premises, in virtual machines, or a database service hosted by other cloud providers), the `replicate_wild_ignore_table` parameter is supported. Optionally, update this parameter on the replica server hosted in Azure using the [Azure portal](howto-server-parameters.md) or [Azure CLI](howto-configure-server-parameters-using-cli.md).
-
-To learn more about this parameter, review the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#option_mysqld_replicate-wild-ignore-table).
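-
-As a minimal sketch, the parameter can be set on the replica with the Azure CLI; the resource group, server name, and table pattern below are placeholders:
-
-```bash
-# Skip replication of every table in the db1 schema whose name starts with "temp"
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name replicate_wild_ignore_table --value "db1.temp%"
-```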
-
-### Supported in General Purpose or Memory Optimized tier only
-
-Data-in Replication is only supported in General Purpose and Memory Optimized pricing tiers.
-
->[!Note]
->GTID is supported on versions 5.7 and 8.0 and only on servers that support storage up to 16 TB (General purpose storage v2).
-
-### Requirements
-
-- The source server version must be at least MySQL version 5.6.
-- The source and replica server versions must be the same. For example, both must be MySQL version 5.6 or both must be MySQL version 5.7.
-- Each table must have a primary key.
-- The source server should use the MySQL InnoDB engine.
-- The user must have permissions to configure binary logging and create new users on the source server.
-- If the source server has SSL enabled, ensure the SSL CA certificate provided for the domain has been included in the `mysql.az_replication_change_master` or `mysql.az_replication_change_master_with_gtid` stored procedure. Refer to the following [examples](./howto-data-in-replication.md#link-source-and-replica-servers-to-start-data-in-replication) and the `master_ssl_ca` parameter; see also the sketch after this list.
-- Ensure that the source server's IP address has been added to the Azure Database for MySQL replica server's firewall rules. Update firewall rules using the [Azure portal](./howto-manage-firewall-using-portal.md) or [Azure CLI](./howto-manage-firewall-using-cli.md).
-- Ensure that the machine hosting the source server allows both inbound and outbound traffic on port 3306.
-- Ensure that the source server has a **public IP address**, that DNS is publicly accessible, or that the source server has a fully qualified domain name (FQDN).
-
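-
-As a sketch (not the authoritative procedure reference), linking the replica to the source might look like the following, run against the Azure replica server; every connection value below is a placeholder:
-
-```bash
-# Link the replica to the source server (binlog position-based), then start replication
-mysql -h mydemoreplica.mysql.database.azure.com -u myadmin@mydemoreplica -p -e "
-CALL mysql.az_replication_change_master(
-    'source.example.com',   -- master_host
-    'syncuser',             -- master_user
-    'SecretPwd!',           -- master_password
-    3306,                   -- master_port
-    'mysql-bin.000002',     -- master_log_file
-    120,                    -- master_log_pos
-    '');                    -- master_ssl_ca: the CA certificate content if SSL is enabled, empty string otherwise
-CALL mysql.az_replication_start;"
-```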
-## Next steps
-
-- Learn how to [set up data-in replication](howto-data-in-replication.md)
-- Learn about [replicating in Azure with read replicas](concepts-read-replicas.md)
-- Learn about how to [migrate data with minimal downtime using DMS](howto-migrate-online.md)
mysql Concepts Database Application Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-database-application-development.md
- Title: Application development - Azure Database for MySQL
-description: Introduces design considerations that a developer should follow when writing application code to connect to Azure Database for MySQL
-
- Previously updated: 3/18/2020
-
-# Application development overview for Azure Database for MySQL
--
-This article discusses design considerations that a developer should follow when writing application code to connect to Azure Database for MySQL.
-
-> [!TIP]
-> For a tutorial showing you how to create a server, create a server-based firewall, view server properties, create a database, and connect and query by using MySQL Workbench and mysql.exe, see [Design your first Azure Database for MySQL database](tutorial-design-database-using-portal.md).
-
-## Language and platform
-There are code samples available for various programming languages and platforms. You can find links to the code samples at:
-[Connectivity libraries used to connect to Azure Database for MySQL](concepts-connection-libraries.md)
-
-## Tools
-Azure Database for MySQL uses the MySQL community version and is compatible with common MySQL management tools such as MySQL Workbench, utilities such as mysql.exe, [phpMyAdmin](https://www.phpmyadmin.net/), [Navicat](https://www.navicat.com/products/navicat-for-mysql), [dbForge Studio for MySQL](https://www.devart.com/dbforge/mysql/studio/), and others. You can also use the Azure portal, Azure CLI, and REST APIs to interact with the database service.
-
-## Resource limitations
-Azure Database for MySQL manages the resources available to a server by using two different mechanisms:
-- Resource governance.
-- Enforcement of limits.
-
-## Security
-Azure Database for MySQL provides resources for limiting access, protecting data, configuring users and roles, and monitoring activities on a MySQL database.
-
-## Authentication
-Azure Database for MySQL supports server authentication of users and logins.
-
-## Resiliency
-When a transient error occurs while connecting to a MySQL database, your code should retry the call. We recommend that the retry logic use back-off delays so that it does not overwhelm the database with multiple clients retrying simultaneously. A minimal sketch follows the list below.
-
-- Code samples: For code samples that illustrate retry logic, see samples for the language of your choice at: [Connectivity libraries used to connect to Azure Database for MySQL](concepts-connection-libraries.md)
-
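-
-As a minimal shell sketch of retry with back-off (the server and user names are placeholders, and `MYSQL_PWD` is assumed to be exported so mysql doesn't prompt for a password):
-
-```bash
-#!/bin/bash
-# Probe the server with retries, exponential back-off, and a little random jitter
-for attempt in 1 2 3 4 5; do
-    if mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -e "SELECT 1;"; then
-        echo "Connected on attempt $attempt"
-        break
-    fi
-    sleep $(( (2 ** attempt) + RANDOM % 3 ))  # back off before the next attempt
-done
-```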
-## Managing connections
-Database connections are a limited resource, so we recommend sensible use of connections when accessing your MySQL database to achieve better performance.
-- Access the database by using connection pooling or persistent connections.
-- Access the database by using a short connection life span.
-- Use retry logic in your application at the point of the connection attempt, to catch failures resulting from concurrent connections having reached the maximum allowed. In the retry logic, set a short delay, and then wait a random amount of time before the additional connection attempts.
mysql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-firewall-rules.md
- Title: Firewall rules - Azure Database for MySQL
-description: Learn about using firewall rules to enable connections to your Azure Database for MySQL server.
-
- Previously updated: 07/17/2020
-
-# Azure Database for MySQL server firewall rules
--
-Firewalls prevent all access to your database server until you specify which computers have permission. The firewall grants access to the server based on the originating IP address of each request.
-
-To configure a firewall, create firewall rules that specify ranges of acceptable IP addresses. You can create firewall rules at the server level.
-
-**Firewall rules:** These rules enable clients to access your entire Azure Database for MySQL server, that is, all the databases within the same logical server. Server-level firewall rules can be configured by using the Azure portal or Azure CLI commands. To create server-level firewall rules, you must be the subscription owner or a subscription contributor.
-
-## Firewall overview
-All database access to your Azure Database for MySQL server is by default blocked by the firewall. To begin using your server from another computer, you need to specify one or more server-level firewall rules to enable access to your server. Use the firewall rules to specify which IP address ranges from the Internet to allow. Access to the Azure portal website itself is not impacted by the firewall rules.
-
-Connection attempts from the Internet and Azure must first pass through the firewall before they can reach your Azure Database for MySQL database, as shown in the following diagram:
--
-## Connecting from the Internet
-Server-level firewall rules apply to all databases on the Azure Database for MySQL server.
-
-If the IP address of the request is within one of the ranges specified in the server-level firewall rules, then the connection is granted.
-
-If the IP address of the request is outside the ranges specified in any of the server-level firewall rules, then the connection request fails.
-
-## Connecting from Azure
-It is recommended that you find the outgoing IP address of any application or service and explicitly allow access to those individual IP addresses or ranges. For example, you can find the outgoing IP address of an Azure App Service or use a public IP tied to a virtual machine or other resource (see below for info on connecting with a virtual machine's private IP over service endpoints).
-
-If a fixed outgoing IP address isn't available for your Azure service, you can consider enabling connections from all Azure datacenter IP addresses. This setting can be enabled from the Azure portal by setting the **Allow access to Azure services** option to **ON** from the **Connection security** pane and then selecting **Save**. From the Azure CLI, a firewall rule setting with starting and ending address equal to 0.0.0.0 does the equivalent. If the connection attempt is not allowed, the request does not reach the Azure Database for MySQL server.
-
-> [!IMPORTANT]
-> The **Allow access to Azure services** option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
->
--
-### Connecting from a VNet
-To connect securely to your Azure Database for MySQL server from a VNet, consider using [VNet service endpoints](./concepts-data-access-and-security-vnet.md).
-
-## Programmatically managing firewall rules
-In addition to the Azure portal, firewall rules can be managed programmatically by using the Azure CLI. See also [Create and manage Azure Database for MySQL firewall rules using Azure CLI](./howto-manage-firewall-using-cli.md).
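-
-For example, the following commands create a rule for a specific client IP range and the special 0.0.0.0 rule that corresponds to the portal's **Allow access to Azure services** option; the resource group and server names are placeholders:
-
-```bash
-# Allow a specific client IP range
-az mysql server firewall-rule create --resource-group myresourcegroup \
-    --server-name mydemoserver --name AllowMyClients \
-    --start-ip-address 203.0.113.0 --end-ip-address 203.0.113.255
-
-# Equivalent of "Allow access to Azure services"
-az mysql server firewall-rule create --resource-group myresourcegroup \
-    --server-name mydemoserver --name AllowAllAzureIps \
-    --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
-```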
-
-## Troubleshooting firewall issues
-Consider the following points when access to the Microsoft Azure Database for MySQL service does not behave as expected:
-
-* **Changes to the allow list have not taken effect yet:** There may be as much as a five-minute delay for changes to the Azure Database for MySQL Server firewall configuration to take effect.
-
-* **The login is not authorized or an incorrect password was used:** If a login does not have permissions on the Azure Database for MySQL server or the password used is incorrect, the connection to the Azure Database for MySQL server is denied. Creating a firewall setting only provides clients with an opportunity to attempt connecting to your server; each client must provide the necessary security credentials.
-
-* **Dynamic IP address:** If you have an Internet connection with dynamic IP addressing and you are having trouble getting through the firewall, you can try one of the following solutions:
-
- * Ask your Internet Service Provider (ISP) for the IP address range assigned to your client computers that access the Azure Database for MySQL server, and then add the IP address range as a firewall rule.
-
- * Get static IP addressing instead for your client computers, and then add the IP addresses as firewall rules.
-
-* **Server's IP appears to be public:** Connections to the Azure Database for MySQL server are routed through a publicly accessible Azure gateway. However, the actual server IP is protected by the firewall. For more information, visit the [connectivity architecture article](concepts-connectivity-architecture.md).
-
-* **Cannot connect from Azure resource with allowed IP:** Check whether the **Microsoft.Sql** service endpoint is enabled for the subnet you are connecting from. If **Microsoft.Sql** is enabled, it indicates that you only want to use [VNet service endpoint rules](concepts-data-access-and-security-vnet.md) on that subnet.
-
- For example, you may see the following error if you are connecting from an Azure VM in a subnet that has **Microsoft.Sql** enabled but has no corresponding VNet rule:
- `FATAL: Client from Azure Virtual Networks is not allowed to access the server`
-
-* **Firewall rules are not available in IPv6 format:** The firewall rules must be in IPv4 format. If you specify firewall rules in IPv6 format, you'll see a validation error.
-
-## Next steps
-
-* [Create and manage Azure Database for MySQL firewall rules using the Azure portal](./howto-manage-firewall-using-portal.md)
-* [Create and manage Azure Database for MySQL firewall rules using Azure CLI](./howto-manage-firewall-using-cli.md)
-* [VNet service endpoints in Azure Database for MySQL](./concepts-data-access-and-security-vnet.md)
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-high-availability.md
- Title: High availability - Azure Database for MySQL
-description: This article provides information on high availability in Azure Database for MySQL
-
- Previously updated: 7/7/2020
-
-# High availability in Azure Database for MySQL
--
-The Azure Database for MySQL service provides a guaranteed high level of availability with the financially backed service level agreement (SLA) of [99.99%](https://azure.microsoft.com/support/legal/sla/mysql) uptime. Azure Database for MySQL provides high availability during planned events, such as user-initiated scale compute operations, and also during unplanned events, such as underlying hardware, software, or network failures. Azure Database for MySQL can quickly recover from most critical circumstances, ensuring virtually no application downtime when using this service.
-
-Azure Database for MySQL is suitable for running mission critical databases that require high uptime. Built on Azure architecture, the service has inherent high availability, redundancy, and resiliency capabilities to mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components.
-
-## Components in Azure Database for MySQL
-
-| **Component** | **Description**|
-| --- | --- |
-| <b>MySQL Database Server | Azure Database for MySQL provides security, isolation, resource safeguards, and fast restart capability for database servers. These capabilities facilitate operations such as scaling and database server recovery after an outage, which complete in 60-120 seconds depending on the transactional activity on the database. <br/> Data modifications in the database server typically occur in the context of a database transaction. All database changes are recorded synchronously in the form of write-ahead logs (ib_log) on Azure Storage, which is attached to the database server. During the database [checkpoint](https://dev.mysql.com/doc/refman/5.7/en/innodb-checkpoints.html) process, data pages from the database server memory are also flushed to the storage. |
-| <b>Remote Storage | All MySQL physical data files and log files are stored on Azure Storage, which is architected to store three copies of data within a region to ensure data redundancy, availability, and reliability. The storage layer is also independent of the database server. It can be detached from a failed database server and reattached to a new database server within 60 seconds. Also, Azure Storage continuously monitors for any storage faults. If a block corruption is detected, it is automatically fixed by instantiating a new storage copy. |
-| <b>Gateway | The Gateway acts as a database proxy and routes all client connections to the database server. |
-
-## Planned downtime mitigation
-Azure Database for MySQL is architected to provide high availability during planned downtime operations.
--
-Here are some planned maintenance scenarios:
-
-| **Scenario** | **Description**|
-| --- | --- |
-| <b>Compute scale up/down | When the user performs a compute scale up/down operation, a new database server is provisioned using the scaled compute configuration. In the old database server, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, and then it is shut down. The storage is then detached from the old database server and attached to the new database server. When the client application retries the connection, or tries to make a new connection, the Gateway directs the connection request to the new database server.|
-| <b>Scaling Up Storage | Scaling up the storage is an online operation and does not interrupt the database server.|
-| <b>New Software Deployment (Azure) | New feature rollouts or bug fixes automatically happen as part of the service's planned maintenance. For more information, refer to the [documentation](concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
-| <b>Minor version upgrades | Azure Database for MySQL automatically patches database servers to the minor version determined by Azure. This happens as part of the service's planned maintenance. During planned maintenance, there can be database server restarts or failovers, which might lead to brief unavailability of the database servers for end users. Azure Database for MySQL servers run in containers, so database server restarts are typically quick, expected to complete in 60-120 seconds. The entire planned maintenance event, including each server restart, is carefully monitored by the engineering team. The server failover time depends on the database recovery time, which can cause the database to take longer to come online if there is heavy transactional activity on the server at the time of failover. To avoid longer restart times, it is recommended to avoid any long-running transactions (bulk loads) during planned maintenance events. For more information, refer to the [documentation](concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
--
-## Unplanned downtime mitigation
-
-Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in 60-120 seconds. The remote storage is automatically attached to the new database server. MySQL engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While an unplanned downtime cannot be avoided, Azure Database for MySQL mitigates the downtime by automatically performing recovery operations at both database server and storage layers without requiring human intervention.
---
-### Unplanned downtime: failure scenarios and service recovery
-Here are some failure scenarios and how Azure Database for MySQL automatically recovers:
-
-| **Scenario** | **Automatic recovery** |
-| - | - |
-| <B>Database server failure | If the database server is down because of some underlying hardware fault, active connections are dropped, and any inflight transactions are aborted. A new database server is automatically deployed, and the remote data storage is attached to the new database server. After the database recovery is complete, clients can connect to the new database server through the Gateway. <br /> <br /> Applications using the MySQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. When the application retries, the Gateway transparently redirects the connection to the newly created database server. |
-| <B>Storage failure | Applications do not see any impact for any storage-related issues such as a disk failure or a physical block corruption. As the data is stored in 3 copies, the copy of the data is served by the surviving storage. Block corruptions are automatically corrected. If a copy of data is lost, a new copy of the data is automatically created. |
-
-Here are some failure scenarios that require user action to recover:
-
-| **Scenario** | **Recovery plan** |
-| - | - |
-| <b> Region failure | Failure of a region is a rare event. However, if you need protection from a region failure, you can configure one or more read replicas in other regions for disaster recovery (DR). (See [this article](howto-read-replicas-portal.md) about creating and managing read replicas for details). In the event of a region-level failure, you can manually promote the read replica configured in the other region to be your production database server; see the sketch after this table. |
-| <b> Logical/user errors | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](concepts-backup.md) (PITR), by restoring and recovering the data until the time just before the error had occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [mysqldump](concepts-migrate-dump-restore.md), and then use [restore](concepts-migrate-dump-restore.md#restore-your-mysql-database-using-command-line-or-mysql-workbench) to restore those tables into your database. |
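-
-Promotion works by stopping replication on the cross-region read replica, which makes it a standalone, writable server. A sketch with the Azure CLI, with placeholder names:
-
-```bash
-# Stop replication to promote the DR read replica to a standalone server
-az mysql server replica stop --name mydemoreplicaserver --resource-group myresourcegroup
-```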
---
-## Summary
-
-Azure Database for MySQL provides fast restart capability of database servers, redundant storage, and efficient routing from the Gateway. For additional data protection, you can configure backups to be geo-replicated, and also deploy one or more read replicas in other regions. With inherent high availability capabilities, Azure Database for MySQL protects your databases from most common outages, and offers an industry-leading, financially backed [99.99% uptime SLA](https://azure.microsoft.com/support/legal/sla/mysql). All these availability and reliability capabilities enable Azure to be the ideal platform to run your mission-critical applications.
-
-## Next steps
-- Learn about [Azure regions](../availability-zones/az-overview.md)
-- Learn about [handling transient connectivity errors](concepts-connectivity.md)
-- Learn how to [replicate your data with read replicas](howto-read-replicas-portal.md)
mysql Concepts Infrastructure Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-infrastructure-double-encryption.md
- Title: Infrastructure double encryption - Azure Database for MySQL
-description: Learn about using Infrastructure double encryption to add a second layer of encryption with service-managed keys.
-
- Previously updated: 6/30/2020
-
-# Azure Database for MySQL Infrastructure double encryption
--
-Azure Database for MySQL uses storage [encryption of data at-rest](concepts-security.md#at-rest) using Microsoft's managed keys. Data, including backups, is encrypted on disk, and this encryption is always on and can't be disabled. The encryption uses a FIPS 140-2 validated cryptographic module and an AES 256-bit cipher for Azure storage encryption.
-
-Infrastructure double encryption adds a second layer of encryption using service-managed keys. It uses a FIPS 140-2 validated cryptographic module, but with a different encryption algorithm. This provides an additional layer of protection for your data at rest. The key used in Infrastructure double encryption is also managed by the Azure Database for MySQL service. Infrastructure double encryption is not enabled by default, since the additional layer of encryption can have a performance impact.
-
-> [!NOTE]
-> Like data encryption at rest, this feature is supported only on "General Purpose storage v2 (up to 16 TB)" storage, available in the General Purpose and Memory Optimized pricing tiers. Refer to [Storage concepts](concepts-pricing-tiers.md#storage) for more details. For other limitations, refer to the [limitations](concepts-infrastructure-double-encryption.md#limitations) section.
-
-Infrastructure-layer encryption has the benefit of being implemented at the layer closest to the storage device or network wires. Azure Database for MySQL implements the two layers of encryption using service-managed keys. Although still technically in the service layer, it is very close to the hardware that stores the data at rest. You can still optionally enable data encryption at rest using a [customer-managed key](concepts-data-encryption-mysql.md) for the provisioned MySQL server.
-
-Implementation at the infrastructure layer also supports a diversity of keys. Infrastructure must be aware of different clusters of machines and networks. As such, different keys are used to minimize the blast radius of infrastructure attacks and a variety of hardware and network failures.
-
-> [!NOTE]
-> Using Infrastructure double encryption will have 5-10% impact on the throughput of your Azure Database for MySQL server due to the additional encryption process.
-
-## Benefits
-
-Infrastructure double encryption for Azure Database for MySQL provides the following benefits:
-
-1. **Additional diversity of crypto implementation** - The planned move to hardware-based encryption will further diversify the implementations by providing a hardware-based implementation in addition to the software-based implementation.
-2. **Implementation errors** - Two layers of encryption at the infrastructure layer protect against any errors in caching or memory management in higher layers that could expose plaintext data. Additionally, the two layers also protect against errors in the implementation of the encryption in general.
-
-The combination of these provides strong protection against common threats and weaknesses used to attack cryptography.
-
-## Supported scenarios with infrastructure double encryption
-
-The encryption capabilities that are provided by Azure Database for MySQL can be used together. Below is a summary of the various scenarios that you can use:
-
-| Scenario | Default encryption | Infrastructure double encryption | Data encryption using customer-managed keys |
-|:--|:--:|:--:|:--:|
-| 1 | *Yes* | *No* | *No* |
-| 2 | *Yes* | *Yes* | *No* |
-| 3 | *Yes* | *No* | *Yes* |
-| 4 | *Yes* | *Yes* | *Yes* |
-
-> [!Important]
-> - Scenarios 2 and 4 can introduce a 5-10 percent drop in throughput, based on the workload type, for the Azure Database for MySQL server due to the additional layer of infrastructure encryption.
-> - Configuring Infrastructure double encryption for Azure Database for MySQL is only allowed during server create. Once the server is provisioned, you cannot change the storage encryption. However, you can still enable Data encryption using customer-managed keys for the server created with/without infrastructure double encryption.
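-
-Because this option can be set only at server creation, the following Azure CLI sketch shows the idea; the `--infrastructure-encryption` parameter and all resource values are illustrative, so verify them against the [setup guide](howto-double-encryption.md):
-
-```bash
-# Create a server with infrastructure double encryption enabled (create-time only)
-az mysql server create --resource-group myresourcegroup --name mydemoserver \
-    --location westus --admin-user myadmin --admin-password <server_admin_password> \
-    --sku-name GP_Gen5_2 --infrastructure-encryption Enabled
-```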
-
-## Limitations
-
-For Azure Database for MySQL, support for infrastructure double encryption has a few limitations:
-
-* Support for this functionality is limited to **General Purpose** and **Memory Optimized** pricing tiers.
-* This feature is only supported in regions and servers, which support general purpose storage v2 (up to 16 TB). For the list of Azure regions supporting storage up to 16 TB, refer to the storage section in documentation [here](concepts-pricing-tiers.md#storage)
-
- > [!NOTE]
- > - For all new MySQL servers created in the [Azure regions](concepts-pricing-tiers.md#storage) supporting general purpose storage v2, encryption with customer-managed keys is **available**. Point-in-time restored (PITR) servers and read replicas don't qualify, even though they are technically 'new'.
- > - To validate whether your provisioned server is on general purpose storage v2, go to the pricing tier blade in the portal and check the maximum storage size supported by your provisioned server. If the slider can only move up to 4 TB, your server is on general purpose storage v1 and doesn't support encryption with customer-managed keys. However, the data is encrypted using service-managed keys at all times. Please reach out to AskAzureDBforMySQL@service.microsoft.com if you have any questions.
---
-## Next steps
-
-Learn how to [set up Infrastructure double encryption for Azure database for MySQL](howto-double-encryption.md).
mysql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-limits.md
- Title: Limitations - Azure Database for MySQL
-description: This article describes limitations in Azure Database for MySQL, such as number of connection and storage engine options.
-
- Previously updated: 10/1/2020
-
-# Limitations in Azure Database for MySQL
--
-The following sections describe capacity, storage engine support, privilege support, data manipulation statement support, and functional limits in the database service. Also see [general limitations](https://dev.mysql.com/doc/mysql-reslimits-excerpt/5.6/en/limits.html) applicable to the MySQL database engine.
-
-## Server parameters
-
-> [!NOTE]
-> If you are looking for min/max values for server parameters like `max_connections` and `innodb_buffer_pool_size`, this information has moved to the **[server parameters](./concepts-server-parameters.md)** article.
-
-Azure Database for MySQL supports tuning the values of server parameters. The min and max values of some parameters (for example, `max_connections`, `join_buffer_size`, `query_cache_size`) are determined by the pricing tier and vCores of the server. Refer to [server parameters](./concepts-server-parameters.md) for more information about these limits.
-
-Upon initial deployment, an Azure Database for MySQL server includes system tables for time zone information, but these tables aren't populated. The time zone tables can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench; a sketch follows below. Refer to the [Azure portal](howto-server-parameters.md#working-with-the-time-zone-parameter) or [Azure CLI](howto-configure-server-parameters-using-cli.md#working-with-the-time-zone-parameter) articles for how to call the stored procedure and set the global or session-level time zones.
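-
-As a quick illustration, the stored procedure can be called from any MySQL client session; the server and user names below are placeholders:
-
-```bash
-# Populate the time zone tables, then use a named time zone in a new session
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
-    -e "CALL mysql.az_load_timezone();"
-mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
-    -e "SET time_zone = 'US/Pacific'; SELECT NOW();"
-```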
-
-Password plugins such as "validate_password" and "caching_sha2_password" aren't supported by the service.
-
-## Storage engines
-
-MySQL supports many storage engines. The following lists show which storage engines are supported and which are unsupported in Azure Database for MySQL:
-
-### Supported
-- [InnoDB](https://dev.mysql.com/doc/refman/5.7/en/innodb-introduction.html)
-- [MEMORY](https://dev.mysql.com/doc/refman/5.7/en/memory-storage-engine.html)
-
-### Unsupported
-- [MyISAM](https://dev.mysql.com/doc/refman/5.7/en/myisam-storage-engine.html)
-- [BLACKHOLE](https://dev.mysql.com/doc/refman/5.7/en/blackhole-storage-engine.html)
-- [ARCHIVE](https://dev.mysql.com/doc/refman/5.7/en/archive-storage-engine.html)
-- [FEDERATED](https://dev.mysql.com/doc/refman/5.7/en/federated-storage-engine.html)
-
-## Privileges & data manipulation support
-
-Many server parameters and settings can inadvertently degrade server performance or negate ACID properties of the MySQL server. To maintain the service integrity and SLA at a product level, this service doesn't expose multiple roles.
-
-The MySQL service doesn't allow direct access to the underlying file system. Some data manipulation commands aren't supported.
-
-### Unsupported
-
-The following are unsupported:
-
-- DBA role: Restricted. Alternatively, you can use the administrator user (created during new server creation), which allows you to perform most DDL and DML statements.
-- SUPER privilege: Similarly, the [SUPER privilege](https://dev.mysql.com/doc/refman/5.7/en/privileges-provided.html#priv_super) is restricted.
-- DEFINER: Requires super privileges to create and is restricted. If importing data using a backup, remove the `CREATE DEFINER` commands manually or by using the `--skip-definer` option when performing a [mysqlpump](https://dev.mysql.com/doc/refman/5.7/en/mysqlpump.html) dump.
-- System databases: The [mysql system database](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) is read-only and used to support various PaaS functionality. You can't make changes to the `mysql` system database.
-- `SELECT ... INTO OUTFILE`: Not supported in the service.
-- `LOAD_FILE(file_name)`: Not supported in the service.
-- [BACKUP_ADMIN](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_backup-admin) privilege: Granting the BACKUP_ADMIN privilege is not supported for taking backups using any [utility tools](./how-to-decide-on-right-migration-tools.md).
-
-### Supported
-- `LOAD DATA INFILE` is supported, but the `[LOCAL]` parameter must be specified and directed to a UNC path (Azure storage mounted through SMB). Additionally, if you're using MySQL client version 8.0 or later, you need to include the `--local-infile=1` parameter in your connection string.
-
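-
-For illustration, a client-side local-infile load might look like this; the file path, table, and server names are placeholders:
-
-```bash
-# Enable local infile on the client, then load a CSV into an existing table
-mysql --local-infile=1 -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
-    -e "LOAD DATA LOCAL INFILE '/path/to/data.csv' INTO TABLE testdb.mytable FIELDS TERMINATED BY ',';"
-```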
-## Functional limitations
-
-### Scale operations
-- Dynamic scaling to and from the Basic pricing tiers is currently not supported.
-- Decreasing server storage size is not supported.
-
-### Major version upgrades
-- [Major version upgrade is supported for v5.6 to v5.7 upgrades only](how-to-major-version-upgrade.md). Upgrades to v8.0 are not supported yet.
-
-### Point-in-time-restore
-- When using the PITR feature, the new server is created with the same configurations as the server it is based on.
-- Restoring a deleted server is not supported.
-
-### VNet service endpoints
-- Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
-
-### Storage size
-- Please refer to [pricing tiers](concepts-pricing-tiers.md#storage) for the storage size limits per pricing tier.
-
-## Current known issues
-- The MySQL server instance displays the wrong server version after a connection is established. To get the correct server instance engine version, use the `select version();` command.
-
-## Next steps
-- [What's available in each service tier](concepts-pricing-tiers.md)
-- [Supported MySQL database versions](concepts-supported-versions.md)
mysql Concepts Migrate Dbforge Studio For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-migrate-dbforge-studio-for-mysql.md
- Title: Use dbForge Studio for MySQL to migrate a MySQL database to Azure Database for MySQL
-description: The article demonstrates how to migrate to Azure Database for MySQL by using dbForge Studio for MySQL.
-
- Previously updated: 03/03/2021
-
-# Migrate data to Azure Database for MySQL with dbForge Studio for MySQL
--
-Looking to move your MySQL databases to Azure Database for MySQL? Consider using the migration tools in dbForge Studio for MySQL. With it, database transfer can be configured, saved, edited, automated, and scheduled.
-
-To complete the examples in this article, you'll need to download and install [dbForge Studio for MySQL](https://www.devart.com/dbforge/mysql/studio/).
-
-## Connect to Azure Database for MySQL
-
-1. In dbForge Studio for MySQL, select **New Connection** from the **Database** menu.
-
-1. Provide a host name and sign-in credentials.
-
-1. Select **Test Connection** to check the configuration.
--
-## Migrate with the Backup and Restore functionality
-
-You can choose from many options when using dbForge Studio for MySQL to migrate databases to Azure. If you need to move the entire database, it's best to use the **Backup and Restore** functionality.
-
-In this example, we migrate the *sakila* database from MySQL server to Azure Database for MySQL. The logic behind using the **Backup and Restore** functionality is to create a backup of the MySQL database and then restore it in Azure Database for MySQL.
-
-### Back up the database
-
-1. In dbForge Studio for MySQL, select **Backup Database** from the **Backup and Restore** menu. The **Database Backup Wizard** appears.
-
-1. On the **Backup content** tab of the **Database Backup Wizard**, select database objects you want to back up.
-
-1. On the **Options** tab, configure the backup process to fit your requirements.
-
- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/back-up-wizard-options.png" alt-text="Screenshot showing the options pane of the Backup wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/back-up-wizard-options.png":::
-
-1. Select **Next**, and then specify error processing behavior and logging options.
-
-1. Select **Backup**.
-
-### Restore the database
-
-1. In dbForge Studio for MySQL, connect to Azure Database for MySQL. [Refer to the instructions](#connect-to-azure-database-for-mysql).
-
-1. Select **Restore Database** from the **Backup and Restore** menu. The **Database Restore Wizard** appears.
-
-1. In the **Database Restore Wizard**, select a file with a database backup.
-
- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/restore-step-1.png" alt-text="Screenshot showing the Restore step of the Database Restore wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/restore-step-1.png":::
-
-1. Select **Restore**.
-
-1. Check the result.
-
-## Migrate with the Copy Databases functionality
-
-The **Copy Databases** functionality in dbForge Studio for MySQL is similar to **Backup and Restore**, except that it doesn't require two steps to migrate a database. It also lets you transfer two or more databases at once.
-
->[!NOTE]
-> The **Copy Databases** functionality is only available in the Enterprise edition of dbForge Studio for MySQL.
-
-In this example, we migrate the *world_x* database from MySQL server to Azure Database for MySQL.
-
-To migrate a database using the Copy Databases functionality:
-
-1. In dbForge Studio for MySQL, select **Copy Databases** from the **Database** menu.
-
-1. On the **Copy Databases** tab, specify the source and target connection. Also select the databases to be migrated.
-
- We enter the Azure MySQL connection and select the *world_x* database. Select the green arrow to start the process.
-
-1. Check the result.
-
-You'll see that the *world_x* database has successfully appeared in Azure MySQL.
--
-## Migrate a database with schema and data comparison
-
-You can choose from many options when using dbForge Studio for MySQL to migrate databases, schemas, and/or data to Azure. If you need to move selective tables from a MySQL database to Azure, it's best to use the **Schema Comparison** and the **Data Comparison** functionality.
-
-In this example, we migrate the *world* database from MySQL server to Azure Database for MySQL.
-
-The logic behind this approach is to create an empty database in Azure Database for MySQL and synchronize it with the source MySQL database. We first use the **Schema Comparison** tool, and next we use the **Data Comparison** functionality. These steps ensure that the MySQL schemas and data are accurately moved to Azure.
-
-To complete this exercise, you'll first need to [connect to Azure Database for MySQL](#connect-to-azure-database-for-mysql) and create an empty database.
-
-### Schema synchronization
-
-1. On the **Comparison** menu, select **New Schema Comparison**. The **New Schema Comparison Wizard** appears.
-
-1. Choose your source and target, and then specify the schema comparison options. Select **Compare**.
-
-1. In the comparison results grid that appears, select objects for synchronization. Select the green arrow button to open the **Schema Synchronization Wizard**.
-
-1. Walk through the steps of the wizard to configure synchronization. Select **Synchronize** to deploy the changes.
-
- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/schema-sync-wizard.png" alt-text="Screenshot showing the schema synchronization wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/schema-sync-wizard.png":::
-
-### Data Comparison
-
-1. On the **Comparison** menu, select **New Data Comparison**. The **New Data Comparison Wizard** appears.
-
-1. Choose your source and target, and then specify the data comparison options. Change mappings if necessary, and then select **Compare**.
-
-1. In the comparison results grid that appears, select objects for synchronization. Select the green arrow button to open the **Data Synchronization Wizard**.
-
- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/data-comp-result.png" alt-text="Screenshot showing the results of the data comparison." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/data-comp-result.png":::
-
-1. Walk through the steps of the wizard configuring synchronization. Select **Synchronize** to deploy the changes.
-
-1. Check the result.
-
- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/data-sync-result.png" alt-text="Screenshot showing the results of the Data Synchronization wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/data-sync-result.png":::
-
-## Next steps
-- [MySQL overview](overview.md)
mysql Concepts Migrate Dump Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-migrate-dump-restore.md
- Title: Migrate using dump and restore - Azure Database for MySQL
-description: This article explains two common ways to back up and restore databases in your Azure Database for MySQL, using tools such as mysqldump, MySQL Workbench, and PHPMyAdmin.
-
- Previously updated: 10/30/2020
-
-# Migrate your MySQL database to Azure Database for MySQL using dump and restore
--
-This article explains two common ways to back up and restore databases in your Azure Database for MySQL:
-
-- Dump and restore from the command-line (using mysqldump)
-- Dump and restore using phpMyAdmin
-
-You can also refer to the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide) for detailed information and use cases about migrating databases to Azure Database for MySQL. This guide will help you plan and execute a successful MySQL migration to Azure.
-
-## Before you begin
-To step through this how-to guide, you need to have:
-- An Azure Database for MySQL server ([Create an Azure Database for MySQL server - Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md))
-- The [mysqldump](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) command-line utility installed on a machine
-- [MySQL Workbench](https://dev.mysql.com/downloads/workbench/) or another third-party MySQL tool to run dump and restore commands
-
-> [!TIP]
-> If you are looking to migrate databases larger than 1 TB, you may want to consider using community tools like **mydumper/myloader**, which support parallel export and import. Learn [How to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
--
-## Common use-cases for dump and restore
-
-Most common use-cases are:
-
-- **Moving from another managed service provider** - Most managed service providers may not provide access to the physical storage file for security reasons, so logical backup and restore is the only option to migrate.
-- **Migrating from an on-premises environment or virtual machine** - Azure Database for MySQL doesn't support restore of physical backups, which makes logical backup and restore the ONLY approach.
-- **Moving your backup storage from locally redundant to geo-redundant storage** - Azure Database for MySQL allows configuring locally redundant or geo-redundant storage for backup only during server creation. Once the server is provisioned, you cannot change the backup storage redundancy option. To move your backup storage from locally redundant storage to geo-redundant storage, dump and restore is the ONLY option.
-- **Migrating from alternative storage engines to InnoDB** - Azure Database for MySQL supports only the InnoDB storage engine, and therefore does not support alternative storage engines. If your tables are configured with other storage engines, convert them into the InnoDB engine format before migration to Azure Database for MySQL.
-
- For example, if you have a WordPress or WebApp using the MyISAM tables, first convert those tables by migrating into InnoDB format before restoring to Azure Database for MySQL. Use the clause `ENGINE=InnoDB` to set the engine used when creating a new table, then transfer the data into the compatible table before the restore.
-
- ```sql
- INSERT INTO innodb_table SELECT * FROM myisam_table ORDER BY primary_key_columns
- ```
-> [!Important]
-> - To avoid any compatibility issues, ensure the same version of MySQL is used on the source and destination systems when dumping databases. For example, if your existing MySQL server is version 5.7, then you should migrate to Azure Database for MySQL configured to run version 5.7. The `mysql_upgrade` command does not function in an Azure Database for MySQL server, and is not supported.
-> - If you need to upgrade across MySQL versions, first dump or export your lower version database into a higher version of MySQL in your own environment. Then run `mysql_upgrade`, before attempting migration into an Azure Database for MySQL.
-
-## Performance considerations
-To optimize performance, consider the following points when dumping large databases:
-
-- Use the `--skip-triggers` option in mysqldump (`--exclude-triggers` in mysqlpump) when dumping databases. Exclude triggers from dump files to avoid the trigger commands firing during the data restore.
-- Use the `--single-transaction` option to set the transaction isolation mode to REPEATABLE READ and send a START TRANSACTION SQL statement to the server before dumping data. Dumping many tables within a single transaction causes some extra storage to be consumed during restore. The `--single-transaction` option and the `--lock-tables` option are mutually exclusive, because LOCK TABLES causes any pending transactions to be committed implicitly. To dump large tables, combine the `--single-transaction` option with the `--quick` option.
-- Use the `--extended-insert` multiple-row syntax that includes several VALUE lists. This results in a smaller dump file and speeds up inserts when the file is reloaded.
-- Use the `--order-by-primary` option in mysqldump when dumping databases, so that the data is scripted in primary key order.
-- Use the `--disable-keys` option in mysqldump when dumping data, to disable foreign key constraints before the load. Disabling foreign key checks provides performance gains. Enable the constraints and verify the data after the load to ensure referential integrity.
-- Use partitioned tables when appropriate.
-- Load data in parallel. Avoid too much parallelism, which would cause you to hit a resource limit, and monitor resources using the metrics available in the Azure portal.
-- Use the `--defer-table-indexes` option in mysqlpump when dumping databases, so that index creation happens after table data is loaded.
-- Use the `--skip-definer` option in mysqlpump to omit definer and SQL SECURITY clauses from the create statements for views and stored procedures. When you reload the dump file, it creates objects that use the default DEFINER and SQL SECURITY values.
-- Copy the backup files to an Azure blob store and perform the restore from there, which should be much faster than performing the restore across the Internet.
-
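-
-Putting several of these options together, one plausible mysqldump invocation looks like the following; the database, user, and file names are placeholders:
-
-```bash
-# Dump with a consistent snapshot, primary-key order, deferred keys, and no triggers
-mysqldump --single-transaction --quick --extended-insert --order-by-primary \
-    --disable-keys --skip-triggers -u myadmin -p testdb > testdb_backup.sql
-```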
-## Create a database on the target Azure Database for MySQL server
-Create an empty database on the target Azure Database for MySQL server where you want to migrate the data. Use a tool such as MySQL Workbench or mysql.exe to create the database. The database can have the same name as the database that contained the dumped data, or you can create a database with a different name.
-
-To get connected, locate the connection information in the **Overview** of your Azure Database for MySQL.
--
-Add the connection information into your MySQL Workbench.
--
-## Preparing the target Azure Database for MySQL server for fast data loads
-To prepare the target Azure Database for MySQL server for faster data loads, the following server parameters and configuration need to be changed; a scripted sketch follows the list.
-
-- `max_allowed_packet` - Set to 1073741824 (that is, 1 GB) to prevent any overflow issues due to long rows.
-- `slow_query_log` - Set to OFF to turn off the slow query log. This eliminates the overhead caused by slow query logging during data loads.
-- `query_store_capture_mode` - Set to NONE to turn off the Query Store. This eliminates the overhead caused by sampling activities by Query Store.
-- `innodb_buffer_pool_size` - Scale up the server to the 32 vCore Memory Optimized SKU from the pricing tier of the portal during migration to increase the innodb_buffer_pool_size. Innodb_buffer_pool_size can only be increased by scaling up compute for the Azure Database for MySQL server.
-- `innodb_io_capacity` and `innodb_io_capacity_max` - Change to 9000 from the Server parameters in the Azure portal to improve I/O utilization and optimize for migration speed.
-- `innodb_read_io_threads` and `innodb_write_io_threads` - Change to 4 from the Server parameters in the Azure portal to improve the speed of migration.
-- Scale up the storage tier - The IOPS for the Azure Database for MySQL server increase progressively with the increase in storage tier. For faster loads, you may want to increase the storage tier to increase the IOPS provisioned. Remember that storage can only be scaled up, not down.
-
-Once the migration is complete, you can revert the server parameters and compute tier configuration to their previous values.
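-
-These parameter changes can also be scripted with the Azure CLI; a sketch, with placeholder resource names:
-
-```bash
-# Relax limits and disable logging overhead on the target server before the load
-az mysql server configuration set -g myresourcegroup -s mydemoserver --name max_allowed_packet --value 1073741824
-az mysql server configuration set -g myresourcegroup -s mydemoserver --name slow_query_log --value OFF
-az mysql server configuration set -g myresourcegroup -s mydemoserver --name innodb_io_capacity --value 9000
-```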
-
-## Dump and restore using mysqldump utility
-
-### Create a backup file from the command-line using mysqldump
-To back up an existing MySQL database on the local on-premises server or in a virtual machine, run the following command:
-```bash
-$ mysqldump --opt -u [uname] -p[pass] [dbname] > [backupfile.sql]
-```
-
-The parameters to provide are:
-- [uname] Your database username
-- [pass] The password for your database (note there is no space between -p and the password)
-- [dbname] The name of your database
-- [backupfile.sql] The filename for your database backup
-- [--opt] The mysqldump option
-
-For example, to back up a database named 'testdb' on your MySQL server with the username 'testuser' and with no password to a file testdb_backup.sql, use the following command. The command backs up the `testdb` database into a file called `testdb_backup.sql`, which contains all the SQL statements needed to re-create the database. Make sure that the username 'testuser' has at least the SELECT privilege for dumped tables, SHOW VIEW for dumped views, TRIGGER for dumped triggers, and LOCK TABLES if the --single-transaction option is not used.
-
-```sql
-GRANT SELECT, LOCK TABLES, SHOW VIEW ON *.* TO 'testuser'@'hostname' IDENTIFIED BY 'password';
-```
-Now run mysqldump to create the backup of `testdb` database
-
-```bash
-$ mysqldump -u root -p testdb > testdb_backup.sql
-```
-To select specific tables in your database to back up, list the table names separated by spaces. For example, to back up only table1 and table2 tables from the 'testdb', follow this example:
-
-```bash
-$ mysqldump -u root -p testdb table1 table2 > testdb_tables_backup.sql
-```
-To back up more than one database at once, use the --databases switch and list the database names separated by spaces.
-```bash
-$ mysqldump -u root -p --databases testdb1 testdb3 testdb5 > testdb135_backup.sql
-```
-
-### Restore your MySQL database using command-line or MySQL Workbench
-Once you have created the target database, you can use the mysql command or MySQL Workbench to restore the data into the specific newly created database from the dump file.
-```bash
-mysql -h [hostname] -u [uname] -p[pass] [db_to_restore] < [backupfile.sql]
-```
-In this example, restore the data into the newly created database on the target Azure Database for MySQL server.
-
-Here is an example of how to use **mysql** for **Single Server**:
-
-```bash
-$ mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p testdb < testdb_backup.sql
-```
-Here is an example of how to use **mysql** for **Flexible Server**:
-
-```bash
-$ mysql -h mydemoserver.mysql.database.azure.com -u myadmin -p testdb < testdb_backup.sql
-```
--
-## Dump and restore using phpMyAdmin
-Follow these steps to dump and restore a database using phpMyAdmin.
-
-> [!NOTE]
-> For single server, the username must be in the format 'username@servername', but for flexible server you can just use 'username'. If you use 'username@servername' for flexible server, the connection will fail.
-
-### Export with phpMyAdmin
-To export, you can use the common tool phpMyAdmin, which you may already have installed locally in your environment. To export your MySQL database using phpMyAdmin:
-1. Open phpMyAdmin.
-2. Select your database. Click the database name in the list on the left.
-3. Click the **Export** link. A new page appears to view the dump of database.
-4. In the Export area, click the **Select All** link to choose the tables in your database.
-5. In the SQL options area, click the appropriate options.
-6. Click the **Save as file** option and the corresponding compression option and then click the **Go** button. A dialog box should appear prompting you to save the file locally.
-
-### Import using phpMyAdmin
-Importing your database is similar to exporting. Do the following actions:
-1. Open phpMyAdmin.
-2. In the phpMyAdmin setup page, click **Add** to add your Azure Database for MySQL server. Provide the connection details and login information.
-3. Create an appropriately named database and select it on the left of the screen. To rewrite the existing database, click the database name, select all the check boxes beside the table names, and select **Drop** to delete the existing tables.
-4. Click the **SQL** link to show the page where you can type in SQL commands, or upload your SQL file.
-5. Use the **browse** button to find the database file.
-6. Click the **Go** button to import the backup, execute the SQL commands, and re-create your database.
-
-## Known Issues
-For known issues, tips, and tricks, we recommend that you look at our [techcommunity blog](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/tips-and-tricks-in-using-mysqldump-and-mysql-restore-to-azure/ba-p/916912).
-
-## Next steps
-- [Connect applications to Azure Database for MySQL](./howto-connection-string.md).
-- For more information about migrating databases to Azure Database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
-- If you are looking to migrate databases larger than 1 TB, consider using community tools like **mydumper/myloader**, which support parallel export and import. Learn [How to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
mysql Concepts Migrate Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-migrate-import-export.md
- Title: Import and export - Azure Database for MySQL
-description: This article explains common ways to import and export databases in Azure Database for MySQL, by using tools such as MySQL Workbench.
-
- Previously updated: 10/30/2020
-
-# Migrate your MySQL database by using import and export
--
-This article explains two common approaches to importing and exporting data to an Azure Database for MySQL server by using MySQL Workbench.
-
-For detailed and comprehensive migration guidance, see the [migration guide resources](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
-
-For other migration scenarios, see the [Database Migration Guide](https://datamigration.microsoft.com/).
-
-## Prerequisites
-
-Before you begin migrating your MySQL database, you need to:
-
-- Create an [Azure Database for MySQL server by using the Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md).
-- Download and install [MySQL Workbench](https://dev.mysql.com/downloads/workbench/) or another third-party MySQL tool for importing and exporting.
-
-## Create a database on the Azure Database for MySQL server
-
-Create an empty database on the Azure Database for MySQL server by using MySQL Workbench, Toad, or Navicat. The database can have the same name as the database that contains the dumped data, or you can create a database with a different name.
-
-To get connected, do the following:
-
-1. In the Azure portal, look for the connection information on the **Overview** pane of your Azure Database for MySQL.
-
- :::image type="content" source="./media/concepts-migrate-import-export/1_server-overview-name-login.png" alt-text="Screenshot of the Azure Database for MySQL server connection information in the Azure portal.":::
-
-1. Add the connection information to MySQL Workbench.
-
- :::image type="content" source="./media/concepts-migrate-import-export/2_setup-new-connection.png" alt-text="Screenshot of the MySQL Workbench connection string.":::
-
-## Determine when to use import and export techniques
-
-> [!TIP]
-> For scenarios where you want to dump and restore the entire database, use the [dump and restore](concepts-migrate-dump-restore.md) approach instead.
-
-In the following scenarios, use MySQL tools to import and export databases into your MySQL database. For other tools, go to the "Migration Methods" section (page 22) of the [MySQL to Azure Database migration guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
-- When you need to selectively choose a few tables to import from an existing MySQL database into your Azure MySQL database, it's best to use the import and export technique. By doing so, you can omit any unneeded tables from the migration to save time and resources. For example, use the `--include-tables` or `--exclude-tables` switch with [mysqlpump](https://dev.mysql.com/doc/refman/5.7/en/mysqlpump.html#option_mysqlpump_include-tables), and the `--tables` switch with [mysqldump](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_tables), as shown in the sketch after this list.
-- When you're moving database objects other than tables, explicitly create those objects. Include constraints (primary key, foreign key, and indexes), views, functions, procedures, triggers, and any other database objects that you want to migrate.
-- When you're migrating data from external data sources other than a MySQL database, create flat files and import them by using [mysqlimport](https://dev.mysql.com/doc/refman/5.7/en/mysqlimport.html).
-
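-The following is a minimal sketch of both switches, assuming a hypothetical database named `mydb` with tables `customers` and `orders`; substitute your own server, user, and object names:
-
-```bash
-# mysqldump: dump only the listed tables from mydb
-mysqldump --host=mydemoserver.mysql.database.azure.com --user=myadmin@mydemoserver -p \
-  mydb --tables customers orders > mydb_tables.sql
-
-# mysqlpump: dump mydb but include only the listed tables
-mysqlpump --host=mydemoserver.mysql.database.azure.com --user=myadmin@mydemoserver -p \
-  --include-databases=mydb --include-tables=customers,orders > mydb_tables.sql
-```
-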
-> [!Important]
-> Both Single Server and Flexible Server support only the InnoDB storage engine. Make sure that all tables in the database use the InnoDB storage engine when you're loading data into your Azure database for MySQL.
->
-> If your source database uses another storage engine, convert to the InnoDB engine before you migrate the database. For example, if you have a WordPress or web app that uses the MyISAM engine, first convert the tables by migrating the data into InnoDB tables. Use the clause `ENGINE=INNODB` to set the engine for creating a table, and then transfer the data into the compatible table before the migration.
-
- ```sql
- INSERT INTO innodb_table SELECT * FROM myisam_table ORDER BY primary_key_columns;
- ```
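-
- For example, a minimal end-to-end sketch of the conversion, assuming a MyISAM table named `myisam_table`:
-
- ```sql
- -- Create an empty copy of the table, then switch its engine to InnoDB
- CREATE TABLE innodb_table LIKE myisam_table;
- ALTER TABLE innodb_table ENGINE=InnoDB;
- -- Copy the data into the InnoDB table before the migration
- INSERT INTO innodb_table SELECT * FROM myisam_table ORDER BY primary_key_columns;
- ```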
-
-## Performance recommendations for import and export
-
-For optimal data import and export performance, we recommend that you do the following:
-- Create clustered indexes and primary keys before you load data. Load the data in primary key order.
-- Delay the creation of secondary indexes until after the data is loaded.
-- Disable foreign key constraints before you load the data (see the sketch after this list). Disabling foreign key checks provides significant performance gains. Enable the constraints and verify the data after the load to ensure referential integrity.
-- Load data in parallel. Avoid too much parallelism that would cause you to hit a resource limit, and monitor resources by using the metrics available in the Azure portal.
-- Use partitioned tables when appropriate.
-
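-For example, a minimal sketch of wrapping a bulk load with foreign key checks disabled:
-
-```sql
--- Disable foreign key checks for this session before loading data
-SET SESSION foreign_key_checks = 0;
-
--- ... run the bulk load here ...
-
--- Re-enable foreign key checks, then verify the data for referential integrity
-SET SESSION foreign_key_checks = 1;
-```
-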
-## Import and export data by using MySQL Workbench
-
-There are two ways to export and import data in MySQL Workbench: from the object browser context menu or from the Navigator pane. Each method serves a different purpose.
-
-> [!NOTE]
-> If you're adding a connection to MySQL Single Server or Flexible Server on MySQL Workbench, do the following:
->
-> - For MySQL Single Server, make sure that the user name is in the format *\<username@servername>*.
-> - For MySQL Flexible Server, use *\<username>* only. If you use *\<username@servername>* to connect, the connection will fail.
-
-### Run the table data export and import wizards from the object browser context menu
--
-The table data wizards support import and export operations by using CSV and JSON files. The wizards include several configuration options, such as separators, column selection, and encoding selection. You can run each wizard against local or remotely connected MySQL servers. The import action includes table, column, and type mapping.
-
-To access these wizards from the object browser context menu, right-click a table, and then select **Table Data Export Wizard** or **Table Data Import Wizard**.
-
-#### The table data export wizard
-
-To export a table to a CSV file:
-
-1. Right-click the table of the database to be exported.
-1. Select **Table Data Export Wizard**. Select the columns to be exported, row offset (if any), and count (if any).
-1. On the **Select data for export** pane, select **Next**. Select the file path, CSV, or JSON file type. Also select the line separator, method of enclosing strings, and field separator.
-1. On the **Select output file location** pane, select **Next**.
-1. On the **Export data** pane, select **Next**.
-
-#### The table data import wizard
-
-To import a table from a CSV file:
-
-1. Right-click the table of the database to be imported, and then select **Table Data Import Wizard**.
-1. Look for and select the CSV file to be imported, and then select **Next**.
-1. Select the destination table (new or existing), select or clear the **Truncate table before import** check box, and then select **Next**.
-1. Select the encoding and the columns to be imported, and then select **Next**.
-1. On the **Import data** pane, select **Next**. The wizard imports the data.
-
-### Run the SQL data export and import wizards from the Navigator pane
-
-Use a wizard to export or import SQL data that's generated from MySQL Workbench or from the mysqldump command. You can access the wizards from the **Navigator** pane or you can select **Server** from the main menu.
-
-#### Export data
--
-You can use the **Data Export** pane to export your MySQL data.
-
-1. In MySQL Workbench, on the **Navigator** pane, select **Data Export**.
-
-1. On the **Data Export** pane, select each schema that you want to export.
-
- For each schema, you can select specific schema objects or tables to export. Configuration options include export to a project folder or a self-contained SQL file, dump stored routines and events, or skip table data.
-
- Alternatively, use **Export a Result Set** to export a specific result set in the SQL editor to another format, such as CSV, JSON, HTML, and XML.
-
-1. Select the database objects to export, and configure the related options.
-1. Select **Refresh** to load the current objects.
-1. Optionally, select **Advanced Options** at the upper right to refine the export operation. For example, add table locks, use `replace` instead of `insert` statements, and quote identifiers with backtick characters.
-1. Select **Start Export** to begin the export process.
--
-#### Import data
--
-You can use the **Data Import** pane to import or restore exported data from the data export operation or from the mysqldump command.
-
-1. In MySQL Workbench, on the **Navigator** pane, select **Data Import/Restore**.
-1. Select the project folder or self-contained SQL file, select the schema to import into, or select the **New** button to define a new schema.
-1. Select **Start Import** to begin the import process.
-
-## Next steps
-- For another migration approach, see [Migrate your MySQL database to an Azure database for MySQL by using dump and restore](concepts-migrate-dump-restore.md).
-- For more information about migrating databases to an Azure database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
mysql Concepts Migrate Mydumper Myloader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-migrate-mydumper-myloader.md
- Title: Migrate large databases to Azure Database for MySQL using mydumper/myloader
-description: This article explains two common ways to back up and restore databases in your Azure Database for MySQL, using tool mydumper/myloader
----- Previously updated : 06/18/2021--
-# Migrate large databases to Azure Database for MySQL using mydumper/myloader
--
-Azure Database for MySQL is a managed service that you use to run, manage, and scale highly available MySQL databases in the cloud. To migrate MySQL databases larger than 1 TB to Azure Database for MySQL, consider using community tools such as [mydumper/myloader](https://centminmod.com/mydumper.html), which provide the following benefits:
-
-* Parallelism, to help reduce the migration time.
-* Better performance, by avoiding expensive character set conversion routines.
-* An output format, with separate files for tables, metadata, and so on, that makes it easy to view and parse data.
-* Consistency, by maintaining a snapshot across all threads.
-* Accurate primary and replica log positions.
-* Easy management, as they support Perl Compatible Regular Expressions (PCRE) for specifying database and table inclusions and exclusions.
-* Schema and data are dumped together, so you don't need to handle them separately as you do with other logical migration tools.
-
-This quickstart shows you how to install, back up, and restore a MySQL database by using mydumper/myloader.
-
-## Prerequisites
-
-Before you begin migrating your MySQL database, you need to:
-
-1. Create an Azure Database for MySQL server by using the [Azure portal](./flexible-server/quickstart-create-server-portal.md).
-
-2. Create an Azure VM running Linux by using the [Azure portal](../virtual-machines/linux/quick-create-portal.md) (preferably Ubuntu).
- > [!Note]
- > Prior to installing the tools, consider the following points:
- >
- > * If your source is on-premises and has a high bandwidth connection to Azure (using ExpressRoute), consider installing the tool on an Azure VM.<br>
- > * If bandwidth between the source and target is limited, consider installing mydumper near the source and myloader near the target server. You can use **[AzCopy](../storage/common/storage-use-azcopy-v10.md)** to move the data from on-premises or other cloud solutions to Azure.
-
-3. To install the mysql client, do the following steps:
-
- * Update the package index on the Azure VM running Linux by running the following command:
- ```bash
- $ sudo apt update
- ```
- * Install the mysql client package by running the following command:
- ```bash
- $ sudo apt install mysql-client
- ```
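-
- * Optionally, verify connectivity to your Azure Database for MySQL server by running a probe query. This is a minimal sketch; the server and user names are placeholders, and for Single Server the user name takes the *username@servername* form:
- ```bash
- $ mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p --ssl-mode=REQUIRED -e "SELECT VERSION();"
- ```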
-
-## Install mydumper/myloader
-
-To install mydumper/myloader, do the following steps.
-
-1. Depending on your OS distribution, download the appropriate package for mydumper/myloader by running the following command:
- ```bash
- $ wget https://github.com/maxbube/mydumper/releases/download/v0.10.1/mydumper_0.10.1-2.$(lsb_release -cs)_amd64.deb
- ```
-
- > [!Note]
- > `$(lsb_release -cs)` identifies your distribution codename.
-
-2. To install the .deb package for mydumper, run the following command:
-
- ```bash
- $ dpkg -i mydumper_0.10.1-2.$(lsb_release -cs)_amd64.deb
- ```
-
- > [!Tip]
- > The command you use to install the package differs based on your Linux distribution, because the installers differ. mydumper/myloader is available for the following distributions: Fedora, Red Hat, Ubuntu, Debian, CentOS, openSUSE, and macOS. For more information, see **[How to install mydumper](https://github.com/maxbube/mydumper#how-to-install-mydumpermyloader)**
-
-## Create a backup using mydumper
-
-* To create a backup using mydumper, run the following command:
-
- ```bash
- $ mydumper --host=<servername> --user=<username> --password=<Password> --outputdir=./backup --rows=100000 --compress --build-empty-files --threads=16 --compress-protocol --trx-consistency-only --ssl --regex '^(<Db_name>\.)' -L mydumper-logs.txt
- ```
-
-This command uses the following variables:
-
-* **--host:** The host to connect to
-* **--user:** Username with the necessary privileges
-* **--password:** User password
-* **--rows:** Try to split tables into chunks of this many rows
-* **--outputdir:** Directory to dump output files to
-* **--regex:** Regular expression for Database matching.
-* **--trx-consistency-only:** Transactional consistency only
-* **--threads:** Number of threads to use. The default is 4. We recommend using a value equal to twice the number of vCores of the computer.
-
- >[!Note]
- >For more information on other options you can use with mydumper, run the command **mydumper --help**. For more details, see the [mydumper/myloader documentation](https://centminmod.com/mydumper.html).<br>
- >To dump multiple databases in parallel, you can modify the regex variable as shown in this example: **--regex '^(DbName1\.|DbName2\.)'**
-
-## Restore your database using myloader
-
-* To restore the database that you backed up using mydumper, run the following command:
-
- ```bash
- $ myloader --host=<servername> --user=<username> --password=<Password> --directory=./backup --queries-per-transaction=500 --threads=16 --compress-protocol --ssl --verbose=3 -e 2>myloader-logs.txt
- ```
-
-This command uses the following variables:
-
-* **--host:** The host to connect to
-* **--user:** Username with the necessary privileges
-* **--password:** User password
-* **--directory:** Location where the backup is stored.
-* **--queries-per-transaction:** The number of queries per transaction. We recommend setting this to a value of not more than 500.
-* **--threads:** Number of threads to use. The default is 4. We recommend using a value equal to twice the number of vCores of the computer.
-
-> [!Tip]
-> For more information on other options you can use with myloader, run the command **myloader --help**.
-
-After the database is restored, it's always recommended to validate data consistency between the source and target databases; one lightweight approach is sketched below.
-
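-One lightweight sketch, assuming a hypothetical database `mydb` and table `orders`, is to compare table checksums on the source and target servers:
-
-```sql
--- Run on both the source and the target; matching values suggest consistent data
-CHECKSUM TABLE mydb.orders;
-```
-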
-> [!Note]
-> Submit any issues or feedback regarding the mydumper/myloader tools **[here](https://github.com/maxbube/mydumper/issues)**.
-
-## Next steps
-
-* Learn more about the [mydumper/myloader project in GitHub](https://github.com/maxbube/mydumper).
-* Learn [How to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
-* [Tutorial: Minimal Downtime Migration of Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server](howto-migrate-single-flexible-minimum-downtime.md)
-* Learn more about Data-in replication [Replicate data into Azure Database for MySQL Flexible Server](flexible-server/concepts-data-in-replication.md) and [Configure Azure Database for MySQL Flexible Server Data-in replication](./flexible-server/how-to-data-in-replication.md)
-* Commonly encountered [migration errors](./howto-troubleshoot-common-errors.md)
mysql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-monitoring.md
- Title: Monitoring - Azure Database for MySQL
-description: This article describes the metrics for monitoring and alerting for Azure Database for MySQL, including CPU, storage, and connection statistics.
------ Previously updated : 10/21/2020-
-# Monitoring in Azure Database for MySQL
-
-Monitoring data about your servers helps you troubleshoot and optimize for your workload. Azure Database for MySQL provides various metrics that give insight into the behavior of your server.
-
-## Metrics
-
-All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics. For step-by-step guidance, see [How to set up alerts](howto-alert-on-metric.md). Other tasks include setting up automated actions, performing advanced analytics, and archiving history. For more information, see the [Azure Metrics Overview](../azure-monitor/data-platform.md).
-
-### List of metrics
-
-These metrics are available for Azure Database for MySQL:
-
-|Metric|Metric Display Name|Unit|Description|
-|||||
-|cpu_percent|CPU percent|Percent|The percentage of CPU in use.|
-|memory_percent|Memory percent|Percent|The percentage of memory in use.|
-|io_consumption_percent|IO percent|Percent|The percentage of IO in use. (Not applicable for Basic tier servers)|
-|storage_percent|Storage percentage|Percent|The percentage of storage used out of the server's maximum.|
-|storage_used|Storage used|Bytes|The amount of storage in use. The storage used by the service may include the database files, transaction logs, and the server logs.|
-|serverlog_storage_percent|Server Log storage percent|Percent|The percentage of server log storage used out of the server's maximum server log storage.|
-|serverlog_storage_usage|Server Log storage used|Bytes|The amount of server log storage in use.|
-|serverlog_storage_limit|Server Log storage limit|Bytes|The maximum server log storage for this server.|
-|storage_limit|Storage limit|Bytes|The maximum storage for this server.|
-|active_connections|Active Connections|Count|The number of active connections to the server.|
-|connections_failed|Failed Connections|Count|The number of failed connections to the server.|
-|seconds_behind_master|Replication lag in seconds|Count|The number of seconds the replica server is lagging against the source server. (Not applicable for Basic tier servers)|
-|network_bytes_egress|Network Out|Bytes|Network Out across active connections.|
-|network_bytes_ingress|Network In|Bytes|Network In across active connections.|
-|backup_storage_used|Backup Storage Used|Bytes|The amount of backup storage used. This metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained in the [concepts article](concepts-backup.md). For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.|
-
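-For example, you can retrieve a few of these metrics from the Azure CLI; the resource ID below is a placeholder for your server's ID:
-
-```bash
-az monitor metrics list \
-  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DBforMySQL/servers/<server-name>" \
-  --metric cpu_percent storage_percent active_connections \
-  --interval PT1M
-```
-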
-## Server logs
-
-You can enable slow query and audit logging on your server. These logs are also available through Azure Diagnostic Logs in Azure Monitor logs, Event Hubs, and Storage Account. To learn more about logging, visit the [audit logs](concepts-audit-logs.md) and [slow query logs](concepts-server-logs.md) articles.
-
-## Query Store
-
-[Query Store](concepts-query-store.md) is a feature that keeps track of query performance over time including query runtime statistics and wait events. The feature persists query runtime performance information in the **mysql** schema. You can control the collection and storage of data via various configuration knobs.
-
-## Query Performance Insight
-
-[Query Performance Insight](concepts-query-performance-insight.md) works in conjunction with Query Store to provide visualizations accessible from the Azure portal. These charts enable you to identify key queries that impact performance. Query Performance Insight is accessible in the **Intelligent Performance** section of your Azure Database for MySQL server's portal page.
-
-## Performance Recommendations
-
-The [Performance Recommendations](concepts-performance-recommendations.md) feature identifies opportunities to improve workload performance. Performance Recommendations provides you with recommendations for creating new indexes that have the potential to improve the performance of your workloads. To produce index recommendations, the feature takes into consideration various database characteristics, including its schema and the workload as reported by Query Store. After implementing any performance recommendation, customers should test performance to evaluate the impact of those changes.
-
-## Planned maintenance notification
-
-[Planned maintenance notifications](./concepts-planned-maintenance-notification.md) allow you to receive alerts for upcoming planned maintenance to your Azure Database for MySQL. These notifications are integrated with [Service Health's](../service-health/overview.md) planned maintenance and allow you to view all scheduled maintenance for your subscriptions in one place. It also helps to scale the notification to the right audiences for different resource groups, as you may have different contacts responsible for different resources. You will receive the notification about the upcoming maintenance 72 hours before the event.
-
-Learn more about how to set up notifications in the [planned maintenance notifications](./concepts-planned-maintenance-notification.md) document.
-
-## Next steps
-- See [How to set up alerts](howto-alert-on-metric.md) for guidance on creating an alert on a metric.
-- For more information on how to access and export metrics using the Azure portal, REST API, or CLI, see the [Azure Metrics Overview](../azure-monitor/data-platform.md).
-- Read our blog on [best practices for monitoring your server](https://azure.microsoft.com/blog/best-practices-for-alerting-on-metrics-with-azure-database-for-mysql-monitoring/).
-- Learn more about [planned maintenance notifications](./concepts-planned-maintenance-notification.md) in Azure Database for MySQL - Single Server
mysql Concepts Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-performance-recommendations.md
- Title: Performance recommendations - Azure Database for MySQL
-description: This article describes the Performance Recommendation feature in Azure Database for MySQL
----- Previously updated : 6/3/2020-
-# Performance Recommendations in Azure Database for MySQL
--
-**Applies to:** Azure Database for MySQL 5.7, 8.0
-
-The Performance Recommendations feature analyzes your databases to create customized suggestions for improved performance. To produce the recommendations, the analysis looks at various database characteristics including schema. Enable [Query Store](concepts-query-store.md) on your server to fully utilize the Performance Recommendations feature. If performance schema is OFF, turning on Query Store enables performance_schema and a subset of performance schema instruments required for the feature. After implementing any performance recommendation, you should test performance to evaluate the impact of those changes.
-
-## Permissions
-
-**Owner** or **Contributor** permissions are required to run analysis using the Performance Recommendations feature.
-
-## Performance recommendations
-
-The [Performance Recommendations](concepts-performance-recommendations.md) feature analyzes workloads across your server to identify indexes with the potential to improve performance.
-
-Open **Performance Recommendations** from the **Intelligent Performance** section of the menu bar on the Azure portal page for your MySQL server.
--
-Select **Analyze** and choose a database, which will begin the analysis. Depending on your workload, the analysis may take several minutes to complete. Once the analysis is done, there will be a notification in the portal. Analysis performs a deep examination of your database. We recommend you perform analysis during off-peak periods.
-
-The **Recommendations** window will show a list of recommendations if any were found and the related query ID that generated this recommendation. With the query ID, you can use the [mysql.query_store](concepts-query-store.md#mysqlquery_store) view to learn more about the query.
--
-Recommendations are not automatically applied. To apply the recommendation, copy the query text and run it from your client of choice. Remember to test and monitor to evaluate the recommendation.
-
-## Recommendation types
-
-### Index recommendations
-
-*Create Index* recommendations suggest new indexes to speed up the most frequently run or time-consuming queries in the workload. This recommendation type requires [Query Store](concepts-query-store.md) to be enabled. Query Store collects query information and provides the detailed query runtime and frequency statistics that the analysis uses to make the recommendation.
-
-### Query recommendations
-
-Query recommendations suggest optimizations and rewrites for queries in the workload. By identifying MySQL query anti-patterns and fixing them syntactically, the performance of time-consuming queries can be improved. This recommendation type requires Query Store to be enabled. Query Store collects query information and provides the detailed query runtime and frequency statistics that the analysis uses to make the recommendation.
-
-## Next steps
-- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for MySQL.
mysql Concepts Planned Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-planned-maintenance-notification.md
- Title: Planned maintenance notification - Azure Database for MySQL - Single Server
-description: This article describes the Planned maintenance notification feature in Azure Database for MySQL - Single Server
----- Previously updated : 10/21/2020-
-# Planned maintenance notification in Azure Database for MySQL - Single Server
--
-Learn how to prepare for planned maintenance events on your Azure Database for MySQL.
-
-## What is a planned maintenance?
-
-The Azure Database for MySQL service performs automated patching of the underlying hardware, OS, and database engine. The patch includes new service features, security, and software updates. For the MySQL engine, minor version upgrades are automatic and included as part of the patching cycle. No user action or configuration settings are required for patching. The patch is tested extensively and rolled out using safe deployment practices.
-
-A planned maintenance is a maintenance window when these service updates are deployed to servers in a given Azure region. During planned maintenance, a notification event is created to inform customers when the service update is deployed in the Azure region hosting their servers. The minimum duration between two planned maintenance windows is 30 days. You receive a notification of the next maintenance window 72 hours in advance.
-
-## Planned maintenance - duration and customer impact
-
-A planned maintenance for a given Azure region is typically expected to run for 15 hours. The window also includes buffer time to execute a rollback plan if necessary. During planned maintenance, there can be database server restarts or failovers, which might lead to brief unavailability of the database servers for end users. Azure Database for MySQL servers run in containers, so database server restarts are typically quick, completing in 60 to 120 seconds. The entire planned maintenance event, including each server restart, is carefully monitored by the engineering team. Server failover time depends on database recovery time, which can delay the database coming back online if you have heavy transactional activity on the server at the time of failover. To avoid a longer restart time, we recommend avoiding any long-running transactions (bulk loads) during planned maintenance events.
-
-In summary, while the planned maintenance event runs for 15 hours, the individual server impact generally lasts 60 seconds depending on the transactional activity on the server. A notification is sent 72 calendar hours before planned maintenance starts and another one while maintenance is in progress for a given region.
-
-## How can I get notified of planned maintenance?
-
-You can utilize the planned maintenance notifications feature to receive alerts for an upcoming planned maintenance event. You will receive the notification about the upcoming maintenance 72 calendar hours before the event and another one while maintenance is in-progress for a given region.
-
-### Planned maintenance notification
-
-> [!IMPORTANT]
-> Planned maintenance notifications are currently available in preview in all regions **except** West Central US
-
-**Planned maintenance notifications** allow you to receive alerts for upcoming planned maintenance event to your Azure Database for MySQL. These notifications are integrated with [Service Health's](../service-health/overview.md) planned maintenance and allow you to view all scheduled maintenance for your subscriptions in one place. It also helps to scale the notification to the right audiences for different resource groups, as you may have different contacts responsible for different resources. You will receive the notification about the upcoming maintenance 72 calendar hours before the event.
-
-We make every attempt to provide **planned maintenance notifications** 72 hours before the event. However, in cases of critical or security patches, notifications might be sent closer to the event or be omitted.
-
-You can either check the planned maintenance notification on Azure portal or configure alerts to receive notification.
-
-### Check planned maintenance notification from Azure portal
-
-1. In the [Azure portal](https://portal.azure.com), select **Service Health**.
-2. Select the **Planned Maintenance** tab.
-3. Select **Subscription**, **Region**, and **Service** for which you want to check the planned maintenance notification.
-
-### To receive planned maintenance notification
-
-1. In the [portal](https://portal.azure.com), select **Service Health**.
-2. In the **Alerts** section, select **Health alerts**.
-3. Select **+ Add service health alert** and fill in the fields.
-4. Fill out the required fields.
-5. For **Event type**, select **Planned maintenance** or **Select all**.
-6. In **Action groups**, define how you would like to receive the alert (get an email, trigger a logic app, and so on).
-7. Ensure **Enable rule upon creation** is set to **Yes**.
-8. Select **Create alert rule** to complete your alert.
-
-For detailed steps on how to create **service health alerts**, refer to [Create activity log alerts on service notifications](../service-health/alerts-activity-log-service-notifications-portal.md).
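-
-As an alternative sketch using the Azure CLI (the alert name, resource group, and action group ID below are placeholders), you can create a comparable service health alert with an activity log alert rule:
-
-```bash
-az monitor activity-log alert create \
-  --name planned-maintenance-alert \
-  --resource-group myresourcegroup \
-  --condition category=ServiceHealth \
-  --action-group "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/microsoft.insights/actionGroups/myactiongroup"
-```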
-
-## Can I cancel or postpone planned maintenance?
-
-Maintenance is needed to keep your server secure, stable, and up-to-date. The planned maintenance event cannot be canceled or postponed. After the notification is sent to a given Azure region, no changes to the patching schedule can be made for any individual server in that region. The patch is rolled out for the entire region at once. The Azure Database for MySQL - Single Server service is designed for cloud-native applications that don't require granular control or customization of the service. If you want the ability to schedule maintenance for your servers, we recommend that you consider [Flexible Server](./flexible-server/overview.md).
-
-## Are all the Azure regions patched at the same time?
-
-No. Azure regions are patched during deployment-specific maintenance windows, which generally stretch from 5 PM to 8 AM local time the next day in a given Azure region. Geo-paired Azure regions are patched on different days. For high availability and business continuity of database servers, we recommend leveraging [cross region read replicas](./concepts-read-replicas.md#cross-region-replication).
-
-## Retry logic
-
-A transient error, also known as a transient fault, is an error that will resolve itself. [Transient errors](./concepts-connectivity.md#transient-errors) can occur during maintenance. Most of these events are automatically mitigated by the system in less than 60 seconds. Transient errors should be handled using [retry logic](./concepts-connectivity.md#handling-transient-errors).
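-
-For illustration, a minimal retry sketch from a shell; the server and user names are placeholders, and the password is assumed to be supplied through the MYSQL_PWD environment variable:
-
-```bash
-# Retry a probe query up to 5 times with linear backoff
-for attempt in 1 2 3 4 5; do
-  if mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -e "SELECT 1;"; then
-    break    # connection succeeded
-  fi
-  sleep $((attempt * 10))    # wait before the next attempt
-done
-```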
--
-## Next steps
-- For any questions or suggestions you might have about working with Azure Database for MySQL, send an email to the Azure Database for MySQL Team at AskAzureDBforMySQL@service.microsoft.com
-- See [How to set up alerts](howto-alert-on-metric.md) for guidance on creating an alert on a metric.
-- [Troubleshoot connection issues to Azure Database for MySQL - Single Server](howto-troubleshoot-common-connection-issues.md)
-- [Handle transient errors and connect efficiently to Azure Database for MySQL - Single Server](concepts-connectivity.md)
mysql Concepts Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-pricing-tiers.md
- Title: Pricing tiers - Azure Database for MySQL
-description: Learn about the various pricing tiers for Azure Database for MySQL including compute generations, storage types, storage size, vCores, memory, and backup retention periods.
----- Previously updated : 02/07/2022--
-# Azure Database for MySQL pricing tiers
--
-You can create an Azure Database for MySQL server in one of three different pricing tiers: Basic, General Purpose, and Memory Optimized. The pricing tiers are differentiated by the amount of compute in vCores that can be provisioned, memory per vCore, and the storage technology used to store the data. All resources are provisioned at the MySQL server level. A server can have one or many databases.
-
-| Attribute | **Basic** | **General Purpose** | **Memory Optimized** |
-|:|:-|:--|:|
-| Compute generation | Gen 4, Gen 5 | Gen 4, Gen 5 | Gen 5 |
-| vCores | 1, 2 | 2, 4, 8, 16, 32, 64 |2, 4, 8, 16, 32 |
-| Memory per vCore | 2 GB | 5 GB | 10 GB |
-| Storage size | 5 GB to 1 TB | 5 GB to 16 TB | 5 GB to 16 TB |
-| Database backup retention period | 7 to 35 days | 7 to 35 days | 7 to 35 days |
-
-To choose a pricing tier, use the following table as a starting point.
-
-| Pricing tier | Target workloads |
-|:-|:--|
-| Basic | Workloads that require light compute and I/O performance. Examples include servers used for development or testing or small-scale infrequently used applications. |
-| General Purpose | Most business workloads that require balanced compute and memory with scalable I/O throughput. Examples include servers for hosting web and mobile apps and other enterprise applications.|
-| Memory Optimized | High-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency. Examples include servers for processing real-time data and high-performance transactional or analytical apps.|
-
-> [!NOTE]
-> Dynamic scaling to and from the Basic pricing tier is currently not supported. Basic tier servers can't be scaled up to the General Purpose or Memory Optimized tiers.
-
-After you create a General Purpose or Memory Optimized server, the number of vCores, hardware generation, and pricing tier can be changed up or down within seconds. You also can independently adjust the amount of storage up and the backup retention period up or down with no application downtime. You can't change the backup storage type after a server is created. For more information, see the [Scale resources](#scale-resources) section.
-
-## Compute generations and vCores
-
-Compute resources are provided as vCores, which represent the logical CPU of the underlying hardware. China East 1, China North 1, US DoD Central, and US DoD East utilize Gen 4 logical CPUs that are based on Intel E5-2673 v3 (Haswell) 2.4-GHz processors. All other regions utilize Gen 5 logical CPUs that are based on Intel E5-2673 v4 (Broadwell) 2.3-GHz processors.
-
-## Storage
-
-The storage you provision is the amount of storage capacity available to your Azure Database for MySQL server. The storage is used for the database files, temporary files, transaction logs, and the MySQL server logs. The total amount of storage you provision also defines the I/O capacity available to your server.
-
-Azure Database for MySQL – Single Server supports the following backend storage options for servers.
-
-| Storage type | Basic | General purpose v1 | General purpose v2 |
-|:|:-|:--|:|
-| Storage size | 5 GB to 1 TB | 5 GB to 4 TB | 5 GB to 16 TB |
-| Storage increment size | 1 GB | 1 GB | 1 GB |
-| IOPS | Variable |3 IOPS/GB<br/>Min 100 IOPS<br/>Max 6,000 IOPS | 3 IOPS/GB<br/>Min 100 IOPS<br/>Max 20,000 IOPS |
-
->[!NOTE]
-> Basic storage does not provide an IOPS guarantee. In General Purpose storage, the IOPS scale with the provisioned storage size in a 3:1 ratio.
-
-### Basic storage
-Basic storage is the backend storage supporting Basic pricing tier servers. Basic storage leverages Azure standard storage in the backend, where provisioned IOPS are not guaranteed and latency is variable. The Basic tier is best suited for workloads that require light compute, low cost, and I/O performance for development or small-scale, infrequently used applications.
-
-### General purpose storage
-General purpose storage is the backend storage supporting General Purpose and Memory Optimized tier servers. In general purpose storage, the IOPS scale with the provisioned storage size in a 3:1 ratio. There are two generations of general purpose storage, as described below:
-
-#### General purpose storage v1 (Supports up to 4 TB)
-General purpose storage v1 is based on legacy storage technology, which can support up to 4 TB of storage and 6,000 IOPS per server. General purpose storage v1 is optimized to leverage memory from the compute nodes running the MySQL engine for local caching and backups. The backup process on general purpose storage v1 reads the data and log files in the memory of the compute nodes and copies them to the target backup storage for retention of up to 35 days. As a result, the memory and I/O consumption of storage during backups is relatively higher.
-
-All Azure regions support general purpose storage v1.
-
-For a General Purpose or Memory Optimized server on general purpose storage v1, we recommend that you:
-
-* Plan your compute SKU tier to account for 10-30% of excess memory for storage caching and backup buffers.
-* Provision 10% higher IOPS than required by the database workload to account for backup I/Os.
-* Alternatively, migrate to general purpose storage v2, described below, which supports up to 16 TB of storage, if the underlying storage infrastructure is available in your preferred Azure region (see the list below).
-
-#### General purpose storage v2 (Supports up to 16 TB of storage)
-General purpose storage v2 is based on the latest storage infrastructure, which can support up to 16 TB of storage and 20,000 IOPS. In a subset of Azure regions where the infrastructure is available, all newly provisioned servers land on general purpose storage v2 by default. General purpose storage v2 does not consume any memory from the compute node of MySQL and provides more predictable I/O latencies compared to general purpose storage v1. Backups on general purpose storage v2 servers are snapshot-based, with no additional I/O overhead. On general purpose storage v2, MySQL server performance is expected to be higher compared to general purpose storage v1 for the same storage and IOPS provisioned. There is no additional cost for general purpose storage that supports up to 16 TB of storage. For assistance with migration to 16 TB of storage, please open a support ticket from the Azure portal.
-
-General purpose storage v2 is supported in the following Azure regions:
-
-| Region | General purpose storage v2 availability |
-| | |
-| Australia East | :heavy_check_mark: |
-| Australia South East | :heavy_check_mark: |
-| Brazil South | :heavy_check_mark: |
-| Canada Central | :heavy_check_mark: |
-| Canada East | :heavy_check_mark: |
-| Central US | :heavy_check_mark: |
-| East US | :heavy_check_mark: |
-| East US 2 | :heavy_check_mark: |
-| East Asia | :heavy_check_mark: |
-| Japan East | :heavy_check_mark: |
-| Japan West | :heavy_check_mark: |
-| Korea Central | :heavy_check_mark: |
-| Korea South | :heavy_check_mark: |
-| North Europe | :heavy_check_mark: |
-| North Central US | :heavy_check_mark: |
-| South Central US | :heavy_check_mark: |
-| Southeast Asia | :heavy_check_mark: |
-| UK South | :heavy_check_mark: |
-| UK West | :heavy_check_mark: |
-| West Central US | :heavy_check_mark: |
-| West US | :heavy_check_mark: |
-| West US 2 | :heavy_check_mark: |
-| West Europe | :heavy_check_mark: |
-| Central India* | :heavy_check_mark: |
-| France Central* | :heavy_check_mark: |
-| UAE North* | :heavy_check_mark: |
-| South Africa North* | :heavy_check_mark: |
-
-> [!Note]
-> *Regions where Azure Database for MySQL has general purpose storage v2 in public preview. <br />
-> For these Azure regions, you have the option to create servers on both general purpose storage v1 and v2. For servers created with general purpose storage v2 in public preview, the following limitations apply: <br />
-> * Geo-redundant backup is not supported. <br />
-> * The replica server must be in a region that supports general purpose storage v2. <br />
-
-
-### How can I determine which storage type my server is running on?
-
-You can find the storage type of your server by going to the **Pricing tier** blade in the Azure portal.
-* If the server is provisioned using the Basic SKU, the storage type is Basic storage.
-* If the server is provisioned using a General Purpose or Memory Optimized SKU, the storage type is General Purpose storage.
-  * If the maximum storage that can be provisioned on your server is up to 4 TB, the storage type is General Purpose storage v1.
-  * If the maximum storage that can be provisioned on your server is up to 16 TB, the storage type is General Purpose storage v2.
-
-### Can I move from general purpose storage v1 to general purpose storage v2? If yes, how, and is there any additional cost?
-Yes, migration from general purpose storage v1 to v2 is supported if the underlying storage infrastructure is available in the Azure region of the source server. The migration and v2 storage are available at no additional cost.
-
-### Can I grow storage size after server is provisioned?
-You can add additional storage capacity during and after the creation of the server, and allow the system to grow storage automatically based on the storage consumption of your workload.
-
->[!IMPORTANT]
-> Storage can only be scaled up, not down.
-
-### Monitoring IO consumption
-You can monitor your I/O consumption in the Azure portal or by using Azure CLI commands. The relevant metrics to monitor are [storage limit, storage percentage, storage used, and IO percent](concepts-monitoring.md). The monitoring metrics for a MySQL server with general purpose storage v1 report the memory and I/O consumed by the MySQL engine, but may not capture the memory and I/O consumption of the storage layer, which is a limitation.
-
-### Reaching the storage limit
-
-Servers with less than or equal to 100 GB provisioned storage are marked read-only if the free storage is less than 5% of the provisioned storage size. Servers with more than 100 GB provisioned storage are marked read only when the free storage is less than 5 GB.
-
-For example, if you have provisioned 110 GB of storage, and the actual utilization goes over 105 GB, the server is marked read-only. Alternatively, if you have provisioned 5 GB of storage, the server is marked read-only when the free storage reaches less than 256 MB.
-
-While the service attempts to make the server read-only, all new write transaction requests are blocked and existing active transactions will continue to execute. When the server is set to read-only, all subsequent write operations and transaction commits fail. Read queries will continue to work uninterrupted. After you increase the provisioned storage, the server will be ready to accept write transactions again.
-
-We recommend that you turn on storage auto-grow or set up an alert to notify you when your server storage is approaching the threshold, so that you can avoid getting into the read-only state. For more information, see the documentation on [how to set up an alert](howto-alert-on-metric.md).
-
-### Storage auto-grow
-
-Storage auto-grow prevents your server from running out of storage and becoming read-only. If storage auto-grow is enabled, the storage automatically grows without impacting the workload. For servers with less than or equal to 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB when the free storage is below 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. The maximum storage limits specified above apply.
-
-For example, if you have provisioned 1000 GB of storage, and the actual utilization goes over 990 GB, the server storage size is increased to 1050 GB. Alternatively, if you have provisioned 10 GB of storage, the storage size is increased to 15 GB when less than 1 GB of storage is free.
-
-Remember that storage can only be scaled up, not down.
-
-## Backup storage
-
-Azure Database for MySQL provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any backup storage you use in excess of this amount is charged in GB per month. For example, if you provision a server with 250 GB of storage, you'll have 250 GB of additional storage available for server backups at no charge. Storage for backups in excess of the 250 GB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/mysql/). To understand the factors influencing backup storage usage, and for monitoring and controlling backup storage cost, refer to the [backup documentation](concepts-backup.md).
-
-## Scale resources
-
-After you create your server, you can independently change the vCores, the hardware generation, the pricing tier (except to and from Basic), the amount of storage, and the backup retention period. You can't change the backup storage type after a server is created. The number of vCores can be scaled up or down. The backup retention period can be scaled up or down from 7 to 35 days. The storage size can only be increased. Scaling of the resources can be done either through the portal or Azure CLI. For an example of scaling by using Azure CLI, see [Monitor and scale an Azure Database for MySQL server by using Azure CLI](scripts/sample-scale-server.md).
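-
-For example, a hedged sketch of scaling with the Azure CLI; the server and resource group names are placeholders, and the target SKU assumes General Purpose Gen 5 with 4 vCores:
-
-```bash
-# Scale compute to 4 vCores on General Purpose Gen 5 and grow storage to 512 GB (524288 MB)
-az mysql server update \
-  --resource-group myresourcegroup \
-  --name mydemoserver \
-  --sku-name GP_Gen5_4 \
-  --storage-size 524288
-```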
-
-When you change the number of vCores, the hardware generation, or the pricing tier, a copy of the original server is created with the new compute allocation. After the new server is up and running, connections are switched over to the new server. During the moment when the system switches over to the new server, no new connections can be established, and all uncommitted transactions are rolled back. This downtime during scaling can be around 60-120 seconds. The downtime during scaling depends on database recovery time, which can delay the database coming back online if you have heavy transactional activity on the server at the time of the scaling operation. To avoid a longer restart time, we recommend performing scaling operations during periods of low transactional activity on the server.
-
-Scaling storage and changing the backup retention period are true online operations. There is no downtime, and your application isn't affected. As IOPS scale with the size of the provisioned storage, you can increase the IOPS available to your server by scaling up storage.
-
-## Pricing
-
-For the most up-to-date pricing information, see the service [pricing page](https://azure.microsoft.com/pricing/details/mysql/). To see the cost for the configuration you want, the [Azure portal](https://portal.azure.com/#create/Microsoft.MySQLServer) shows the monthly cost on the **Pricing tier** tab based on the options you select. If you don't have an Azure subscription, you can use the Azure pricing calculator to get an estimated price. On the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) website, select **Add items**, expand the **Databases** category, and choose **Azure Database for MySQL** to customize the options.
-
-## Next steps
-- Learn how to [create a MySQL server in the portal](howto-create-manage-server-portal.md).
-- Learn about [service limits](concepts-limits.md).
-- Learn how to [scale out with read replicas](howto-read-replicas-portal.md).
mysql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-query-performance-insight.md
- Title: Query Performance Insight - Azure Database for MySQL
-description: This article describes the Query Performance Insight feature in Azure Database for MySQL
----- Previously updated : 01/12/2022-
-# Query Performance Insight in Azure Database for MySQL
--
-**Applies to:** Azure Database for MySQL 5.7, 8.0
-
-Query Performance Insight helps you to quickly identify what your longest running queries are, how they change over time, and what waits are affecting them.
-
-## Common scenarios
-
-### Long running queries
-- Identifying longest running queries in the past X hours
-- Identifying top N queries that are waiting on resources
-
-### Wait statistics
-- Understanding the nature of waits for a query
-- Understanding trends for resource waits and where resource contention exists
-
-## Prerequisites
-
-For Query Performance Insight to function, data must exist in the [Query Store](concepts-query-store.md).
-
-## Viewing performance insights
-
-The [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal will surface visualizations on key information from Query Store.
-
-In the portal page of your Azure Database for MySQL server, select **Query Performance Insight** under the **Intelligent Performance** section of the menu bar.
-
-### Long running queries
-
-The **Long running queries** tab shows the top 5 Query IDs by average duration per execution, aggregated in 15-minute intervals. You can view more Query IDs by selecting from the **Number of Queries** drop down. The chart colors may change for a specific Query ID when you do this.
-
-> [!Note]
-> Displaying the Query Text is no longer supported and will show as empty. The query text is removed to avoid unauthorized access to the query text or underlying schema which can pose a security risk.
-
-The recommended steps to view the query text are as follows:
-1. Identify the query_id of the top queries from the Query Performance Insight blade in the Azure portal.
-1. Log in to your Azure Database for MySQL server from MySQL Workbench, the mysql.exe client, or your preferred query tool, and execute the following queries.
-
-```sql
-SELECT * FROM mysql.query_store WHERE query_id = '<query id from the Query Performance Insight blade in the Azure portal>'; -- for queries in Query Store
-SELECT * FROM mysql.query_store_wait_stats WHERE query_id = '<query id from the Query Performance Insight blade in the Azure portal>'; -- for wait statistics
-```
-
-You can click and drag in the chart to narrow down to a specific time window. Alternatively, use the zoom in and out icons to view a smaller or larger time period respectively.
--
-### Wait statistics
-
-> [!NOTE]
-> Wait statistics are meant for troubleshooting query performance issues. It is recommended to be turned on only for troubleshooting purposes. <br>If you receive the error message in the Azure portal "*The issue encountered for 'Microsoft.DBforMySQL'; cannot fulfill the request. If this issue continues or is unexpected, please contact support with this information.*" while viewing wait statistics, use a smaller time period.
-
-Wait statistics provides a view of the wait events that occur during the execution of a specific query. Learn more about the wait event types in the [MySQL engine documentation](https://go.microsoft.com/fwlink/?linkid=2098206).
-
-Select the **Wait Statistics** tab to view the corresponding visualizations on waits in the server.
-
-Queries displayed in the wait statistics view are grouped by the queries that exhibit the largest waits during the specified time interval.
-
-> [!Note]
-> Displaying the Query Text is no longer supported and will show as empty. The query text is removed to avoid unauthorized access to the query text or underlying schema which can pose a security risk.
-
-The recommended steps to view the query text are as follows:
-1. Identify the query_id of the top queries from the Query Performance Insight blade in the Azure portal.
-1. Log in to your Azure Database for MySQL server from MySQL Workbench, the mysql.exe client, or your preferred query tool, and execute the following queries.
-
-```sql
-SELECT * FROM mysql.query_store WHERE query_id = '<query id from the Query Performance Insight blade in the Azure portal>'; -- for queries in Query Store
-SELECT * FROM mysql.query_store_wait_stats WHERE query_id = '<query id from the Query Performance Insight blade in the Azure portal>'; -- for wait statistics
-```
--
-## Next steps
--- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for MySQL.
mysql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-query-store.md
- Title: Query Store - Azure Database for MySQL
-description: Learn about the Query Store feature in Azure Database for MySQL to help you track performance over time.
----- Previously updated : 5/12/2020-
-# Monitor Azure Database for MySQL performance with Query Store
--
-**Applies to:** Azure Database for MySQL 5.7, 8.0
-
-The Query Store feature in Azure Database for MySQL provides a way to track query performance over time. Query Store simplifies performance troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Query Store automatically captures a history of queries and runtime statistics, and it retains them for your review. It separates data by time windows so that you can see database usage patterns. Data for all users, databases, and queries is stored in the **mysql** schema database in the Azure Database for MySQL instance.
-
-## Common scenarios for using Query Store
-
-Query store can be used in a number of scenarios, including the following:
-- Detecting regressed queries
-- Determining the number of times a query was executed in a given time window
-- Comparing the average execution time of a query across time windows to see large deltas
-
-## Enabling Query Store
-
-Query Store is an opt-in feature, so it isn't active by default on a server. The query store is enabled or disabled globally for all the databases on a given server and cannot be turned on or off per database.
-
-### Enable Query Store using the Azure portal
-
-1. Sign in to the Azure portal and select your Azure Database for MySQL server.
-1. Select **Server Parameters** in the **Settings** section of the menu.
-1. Search for the query_store_capture_mode parameter.
-1. Set the value to ALL and **Save**.
-
-To enable wait statistics in your Query Store:
-
-1. Search for the query_store_wait_sampling_capture_mode parameter.
-1. Set the value to ALL and **Save**.
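-
-Alternatively, a sketch of setting both parameters with the Azure CLI; the server and resource group names are placeholders:
-
-```bash
-# Enable Query Store and wait statistics sampling on the server
-az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver --name query_store_capture_mode --value ALL
-az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver --name query_store_wait_sampling_capture_mode --value ALL
-```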
-
-Allow up to 20 minutes for the first batch of data to persist in the mysql database.
-
-## Information in Query Store
-
-Query Store has two stores:
-- A runtime statistics store for persisting the query execution statistics information.
-- A wait statistics store for persisting wait statistics information.
-
-To minimize space usage, the runtime execution statistics in the runtime statistics store are aggregated over a fixed, configurable time window. The information in these stores is visible by querying the query store views.
-
-The following query returns information about queries in Query Store:
-
-```sql
-SELECT * FROM mysql.query_store;
-```
-
-Or this query for wait statistics:
-
-```sql
-SELECT * FROM mysql.query_store_wait_stats;
-```
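-
-For example, using the columns documented in the [mysql.query_store](#mysqlquery_store) view below, this sketch ranks the most expensive queries by total execution time:
-
-```sql
--- Top 10 queries by total execution time in the retained window
-SELECT query_id, query_digest_text, execution_count, sum_timer_wait
-FROM mysql.query_store
-ORDER BY sum_timer_wait DESC
-LIMIT 10;
-```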
-
-## Finding wait queries
-
-> [!NOTE]
-> Wait statistics should not be enabled during peak workload hours or be turned on indefinitely for sensitive workloads. <br>For workloads running with high CPU utilization or on servers configured with a lower number of vCores, use caution when enabling wait statistics.
-
-Wait event types combine different wait events into buckets by similarity. Query Store provides the wait event type, specific wait event name, and the query in question. Being able to correlate this wait information with the query runtime statistics means you can gain a deeper understanding of what contributes to query performance characteristics.
-
-Here are some examples of how you can gain more insights into your workload using the wait statistics in Query Store:
-
-| **Observation** | **Action** |
-|||
-|High Lock waits | Check the query texts for the affected queries and identify the target entities. Look in Query Store for other queries that modify the same entity, are executed frequently, and/or have a high duration. After identifying these queries, consider changing the application logic to improve concurrency, or use a less restrictive isolation level. |
-|High Buffer IO waits | Find the queries with a high number of physical reads in Query Store. If they match the queries with high IO waits, consider introducing an index on the underlying entity, to do seeks instead of scans. This would minimize the IO overhead of the queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations for this server that would optimize the queries. |
-|High Memory waits | Find the top memory consuming queries in Query Store. These queries are probably delaying further progress of the affected queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations that would optimize these queries. |
-
-## Configuration options
-
-When Query Store is enabled it saves data in 15-minute aggregation windows, up to 500 distinct queries per window.
-
-The following options are available for configuring Query Store parameters.
-
-| **Parameter** | **Description** | **Default** | **Range** |
-|||||
-| query_store_capture_mode | Turn the query store feature ON/OFF based on the value. Note: If performance_schema is OFF, turning on query_store_capture_mode will turn on performance_schema and a subset of performance schema instruments required for this feature. | ALL | NONE, ALL |
-| query_store_capture_interval | The query store capture interval in minutes. Allows specifying the interval in which the query metrics are aggregated. | 15 | 5 - 60 |
-| query_store_capture_utility_queries | Turning ON or OFF to capture all the utility queries that are executing in the system. | NO | YES, NO |
-| query_store_retention_period_in_days | Time window in days to retain the data in the query store. | 7 | 1 - 30 |
-
-The following options apply specifically to wait statistics.
-
-| **Parameter** | **Description** | **Default** | **Range** |
-|||||
-| query_store_wait_sampling_capture_mode | Allows turning ON / OFF the wait statistics. | NONE | NONE, ALL |
-| query_store_wait_sampling_frequency | Alters the wait-sampling frequency, in seconds. | 30 | 5 - 300 |
-
-> [!NOTE]
-> Currently **query_store_capture_mode** supersedes this configuration, meaning both **query_store_capture_mode** and **query_store_wait_sampling_capture_mode** must be set to ALL for wait statistics to work. If **query_store_capture_mode** is turned off, then wait statistics are turned off as well, since wait statistics use the performance_schema enabled and the query_text captured by Query Store.
-
-Use the [Azure portal](howto-server-parameters.md) or [Azure CLI](howto-configure-server-parameters-using-cli.md) to get or set a different value for a parameter.
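-
-For example, a minimal Azure CLI sketch (the resource group and server names are placeholders) that enables both Query Store and wait statistics:
-
-```bash
-# Enable Query Store data capture.
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name query_store_capture_mode --value ALL
-
-# Enable wait statistics sampling.
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name query_store_wait_sampling_capture_mode --value ALL
-```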
-
-## Views and functions
-
-View and manage Query Store using the following views and functions. Anyone in the [select privilege public role](howto-create-users.md) can use these views to see the data in Query Store. These views are only available in the **mysql** database.
-
-Queries are normalized by looking at their structure after removing literals and constants. If two queries are identical except for literal values, they will have the same hash.
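-
-Because literals are stripped during normalization, aggregating on the digest text surfaces query shapes rather than individual literal variants. A minimal sketch using the `mysql.query_store` view described below:
-
-```sql
-SELECT query_digest_text,
-       SUM(execution_count) AS total_executions
-FROM mysql.query_store
-GROUP BY query_digest_text
-ORDER BY total_executions DESC
-LIMIT 10;
-```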
-
-### mysql.query_store
-
-This view returns all the data in Query Store. There is one row for each distinct database ID, user ID, and query ID.
-
-| **Name** | **Data Type** | **IS_NULLABLE** | **Description** |
-|||||
-| `schema_name`| varchar(64) | NO | Name of the schema |
-| `query_id`| bigint(20) | NO| Unique ID generated for the specific query, if the same query executes in different schema, a new ID will be generated |
-| `timestamp_id` | timestamp| NO| Timestamp at which the query is executed. This is based on the `query_store_capture_interval` configuration|
-| `query_digest_text`| longtext| NO| The normalized query text after removing all the literals|
-| `query_sample_text` | longtext| NO| First appearance of the actual query with literals|
-| `query_digest_truncated` | bit| YES| Whether the query text has been truncated. Value will be Yes if the query is longer than 1 KB|
-| `execution_count` | bigint(20)| NO| The number of times the query got executed for this timestamp ID / during the configured interval period|
-| `warning_count` | bigint(20)| NO| Number of warnings this query generated during the interval|
-| `error_count` | bigint(20)| NO| Number of errors this query generated during the interval|
-| `sum_timer_wait` | double| YES| Total execution time of this query during the interval|
-| `avg_timer_wait` | double| YES| Average execution time for this query during the interval|
-| `min_timer_wait` | double| YES| Minimum execution time for this query|
-| `max_timer_wait` | double| YES| Maximum execution time|
-| `sum_lock_time` | bigint(20)| NO| Total amount of time spent for all the locks for this query execution during this time window|
-| `sum_rows_affected` | bigint(20)| NO| Number of rows affected|
-| `sum_rows_sent` | bigint(20)| NO| Number of rows sent to client|
-| `sum_rows_examined` | bigint(20)| NO| Number of rows examined|
-| `sum_select_full_join` | bigint(20)| NO| Number of full joins|
-| `sum_select_scan` | bigint(20)| NO| Number of select scans |
-| `sum_sort_rows` | bigint(20)| NO| Number of rows sorted|
-| `sum_no_index_used` | bigint(20)| NO| Number of times when the query did not use any indexes|
-| `sum_no_good_index_used` | bigint(20)| NO| Number of times when the query execution engine did not use any good indexes|
-| `sum_created_tmp_tables` | bigint(20)| NO| Total number of temp tables created|
-| `sum_created_tmp_disk_tables` | bigint(20)| NO| Total number of temp tables created in disk (generates I/O)|
-| `first_seen` | timestamp| NO| The first occurrence (UTC) of the query during the aggregation window|
-| `last_seen` | timestamp| NO| The last occurrence (UTC) of the query during this aggregation window|
-
-### mysql.query_store_wait_stats
-
-This view returns wait events data in Query Store. There is one row for each distinct database ID, user ID, query ID, and event.
-
-| **Name**| **Data Type** | **IS_NULLABLE** | **Description** |
-|||||
-| `interval_start` | timestamp | NO| Start of the interval (15-minute increment)|
-| `interval_end` | timestamp | NO| End of the interval (15-minute increment)|
-| `query_id` | bigint(20) | NO| Generated unique ID on the normalized query (from query store)|
-| `query_digest_id` | varchar(32) | NO| The normalized query text after removing all the literals (from query store) |
-| `query_digest_text` | longtext | NO| First appearance of the actual query with literals (from query store) |
-| `event_type` | varchar(32) | NO| Category of the wait event |
-| `event_name` | varchar(128) | NO| Name of the wait event |
-| `count_star` | bigint(20) | NO| Number of wait events sampled during the interval for the query |
-| `sum_timer_wait_ms` | double | NO| Total wait time (in milliseconds) of this query during the interval |
-
-### Functions
-
-| **Name**| **Description** |
-|||
-| `mysql.az_purge_querystore_data(TIMESTAMP)` | Purges all query store data before the given time stamp |
-| `mysql.az_procedure_purge_querystore_event(TIMESTAMP)` | Purges all wait event data before the given time stamp |
-| `mysql.az_procedure_purge_recommendation(TIMESTAMP)` | Purges recommendations whose expiration is before the given time stamp |
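-
-For example, assuming these are invoked as stored procedures, the following sketch purges Query Store data older than an illustrative timestamp:
-
-```sql
-CALL mysql.az_purge_querystore_data('2021-05-01 00:00:00');
-```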
-
-## Limitations and known issues
-
-- If a MySQL server has the parameter `read_only` on, Query Store can't capture data.
-- Query Store functionality can be interrupted if it encounters long Unicode queries (\>= 6000 bytes).
-- The retention period for wait statistics is 24 hours.
-- Wait statistics uses sampling to capture a fraction of events. The frequency can be modified using the parameter `query_store_wait_sampling_frequency`.
-
-## Next steps
-
-- Learn more about [Query Performance Insights](concepts-query-performance-insight.md)
mysql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-read-replicas.md
- Title: Read replicas - Azure Database for MySQL
-description: 'Learn about read replicas in Azure Database for MySQL: choosing regions, creating replicas, connecting to replicas, monitoring replication, and stopping replication.'
- Previously updated: 06/17/2021
-# Read replicas in Azure Database for MySQL
--
-The read replica feature allows you to replicate data from an Azure Database for MySQL server to a read-only server. You can replicate from the source server to up to five replicas. Replicas are updated asynchronously using the MySQL engine's native binary log (binlog) file position-based replication technology. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
-
-Replicas are new servers that you manage similar to regular Azure Database for MySQL servers. For each read replica, you're billed for the provisioned compute in vCores and storage in GB/month.
-
-To learn more about MySQL replication features and issues, see the [MySQL replication documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html).
-
-> [!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
->
-
-## When to use a read replica
-
-The read replica feature helps to improve the performance and scale of read-intensive workloads. Read workloads can be isolated to the replicas, while write workloads can be directed to the source.
-
-A common scenario is to have BI and analytical workloads use the read replica as the data source for reporting.
-
-Because replicas are read-only, they don't directly reduce write-capacity burdens on the source. This feature isn't targeted at write-intensive workloads.
-
-The read replica feature uses MySQL asynchronous replication. The feature isn't meant for synchronous replication scenarios. There will be a measurable delay between the source and the replica. The data on the replica eventually becomes consistent with the data on the source. Use this feature for workloads that can accommodate this delay.
-
-## Cross-region replication
-
-You can create a read replica in a different region from your source server. Cross-region replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users.
-
-You can have a source server in any [Azure Database for MySQL region](https://azure.microsoft.com/global-infrastructure/services/?products=mysql). A source server can have a replica in its [paired region](./../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) or the universal replica regions. The following sections show which replica regions are available depending on your source region.
-
-### Universal replica regions
-
-You can create a read replica in any of the following regions, regardless of where your source server is located. The supported universal replica regions include:
-
-| Region | Replica availability |
-| | |
-| Australia East | :heavy_check_mark: |
-| Australia Southeast | :heavy_check_mark: |
-| Brazil South | :heavy_check_mark: |
-| Canada Central | :heavy_check_mark: |
-| Canada East | :heavy_check_mark: |
-| Central US | :heavy_check_mark: |
-| East US | :heavy_check_mark: |
-| East US 2 | :heavy_check_mark: |
-| East Asia | :heavy_check_mark: |
-| Japan East | :heavy_check_mark: |
-| Japan West | :heavy_check_mark: |
-| Korea Central | :heavy_check_mark: |
-| Korea South | :heavy_check_mark: |
-| North Europe | :heavy_check_mark: |
-| North Central US | :heavy_check_mark: |
-| South Central US | :heavy_check_mark: |
-| Southeast Asia | :heavy_check_mark: |
-| Switzerland North | :heavy_check_mark: |
-| UK South | :heavy_check_mark: |
-| UK West | :heavy_check_mark: |
-| West Central US | :heavy_check_mark: |
-| West US | :heavy_check_mark: |
-| West US 2 | :heavy_check_mark: |
-| West Europe | :heavy_check_mark: |
-| Central India* | :heavy_check_mark: |
-| France Central* | :heavy_check_mark: |
-| UAE North* | :heavy_check_mark: |
-| South Africa North* | :heavy_check_mark: |
-
-> [!Note]
-> *Regions where Azure Database for MySQL has general purpose storage v2 in public preview.<br />
-> *For these Azure regions, you'll have the option to create servers on both general purpose storage v1 and v2. For servers created with general purpose storage v2 in public preview, you can create replica servers only in the Azure regions that support general purpose storage v2.
-
-### Paired regions
-
-In addition to the universal replica regions, you can create a read replica in the Azure paired region of your source server. If you don't know your region's pair, you can learn more from the [Azure Paired Regions article](../availability-zones/cross-region-replication-azure.md).
-
-If you're using cross-region replicas for disaster recovery planning, we recommend you create the replica in the paired region instead of one of the other regions. Paired regions avoid simultaneous updates and prioritize physical isolation and data residency.
-
-However, there are limitations to consider:
-
-* Regional availability: Azure Database for MySQL is available in France Central, UAE North, and Germany Central. However, their paired regions aren't available.
-
-* Uni-directional pairs: Some Azure regions are paired in one direction only. These regions include West India, Brazil South, and US Gov Virginia.
- This means that a source server in West India can create a replica in South India. However, a source server in South India can't create a replica in West India. This is because West India's secondary region is South India, but South India's secondary region isn't West India.
-
-## Create a replica
-
-> [!IMPORTANT]
-> * The read replica feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers.
-> * If your source server has no existing replica servers, the source server might need a restart to prepare itself for replication, depending on the storage used (v1/v2). Consider this restart and perform the operation during off-peak hours. See [Source Server restart](./concepts-read-replicas.md#source-server-restart) for more details.
--
-When you start the create replica workflow, a blank Azure Database for MySQL server is created. The new server is filled with the data that was on the source server. The creation time depends on the amount of data on the source and the time since the last weekly full backup. The time can range from a few minutes to several hours. The replica server is always created in the same resource group and same subscription as the source server. If you want to create a replica server to a different resource group or different subscription, you can [move the replica server](../azure-resource-manager/management/move-resource-group-and-subscription.md) after creation.
-
-Every replica is enabled for storage [auto-grow](concepts-pricing-tiers.md#storage-auto-grow). The auto-grow feature allows the replica to keep up with the data replicated to it, and prevent an interruption in replication caused by out-of-storage errors.
-
-Learn how to [create a read replica in the Azure portal](howto-read-replicas-portal.md).
-
-## Connect to a replica
-
-At creation, a replica inherits the firewall rules of the source server. Afterwards, these rules are independent from the source server.
-
-The replica inherits the admin account from the source server. All user accounts on the source server are replicated to the read replicas. You can only connect to a read replica by using the user accounts that are available on the source server.
-
-You can connect to the replica by using its hostname and a valid user account, as you would on a regular Azure Database for MySQL server. For a server named **myreplica** with the admin username **myadmin**, you can connect to the replica by using the mysql CLI:
-
-```bash
-mysql -h myreplica.mysql.database.azure.com -u myadmin@myreplica -p
-```
-
-At the prompt, enter the password for the user account.
-
-## Monitor replication
-
-Azure Database for MySQL provides the **Replication lag in seconds** metric in Azure Monitor. This metric is available for replicas only. This metric is calculated using the `seconds_behind_master` metric available in MySQL's `SHOW SLAVE STATUS` command. Set an alert to inform you when the replication lag reaches a value that isn't acceptable for your workload.
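-
-To inspect the lag directly, you can also connect to the replica and run the same command the metric is based on; the `Seconds_Behind_Master` field in the output reports the current lag in seconds:
-
-```sql
-SHOW SLAVE STATUS;
-```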
-
-If you see increased replication lag, refer to [troubleshooting replication latency](howto-troubleshoot-replication-latency.md) to troubleshoot and understand possible causes.
-
-## Stop replication
-
-You can stop replication between a source and a replica. After replication is stopped between a source server and a read replica, the replica becomes a standalone server. The data in the standalone server is the data that was available on the replica at the time the stop replication command was started. The standalone server doesn't catch up with the source server.
-
-When you choose to stop replication to a replica, it loses all links to its previous source and other replicas. There's no automated failover between a source and its replica.
-
-> [!IMPORTANT]
-> The standalone server can't be made into a replica again.
-> Before you stop replication on a read replica, ensure the replica has all the data that you require.
-
-Learn how to [stop replication to a replica](howto-read-replicas-portal.md).
-
-## Failover
-
-There's no automated failover between source and replica servers.
-
-Since replication is asynchronous, there's lag between the source and the replica. The amount of lag can be influenced by many factors like how heavy the workload running on the source server is and the latency between data centers. In most cases, replica lag ranges from a few seconds to a couple of minutes. You can track your actual replication lag using the metric *Replica Lag*, which is available for each replica. This metric shows the time since the last replayed transaction. We recommend that you identify what your average lag is by observing your replica lag over a period of time. You can set an alert on replica lag, so that if it goes outside your expected range, you can take action.
-
-> [!Tip]
-> If you fail over to the replica, the lag at the time you delink the replica from the source indicates how much data is lost.
-
-After you've decided you want to failover to a replica:
-
-1. Stop replication to the replica<br/>
- This step is necessary to make the replica server able to accept writes. As part of this process, the replica server will be delinked from the source. After you initiate stop replication, the backend process typically takes about 2 minutes to complete. See the [stop replication](#stop-replication) section of this article to understand the implications of this action.
-
-2. Point your application to the (former) replica<br/>
- Each server has a unique connection string. Update your application to point to the (former) replica instead of the source.
-
-After your application is successfully processing reads and writes, you've completed the failover. The amount of downtime your application experiences will depend on when you detect an issue and complete steps 1 and 2 listed previously.
-
-## Global transaction identifier (GTID)
-
-Global transaction identifier (GTID) is a unique identifier created with each committed transaction on a source server and is OFF by default in Azure Database for MySQL. GTID is supported on versions 5.7 and 8.0, and only on servers that support storage up to 16 TB (general purpose storage v2). To learn more about GTID and how it's used in replication, refer to MySQL's [replication with GTID](https://dev.mysql.com/doc/refman/5.7/en/replication-gtids.html) documentation.
-
-MySQL supports two types of transactions: GTID transactions (identified with a GTID) and anonymous transactions (which don't have a GTID allocated).
-
-The following server parameters are available for configuring GTID:
-
-|**Server parameter**|**Description**|**Default Value**|**Values**|
-|--|--|--|--|
-|`gtid_mode`|Indicates if GTIDs are used to identify transactions. Changes between modes can only be done one step at a time in ascending order (ex. `OFF` -> `OFF_PERMISSIVE` -> `ON_PERMISSIVE` -> `ON`)|`OFF`|`OFF`: Both new and replication transactions must be anonymous <br> `OFF_PERMISSIVE`: New transactions are anonymous. Replicated transactions can either be anonymous or GTID transactions. <br> `ON_PERMISSIVE`: New transactions are GTID transactions. Replicated transactions can either be anonymous or GTID transactions. <br> `ON`: Both new and replicated transactions must be GTID transactions.|
-|`enforce_gtid_consistency`|Enforces GTID consistency by allowing execution of only those statements that can be logged in a transactionally safe manner. This value must be set to `ON` before enabling GTID replication. |`OFF`|`OFF`: All transactions are allowed to violate GTID consistency. <br> `ON`: No transaction is allowed to violate GTID consistency. <br> `WARN`: All transactions are allowed to violate GTID consistency, but a warning is generated. |
-
-> [!NOTE]
-> * After GTID is enabled, you can't turn it back off. If you need to turn GTID OFF, contact support.
->
-> * You can change `gtid_mode` only one step at a time, in ascending order of modes. For example, if `gtid_mode` is currently set to OFF_PERMISSIVE, you can change it to ON_PERMISSIVE but not to ON.
->
-> * To keep replication consistent, you can't update `gtid_mode` on a source or replica server.
->
-> * We recommend setting `enforce_gtid_consistency` to ON before you set `gtid_mode` to ON.
--
-To enable GTID and configure the consistency behavior, update the `gtid_mode` and `enforce_gtid_consistency` server parameters using the [Azure portal](howto-server-parameters.md), [Azure CLI](howto-configure-server-parameters-using-cli.md), or [PowerShell](howto-configure-server-parameters-using-powershell.md).
-
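-The following Azure CLI sketch (the resource group and server names are placeholders) illustrates the sequence, honoring the one-step-at-a-time requirement for `gtid_mode`:
-
-```bash
-# Enforce GTID consistency before enabling GTID replication.
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name enforce_gtid_consistency --value ON
-
-# gtid_mode can only be changed one step at a time, in ascending order.
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name gtid_mode --value OFF_PERMISSIVE
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name gtid_mode --value ON_PERMISSIVE
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name gtid_mode --value ON
-```
-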
-If GTID is enabled on a source server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID replication. In order to make sure that the replication is consistent, `gtid_mode` cannot be changed once the master or replica server(s) is created with GTID enabled.
-
-## Considerations and limitations
-
-### Pricing tiers
-
-Read replicas are currently only available in the General Purpose and Memory Optimized pricing tiers.
-
-> [!NOTE]
-> The cost of running the replica server is based on the region where the replica server is running.
-
-### Source server restart
-
-On servers that have general purpose storage v1, the `log_bin` parameter is OFF by default. The value is turned ON when you create the first read replica. If a source server has no existing read replicas, it will first restart to prepare itself for replication. Consider this restart and perform the operation during off-peak hours.
-
-On source servers that have general purpose storage v2, the `log_bin` parameter is ON by default, and no restart is required when you add a read replica.
-
-### New replicas
-
-A read replica is created as a new Azure Database for MySQL server. An existing server can't be made into a replica. You can't create a replica of another read replica.
-
-### Replica configuration
-
-A replica is created by using the same server configuration as the source. After a replica is created, several settings can be changed independently from the source server: compute generation, vCores, storage, and backup retention period. The pricing tier can also be changed independently, except to or from the Basic tier.
-
-> [!IMPORTANT]
-> Before a source server configuration is updated to new values, update the replica configuration to equal or greater values. This action ensures the replica can keep up with any changes made to the source.
-
-Firewall rules and parameter settings are inherited from the source server to the replica when the replica is created. Afterwards, the replica's rules are independent.
-
-### Stopped replicas
-
-If you stop replication between a source server and a read replica, the stopped replica becomes a standalone server that accepts both reads and writes. The standalone server can't be made into a replica again.
-
-### Deleted source and standalone servers
-
-When a source server is deleted, replication is stopped to all read replicas. These replicas automatically become standalone servers and can accept both reads and writes. The source server itself is deleted.
-
-### User accounts
-
-Users on the source server are replicated to the read replicas. You can only connect to a read replica using the user accounts available on the source server.
-
-### Server parameters
-
-To prevent data from becoming out of sync and to avoid potential data loss or corruption, some server parameters are locked from being updated when using read replicas.
-
-The following server parameters are locked on both the source and replica servers:
-
-* [`innodb_file_per_table`](https://dev.mysql.com/doc/refman/8.0/en/innodb-file-per-table-tablespaces.html)
-* [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators)
-
-The [`event_scheduler`](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_event_scheduler) parameter is locked on the replica servers.
-
-To update one of the above parameters on the source server, delete replica servers, update the parameter value on the source, and recreate replicas.
-
-### GTID
-
-GTID is supported on:
-
-* MySQL versions 5.7 and 8.0.
-* Servers that support storage up to 16 TB. Refer to the [pricing tier](concepts-pricing-tiers.md#storage) article for the full list of regions that support 16 TB storage.
-
-GTID is OFF by default. After GTID is enabled, you can't turn it back off. If you need to turn GTID OFF, contact support.
-
-If GTID is enabled on a source server, newly created replicas will also have GTID enabled and use GTID replication. To keep replication consistent, you can't update `gtid_mode` on the source or replica server(s).
-
-### Other
-
-* Creating a replica of a replica isn't supported.
-* In-memory tables may cause replicas to become out of sync. This is a limitation of the MySQL replication technology. For more information, see the [MySQL reference documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features-memory.html).
-* Ensure the source server tables have primary keys. Lack of primary keys may result in replication latency between the source and replicas.
-* Review the full list of MySQL replication limitations in the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html).
-
-## Next steps
-
-* Learn how to [create and manage read replicas using the Azure portal](howto-read-replicas-portal.md)
-* Learn how to [create and manage read replicas using the Azure CLI and REST API](howto-read-replicas-cli.md)
mysql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-security.md
- Title: Security - Azure Database for MySQL
-description: An overview of the security features in Azure Database for MySQL.
- Previously updated: 3/18/2020
-# Security in Azure Database for MySQL
--
-There are multiple layers of security that are available to protect the data on your Azure Database for MySQL server. This article outlines those security options.
-
-## Information protection and encryption
-
-### In-transit
-Azure Database for MySQL secures your data by encrypting data in-transit with Transport Layer Security. Encryption (SSL/TLS) is enforced by default.
-
-### At-rest
-The Azure Database for MySQL service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, is encrypted on disk, including the temporary files created while running queries. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. Storage encryption is always on and can't be disabled.
--
-## Network security
-Connections to an Azure Database for MySQL server are first routed through a regional gateway. The gateway has a publicly accessible IP, while the server IP addresses are protected. For more information about the gateway, visit the [connectivity architecture article](concepts-connectivity-architecture.md).
-
-A newly created Azure Database for MySQL server has a firewall that blocks all external connections. Although they reach the gateway, these connections aren't allowed to reach the server.
-
-### IP firewall rules
-IP firewall rules grant access to servers based on the originating IP address of each request. See the [firewall rules overview](concepts-firewall-rules.md) for more information.
-
-### Virtual network firewall rules
-Virtual network service endpoints extend your virtual network connectivity over the Azure backbone. Using virtual network rules you can enable your Azure Database for MySQL server to allow connections from selected subnets in a virtual network. For more information, see the [virtual network service endpoint overview](concepts-data-access-and-security-vnet.md).
-
-### Private IP
-Private Link allows you to connect to your Azure Database for MySQL in Azure via a private endpoint. Azure Private Link essentially brings Azure services inside your private Virtual Network (VNet). The PaaS resources can be accessed using the private IP address just like any other resource in the VNet. For more information, see the [private link overview](concepts-data-access-security-private-link.md).
-
-## Access management
-
-While creating the Azure Database for MySQL server, you provide credentials for an administrator user. You can use this administrator account to create additional MySQL users.
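-
-For example, after connecting as the administrator, a minimal sketch for creating an additional user with access to a single database (the user name, password, and database name are placeholders):
-
-```sql
-CREATE USER 'db_user'@'%' IDENTIFIED BY 'StrongPassword!';
-GRANT SELECT, INSERT, UPDATE, DELETE ON testdb.* TO 'db_user'@'%';
-FLUSH PRIVILEGES;
-```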
--
-## Threat protection
-
-You can opt in to [Microsoft Defender for open-source relational databases](../security-center/defender-for-databases-introduction.md) which detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit servers.
-
-[Audit logging](concepts-audit-logs.md) is available to track activity in your databases.
--
-## Next steps
-- Enable firewall rules for [IPs](concepts-firewall-rules.md) or [virtual networks](concepts-data-access-and-security-vnet.md)
mysql Concepts Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-server-logs.md
- Title: Slow query logs - Azure Database for MySQL
-description: Describes the slow query logs available in Azure Database for MySQL, and the available parameters for enabling different logging levels.
- Previously updated: 11/6/2020
-# Slow query logs in Azure Database for MySQL
-
-In Azure Database for MySQL, the slow query log is available to users. Access to the transaction log is not supported. The slow query log can be used to identify performance bottlenecks for troubleshooting.
-
-For more information about the MySQL slow query log, see the MySQL reference manual's [slow query log section](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html).
-
-When [Query Store](concepts-query-store.md) is enabled on your server, you may see queries like "`CALL mysql.az_procedure_collect_wait_stats (900, 30);`" logged in your slow query logs. This behavior is expected as the Query Store feature collects statistics about your queries.
-
-## Configure slow query logging
-By default, the slow query log is disabled. To enable it, set `slow_query_log` to ON. This can be done by using the Azure portal or the Azure CLI (see the CLI sketch later in this section).
-
-Other parameters you can adjust include:
-
-- **long_query_time**: if a query takes longer than `long_query_time` (in seconds), that query is logged. The default is 10 seconds.
-- **log_slow_admin_statements**: if ON, includes administrative statements like ALTER TABLE and ANALYZE TABLE in the statements written to the slow_query_log.
-- **log_queries_not_using_indexes**: determines whether queries that don't use indexes are logged to the slow_query_log.
-- **log_throttle_queries_not_using_indexes**: limits the number of non-indexed queries that can be written to the slow query log. This parameter takes effect when log_queries_not_using_indexes is set to ON.
-- **log_output**: if "File", allows the slow query log to be written to both the local server storage and to Azure Monitor Diagnostic Logs. If "None", the slow query log is only written to Azure Monitor Diagnostic Logs.
-
-> [!IMPORTANT]
-> If your tables are not indexed, setting the `log_queries_not_using_indexes` and `log_throttle_queries_not_using_indexes` parameters to ON may affect MySQL performance since all queries running against these non-indexed tables will be written to the slow query log.<br><br>
-> If you plan on logging slow queries for an extended period of time, it is recommended to set `log_output` to "None". If set to "File", these logs are written to the local server storage and can affect MySQL performance.
-
-See the MySQL [slow query log documentation](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html) for full descriptions of the slow query log parameters.
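-
-For example, a minimal Azure CLI sketch (the resource group and server names are placeholders) that turns on the slow query log and lowers the threshold to five seconds:
-
-```bash
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name slow_query_log --value ON
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name long_query_time --value 5
-```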
-
-## Access slow query logs
-There are two options for accessing slow query logs in Azure Database for MySQL: local server storage or Azure Monitor Diagnostic Logs. This is set using the `log_output` parameter.
-
-For local server storage, you can list and download slow query logs using the Azure portal or the Azure CLI. In the Azure portal, navigate to your server in the Azure portal. Under the **Monitoring** heading, select the **Server Logs** page. For more information on Azure CLI, see [Configure and access slow query logs using Azure CLI](howto-configure-server-logs-in-cli.md).
-
-Azure Monitor Diagnostic Logs allows you to pipe slow query logs to Azure Monitor Logs (Log Analytics), Azure Storage, or Event Hubs. See [below](concepts-server-logs.md#diagnostic-logs) for more information.
-
-## Local server storage log retention
-When logging to the server's local storage, logs are available for up to seven days from their creation. If the total size of the available logs exceeds 7 GB, then the oldest files are deleted until space is available. The 7 GB storage limit for the server logs is available free of cost and can't be extended.
-
-Logs are rotated every 24 hours or 7 GB, whichever comes first.
-
-> [!Note]
-> The above log retention does not apply to logs that are piped using Azure Monitor Diagnostic Logs. You can change the retention period for the data sinks being emitted to (ex. Azure Storage).
-
-## Diagnostic logs
-Azure Database for MySQL is integrated with Azure Monitor Diagnostic Logs. Once you have enabled slow query logs on your MySQL server, you can choose to have them emitted to Azure Monitor logs, Event Hubs, or Azure Storage. To learn more about how to enable diagnostic logs, see the how to section of the [diagnostic logs documentation](../azure-monitor/essentials/platform-logs-overview.md).
-
->[!Note]
->Premium storage accounts aren't supported if you send the logs to Azure Storage via diagnostic settings.
-
-The following table describes what's in each log. Depending on the output method, the fields included and the order in which they appear may vary.
-
-| **Property** | **Description** |
-|||
-| `TenantId` | Your tenant ID |
-| `SourceSystem` | `Azure` |
-| `TimeGenerated` [UTC] | Time stamp when the log was recorded in UTC |
-| `Type` | Type of the log. Always `AzureDiagnostics` |
-| `SubscriptionId` | GUID for the subscription that the server belongs to |
-| `ResourceGroup` | Name of the resource group the server belongs to |
-| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` |
-| `ResourceType` | `Servers` |
-| `ResourceId` | Resource URI |
-| `Resource` | Name of the server |
-| `Category` | `MySqlSlowLogs` |
-| `OperationName` | `LogEvent` |
-| `Logical_server_name_s` | Name of the server |
-| `start_time_t` [UTC] | Time the query began |
-| `query_time_s` | Total time in seconds the query took to execute |
-| `lock_time_s` | Total time in seconds the query was locked |
-| `user_host_s` | Username |
-| `rows_sent_d` | Number of rows sent |
-| `rows_examined_s` | Number of rows examined |
-| `last_insert_id_s` | [last_insert_id](https://dev.mysql.com/doc/refman/8.0/en/information-functions.html#function_last-insert-id) |
-| `insert_id_s` | Insert ID |
-| `sql_text_s` | Full query |
-| `server_id_s` | The server's ID |
-| `thread_id_s` | Thread ID |
-| `\_ResourceId` | Resource URI |
-
-> [!Note]
-> For `sql_text`, the log will be truncated if it exceeds 2048 characters.
-
-## Analyze logs in Azure Monitor Logs
-
-Once your slow query logs are piped to Azure Monitor Logs through Diagnostic Logs, you can perform further analysis of your slow queries. Below are some sample queries to help you get started. Make sure to update the queries with your server name.
-
-- Queries longer than 10 seconds on a particular server
-
- ```Kusto
- AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
- | where Category == 'MySqlSlowLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t , query_time_d, sql_text_s
- | where query_time_d > 10
- ```
-
-- List top 5 longest queries on a particular server
-
- ```Kusto
- AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
- | where Category == 'MySqlSlowLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t , query_time_d, sql_text_s
- | order by query_time_d desc
- | take 5
- ```
-
-- Summarize slow queries by minimum, maximum, average, and standard deviation query time on a particular server
-
- ```Kusto
- AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
- | where Category == 'MySqlSlowLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t , query_time_d, sql_text_s
- | summarize count(), min(query_time_d), max(query_time_d), avg(query_time_d), stdev(query_time_d), percentile(query_time_d, 95) by LogicalServerName_s
- ```
-
-- Graph the slow query distribution on a particular server
-
- ```Kusto
- AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
- | where Category == 'MySqlSlowLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t , query_time_d, sql_text_s
- | summarize count() by LogicalServerName_s, bin(TimeGenerated, 5m)
- | render timechart
- ```
-
-- Display queries longer than 10 seconds across all MySQL servers with Diagnostic Logs enabled
-
- ```Kusto
- AzureDiagnostics
- | where Category == 'MySqlSlowLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t , query_time_d, sql_text_s
- | where query_time_d > 10
- ```
-
-## Next steps
-
-- [How to configure slow query logs from the Azure portal](howto-configure-server-logs-in-portal.md)
-- [How to configure slow query logs from the Azure CLI](howto-configure-server-logs-in-cli.md)
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-server-parameters.md
- Title: Server parameters - Azure Database for MySQL
-description: This topic provides guidelines for configuring server parameters in Azure Database for MySQL.
- Previously updated: 1/26/2021
-# Server parameters in Azure Database for MySQL
--
-This article provides considerations and guidelines for configuring server parameters in Azure Database for MySQL.
-
-## What are server parameters?
-
-The MySQL engine provides many different server variables and parameters that you use to configure and tune engine behavior. Some parameters can be set dynamically during runtime, while others are static and require a server restart to apply.
-
-Azure Database for MySQL exposes the ability to change the value of various MySQL server parameters by using the [Azure portal](./howto-server-parameters.md), the [Azure CLI](./howto-configure-server-parameters-using-cli.md), and [PowerShell](./howto-configure-server-parameters-using-powershell.md) to match your workload's needs.
-
-## Configurable server parameters
-
-The list of supported server parameters is constantly growing. In the Azure portal, use the server parameters tab to view the full list and configure server parameter values.
-
-Refer to the following sections to learn more about the limits of several commonly updated server parameters. The limits are determined by the pricing tier and vCores of the server.
-
-### Thread pools
-
-MySQL traditionally assigns a thread for every client connection. As the number of concurrent users grows, there is a corresponding drop in performance. Many active threads can significantly affect performance, due to increased context switching, thread contention, and bad locality for CPU caches.
-
-*Thread pools*, a server-side feature and distinct from connection pooling, maximize performance by introducing a dynamic pool of worker threads. You use this feature to limit the number of active threads running on the server and minimize thread churn. This helps ensure that a burst of connections won't cause the server to run out of resources or memory. Thread pools are most efficient for short queries and CPU intensive workloads, such as OLTP workloads.
-
-For more information, see [Introducing thread pools in Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/introducing-thread-pools-in-azure-database-for-mysql-service/ba-p/1504173).
-
-> [!NOTE]
-> Thread pools aren't supported for MySQL 5.6.
-
-### Configure the thread pool
-
-To enable a thread pool, update the `thread_handling` server parameter to `pool-of-threads`. By default, this parameter is set to `one-thread-per-connection`, which means MySQL creates a new thread for each new connection. This is a static parameter, and requires a server restart to apply.
-
-You can also configure the maximum and minimum number of threads in the pool by setting the following server parameters:
-
-- `thread_pool_max_threads`: This value ensures that there won't be more than this number of threads in the pool.
-- `thread_pool_min_threads`: This value sets the number of threads that will be reserved even after connections are closed.
-
-To improve the performance of short queries on the thread pool, you can enable *batch execution*. Instead of returning to the thread pool immediately after running a query, threads stay active for a short time, waiting for the next query on the same connection. The thread then runs the query rapidly and, when this is complete, waits for the next one. This process continues until the overall time spent exceeds a threshold.
-
-You determine the behavior of batch execution by using the following server parameters:
-
-- `thread_pool_batch_wait_timeout`: This value specifies the time a thread waits for another query to process.
-- `thread_pool_batch_max_time`: This value determines the maximum time a thread will repeat the cycle of query execution and waiting for the next query.
-
-> [!IMPORTANT]
-> Don't turn on the thread pool in production until you've tested it.
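-
-When you do test it, a minimal Azure CLI sketch (placeholder names) looks like the following. Because `thread_handling` is static, the server must be restarted for the change to apply:
-
-```bash
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name thread_handling --value pool-of-threads
-
-# thread_handling is static, so restart the server to apply the change.
-az mysql server restart --resource-group myresourcegroup --name mydemoserver
-```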
-
-### log_bin_trust_function_creators
-
-In Azure Database for MySQL, binary logs are always enabled (the `log_bin` parameter is set to `ON`). If you want to use triggers, you get an error similar to the following: *You do not have the SUPER privilege and binary logging is enabled (you might want to use the less safe `log_bin_trust_function_creators` variable)*.
-
-The binary logging format is always **ROW**, and all connections to the server *always* use row-based binary logging. Row-based binary logging helps maintain security, and binary logging can't break, so you can safely set [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) to `TRUE`.
-
-### innodb_buffer_pool_size
-
-Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size) to learn more about this parameter.
-
-#### Servers on [general purpose storage v1 (supporting up to 4 TB)](concepts-pricing-tiers.md#general-purpose-storage-v1-supports-up-to-4-tb)
-
-|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**|
-||||||
-|Basic|1|872415232|134217728|872415232|
-|Basic|2|2684354560|134217728|2684354560|
-|General Purpose|2|3758096384|134217728|3758096384|
-|General Purpose|4|8053063680|134217728|8053063680|
-|General Purpose|8|16106127360|134217728|16106127360|
-|General Purpose|16|32749125632|134217728|32749125632|
-|General Purpose|32|66035122176|134217728|66035122176|
-|General Purpose|64|132070244352|134217728|132070244352|
-|Memory Optimized|2|7516192768|134217728|7516192768|
-|Memory Optimized|4|16106127360|134217728|16106127360|
-|Memory Optimized|8|32212254720|134217728|32212254720|
-|Memory Optimized|16|65498251264|134217728|65498251264|
-|Memory Optimized|32|132070244352|134217728|132070244352|
-
-#### Servers on [general purpose storage v2 (supporting up to 16 TB)](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage)
-
-|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**|
-||||||
-|Basic|1|872415232|134217728|872415232|
-|Basic|2|2684354560|134217728|2684354560|
-|General Purpose|2|7516192768|134217728|7516192768|
-|General Purpose|4|16106127360|134217728|16106127360|
-|General Purpose|8|32212254720|134217728|32212254720|
-|General Purpose|16|65498251264|134217728|65498251264|
-|General Purpose|32|132070244352|134217728|132070244352|
-|General Purpose|64|264140488704|134217728|264140488704|
-|Memory Optimized|2|15032385536|134217728|15032385536|
-|Memory Optimized|4|32212254720|134217728|32212254720|
-|Memory Optimized|8|64424509440|134217728|64424509440|
-|Memory Optimized|16|130996502528|134217728|130996502528|
-|Memory Optimized|32|264140488704|134217728|264140488704|
-
-### innodb_file_per_table
-
-MySQL stores the `InnoDB` table in different tablespaces, based on the configuration you provide during the table creation. The [system tablespace](https://dev.mysql.com/doc/refman/5.7/en/innodb-system-tablespace.html) is the storage area for the `InnoDB` data dictionary. A [file-per-table tablespace](https://dev.mysql.com/doc/refman/5.7/en/innodb-file-per-table-tablespaces.html) contains data and indexes for a single `InnoDB` table, and is stored in the file system in its own data file.
-
-You control this behavior by using the `innodb_file_per_table` server parameter. Setting `innodb_file_per_table` to `OFF` causes `InnoDB` to create tables in the system tablespace. Otherwise, `InnoDB` creates tables in file-per-table tablespaces.
-
-> [!NOTE]
-> You can only update `innodb_file_per_table` in the general purpose and memory optimized pricing tiers on [general purpose storage v2](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage) and [general purpose storage v1](concepts-pricing-tiers.md#general-purpose-storage-v1-supports-up-to-4-tb).
-
-Azure Database for MySQL supports 4 TB (at the largest) in a single data file on [general purpose storage v2](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage). If your database size is larger than 4 TB, you should create the table in the [innodb_file_per_table](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_file_per_table) tablespace. If a single table is larger than 4 TB, you should use a partitioned table.
-
-### join_buffer_size
-
-Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_join_buffer_size) to learn more about this parameter.
-
-|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**|
-||||||
-|Basic|1|Not configurable in Basic tier|N/A|N/A|
-|Basic|2|Not configurable in Basic tier|N/A|N/A|
-|General Purpose|2|262144|128|268435455|
-|General Purpose|4|262144|128|536870912|
-|General Purpose|8|262144|128|1073741824|
-|General Purpose|16|262144|128|2147483648|
-|General Purpose|32|262144|128|4294967295|
-|General Purpose|64|262144|128|4294967295|
-|Memory Optimized|2|262144|128|536870912|
-|Memory Optimized|4|262144|128|1073741824|
-|Memory Optimized|8|262144|128|2147483648|
-|Memory Optimized|16|262144|128|4294967295|
-|Memory Optimized|32|262144|128|4294967295|
-
-### max_connections
-
-|**Pricing tier**|**vCore(s)**|**Default value**|**Min value**|**Max value**|
-||||||
-|Basic|1|50|10|50|
-|Basic|2|100|10|100|
-|General Purpose|2|300|10|600|
-|General Purpose|4|625|10|1250|
-|General Purpose|8|1250|10|2500|
-|General Purpose|16|2500|10|5000|
-|General Purpose|32|5000|10|10000|
-|General Purpose|64|10000|10|20000|
-|Memory Optimized|2|625|10|1250|
-|Memory Optimized|4|1250|10|2500|
-|Memory Optimized|8|2500|10|5000|
-|Memory Optimized|16|5000|10|10000|
-|Memory Optimized|32|10000|10|20000|
-
-When the number of connections exceeds the limit, you might receive an error.
-
-> [!TIP]
-> To manage connections efficiently, it's a good idea to use a connection pooler, like ProxySQL. To learn about setting up ProxySQL, see the blog post [Load balance read replicas using ProxySQL in Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/load-balance-read-replicas-using-proxysql-in-azure-database-for/ba-p/880042). Note that ProxySQL is an open source community tool. It's supported by Microsoft on a best-effort basis.
-
-### max_heap_table_size
-
-Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_max_heap_table_size) to learn more about this parameter.
-
-|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**|
-||||||
-|Basic|1|Not configurable in Basic tier|N/A|N/A|
-|Basic|2|Not configurable in Basic tier|N/A|N/A|
-|General Purpose|2|16777216|16384|268435455|
-|General Purpose|4|16777216|16384|536870912|
-|General Purpose|8|16777216|16384|1073741824|
-|General Purpose|16|16777216|16384|2147483648|
-|General Purpose|32|16777216|16384|4294967295|
-|General Purpose|64|16777216|16384|4294967295|
-|Memory Optimized|2|16777216|16384|536870912|
-|Memory Optimized|4|16777216|16384|1073741824|
-|Memory Optimized|8|16777216|16384|2147483648|
-|Memory Optimized|16|16777216|16384|4294967295|
-|Memory Optimized|32|16777216|16384|4294967295|
-
-### query_cache_size
-
-The query cache is turned off by default. To enable the query cache, configure the `query_cache_type` parameter.
-
-Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_query_cache_size) to learn more about this parameter.
-
-> [!NOTE]
-> The query cache is deprecated as of MySQL 5.7.20 and has been removed in MySQL 8.0.
-
-|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value**|
-||||||
-|Basic|1|Not configurable in Basic tier|N/A|N/A|
-|Basic|2|Not configurable in Basic tier|N/A|N/A|
-|General Purpose|2|0|0|16777216|
-|General Purpose|4|0|0|33554432|
-|General Purpose|8|0|0|67108864|
-|General Purpose|16|0|0|134217728|
-|General Purpose|32|0|0|134217728|
-|General Purpose|64|0|0|134217728|
-|Memory Optimized|2|0|0|33554432|
-|Memory Optimized|4|0|0|67108864|
-|Memory Optimized|8|0|0|134217728|
-|Memory Optimized|16|0|0|134217728|
-|Memory Optimized|32|0|0|134217728|
-
-### lower_case_table_names
-
-The `lower_case_table_names` parameter is set to 1 by default, and you can update this parameter in MySQL 5.6 and MySQL 5.7.
-
-Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_lower_case_table_names) to learn more about this parameter.
-
-> [!NOTE]
-> In MySQL 8.0, `lower_case_table_names` is set to 1 by default, and you can't change it.
-
-### innodb_strict_mode
-
-If you receive an error similar to `Row size too large (> 8126)`, consider turning off the `innodb_strict_mode` parameter. You can't modify `innodb_strict_mode` globally at the server level. If row data size is larger than 8K, the data is truncated, without an error notification, leading to potential data loss. It's a good idea to modify the schema to fit the page size limit.
-
-You can set this parameter at a session level, by using `init_connect`. To set `innodb_strict_mode` at a session level, refer to [setting parameters not listed](./howto-server-parameters.md#setting-parameters-not-listed).
-
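-For example, a minimal Azure CLI sketch (placeholder names) that applies the session-level setting through `init_connect`, which runs for each new non-admin connection:
-
-```bash
-az mysql server configuration set --resource-group myresourcegroup \
-    --server-name mydemoserver --name init_connect --value "SET innodb_strict_mode=OFF"
-```
-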
-> [!NOTE]
-> If you have a read replica server, setting `innodb_strict_mode` to `OFF` at the session-level on a source server will break the replication. We suggest keeping the parameter set to `ON` if you have read replicas.
-
-### sort_buffer_size
-
-Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_sort_buffer_size) to learn more about this parameter.
-
-|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**|
-||||||
-|Basic|1|Not configurable in Basic tier|N/A|N/A|
-|Basic|2|Not configurable in Basic tier|N/A|N/A|
-|General Purpose|2|524288|32768|4194304|
-|General Purpose|4|524288|32768|8388608|
-|General Purpose|8|524288|32768|16777216|
-|General Purpose|16|524288|32768|33554432|
-|General Purpose|32|524288|32768|33554432|
-|General Purpose|64|524288|32768|33554432|
-|Memory Optimized|2|524288|32768|8388608|
-|Memory Optimized|4|524288|32768|16777216|
-|Memory Optimized|8|524288|32768|33554432|
-|Memory Optimized|16|524288|32768|33554432|
-|Memory Optimized|32|524288|32768|33554432|
-
-### tmp_table_size
-
-Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_tmp_table_size) to learn more about this parameter.
-
-|**Pricing tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**|
-||||||
-|Basic|1|Not configurable in Basic tier|N/A|N/A|
-|Basic|2|Not configurable in Basic tier|N/A|N/A|
-|General Purpose|2|16777216|1024|67108864|
-|General Purpose|4|16777216|1024|134217728|
-|General Purpose|8|16777216|1024|268435456|
-|General Purpose|16|16777216|1024|536870912|
-|General Purpose|32|16777216|1024|1073741824|
-|General Purpose|64|16777216|1024|1073741824|
-|Memory Optimized|2|16777216|1024|134217728|
-|Memory Optimized|4|16777216|1024|268435456|
-|Memory Optimized|8|16777216|1024|536870912|
-|Memory Optimized|16|16777216|1024|1073741824|
-|Memory Optimized|32|16777216|1024|1073741824|
-
-### InnoDB buffer pool warmup
-
-After you restart Azure Database for MySQL, the data pages that reside on disk are loaded as the tables are queried. This leads to increased latency and slower performance for the first run of the queries. For workloads that are sensitive to latency, you might find this slower performance unacceptable.
-
-You can use `InnoDB` buffer pool warmup to shorten the warmup period. This process reloads disk pages that were in the buffer pool *before* the restart, rather than waiting for DML or SELECT operations to access corresponding rows. For more information, see [InnoDB buffer pool server parameters](https://dev.mysql.com/doc/refman/8.0/en/innodb-preload-buffer-pool.html).
-
-Note that improved performance comes at the expense of longer start-up time for the server. When you enable this parameter, the server startup and restart time is expected to increase, depending on the IOPS provisioned on the server. It's a good idea to test and monitor the restart time, to ensure that the start-up or restart performance is acceptable, because the server is unavailable during that time. Don't use this parameter when the IOPS provisioned is less than 1000 IOPS (in other words, when the storage provisioned is less than 335 GB).
-
-To save the state of the buffer pool at server shutdown, set the server parameter `innodb_buffer_pool_dump_at_shutdown` to `ON`. Similarly, set the server parameter `innodb_buffer_pool_load_at_startup` to `ON` to restore the buffer pool state at server startup. You can control the impact on start-up or restart by lowering and fine-tuning the value of the server parameter `innodb_buffer_pool_dump_pct`. By default, this parameter is set to `25`.
-
-> [!Note]
-> `InnoDB` buffer pool warmup parameters are only supported in general purpose storage servers with up to 16 TB storage. For more information, see [Azure Database for MySQL storage options](./concepts-pricing-tiers.md#storage).
-
-### time_zone
-
-Upon initial deployment, a server running Azure Database for MySQL includes systems tables for time zone information, but these tables aren't populated. You can populate the tables by calling the `mysql.az_load_timezone` stored procedure from tools like the MySQL command line or MySQL Workbench. For information about how to call the stored procedures and set the global or session-level time zones, see [Working with the time zone parameter (Azure portal)](howto-server-parameters.md#working-with-the-time-zone-parameter) or [Working with the time zone parameter (Azure CLI)](howto-configure-server-parameters-using-cli.md#working-with-the-time-zone-parameter).
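-
-For example, a minimal sketch from the MySQL command line that populates the time zone tables and then sets a session-level time zone:
-
-```sql
-CALL mysql.az_load_timezone();   -- populate the time zone tables (run once per server)
-SET time_zone = 'US/Pacific';    -- set the session-level time zone
-SELECT NOW();                    -- verify the session time zone
-```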
-
-### binlog_expire_logs_seconds
-
-In Azure Database for MySQL, this parameter specifies the number of seconds the service waits before purging the binary log file.
-
-The *binary log* contains events that describe database changes, such as table creation operations or changes to table data. It also contains events for statements that can potentially make changes. The binary log is used mainly for two purposes, replication and data recovery operations.
-
-Usually, the binary logs are purged as soon as the handle to them is freed by the service, a backup, or a read replica. If there are multiple replicas, the binary logs are purged only after the slowest replica has read the changes. If you want binary logs to persist longer, you can configure the parameter `binlog_expire_logs_seconds`. If you set `binlog_expire_logs_seconds` to `0`, which is the default value, the binary log is purged as soon as the handle to it is freed. If you set `binlog_expire_logs_seconds` to a value greater than 0, the binary log is purged only after that period of time.
-
-For Azure Database for MySQL, managed features like backup and read replica purging of binary files are handled internally. When you replicate data out of the Azure Database for MySQL service, you must set this parameter on the primary to avoid purging binary logs before the replica has read the changes from the primary. If you set `binlog_expire_logs_seconds` to a higher value, the binary logs aren't purged soon enough, which can increase your storage billing.
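For example, to keep binary logs for seven days (604800 seconds), you might set the parameter from the Azure CLI. A sketch with placeholder resource group and server names, assuming the parameter is exposed on your server's engine version:

```azurecli
az mysql server configuration set --resource-group myresourcegroup \
    --server-name mydemoserver --name binlog_expire_logs_seconds --value 604800
```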
-
-## Non-configurable server parameters
-
-The following server parameters aren't configurable in the service:
-
-|**Parameter**|**Fixed value**|
-| :-- | :-- |
-|`innodb_file_per_table` in the basic tier|OFF|
-|`innodb_flush_log_at_trx_commit`|1|
-|`sync_binlog`|1|
-|`innodb_log_file_size`|256 MB|
-|`innodb_log_files_in_group`|2|
-
-Other variables not listed here are set to the default MySQL values. Refer to the MySQL docs for versions [8.0](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html), [5.7](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html), and [5.6](https://dev.mysql.com/doc/refman/5.6/en/server-system-variables.html).
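You can verify the fixed values from any MySQL client. A minimal sketch, with placeholder server and admin names:

```bash
mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
    -e "SHOW VARIABLES WHERE Variable_name IN ('innodb_log_file_size', 'sync_binlog');"
```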
-
-## Next steps
-- Learn how to [configure server parameters by using the Azure portal](./howto-server-parameters.md)
-- Learn how to [configure server parameters by using the Azure CLI](./howto-configure-server-parameters-using-cli.md)
-- Learn how to [configure server parameters by using PowerShell](./howto-configure-server-parameters-using-powershell.md)
mysql Concepts Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-servers.md
- Title: Server concepts - Azure Database for MySQL
-description: This topic provides considerations and guidelines for working with Azure Database for MySQL servers.
----- Previously updated : 3/18/2020-
-# Server concepts in Azure Database for MySQL
--
-This article provides considerations and guidelines for working with Azure Database for MySQL servers.
-
-## What is an Azure Database for MySQL server?
-
-An Azure Database for MySQL server is a central administrative point for multiple databases. It is the same MySQL server construct that you may be familiar with in the on-premises world. Specifically, the Azure Database for MySQL service is managed, provides performance guarantees, and exposes access and features at the server level.
-
-An Azure Database for MySQL server:
-- Is created within an Azure subscription.
-- Is the parent resource for databases.
-- Provides a namespace for databases.
-- Is a container with strong lifetime semantics - delete a server and it deletes the contained databases.
-- Collocates resources in a region.
-- Provides a connection endpoint for server and database access.
-- Provides the scope for management policies that apply to its databases: login, firewall, users, roles, configurations, etc.
-- Is available in multiple versions. For more information, see [Supported Azure Database for MySQL database versions](./concepts-supported-versions.md).
-Within an Azure Database for MySQL server, you can create one or multiple databases. You can opt to create a single database per server to use all the resources or to create multiple databases to share the resources. The pricing is structured per-server, based on the configuration of pricing tier, vCores, and storage (GB). For more information, see [Pricing tiers](./concepts-pricing-tiers.md).
-
-## How do I connect and authenticate to an Azure Database for MySQL server?
-
-The following elements help ensure safe access to your database.
-
-| Security concept | Description |
-| :-- | :-- |
-| **Authentication and authorization** | Azure Database for MySQL server supports native MySQL authentication. You can connect and authenticate to a server with the server's admin login. |
-| **Protocol** | The service supports a message-based protocol used by MySQL. |
-| **TCP/IP** | The protocol is supported over TCP/IP and over Unix-domain sockets. |
-| **Firewall** | To help protect your data, a firewall rule prevents all access to your database server, until you specify which computers have permission. See [Azure Database for MySQL Server firewall rules](./concepts-firewall-rules.md). |
-| **SSL** | The service supports enforcing SSL connections between your applications and your database server. See [Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](./howto-configure-ssl.md). |
-
-## Stop/Start an Azure Database for MySQL
-
-Azure Database for MySQL gives you the ability to **Stop** the server when it's not in use and **Start** it when you resume activity, so that you pay for compute only while the server is in use. This is especially valuable for dev/test workloads, or when you use the server for only part of the day. When you stop the server, all active connections are dropped. Later, when you want to bring the server back online, you can use the [Azure portal](how-to-stop-start-server.md) or the [CLI](how-to-stop-start-server.md).
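From the Azure CLI, stopping and starting a Single Server looks like the following sketch; the resource group and server names are placeholders:

```azurecli
# Stop the server when it's not needed; compute billing pauses while stopped.
az mysql server stop --resource-group myresourcegroup --name mydemoserver

# Start it again when you resume activity.
az mysql server start --resource-group myresourcegroup --name mydemoserver
```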
-
-When the server is in the **Stopped** state, the server's compute is not billed. However, storage continues to be billed, because the server's storage is retained so that data files are available when the server is started again.
-
-> [!IMPORTANT]
-> When you **Stop** the server, it remains stopped for up to 7 days. If you do not manually **Start** it during this time, the server is automatically started at the end of the 7 days. You can choose to **Stop** it again if you are not using the server.
-
-While the server is stopped, no management operations can be performed on it. To change any configuration settings on the server, you need to [start the server](how-to-stop-start-server.md).
-
-### Limitations of Stop/start operation
-- Not supported with read replica configurations (both source and replicas).
-## How do I manage a server?
-
-You can manage the creation, deletion, server parameter configuration (my.cnf), scaling, networking, security, high availability, backup and restore, and monitoring of your Azure Database for MySQL servers by using the Azure portal or the Azure CLI. In addition, because the SUPER privilege is not supported on the server, the following stored procedures are available in Azure Database for MySQL to perform certain database administration tasks that would otherwise require it. An example follows the table.
-
-|**Stored Procedure Name**|**Input Parameters**|**Output Parameters**|**Usage Note**|
-|--|--|--|--|
-|*mysql.az_kill*|processlist_id|N/A|Equivalent to [`KILL CONNECTION`](https://dev.mysql.com/doc/refman/8.0/en/kill.html) command. Will terminate the connection associated with the provided processlist_id after terminating any statement the connection is executing.|
-|*mysql.az_kill_query*|processlist_id|N/A|Equivalent to [`KILL QUERY`](https://dev.mysql.com/doc/refman/8.0/en/kill.html) command. Will terminate the statement the connection is currently executing. Leaves the connection itself alive.|
-|*mysql.az_load_timezone*|N/A|N/A|Loads [time zone tables](howto-server-parameters.md#working-with-the-time-zone-parameter) to allow the `time_zone` parameter to be set to named values (ex. "US/Pacific").|
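For example, to terminate a runaway connection, first look up its process list ID and then pass it to `mysql.az_kill`. A minimal sketch with placeholder server and admin names; `123` is a hypothetical processlist_id:

```bash
# Find the processlist_id of the connection to terminate.
mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
    -e "SHOW PROCESSLIST;"

# Terminate that connection (123 is an example processlist_id).
mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
    -e "CALL mysql.az_kill(123);"
```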
-
-## Next steps
-- For an overview of the service, see [Azure Database for MySQL Overview](./overview.md)
-- For information about specific resource quotas and limitations based on your **pricing tier**, see [Pricing tiers](./concepts-pricing-tiers.md)
-- For information about connecting to the service, see [Connection libraries for Azure Database for MySQL](./concepts-connection-libraries.md).
mysql Concepts Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-ssl-connection-security.md
- Title: SSL/TLS connectivity - Azure Database for MySQL
-description: Information for configuring Azure Database for MySQL and associated applications to properly use SSL connections
----- Previously updated : 07/09/2020--
-# SSL/TLS connectivity in Azure Database for MySQL
--
-Azure Database for MySQL supports connecting your database server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against "man in the middle" attacks by encrypting the data stream between the server and your application.
-
-> [!NOTE]
-> Updating the `require_secure_transport` server parameter value does not affect the MySQL service's behavior. Use the SSL and TLS enforcement features outlined in this article to secure connections to your database.
-
->[!NOTE]
-> Based on customer feedback, we have extended the root certificate deprecation for our existing Baltimore Root CA until February 15, 2021 (02/15/2021).
-
-> [!IMPORTANT]
-> The SSL root certificate is set to expire starting February 15, 2021 (02/15/2021). Update your application to use the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem). To learn more, see [planned certificate updates](concepts-certificate-rotation.md).
-
-## SSL Default settings
-
-By default, the database service is configured to require SSL connections when connecting to MySQL. We recommend against disabling the SSL option whenever possible.
-
-When provisioning a new Azure Database for MySQL server through the Azure portal and CLI, enforcement of SSL connections is enabled by default.
-
-Connection strings for various programming languages are shown in the Azure portal. Those connection strings include the required SSL parameters to connect to your database. In the Azure portal, select your server. Under the **Settings** heading, select **Connection strings**. The SSL parameter varies based on the connector, for example, "ssl=true", "sslmode=require", or "sslmode=required", among other variations.
-
-In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file to connect securely. Currently, customers can **only use** the predefined certificate to connect to an Azure Database for MySQL server, which is available at https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem.
-
-Similarly, the following links point to the certificates for servers in sovereign clouds: [Azure Government](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem), [Azure China](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt).
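As a sketch of how the certificate is used, download the CA file and pass it to the mysql client; the server and admin names are placeholders, and `--ssl-mode=VERIFY_CA` tells the client to verify the server certificate against the downloaded CA:

```bash
# Download the root CA certificate referenced above.
wget https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem

# Connect with SSL enforced and the server certificate verified against the CA.
mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
    --ssl-mode=VERIFY_CA --ssl-ca=BaltimoreCyberTrustRoot.crt.pem
```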
-
-To learn how to enable or disable SSL connection when developing application, refer to [How to configure SSL](howto-configure-ssl.md).
-
-## TLS enforcement in Azure Database for MySQL
-
-Azure Database for MySQL supports encryption for clients connecting to your database server using Transport Layer Security (TLS). TLS is an industry standard protocol that ensures secure network connections between your database server and client applications, allowing you to adhere to compliance requirements.
-
-### TLS settings
-
-Azure Database for MySQL provides the ability to enforce the TLS version for the client connections. To enforce the TLS version, use the **Minimum TLS version** option setting. The following values are allowed for this option setting:
-
-| Minimum TLS setting | Client TLS version supported |
-|:|-:|
-| TLSEnforcementDisabled (default) | No TLS required |
-| TLS1_0 | TLS 1.0, TLS 1.1, TLS 1.2 and higher |
-| TLS1_1 | TLS 1.1, TLS 1.2 and higher |
-| TLS1_2 | TLS version 1.2 and higher |
--
-For example, setting the minimum TLS version to TLS 1.0 means your server allows connections from clients using TLS 1.0, 1.1, and 1.2+. Alternatively, setting it to TLS 1.2 means that you only allow connections from clients using TLS 1.2+, and all connections with TLS 1.0 and TLS 1.1 are rejected.
-
-> [!Note]
-> By default, Azure Database for MySQL does not enforce a minimum TLS version (the setting `TLSEnforcementDisabled`).
->
-> Once you enforce a minimum TLS version, you cannot later disable minimum version enforcement.
-
-The minimum TLS version setting doesn't require a server restart and can be set while the server is online. To learn how to set the TLS setting for your Azure Database for MySQL, refer to [How to configure TLS setting](howto-tls-configurations.md).
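For example, the minimum TLS version can be set from the Azure CLI. A sketch assuming the `--minimal-tls-version` option of `az mysql server update` available in current Azure CLI versions, with placeholder names:

```azurecli
az mysql server update --resource-group myresourcegroup --name mydemoserver \
    --minimal-tls-version TLS1_2
```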
-
-## Cipher support by Azure Database for MySQL Single server
-
-As part of SSL/TLS communication, the cipher suites are validated, and only supported cipher suites are allowed to communicate with the database server. The cipher suite validation is controlled in the [gateway layer](concepts-connectivity-architecture.md#connectivity-architecture) and not explicitly on the node itself. If a client's cipher suites don't match one of the suites listed below, incoming connections are rejected.
-
-### Cipher suite supported
-
-* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
-* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
-* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
-* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
-
-## Next steps
-- [Connection libraries for Azure Database for MySQL](concepts-connection-libraries.md)
-- Learn how to [configure SSL](howto-configure-ssl.md)
-- Learn how to [configure TLS](howto-tls-configurations.md)
mysql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-supported-versions.md
- Title: Supported versions - Azure Database for MySQL
-description: Learn which versions of the MySQL server are supported in the Azure Database for MySQL service.
------ Previously updated : 11/4/2021-
-# Supported Azure Database for MySQL server versions
--
-Azure Database for MySQL has been developed from [MySQL Community Edition](https://www.mysql.com/products/community/), using the InnoDB storage engine. The service supports the current major versions supported by the community, namely MySQL 5.7 and 8.0. MySQL uses the X.Y.Z naming scheme, where X is the major version, Y is the minor version, and Z is the bug fix release. For more information about the scheme, see the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/which-version.html).
-
-## Connect to a gateway node that is running a specific MySQL version
-
-In the Single Server deployment option, a gateway is used to redirect the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the `SELECT VERSION();` command at the MySQL prompt. Review [Connectivity architecture](./concepts-connectivity-architecture.md#connectivity-architecture) to learn more about gateways in Azure Database for MySQL service architecture.
-
-Because Azure Database for MySQL supports major versions 5.7 and 8.0, the gateway on the default port 3306 presents MySQL version 5.6 (the least common denominator) to support connections to servers of both supported major versions. However, if your application requires a connection to a specific major version, say v5.7 or v8.0, you can change the port in your server connection string.
-
-In the Azure Database for MySQL service, gateway nodes listen on port 3308 for v5.7 clients and on port 3309 for v8.0 clients. In other words, to connect through a v5.7 gateway, use your fully qualified server name and port 3308; to connect through a v8.0 gateway, use your fully qualified server name and port 3309. Check the following example for further clarity.
--
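For example, with the mysql command-line client (placeholder server and admin names):

```bash
# Connect through the v5.7 gateway.
mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p --port=3308

# Connect through the v8.0 gateway.
mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p --port=3309
```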
-> [!NOTE]
-> Connecting to Azure Database for MySQL via ports 3308 and 3309 is only supported for public connectivity. Private Link and VNet service endpoints can only be used with port 3306.
-
-## Azure Database for MySQL currently supports the following major and minor versions of MySQL:
-
-| Version | [Single Server](overview.md) <br/> Current minor version |[Flexible Server](./flexible-server/overview.md) <br/> Current minor version |
-|:-|:-|:|
-|MySQL Version 5.6 | [5.6.47](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-47.html) (Retired) | Not supported|
-|MySQL Version 5.7 | [5.7.32](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-32.html) | [5.7.37](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-37.html)|
-|MySQL Version 8.0 | [8.0.15](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-15.html) | [8.0.28](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-28.html)|
-
-Read the version support policy for retired versions in [version support policy documentation.](concepts-version-policy.md#retired-mysql-engine-versions-not-supported-in-azure-database-for-mysql)
-
-## Managing updates and upgrades
-
-The service automatically manages patching for bug fix version updates. For example, 5.7.20 to 5.7.21.
-
-Major version upgrades are currently supported by the service only for upgrades from MySQL v5.6 to v5.7. For more details, see [how to perform major version upgrades](how-to-major-version-upgrade.md). If you'd like to upgrade from 5.7 to 8.0, we recommend that you perform a [dump and restore](./concepts-migrate-dump-restore.md) to a server that was created with the new engine version.
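At its simplest, a dump-and-restore upgrade looks like the following sketch; the server, admin, and database names are placeholders:

```bash
# Export the database from the existing v5.7 server.
mysqldump -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
    --databases quickstartdb > quickstartdb_dump.sql

# Import it into a new server created with MySQL v8.0.
mysql -h mynewserver.mysql.database.azure.com -u myadmin@mynewserver -p < quickstartdb_dump.sql
```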
-
-## Next steps
-- For details around Azure Database for MySQL versioning policy, see [this document](concepts-version-policy.md).
-- For information about specific resource quotas and limitations based on your **service tier**, see [Service tiers](./concepts-pricing-tiers.md)
mysql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-version-policy.md
- Title: Version support policy - Azure Database for MySQL - Single Server and Flexible Server (Preview)
-description: Describes the policy around MySQL major and minor versions in Azure Database for MySQL
------ Previously updated : 11/03/2020-
-# Azure Database for MySQL version support policy
--
-This page describes the Azure Database for MySQL versioning policy, and is applicable to Azure Database for MySQL - Single Server and Azure Database for MySQL - Flexible Server (Preview) deployment modes.
-
-## Supported MySQL versions
-
-Azure Database for MySQL has been developed from [MySQL Community Edition](https://www.mysql.com/products/community/), using the InnoDB storage engine. The service supports the current major versions supported by the community, namely MySQL 5.6, 5.7, and 8.0. MySQL uses the X.Y.Z naming scheme, where X is the major version, Y is the minor version, and Z is the bug fix release. For more information about the scheme, see the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/which-version.html).
-
-Azure Database for MySQL currently supports the following major and minor versions of MySQL:
-
-| Version | [Single Server](overview.md) <br/> Current minor version |[Flexible Server](./flexible-server/overview.md) <br/> Current minor version |
-|:-|:-|:|
-|MySQL Version 5.6 | [5.6.47](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-47.html) (Retired) | Not supported|
-|MySQL Version 5.7 | [5.7.29](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-29.html) | [5.7.37](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-37.html)|
-|MySQL Version 8.0 | [8.0.15](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-15.html) | [8.0.28](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-28.html)|
-
-> [!NOTE]
-> In the Single Server deployment option, a gateway is used to redirect the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the `SELECT VERSION();` command at the MySQL prompt. If your application requires a connection to a specific major version, say v5.7 or v8.0, you can change the port in your server connection string as explained in our documentation [here](concepts-supported-versions.md#connect-to-a-gateway-node-that-is-running-a-specific-mysql-version).
-
-> [!IMPORTANT]
-> MySQL v5.6 was retired on Single Server in February 2021. As of September 1, 2021, you can no longer create new v5.6 servers with the Azure Database for MySQL - Single Server deployment option. However, you can still perform point-in-time recoveries and create read replicas for your existing servers.
-
-Read the version support policy for retired versions in [version support policy documentation.](concepts-version-policy.md#retired-mysql-engine-versions-not-supported-in-azure-database-for-mysql)
-
-## Major version support
-
-Each major version of MySQL will be supported by Azure Database for MySQL from the date on which Azure begins supporting the version until the version is retired by the MySQL community, as provided in the [versioning policy](https://www.mysql.com/support/eol-notice.html).
-
-## Minor version support
-
-Azure Database for MySQL automatically performs minor version upgrades to the Azure preferred MySQL version as part of periodic maintenance.
-
-## Major version retirement policy
-
-The table below provides the retirement details for MySQL major versions. The dates follow the [MySQL versioning policy](https://www.mysql.com/support/eol-notice.html).
-
-| Version | What's New | Azure support start date | Retirement date|
-| -- | -- | | -- |
-| [MySQL 5.6](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/)| [Features](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-49.html) | March 20, 2018 | February 2021
-| [MySQL 5.7](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/) | [Features](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-31.html) | March 20, 2018 | October 2023
-| [MySQL 8](https://mysqlserverteam.com/whats-new-in-mysql-8-0-generally-available/) | [Features](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-21.html) | December 11, 2019 | April 2026
-
-## Retired MySQL engine versions not supported in Azure Database for MySQL
-
-After the retirement date for each MySQL database version, if you continue running the retired version, note the following restrictions:
-- As the community will not be releasing any further bug fixes or security fixes, Azure Database for MySQL will not patch the retired database engine for any bugs or security issues, or otherwise take security measures with regard to the retired database engine. However, Azure will continue to perform periodic maintenance and patching for the host, OS, containers, and any other service-related components.
-- If any support issue you experience relates to the MySQL database, we may not be able to provide you with support. In such cases, you will have to upgrade your database in order for us to provide you with any support.
-- You will not be able to create new database servers for the retired version. However, you will be able to perform point-in-time recoveries and create read replicas for your existing servers.
-- New service capabilities developed by Azure Database for MySQL may only be available to supported database server versions.
-- Uptime SLAs will apply solely to Azure Database for MySQL service-related issues and not to any downtime caused by database engine-related bugs.
-- In the extreme event of a serious threat to the service caused by a MySQL database engine vulnerability identified in the retired database version, Azure may choose to stop the compute node of your database server to secure the service first. You will be asked to upgrade the server before bringing it back online. During the upgrade process, your data will always be protected by automatic backups performed by the service, which can be used to restore to the older version if desired.
-## Next steps
-- See Azure Database for MySQL - Single Server [supported versions](./concepts-supported-versions.md)
-- See Azure Database for MySQL - Flexible Server [supported versions](flexible-server/concepts-supported-versions.md)
-- See MySQL [dump and restore](./concepts-migrate-dump-restore.md) to perform upgrades.
mysql Connect Cpp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-cpp.md
- Title: 'Quickstart: Connect using C++ - Azure Database for MySQL'
-description: This quickstart provides a C++ code sample you can use to connect and query data from Azure Database for MySQL.
------ Previously updated : 5/26/2020
-adobe-target: true
--
-# Quickstart: Use Connector/C++ to connect and query data in Azure Database for MySQL
--
-This quickstart demonstrates how to connect to an Azure Database for MySQL by using a C++ application. It shows how to use SQL statements to query, insert, update, and delete data in the database. This topic assumes you're familiar with developing using C++ and you're new to working with Azure Database for MySQL.
-
-## Prerequisites
-
-This quickstart uses the resources created in either of the following guides as a starting point:
-- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)
-- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)
-You also need to:
-- Install [.NET Framework](https://dotnet.microsoft.com/download/dotnet-framework)
-- Install [Visual Studio](https://www.visualstudio.com/downloads/)
-- Install [MySQL Connector/C++](https://dev.mysql.com/downloads/connector/cpp/)
-- Install [Boost](https://www.boost.org/)
-> [!IMPORTANT]
-> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./howto-manage-firewall-using-portal.md) or [Azure CLI](./howto-manage-firewall-using-cli.md)
-
-## Install Visual Studio and .NET
-The steps in this section assume that you're familiar with developing using .NET.
-
-### **Windows**
-- Install Visual Studio 2019 Community. Visual Studio 2019 Community is a full featured, extensible, free IDE. With this IDE, you can create modern applications for Android, iOS, Windows, web and database applications, and cloud services. You can install either the full .NET Framework or just .NET Core: the code snippets in the Quickstart work with either. If you already have Visual Studio installed on your computer, skip the next two steps.
- 1. Download the [Visual Studio 2019 installer](https://www.visualstudio.com/thank-you-downloading-visual-studio/?sku=Community&rel=15).
- 2. Run the installer and follow the installation prompts to complete the installation.
-
-### **Configure Visual Studio**
-1. From Visual Studio, Project -> Properties -> Linker -> General > Additional Library Directories, add the "\lib\opt" directory (for example: C:\Program Files (x86)\MySQL\MySQL Connector C++ 1.1.9\lib\opt) of the C++ connector.
-2. From Visual Studio, Project -> Properties -> C/C++ -> General -> Additional Include Directories:
- - Add the "\include" directory of c++ connector (for example: C:\Program Files (x86)\MySQL\MySQL Connector C++ 1.1.9\include\).
- - Add the Boost library's root directory (for example: C:\boost_1_64_0\).
-3. From Visual Studio, Project -> Properties -> Linker -> Input > Additional Dependencies, add **mysqlcppconn.lib** into the text field.
-4. Either copy **mysqlcppconn.dll** from the C++ connector library folder in step 3 to the same directory as the application executable or add it to the environment variable so your application can find it.
-
-## Get connection information
-Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Click the server name.
-4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-cpp/1_server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
-
-## Connect, create table, and insert data
-Use the following code to connect and load the data by using **CREATE TABLE** and **INSERT INTO** SQL statements. The code uses the sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses the createStatement() and execute() methods to run the database commands.
-
-Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
-
-```c++
-#include <stdlib.h>
-#include <iostream>
-#include "stdafx.h"
-
-#include "mysql_connection.h"
-#include <cppconn/driver.h>
-#include <cppconn/exception.h>
-#include <cppconn/prepared_statement.h>
-using namespace std;
-
-//for demonstration only. never save your password in the code!
-const string server = "tcp://yourservername.mysql.database.azure.com:3306";
-const string username = "username@servername";
-const string password = "yourpassword";
-
-int main()
-{
- sql::Driver *driver;
- sql::Connection *con;
- sql::Statement *stmt;
- sql::PreparedStatement *pstmt;
-
- try
- {
- driver = get_driver_instance();
- con = driver->connect(server, username, password);
- }
- catch (sql::SQLException &e)
- {
- cout << "Could not connect to server. Error message: " << e.what() << endl;
- system("pause");
- exit(1);
- }
-
- //please create database "quickstartdb" ahead of time
- con->setSchema("quickstartdb");
-
- stmt = con->createStatement();
- stmt->execute("DROP TABLE IF EXISTS inventory");
- cout << "Finished dropping table (if existed)" << endl;
- stmt->execute("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);");
- cout << "Finished creating table" << endl;
- delete stmt;
-
- pstmt = con->prepareStatement("INSERT INTO inventory(name, quantity) VALUES(?,?)");
- pstmt->setString(1, "banana");
- pstmt->setInt(2, 150);
- pstmt->execute();
- cout << "One row inserted." << endl;
-
- pstmt->setString(1, "orange");
- pstmt->setInt(2, 154);
- pstmt->execute();
- cout << "One row inserted." << endl;
-
- pstmt->setString(1, "apple");
- pstmt->setInt(2, 100);
- pstmt->execute();
- cout << "One row inserted." << endl;
-
- delete pstmt;
- delete con;
- system("pause");
- return 0;
-}
-```
-
-## Read data
-
-Use the following code to connect and read the data by using a **SELECT** SQL statement. The code uses the sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses the prepareStatement() and executeQuery() methods to run the select command. Next, the code uses next() to advance to the records in the results. Finally, the code uses getInt() and getString() to parse the values in the record.
-
-Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
-
-```c++
-#include <stdlib.h>
-#include <iostream>
-#include "stdafx.h"
-
-#include "mysql_connection.h"
-#include <cppconn/driver.h>
-#include <cppconn/exception.h>
-#include <cppconn/resultset.h>
-#include <cppconn/prepared_statement.h>
-using namespace std;
-
-//for demonstration only. never save your password in the code!
-const string server = "tcp://yourservername.mysql.database.azure.com:3306";
-const string username = "username@servername";
-const string password = "yourpassword";
-
-int main()
-{
- sql::Driver *driver;
- sql::Connection *con;
- sql::PreparedStatement *pstmt;
- sql::ResultSet *result;
-
- try
- {
- driver = get_driver_instance();
- //for demonstration only. never save password in the code!
- con = driver->connect(server, username, password);
- }
- catch (sql::SQLException &e)
- {
- cout << "Could not connect to server. Error message: " << e.what() << endl;
- system("pause");
- exit(1);
- }
-
- con->setSchema("quickstartdb");
-
- //select
- pstmt = con->prepareStatement("SELECT * FROM inventory;");
- result = pstmt->executeQuery();
-
- while (result->next())
- printf("Reading from table=(%d, %s, %d)\n", result->getInt(1), result->getString(2).c_str(), result->getInt(3));
-
- delete result;
- delete pstmt;
- delete con;
- system("pause");
- return 0;
-}
-```
-
-## Update data
-Use the following code to connect and update the data by using an **UPDATE** SQL statement. The code uses the sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses the prepareStatement() and executeUpdate() methods to run the update command.
-
-Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
-
-```c++
-#include <stdlib.h>
-#include <iostream>
-#include "stdafx.h"
-
-#include "mysql_connection.h"
-#include <cppconn/driver.h>
-#include <cppconn/exception.h>
-#include <cppconn/resultset.h>
-#include <cppconn/prepared_statement.h>
-using namespace std;
-
-//for demonstration only. never save your password in the code!
-const string server = "tcp://yourservername.mysql.database.azure.com:3306";
-const string username = "username@servername";
-const string password = "yourpassword";
-
-int main()
-{
- sql::Driver *driver;
- sql::Connection *con;
- sql::PreparedStatement *pstmt;
-
- try
- {
- driver = get_driver_instance();
- //for demonstration only. never save password in the code!
- con = driver->connect(server, username, password);
- }
- catch (sql::SQLException &e)
- {
- cout << "Could not connect to server. Error message: " << e.what() << endl;
- system("pause");
- exit(1);
- }
-
- con->setSchema("quickstartdb");
-
- //update
- pstmt = con->prepareStatement("UPDATE inventory SET quantity = ? WHERE name = ?");
- pstmt->setInt(1, 200);
- pstmt->setString(2, "banana");
- pstmt->executeUpdate();
- printf("Row updated\n");
-
- delete pstmt;
- delete con;
- system("pause");
- return 0;
-}
-```
--
-## Delete data
-Use the following code to connect and delete the data by using a **DELETE** SQL statement. The code uses the sql::Driver class with the connect() method to establish a connection to MySQL. Then the code uses the prepareStatement() and executeUpdate() methods to run the delete command.
-
-Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
-
-```c++
-#include <stdlib.h>
-#include <iostream>
-#include "stdafx.h"
-
-#include "mysql_connection.h"
-#include <cppconn/driver.h>
-#include <cppconn/exception.h>
-#include <cppconn/resultset.h>
-#include <cppconn/prepared_statement.h>
-using namespace std;
-
-//for demonstration only. never save your password in the code!
-const string server = "tcp://yourservername.mysql.database.azure.com:3306";
-const string username = "username@servername";
-const string password = "yourpassword";
-
-int main()
-{
- sql::Driver *driver;
- sql::Connection *con;
- sql::PreparedStatement *pstmt;
-
- try
- {
- driver = get_driver_instance();
- //for demonstration only. never save password in the code!
- con = driver->connect(server, username, password);
- }
- catch (sql::SQLException &e)
- {
- cout << "Could not connect to server. Error message: " << e.what() << endl;
- system("pause");
- exit(1);
- }
-
- con->setSchema("quickstartdb");
-
- //delete
- pstmt = con->prepareStatement("DELETE FROM inventory WHERE name = ?");
- pstmt->setString(1, "orange");
- pstmt->executeUpdate();
- printf("Row deleted\n");
-
- delete pstmt;
- delete con;
- system("pause");
- return 0;
-}
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Migrate your MySQL database to Azure Database for MySQL using dump and restore](concepts-migrate-dump-restore.md)
mysql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-csharp.md
- Title: 'Quickstart: Connect using C# - Azure Database for MySQL'
-description: "This quickstart provides a C# (.NET) code sample you can use to connect and query data from Azure Database for MySQL."
------ Previously updated : 10/18/2020--
-# Quickstart: Use .NET (C#) to connect and query data in Azure Database for MySQL
--
-This quickstart demonstrates how to connect to an Azure Database for MySQL by using a C# application. It shows how to use SQL statements to query, insert, update, and delete data in the database.
-
-## Prerequisites
-For this quickstart you need:
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-- An Azure Database for MySQL single server, created by using the [Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md) or the [Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md) if you do not have one.
-- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.
-- The [.NET SDK](https://dotnet.microsoft.com/download) for your platform (Windows, Ubuntu Linux, or macOS).
-|Action| Connectivity method|How-to guide|
-|: |: |: |
-| **Configure firewall rules** | Public | [Portal](./howto-manage-firewall-using-portal.md) <br/> [CLI](./howto-manage-firewall-using-cli.md)|
-| **Configure Service Endpoint** | Public | [Portal](./howto-manage-vnet-using-portal.md) <br/> [CLI](./howto-manage-vnet-using-cli.md)|
-| **Configure private link** | Private | [Portal](./howto-configure-privatelink-portal.md) <br/> [CLI](./howto-configure-privatelink-cli.md) |
-- [Create a database and non-admin user](./howto-create-users.md)
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
-
-## Create a C# project
-At a command prompt, run:
-
-```
-mkdir AzureMySqlExample
-cd AzureMySqlExample
-dotnet new console
-dotnet add package MySqlConnector
-```
-
-## Get connection information
-Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
-
-1. Log in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Click the server name.
-4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-csharp/1_server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
-
-## Step 1: Connect and insert data
-Use the following code to connect and load the data by using `CREATE TABLE` and `INSERT INTO` SQL statements. The code uses the methods of the `MySqlConnection` class:
-- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.
-- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.
-- [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands.
-Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database.
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using MySqlConnector;
-
-namespace AzureMySqlExample
-{
- class MySqlCreate
- {
- static async Task Main(string[] args)
- {
- var builder = new MySqlConnectionStringBuilder
- {
- Server = "YOUR-SERVER.mysql.database.azure.com",
- Database = "YOUR-DATABASE",
- UserID = "USER@YOUR-SERVER",
- Password = "PASSWORD",
- SslMode = MySqlSslMode.Required,
- };
-
- using (var conn = new MySqlConnection(builder.ConnectionString))
- {
- Console.WriteLine("Opening connection");
- await conn.OpenAsync();
-
- using (var command = conn.CreateCommand())
- {
- command.CommandText = "DROP TABLE IF EXISTS inventory;";
- await command.ExecuteNonQueryAsync();
- Console.WriteLine("Finished dropping table (if existed)");
-
- command.CommandText = "CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);";
- await command.ExecuteNonQueryAsync();
- Console.WriteLine("Finished creating table");
-
- command.CommandText = @"INSERT INTO inventory (name, quantity) VALUES (@name1, @quantity1),
- (@name2, @quantity2), (@name3, @quantity3);";
- command.Parameters.AddWithValue("@name1", "banana");
- command.Parameters.AddWithValue("@quantity1", 150);
- command.Parameters.AddWithValue("@name2", "orange");
- command.Parameters.AddWithValue("@quantity2", 154);
- command.Parameters.AddWithValue("@name3", "apple");
- command.Parameters.AddWithValue("@quantity3", 100);
-
- int rowCount = await command.ExecuteNonQueryAsync();
- Console.WriteLine(String.Format("Number of rows inserted={0}", rowCount));
- }
-
- // connection will be closed by the 'using' block
- Console.WriteLine("Closing connection");
- }
-
- Console.WriteLine("Press RETURN to exit");
- Console.ReadLine();
- }
- }
-}
-```
-
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
--
-## Step 2: Read data
-
-Use the following code to connect and read the data by using a `SELECT` SQL statement. The code uses the `MySqlConnection` class with methods:
-- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.
-- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.
-- [ExecuteReaderAsync()](/dotnet/api/system.data.common.dbcommand.executereaderasync) to run the database commands.
-- [ReadAsync()](/dotnet/api/system.data.common.dbdatareader.readasync#System_Data_Common_DbDataReader_ReadAsync) to advance to the records in the results. Then the code uses GetInt32 and GetString to parse the values in the record.
-Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database.
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using MySqlConnector;
-
-namespace AzureMySqlExample
-{
- class MySqlRead
- {
- static async Task Main(string[] args)
- {
- var builder = new MySqlConnectionStringBuilder
- {
- Server = "YOUR-SERVER.mysql.database.azure.com",
- Database = "YOUR-DATABASE",
- UserID = "USER@YOUR-SERVER",
- Password = "PASSWORD",
- SslMode = MySqlSslMode.Required,
- };
-
- using (var conn = new MySqlConnection(builder.ConnectionString))
- {
- Console.WriteLine("Opening connection");
- await conn.OpenAsync();
-
- using (var command = conn.CreateCommand())
- {
- command.CommandText = "SELECT * FROM inventory;";
-
- using (var reader = await command.ExecuteReaderAsync())
- {
- while (await reader.ReadAsync())
- {
- Console.WriteLine(string.Format(
- "Reading from table=({0}, {1}, {2})",
- reader.GetInt32(0),
- reader.GetString(1),
- reader.GetInt32(2)));
- }
- }
- }
-
- Console.WriteLine("Closing connection");
- }
-
- Console.WriteLine("Press RETURN to exit");
- Console.ReadLine();
- }
- }
-}
-```
-
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
-
-## Step 3: Update data
-Use the following code to connect and update the data by using an `UPDATE` SQL statement. The code uses the `MySqlConnection` class with these methods:
-- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.
-- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.
-- [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands.
-Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database.
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using MySqlConnector;
-
-namespace AzureMySqlExample
-{
- class MySqlUpdate
- {
- static async Task Main(string[] args)
- {
- var builder = new MySqlConnectionStringBuilder
- {
- Server = "YOUR-SERVER.mysql.database.azure.com",
- Database = "YOUR-DATABASE",
- UserID = "USER@YOUR-SERVER",
- Password = "PASSWORD",
- SslMode = MySqlSslMode.Required,
- };
-
- using (var conn = new MySqlConnection(builder.ConnectionString))
- {
- Console.WriteLine("Opening connection");
- await conn.OpenAsync();
-
- using (var command = conn.CreateCommand())
- {
- command.CommandText = "UPDATE inventory SET quantity = @quantity WHERE name = @name;";
- command.Parameters.AddWithValue("@quantity", 200);
- command.Parameters.AddWithValue("@name", "banana");
-
- int rowCount = await command.ExecuteNonQueryAsync();
- Console.WriteLine(String.Format("Number of rows updated={0}", rowCount));
- }
-
- Console.WriteLine("Closing connection");
- }
-
- Console.WriteLine("Press RETURN to exit");
- Console.ReadLine();
- }
- }
-}
-```
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
-
-## Step 4: Delete data
-Use the following code to connect and delete the data by using a `DELETE` SQL statement.
-
-The code uses the `MySqlConnection` class with these methods:
-- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.
-- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.
-- [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands.
-Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database.
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using MySqlConnector;
-
-namespace AzureMySqlExample
-{
- class MySqlDelete
- {
- static async Task Main(string[] args)
- {
- var builder = new MySqlConnectionStringBuilder
- {
- Server = "YOUR-SERVER.mysql.database.azure.com",
- Database = "YOUR-DATABASE",
- UserID = "USER@YOUR-SERVER",
- Password = "PASSWORD",
- SslMode = MySqlSslMode.Required,
- };
-
- using (var conn = new MySqlConnection(builder.ConnectionString))
- {
- Console.WriteLine("Opening connection");
- await conn.OpenAsync();
-
- using (var command = conn.CreateCommand())
- {
- command.CommandText = "DELETE FROM inventory WHERE name = @name;";
- command.Parameters.AddWithValue("@name", "orange");
-
- int rowCount = await command.ExecuteNonQueryAsync();
- Console.WriteLine(String.Format("Number of rows deleted={0}", rowCount));
- }
-
- Console.WriteLine("Closing connection");
- }
-
- Console.WriteLine("Press RETURN to exit");
- Console.ReadLine();
- }
- }
-}
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Manage Azure Database for MySQL server using Portal](./howto-create-manage-server-portal.md)<br/>
-
-> [!div class="nextstepaction"]
-> [Manage Azure Database for MySQL server using CLI](./how-to-manage-single-server-cli.md)
-
-[Cannot find what you are looking for?Let us know.](https://aka.ms/mysql-doc-feedback)
mysql Connect Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-go.md
- Title: 'Quickstart: Connect using Go - Azure Database for MySQL'
-description: This quickstart provides several Go code samples you can use to connect and query data from Azure Database for MySQL.
------ Previously updated : 5/26/2020--
-# Quickstart: Use Go language to connect and query data in Azure Database for MySQL
--
-This quickstart demonstrates how to connect to an Azure Database for MySQL from Windows, Ubuntu Linux, and Apple macOS platforms by using code written in the [Go](https://go.dev/) language. It shows how to use SQL statements to query, insert, update, and delete data in the database. This topic assumes that you are familiar with development using Go and that you are new to working with Azure Database for MySQL.
-
-## Prerequisites
-This quickstart uses the resources created in either of these guides as a starting point:
-- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)
-- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)
-> [!IMPORTANT]
-> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./howto-manage-firewall-using-portal.md) or [Azure CLI](./howto-manage-firewall-using-cli.md)
-
-## Install Go and MySQL connector
-Install [Go](https://go.dev/doc/install) and the [go-sql-driver for MySQL](https://github.com/go-sql-driver/mysql#installation) on your own computer. Depending on your platform, follow the steps in the appropriate section:
-
-### Windows
-1. [Download](https://go.dev/dl/) and install Go for Microsoft Windows according to the [installation instructions](https://go.dev/doc/install).
-2. Launch the command prompt from the start menu.
-3. Make a folder for your project, such as `mkdir %USERPROFILE%\go\src\mysqlgo`.
-4. Change directory into the project folder, such as `cd %USERPROFILE%\go\src\mysqlgo`.
-5. Set the environment variable for GOPATH to point to the source code directory. `set GOPATH=%USERPROFILE%\go`.
-6. Install the [go-sql-driver for mysql](https://github.com/go-sql-driver/mysql#installation) by running the `go get github.com/go-sql-driver/mysql` command.
-
- In summary, install Go, then run these commands in the command prompt:
- ```cmd
- mkdir %USERPROFILE%\go\src\mysqlgo
- cd %USERPROFILE%\go\src\mysqlgo
- set GOPATH=%USERPROFILE%\go
- go get github.com/go-sql-driver/mysql
- ```
-
-### Linux (Ubuntu)
-1. Launch the Bash shell.
-2. Install Go by running `sudo apt-get install golang-go`.
-3. Make a folder for your project in your home directory, such as `mkdir -p ~/go/src/mysqlgo/`.
-4. Change directory into the folder, such as `cd ~/go/src/mysqlgo/`.
-5. Set the GOPATH environment variable to point to a valid source directory, such as your current home directory's go folder. At the Bash shell, run `export GOPATH=~/go` to add the go directory as the GOPATH for the current shell session.
-6. Install the [go-sql-driver for mysql](https://github.com/go-sql-driver/mysql#installation) by running the `go get github.com/go-sql-driver/mysql` command.
-
- In summary, run these bash commands:
- ```bash
- sudo apt-get install golang-go
- mkdir -p ~/go/src/mysqlgo/
- cd ~/go/src/mysqlgo/
- export GOPATH=~/go/
- go get github.com/go-sql-driver/mysql
- ```
-
-### Apple macOS
-1. Download and install Go according to the [installation instructions](https://go.dev/doc/install) matching your platform.
-2. Launch the Bash shell.
-3. Make a folder for your project in your home directory, such as `mkdir -p ~/go/src/mysqlgo/`.
-4. Change directory into the folder, such as `cd ~/go/src/mysqlgo/`.
-5. Set the GOPATH environment variable to point to a valid source directory, such as your current home directory's go folder. At the Bash shell, run `export GOPATH=~/go` to add the go directory as the GOPATH for the current shell session.
-6. Install the [go-sql-driver for mysql](https://github.com/go-sql-driver/mysql#installation) by running the `go get github.com/go-sql-driver/mysql` command.
-
- In summary, install Go, then run these bash commands:
- ```bash
- mkdir -p ~/go/src/mysqlgo/
- cd ~/go/src/mysqlgo/
- export GOPATH=~/go/
- go get github.com/go-sql-driver/mysql
- ```
-
-## Get connection information
-Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
-
-1. Log in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Click the server name.
-4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-go/1_server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
-
-
-## Build and run Go code
-1. To write Golang code, you can use a simple text editor, such as Notepad in Microsoft Windows, [vi](https://manpages.ubuntu.com/manpages/xenial/man1/nvi.1.html#contenttoc5) or [Nano](https://www.nano-editor.org/) in Ubuntu, or TextEdit in macOS. If you prefer a richer interactive development environment (IDE), try [GoLand](https://www.jetbrains.com/go/) by JetBrains, [Visual Studio Code](https://code.visualstudio.com/) by Microsoft, or [Atom](https://atom.io/).
-2. Paste the Go code from the sections below into text files, and then save them into your project folder with file extension \*.go (such as Windows path `%USERPROFILE%\go\src\mysqlgo\createtable.go` or Linux path `~/go/src/mysqlgo/createtable.go`).
-3. Locate the `HOST`, `DATABASE`, `USER`, and `PASSWORD` constants in the code, and then replace the example values with your own values.
-4. Launch the command prompt or Bash shell. Change directory into your project folder. For example, on Windows `cd %USERPROFILE%\go\src\mysqlgo\`. On Linux `cd ~/go/src/mysqlgo/`. Some of the IDE editors mentioned offer debug and runtime capabilities without requiring shell commands.
-5. Run the code by typing the command `go run createtable.go` to compile the application and run it.
-6. Alternatively, to build the code into a native application, run `go build createtable.go`, then launch `createtable.exe` (Windows) or `./createtable` (Linux and macOS) to run the application.
-
-## Connect, create table, and insert data
-Use the following code to connect to the server, create a table, and load the data by using an **INSERT** SQL statement.
-
-The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for formatted input and output on the command line.
-
-The code calls method [sql.Open()](http://go-database-sql.org/accessing.html) to connect to Azure Database for MySQL, and it checks the connection by using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method several times to run several DDL commands. The code also uses [Prepare()](http://go-database-sql.org/prepared.html) and Exec() to run prepared statements with different parameters to insert three rows. Each time, a custom checkError() method is used to check if an error occurred and panic to exit.
-
-Replace the `host`, `database`, `user`, and `password` constants with your own values.
-
-```Go
-package main
-
-import (
- "database/sql"
- "fmt"
-
- _ "github.com/go-sql-driver/mysql"
-)
-
-const (
- host = "mydemoserver.mysql.database.azure.com"
- database = "quickstartdb"
- user = "myadmin@mydemoserver"
- password = "yourpassword"
-)
-
-func checkError(err error) {
- if err != nil {
- panic(err)
- }
-}
-
-func main() {
-
- // Initialize connection string.
- var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database)
-
- // Initialize connection object.
- db, err := sql.Open("mysql", connectionString)
- checkError(err)
- defer db.Close()
-
- err = db.Ping()
- checkError(err)
- fmt.Println("Successfully created connection to database.")
-
- // Drop previous table of same name if one exists.
- _, err = db.Exec("DROP TABLE IF EXISTS inventory;")
- checkError(err)
- fmt.Println("Finished dropping table (if existed).")
-
- // Create table.
- _, err = db.Exec("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);")
- checkError(err)
- fmt.Println("Finished creating table.")
-
- // Insert some data into table.
- sqlStatement, err := db.Prepare("INSERT INTO inventory (name, quantity) VALUES (?, ?);")
- // Check the error from Prepare before using the statement.
- checkError(err)
- res, err := sqlStatement.Exec("banana", 150)
- checkError(err)
- rowCount, err := res.RowsAffected()
- checkError(err)
- fmt.Printf("Inserted %d row(s) of data.\n", rowCount)
-
- res, err = sqlStatement.Exec("orange", 154)
- checkError(err)
- rowCount, err = res.RowsAffected()
- checkError(err)
- fmt.Printf("Inserted %d row(s) of data.\n", rowCount)
-
- res, err = sqlStatement.Exec("apple", 100)
- checkError(err)
- rowCount, err = res.RowsAffected()
- checkError(err)
- fmt.Printf("Inserted %d row(s) of data.\n", rowCount)
- fmt.Println("Done.")
-}
-
-```
-
-## Read data
-Use the following code to connect and read the data by using a **SELECT** SQL statement.
-
-The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for printed input and output on the command line.
-
-The code calls method [sql.Open()](http://go-database-sql.org/accessing.html) to connect to Azure Database for MySQL, and checks the connection using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Query()](https://go.dev/pkg/database/sql/#DB.Query) method to run the select command. Then it runs [Next()](https://go.dev/pkg/database/sql/#Rows.Next) to iterate through the result set and [Scan()](https://go.dev/pkg/database/sql/#Rows.Scan) to parse the column values, saving the value into variables. Each time a custom checkError() method is used to check if an error occurred and panic to exit.
-
-Replace the `host`, `database`, `user`, and `password` constants with your own values.
-
-```Go
-package main
-
-import (
- "database/sql"
- "fmt"
-
- _ "github.com/go-sql-driver/mysql"
-)
-
-const (
- host = "mydemoserver.mysql.database.azure.com"
- database = "quickstartdb"
- user = "myadmin@mydemoserver"
- password = "yourpassword"
-)
-
-func checkError(err error) {
- if err != nil {
- panic(err)
- }
-}
-
-func main() {
-
- // Initialize connection string.
- var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database)
-
- // Initialize connection object.
- db, err := sql.Open("mysql", connectionString)
- checkError(err)
- defer db.Close()
-
- err = db.Ping()
- checkError(err)
- fmt.Println("Successfully created connection to database.")
-
- // Variables for printing column data when scanned.
- var (
- id int
- name string
- quantity int
- )
-
- // Read some data from the table.
- rows, err := db.Query("SELECT id, name, quantity FROM inventory;")
- checkError(err)
- defer rows.Close()
- fmt.Println("Reading data:")
- for rows.Next() {
- err := rows.Scan(&id, &name, &quantity)
- checkError(err)
- fmt.Printf("Data row = (%d, %s, %d)\n", id, name, quantity)
- }
- err = rows.Err()
- checkError(err)
- fmt.Println("Done.")
-}
-```
-
-## Update data
-Use the following code to connect and update the data by using an **UPDATE** SQL statement.
-
-The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for printed input and output on the command line.
-
-The code calls method [sql.Open()](http://go-database-sql.org/accessing.html) to connect to Azure Database for MySQL, and checks the connection using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method to run the update command. Each time a custom checkError() method is used to check if an error occurred and panic to exit.
-
-Replace the `host`, `database`, `user`, and `password` constants with your own values.
-
-```Go
-package main
-
-import (
- "database/sql"
- "fmt"
-
- _ "github.com/go-sql-driver/mysql"
-)
-
-const (
- host = "mydemoserver.mysql.database.azure.com"
- database = "quickstartdb"
- user = "myadmin@mydemoserver"
- password = "yourpassword"
-)
-
-func checkError(err error) {
- if err != nil {
- panic(err)
- }
-}
-
-func main() {
-
- // Initialize connection string.
- var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database)
-
- // Initialize connection object.
- db, err := sql.Open("mysql", connectionString)
- checkError(err)
- defer db.Close()
-
- err = db.Ping()
- checkError(err)
- fmt.Println("Successfully created connection to database.")
-
- // Modify some data in table.
- rows, err := db.Exec("UPDATE inventory SET quantity = ? WHERE name = ?", 200, "banana")
- checkError(err)
- rowCount, err := rows.RowsAffected()
- fmt.Printf("Updated %d row(s) of data.\n", rowCount)
- fmt.Println("Done.")
-}
-```
-
-## Delete data
-Use the following code to connect and remove data using a **DELETE** SQL statement.
-
-The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for printed input and output on the command line.
-
-The code calls method [sql.Open()](http://go-database-sql.org/accessing.html) to connect to Azure Database for MySQL, and checks the connection using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method to run the delete command. Each time a custom checkError() method is used to check if an error occurred and panic to exit.
-
-Replace the `host`, `database`, `user`, and `password` constants with your own values.
-
-```Go
-package main
-
-import (
- "database/sql"
- "fmt"
- _ "github.com/go-sql-driver/mysql"
-)
-
-const (
- host = "mydemoserver.mysql.database.azure.com"
- database = "quickstartdb"
- user = "myadmin@mydemoserver"
- password = "yourpassword"
-)
-
-func checkError(err error) {
- if err != nil {
- panic(err)
- }
-}
-
-func main() {
-
- // Initialize connection string.
- var connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true", user, password, host, database)
-
- // Initialize connection object.
- db, err := sql.Open("mysql", connectionString)
- checkError(err)
- defer db.Close()
-
- err = db.Ping()
- checkError(err)
- fmt.Println("Successfully created connection to database.")
-
- // Modify some data in table.
- rows, err := db.Exec("DELETE FROM inventory WHERE name = ?", "orange")
- checkError(err)
- rowCount, err := rows.RowsAffected()
- fmt.Printf("Deleted %d row(s) of data.\n", rowCount)
- fmt.Println("Done.")
-}
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Migrate your database using Export and Import](./concepts-migrate-import-export.md)
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-java.md
- Title: 'Quickstart: Use Java and JDBC with Azure Database for MySQL'
-description: Learn how to use Java and JDBC with an Azure Database for MySQL database.
- Previously updated: 08/17/2020
-# Quickstart: Use Java and JDBC with Azure Database for MySQL
--
-This topic demonstrates creating a sample application that uses Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to store and retrieve information in [Azure Database for MySQL](./index.yml).
-
-JDBC is the standard Java API to connect to traditional relational databases.
-
-## Prerequisites
-
-- An Azure account. If you don't have one, [get a free trial](https://azure.microsoft.com/free/).
-- [Azure Cloud Shell](../cloud-shell/quickstart.md) or [Azure CLI](/cli/azure/install-azure-cli). We recommend Azure Cloud Shell so you'll be logged in automatically and have access to all the tools you'll need.
-- A supported [Java Development Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8 (included in Azure Cloud Shell).
-- The [Apache Maven](https://maven.apache.org/) build tool.
-
-## Prepare the working environment
-
-We are going to use environment variables to limit typing mistakes, and to make it easier for you to customize the following configuration for your specific needs.
-
-Set up those environment variables by using the following commands:
-
-```bash
-AZ_RESOURCE_GROUP=database-workshop
-AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
-AZ_LOCATION=<YOUR_AZURE_REGION>
-AZ_MYSQL_USERNAME=demo
-AZ_MYSQL_PASSWORD=<YOUR_MYSQL_PASSWORD>
-AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
-```
-
-Replace the placeholders with the following values, which are used throughout this article:
-
-- `<YOUR_DATABASE_NAME>`: The name of your MySQL server. It should be unique across Azure.
-- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can get the full list of available regions by entering `az account list-locations`.
-- `<YOUR_MYSQL_PASSWORD>`: The password of your MySQL database server. That password should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).
-- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Java application. One convenient way to find it is to point your browser to [whatismyip.akamai.com](http://whatismyip.akamai.com/).
-
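-For example, a filled-in set of variables might look like the following (all of these values are illustrative; substitute your own):
-
-```bash
-AZ_RESOURCE_GROUP=database-workshop
-AZ_DATABASE_NAME=mysql-demo-4821
-AZ_LOCATION=eastus
-AZ_MYSQL_USERNAME=demo
-AZ_MYSQL_PASSWORD='p@ssW0rd-2468'
-AZ_LOCAL_IP_ADDRESS=203.0.113.17
-```
-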
-Next, create a resource group:
-
-```azurecli
-az group create \
- --name $AZ_RESOURCE_GROUP \
- --location $AZ_LOCATION \
- | jq
-```
-
-> [!NOTE]
-> We use the `jq` utility, which is installed by default on [Azure Cloud Shell](https://shell.azure.com/) to display JSON data and make it more readable.
-> If you don't like that utility, you can safely remove the `| jq` part of all the commands we'll use.
-
-## Create an Azure Database for MySQL instance
-
-The first thing we'll create is a managed MySQL server.
-
-> [!NOTE]
-> You can read more detailed information about creating MySQL servers in [Create an Azure Database for MySQL server by using the Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md).
-
-In [Azure Cloud Shell](https://shell.azure.com/), run the following script:
-
-```azurecli
-az mysql server create \
- --resource-group $AZ_RESOURCE_GROUP \
- --name $AZ_DATABASE_NAME \
- --location $AZ_LOCATION \
- --sku-name B_Gen5_1 \
- --storage-size 5120 \
- --admin-user $AZ_MYSQL_USERNAME \
- --admin-password $AZ_MYSQL_PASSWORD \
- | jq
-```
-
-This command creates a small Basic-tier (B_Gen5_1) MySQL server with 5 GB of storage.
-
-### Configure a firewall rule for your MySQL server
-
-Azure Database for MySQL instances are secured by default. They have a firewall that doesn't allow any incoming connection. To be able to use your database, you need to add a firewall rule that will allow the local IP address to access the database server.
-
-Because you configured your local IP address at the beginning of this article, you can open the server's firewall by running:
-
-```azurecli
-az mysql server firewall-rule create \
- --resource-group $AZ_RESOURCE_GROUP \
- --name $AZ_DATABASE_NAME-database-allow-local-ip \
- --server $AZ_DATABASE_NAME \
- --start-ip-address $AZ_LOCAL_IP_ADDRESS \
- --end-ip-address $AZ_LOCAL_IP_ADDRESS \
- | jq
-```
-
-### Configure a MySQL database
-
-The MySQL server that you created earlier is empty. It doesn't have any database that you can use with the Java application. Create a new database called `demo`:
-
-```azurecli
-az mysql db create \
- --resource-group $AZ_RESOURCE_GROUP \
- --name demo \
- --server-name $AZ_DATABASE_NAME \
- | jq
-```
-
-### Create a new Java project
-
-Using your favorite IDE, create a new Java project, and add a `pom.xml` file in its root directory:
-
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
- <modelVersion>4.0.0</modelVersion>
- <groupId>com.example</groupId>
- <artifactId>demo</artifactId>
- <version>0.0.1-SNAPSHOT</version>
- <name>demo</name>
-
- <properties>
- <java.version>1.8</java.version>
- <maven.compiler.source>1.8</maven.compiler.source>
- <maven.compiler.target>1.8</maven.compiler.target>
- </properties>
-
- <dependencies>
- <dependency>
- <groupId>mysql</groupId>
- <artifactId>mysql-connector-java</artifactId>
- <version>8.0.20</version>
- </dependency>
- </dependencies>
-</project>
-```
-
-This file is an [Apache Maven](https://maven.apache.org/) project file that configures our project to use:
-
-- Java 8
-- A recent MySQL driver for Java
-
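-To check that the project is set up correctly, you can compile it now (an optional verification step; it assumes Maven is on your `PATH`):
-
-```bash
-mvn compile
-```
-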
-### Prepare a configuration file to connect to Azure Database for MySQL
-
-Create a *src/main/resources/application.properties* file, and add:
-
-```properties
-url=jdbc:mysql://$AZ_DATABASE_NAME.mysql.database.azure.com:3306/demo?serverTimezone=UTC
-user=demo@$AZ_DATABASE_NAME
-password=$AZ_MYSQL_PASSWORD
-```
-
-- Replace the two `$AZ_DATABASE_NAME` variables with the value that you configured at the beginning of this article.
-- Replace the `$AZ_MYSQL_PASSWORD` variable with the value that you configured at the beginning of this article.
-
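-For example, a finished file might look similar to this (the server name and password are illustrative; use your own values):
-
-```properties
-url=jdbc:mysql://mysql-demo-4821.mysql.database.azure.com:3306/demo?serverTimezone=UTC
-user=demo@mysql-demo-4821
-password=p@ssW0rd-2468
-```
-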
-> [!NOTE]
-> We append `?serverTimezone=UTC` to the configuration property `url` to tell the JDBC driver to use UTC (Coordinated Universal Time) as the time zone when connecting to the database. Otherwise, our Java server would not use the same time zone as the database, which would result in an error.
-
-### Create an SQL file to generate the database schema
-
-We will use a *src/main/resources/schema.sql* file to create a database schema. Create that file with the following content:
-
-```sql
-DROP TABLE IF EXISTS todo;
-CREATE TABLE todo (id SERIAL PRIMARY KEY, description VARCHAR(255), details VARCHAR(4096), done BOOLEAN);
-```
-
-## Code the application
-
-### Connect to the database
-
-Next, add the Java code that will use JDBC to store and retrieve data from your MySQL server.
-
-Create a *src/main/java/DemoApplication.java* file that contains:
-
-```java
-package com.example.demo;
-
-import com.mysql.cj.jdbc.AbandonedConnectionCleanupThread;
-
-import java.sql.*;
-import java.util.*;
-import java.util.logging.Logger;
-
-public class DemoApplication {
-
- private static final Logger log;
-
- static {
- System.setProperty("java.util.logging.SimpleFormatter.format", "[%4$-7s] %5$s %n");
- log = Logger.getLogger(DemoApplication.class.getName());
- }
-
- public static void main(String[] args) throws Exception {
- log.info("Loading application properties");
- Properties properties = new Properties();
- properties.load(DemoApplication.class.getClassLoader().getResourceAsStream("application.properties"));
-
- log.info("Connecting to the database");
- Connection connection = DriverManager.getConnection(properties.getProperty("url"), properties);
- log.info("Database connection test: " + connection.getCatalog());
-
- log.info("Create database schema");
- Scanner scanner = new Scanner(DemoApplication.class.getClassLoader().getResourceAsStream("schema.sql"));
- Statement statement = connection.createStatement();
- while (scanner.hasNextLine()) {
- statement.execute(scanner.nextLine());
- }
-
- /*
- Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
- insertData(todo, connection);
- todo = readData(connection);
- todo.setDetails("congratulations, you have updated data!");
- updateData(todo, connection);
- deleteData(todo, connection);
- */
-
- log.info("Closing database connection");
- connection.close();
- AbandonedConnectionCleanupThread.uncheckedShutdown();
- }
-}
-```
-
-This Java code uses the *application.properties* and *schema.sql* files that we created earlier to connect to the MySQL server and create a schema that will store our data.
-
-In this file, you can see that we commented out the method calls to insert, read, update, and delete data: we will code those methods in the rest of this article, and you will be able to uncomment them one after the other.
-
-> [!NOTE]
-> The database credentials are stored in the *user* and *password* properties of the *application.properties* file. Those credentials are used when executing `DriverManager.getConnection(properties.getProperty("url"), properties);`, as the properties file is passed as an argument.
-
-> [!NOTE]
-> The `AbandonedConnectionCleanupThread.uncheckedShutdown();` line at the end is a MySQL driver specific command to destroy an internal thread when shutting down the application.
-> It can be safely ignored.
-
-You can now execute this main class with your favorite tool:
-
-- Using your IDE, you should be able to right-click on the *DemoApplication* class and execute it.
-- Using Maven, you can run the application by executing: `mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication"`.
-
-The application should connect to the Azure Database for MySQL, create a database schema, and then close the connection, as you should see in the console logs:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Closing database connection
-```
-
-### Create a domain class
-
-Create a new `Todo` Java class, next to the `DemoApplication` class, and add the following code:
-
-```java
-package com.example.demo;
-
-public class Todo {
-
- private Long id;
- private String description;
- private String details;
- private boolean done;
-
- public Todo() {
- }
-
- public Todo(Long id, String description, String details, boolean done) {
- this.id = id;
- this.description = description;
- this.details = details;
- this.done = done;
- }
-
- public Long getId() {
- return id;
- }
-
- public void setId(Long id) {
- this.id = id;
- }
-
- public String getDescription() {
- return description;
- }
-
- public void setDescription(String description) {
- this.description = description;
- }
-
- public String getDetails() {
- return details;
- }
-
- public void setDetails(String details) {
- this.details = details;
- }
-
- public boolean isDone() {
- return done;
- }
-
- public void setDone(boolean done) {
- this.done = done;
- }
-
- @Override
- public String toString() {
- return "Todo{" +
- "id=" + id +
- ", description='" + description + '\'' +
- ", details='" + details + '\'' +
- ", done=" + done +
- '}';
- }
-}
-```
-
-This class is a domain model mapped on the `todo` table that you created when executing the *schema.sql* script.
-
-### Insert data into Azure Database for MySQL
-
-In the *src/main/java/DemoApplication.java* file, after the main method, add the following method to insert data into the database:
-
-```java
-private static void insertData(Todo todo, Connection connection) throws SQLException {
- log.info("Insert data");
- PreparedStatement insertStatement = connection
- .prepareStatement("INSERT INTO todo (id, description, details, done) VALUES (?, ?, ?, ?);");
-
- insertStatement.setLong(1, todo.getId());
- insertStatement.setString(2, todo.getDescription());
- insertStatement.setString(3, todo.getDetails());
- insertStatement.setBoolean(4, todo.isDone());
- insertStatement.executeUpdate();
-}
-```
-
-You can now uncomment the two following lines in the `main` method:
-
-```java
-Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
-insertData(todo, connection);
-```
-
-Executing the main class should now produce the following output:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Insert data
-[INFO ] Closing database connection
-```
-
-### Reading data from Azure Database for MySQL
-
-Let's read the data previously inserted, to validate that our code works correctly.
-
-In the *src/main/java/DemoApplication.java* file, after the `insertData` method, add the following method to read data from the database:
-
-```java
-private static Todo readData(Connection connection) throws SQLException {
- log.info("Read data");
- PreparedStatement readStatement = connection.prepareStatement("SELECT * FROM todo;");
- ResultSet resultSet = readStatement.executeQuery();
- if (!resultSet.next()) {
- log.info("There is no data in the database!");
- return null;
- }
- Todo todo = new Todo();
- todo.setId(resultSet.getLong("id"));
- todo.setDescription(resultSet.getString("description"));
- todo.setDetails(resultSet.getString("details"));
- todo.setDone(resultSet.getBoolean("done"));
- log.info("Data read from the database: " + todo.toString());
- return todo;
-}
-```
-
-You can now uncomment the following line in the `main` method:
-
-```java
-todo = readData(connection);
-```
-
-Executing the main class should now produce the following output:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Insert data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
-[INFO ] Closing database connection
-```
-
-### Updating data in Azure Database for MySQL
-
-Let's update the data we previously inserted.
-
-Still in the *src/main/java/DemoApplication.java* file, after the `readData` method, add the following method to update data inside the database:
-
-```java
-private static void updateData(Todo todo, Connection connection) throws SQLException {
- log.info("Update data");
- PreparedStatement updateStatement = connection
- .prepareStatement("UPDATE todo SET description = ?, details = ?, done = ? WHERE id = ?;");
-
- updateStatement.setString(1, todo.getDescription());
- updateStatement.setString(2, todo.getDetails());
- updateStatement.setBoolean(3, todo.isDone());
- updateStatement.setLong(4, todo.getId());
- updateStatement.executeUpdate();
- readData(connection);
-}
-```
-
-You can now uncomment the two following lines in the `main` method:
-
-```java
-todo.setDetails("congratulations, you have updated data!");
-updateData(todo, connection);
-```
-
-Executing the main class should now produce the following output:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Insert data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
-[INFO ] Update data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true}
-[INFO ] Closing database connection
-```
-
-### Deleting data in Azure Database for MySQL
-
-Finally, let's delete the data we previously inserted.
-
-Still in the *src/main/java/DemoApplication.java* file, after the `updateData` method, add the following method to delete data inside the database:
-
-```java
-private static void deleteData(Todo todo, Connection connection) throws SQLException {
- log.info("Delete data");
- PreparedStatement deleteStatement = connection.prepareStatement("DELETE FROM todo WHERE id = ?;");
- deleteStatement.setLong(1, todo.getId());
- deleteStatement.executeUpdate();
- readData(connection);
-}
-```
-
-You can now uncomment the following line in the `main` method:
-
-```java
-deleteData(todo, connection);
-```
-
-Executing the main class should now produce the following output:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Insert data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
-[INFO ] Update data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true}
-[INFO ] Delete data
-[INFO ] Read data
-[INFO ] There is no data in the database!
-[INFO ] Closing database connection
-```
-
-## Clean up resources
-
-Congratulations! You've created a Java application that uses JDBC to store and retrieve data from Azure Database for MySQL.
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Migrate your MySQL database to Azure Database for MySQL using dump and restore](concepts-migrate-dump-restore.md)
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-nodejs.md
- Title: 'Quickstart: Connect using Node.js - Azure Database for MySQL'
-description: This quickstart provides several Node.js code samples you can use to connect and query data from Azure Database for MySQL.
- Previously updated: 12/11/2020
-# Quickstart: Use Node.js to connect and query data in Azure Database for MySQL
--
-In this quickstart, you connect to an Azure Database for MySQL by using Node.js. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Ubuntu Linux, and Windows platforms.
-
-This topic assumes that you're familiar with developing using Node.js, but you're new to working with Azure Database for MySQL.
-
-## Prerequisites
-
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-- An Azure Database for MySQL server. [Create an Azure Database for MySQL server using Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md) or [Create an Azure Database for MySQL server using Azure CLI](quickstart-create-mysql-server-database-using-azure-cli.md).
-
-> [!IMPORTANT]
-> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./howto-manage-firewall-using-portal.md) or [Azure CLI](./howto-manage-firewall-using-cli.md).
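-
-If you use the Azure CLI, you can add such a rule with the `az mysql server firewall-rule create` command, following the same syntax shown in the other quickstarts in this article set. The resource group, server name, rule name, and IP address below are placeholders; substitute your own:
-
-```azurecli
-az mysql server firewall-rule create \
-    --resource-group myresourcegroup \
-    --name allow-local-ip \
-    --server mydemoserver \
-    --start-ip-address 203.0.113.17 \
-    --end-ip-address 203.0.113.17
-```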
-
-## Install Node.js and the MySQL connector
-
-Depending on your platform, follow the instructions in the appropriate section to install [Node.js](https://nodejs.org). Use npm to install the [mysql](https://www.npmjs.com/package/mysql) package and its dependencies into your project folder.
-
-### Windows
-
-1. Visit the [Node.js downloads page](https://nodejs.org/en/download/), and then select your desired Windows installer option.
-2. Make a local project folder such as `nodejsmysql`.
-3. Open the command prompt, and then change directory into the project folder, such as `cd c:\nodejsmysql\`
-4. Run the NPM tool to install the mysql library into the project folder.
-
- ```cmd
- cd c:\nodejsmysql\
- "C:\Program Files\nodejs\npm" install mysql
- "C:\Program Files\nodejs\npm" list
- ```
-
-5. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released.
-
-### Linux (Ubuntu)
-
-1. Run the following commands to install **Node.js** and **npm**, the package manager for Node.js.
-
- ```bash
- # Using Ubuntu
- curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -
- sudo apt-get install -y nodejs
-
- # Using Debian, as root
- curl -sL https://deb.nodesource.com/setup_14.x | bash -
- apt-get install -y nodejs
- ```
-
-2. Run the following commands to create a project folder `nodejsmysql` and install the mysql package into that folder.
-
- ```bash
- mkdir nodejsmysql
- cd nodejsmysql
- npm install --save mysql
- npm list
- ```
-3. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released.
-
-### macOS
-
-1. Visit the [Node.js downloads page](https://nodejs.org/en/download/), and then select your macOS installer.
-
-2. Run the following commands to create a project folder `nodejsmysql` and install the mysql package into that folder.
-
- ```bash
- mkdir nodejsmysql
- cd nodejsmysql
- npm install --save mysql
- npm list
- ```
-
-3. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released.
-
-## Get connection information
-
-Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
-
-1. Log in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, select **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Select the server name.
-4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-nodejs/server-name-azure-database-mysql.png" alt-text="Azure Database for MySQL server name":::
-
-## Running the code samples
-
-1. Paste the JavaScript code into new text files, and then save them into a project folder with file extension .js (such as C:\nodejsmysql\createtable.js or /home/username/nodejsmysql/createtable.js).
-1. Replace `host`, `user`, `password` and `database` config options in the code with the values that you specified when you created the server and database.
-1. **Obtain SSL certificate**: Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) and save the certificate file to your local drive.
-
- **For Microsoft Internet Explorer and Microsoft Edge:** After the download has completed, rename the certificate to BaltimoreCyberTrustRoot.crt.pem.
-
- See the following links for certificates for servers in sovereign clouds: [Azure Government](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem), [Azure China](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt).
-1. In the `ssl` config option, replace the `ca-cert` filename with the path to this local file.
-1. Open the command prompt or bash shell, and then change directory into your project folder `cd nodejsmysql`.
-1. To run the application, enter the node command followed by the file name, such as `node createtable.js`.
-1. On Windows, if the node application is not in your environment variable path, you may need to use the full path to launch the node application, such as `"C:\Program Files\nodejs\node.exe" createtable.js`
-
-## Connect, create table, and insert data
-
-Use the following code to connect and load the data by using **CREATE TABLE** and **INSERT INTO** SQL statements.
-
-The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) function is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) function is used to execute the SQL query against the MySQL database.
-
-```javascript
-const mysql = require('mysql');
-const fs = require('fs');
-
-var config =
-{
- host: 'mydemoserver.mysql.database.azure.com',
- user: 'myadmin@mydemoserver',
- password: 'your_password',
- database: 'quickstartdb',
- port: 3306,
- ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
-};
-
-const conn = mysql.createConnection(config);
-
-conn.connect(
- function (err) {
- if (err) {
- console.log("!!! Cannot connect !!! Error:");
- throw err;
- }
- else
- {
- console.log("Connection established.");
- queryDatabase();
- }
-});
-
-function queryDatabase(){
- conn.query('DROP TABLE IF EXISTS inventory;', function (err, results, fields) {
- if (err) throw err;
- console.log('Dropped inventory table if existed.');
- })
- conn.query('CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);',
- function (err, results, fields) {
- if (err) throw err;
- console.log('Created inventory table.');
- })
- conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['banana', 150],
- function (err, results, fields) {
- if (err) throw err;
- else console.log('Inserted ' + results.affectedRows + ' row(s).');
- })
- conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['orange', 154],
- function (err, results, fields) {
- if (err) throw err;
- console.log('Inserted ' + results.affectedRows + ' row(s).');
- })
- conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['apple', 100],
- function (err, results, fields) {
- if (err) throw err;
- console.log('Inserted ' + results.affectedRows + ' row(s).');
- })
- conn.end(function (err) {
- if (err) throw err;
- else console.log('Done.')
- });
-};
-```
-
-## Read data
-
-Use the following code to connect and read the data by using a **SELECT** SQL statement.
-
-The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against the MySQL database. The results array is used to hold the results of the query.
-
-```javascript
-const mysql = require('mysql');
-const fs = require('fs');
-
-var config =
-{
- host: 'mydemoserver.mysql.database.azure.com',
- user: 'myadmin@mydemoserver',
- password: 'your_password',
- database: 'quickstartdb',
- port: 3306,
- ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
-};
-
-const conn = mysql.createConnection(config);
-
-conn.connect(
- function (err) {
- if (err) {
- console.log("!!! Cannot connect !!! Error:");
- throw err;
- }
- else {
- console.log("Connection established.");
- readData();
- }
- });
-
-function readData(){
- conn.query('SELECT * FROM inventory',
- function (err, results, fields) {
- if (err) throw err;
- else console.log('Selected ' + results.length + ' row(s).');
- for (let i = 0; i < results.length; i++) {
- console.log('Row: ' + JSON.stringify(results[i]));
- }
- console.log('Done.');
- })
- conn.end(
- function (err) {
- if (err) throw err;
- else console.log('Closing connection.')
- });
-};
-```
-
-## Update data
-
-Use the following code to connect and update data by using an **UPDATE** SQL statement.
-
-The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against the MySQL database.
-
-```javascript
-const mysql = require('mysql');
-const fs = require('fs');
-
-var config =
-{
- host: 'mydemoserver.mysql.database.azure.com',
- user: 'myadmin@mydemoserver',
- password: 'your_password',
- database: 'quickstartdb',
- port: 3306,
- ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
-};
-
-const conn = mysql.createConnection(config);
-
-conn.connect(
- function (err) {
- if (err) {
- console.log("!!! Cannot connect !!! Error:");
- throw err;
- }
- else {
- console.log("Connection established.");
- updateData();
- }
- });
-
-function updateData(){
- conn.query('UPDATE inventory SET quantity = ? WHERE name = ?', [200, 'banana'],
- function (err, results, fields) {
- if (err) throw err;
- else console.log('Updated ' + results.affectedRows + ' row(s).');
- })
- conn.end(
- function (err) {
- if (err) throw err;
- else console.log('Done.')
- });
-};
-```
-
-## Delete data
-
-Use the following code to connect and delete data by using a **DELETE** SQL statement.
-
-The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against the MySQL database.
--
-```javascript
-const mysql = require('mysql');
-const fs = require('fs');
-
-var config =
-{
- host: 'mydemoserver.mysql.database.azure.com',
- user: 'myadmin@mydemoserver',
- password: 'your_password',
- database: 'quickstartdb',
- port: 3306,
- ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
-};
-
-const conn = mysql.createConnection(config);
-
-conn.connect(
- function (err) {
- if (err) {
- console.log("!!! Cannot connect !!! Error:");
- throw err;
- }
- else {
- console.log("Connection established.");
- deleteData();
- }
- });
-
-function deleteData(){
- conn.query('DELETE FROM inventory WHERE name = ?', ['orange'],
- function (err, results, fields) {
- if (err) throw err;
- else console.log('Deleted ' + results.affectedRows + ' row(s).');
- })
- conn.end(
- function (err) {
- if (err) throw err;
- else console.log('Done.')
- });
-};
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Migrate your database using Export and Import](./concepts-migrate-import-export.md)
mysql Connect Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-php.md
- Title: 'Quickstart: Connect using PHP - Azure Database for MySQL'
-description: This quickstart provides several PHP code samples you can use to connect and query data from Azure Database for MySQL.
- Previously updated: 10/28/2020
-# Quickstart: Use PHP to connect and query data in Azure Database for MySQL
-
-This quickstart demonstrates how to connect to an Azure Database for MySQL using a [PHP](https://secure.php.net/manual/intro-whatis.php) application. It shows how to use SQL statements to query, insert, update, and delete data in the database.
-
-## Prerequisites
-For this quickstart you need:
-
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-- An Azure Database for MySQL single server. Create one using the [Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md) or the [Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md) if you do not have one.
-- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.
-
- |Action| Connectivity method|How-to guide|
- |: |: |: |
- | **Configure firewall rules** | Public | [Portal](./howto-manage-firewall-using-portal.md) <br/> [CLI](./howto-manage-firewall-using-cli.md)|
- | **Configure Service Endpoint** | Public | [Portal](./howto-manage-vnet-using-portal.md) <br/> [CLI](./howto-manage-vnet-using-cli.md)|
- | **Configure private link** | Private | [Portal](./howto-configure-privatelink-portal.md) <br/> [CLI](./howto-configure-privatelink-cli.md) |
-
-- [Create a database and non-admin user](./howto-create-users.md?tabs=single-server)
-- Install the latest PHP version for your operating system:
- - [PHP on macOS](https://secure.php.net/manual/install.macosx.php)
- - [PHP on Linux](https://secure.php.net/manual/install.unix.php)
- - [PHP on Windows](https://secure.php.net/manual/install.windows.php)
-
-> [!NOTE]
-> We are using the [MySQLi](https://www.php.net/manual/en/book.mysqli.php) library to connect to and query the server in this quickstart.
-
-## Get connection information
-You can get the database server connection information from the Azure portal by following these steps:
-
-1. Log in to the [Azure portal](https://portal.azure.com/).
-2. Navigate to the Azure Databases for MySQL page. You can search for and select **Azure Database for MySQL**.
-
-3. Select your MySQL server (such as **mydemoserver**).
-4. In the **Overview** page, copy the fully qualified server name next to **Server name** and the admin user name next to **Server admin login name**. To copy the server name or host name, hover over it and select the **Copy** icon.
-
-> [!IMPORTANT]
-> - If you forgot your password, you can [reset the password](./howto-create-manage-server-portal.md#update-admin-password).
-> - Replace the **host, username, password,** and **db_name** parameters with your own values.
-
-## Step 1: Connect to the server
-SSL is enabled by default. You may need to download the [DigiCertGlobalRootG2 SSL certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) to connect from your local environment. This code calls:
-
-- [mysqli_init](https://secure.php.net/manual/mysqli.init.php) to initialize MySQLi.
-- [mysqli_ssl_set](https://www.php.net/manual/en/mysqli.ssl-set.php) to point to the SSL certificate path. This is required for your local environment but not required for App Service Web App or Azure Virtual Machines.
-- [mysqli_real_connect](https://secure.php.net/manual/mysqli.real-connect.php) to connect to MySQL.
-- [mysqli_close](https://secure.php.net/manual/mysqli.close.php) to close the connection.
-
-```php
-$host = 'mydemoserver.mysql.database.azure.com';
-$username = 'myadmin@mydemoserver';
-$password = 'your_password';
-$db_name = 'your_database';
-
-//Initializes MySQLi
-$conn = mysqli_init();
-
-mysqli_ssl_set($conn,NULL,NULL, "/var/www/html/DigiCertGlobalRootG2.crt.pem", NULL, NULL);
-
-// Establish the connection
-mysqli_real_connect($conn, $host, $username, $password, $db_name, 3306, NULL, MYSQLI_CLIENT_SSL);
-
-//If connection failed, show the error
-if (mysqli_connect_errno())
-{
- die('Failed to connect to MySQL: '.mysqli_connect_error());
-}
-```
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
-
-## Step 2: Create a table
-Use the following code to create a table. This code calls:
-- [mysqli_query](https://secure.php.net/manual/mysqli.query.php) to run the query.
-```php
-// Run the create table query
-if (mysqli_query($conn, '
-CREATE TABLE Products (
-`Id` INT NOT NULL AUTO_INCREMENT ,
-`ProductName` VARCHAR(200) NOT NULL ,
-`Color` VARCHAR(50) NOT NULL ,
-`Price` DOUBLE NOT NULL ,
-PRIMARY KEY (`Id`)
-);
-')) {
-printf("Table created\n");
-}
-```
-
-## Step 3: Insert data
-Use the following code to insert data by using an **INSERT** SQL statement. This code uses the methods:
-
-- [mysqli_prepare](https://secure.php.net/manual/mysqli.prepare.php) to create a prepared insert statement.
-- [mysqli_stmt_bind_param](https://secure.php.net/manual/mysqli-stmt.bind-param.php) to bind the parameters for each inserted column value.
-- [mysqli_stmt_execute](https://secure.php.net/manual/mysqli-stmt.execute.php) to execute the prepared insert statement.
-- [mysqli_stmt_close](https://secure.php.net/manual/mysqli-stmt.close.php) to close the statement.
-
-```php
-//Create an Insert prepared statement and run it
-$product_name = 'BrandNewProduct';
-$product_color = 'Blue';
-$product_price = 15.5;
-if ($stmt = mysqli_prepare($conn, "INSERT INTO Products (ProductName, Color, Price) VALUES (?, ?, ?)"))
-{
- mysqli_stmt_bind_param($stmt, 'ssd', $product_name, $product_color, $product_price);
- mysqli_stmt_execute($stmt);
- printf("Insert: Affected %d rows\n", mysqli_stmt_affected_rows($stmt));
- mysqli_stmt_close($stmt);
-}
-
-```
-
-## Step 4: Read data
-Use the following code to read the data by using a **SELECT** SQL statement. The code uses these methods:
-
-- [mysqli_query](https://secure.php.net/manual/mysqli.query.php) to execute the **SELECT** query.
-- [mysqli_fetch_assoc](https://secure.php.net/manual/mysqli-result.fetch-assoc.php) to fetch the resulting rows.
-
-```php
-//Run the Select query
-printf("Reading data from table: \n");
-$res = mysqli_query($conn, 'SELECT * FROM Products');
-while ($row = mysqli_fetch_assoc($res))
- {
- var_dump($row);
- }
-
-```
--
-## Step 5: Delete data
-Use the following code to delete rows by using a **DELETE** SQL statement. The code uses these methods:
-
-- [mysqli_prepare](https://secure.php.net/manual/mysqli.prepare.php) to create a prepared delete statement.
-- [mysqli_stmt_bind_param](https://secure.php.net/manual/mysqli-stmt.bind-param.php) to bind the parameters.
-- [mysqli_stmt_execute](https://secure.php.net/manual/mysqli-stmt.execute.php) to execute the prepared delete statement.
-- [mysqli_stmt_close](https://secure.php.net/manual/mysqli-stmt.close.php) to close the statement.
-
-```php
-//Run the Delete statement
-$product_name = 'BrandNewProduct';
-if ($stmt = mysqli_prepare($conn, "DELETE FROM Products WHERE ProductName = ?")) {
-    mysqli_stmt_bind_param($stmt, 's', $product_name);
-    mysqli_stmt_execute($stmt);
-    printf("Delete: Affected %d rows\n", mysqli_stmt_affected_rows($stmt));
-    mysqli_stmt_close($stmt);
-}
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Manage Azure Database for MySQL server using Portal](./howto-create-manage-server-portal.md)<br/>
-
-> [!div class="nextstepaction"]
-> [Manage Azure Database for MySQL server using CLI](./how-to-manage-single-server-cli.md)
-
-[Cannot find what you are looking for? Let us know.](https://aka.ms/mysql-doc-feedback)
mysql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-python.md
- Title: 'Quickstart: Connect using Python - Azure Database for MySQL'
-description: This quickstart provides several Python code samples you can use to connect and query data from Azure Database for MySQL.
- Previously updated: 10/28/2020
-# Quickstart: Use Python to connect and query data in Azure Database for MySQL
--
-In this quickstart, you connect to an Azure Database for MySQL by using Python. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Ubuntu Linux, and Windows platforms.
-
-## Prerequisites
-For this quickstart you need:
-
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-- An Azure Database for MySQL single server. Create one using the [Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md) or the [Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md) if you do not have one.
-- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.
-
- |Action| Connectivity method|How-to guide|
- |: |: |: |
- | **Configure firewall rules** | Public | [Portal](./howto-manage-firewall-using-portal.md) <br/> [CLI](./howto-manage-firewall-using-cli.md)|
- | **Configure Service Endpoint** | Public | [Portal](./howto-manage-vnet-using-portal.md) <br/> [CLI](./howto-manage-vnet-using-cli.md)|
- | **Configure private link** | Private | [Portal](./howto-configure-privatelink-portal.md) <br/> [CLI](./howto-configure-privatelink-cli.md) |
-
-- [Create a database and non-admin user](./howto-create-users.md)
-
-## Install Python and the MySQL connector
-
-Install Python and the MySQL connector for Python on your computer by using the following steps:
-
-> [!NOTE]
-> This quickstart uses the MySQL Connector/Python library. For details, see the [MySQL Connector/Python Developer Guide](https://dev.mysql.com/doc/connector-python/en/).
-
-1. Download and install [Python 3.7 or above](https://www.python.org/downloads/) for your OS. Make sure to add Python to your `PATH`, because the MySQL connector requires that.
-
-2. Open a command prompt or `bash` shell, and check your Python version by running `python -V` with the uppercase V switch.
-
-3. The `pip` package installer is included in the latest versions of Python. Update `pip` to the latest version by running `pip install -U pip`.
-
- If `pip` isn't installed, you can download and install it with `get-pip.py`. For more information, see [Installation](https://pip.pypa.io/en/stable/installing/).
-
-4. Use `pip` to install the MySQL connector for Python and its dependencies:
-
- ```bash
- pip install mysql-connector-python
- ```
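-
- To confirm that the connector can be imported, you can print its version from the command line (an optional check, not part of the original steps; mysql-connector-python exposes a `__version__` attribute):
-
- ```bash
- python -c "import mysql.connector; print(mysql.connector.__version__)"
- ```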
-
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
-
-## Get connection information
-
-Get the connection information you need to connect to Azure Database for MySQL from the Azure portal. You need the server name, database name, and login credentials.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. In the portal search bar, search for and select the Azure Database for MySQL server you created, such as **mydemoserver**.
-
- :::image type="content" source="./media/connect-python/1_server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
-
-1. From the server's **Overview** page, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this page.
-
- :::image type="content" source="./media/connect-python/azure-database-for-mysql-server-overview-name-login.png" alt-text="Azure Database for MySQL server name 2":::
-
-## Running the Python code samples
-
-For each code example in this article:
-
-1. Create a new file in a text editor.
-2. Add the code example to the file. In the code, replace the `<mydemoserver>`, `<myadmin>`, `<mypassword>`, and `<mydatabase>` placeholders with the values for your MySQL server and database.
-1. SSL is enabled by default on Azure Database for MySQL servers. You may need to download the [DigiCertGlobalRootG2 SSL certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) to connect from your local environment. Replace the `ssl_ca` value in the code with path to this file on your computer.
-1. Save the file in a project folder with a *.py* extension, such as *C:\pythonmysql\createtable.py* or */home/username/pythonmysql/createtable.py*.
-1. To run the code, open a command prompt or `bash` shell and change directory into your project folder, for example `cd pythonmysql`. Type the `python` command followed by the file name, for example `python createtable.py`, and press Enter.
-
- > [!NOTE]
- > On Windows, if *python.exe* is not found, you may need to add the Python path into your PATH environment variable, or provide the full path to *python.exe*, for example `C:\python27\python.exe createtable.py`.
-
-## Step 1: Create a table and insert data
-
-Use the following code to connect to the server and database, create a table, and load data by using an **INSERT** SQL statement. The code imports the mysql.connector library, and uses these methods:
-
-- [connect()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysql-connector-connect.html) to connect to Azure Database for MySQL using the [arguments](https://dev.mysql.com/doc/connector-python/en/connector-python-connectargs.html) in the config collection.
-- [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) to execute the SQL query against the MySQL database.
-- [cursor.close()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-close.html) to close the cursor when you are done using it.
-- [conn.close()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlconnection-close.html) to close the connection.
-
-```python
-import mysql.connector
-from mysql.connector import errorcode
-
-# Obtain connection string information from the portal
-
-config = {
- 'host':'<mydemoserver>.mysql.database.azure.com',
- 'user':'<myadmin>@<mydemoserver>',
- 'password':'<mypassword>',
- 'database':'<mydatabase>',
- 'client_flags': [mysql.connector.ClientFlag.SSL],
- 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem'
-}
-
-# Construct connection string
-
-try:
- conn = mysql.connector.connect(**config)
- print("Connection established")
-except mysql.connector.Error as err:
- if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
- print("Something is wrong with the user name or password")
- elif err.errno == errorcode.ER_BAD_DB_ERROR:
- print("Database does not exist")
- else:
- print(err)
-else:
- cursor = conn.cursor()
-
- # Drop previous table of same name if one exists
- cursor.execute("DROP TABLE IF EXISTS inventory;")
- print("Finished dropping table (if existed).")
-
- # Create table
- cursor.execute("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);")
- print("Finished creating table.")
-
- # Insert some data into table
- cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("banana", 150))
- print("Inserted",cursor.rowcount,"row(s) of data.")
- cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("orange", 154))
- print("Inserted",cursor.rowcount,"row(s) of data.")
- cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("apple", 100))
- print("Inserted",cursor.rowcount,"row(s) of data.")
-
- # Cleanup
- conn.commit()
- cursor.close()
- conn.close()
- print("Done.")
-```
-
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
-
-## Step 2: Read data
-
-Use the following code to connect and read the data by using a **SELECT** SQL statement. The code imports the mysql.connector library, and uses the [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method to execute the SQL query against the MySQL database.
-
-The code reads the data rows using the [fetchall()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-fetchall.html) method, keeps the result set in a collection of rows, and uses a `for` iterator to loop over those rows.
-
-```python
-import mysql.connector
-from mysql.connector import errorcode
-
-# Obtain connection string information from the portal
-
-config = {
- 'host':'<mydemoserver>.mysql.database.azure.com',
- 'user':'<myadmin>@<mydemoserver>',
- 'password':'<mypassword>',
- 'database':'<mydatabase>',
- 'client_flags': [mysql.connector.ClientFlag.SSL],
- 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem'
-}
-
-# Construct connection string
-
-try:
- conn = mysql.connector.connect(**config)
- print("Connection established")
-except mysql.connector.Error as err:
- if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
- print("Something is wrong with the user name or password")
- elif err.errno == errorcode.ER_BAD_DB_ERROR:
- print("Database does not exist")
- else:
- print(err)
-else:
- cursor = conn.cursor()
-
- # Read data
- cursor.execute("SELECT * FROM inventory;")
- rows = cursor.fetchall()
- print("Read",cursor.rowcount,"row(s) of data.")
-
- # Print all rows
- for row in rows:
- print("Data row = (%s, %s, %s)" %(str(row[0]), str(row[1]), str(row[2])))
-
- # Cleanup
- conn.commit()
- cursor.close()
- conn.close()
- print("Done.")
-```
-
-## Step 3: Update data
-
-Use the following code to connect and update the data by using an **UPDATE** SQL statement. The code imports the mysql.connector library, and uses the [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method to execute the SQL query against the MySQL database.
-
-```python
-import mysql.connector
-from mysql.connector import errorcode
-
-# Obtain connection string information from the portal
-
-config = {
- 'host':'<mydemoserver>.mysql.database.azure.com',
- 'user':'<myadmin>@<mydemoserver>',
- 'password':'<mypassword>',
- 'database':'<mydatabase>',
- 'client_flags': [mysql.connector.ClientFlag.SSL],
- 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem'
-}
-
-# Construct connection string
-
-try:
- conn = mysql.connector.connect(**config)
- print("Connection established")
-except mysql.connector.Error as err:
- if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
- print("Something is wrong with the user name or password")
- elif err.errno == errorcode.ER_BAD_DB_ERROR:
- print("Database does not exist")
- else:
- print(err)
-else:
- cursor = conn.cursor()
-
- # Update a data row in the table
- cursor.execute("UPDATE inventory SET quantity = %s WHERE name = %s;", (300, "apple"))
- print("Updated",cursor.rowcount,"row(s) of data.")
-
- # Cleanup
- conn.commit()
- cursor.close()
- conn.close()
- print("Done.")
-```
-
-## Step 4: Delete data
-
-Use the following code to connect and remove data by using a **DELETE** SQL statement. The code imports the mysql.connector library, and uses the [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method to execute the SQL query against the MySQL database.
-
-```python
-import mysql.connector
-from mysql.connector import errorcode
-
-# Obtain connection string information from the portal
-
-config = {
- 'host':'<mydemoserver>.mysql.database.azure.com',
- 'user':'<myadmin>@<mydemoserver>',
- 'password':'<mypassword>',
- 'database':'<mydatabase>',
- 'client_flags': [mysql.connector.ClientFlag.SSL],
- 'ssl_ca': '<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem'
-}
-
-# Construct connection string
-
-try:
- conn = mysql.connector.connect(**config)
- print("Connection established")
-except mysql.connector.Error as err:
- if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
- print("Something is wrong with the user name or password")
- elif err.errno == errorcode.ER_BAD_DB_ERROR:
- print("Database does not exist")
- else:
- print(err)
-else:
- cursor = conn.cursor()
-
- # Delete a data row in the table
- cursor.execute("DELETE FROM inventory WHERE name=%(param1)s;", {'param1':"orange"})
- print("Deleted",cursor.rowcount,"row(s) of data.")
-
- # Cleanup
- conn.commit()
- cursor.close()
- conn.close()
- print("Done.")
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Manage Azure Database for MySQL server using Portal](./howto-create-manage-server-portal.md)<br/>
-
-> [!div class="nextstepaction"]
-> [Manage Azure Database for MySQL server using CLI](./how-to-manage-single-server-cli.md)
-
-[Cannot find what you are looking for? Let us know.](https://aka.ms/mysql-doc-feedback)
mysql Connect Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-ruby.md
- Title: 'Quickstart: Connect using Ruby - Azure Database for MySQL'
-description: This quickstart provides several Ruby code samples you can use to connect and query data from Azure Database for MySQL.
------ Previously updated : 5/26/2020--
-# Quickstart: Use Ruby to connect and query data in Azure Database for MySQL
--
-This quickstart demonstrates how to connect to an Azure Database for MySQL using a [Ruby](https://www.ruby-lang.org) application and the [mysql2](https://rubygems.org/gems/mysql2) gem from Windows, Ubuntu Linux, and Mac platforms. It shows how to use SQL statements to query, insert, update, and delete data in the database. This topic assumes that you are familiar with development using Ruby and that you are new to working with Azure Database for MySQL.
-
-## Prerequisites
-
-This quickstart uses the resources created in either of these guides as a starting point:
-
-- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)
-- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)
-
-> [!IMPORTANT]
-> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./howto-manage-firewall-using-portal.md) or [Azure CLI](./howto-manage-firewall-using-cli.md).
-
-## Install Ruby
-
-Install Ruby, Gem, and the MySQL2 library on your own computer.
-
-### Windows
-
-1. Download and install the 2.3 version of [Ruby](https://rubyinstaller.org/downloads/).
-2. Launch a new command prompt (cmd) from the Start menu.
-3. Change directory into the Ruby directory for version 2.3. `cd c:\Ruby23-x64\bin`
-4. Test the Ruby installation by running the command `ruby -v` to see the version installed.
-5. Test the Gem installation by running the command `gem -v` to see the version installed.
-6. Build the Mysql2 module for Ruby using Gem by running the command `gem install mysql2`.
-
-### macOS
-
-1. Install Ruby using Homebrew by running the command `brew install ruby`. For more installation options, see the Ruby [installation documentation](https://www.ruby-lang.org/en/documentation/installation/#homebrew).
-2. Test the Ruby installation by running the command `ruby -v` to see the version installed.
-3. Test the Gem installation by running the command `gem -v` to see the version installed.
-4. Build the Mysql2 module for Ruby using Gem by running the command `gem install mysql2`.
-
-### Linux (Ubuntu)
-
-1. Install Ruby by running the command `sudo apt-get install ruby-full`. For more installation options, see the Ruby [installation documentation](https://www.ruby-lang.org/en/documentation/installation/).
-2. Test the Ruby installation by running the command `ruby -v` to see the version installed.
-3. Install the latest updates for Gem by running the command `sudo gem update --system`.
-4. Test the Gem installation by running the command `gem -v` to see the version installed.
-5. Install the gcc, make, and other build tools by running the command `sudo apt-get install build-essential`.
-6. Install the MySQL client developer libraries by running the command `sudo apt-get install libmysqlclient-dev`.
-7. Build the mysql2 module for Ruby using Gem by running the command `sudo gem install mysql2`.
-
-## Get connection information
-
-Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
-
-1. Log in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Click the server name.
-4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-ruby/1_server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
-
-## Run Ruby code
-
-1. Paste the Ruby code from the sections below into text files, and then save the files into a project folder with file extension .rb (such as `C:\rubymysql\createtable.rb` or `/home/username/rubymysql/createtable.rb`).
-2. To run the code, launch the command prompt or Bash shell. Change directory into your project folder: `cd rubymysql`.
-3. Then type the Ruby command followed by the file name, such as `ruby createtable.rb` to run the application.
-4. On the Windows OS, if the Ruby application is not in your path environment variable, you may need to use the full path to launch the Ruby application, such as `"c:\Ruby23-x64\bin\ruby.exe" createtable.rb`
-
-## Connect and create a table
-
-Use the following code to connect and create a table by using a **CREATE TABLE** SQL statement, followed by **INSERT INTO** SQL statements to add rows into the table.
-
-The code uses the mysql2::client class to connect to the MySQL server. Then it calls the ```query()``` method to run the DROP, CREATE TABLE, and INSERT INTO commands. Finally, it calls ```close()``` to close the connection before terminating.
-
-Replace the `host`, `database`, `username`, and `password` strings with your own values.
-
-```ruby
-require 'mysql2'
-
-begin
- # Initialize connection variables.
- host = String('mydemoserver.mysql.database.azure.com')
- database = String('quickstartdb')
- username = String('myadmin@mydemoserver')
- password = String('yourpassword')
-
- # Initialize connection object.
- client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password)
- puts 'Successfully created connection to database.'
-
- # Drop previous table of same name if one exists
- client.query('DROP TABLE IF EXISTS inventory;')
- puts 'Finished dropping table (if existed).'
-
- # Create table.
- client.query('CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);')
- puts 'Finished creating table.'
-
- # Insert some data into table.
- client.query("INSERT INTO inventory VALUES(1, 'banana', 150)")
- client.query("INSERT INTO inventory VALUES(2, 'orange', 154)")
- client.query("INSERT INTO inventory VALUES(3, 'apple', 100)")
- puts 'Inserted 3 rows of data.'
-
-# Error handling
-
-rescue Exception => e
- puts e.message
-
-# Cleanup
-
-ensure
- client.close if client
- puts 'Done.'
-end
-```
-
-## Read data
-
-Use the following code to connect and read the data by using a **SELECT** SQL statement.
-
-The code uses the mysql2::client class to connect to Azure Database for MySQL with the ```new()``` method. Then it calls the ```query()``` method to run the SELECT commands. Then it calls the ```close()``` method to close the connection before terminating.
-
-Replace the `host`, `database`, `username`, and `password` strings with your own values.
-
-```ruby
-require 'mysql2'
-
-begin
- # Initialize connection variables.
- host = String('mydemoserver.mysql.database.azure.com')
- database = String('quickstartdb')
- username = String('myadmin@mydemoserver')
- password = String('yourpassword')
-
- # Initialize connection object.
- client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password)
- puts 'Successfully created connection to database.'
-
- # Read data
- resultSet = client.query('SELECT * from inventory;')
- resultSet.each do |row|
- puts 'Data row = (%s, %s, %s)' % [row['id'], row['name'], row['quantity']]
- end
- puts 'Read ' + resultSet.count.to_s + ' row(s).'
-
-# Error handling
-
-rescue Exception => e
- puts e.message
-
-# Cleanup
-
-ensure
- client.close if client
- puts 'Done.'
-end
-```
-
-## Update data
-
-Use the following code to connect and update the data by using an **UPDATE** SQL statement.
-
-The code uses the [mysql2::client](https://rubygems.org/gems/mysql2-client-general_log) class's ```new()``` method to connect to Azure Database for MySQL. Then it calls the ```query()``` method to run the UPDATE commands. Then it calls the ```close()``` method to close the connection before terminating.
-
-Replace the `host`, `database`, `username`, and `password` strings with your own values.
-
-```ruby
-require 'mysql2'
-
-begin
- # Initialize connection variables.
- host = String('mydemoserver.mysql.database.azure.com')
- database = String('quickstartdb')
- username = String('myadmin@mydemoserver')
- password = String('yourpassword')
-
- # Initialize connection object.
- client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password)
- puts 'Successfully created connection to database.'
-
- # Update data
- client.query('UPDATE inventory SET quantity = %d WHERE name = %s;' % [200, '\'banana\''])
- puts 'Updated 1 row of data.'
-
-# Error handling
-
-rescue Exception => e
- puts e.message
-
-# Cleanup
-
-ensure
- client.close if client
- puts 'Done.'
-end
-```
-
-## Delete data
-
-Use the following code to connect and remove data by using a **DELETE** SQL statement.
-
-The code uses the [mysql2::client](https://rubygems.org/gems/mysql2/) class to connect to the MySQL server, run the DELETE command, and then close the connection to the server.
-
-Replace the `host`, `database`, `username`, and `password` strings with your own values.
-
-```ruby
-require 'mysql2'
-
-begin
- # Initialize connection variables.
- host = String('mydemoserver.mysql.database.azure.com')
- database = String('quickstartdb')
- username = String('myadmin@mydemoserver')
- password = String('yourpassword')
-
- # Initialize connection object.
- client = Mysql2::Client.new(:host => host, :username => username, :database => database, :password => password)
- puts 'Successfully created connection to database.'
-
- # Delete data
- resultSet = client.query('DELETE FROM inventory WHERE name = %s;' % ['\'orange\''])
- puts 'Deleted 1 row.'
-
-# Error handling
--
-rescue Exception => e
- puts e.message
-
-# Cleanup
--
-ensure
- client.close if client
- puts 'Done.'
-end
-```
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Migrate your database using Export and Import](./concepts-migrate-import-export.md)
-
-> [!div class="nextstepaction"]
-> [Learn more about MySQL2 client](https://rubygems.org/gems/mysql2-client-general_log)
mysql Connect Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-workbench.md
- Title: 'Quickstart: Connect - MySQL Workbench - Azure Database for MySQL'
-description: This Quickstart provides the steps to use MySQL Workbench to connect and query data from Azure Database for MySQL.
------ Previously updated : 5/26/2020--
-# Quickstart: Use MySQL Workbench to connect and query data in Azure Database for MySQL
--
-This quickstart demonstrates how to connect to an Azure Database for MySQL using the MySQL Workbench application.
-
-## Prerequisites
-
-This quickstart uses the resources created in either of these guides as a starting point:
-- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md)
-- [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)
-
-> [!IMPORTANT]
-> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./howto-manage-firewall-using-portal.md) or [Azure CLI](./howto-manage-firewall-using-cli.md).
-
-## Install MySQL Workbench
-Download and install MySQL Workbench on your computer from [the MySQL website](https://dev.mysql.com/downloads/workbench/).
-
-## Get connection information
-Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
-
-1. Log in to the [Azure portal](https://portal.azure.com/).
-
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
-
-3. Click the server name.
-
-4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-php/1_server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
-
-## Connect to the server by using MySQL Workbench
-To connect to Azure MySQL Server by using the GUI tool MySQL Workbench:
-
-1. Launch the MySQL Workbench application on your computer.
-
-2. In **Setup New Connection** dialog box, enter the following information on the **Parameters** tab:
--
-| **Setting** | **Suggested value** | **Field description** |
-||||
-| Connection Name | Demo Connection | Specify a label for this connection. |
-| Connection Method | Standard (TCP/IP) | Standard (TCP/IP) is sufficient. |
-| Hostname | *server name* | Specify the server name value that was used when you created the Azure Database for MySQL earlier. Our example server shown is mydemoserver.mysql.database.azure.com. Use the fully qualified domain name (\*.mysql.database.azure.com) as shown in the example. Follow the steps in the previous section to get the connection information if you do not remember your server name. |
-| Port | 3306 | Always use port 3306 when connecting to Azure Database for MySQL. |
-| Username | *server admin login name* | Type in the server admin login username supplied when you created the Azure Database for MySQL earlier. Our example username is myadmin@mydemoserver. Follow the steps in the previous section to get the connection information if you do not remember the username. The format is *username\@servername*. |
-| Password | your password | Click **Store in Vault...** button to save the password. |
-
-3. Click **Test Connection** to test if all parameters are correctly configured.
-
-4. Click **OK** to save the connection.
-
-5. In the listing of **MySQL Connections**, click the tile corresponding to your server, and then wait for the connection to be established.
-
- A new SQL tab opens with a blank editor where you can type your queries.
-
- > [!NOTE]
- > By default, SSL connection security is required and enforced on your Azure Database for MySQL server. Although typically no additional configuration with SSL certificates is required for MySQL Workbench to connect to your server, we recommend binding the SSL CA certificate with MySQL Workbench. For more information on how to download and bind the certificate, see [Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](./howto-configure-ssl.md). If you need to disable SSL, visit the Azure portal and click the Connection security page to disable the Enforce SSL connection toggle button.
-
-## Create a table, insert data, read data, update data, delete data
-1. Copy and paste the sample SQL code into a blank SQL tab to work with some sample data.
-
- This code creates an empty database named quickstartdb, and then creates a sample table named inventory. It inserts some rows, then reads the rows. It changes the data with an update statement, and reads the rows again. Finally it deletes a row, and then reads the rows again.
-
- ```sql
- -- Create a database
- -- DROP DATABASE IF EXISTS quickstartdb;
- CREATE DATABASE quickstartdb;
- USE quickstartdb;
-
- -- Create a table and insert rows
- DROP TABLE IF EXISTS inventory;
- CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);
- INSERT INTO inventory (name, quantity) VALUES ('banana', 150);
- INSERT INTO inventory (name, quantity) VALUES ('orange', 154);
- INSERT INTO inventory (name, quantity) VALUES ('apple', 100);
-
- -- Read
- SELECT * FROM inventory;
-
- -- Update
- UPDATE inventory SET quantity = 200 WHERE id = 1;
- SELECT * FROM inventory;
-
- -- Delete
- DELETE FROM inventory WHERE id = 2;
- SELECT * FROM inventory;
- ```
-
-   The screenshot shows an example of the SQL code in MySQL Workbench and the output after it has been run.
-
- :::image type="content" source="media/connect-workbench/3-workbench-sql-tab.png" alt-text="MySQL Workbench SQL Tab to run sample SQL code":::
-
-2. To run the sample SQL code, click the lightning bolt icon in the toolbar of the **SQL File** tab.
-3. Notice the three tabbed results in the **Result Grid** section in the middle of the page.
-4. Notice the **Output** list at the bottom of the page. The status of each command is shown.
-
-Now, you have connected to Azure Database for MySQL by using MySQL Workbench, and you have queried data using the SQL language.
-
-## Clean up resources
-
-To clean up all resources used during this quickstart, delete the resource group using the following command:
-
-```azurecli
-az group delete \
- --name $AZ_RESOURCE_GROUP \
- --yes
-```
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Migrate your database using Export and Import](./concepts-migrate-import-export.md)
mysql How To Connect Overview Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-connect-overview-single-server.md
- Title: Connect and query - Single Server MySQL
-description: Links to quickstarts showing how to connect to your Azure Database for MySQL Single Server and run queries.
------ Previously updated : 09/22/2020--
-# Connect and query overview for Azure Database for MySQL - Single Server
--
-The following document includes links to examples showing how to connect and query with Azure Database for MySQL Single Server. This guide also includes TLS recommendations and information about libraries that you can use to connect to the server in the supported languages below.
-
-## Quickstarts
-
-| Quickstart | Description |
-|||
-|[MySQL workbench](connect-workbench.md)|This quickstart demonstrates how to use MySQL Workbench Client to connect to a database. You can then use MySQL statements to query, insert, update, and delete data in the database.|
-|[Azure Cloud Shell](./quickstart-create-mysql-server-database-using-azure-cli.md#connect-to-azure-database-for-mysql-server-using-mysql-command-line-client)|This article shows how to run **mysql.exe** in [Azure Cloud Shell](../cloud-shell/overview.md) to connect to your server and then run statements to query, insert, update, and delete data in the database.|
-|[MySQL with Visual Studio](https://www.mysql.com/why-mysql/windows/visualstudio)|You can use MySQL for Visual Studio to connect to your MySQL server. MySQL for Visual Studio integrates directly into Server Explorer, making it easy to set up new connections and work with database objects.|
-|[PHP](connect-php.md)|This quickstart demonstrates how to use PHP to create a program to connect to a database and use MySQL statements to query data.|
-|[Java](connect-java.md)|This quickstart demonstrates how to use Java to connect to a database and then use MySQL statements to query data.|
-|[Node.js](connect-nodejs.md)|This quickstart demonstrates how to use Node.js to create a program to connect to a database and use MySQL statements to query data.|
-|[.NET(C#)](connect-csharp.md)|This quickstart demonstrates how to use .NET (C#) to create a C# program to connect to a database and use MySQL statements to query data.|
-|[Go](connect-go.md)|This quickstart demonstrates how to use Go to connect to a database. SQL statements to query and modify data are also demonstrated.|
-|[Python](connect-python.md)|This quickstart demonstrates how to use Python to connect to a database and use MySQL statements to query data. |
-|[Ruby](connect-ruby.md)|This quickstart demonstrates how to use Ruby to create a program to connect to a database and use MySQL statements to query data.|
-|[C++](connect-cpp.md)|This quickstart demonstrates how to use C++ to create a program to connect to a database and query data.|
-
-## TLS considerations for database connectivity
-
-Transport Layer Security (TLS) is used by all drivers that Microsoft supplies or supports for connecting to databases in Azure Database for MySQL. No special configuration is necessary, but do enforce TLS 1.2 for newly created servers. If you're using TLS 1.0 or 1.1, we recommend that you update the TLS version for your servers. See [How to configure TLS](howto-tls-configurations.md). A minimal client-side sketch follows.
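-
-As an illustration, here's a minimal Python sketch, assuming the same placeholder server, admin login, and certificate-path values as the quickstarts above, that restricts the client to TLS 1.2 and verifies the negotiated version:
-
-```python
-import mysql.connector
-
-# Placeholder values; substitute your own server, admin login, and cert path.
-conn = mysql.connector.connect(
-    host='<mydemoserver>.mysql.database.azure.com',
-    user='<myadmin>@<mydemoserver>',
-    password='<mypassword>',
-    ssl_ca='<path-to-SSL-cert>/DigiCertGlobalRootG2.crt.pem',
-    tls_versions=['TLSv1.2'])  # refuse anything older than TLS 1.2
-
-cursor = conn.cursor()
-cursor.execute("SHOW STATUS LIKE 'Ssl_version';")
-print(cursor.fetchone())  # expected: ('Ssl_version', 'TLSv1.2')
-
-cursor.close()
-conn.close()
-```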
-
-## Libraries
-
-Azure Database for MySQL uses the world's most popular community edition of MySQL database. Hence, it is compatible with a wide variety of programming languages and drivers. The goal is to support the three most recent versions of MySQL drivers, and efforts with authors from the open-source community to constantly improve the functionality and usability of MySQL drivers continue.
-
-See what [drivers](concepts-compatibility.md) are compatible with Azure Database for MySQL Single server.
-
-## Next steps
-
-- [Migrate data using dump and restore](concepts-migrate-dump-restore.md)
-- [Migrate data using import and export](concepts-migrate-import-export.md)
mysql How To Decide On Right Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-decide-on-right-migration-tools.md
- Title: "Select the right tools for migration to Azure Database for MySQL"
-description: "This topic provides a decision table which helps customers in picking the right tools for migrating into Azure Database for MySQL"
------- Previously updated : 10/12/2021--
-# Select the right tools for migration to Azure Database for MySQL
--
-## Overview
-
-Migrations are multi-step projects that are tough to pull off. Migrating database servers across platforms involves more than data and schema migration. There are also several other components, such as server configuration parameters, networking, access control rules, etc., to move. These are required to ensure that the functionality of the database server in the new target platform mimics the source.
-
-For detailed information and use cases about migrating databases to Azure Database for MySQL, you can refer to the [Database Migration Guide](migrate/mysql-on-premises-azure-db/01-mysql-migration-guide-intro.md). This document provides pointers that will help you successfully plan and execute a MySQL migration to Azure.
-
-In general, migrations can be categorized as either offline or online.
-
-- With an offline migration, the source server is taken offline and a dump and restore of the databases is performed on the target server.
-
-- With an online migration (migration with minimal downtime), the source server allows updates, and the migration solution will take care of replicating the ongoing changes between the source and target server along with the initial dump and restore on the target.
-
-If your application can afford some downtime, offline migrations are always the preferred choice, as they are simple and easy to execute. However, if your application can only afford minimal downtime, an online migration is the best choice. Migrations of the majority of OLTP systems, such as payment processing and e-commerce, fall into this category.
-
-## Decision table
-
-To help you with selecting the right tools for migrating to Azure Database for MySQL, consider the detail in the following table.
-
-| Scenarios | Recommended Tools | Links |
-|-|||
-| Offline Migrations to move databases >= 1 TB | Dump and Restore using **MyDumper/MyLoader** + High Compute VM | [Migrate large databases to Azure Database for MySQL using mydumper/myloader](concepts-migrate-mydumper-myloader.md) <br><br> [Best Practices for migrating large databases to Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699)|
-| Offline Migrations to move databases < 1 TB | If network bandwidth between source and target is good (for example, a high-speed ExpressRoute connection), use **Azure DMS** (database migration service) <br><br> **-OR-** <br><br> If you have low network bandwidth between source and Azure, use **Mydumper/Myloader + High compute VM** to take advantage of compression settings to efficiently move data over low-speed networks <br><br> **-OR-** <br><br> Use **mysqldump** and the **MySQL Workbench Export/Import** utility to perform offline migrations for smaller databases. | [Tutorial: Migrate MySQL to Azure Database for MySQL offline using DMS - Azure Database Migration Service](../dms/tutorial-mysql-azure-mysql-offline-portal.md)<br><br> [Migrate Amazon RDS for MySQL to Azure Database for MySQL using MySQL Workbench](how-to-migrate-rds-mysql-workbench.md)<br><br> [Import and export - Azure Database for MySQL](concepts-migrate-import-export.md)|
-| Online Migration | **Mydumper/Myloader with Data-in replication** <br><br> **Mysqldump with data-in replication** can be considered for small databases (less than 100 GB). These methods are applicable to both external and intra-platform migrations. | [Configure Data-in replication - Azure Database for MySQL Flexible Server](flexible-server/how-to-data-in-replication.md) <br><br> [Tutorial: Migrate Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server with minimal downtime](howto-migrate-single-flexible-minimum-downtime.md) |
-|Single to Flexible Server Migrations | **Offline**: Custom shell script hosted in [GitHub](https://github.com/Azure/azure-mysql/tree/master/azuremysqltomysqlmigrate). This script also moves other server components such as security settings and server parameter configurations. <br><br>**Online**: **Mydumper/Myloader with Data-in replication** | [Migrate from Azure Database for MySQL - Single Server to Flexible Server in 5 easy steps!](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/migrate-from-azure-database-for-mysql-single-server-to-flexible/ba-p/2674057)<br><br> [Tutorial: Migrate Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server with minimal downtime](howto-migrate-single-flexible-minimum-downtime.md)|
-
-## Next steps
-* [Migrate MySQL on-premises to Azure Database for MySQL](migrate/mysql-on-premises-azure-db/01-mysql-migration-guide-intro.md)
-
mysql How To Fix Corrupt Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-fix-corrupt-database.md
- Title: Resolve database corruption - Azure Database for MySQL
-description: In this article, you'll learn about how to fix database corruption problems in Azure Database for MySQL.
----- Previously updated : 09/21/2020--
-# Troubleshoot database corruption in Azure Database for MySQL
--
-Database corruption can cause downtime for your application. It's also critical to resolve corruption problems in time to avoid data loss. When database corruption occurs, you'll see this error in your server logs: `InnoDB: Database page corruption on disk or a failed.`
-
-In this article, you'll learn how to resolve database or table corruption problems. Azure Database for MySQL uses the InnoDB engine. It features automated corruption checking and repair operations. InnoDB checks for corrupt pages by running checksums on every page it reads. If it finds a checksum discrepancy, it will automatically stop the MySQL server.
-
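-Before you apply one of the fixes below, you can confirm whether a specific table is affected by running `CHECK TABLE` against it. The following Python sketch is an illustration only, using placeholder connection values like the quickstarts in this documentation; `CHECK TABLE` reports on InnoDB tables but doesn't repair them:
-
-```python
-import mysql.connector
-
-# Placeholder values; substitute your own server, admin login, and database.
-conn = mysql.connector.connect(
-    host='<mydemoserver>.mysql.database.azure.com',
-    user='<myadmin>@<mydemoserver>',
-    password='<mypassword>',
-    database='<mydatabase>')
-
-cursor = conn.cursor()
-cursor.execute("CHECK TABLE inventory;")
-for row in cursor.fetchall():
-    print(row)  # for example: ('<mydatabase>.inventory', 'check', 'status', 'OK')
-
-cursor.close()
-conn.close()
-```
-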
-Try the following options to quickly mitigate your database corruption problems.
-
-## Restart your MySQL server
-
-You typically notice a database or table is corrupt when your application accesses the table or database. InnoDB features a crash recovery mechanism that can resolve most problems when the server is restarted. So restarting the server can help the server recover from a crash that left the database in a bad state.
-
-## Use the dump and restore method
-
-We recommend that you resolve corruption problems by using a *dump and restore* method. This method involves:
-
-1. Accessing the corrupt table.
-2. Using the mysqldump utility to create a logical backup of the table. The backup will retain the table structure and the data within it.
-3. Reloading the table into the database.
-
-### Back up your database or tables
-
-> [!Important]
->
-> - Make sure you have configured a firewall rule to access the server from your client machine. For more information, see [configure a firewall rule on Single Server](howto-manage-firewall-using-portal.md) and [configure a firewall rule on Flexible Server](flexible-server/how-to-connect-tls-ssl.md).
-> - Use SSL option `--ssl-cert` for mysqldump if you have SSL enabled.
-
-Create a backup file from the command line by using mysqldump. Use this command:
-
-```
-$ mysqldump [--ssl-cert=/path/to/pem] -h [host] -u [uname] -p[pass] [dbname] > [backupfile.sql]
-```
-
-Parameter descriptions:
-- `[ssl-cert=/path/to/pem]`: The path to the SSL certificate. Download the SSL certificate to your client machine and set the path to it in the command. Don't use this parameter if SSL is disabled.
-- `[host]`: Your Azure Database for MySQL server.
-- `[uname]`: Your server admin user name.
-- `[pass]`: The password for your admin user.
-- `[dbname]`: The name of your database.
-- `[backupfile.sql]`: The file name of your database backup.
-
-> [!Important]
-> - For Single Server, use the format `admin-user@servername` to replace `myserveradmin` in the following commands.
-> - For Flexible Server, use the format `admin-user` to replace `myserveradmin` in the following commands.
-
-If a specific table is corrupt, select specific tables in your database to back up:
-```
-$ mysqldump --ssl-cert=</path/to/pem> -h mydemoserver.mysql.database.azure.com -u myserveradmin -p testdb table1 table2 > testdb_tables_backup.sql
-```
-
-To back up one or more databases, use the `--databases` switch and list the database names, separated by spaces:
-
-```
-$ mysqldump --ssl-cert=</path/to/pem> -h mydemoserver.mysql.database.azure.com -u myserveradmin -p --databases testdb1 testdb3 testdb5 > testdb135_backup.sql
-```
-
-### Restore your database or tables
-
-The following steps show how to restore your database or tables. After you create the backup file, you can restore the tables or databases by using the mysql utility. Run this command:
-
-```
-mysql --ssl-cert=</path/to/pem> -h [hostname] -u [uname] -p[pass] [db_to_restore] < [backupfile.sql]
-```
-Here's an example that restores `testdb` from a backup file created with mysqldump:
-
-> [!Important]
-> - For Single Server, use the format `admin-user@servername` to replace `myserveradmin` in the following command.
-> - For Flexible Server, use the format ```admin-user``` to replace `myserveradmin` in the following command.
-
-```
-$ mysql --ssl-cert=</path/to/pem> -h mydemoserver.mysql.database.azure.com -u myserveradmin -p testdb < testdb_backup.sql
-```
-
-## Next steps
-If the preceding steps don't resolve the problem, you can always restore the entire server:
-- [Restore server in Azure Database for MySQL - Single Server](howto-restore-server-portal.md)
-- [Restore server in Azure Database for MySQL - Flexible Server](flexible-server/how-to-restore-server-portal.md)
mysql How To Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-major-version-upgrade.md
- Title: Major version upgrade in Azure Database for MySQL - Single Server
-description: This article describes how you can upgrade major version for Azure Database for MySQL - Single Server
----- Previously updated : 1/28/2021-
-# Major version upgrade in Azure Database for MySQL Single Server
--
-> [!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we will remove it from this article.
->
-
-> [!IMPORTANT]
-> Major version upgrade for Azure database for MySQL Single Server is in public preview.
-
-This article describes how you can upgrade your MySQL major version in-place in Azure Database for MySQL single server.
-
-This feature enables customers to perform in-place upgrades of their MySQL 5.6 servers to MySQL 5.7 with the click of a button, without any data movement or the need for any application connection string changes.
-
-> [!Note]
-> * Major version upgrade is only available from MySQL 5.6 to MySQL 5.7.
-> * The server will be unavailable throughout the upgrade operation. It is therefore recommended to perform upgrades during your planned maintenance window. You can consider [performing minimal downtime major version upgrade from MySQL 5.6 to MySQL 5.7 using read replica.](#perform-minimal-downtime-major-version-upgrade-from-mysql-56-to-mysql-57-using-read-replicas)
-
-## Perform major version upgrade from MySQL 5.6 to MySQL 5.7 using Azure portal
-
-Follow these steps to perform a major version upgrade for your Azure Database for MySQL 5.6 server using the Azure portal:
-
-> [!IMPORTANT]
-> We recommend performing the upgrade first on a restored copy of the server rather than upgrading production directly. See [how to perform point-in-time restore](howto-restore-server-portal.md#point-in-time-restore).
-
-1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.6 server.
-
-2. From the **Overview** page, click the **Upgrade** button in the toolbar.
-
-3. In the **Upgrade** section, select **OK** to upgrade Azure database for MySQL 5.6 server to 5.7 server.
-
- :::image type="content" source="./media/how-to-major-version-upgrade-portal/upgrade.png" alt-text="Azure Database for MySQL - overview - upgrade":::
-
-4. A notification will confirm that upgrade is successful.
--
-## Perform major version upgrade from MySQL 5.6 to MySQL 5.7 using Azure CLI
-
-Follow these steps to perform a major version upgrade for your Azure Database for MySQL 5.6 server using the Azure CLI:
-
-> [!IMPORTANT]
-> We recommend performing the upgrade first on a restored copy of the server rather than upgrading production directly. See [how to perform point-in-time restore](howto-restore-server-cli.md#server-point-in-time-restore).
-
-1. Install [Azure CLI for Windows](/cli/azure/install-azure-cli) or use Azure CLI in [Azure Cloud Shell](../cloud-shell/overview.md) to run the upgrade commands.
-
-   This upgrade requires version 2.16.0 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed. Run `az version` to find the version and dependent libraries that are installed. To upgrade to the latest version, run `az upgrade`.
-
-2. After you sign in, run the [az mysql server upgrade](/cli/azure/mysql/server#az-mysql-server-upgrade) command:
-
- ```azurecli
-   az mysql server upgrade --name testsvr --resource-group testgroup --subscription MySubscription --target-server-version 5.7
- ```
-
- The command prompt shows the "-Running" message. After this message is no longer displayed, the version upgrade is complete.
-
-## Perform major version upgrade from MySQL 5.6 to MySQL 5.7 on read replica using Azure portal
-
-1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.6 read replica server.
-
-2. From the **Overview** page, click the **Upgrade** button in the toolbar.
-
-3. In the **Upgrade** section, select **OK** to upgrade Azure database for MySQL 5.6 read replica server to 5.7 server.
-
- :::image type="content" source="./media/how-to-major-version-upgrade-portal/upgrade.png" alt-text="Azure Database for MySQL - overview - upgrade":::
-
-4. A notification will confirm that upgrade is successful.
-
-5. From the **Overview** page, confirm that your Azure database for MySQL read replica server version is 5.7.
-
-6. Now go to your primary server and [Perform major version upgrade](#perform-major-version-upgrade-from-mysql-56-to-mysql-57-using-azure-portal) on it.
-
-## Perform minimal downtime major version upgrade from MySQL 5.6 to MySQL 5.7 using read replicas
-
-You can perform a minimal-downtime major version upgrade from MySQL 5.6 to MySQL 5.7 by utilizing read replicas. The idea is to upgrade the read replica of your server to 5.7 first, then fail over your application to point to the read replica and make it the new primary.
-
-1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.6.
-
-2. Create a [read replica](./concepts-read-replicas.md#create-a-replica) from your primary server.
-
-3. [Upgrade your read replica](#perform-major-version-upgrade-from-mysql-56-to-mysql-57-on-read-replica-using-azure-portal) to version 5.7.
-
-4. Once you confirm that the replica server is running on version 5.7, stop your application from connecting to your primary server.
-
-5. Check the replication status to make sure the replica has caught up with the primary, so that all the data is in sync and no new operations are performed on the primary; a scripted version of this check is sketched after this procedure.
-
- Call the [`show slave status`](https://dev.mysql.com/doc/refman/5.7/en/show-slave-status.html) command on the replica server to view the replication status.
-
- ```sql
- SHOW SLAVE STATUS\G
- ```
-
-   If the states of `Slave_IO_Running` and `Slave_SQL_Running` are "yes" and the value of `Seconds_Behind_Master` is "0", replication is working well. `Seconds_Behind_Master` indicates how far behind the replica is. If the value isn't "0", the replica is still processing updates. Once you confirm that `Seconds_Behind_Master` is "0", it's safe to stop replication.
-
-6. Promote your read replica to primary by [stopping replication](./howto-read-replicas-portal.md#stop-replication-to-a-replica-server).
-
-7. Point your application to the new primary (former replica) which is running server 5.7. Each server has a unique connection string. Update your application to point to the (former) replica instead of the source.
-
-> [!Note]
-> This scenario will have downtime during steps 4, 5 and 6 only.
--
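-As referenced in step 5 above, here's a minimal Python sketch of the replication-lag check; the replica host and credentials are placeholders, and the polling loop is just one way to script it:
-
-```python
-import time
-import mysql.connector
-
-# Placeholder values; substitute your replica server and admin login.
-conn = mysql.connector.connect(
-    host='<replicaserver>.mysql.database.azure.com',
-    user='<myadmin>@<replicaserver>',
-    password='<mypassword>')
-cursor = conn.cursor(dictionary=True)
-
-# Poll until the replica reports that it has fully caught up.
-while True:
-    cursor.execute("SHOW SLAVE STATUS")
-    status = cursor.fetchone()
-    if (status['Slave_IO_Running'] == 'Yes'
-            and status['Slave_SQL_Running'] == 'Yes'
-            and status['Seconds_Behind_Master'] == 0):
-        print("Replica is in sync; safe to stop replication.")
-        break
-    time.sleep(5)
-
-cursor.close()
-conn.close()
-```
-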
-## Frequently asked questions
-
-### When will this upgrade feature be GA as we have MySQL v5.6 in our production environment that we need to upgrade?
-
-The GA of this feature is planned before MySQL v5.6 retirement. However, the feature is production ready and fully supported by Azure, so you can run it with confidence in your environment. As a recommended best practice, we strongly suggest that you run and test it first on a restored copy of the server, so you can estimate the downtime during the upgrade and perform application compatibility testing before you run it on production. For more information, see [how to perform point-in-time restore](howto-restore-server-portal.md#point-in-time-restore) to create a point in time copy of your server.
-
-### Will this cause downtime of the server and if so, how long?
-
-Yes, the server will be unavailable during the upgrade process, so we recommend you perform this operation during your planned maintenance window. The estimated downtime depends on the database size, storage size provisioned (IOPS provisioned), and the number of tables in the database. The upgrade time is directly proportional to the number of tables on the server. Upgrades of Basic SKU servers are expected to take longer because the Basic tier runs on the standard storage platform. To estimate the downtime for your server environment, we recommend first performing the upgrade on a restored copy of the server. Consider [performing minimal downtime major version upgrade from MySQL 5.6 to MySQL 5.7 using read replica.](#perform-minimal-downtime-major-version-upgrade-from-mysql-56-to-mysql-57-using-read-replicas)
-
-### What will happen if we do not choose to upgrade our MySQL v5.6 server before February 5, 2021?
-
-You can still continue running your MySQL v5.6 server as before. Azure **will never** perform a forced upgrade on your server. However, the restrictions documented in [Azure Database for MySQL versioning policy](concepts-version-policy.md) will apply.
-
-## Next steps
-
-Learn about [Azure Database for MySQL versioning policy](concepts-version-policy.md).
mysql How To Manage Single Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-manage-single-server-cli.md
- Title: Manage server - Azure CLI - Azure Database for MySQL
-description: Learn how to manage an Azure Database for MySQL server from the Azure CLI.
----- Previously updated : 9/22/2020--
-# Manage an Azure Database for MySQL Single server using the Azure CLI
--
-This article shows you how to manage your Single servers deployed in Azure. Management tasks include compute and storage scaling, admin password reset, and viewing server details.
-
-## Prerequisites
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. This article requires that you're running the Azure CLI version 2.0 or later locally. To see the version installed, run the `az --version` command. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-
-You'll need to log in to your account using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to **Subscription ID** for your Azure account.
-
-```azurecli-interactive
-az login
-```
-
-Select the specific subscription under your account using the [az account set](/cli/azure/account) command. Make a note of the **id** value from the **az login** output to use as the value for the **subscription** argument in the command. If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. To list all your subscriptions, use [az account list](/cli/azure/account#az-account-list).
-
-```azurecli
-az account set --subscription <subscription id>
-```
-
-If you have not already created a server, refer to this [quickstart](quickstart-create-mysql-server-database-using-azure-cli.md) to create one.
-
-## Scale compute and storage
-You can easily scale your pricing tier, compute, and storage by using the following command. To see all the server operations you can perform, visit the [az mysql server overview](/cli/azure/mysql/server).
-
-```azurecli-interactive
-az mysql server update --resource-group myresourcegroup --name mydemoserver --sku-name GP_Gen5_4 --storage-size 6144
-```
-
-Here are the details for the arguments above:
-
-**Setting** | **Sample value** | **Description**
-||
-name | mydemoserver | Enter a unique name for your Azure Database for MySQL server. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain from 3 to 63 characters.
-resource-group | myresourcegroup | Provide the name of the Azure resource group.
-sku-name|GP_Gen5_2|Enter the name of the pricing tier and compute configuration. Follows the convention {pricing tier}_{compute generation}_{vCores} in shorthand. See the [pricing tiers](./concepts-pricing-tiers.md) for more information.
-storage-size | 6144 | The storage capacity of the server (unit is megabytes). Minimum 5120 and increases in 1024 increments.
-
-> [!Important]
-> - Storage can be scaled up (however, you cannot scale storage down)
-> - Scaling up from Basic to General purpose or Memory optimized pricing tier is not supported. You can manually scale up either by [using a bash script](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/upgrade-from-basic-to-general-purpose-or-memory-optimized-tiers/ba-p/830404) or by [using MySQL Workbench](https://techcommunity.microsoft.com/t5/azure-database-support-blog/how-to-scale-up-azure-database-for-mysql-from-basic-tier-to/ba-p/369134)
--
-## Manage MySQL databases on a server
-You can use any of these commands to create, delete, list, and view the properties of a database on your server.
-
-| Cmdlet | Usage| Description |
-| | | |
-|[az mysql db create](/cli/azure/mysql/db#az-mysql-db-create)|```az mysql db create -g myresourcegroup -s mydemoserver -n mydatabasename``` |Creates a database|
-|[az mysql db delete](/cli/azure/mysql/db#az-mysql-db-delete)|```az mysql db delete -g myresourcegroup -s mydemoserver -n mydatabasename```|Deletes your database from your server. This command does not delete your server. |
-|[az mysql db list](/cli/azure/mysql/db#az-mysql-db-list)|```az mysql db list -g myresourcegroup -s mydemoserver```|Lists all the databases on the server|
-|[az mysql db show](/cli/azure/mysql/db#az-mysql-db-show)|```az mysql db show -g myresourcegroup -s mydemoserver -n mydatabasename```|Shows more details of the database|
-
-## Update admin password
-You can change the administrator role's password with this command:
-```azurecli-interactive
-az mysql server update --resource-group myresourcegroup --name mydemoserver --admin-password <new-password>
-```
-
-> [!Important]
-> Make sure the password is a minimum of 8 characters and a maximum of 128 characters.
-> Password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters.
-
-## Delete a server
-If you would just like to delete the MySQL Single server, you can run [az mysql server delete](/cli/azure/mysql/server#az-mysql-server-delete) command.
-
-```azurecli-interactive
-az mysql server delete --resource-group myresourcegroup --name mydemoserver
-```
-
-## Next steps
-
-- [Restart a server](howto-restart-server-cli.md)
-- [Restore a server in a bad state](howto-restore-server-cli.md)
-- [Monitor and tune the server](concepts-monitoring.md)
mysql How To Migrate Rds Mysql Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-migrate-rds-mysql-data-in-replication.md
- Title: Migrate Amazon RDS for MySQL to Azure Database for MySQL using Data-in Replication
-description: This article describes how to migrate Amazon RDS for MySQL to Azure Database for MySQL by using Data-in Replication.
----- Previously updated : 09/24/2021--
-# Migrate Amazon RDS for MySQL to Azure Database for MySQL using Data-in Replication
--
-> [!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-
-You can use methods such as MySQL dump and restore, MySQL Workbench Export and Import, or Azure Database Migration Service to migrate your MySQL databases to Azure Database for MySQL. By using a combination of open-source tools such as mysqldump or mydumper and myloader with Data-in Replication, you can migrate your workloads with minimum downtime.
-
-Data-in Replication is a technique that replicates data changes from the source server to the destination server based on the binary log file position method. In this scenario, the MySQL instance operating as the source (on which the database changes originate) writes updates and changes as *events* to the binary log. The information in the binary log is stored in different logging formats according to the database changes being recorded. Replicas are configured to read the binary log from the source and execute the events in the binary log on the replica's local database.
-
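-For example, the binary log coordinates on the source are what the replica needs in order to know where to begin replicating. The following Python sketch, with a placeholder RDS endpoint and credentials, reads them from the source server:
-
-```python
-import mysql.connector
-
-# Placeholder values; substitute your RDS source endpoint and admin login.
-conn = mysql.connector.connect(
-    host='<rds-source>.rds.amazonaws.com',
-    user='<admin>',
-    password='<password>')
-cursor = conn.cursor(dictionary=True)
-
-# The file name and position identify where replication should start.
-cursor.execute("SHOW MASTER STATUS")
-coords = cursor.fetchone()
-print("Binary log file:", coords['File'])
-print("Binary log position:", coords['Position'])
-
-cursor.close()
-conn.close()
-```
-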
-If you set up [Data-in Replication](../mysql/flexible-server/concepts-data-in-replication.md) to synchronize data from a source MySQL server to a target MySQL server, you can do a selective cutover of your applications from the primary (or source database) to the replica (or target database).
-
-In this tutorial, you'll learn how to set up Data-in Replication between a source server that runs Amazon Relational Database Service (RDS) for MySQL and a target server that runs Azure Database for MySQL.
-
-## Performance considerations
-
-Before you begin this tutorial, consider the performance implications of the location and capacity of the client computer you'll use to perform the operation.
-
-### Client location
-
-Perform dump or restore operations from a client computer that's launched in the same location as the database server:
-
-- For Azure Database for MySQL servers, the client machine should be in the same virtual network and the same availability zone as the target database server.
-- For source Amazon RDS database instances, the client instance should exist in the same Amazon Virtual Private Cloud and availability zone as the source database server.
-
-In the preceding case, you can move dump files between client machines by using file transfer protocols like FTP or SFTP or upload them to Azure Blob Storage. To reduce the total migration time, compress files before you transfer them.
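-
-As a sketch of the compression tip, the following Python example streams a mysqldump of the source through gzip before transfer; the host, login, and database names are placeholders:
-
-```python
-import gzip
-import subprocess
-
-# Placeholder values; substitute your source endpoint, admin login, and database.
-dump = subprocess.Popen(
-    ["mysqldump", "-h", "<rds-source>.rds.amazonaws.com",
-     "-u", "<admin>", "-p<password>", "<dbname>"],
-    stdout=subprocess.PIPE)
-
-# Compress the dump stream in 1-MB chunks to reduce transfer time.
-with gzip.open("backup.sql.gz", "wb") as out:
-    for chunk in iter(lambda: dump.stdout.read(1 << 20), b""):
-        out.write(chunk)
-dump.wait()
-```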
-
-### Cl